# Compare commits

330 commits
| SHA1 |
|---|
| a011139894 |
| 36866f1d36 |
| 407531bcb1 |
| 3adbb2ff41 |
| 499ae1c7a1 |
| 438ea6ccb0 |
| 598a29a733 |
| 6d102fc826 |
| fca07fbb62 |
| cdedcc24b8 |
| 60d5f27140 |
| cb413bae49 |
| e9f78ea70c |
| 6858cb066f |
| 4be0d426f4 |
| 7d7d5d6c3c |
| 0422387e90 |
| 2ed5fd9ac4 |
| 2beb2acc24 |
| 56ce591908 |
| b190e676b4 |
| 19520b2ec9 |
| eeb96ae8b5 |
| cddedd37d5 |
| 4d6626b099 |
| 7a55833bb2 |
| 7e4702cf09 |
| 685f08697a |
| a255db706d |
| 9d76902710 |
| 62ee7f6980 |
| 2f6707825a |
| 7dda77dcb4 |
| ddec22d04c |
| 32e90859f4 |
| 8b8970c787 |
| 03d35ba799 |
| c035d7d88a |
| 46f9e9efff |
| 4fa8d7ed79 |
| cd71b505a9 |
| c7db08ed3e |
| 3582a1004c |
| 22cbd2dbb5 |
| c87af9e85c |
| 6c202effa4 |
| 632f52af22 |
| 46e59529a4 |
| bdf060236a |
| d9d2a09282 |
| b020fd4ad2 |
| 4ef3526354 |
| 20ddeb6e1b |
| d27f110498 |
| 910797ccb6 |
| 7de9d15aef |
| 6a9ffe7e06 |
| 12dcea4f70 |
| b3b39bd8f1 |
| c7caecf77c |
| 1fe30363c7 |
| 54a7256c8d |
| 8e8e4ff132 |
| 1dace72092 |
| 3a5c1d9faf |
| f38c754301 |
| fff38f484d |
| 95390b655f |
| 5967c421ca |
| b8b5214f44 |
| cdd3b67a5c |
| 28c9de3f6a |
| f3b9bfc114 |
| c9eba39edd |
| 40a1c7116e |
| c03af9cfcc |
| c4cbc32cc5 |
| 1231ce199e |
| e0cac6fd99 |
| d9db1534b1 |
| 6a0aaaf069 |
| 4c04798aa5 |
| 3f84b0a015 |
| 917380ddbb |
| d9ae067e52 |
| b2e8bf6e89 |
| 170cbe98c5 |
| c94f662095 |
| 0987dcfb1c |
| 6920c01d4a |
| cc0cc8cdf0 |
| fb13969798 |
| 278258ee9f |
| 9e542cf86b |
| 244e952f79 |
| aa2a8fa223 |
| 467acb47bf |
| 0c0d6b2bfc |
| ce0e5be406 |
| 65ce4c90fa |
| 9897a08d09 |
| f5753ba720 |
| fcf32a935b |
| ec50788987 |
| ac0a2da3b5 |
| 9f84dc42fe |
| 21f9304235 |
| 5cedd22bbd |
| c0dacbc4dd |
| dd6e9ea70c |
| 87598dcd7f |
| 3bb7b677f8 |
| 988a7223f4 |
| 7f044372fa |
| 552897abbc |
| 946a8c5baa |
| 888b31aa92 |
| e2dec2510f |
| da5ad2ab9f |
| eaa4b04a22 |
| 3051b13108 |
| 4c4e48bab7 |
| 01a3eb29cb |
| 73f7249c5f |
| 18c6559199 |
| e66ece993f |
| 0686860624 |
| 24ce46b380 |
| a49bf81ff2 |
| 64501fd7f1 |
| db3c0b0907 |
| edda117a7a |
| cdface0dd5 |
| be6afe2d3a |
| 9163780000 |
| d7aa7dfe64 |
| f1decb531d |
| 99399c698b |
| 1f5f42f216 |
| 9082c4702f |
| 6cedcfbf77 |
| 8a631f045e |
| a6a2ee5b6b |
| 016708276c |
| 4cfdc4c513 |
| 0f257c9308 |
| c8104b6e78 |
| 1a1d731043 |
| c5a000d2ae |
| 94d1924fa9 |
| 6c1cf68bca |
| 395af051bd |
| 42fd66675e |
| 21a3f3699b |
| d168b2acac |
| 2ce8233921 |
| 697a4fa8a4 |
| 2f83c6c7d1 |
| 127f414e9c |
| 33c4ccffab |
| bafe7f5a09 |
| baf41112d1 |
| a90dde94e1 |
| 7dfbfc7227 |
| b10843d051 |
| 520ac8f4dc |
| 537a6e50e9 |
| 2d0cbdf1a8 |
| 5afb562aa3 |
| db069c3d4a |
| fae40c7e2f |
| 0c43b592dc |
| 2ab8924e2d |
| 0e31cfa784 |
| 8f7ffcf350 |
| 9c8507a0fd |
| e9b2cab088 |
| d3ccacccb1 |
| df386c8fbc |
| 4d15dd6e17 |
| 56a0499636 |
| 10fc4768e8 |
| 2b63d7d10d |
| 1f177528c1 |
| fc3bbb70a3 |
| ce3cab0295 |
| c784e5285e |
| 2bf9055cae |
| 8aba5aed4f |
| 0ce7cf5e10 |
| 96edcbccd7 |
| 4603afb6de |
| 56317b00af |
| cacec9c1f3 |
| 44ee07f0b2 |
| 6a8d5e1731 |
| d9962f65b3 |
| 119e88d87b |
| 71d9e010d9 |
| 5718caa957 |
| efd8a32ed6 |
| b22d700e16 |
| ccdacea0c4 |
| 4bdcbc1cb5 |
| 833c6cf2ec |
| dd6dbdd90a |
| 63013cc565 |
| 912402364a |
| 159f51b12b |
| 7678a91b0e |
| b13899c63d |
| 3a0d882c5e |
| cb81f0ad6d |
| 518bacf628 |
| ca63b03e55 |
| cecef88d6b |
| 7ffd805a03 |
| a7e2a0c981 |
| 2a570bb4ca |
| 5ca8f0706d |
| a9b4436cdc |
| 5f91999512 |
| 9f000beeaf |
| ff0a71f212 |
| 22dfc6ec24 |
| 48147c079e |
| d715479ef6 |
| fc8298c468 |
| e94ca5dc91 |
| 114b71b751 |
| b2770a2087 |
| cba1878bb2 |
| a2e037d6af |
| 65a2b6a223 |
| 9ed799e803 |
| c1c0ecca13 |
| ee62836383 |
| 705f598b1a |
| 414de88925 |
| 53ffd245dd |
| cf1b756206 |
| 22b58e31ef |
| b7f9bf5a28 |
| aba680b6c2 |
| fabada95f6 |
| 9ccd8bb3ea |
| 1d68acf8f0 |
| 1e7697b551 |
| 4a4ec88d00 |
| 6adc778d62 |
| 6b7ebdb7e9 |
| 3d7facd774 |
| eaee1f2cab |
| ff012221ae |
| c398553748 |
| 3ccbcf6185 |
| f0abc0ef59 |
| a99fa3375d |
| 22c7e09b3f |
| 0dfe1d5b35 |
| a99a3bc6d7 |
| 9804f25de3 |
| ae98200660 |
| e45420646f |
| 21be82ef8b |
| 001afe00cb |
| 19a5985f29 |
| 2715ee6c61 |
| dc157fa28f |
| 1ff14b4e05 |
| 480ac254ab |
| 4b95db81aa |
| c81e898435 |
| f1646b96ca |
| 44f2b63e43 |
| 847a2bdc85 |
| 03f0f99469 |
| 3900e66158 |
| 3dff6cda40 |
| 73d05095b5 |
| fcdc1728eb |
| 8b942ea237 |
| 88a1c5ca5d |
| 047176b297 |
| dc4d0d8e71 |
| b9c5c7bbde |
| 9daeed923f |
| 66b260cea9 |
| 58cf01c2ad |
| d866841c19 |
| a462a644fb |
| 678675a9a6 |
| de9069ef1d |
| c0c0a1a83a |
| 1d004b6dbd |
| b90e1200d7 |
| 4493a0a804 |
| 58835b2b42 |
| 427597b603 |
| 7d64879ba8 |
| bb715704b7 |
| d67e9cc507 |
| 2927bbb2d6 |
| 0527b59180 |
| a5ce1032d3 |
| 1c2acdc985 |
| 4e75534ef8 |
| 7a573cafd1 |
| 844194ee29 |
| 609c5921d4 |
| c79eaa089a |
| e9d962f273 |
| b5405174ec |
| 6eee601521 |
| 2fac2bee7c |
| c140eeee6b |
| c5988a04f9 |
| a2e0f98693 |
| 1111153f06 |
| e5a836cb7d |
| b0de84cbc5 |
| cbb718e10d |
| b5ad9369fe |
| 4401de0413 |
| 6e671c5245 |
| 08848be784 |
| b599fbae97 |
| a8dabc99f6 |
| f1130db131 |
| 735ec35546 |
### .github/ISSUE_TEMPLATE/bug_report.md (vendored, 32 changes)
```diff
@@ -11,30 +11,38 @@ NOTE:
 all of the below are optional, consider them as inspiration, delete and rewrite at will, thx md
 
 
-**Describe the bug**
+### Describe the bug
 a description of what the bug is
 
-**To Reproduce**
+### To Reproduce
 List of steps to reproduce the issue, or, if it's hard to reproduce, then at least a detailed explanation of what you did to run into it
 
-**Expected behavior**
+### Expected behavior
 a description of what you expected to happen
 
-**Screenshots**
+### Screenshots
 if applicable, add screenshots to help explain your problem, such as the kickass crashpage :^)
 
-**Server details**
-if the issue is possibly on the server-side, then mention some of the following:
-* server OS / version:
-* python version:
-* copyparty arguments:
-* filesystem (`lsblk -f` on linux):
+### Server details (if you are using docker/podman)
+remove the ones that are not relevant:
+* **server OS / version:**
+* **how you're running copyparty:** (docker/podman/something-else)
+* **docker image:** (variant, version, and arch if you know)
+* **copyparty arguments and/or config-file:**
 
-**Client details**
+### Server details (if you're NOT using docker/podman)
+remove the ones that are not relevant:
+* **server OS / version:**
+* **what copyparty did you grab:** (sfx/exe/pip/aur/...)
+* **how you're running it:** (in a terminal, as a systemd-service, ...)
+* run copyparty with `--version` and grab the last 3 lines (they start with `copyparty`, `CPython`, `sqlite`) and paste them below this line:
+* **copyparty arguments and/or config-file:**
+
+### Client details
 if the issue is possibly on the client-side, then mention some of the following:
 * the device type and model:
 * OS version:
 * browser version:
 
-**Additional context**
+### Additional context
 any other context about the problem here
```
### .gitignore (vendored, 1 change)
```diff
@@ -30,6 +30,7 @@ copyparty/res/COPYING.txt
 copyparty/web/deps/
 srv/
 scripts/docker/i/
+scripts/deps-docker/uncomment.py
 contrib/package/arch/pkg/
 contrib/package/arch/src/
 
```
```diff
@@ -28,6 +28,8 @@ aside from documentation and ideas, some other things that would be cool to have
 
 * **translations** -- the copyparty web-UI has translations for english and norwegian at the top of [browser.js](https://github.com/9001/copyparty/blob/hovudstraum/copyparty/web/browser.js); if you'd like to add a translation for another language then that'd be welcome! and if that language has a grammar that doesn't fit into the way the strings are assembled, then we'll fix that as we go :>
+  * but please note that support for [RTL (Right-to-Left) languages](https://en.wikipedia.org/wiki/Right-to-left_script) is currently not planned, since the javascript is a bit too jank for that
+
 
 * **UI ideas** -- at some point I was thinking of rewriting the UI in react/preact/something-not-vanilla-javascript, but I'll admit the comfiness of not having any build stage combined with raw performance has kinda convinced me otherwise :p but I'd be very open to ideas on how the UI could be improved, or be more intuitive.
 
 * **docker improvements** -- I don't really know what I'm doing when it comes to containers, so I'm sure there's a *huge* room for improvement here, mainly regarding how you're supposed to use the container with kubernetes / docker-compose / any of the other popular ways to do things. At some point I swear I'll start learning about docker so I can pick up clach04's [docker-compose draft](https://github.com/9001/copyparty/issues/38) and learn how that stuff ticks, unless someone beats me to it!
```
```diff
@@ -15,22 +15,18 @@ produces a chronological list of all uploads by collecting info from up2k databases
 # [`partyfuse.py`](partyfuse.py)
 * mount a copyparty server as a local filesystem (read-only)
 * **supports Windows!** -- expect `194 MiB/s` sequential read
-* **supports Linux** -- expect `117 MiB/s` sequential read
+* **supports Linux** -- expect `600 MiB/s` sequential read
 * **supports macos** -- expect `85 MiB/s` sequential read
 
-filecache is default-on for windows and macos;
-* macos readsize is 64kB, so speed ~32 MiB/s without the cache
-* windows readsize varies by software; explorer=1M, pv=32k
-
 note that copyparty should run with `-ed` to enable dotfiles (hidden otherwise)
 
-also consider using [../docs/rclone.md](../docs/rclone.md) instead for 5x performance
+and consider using [../docs/rclone.md](../docs/rclone.md) instead; usually a bit faster, especially on windows
 
 
 ## to run this on windows:
 * install [winfsp](https://github.com/billziss-gh/winfsp/releases/latest) and [python 3](https://www.python.org/downloads/)
   * [x] add python 3.x to PATH (it asks during install)
-* `python -m pip install --user fusepy`
+* `python -m pip install --user fusepy` (or grab a copy of `fuse.py` from the `connect` page on your copyparty, and keep it in the same folder)
 * `python ./partyfuse.py n: http://192.168.1.69:3923/`
 
 10% faster in [msys2](https://www.msys2.org/), 700% faster if debug prints are enabled:
```
```diff
@@ -82,3 +78,6 @@ cd /mnt/nas/music/.hist
 # [`prisonparty.sh`](prisonparty.sh)
 * run copyparty in a chroot, preventing any accidental file access
 * creates bindmounts for /bin, /lib, and so on, see `sysdirs=`
+
+# [`bubbleparty.sh`](bubbleparty.sh)
+* run copyparty in an isolated process, preventing any accidental file access and more
```
### bin/bubbleparty.sh (new executable file, 19 lines)
```diff
@@ -0,0 +1,19 @@
+#!/bin/sh
+# usage: ./bubbleparty.sh ./copyparty-sfx.py ....
+bwrap \
+    --unshare-all \
+    --ro-bind /usr /usr \
+    --ro-bind /bin /bin \
+    --ro-bind /lib /lib \
+    --ro-bind /etc/resolv.conf /etc/resolv.conf \
+    --dev-bind /dev /dev \
+    --dir /tmp \
+    --dir /var \
+    --bind $(pwd) $(pwd) \
+    --share-net \
+    --die-with-parent \
+    --file 11 /etc/passwd \
+    --file 12 /etc/group \
+    "$@" \
+    11< <(getent passwd $(id -u) 65534) \
+    12< <(getent group $(id -g) 65534)
```
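For a supervisor process that needs to launch copyparty under the same sandbox, the flag list from bubbleparty.sh can be assembled programmatically. This is a hypothetical sketch (the `bubblewrap_argv` helper and the `copyparty-sfx.py` path are made up, and it omits the script's `--file 11/12` passwd/group trick, which needs shell process substitution); it only mirrors the bind and isolation flags shown above.

```python
import os

def bubblewrap_argv(cmd):
    """Build an argv list that runs `cmd` under bwrap with the same
    read-only binds and isolation flags as bubbleparty.sh."""
    cwd = os.getcwd()
    argv = ["bwrap", "--unshare-all"]
    for src in ("/usr", "/bin", "/lib", "/etc/resolv.conf"):
        argv += ["--ro-bind", src, src]     # read-only system paths
    argv += ["--dev-bind", "/dev", "/dev"]  # device nodes stay usable
    argv += ["--dir", "/tmp", "--dir", "/var"]
    argv += ["--bind", cwd, cwd]            # only the cwd is writable
    argv += ["--share-net", "--die-with-parent"]
    return argv + list(cmd)

argv = bubblewrap_argv(["python3", "./copyparty-sfx.py"])
print(" ".join(argv))
```

The resulting list could then be handed to `subprocess.run(argv)` on a host that has bubblewrap installed.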
```diff
@@ -20,6 +20,8 @@ each plugin must define a `main()` which takes 3 arguments;
 
 ## on404
 
+* [redirect.py](redirect.py) sends an HTTP 301 or 302, redirecting the client to another page/file
+* [randpic.py](randpic.py) redirects `/foo/bar/randpic.jpg` to a random pic in `/foo/bar/`
 * [sorry.py](answer.py) replies with a custom message instead of the usual 404
 * [nooo.py](nooo.py) replies with an endless noooooooooooooo
 * [never404.py](never404.py) 100% guarantee that 404 will never be a thing again as it automatically creates dummy files whenever necessary
```
### bin/handlers/randpic.py (new file, 35 lines)
```diff
@@ -0,0 +1,35 @@
+import os
+import random
+from urllib.parse import quote
+
+
+# assuming /foo/bar/ is a valid URL but /foo/bar/randpic.png does not exist,
+# hijack the 404 with a redirect to a random pic in that folder
+#
+# thx to lia & kipu for the idea
+
+
+def main(cli, vn, rem):
+    req_fn = rem.split("/")[-1]
+    if not cli.can_read or not req_fn.startswith("randpic"):
+        return
+
+    req_abspath = vn.canonical(rem)
+    req_ap_dir = os.path.dirname(req_abspath)
+    files_in_dir = os.listdir(req_ap_dir)
+
+    if "." in req_fn:
+        file_ext = "." + req_fn.split(".")[-1]
+        files_in_dir = [x for x in files_in_dir if x.lower().endswith(file_ext)]
+
+    if not files_in_dir:
+        return
+
+    selected_file = random.choice(files_in_dir)
+
+    req_url = "/".join([vn.vpath, rem]).strip("/")
+    req_dir = req_url.rsplit("/", 1)[0]
+    new_url = "/".join([req_dir, quote(selected_file)]).strip("/")
+
+    cli.reply(b"redirecting...", 302, headers={"Location": "/" + new_url})
+    return "true"
```
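The handler's selection logic (keep only files matching the requested extension, then pick one at random and URL-encode it) can be exercised on its own. `pick_random_pic` below is a hypothetical stand-alone extraction of that logic for illustration, not part of copyparty's API:

```python
import random
from urllib.parse import quote

def pick_random_pic(req_fn, files_in_dir, rng=random):
    """Mirror randpic.py's core: if the requested name has an extension,
    keep only files with that extension (case-insensitive), then pick
    one at random; None means "fall through to the normal 404"."""
    if "." in req_fn:
        ext = "." + req_fn.split(".")[-1]
        files_in_dir = [x for x in files_in_dir if x.lower().endswith(ext)]
    if not files_in_dir:
        return None
    return quote(rng.choice(files_in_dir))

files = ["cat.JPG", "dog.png", "notes.txt", "b day.png"]
print(pick_random_pic("randpic.png", files, random.Random(1)))
```

Note that a bare `randpic` (no extension) skips the filtering and may pick a non-image; the real handler behaves the same way.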
### bin/handlers/redirect.py (new file, 52 lines)
```diff
@@ -0,0 +1,52 @@
+# if someone hits a 404, redirect them to another location
+
+
+def send_http_302_temporary_redirect(cli, new_path):
+    """
+    replies with an HTTP 302, which is a temporary redirect;
+    "new_path" can be any of the following:
+    - "http://a.com/" would redirect to another website,
+    - "/foo/bar" would redirect to /foo/bar on the same server;
+      note the leading '/' in the location which is important
+    """
+    cli.reply(b"redirecting...", 302, headers={"Location": new_path})
+
+
+def send_http_301_permanent_redirect(cli, new_path):
+    """
+    replies with an HTTP 301, which is a permanent redirect;
+    otherwise identical to send_http_302_temporary_redirect
+    """
+    cli.reply(b"redirecting...", 301, headers={"Location": new_path})
+
+
+def send_errorpage_with_redirect_link(cli, new_path):
+    """
+    replies with a website explaining that the page has moved;
+    "new_path" must be an absolute location on the same server
+    but without a leading '/', so for example "foo/bar"
+    would redirect to "/foo/bar"
+    """
+    cli.redirect(new_path, click=False, msg="this page has moved")
+
+
+def main(cli, vn, rem):
+    """
+    this is the function that gets called by copyparty;
+    note that vn.vpath and cli.vpath does not have a leading '/'
+    so we're adding the slash in the debug messages below
+    """
+    print(f"this client just hit a 404: {cli.ip}")
+    print(f"they were accessing this volume: /{vn.vpath}")
+    print(f"and the original request-path (straight from the URL) was /{cli.vpath}")
+    print(f"...which resolves to the following filesystem path: {vn.canonical(rem)}")
+
+    new_path = "/foo/bar/"
+    print(f"will now redirect the client to {new_path}")
+
+    # uncomment one of these:
+    send_http_302_temporary_redirect(cli, new_path)
+    #send_http_301_permanent_redirect(cli, new_path)
+    #send_errorpage_with_redirect_link(cli, new_path)
+
+    return "true"
```
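Because the helpers only ever call `cli.reply`, a handler like this can be smoke-tested without running a server. `FakeClient` below is a hypothetical stub (it implements just the one method the helpers touch), and the 302 helper is re-declared from the diff above so the sketch is self-contained:

```python
class FakeClient:
    """Minimal stand-in for copyparty's client object (hypothetical;
    captures what reply() was called with instead of sending HTTP)."""
    def __init__(self):
        self.sent = None

    def reply(self, body, status, headers=None):
        self.sent = (body, status, headers or {})

def send_http_302_temporary_redirect(cli, new_path):
    # same as the helper in redirect.py above
    cli.reply(b"redirecting...", 302, headers={"Location": new_path})

cli = FakeClient()
send_http_302_temporary_redirect(cli, "/foo/bar/")
body, status, headers = cli.sent
print(status, headers["Location"])  # prints: 302 /foo/bar/
```

The same pattern works for checking that `main()` returns `"true"`, which is what tells copyparty the 404 was handled.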
```diff
@@ -2,7 +2,7 @@ standalone programs which are executed by copyparty when an event happens (uploa
 
 these programs either take zero arguments, or a filepath (the affected file), or a json message with filepath + additional info
 
-run copyparty with `--help-hooks` for usage details / hook type explanations (xm/xbu/xau/xiu/xbr/xar/xbd/xad/xban)
+run copyparty with `--help-hooks` for usage details / hook type explanations (xm/xbu/xau/xiu/xbc/xac/xbr/xar/xbd/xad/xban)
 
 > **note:** in addition to event hooks (the stuff described here), copyparty has another api to run your programs/scripts while providing way more information such as audio tags / video codecs / etc and optionally daisychaining data between scripts in a processing pipeline; if that's what you want then see [mtp plugins](../mtag/) instead
 
@@ -30,4 +30,5 @@ these are `--xiu` hooks; unlike `xbu` and `xau` (which get executed on every sin
 # on message
 * [wget.py](wget.py) lets you download files by POSTing URLs to copyparty
 * [qbittorrent-magnet.py](qbittorrent-magnet.py) starts downloading a torrent if you post a magnet url
+* [usb-eject.py](usb-eject.py) adds web-UI buttons to safe-remove usb flashdrives shared through copyparty
 * [msg-log.py](msg-log.py) is a guestbook; logs messages to a doc in the same folder
```
### bin/hooks/usb-eject.js (new file, 57 lines)
```diff
@@ -0,0 +1,57 @@
+// see usb-eject.py for usage
+
+function usbclick() {
+    QS('#treeul a[href="/usb/"]').click();
+}
+
+function eject_cb() {
+    var t = this.responseText;
+    if (t.indexOf('can be safely unplugged') < 0 && t.indexOf('Device can be removed') < 0)
+        return toast.err(30, 'usb eject failed:\n\n' + t);
+
+    toast.ok(5, esc(t.replace(/ - /g, '\n\n')).trim());
+    usbclick(); setTimeout(usbclick, 10);
+};
+
+function add_eject_2(a) {
+    var aw = a.getAttribute('href').split(/\//g);
+    if (aw.length != 4 || aw[3])
+        return;
+
+    var v = aw[2],
+        k = 'umount_' + v,
+        o = ebi(k);
+
+    if (o)
+        o.parentNode.removeChild(o);
+
+    a.appendChild(mknod('span', k, '⏏'), a);
+    o = ebi(k);
+    o.style.cssText = 'position:absolute; right:1em; margin-top:-.2em; font-size:1.3em';
+    o.onclick = function (e) {
+        ev(e);
+        var xhr = new XHR();
+        xhr.open('POST', get_evpath(), true);
+        xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded;charset=UTF-8');
+        xhr.send('msg=' + uricom_enc(':usb-eject:' + v + ':'));
+        xhr.onload = xhr.onerror = eject_cb;
+        toast.inf(10, "ejecting " + v + "...");
+    };
+};
+
+function add_eject() {
+    var o = QSA('#treeul a[href^="/usb/"]');
+    for (var a = o.length - 1; a > 0; a--)
+        add_eject_2(o[a]);
+};
+
+(function() {
+    var f0 = treectl.rendertree;
+    treectl.rendertree = function (res, ts, top0, dst, rst) {
+        var ret = f0(res, ts, top0, dst, rst);
+        add_eject();
+        return ret;
+    };
+})();
+
+setTimeout(add_eject, 50);
```
### bin/hooks/usb-eject.py (new file, 58 lines)
```diff
@@ -0,0 +1,58 @@
+#!/usr/bin/env python3
+
+import os
+import stat
+import subprocess as sp
+import sys
+
+
+"""
+if you've found yourself using copyparty to serve flashdrives on a LAN
+and your only wish is that the web-UI had a button to unmount / safely
+remove those flashdrives, then boy howdy are you in the right place :D
+
+put usb-eject.js in the webroot (or somewhere else http-accessible)
+then run copyparty with these args:
+
+ -v /run/media/egon:/usb:A:c,hist=/tmp/junk
+ --xm=c1,bin/hooks/usb-eject.py
+ --js-browser=/usb-eject.js
+
+which does the following respectively,
+
+ * share all of /run/media/egon as /usb with admin for everyone
+   and put the histpath somewhere it won't cause trouble
+ * run the usb-eject hook with stdout redirect to the web-ui
+ * add the complementary usb-eject.js to the browser
+
+"""
+
+
+def main():
+    try:
+        label = sys.argv[1].split(":usb-eject:")[1].split(":")[0]
+        mp = "/run/media/egon/" + label
+        # print("ejecting [%s]... " % (mp,), end="")
+        mp = os.path.abspath(os.path.realpath(mp.encode("utf-8")))
+        st = os.lstat(mp)
+        if not stat.S_ISDIR(st.st_mode):
+            raise Exception("not a regular directory")
+
+        # if you're running copyparty as root (thx for the faith)
+        # you'll need something like this to make dbus talkative
+        cmd = b"sudo -u egon DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus gio mount -e"
+
+        # but if copyparty and the ui-session is running
+        # as the same user (good) then this is plenty
+        cmd = b"gio mount -e"
+
+        cmd = cmd.split(b" ") + [mp]
+        ret = sp.check_output(cmd).decode("utf-8", "replace")
+        print(ret.strip() or (label + " can be safely unplugged"))
+
+    except Exception as ex:
+        print("unmount failed: %r" % (ex,))
+
+
+if __name__ == "__main__":
+    main()
```
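The hook receives the raw `--xm` message as its first argument, and the `split(":usb-eject:")` line pulls the volume label back out of the `:usb-eject:LABEL:` string that usb-eject.js POSTs. A hypothetical stand-alone version of that parsing step, useful for sanity-checking the message format:

```python
def parse_eject_label(msg):
    """Extract the volume label from a ':usb-eject:LABEL:' message,
    mirroring the split logic in usb-eject.py; returns None when the
    message is not an eject request."""
    if ":usb-eject:" not in msg:
        return None
    return msg.split(":usb-eject:")[1].split(":")[0]

print(parse_eject_label(":usb-eject:KINGSTON:"))  # prints: KINGSTON
```

The real hook has no such guard and would raise an IndexError on a non-matching message, which its `except` block then reports as "unmount failed".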
```diff
@@ -31,6 +31,9 @@ plugins in this section should only be used with appropriate precautions:
 * [very-bad-idea.py](./very-bad-idea.py) combined with [meadup.js](https://github.com/9001/copyparty/blob/hovudstraum/contrib/plugins/meadup.js) converts copyparty into a janky yet extremely flexible chromecast clone
   * also adds a virtual keyboard by @steinuil to the basic-upload tab for comfy couch crowd control
   * anything uploaded through the [android app](https://github.com/9001/party-up) (files or links) are executed on the server, meaning anyone can infect your PC with malware... so protect this with a password and keep it on a LAN!
+  * [kamelåså](https://github.com/steinuil/kameloso) is a much better (and MUCH safer) alternative to this plugin
+    * powered by [chicken-curry-banana-pineapple-peanut pizza](https://a.ocv.me/pub/g/i/2025/01/298437ce-8351-4c8c-861c-fa131d217999.jpg?cache) so you know it's good
+    * and, unlike this plugin, kamelåså even has windows support (nice)
 
 
 # dependencies
```
```diff
@@ -6,6 +6,11 @@ WARNING -- DANGEROUS PLUGIN --
 running this plugin, they can execute malware on your machine
 so please keep this on a LAN and protect it with a password
 
+here is a MUCH BETTER ALTERNATIVE (which also works on Windows):
+https://github.com/steinuil/kameloso
+
+----------------------------------------------------------------------
+
 use copyparty as a chromecast replacement:
 * post a URL and it will open in the default browser
 * upload a file and it will open in the default application
```
### bin/partyfuse.py (676 changes; diff suppressed, too large)

### bin/u2c.py (788 changes; diff suppressed, too large)
### bin/zmq-recv.py (new executable file, 76 lines)
@@ -0,0 +1,76 @@
#!/usr/bin/env python3

import sys
import zmq

"""
zmq-recv.py: demo zmq receiver
2025-01-22, v1.0, ed <irc.rizon.net>, MIT-Licensed
https://github.com/9001/copyparty/blob/hovudstraum/bin/zmq-recv.py

basic zmq-server to receive events from copyparty; try one of
the below and then "send a message to serverlog" in the web-ui:

1) dumb fire-and-forget to any and all listeners;
   run this script with "sub" and run copyparty with this:
   --xm zmq:pub:tcp://*:5556

2) one lucky listener gets the message, blocks if no listeners:
   run this script with "pull" and run copyparty with this:
   --xm t3,zmq:push:tcp://*:5557

3) blocking syn/ack mode, client must ack each message;
   run this script with "rep" and run copyparty with this:
   --xm t3,zmq:req:tcp://localhost:5555

note: to conditionally block uploads based on message contents,
use rep_server to answer with "return 1" and run copyparty with
--xau t3,c,zmq:req:tcp://localhost:5555
"""


ctx = zmq.Context()


def sub_server():
    # PUB/SUB allows any number of servers/clients, and
    # messages are fire-and-forget
    sck = ctx.socket(zmq.SUB)
    sck.connect("tcp://localhost:5556")
    sck.setsockopt_string(zmq.SUBSCRIBE, "")
    while True:
        print("copyparty says %r" % (sck.recv_string(),))


def pull_server():
    # PUSH/PULL allows any number of servers/clients, and
    # each message is sent to exactly one PULL client
    sck = ctx.socket(zmq.PULL)
    sck.connect("tcp://localhost:5557")
    while True:
        print("copyparty says %r" % (sck.recv_string(),))


def rep_server():
    # REP/REQ is a server/client pair where each message must be
    # acked by the other before another message can be sent, so
    # copyparty will do a blocking-wait for the ack
    sck = ctx.socket(zmq.REP)
    sck.bind("tcp://*:5555")
    while True:
        print("copyparty says %r" % (sck.recv_string(),))
        reply = b"thx"
        # reply = b"return 1"  # non-zero to block an upload
        sck.send(reply)


mode = sys.argv[1].lower() if len(sys.argv) > 1 else ""

if mode == "sub":
    sub_server()
elif mode == "pull":
    pull_server()
elif mode == "rep":
    rep_server()
else:
    print("specify mode as first argument: SUB | PULL | REP")
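the rep_server reply convention above (b"thx" to allow, b"return 1" to block an upload) boils down to a tiny parser; here is an illustrative stdlib-only sketch of that convention, where `xau_verdict` is a made-up name and not a copyparty API:

```python
def xau_verdict(reply: bytes) -> int:
    # parse a rep_server-style reply: b"return N" with a nonzero N
    # means "block the upload"; anything else (e.g. b"thx") means 0
    parts = reply.decode("utf-8", "replace").split()
    if len(parts) == 2 and parts[0] == "return" and parts[1].isdigit():
        return int(parts[1])
    return 0
```

this mirrors how hook exit-codes work elsewhere in copyparty: zero allows, nonzero denies.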
@@ -12,13 +12,21 @@
 * assumes the webserver and copyparty is running on the same server/IP
 * modify `10.13.1.1` as necessary if you wish to support browsers without javascript

-### [`sharex.sxcu`](sharex.sxcu)
-* sharex config file to upload screenshots and grab the URL
+### [`sharex.sxcu`](sharex.sxcu) - Windows screenshot uploader
+* [sharex](https://getsharex.com/) config file to upload screenshots and grab the URL
 * `RequestURL`: full URL to the target folder
 * `pw`: password (remove the `pw` line if anon-write)
 * the `act:bput` thing is optional since copyparty v1.9.29
 * using an older sharex version, maybe sharex v12.1.1 for example? dw fam i got your back 👉😎👉 [`sharex12.sxcu`](sharex12.sxcu)

+### [`ishare.iscu`](ishare.iscu) - MacOS screenshot uploader
+* [ishare](https://isharemac.app/) config file to upload screenshots and grab the URL
+* `RequestURL`: full URL to the target folder
+* `pw`: password (remove the `pw` line if anon-write)
+
+### [`flameshot.sh`](flameshot.sh) - Linux screenshot uploader
+* takes a screenshot with [flameshot](https://flameshot.org/) on Linux, uploads it, and writes the URL to clipboard
+
 ### [`send-to-cpp.contextlet.json`](send-to-cpp.contextlet.json)
 * browser integration, kind of? custom rightclick actions and stuff
 * rightclick a pic and send it to copyparty straight from your browser
@@ -50,5 +58,10 @@ init-scripts to start copyparty as a service
 * [`openrc/copyparty`](openrc/copyparty)

 # Reverse-proxy
-copyparty has basic support for running behind another webserver
-* [`nginx/copyparty.conf`](nginx/copyparty.conf)
+copyparty supports running behind another webserver
+* [`apache/copyparty.conf`](apache/copyparty.conf)
+* [`haproxy/copyparty.conf`](haproxy/copyparty.conf)
+* [`lighttpd/subdomain.conf`](lighttpd/subdomain.conf)
+* [`lighttpd/subpath.conf`](lighttpd/subpath.conf)
+* [`nginx/copyparty.conf`](nginx/copyparty.conf) -- recommended
+* [`traefik/copyparty.yaml`](traefik/copyparty.yaml)
@@ -1,14 +1,29 @@
-# when running copyparty behind a reverse proxy,
-# the following arguments are recommended:
+# if you would like to use unix-sockets (recommended),
+# you must run copyparty with one of the following:
 #
-# -i 127.0.0.1 only accept connections from nginx
+# -i unix:777:/dev/shm/party.sock
+# -i unix:777:/dev/shm/party.sock,127.0.0.1
 #
 # if you are doing location-based proxying (such as `/stuff` below)
 # you must run copyparty with --rp-loc=stuff
 #
 # on fedora/rhel, remember to setsebool -P httpd_can_network_connect 1

 LoadModule proxy_module modules/mod_proxy.so

-ProxyPass "/stuff" "http://127.0.0.1:3923/stuff"
-# do not specify ProxyPassReverse
 RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
+# NOTE: do not specify ProxyPassReverse
+
+
+##
+## then, enable one of the below:
+
+# use subdomain proxying to unix-socket (best)
+ProxyPass "/" "unix:///dev/shm/party.sock|http://whatever/"
+
+# use subdomain proxying to 127.0.0.1 (slower)
+#ProxyPass "/" "http://127.0.0.1:3923/"
+
+# use subpath proxying to 127.0.0.1 (slow and maybe buggy)
+#ProxyPass "/stuff" "http://127.0.0.1:3923/stuff"
14  contrib/flameshot.sh  (new executable file)
@@ -0,0 +1,14 @@
#!/bin/bash
set -e

# take a screenshot with flameshot and send it to copyparty;
# the image url will be placed on your clipboard

password=wark
url=https://a.ocv.me/up/
filename=$(date +%Y-%m%d-%H%M%S).png

flameshot gui -s -r |
curl -T- $url$filename?pw=$password |
tail -n 1 |
xsel -ib
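the curl step above is a plain HTTP PUT of the image bytes to `url + filename + "?pw=..."`; a rough stdlib-python equivalent which builds (but does not send) the same request, using the script's example host and password:

```python
import datetime
import urllib.request

def build_put(png: bytes, url="https://a.ocv.me/up/", pw="wark", now=None):
    # same filename scheme as the shell script: date +%Y-%m%d-%H%M%S
    now = now or datetime.datetime.now()
    name = now.strftime("%Y-%m%d-%H%M%S") + ".png"
    # curl -T- performs a PUT; the password rides along as ?pw=
    return urllib.request.Request(url + name + "?pw=" + pw, data=png, method="PUT")

req = build_put(b"\x89PNG...", now=datetime.datetime(2025, 1, 22, 13, 37, 0))
```

sending it would just be `urllib.request.urlopen(req)`; the response body is the file URL, like the `tail -n 1` in the script.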
24  contrib/haproxy/copyparty.conf  (new file)
@@ -0,0 +1,24 @@
# this config is essentially two separate examples;
#
# foo1 connects to copyparty using tcp, and
# foo2 uses unix-sockets for 27% higher performance
#
# to use foo2 you must run copyparty with one of the following:
#
#   -i unix:777:/dev/shm/party.sock
#   -i unix:777:/dev/shm/party.sock,127.0.0.1

defaults
    mode http
    option forwardfor
    timeout connect 1s
    timeout client 610s
    timeout server 610s

listen foo1
    bind *:8081
    server srv1 127.0.0.1:3923 maxconn 512

listen foo2
    bind *:8082
    server srv1 /dev/shm/party.sock maxconn 512
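the `-i unix:...` values used here (and the `unix:770:www:...` form in the nginx notes further down) appear to follow the shape `unix:[mode:[group:]]path`; a speculative parser for that shape, inferred purely from the examples in these configs and not from copyparty's actual argument handling:

```python
def parse_unix_spec(spec):
    # "unix:777:/dev/shm/party.sock"     -> (0o777, None, path)
    # "unix:770:www:/dev/shm/party.sock" -> (0o770, "www", path)
    assert spec.startswith("unix:")
    mode, group = None, None
    rest = spec[len("unix:"):]
    while not rest.startswith("/"):
        field, rest = rest.split(":", 1)
        if mode is None:
            mode = int(field, 8)  # socket permission bits, octal
        else:
            group = field  # unix group that may connect
    return mode, group, rest
```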
10  contrib/ishare.iscu  (new file)
@@ -0,0 +1,10 @@
{
    "Name": "copyparty",
    "RequestURL": "http://127.0.0.1:3923/screenshots/",
    "Headers": {
        "pw": "PUT_YOUR_PASSWORD_HERE_MY_DUDE",
        "accept": "json"
    },
    "FileFormName": "f",
    "ResponseURL": "{{fileurl}}"
}
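per this config, ishare POSTs the screenshot as a multipart form field named `f` with the password in a `pw` header; a hand-rolled sketch of what that request body looks like, stdlib-only (the boundary value is arbitrary):

```python
def multipart_body(field, filename, payload, boundary="deadbeef"):
    # minimal multipart/form-data encoder; just enough to show the
    # shape of an upload where FileFormName is "f"
    return (
        ("--%s\r\n" % boundary).encode()
        + ('Content-Disposition: form-data; name="%s"; filename="%s"\r\n\r\n'
           % (field, filename)).encode()
        + payload
        + ("\r\n--%s--\r\n" % boundary).encode()
    )

body = multipart_body("f", "shot.png", b"\x89PNG")
headers = {"pw": "PUT_YOUR_PASSWORD_HERE_MY_DUDE", "accept": "json",
           "Content-Type": "multipart/form-data; boundary=deadbeef"}
```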
24  contrib/lighttpd/subdomain.conf  (new file)
@@ -0,0 +1,24 @@
# example usage for benchmarking:
#
#   taskset -c 1 lighttpd -Df ~/dev/copyparty/contrib/lighttpd/subdomain.conf
#
# lighttpd can connect to copyparty using either tcp (127.0.0.1)
# or a unix-socket, but unix-sockets are 37% faster because
# lighttpd doesn't reuse tcp connections, so we're doing unix-sockets
#
# this means we must run copyparty with one of the following:
#
#   -i unix:777:/dev/shm/party.sock
#   -i unix:777:/dev/shm/party.sock,127.0.0.1
#
# on fedora/rhel, remember to setsebool -P httpd_can_network_connect 1

server.port = 80
server.document-root = "/var/empty"
server.upload-dirs = ( "/dev/shm", "/tmp" )
server.modules = ( "mod_proxy" )
proxy.forwarded = ( "for" => 1, "proto" => 1 )
proxy.server = ( "" => ( ( "host" => "/dev/shm/party.sock" ) ) )

# if you really need to use tcp instead of unix-sockets, do this instead:
#proxy.server = ( "" => ( ( "host" => "127.0.0.1", "port" => "3923" ) ) )
31  contrib/lighttpd/subpath.conf  (new file)
@@ -0,0 +1,31 @@
# example usage for benchmarking:
#
#   taskset -c 1 lighttpd -Df ~/dev/copyparty/contrib/lighttpd/subpath.conf
#
# lighttpd can connect to copyparty using either tcp (127.0.0.1)
# or a unix-socket, but unix-sockets are 37% faster because
# lighttpd doesn't reuse tcp connections, so we're doing unix-sockets
#
# this means we must run copyparty with one of the following:
#
#   -i unix:777:/dev/shm/party.sock
#   -i unix:777:/dev/shm/party.sock,127.0.0.1
#
# also since this example proxies a subpath instead of the
# recommended subdomain-proxying, we must also specify this:
#
#   --rp-loc files
#
# on fedora/rhel, remember to setsebool -P httpd_can_network_connect 1

server.port = 80
server.document-root = "/var/empty"
server.upload-dirs = ( "/dev/shm", "/tmp" )
server.modules = ( "mod_proxy" )
$HTTP["url"] =~ "^/files" {
    proxy.forwarded = ( "for" => 1, "proto" => 1 )
    proxy.server = ( "" => ( ( "host" => "/dev/shm/party.sock" ) ) )

    # if you really need to use tcp instead of unix-sockets, do this instead:
    #proxy.server = ( "" => ( ( "host" => "127.0.0.1", "port" => "3923" ) ) )
}
@@ -1,14 +1,10 @@
-# when running copyparty behind a reverse proxy,
-# the following arguments are recommended:
-#
-# -i 127.0.0.1 only accept connections from nginx
-#
-# -nc must match or exceed the webserver's max number of concurrent clients;
-# copyparty default is 1024 if OS permits it (see "max clients:" on startup),
+# look for "max clients:" when starting copyparty, as nginx should
+# not accept more consecutive clients than what copyparty is able to;
 # nginx default is 512  (worker_processes 1, worker_connections 512)
 #
-# you may also consider adding -j0 for CPU-intensive configurations
-# (5'000 requests per second, or 20gbps upload/download in parallel)
+# rarely, in some extreme usecases, it can be good to add -j0
+# (40'000 requests per second, or 20gbps upload/download in parallel)
+# but this is usually counterproductive and slightly buggy
 #
 # on fedora/rhel, remember to setsebool -P httpd_can_network_connect 1
 #
@@ -20,10 +16,33 @@
 #
 # and then enable it below by uncomenting the cloudflare-only.conf line

-upstream cpp {
+upstream cpp_tcp {
+    # alternative 1: connect to copyparty using tcp;
+    # cpp_uds is slightly faster and more secure, but
+    # cpp_tcp is easier to setup and "just works"
+    # ...you should however restrict copyparty to only
+    # accept connections from nginx by adding these args:
+    # -i 127.0.0.1
+
     server 127.0.0.1:3923 fail_timeout=1s;
     keepalive 1;
 }
+
+
+upstream cpp_uds {
+    # alternative 2: unix-socket, aka. "unix domain socket";
+    # 5-10% faster, and better isolation from other software,
+    # but there must be at least one unix-group which both
+    # nginx and copyparty is a member of; if that group is
+    # "www" then run copyparty with the following args:
+    # -i unix:770:www:/dev/shm/party.sock
+
+    server unix:/dev/shm/party.sock fail_timeout=1s;
+    keepalive 1;
+}

 server {
     listen 443 ssl;
     listen [::]:443 ssl;

@@ -34,13 +53,18 @@ server {
     #include /etc/nginx/cloudflare-only.conf;

     location / {
-        proxy_pass http://cpp;
+        # recommendation: replace cpp_tcp with cpp_uds below
+        proxy_pass http://cpp_tcp;
         proxy_redirect off;
         # disable buffering (next 4 lines)
         proxy_http_version 1.1;
         client_max_body_size 0;
         proxy_buffering off;
         proxy_request_buffering off;
+        # improve download speed from 600 to 1500 MiB/s
+        proxy_buffers 32 8k;
+        proxy_buffer_size 16k;
+        proxy_busy_buffers_size 24k;

         proxy_set_header Host $host;
         proxy_set_header X-Real-IP $remote_addr;

@@ -52,6 +76,7 @@ server {
     }
 }

+
 # default client_max_body_size (1M) blocks uploads larger than 256 MiB
 client_max_body_size 1024M;
 client_header_timeout 610m;
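the max-clients comments above reduce to simple arithmetic: copyparty's `-nc` should be at least the proxy's `worker_processes * worker_connections`. a throwaway checker (the function name is ours, not a copyparty API), using the defaults quoted in the comments:

```python
def nc_is_enough(nc, worker_processes=1, worker_connections=512):
    # nginx can hold open at most worker_processes * worker_connections
    # sockets, and copyparty must be willing to accept that many
    return nc >= worker_processes * worker_connections

# defaults from the comments: copyparty 1024, nginx 1 * 512
ok = nc_is_enough(1024)
```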
@@ -1,6 +1,6 @@
 # Maintainer: icxes <dev.null@need.moe>
 pkgname=copyparty
-pkgver="1.14.3"
+pkgver="1.16.14"
 pkgrel=1
 pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
 arch=("any")

@@ -16,12 +16,13 @@ optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tag
             "libkeyfinder-git: detection of musical keys"
             "qm-vamp-plugins: BPM detection"
             "python-pyopenssl: ftps functionality"
-            "python-argon2_cffi: hashed passwords in config"
+            "python-pyzmq: send zeromq messages from event-hooks"
+            "python-argon2-cffi: hashed passwords in config"
             "python-impacket-git: smb support (bad idea)"
             )
 source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
 backup=("etc/${pkgname}.d/init" )
-sha256sums=("7fcdcec0d7b118bf17a98b1a409331dcfc0fbbd431b362b991f5800006fd8c98")
+sha256sums=("62ecebf89ebd30e8537e06d0ed533542fe8bbb15147e02714131d7412ea60425")

 build() {
     cd "${srcdir}/${pkgname}-${pkgver}"
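the PKGBUILD's `source=` line is plain string substitution; expanding the template by hand shows exactly which tarball the updated sha256 refers to:

```python
def source_url(pkgname="copyparty", pkgver="1.16.14"):
    # mirrors the PKGBUILD source= template:
    # https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz
    return ("https://github.com/9001/%s/releases/download/v%s/%s-%s.tar.gz"
            % (pkgname, pkgver, pkgname, pkgver))
```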
@@ -1,4 +1,4 @@
-{ lib, stdenv, makeWrapper, fetchurl, utillinux, python, jinja2, impacket, pyftpdlib, pyopenssl, argon2-cffi, pillow, pyvips, ffmpeg, mutagen,
+{ lib, stdenv, makeWrapper, fetchurl, utillinux, python, jinja2, impacket, pyftpdlib, pyopenssl, argon2-cffi, pillow, pyvips, pyzmq, ffmpeg, mutagen,

 # use argon2id-hashed passwords in config files (sha2 is always available)
 withHashedPasswords ? true,

@@ -21,6 +21,9 @@ withMediaProcessing ? true,
 # if MediaProcessing is not enabled, you probably want this instead (less accurate, but much safer and faster)
 withBasicAudioMetadata ? false,

+# send ZeroMQ messages from event-hooks
+withZeroMQ ? true,
+
 # enable FTPS support in the FTP server
 withFTPS ? false,

@@ -43,6 +46,7 @@ let
   ++ lib.optional withMediaProcessing ffmpeg
   ++ lib.optional withBasicAudioMetadata mutagen
   ++ lib.optional withHashedPasswords argon2-cffi
+  ++ lib.optional withZeroMQ pyzmq
 );
 in stdenv.mkDerivation {
   pname = "copyparty";
@@ -1,5 +1,5 @@
 {
-  "url": "https://github.com/9001/copyparty/releases/download/v1.14.3/copyparty-sfx.py",
-  "version": "1.14.3",
-  "hash": "sha256-1yVeJfYnyNNKYX3KdmYP0ECx7K8EjuWvApMw0diJ1sk="
+  "url": "https://github.com/9001/copyparty/releases/download/v1.16.14/copyparty-sfx.py",
+  "version": "1.16.14",
+  "hash": "sha256-hFIJdIOt1n2Raw9VdvTmX+C/xr8tgLMa956lhJ7DZvo="
 }
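unlike the hex digest in the PKGBUILD, the `hash` field here is an SRI string: `sha256-` plus the base64 of the raw digest. a quick converter; the expected value below is the standard digest of empty input, not one of the release hashes:

```python
import base64
import hashlib

def to_sri(data: bytes) -> str:
    # SRI format as used in the json above: "sha256-" + base64(raw digest)
    return "sha256-" + base64.b64encode(hashlib.sha256(data).digest()).decode()

empty = to_sri(b"")
```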
@@ -15,6 +15,7 @@ save one of these as `.epilogue.html` inside a folder to customize it:
 point `--js-browser` to one of these by URL:

 * [`minimal-up2k.js`](minimal-up2k.js) is similar to the above `minimal-up2k.html` except it applies globally to all write-only folders
+* [`quickmove.js`](quickmove.js) adds a hotkey to move selected files into a subfolder
 * [`up2k-hooks.js`](up2k-hooks.js) lets you specify a ruleset for files to skip uploading
 * [`up2k-hook-ytid.js`](up2k-hook-ytid.js) is a more specific example checking youtube-IDs against some API
117  contrib/plugins/graft-thumbs.js  (new file)
@@ -0,0 +1,117 @@
// USAGE:
//   place this file somewhere in the webroot and then
//   python3 -m copyparty --js-browser /.res/graft-thumbs.js
//
// DESCRIPTION:
//   this is a gridview plugin which, for each file in a folder,
//   looks for another file with the same filename (but with a
//   different file extension)
//
//   if one of those files is an image and the other is not,
//   then this plugin assumes the image is a "sidecar thumbnail"
//   for the other file, and it will graft the image thumbnail
//   onto the non-image file (for example an mp3)
//
//   optional feature 1, default-enabled:
//     the image-file is then hidden from the directory listing
//
//   optional feature 2, default-enabled:
//     when clicking the audio file, the image will also open


(function() {

    // `graft_thumbs` assumes the gridview has just been rendered;
    // it looks for sidecars, and transplants those thumbnails onto
    // the other file with the same basename (filename sans extension)

    var graft_thumbs = function () {
        if (!thegrid.en)
            return; // not in grid mode

        var files = msel.getall(),
            pairs = {};

        console.log(files);

        for (var a = 0; a < files.length; a++) {
            var file = files[a],
                is_pic = /\.(jpe?g|png|gif|webp)$/i.exec(file.vp),
                is_audio = re_au_all.exec(file.vp),
                basename = file.vp.replace(/\.[^\.]+$/, ""),
                entry = pairs[basename];

            if (!entry)
                // first time seeing this basename; create a new entry in pairs
                entry = pairs[basename] = {};

            if (is_pic)
                entry.thumb = file;
            else if (is_audio)
                entry.audio = file;
        }

        var basenames = Object.keys(pairs);
        for (var a = 0; a < basenames.length; a++)
            (function(a) {
                var pair = pairs[basenames[a]];

                if (!pair.thumb || !pair.audio)
                    return; // not a matching pair of files

                var img_thumb = QS('#ggrid a[ref="' + pair.thumb.id + '"] img[onload]'),
                    img_audio = QS('#ggrid a[ref="' + pair.audio.id + '"] img[onload]');

                if (!img_thumb || !img_audio)
                    return; // something's wrong... let's bail

                // alright, graft the thumb...
                img_audio.src = img_thumb.src;

                // ...and hide the sidecar
                img_thumb.closest('a').style.display = 'none';

                // ...and add another onclick-handler to the audio,
                // so it also opens the pic while playing the song
                img_audio.addEventListener('click', function() {
                    img_thumb.click();
                    return false; // let it bubble to the next listener
                });

            })(a);
    };

    // ...and then the trick! near the end of loadgrid,
    // thegrid.bagit is called to initialize the baguettebox
    // (image/video gallery); this is the perfect function to
    // "hook" (hijack) so we can run our code :^)

    // need to grab a backup of the original function first,
    var orig_func = thegrid.bagit;

    // and then replace it with our own:
    thegrid.bagit = function (isrc) {

        if (isrc !== '#ggrid')
            // we only want to modify the grid, so
            // let the original function handle this one
            return orig_func(isrc);

        graft_thumbs();

        // when changing directories, the grid is
        // rendered before msel returns the correct
        // filenames, so schedule another run:
        setTimeout(graft_thumbs, 1);

        // and finally, call the original thegrid.bagit function
        return orig_func(isrc);
    };

    if (ls0) {
        // the server included an initial listing json (ls0),
        // so the grid has already been rendered without our hook
        graft_thumbs();
    }

})();
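the sidecar-pairing at the heart of graft_thumbs is plain grouping by basename; restated as a self-contained python sketch (the AUDIO regex is a stand-in for copyparty's `re_au_all`, which covers more formats):

```python
import re

PIC = re.compile(r"\.(jpe?g|png|gif|webp)$", re.I)
AUDIO = re.compile(r"\.(mp3|flac|ogg|opus|m4a|wav)$", re.I)  # stand-in for re_au_all

def pair_sidecars(paths):
    # group by basename (filename sans extension), like graft_thumbs does,
    # and keep only basenames that have both a picture and an audio file
    pairs = {}
    for vp in paths:
        base = re.sub(r"\.[^.]+$", "", vp)
        slot = "thumb" if PIC.search(vp) else "audio" if AUDIO.search(vp) else None
        if slot:
            pairs.setdefault(base, {})[slot] = vp
    return {b: p for b, p in pairs.items() if "thumb" in p and "audio" in p}
```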
140  contrib/plugins/quickmove.js  (new file)
@@ -0,0 +1,140 @@
"use strict";


// USAGE:
//   place this file somewhere in the webroot,
//   for example in a folder named ".res" to hide it, and then
//   python3 copyparty-sfx.py -v .::A --js-browser /.res/quickmove.js
//
// DESCRIPTION:
//   the command above launches copyparty with one single volume;
//   ".::A" = current folder as webroot, and everyone has Admin
//
//   the plugin adds hotkey "W" which moves all selected files
//   into a subfolder named "foobar" inside the current folder


(function() {

    var action_to_perform = ask_for_confirmation_and_then_move;
    // this decides what the new hotkey should do;
    // ask_for_confirmation_and_then_move = show a yes/no box,
    // move_selected_files = just move the files immediately

    var move_destination = "foobar";
    // this is the target folder to move files to;
    // by default it is a subfolder of the current folder,
    // but it can also be an absolute path like "/foo/bar"

    // ===
    // === END OF CONFIG
    // ===

    var main_hotkey_handler, // copyparty's original hotkey handler
        plugin_enabler, // timer to engage this plugin when safe
        files_to_move; // list of files to move

    function ask_for_confirmation_and_then_move() {
        var num_files = msel.getsel().length,
            msg = "move the selected " + num_files + " files?";

        if (!num_files)
            return toast.warn(2, 'no files were selected to be moved');

        modal.confirm(msg, move_selected_files, null);
    }

    function move_selected_files() {
        var selection = msel.getsel();

        if (!selection.length)
            return toast.warn(2, 'no files were selected to be moved');

        if (thegrid.bbox) {
            // close image/video viewer
            thegrid.bbox = null;
            baguetteBox.destroy();
        }

        files_to_move = [];
        for (var a = 0; a < selection.length; a++)
            files_to_move.push(selection[a].vp);

        move_next_file();
    }

    function move_next_file() {
        var num_files = files_to_move.length,
            filepath = files_to_move.pop(),
            filename = vsplit(filepath)[1];

        toast.inf(10, "moving " + num_files + " files...\n\n" + filename);

        var dst = move_destination;

        if (!dst.endsWith('/'))
            // must have a trailing slash, so add it
            dst += '/';

        if (!dst.startsWith('/'))
            // destination is a relative path, so prefix current folder path
            dst = get_evpath() + dst;

        // and finally append the filename
        dst += '/' + filename;

        // prepare the move-request to be sent
        var xhr = new XHR();
        xhr.onload = xhr.onerror = function() {
            if (this.status !== 201)
                return toast.err(30, 'move failed: ' + esc(this.responseText));

            if (files_to_move.length)
                return move_next_file(); // still more files to go

            toast.ok(1, 'move OK');
            treectl.goto(); // reload the folder contents
        };
        xhr.open('POST', filepath + '?move=' + dst);
        xhr.send();
    }

    function our_hotkey_handler(e) {
        // bail if either ALT, CTRL, or SHIFT is pressed
        if (e.altKey || e.shiftKey || e.isComposing || ctrl(e))
            return main_hotkey_handler(e); // let copyparty handle this keystroke

        var key_name = (e.code || e.key) + '',
            ae = document.activeElement,
            aet = ae && ae != document.body ? ae.nodeName.toLowerCase() : '';

        // check the current aet (active element type),
        // only continue if one of the following currently has input focus:
        // nothing | link | button | table-row | table-cell | div | text
        if (aet && !/^(a|button|tr|td|div|pre)$/.test(aet))
            return main_hotkey_handler(e); // let copyparty handle this keystroke

        if (key_name == 'KeyW') {
            // okay, this one's for us... do the thing
            action_to_perform();
            return ev(e);
        }

        return main_hotkey_handler(e); // let copyparty handle this keystroke
    }

    function enable_plugin() {
        if (!window.hotkeys_attached)
            return console.log('quickmove is waiting for the page to finish loading');

        clearInterval(plugin_enabler);
        main_hotkey_handler = document.onkeydown;
        document.onkeydown = our_hotkey_handler;
        console.log('quickmove is now enabled');
    }

    // copyparty doesn't enable its hotkeys until the page
    // has finished loading, so we'll wait for that too
    plugin_enabler = setInterval(enable_plugin, 100);

})();
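move_next_file drives everything with a `POST filepath?move=dst` per file; the dst normalization can be restated in python (`cwd` stands in for the plugin's `get_evpath()`, and this sketch joins with a single slash rather than replicating the JS character-for-character):

```python
def move_dst(filepath, dest, cwd="/music/"):
    # same normalization as move_next_file: ensure a trailing slash,
    # prefix the current folder for relative destinations, then
    # append the filename being moved
    filename = filepath.rsplit("/", 1)[-1]
    if not dest.endswith("/"):
        dest += "/"
    if not dest.startswith("/"):
        dest = cwd + dest
    return filepath + "?move=" + dest + filename
```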
25  contrib/traefik/copyparty.yaml  (new file)
@@ -0,0 +1,25 @@
# ./traefik --configFile=copyparty.yaml

entryPoints:
  web:
    address: :8080
    transport:
      # don't disconnect during big uploads
      respondingTimeouts:
        readTimeout: "0s"
log:
  level: DEBUG
providers:
  file:
    # WARNING: must be same filename as current file
    filename: "copyparty.yaml"
http:
  services:
    service-cpp:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:3923/"
  routers:
    my-router:
      rule: "PathPrefix(`/`)"
      service: service-cpp
@@ -16,9 +16,10 @@ except:
     TYPE_CHECKING = False

 if True:
-    from typing import Any, Callable
+    from typing import Any, Callable, Optional

 PY2 = sys.version_info < (3,)
+PY36 = sys.version_info > (3, 6)
 if not PY2:
     unicode: Callable[[Any], str] = str
 else:
@@ -50,6 +51,64 @@ try:
|
|||||||
except:
|
except:
|
||||||
CORES = (os.cpu_count() if hasattr(os, "cpu_count") else 0) or 2
|
CORES = (os.cpu_count() if hasattr(os, "cpu_count") else 0) or 2
|
||||||
|
|
||||||
|
# all embedded resources to be retrievable over http
|
||||||
|
zs = """
|
||||||
|
web/a/partyfuse.py
|
||||||
|
web/a/u2c.py
|
||||||
|
web/a/webdav-cfg.bat
|
||||||
|
web/baguettebox.js
|
||||||
|
web/browser.css
|
||||||
|
web/browser.html
|
||||||
|
web/browser.js
|
||||||
|
web/browser2.html
|
||||||
|
web/cf.html
|
||||||
|
web/copyparty.gif
|
||||||
|
web/dd/2.png
|
||||||
|
web/dd/3.png
|
||||||
|
web/dd/4.png
|
||||||
|
web/dd/5.png
|
||||||
|
web/deps/busy.mp3
|
||||||
|
web/deps/easymde.css
|
||||||
|
web/deps/easymde.js
|
||||||
|
web/deps/marked.js
|
||||||
|
web/deps/fuse.py
|
||||||
|
web/deps/mini-fa.css
|
||||||
|
web/deps/mini-fa.woff
|
||||||
|
web/deps/prism.css
|
||||||
|
web/deps/prism.js
|
||||||
|
web/deps/prismd.css
|
||||||
|
web/deps/scp.woff2
|
||||||
|
web/deps/sha512.ac.js
|
||||||
|
web/deps/sha512.hw.js
|
||||||
|
web/iiam.gif
|
||||||
|
web/md.css
|
||||||
|
web/md.html
|
||||||
|
web/md.js
|
||||||
|
web/md2.css
|
||||||
|
web/md2.js
|
||||||
|
web/mde.css
|
||||||
|
web/mde.html
|
||||||
|
web/mde.js
|
||||||
|
web/msg.css
|
||||||
|
web/msg.html
|
||||||
|
web/rups.css
|
||||||
|
web/rups.html
|
||||||
|
web/rups.js
|
||||||
|
web/shares.css
|
||||||
|
web/shares.html
|
||||||
|
web/shares.js
|
||||||
|
web/splash.css
|
||||||
|
web/splash.html
|
||||||
|
web/splash.js
|
||||||
|
web/svcs.html
|
||||||
|
web/svcs.js
|
||||||
|
web/ui.css
|
||||||
|
web/up2k.js
|
||||||
|
web/util.js
|
||||||
|
web/w.hash.js
|
||||||
|
"""
|
||||||
|
RES = set(zs.strip().split("\n"))
|
||||||
|
|
||||||
|
|
||||||
class EnvParams(object):
|
class EnvParams(object):
|
||||||
def __init__(self) -> None:
|
def __init__(self) -> None:
|
||||||
|
|||||||
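The `RES` whitelist above gates which embedded files are retrievable over http via a single set-membership check. A standalone sketch of the pattern (the file list is trimmed, and `is_servable` is an illustrative helper, not copyparty's actual request handler):

```python
# build a whitelist of servable resources from a newline-separated blob,
# mirroring the RES = set(zs.strip().split("\n")) pattern in the diff
zs = """
web/browser.css
web/browser.js
web/deps/mini-fa.woff
"""
RES = set(zs.strip().split("\n"))


def is_servable(path: str) -> bool:
    # hypothetical guard: only exact matches against the whitelist are
    # retrievable; traversal attempts and typos fall through to a 404
    return path in RES


print(is_servable("web/browser.js"))   # True
print(is_servable("../etc/passwd"))    # False
```

Using a `set` keeps the lookup O(1) regardless of how many resources get embedded.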
copyparty/__main__.py
@@ -27,6 +27,7 @@ from .__init__ import (
     EXE,
     MACOS,
     PY2,
+    PY36,
     VT100,
     WINDOWS,
     E,
@@ -49,12 +50,19 @@ from .util import (
     PARTFTPY_VER,
     PY_DESC,
     PYFTPD_VER,
+    RAM_AVAIL,
+    RAM_TOTAL,
     SQLITE_VER,
     UNPLICATIONS,
+    URL_BUG,
+    URL_PRJ,
     Daemon,
     align_tab,
     ansi_re,
+    b64enc,
     dedent,
+    has_resource,
+    load_resource,
     min_ex,
     pybin,
     termsize,
@@ -204,7 +212,7 @@ def init_E(EE: EnvParams) -> None:
             errs.append("Using [%s] instead" % (p,))
 
         if errs:
-            print("WARNING: " + ". ".join(errs))
+            warn(". ".join(errs))
 
         return p  # type: ignore
     except Exception as ex:
@@ -234,7 +242,7 @@ def init_E(EE: EnvParams) -> None:
         raise
 
 
-def get_srvname() -> str:
+def get_srvname(verbose) -> str:
     try:
         ret: str = unicode(socket.gethostname()).split(".")[0]
     except:
@@ -244,7 +252,8 @@ def get_srvname() -> str:
         return ret
 
     fp = os.path.join(E.cfg, "name.txt")
-    lprint("using hostname from {}\n".format(fp))
+    if verbose:
+        lprint("using hostname from {}\n".format(fp))
     try:
         with open(fp, "rb") as f:
             ret = f.read().decode("utf-8", "replace").strip()
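The `get_srvname` hunks above prefer the short OS hostname and fall back to a `name.txt` override in the config directory, logging the source only when `verbose` is set. A hedged re-creation of that lookup order (the exact fallback conditions in copyparty may differ):

```python
import os
import socket


def get_srvname(cfg_dir: str, verbose: bool = False) -> str:
    # mirror the lookup order visible in the diff: prefer the short
    # OS hostname, fall back to a name.txt file in the config dir
    try:
        ret = socket.gethostname().split(".")[0]
    except Exception:
        ret = ""
    if ret:
        return ret

    fp = os.path.join(cfg_dir, "name.txt")
    if verbose:
        # the diff gates this message behind the new verbose parameter
        print("using hostname from {}".format(fp))
    with open(fp, "rb") as f:
        return f.read().decode("utf-8", "replace").strip()
```

The `verbose` parameter matches the signature change in the hunk; gating the log line avoids noise when the name is resolved repeatedly.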
@@ -266,7 +275,7 @@ def get_fk_salt() -> str:
         with open(fp, "rb") as f:
             ret = f.read().strip()
     except:
-        ret = base64.b64encode(os.urandom(18))
+        ret = b64enc(os.urandom(18))
         with open(fp, "wb") as f:
             f.write(ret + b"\n")
@@ -279,7 +288,7 @@ def get_dk_salt() -> str:
         with open(fp, "rb") as f:
             ret = f.read().strip()
     except:
-        ret = base64.b64encode(os.urandom(30))
+        ret = b64enc(os.urandom(30))
         with open(fp, "wb") as f:
             f.write(ret + b"\n")
@@ -292,7 +301,7 @@ def get_ah_salt() -> str:
         with open(fp, "rb") as f:
             ret = f.read().strip()
     except:
-        ret = base64.b64encode(os.urandom(18))
+        ret = b64enc(os.urandom(18))
         with open(fp, "wb") as f:
             f.write(ret + b"\n")
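The three `get_*_salt` helpers above share one read-or-create pattern, now emitting the salt through a `b64enc` helper instead of calling `base64.b64encode` inline. A self-contained sketch (the `b64enc` stand-in here is assumed to be plain base64; copyparty's helper may differ):

```python
import base64
import os


def b64enc(b: bytes) -> bytes:
    # stand-in for copyparty's b64enc helper; assumed equivalent
    # to standard base64 for the purposes of this sketch
    return base64.b64encode(b)


def get_salt(fp: str, nbytes: int = 18) -> bytes:
    # read-or-create pattern shared by get_fk_salt / get_dk_salt /
    # get_ah_salt: reuse the salt on disk if present, otherwise
    # generate a random one and persist it for next startup
    try:
        with open(fp, "rb") as f:
            return f.read().strip()
    except OSError:
        ret = b64enc(os.urandom(nbytes))
        with open(fp, "wb") as f:
            f.write(ret + b"\n")
        return ret
```

Persisting the salt is what keeps filekeys and hashed passwords stable across restarts.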
@@ -322,21 +331,19 @@ def ensure_locale() -> None:
 
 
 def ensure_webdeps() -> None:
-    ap = os.path.join(E.mod, "web/deps/mini-fa.woff")
-    if os.path.exists(ap):
+    if has_resource(E, "web/deps/mini-fa.woff"):
         return
 
-    warn(
-        """could not find webdeps;
+    t = """could not find webdeps;
 if you are running the sfx, or exe, or pypi package, or docker image,
 then this is a bug! Please let me know so I can fix it, thanks :-)
-https://github.com/9001/copyparty/issues/new?labels=bug&template=bug_report.md
+%s
 
 however, if you are a dev, or running copyparty from source, and you want
 full client functionality, you will need to build or obtain the webdeps:
-https://github.com/9001/copyparty/blob/hovudstraum/docs/devnotes.md#building
+%s/blob/hovudstraum/docs/devnotes.md#building
 """
-    )
+    warn(t % (URL_BUG, URL_PRJ))
 
 
 def configure_ssl_ver(al: argparse.Namespace) -> None:
@@ -350,7 +357,7 @@ def configure_ssl_ver(al: argparse.Namespace) -> None:
     # oh man i love openssl
     # check this out
     # hold my beer
-    assert ssl  # type: ignore
+    assert ssl  # type: ignore # !rm
     ptn = re.compile(r"^OP_NO_(TLS|SSL)v")
     sslver = terse_sslver(al.ssl_ver).split(",")
     flags = [k for k in ssl.__dict__ if ptn.match(k)]
@@ -384,7 +391,7 @@ def configure_ssl_ver(al: argparse.Namespace) -> None:
 
 
 def configure_ssl_ciphers(al: argparse.Namespace) -> None:
-    assert ssl  # type: ignore
+    assert ssl  # type: ignore # !rm
     ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
     if al.ssl_ver:
         ctx.options &= ~al.ssl_flags_en
@@ -516,14 +523,18 @@ def sfx_tpoke(top: str):
 
 
 def showlic() -> None:
-    p = os.path.join(E.mod, "res", "COPYING.txt")
-    if not os.path.exists(p):
+    try:
+        with load_resource(E, "res/COPYING.txt") as f:
+            buf = f.read()
+    except:
+        buf = b""
+
+    if buf:
+        print(buf.decode("utf-8", "replace"))
+    else:
         print("no relevant license info to display")
         return
 
-    with open(p, "rb") as f:
-        print(f.read().decode("utf-8", "replace"))
-
 
 def get_sects():
     return [
@@ -676,6 +687,8 @@ def get_sects():
     \033[36mxbu\033[35m executes CMD before a file upload starts
     \033[36mxau\033[35m executes CMD after a file upload finishes
     \033[36mxiu\033[35m executes CMD after all uploads finish and volume is idle
+    \033[36mxbc\033[35m executes CMD before a file copy
+    \033[36mxac\033[35m executes CMD after a file copy
     \033[36mxbr\033[35m executes CMD before a file rename/move
     \033[36mxar\033[35m executes CMD after a file rename/move
     \033[36mxbd\033[35m executes CMD before a file delete
@@ -727,6 +740,10 @@ def get_sects():
     the \033[33m,,\033[35m stops copyparty from reading the rest as flags and
     the \033[33m--\033[35m stops notify-send from reading the message as args
     and the alert will be "hey" followed by the messagetext
+
+    \033[36m--xau zmq:pub:tcp://*:5556\033[35m announces uploads on zeromq;
+    \033[36m--xau t3,zmq:push:tcp://*:5557\033[35m also works, and you can
+    \033[36m--xau t3,j,zmq:req:tcp://localhost:5555\033[35m too for example
     \033[0m
     each hook is executed once for each event, except for \033[36mxiu\033[0m
     which builds up a backlog of uploads, running the hook just once
@@ -758,11 +775,22 @@ def get_sects():
     values for --urlform:
     \033[36mstash\033[35m dumps the data to file and returns length + checksum
     \033[36msave,get\033[35m dumps to file and returns the page like a GET
-    \033[36mprint,get\033[35m prints the data in the log and returns GET
-    (leave out the ",get" to return an error instead)\033[0m
+    \033[36mprint \033[35m prints the data to log and returns an error
+    \033[36mprint,xm \033[35m prints the data to log and returns --xm output
+    \033[36mprint,get\033[35m prints the data to log and returns GET\033[0m
 
-    note that the \033[35m--xm\033[0m hook will only run if \033[35m--urlform\033[0m
-    is either \033[36mprint\033[0m or the default \033[36mprint,get\033[0m
+    note that the \033[35m--xm\033[0m hook will only run if \033[35m--urlform\033[0m is
+    either \033[36mprint\033[0m or \033[36mprint,get\033[0m or the default \033[36mprint,xm\033[0m
+
+    if an \033[35m--xm\033[0m hook returns text, then
+    the response code will be HTTP 202;
+    http/get responses will be HTTP 200
+
+    if there are multiple \033[35m--xm\033[0m hooks defined, then
+    the first hook that produced output is returned
+
+    if there are no \033[35m--xm\033[0m hooks defined, then the default
+    \033[36mprint,xm\033[0m behaves like \033[36mprint,get\033[0m (returning html)
     """
         ),
     ],
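With the new `print,xm` default for `--urlform`, whatever an `--xm` hook prints becomes the HTTP 202 response body, and the first hook that produces output wins. A minimal hypothetical hook in that style (how copyparty hands the message to the hook, here as the last argv, is an assumption; the `j` flag switches it to JSON):

```python
#!/usr/bin/env python3
# hypothetical --xm hook: inspects the posted message and, when it
# recognizes a command, prints a reply; per the help text above, the
# printed text becomes the HTTP 202 response body
import sys


def handle_msg(txt: str) -> str:
    if txt.strip().lower() == "ping":
        return "pong"
    # no output -> copyparty falls through to the next --xm hook
    return ""


if __name__ == "__main__":
    out = handle_msg(sys.argv[-1] if len(sys.argv) > 1 else "")
    if out:
        print(out)
```

Returning an empty string (printing nothing) is how a hook declines a message so a later hook can answer it instead.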
@@ -772,7 +800,7 @@ def get_sects():
         dedent(
             """
     specify --exp or the "exp" volflag to enable placeholder expansions
-    in README.md / .prologue.html / .epilogue.html
+    in README.md / PREADME.md / .prologue.html / .epilogue.html
 
     --exp-md (volflag exp_md) holds the list of placeholders which can be
     expanded in READMEs, and --exp-lg (volflag exp_lg) likewise for logues;
@@ -866,8 +894,9 @@ def get_sects():
     use argon2id with timecost 3, 256 MiB, 4 threads, version 19 (0x13/v1.3)
 
     \033[36m--ah-alg scrypt\033[0m  # which is the same as:
-    \033[36m--ah-alg scrypt,13,2,8,4\033[0m
-    use scrypt with cost 2**13, 2 iterations, blocksize 8, 4 threads
+    \033[36m--ah-alg scrypt,13,2,8,4,32\033[0m
+    use scrypt with cost 2**13, 2 iterations, blocksize 8, 4 threads,
+    and allow using up to 32 MiB RAM (ram=cost*blksz roughly)
 
     \033[36m--ah-alg sha2\033[0m  # which is the same as:
     \033[36m--ah-alg sha2,424242\033[0m
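The `scrypt,13,2,8,4,32` preset maps directly onto the standard scrypt cost parameters: `n=2**13`, `r=8` (blocksize), `p=4` (parallelism), 2 iterations, and a 32 MiB memory cap. A hedged sketch using the stdlib `hashlib.scrypt` (the iteration chaining and digest length are assumptions here; copyparty's actual hasher may differ):

```python
import hashlib


def hash_pw_scrypt(pw: bytes, salt: bytes, cost_exp: int = 13,
                   its: int = 2, blksz: int = 8, para: int = 4) -> bytes:
    # --ah-alg scrypt,13,2,8,4,32 style: cost 2**13, 2 iterations,
    # blocksize 8, 4 threads; the scrypt work buffer alone needs
    # 128 * r * n bytes (8 MiB here), hence the 32 MiB maxmem cap
    buf = pw
    for _ in range(its):
        buf = hashlib.scrypt(buf, salt=salt, n=2 ** cost_exp,
                             r=blksz, p=para,
                             maxmem=32 * 1024 * 1024, dklen=24)
    return buf
```

Raising `cost_exp` or `blksz` increases RAM proportionally, which is why the preset now carries an explicit memory limit as its last field.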
@@ -888,7 +917,7 @@ def get_sects():
         dedent(
             """
     the mDNS protocol is multicast-based, which means there are thousands
-    of fun and intersesting ways for it to break unexpectedly
+    of fun and interesting ways for it to break unexpectedly
 
     things to check if it does not work at all:
 
@@ -942,7 +971,7 @@ def add_general(ap, nc, srvname):
     ap2.add_argument("-v", metavar="VOL", type=u, action="append", help="add volume, \033[33mSRC\033[0m:\033[33mDST\033[0m:\033[33mFLAG\033[0m; examples [\033[32m.::r\033[0m], [\033[32m/mnt/nas/music:/music:r:aed\033[0m], see --help-accounts")
     ap2.add_argument("--grp", metavar="G:N,N", type=u, action="append", help="add group, \033[33mNAME\033[0m:\033[33mUSER1\033[0m,\033[33mUSER2\033[0m,\033[33m...\033[0m; example [\033[32madmins:ed,foo,bar\033[0m]")
     ap2.add_argument("-ed", action="store_true", help="enable the ?dots url parameter / client option which allows clients to see dotfiles / hidden files (volflag=dots)")
-    ap2.add_argument("--urlform", metavar="MODE", type=u, default="print,get", help="how to handle url-form POSTs; see \033[33m--help-urlform\033[0m")
+    ap2.add_argument("--urlform", metavar="MODE", type=u, default="print,xm", help="how to handle url-form POSTs; see \033[33m--help-urlform\033[0m")
     ap2.add_argument("--wintitle", metavar="TXT", type=u, default="cpp @ $pub", help="server terminal title, for example [\033[32m$ip-10.1.2.\033[0m] or [\033[32m$ip-]")
     ap2.add_argument("--name", metavar="TXT", type=u, default=srvname, help="server name (displayed topleft in browser and in mDNS)")
     ap2.add_argument("--mime", metavar="EXT=MIME", type=u, action="append", help="map file \033[33mEXT\033[0mension to \033[33mMIME\033[0mtype, for example [\033[32mjpg=image/jpeg\033[0m]")
@@ -992,10 +1021,12 @@ def add_upload(ap):
     ap2.add_argument("--reg-cap", metavar="N", type=int, default=38400, help="max number of uploads to keep in memory when running without \033[33m-e2d\033[0m; roughly 1 MiB RAM per 600")
     ap2.add_argument("--no-fpool", action="store_true", help="disable file-handle pooling -- instead, repeatedly close and reopen files during upload (bad idea to enable this on windows and/or cow filesystems)")
     ap2.add_argument("--use-fpool", action="store_true", help="force file-handle pooling, even when it might be dangerous (multiprocessing, filesystems lacking sparse-files support, ...)")
-    ap2.add_argument("--hardlink", action="store_true", help="prefer hardlinks instead of symlinks when possible (within same filesystem) (volflag=hardlink)")
-    ap2.add_argument("--never-symlink", action="store_true", help="do not fallback to symlinks when a hardlink cannot be made (volflag=neversymlink)")
-    ap2.add_argument("--no-dedup", action="store_true", help="disable symlink/hardlink creation; copy file contents instead (volflag=copydupes)")
+    ap2.add_argument("--dedup", action="store_true", help="enable symlink-based upload deduplication (volflag=dedup)")
+    ap2.add_argument("--safe-dedup", metavar="N", type=int, default=50, help="how careful to be when deduplicating files; [\033[32m1\033[0m] = just verify the filesize, [\033[32m50\033[0m] = verify file contents have not been altered (volflag=safededup)")
+    ap2.add_argument("--hardlink", action="store_true", help="enable hardlink-based dedup; will fallback on symlinks when that is impossible (across filesystems) (volflag=hardlink)")
+    ap2.add_argument("--hardlink-only", action="store_true", help="do not fallback to symlinks when a hardlink cannot be made (volflag=hardlinkonly)")
     ap2.add_argument("--no-dupe", action="store_true", help="reject duplicate files during upload; only matches within the same volume (volflag=nodupe)")
+    ap2.add_argument("--no-clone", action="store_true", help="do not use existing data on disk to satisfy dupe uploads; reduces server HDD reads in exchange for much more network load (volflag=noclone)")
     ap2.add_argument("--no-snap", action="store_true", help="disable snapshots -- forget unfinished uploads on shutdown; don't create .hist/up2k.snap files -- abandoned/interrupted uploads must be cleaned up manually")
     ap2.add_argument("--snap-wri", metavar="SEC", type=int, default=300, help="write upload state to ./hist/up2k.snap every \033[33mSEC\033[0m seconds; allows resuming incomplete uploads after a server crash")
     ap2.add_argument("--snap-drop", metavar="MIN", type=float, default=1440.0, help="forget unfinished uploads after \033[33mMIN\033[0m minutes; impossible to resume them after that (360=6h, 1440=24h)")
@@ -1007,7 +1038,8 @@ def add_upload(ap):
     ap2.add_argument("--sparse", metavar="MiB", type=int, default=4, help="windows-only: minimum size of incoming uploads through up2k before they are made into sparse files")
     ap2.add_argument("--turbo", metavar="LVL", type=int, default=0, help="configure turbo-mode in up2k client; [\033[32m-1\033[0m] = forbidden/always-off, [\033[32m0\033[0m] = default-off and warn if enabled, [\033[32m1\033[0m] = default-off, [\033[32m2\033[0m] = on, [\033[32m3\033[0m] = on and disable datecheck")
     ap2.add_argument("--u2j", metavar="JOBS", type=int, default=2, help="web-client: number of file chunks to upload in parallel; 1 or 2 is good for low-latency (same-country) connections, 4-8 for android clients, 16 for cross-atlantic (max=64)")
-    ap2.add_argument("--u2sz", metavar="N,N,N", type=u, default="1,64,96", help="web-client: default upload chunksize (MiB); sets \033[33mmin,default,max\033[0m in the settings gui. Each HTTP POST will aim for this size. Cloudflare max is 96. Big values are good for cross-atlantic but may increase HDD fragmentation on some FS. Disable this optimization with [\033[32m1,1,1\033[0m]")
+    ap2.add_argument("--u2sz", metavar="N,N,N", type=u, default="1,64,96", help="web-client: default upload chunksize (MiB); sets \033[33mmin,default,max\033[0m in the settings gui. Each HTTP POST will aim for \033[33mdefault\033[0m, and never exceed \033[33mmax\033[0m. Cloudflare max is 96. Big values are good for cross-atlantic but may increase HDD fragmentation on some FS. Disable this optimization with [\033[32m1,1,1\033[0m]")
+    ap2.add_argument("--u2ow", metavar="NUM", type=int, default=0, help="web-client: default setting for when to overwrite existing files; [\033[32m0\033[0m]=never, [\033[32m1\033[0m]=if-client-newer, [\033[32m2\033[0m]=always (volflag=u2ow)")
     ap2.add_argument("--u2sort", metavar="TXT", type=u, default="s", help="upload order; [\033[32ms\033[0m]=smallest-first, [\033[32mn\033[0m]=alphabetical, [\033[32mfs\033[0m]=force-s, [\033[32mfn\033[0m]=force-n -- alphabetical is a bit slower on fiber/LAN but makes it easier to eyeball if everything went fine")
     ap2.add_argument("--write-uplog", action="store_true", help="write POST reports to textfiles in working-directory")
 
@@ -1027,7 +1059,7 @@ def add_network(ap):
     else:
         ap2.add_argument("--freebind", action="store_true", help="allow listening on IPs which do not yet exist, for example if the network interfaces haven't finished going up. Only makes sense for IPs other than '0.0.0.0', '127.0.0.1', '::', and '::1'. May require running as root (unless net.ipv6.ip_nonlocal_bind)")
     ap2.add_argument("--s-thead", metavar="SEC", type=int, default=120, help="socket timeout (read request header)")
-    ap2.add_argument("--s-tbody", metavar="SEC", type=float, default=186.0, help="socket timeout (read/write request/response bodies). Use 60 on fast servers (default is extremely safe). Disable with 0 if reverse-proxied for a 2%% speed boost")
+    ap2.add_argument("--s-tbody", metavar="SEC", type=float, default=128.0, help="socket timeout (read/write request/response bodies). Use 60 on fast servers (default is extremely safe). Disable with 0 if reverse-proxied for a 2%% speed boost")
     ap2.add_argument("--s-rd-sz", metavar="B", type=int, default=256*1024, help="socket read size in bytes (indirectly affects filesystem writes; recommendation: keep equal-to or lower-than \033[33m--iobuf\033[0m)")
     ap2.add_argument("--s-wr-sz", metavar="B", type=int, default=256*1024, help="socket write size in bytes")
     ap2.add_argument("--s-wr-slp", metavar="SEC", type=float, default=0.0, help="debug: socket write delay in seconds")
@@ -1066,13 +1098,18 @@ def add_cert(ap, cert_path):
 
 
 def add_auth(ap):
+    ses_db = os.path.join(E.cfg, "sessions.db")
     ap2 = ap.add_argument_group('IdP / identity provider / user authentication options')
-    ap2.add_argument("--idp-h-usr", metavar="HN", type=u, default="", help="bypass the copyparty authentication checks and assume the request-header \033[33mHN\033[0m contains the username of the requesting user (for use with authentik/oauth/...)\n\033[1;31mWARNING:\033[0m if you enable this, make sure clients are unable to specify this header themselves; must be washed away and replaced by a reverse-proxy")
+    ap2.add_argument("--idp-h-usr", metavar="HN", type=u, default="", help="bypass the copyparty authentication checks if the request-header \033[33mHN\033[0m contains a username to associate the request with (for use with authentik/oauth/...)\n\033[1;31mWARNING:\033[0m if you enable this, make sure clients are unable to specify this header themselves; must be washed away and replaced by a reverse-proxy")
     ap2.add_argument("--idp-h-grp", metavar="HN", type=u, default="", help="assume the request-header \033[33mHN\033[0m contains the groupname of the requesting user; can be referenced in config files for group-based access control")
     ap2.add_argument("--idp-h-key", metavar="HN", type=u, default="", help="optional but recommended safeguard; your reverse-proxy will insert a secret header named \033[33mHN\033[0m into all requests, and the other IdP headers will be ignored if this header is not present")
     ap2.add_argument("--idp-gsep", metavar="RE", type=u, default="|:;+,", help="if there are multiple groups in \033[33m--idp-h-grp\033[0m, they are separated by one of the characters in \033[33mRE\033[0m")
     ap2.add_argument("--no-bauth", action="store_true", help="disable basic-authentication support; do not accept passwords from the 'Authenticate' header at all. NOTE: This breaks support for the android app")
     ap2.add_argument("--bauth-last", action="store_true", help="keeps basic-authentication enabled, but only as a last-resort; if a cookie is also provided then the cookie wins")
+    ap2.add_argument("--ses-db", metavar="PATH", type=u, default=ses_db, help="where to store the sessions database (if you run multiple copyparty instances, make sure they use different DBs)")
+    ap2.add_argument("--ses-len", metavar="CHARS", type=int, default=20, help="session key length; default is 120 bits ((20//4)*4*6)")
+    ap2.add_argument("--no-ses", action="store_true", help="disable sessions; use plaintext passwords in cookies")
+    ap2.add_argument("--ipu", metavar="CIDR=USR", type=u, action="append", help="users with IP matching \033[33mCIDR\033[0m are auto-authenticated as username \033[33mUSR\033[0m; example: [\033[32m172.16.24.0/24=dave]")
 
 
 def add_chpw(ap):
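The new `--ses-len` default of 20 characters works out to 120 bits of entropy because each base64 character carries 6 bits and the length is rounded down to a multiple of 4: `(20//4)*4*6 = 120`. One way such a session key could be generated (not necessarily copyparty's exact code):

```python
import base64
import os


def gen_ses_key(nchars: int = 20) -> str:
    # round down to a multiple of 4 base64 chars, since 4 chars encode
    # exactly 3 bytes; entropy is then (nchars//4)*4*6 bits, which is
    # 120 bits for the default of 20 chars
    nchars = (nchars // 4) * 4
    nbytes = nchars * 3 // 4
    return base64.urlsafe_b64encode(os.urandom(nbytes)).decode()
```

Rounding to a multiple of 4 avoids base64 padding, so the cookie value stays exactly `nchars` characters with no trailing `=`.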
@@ -1104,6 +1141,8 @@ def add_zc_mdns(ap):
     ap2.add_argument("--zm6", action="store_true", help="IPv6 only")
     ap2.add_argument("--zmv", action="store_true", help="verbose mdns")
     ap2.add_argument("--zmvv", action="store_true", help="verboser mdns")
+    ap2.add_argument("--zm-no-pe", action="store_true", help="mute parser errors (invalid incoming MDNS packets)")
+    ap2.add_argument("--zm-nwa-1", action="store_true", help="disable workaround for avahi-bug #379 (corruption in Avahi's mDNS reflection feature)")
     ap2.add_argument("--zms", metavar="dhf", type=u, default="", help="list of services to announce -- d=webdav h=http f=ftp s=smb -- lowercase=plaintext uppercase=TLS -- default: all enabled services except http/https (\033[32mDdfs\033[0m if \033[33m--ftp\033[0m and \033[33m--smb\033[0m is set, \033[32mDd\033[0m otherwise)")
     ap2.add_argument("--zm-ld", metavar="PATH", type=u, default="", help="link a specific folder for webdav shares")
     ap2.add_argument("--zm-lh", metavar="PATH", type=u, default="", help="link a specific folder for http shares")
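As a reading aid for the `--zms` service string: each letter selects a service and its case selects plaintext vs TLS. A minimal sketch under our own naming (not copyparty internals):

```python
SVCS = {"d": "webdav", "h": "http", "f": "ftp", "s": "smb"}

def parse_zms(spec: str):
    # uppercase letter = TLS variant of the service, lowercase = plaintext
    return [(SVCS[c.lower()], c.isupper()) for c in spec]

print(parse_zms("Dd"))  # the default when neither --ftp nor --smb is set
```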
@@ -1145,6 +1184,7 @@ def add_webdav(ap):
     ap2.add_argument("--dav-mac", action="store_true", help="disable apple-garbage filter -- allow macos to create junk files (._* and .DS_Store, .Spotlight-*, .fseventsd, .Trashes, .AppleDouble, __MACOS)")
     ap2.add_argument("--dav-rt", action="store_true", help="show symlink-destination's lastmodified instead of the link itself; always enabled for recursive listings (volflag=davrt)")
     ap2.add_argument("--dav-auth", action="store_true", help="force auth for all folders (required by davfs2 when only some folders are world-readable) (volflag=davauth)")
+    ap2.add_argument("--dav-ua1", metavar="PTN", type=u, default=r" kioworker/", help="regex of tricky user-agents which expect 401 from GET requests; disable with [\033[32mno\033[0m] or blank")
 
 
 def add_tftp(ap):
@@ -1166,7 +1206,7 @@ def add_smb(ap):
     ap2.add_argument("--smbw", action="store_true", help="enable write support (please dont)")
     ap2.add_argument("--smb1", action="store_true", help="disable SMBv2, only enable SMBv1 (CIFS)")
     ap2.add_argument("--smb-port", metavar="PORT", type=int, default=445, help="port to listen on -- if you change this value, you must NAT from TCP:445 to this port using iptables or similar")
-    ap2.add_argument("--smb-nwa-1", action="store_true", help="disable impacket#1433 workaround (truncate directory listings to 64kB)")
+    ap2.add_argument("--smb-nwa-1", action="store_true", help="truncate directory listings to 64kB (~400 files); avoids impacket-0.11 bug, fixes impacket-0.12 performance")
     ap2.add_argument("--smb-nwa-2", action="store_true", help="disable impacket workaround for filecopy globs")
     ap2.add_argument("--smba", action="store_true", help="small performance boost: disable per-account permissions, enables account coalescing instead (if one user has write/delete-access, then everyone does)")
     ap2.add_argument("--smbv", action="store_true", help="verbose")
@@ -1186,6 +1226,8 @@ def add_hooks(ap):
     ap2.add_argument("--xbu", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m before a file upload starts")
     ap2.add_argument("--xau", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m after a file upload finishes")
     ap2.add_argument("--xiu", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m after all uploads finish and volume is idle")
+    ap2.add_argument("--xbc", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m before a file copy")
+    ap2.add_argument("--xac", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m after a file copy")
     ap2.add_argument("--xbr", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m before a file move/rename")
     ap2.add_argument("--xar", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m after a file move/rename")
     ap2.add_argument("--xbd", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m before a file delete")
@@ -1218,15 +1260,17 @@ def add_optouts(ap):
     ap2.add_argument("--no-dav", action="store_true", help="disable webdav support")
     ap2.add_argument("--no-del", action="store_true", help="disable delete operations")
     ap2.add_argument("--no-mv", action="store_true", help="disable move/rename operations")
+    ap2.add_argument("--no-cp", action="store_true", help="disable copy operations")
     ap2.add_argument("-nth", action="store_true", help="no title hostname; don't show \033[33m--name\033[0m in <title>")
     ap2.add_argument("-nih", action="store_true", help="no info hostname -- don't show in UI")
     ap2.add_argument("-nid", action="store_true", help="no info disk-usage -- don't show in UI")
     ap2.add_argument("-nb", action="store_true", help="no powered-by-copyparty branding in UI")
-    ap2.add_argument("--no-zip", action="store_true", help="disable download as zip/tar")
+    ap2.add_argument("--zip-who", metavar="LVL", type=int, default=3, help="who can download as zip/tar? [\033[32m0\033[0m]=nobody, [\033[32m1\033[0m]=admins, [\033[32m2\033[0m]=authenticated-with-read-access, [\033[32m3\033[0m]=everyone-with-read-access (volflag=zip_who)\n\033[1;31mWARNING:\033[0m if a nested volume has a more restrictive value than a parent volume, then this will be \033[33mignored\033[0m if the download is initiated from the parent, more lenient volume")
+    ap2.add_argument("--no-zip", action="store_true", help="disable download as zip/tar; same as \033[33m--zip-who=0\033[0m")
     ap2.add_argument("--no-tarcmp", action="store_true", help="disable download as compressed tar (?tar=gz, ?tar=bz2, ?tar=xz, ?tar=gz:9, ...)")
     ap2.add_argument("--no-lifetime", action="store_true", help="do not allow clients (or server config) to schedule an upload to be deleted after a given time")
     ap2.add_argument("--no-pipe", action="store_true", help="disable race-the-beam (lockstep download of files which are currently being uploaded) (volflag=nopipe)")
-    ap2.add_argument("--no-db-ip", action="store_true", help="do not write uploader IPs into the database")
+    ap2.add_argument("--no-db-ip", action="store_true", help="do not write uploader-IP into the database; will also disable unpost, you may want \033[32m--forget-ip\033[0m instead (volflag=no_db_ip)")
 
 
 def add_safety(ap):
@@ -1240,7 +1284,7 @@ def add_safety(ap):
     ap2.add_argument("--no-dot-mv", action="store_true", help="disallow moving dotfiles; makes it impossible to move folders containing dotfiles")
     ap2.add_argument("--no-dot-ren", action="store_true", help="disallow renaming dotfiles; makes it impossible to turn something into a dotfile")
     ap2.add_argument("--no-logues", action="store_true", help="disable rendering .prologue/.epilogue.html into directory listings")
-    ap2.add_argument("--no-readme", action="store_true", help="disable rendering readme.md into directory listings")
+    ap2.add_argument("--no-readme", action="store_true", help="disable rendering readme/preadme.md into directory listings")
     ap2.add_argument("--vague-403", action="store_true", help="send 404 instead of 403 (security through ambiguity, very enterprise)")
     ap2.add_argument("--force-js", action="store_true", help="don't send folder listings as HTML, force clients to use the embedded json instead -- slight protection against misbehaving search engines which ignore \033[33m--no-robots\033[0m")
     ap2.add_argument("--no-robots", action="store_true", help="adds http and html headers asking search engines to not index anything (volflag=norobots)")
@@ -1285,12 +1329,14 @@ def add_logging(ap):
     ap2.add_argument("--ansi", action="store_true", help="force colors; overrides environment-variable NO_COLOR")
     ap2.add_argument("--no-logflush", action="store_true", help="don't flush the logfile after each write; tiny bit faster")
     ap2.add_argument("--no-voldump", action="store_true", help="do not list volumes and permissions on startup")
+    ap2.add_argument("--log-utc", action="store_true", help="do not use local timezone; assume the TZ env-var is UTC (tiny bit faster)")
     ap2.add_argument("--log-tdec", metavar="N", type=int, default=3, help="timestamp resolution / number of timestamp decimals")
     ap2.add_argument("--log-badpwd", metavar="N", type=int, default=1, help="log failed login attempt passwords: 0=terse, 1=plaintext, 2=hashed")
     ap2.add_argument("--log-conn", action="store_true", help="debug: print tcp-server msgs")
     ap2.add_argument("--log-htp", action="store_true", help="debug: print http-server threadpool scaling")
     ap2.add_argument("--ihead", metavar="HEADER", type=u, action='append', help="print request \033[33mHEADER\033[0m; [\033[32m*\033[0m]=all")
-    ap2.add_argument("--lf-url", metavar="RE", type=u, default=r"^/\.cpr/|\?th=[wj]$|/\.(_|ql_|DS_Store$|localized$)", help="dont log URLs matching regex \033[33mRE\033[0m")
+    ap2.add_argument("--ohead", metavar="HEADER", type=u, action='append', help="print response \033[33mHEADER\033[0m; [\033[32m*\033[0m]=all")
+    ap2.add_argument("--lf-url", metavar="RE", type=u, default=r"^/\.cpr/|[?&]th=[wjp]|/\.(_|ql_|DS_Store$|localized$)", help="dont log URLs matching regex \033[33mRE\033[0m")
 
 
 def add_admin(ap):
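The `--lf-url` default changes from `\?th=[wj]$` to `[?&]th=[wjp]`: thumbnail requests are now suppressed from the log even when `th` is not the first query parameter, and the new `p` thumbnail format is covered. A quick demonstration of the difference (the sample URL is ours):

```python
import re

# old and new --lf-url defaults, copied from this diff
OLD = r"^/\.cpr/|\?th=[wj]$|/\.(_|ql_|DS_Store$|localized$)"
NEW = r"^/\.cpr/|[?&]th=[wjp]|/\.(_|ql_|DS_Store$|localized$)"

url = "/pics/cat.jpg?cache=1&th=p"
print(bool(re.search(OLD, url)))  # False -- old default still logs this
print(bool(re.search(NEW, url)))  # True -- new default filters it out
```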
@@ -1298,9 +1344,16 @@ def add_admin(ap):
     ap2.add_argument("--no-reload", action="store_true", help="disable ?reload=cfg (reload users/volumes/volflags from config file)")
     ap2.add_argument("--no-rescan", action="store_true", help="disable ?scan (volume reindexing)")
     ap2.add_argument("--no-stack", action="store_true", help="disable ?stack (list all stacks)")
+    ap2.add_argument("--no-ups-page", action="store_true", help="disable ?ru (list of recent uploads)")
+    ap2.add_argument("--no-up-list", action="store_true", help="don't show list of incoming files in controlpanel")
+    ap2.add_argument("--dl-list", metavar="LVL", type=int, default=2, help="who can see active downloads in the controlpanel? [\033[32m0\033[0m]=nobody, [\033[32m1\033[0m]=admins, [\033[32m2\033[0m]=everyone")
+    ap2.add_argument("--ups-who", metavar="LVL", type=int, default=2, help="who can see recent uploads on the ?ru page? [\033[32m0\033[0m]=nobody, [\033[32m1\033[0m]=admins, [\033[32m2\033[0m]=everyone (volflag=ups_who)")
+    ap2.add_argument("--ups-when", action="store_true", help="let everyone see upload timestamps on the ?ru page, not just admins")
 
 
 def add_thumbnail(ap):
+    th_ram = (RAM_AVAIL or RAM_TOTAL or 9) * 0.6
+    th_ram = int(max(min(th_ram, 6), 1) * 10) / 10
     ap2 = ap.add_argument_group('thumbnail options')
     ap2.add_argument("--no-thumb", action="store_true", help="disable all thumbnails (volflag=dthumb)")
     ap2.add_argument("--no-vthumb", action="store_true", help="disable video thumbnails (volflag=dvthumb)")
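The two `th_ram` lines added above compute the new default for `--th-ram-max` (which this diff changes from a fixed 6.0): take 60% of available (or total) RAM in GiB, clamp to the range 1..6, and keep one decimal. The same arithmetic as a standalone sketch, with the RAM figure passed in as a parameter for illustration:

```python
def th_ram_default(ram_gib: float) -> float:
    # 60% of RAM, clamped to 1..6 GiB, truncated to one decimal
    th_ram = ram_gib * 0.6
    return int(max(min(th_ram, 6), 1) * 10) / 10

print(th_ram_default(16))  # plenty of RAM: capped at 6.0
print(th_ram_default(1))   # low RAM: floored at 1.0
```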
@@ -1308,7 +1361,7 @@ def add_thumbnail(ap):
     ap2.add_argument("--th-size", metavar="WxH", default="320x256", help="thumbnail res (volflag=thsize)")
     ap2.add_argument("--th-mt", metavar="CORES", type=int, default=CORES, help="num cpu cores to use for generating thumbnails")
     ap2.add_argument("--th-convt", metavar="SEC", type=float, default=60.0, help="conversion timeout in seconds (volflag=convt)")
-    ap2.add_argument("--th-ram-max", metavar="GB", type=float, default=6.0, help="max memory usage (GiB) permitted by thumbnailer; not very accurate")
+    ap2.add_argument("--th-ram-max", metavar="GB", type=float, default=th_ram, help="max memory usage (GiB) permitted by thumbnailer; not very accurate")
     ap2.add_argument("--th-crop", metavar="TXT", type=u, default="y", help="crop thumbnails to 4:3 or keep dynamic height; client can override in UI unless force. [\033[32my\033[0m]=crop, [\033[32mn\033[0m]=nocrop, [\033[32mfy\033[0m]=force-y, [\033[32mfn\033[0m]=force-n (volflag=crop)")
     ap2.add_argument("--th-x3", metavar="TXT", type=u, default="n", help="show thumbs at 3x resolution; client can override in UI unless force. [\033[32my\033[0m]=yes, [\033[32mn\033[0m]=no, [\033[32mfy\033[0m]=force-yes, [\033[32mfn\033[0m]=force-no (volflag=th3x)")
     ap2.add_argument("--th-dec", metavar="LIBS", default="vips,pil,ff", help="image decoders, in order of preference")
@@ -1323,27 +1376,37 @@ def add_thumbnail(ap):
     # https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html
     # https://github.com/libvips/libvips
     # ffmpeg -hide_banner -demuxers | awk '/^ D /{print$2}' | while IFS= read -r x; do ffmpeg -hide_banner -h demuxer=$x; done | grep -E '^Demuxer |extensions:'
-    ap2.add_argument("--th-r-pil", metavar="T,T", type=u, default="avif,avifs,blp,bmp,dcx,dds,dib,emf,eps,fits,flc,fli,fpx,gif,heic,heics,heif,heifs,icns,ico,im,j2p,j2k,jp2,jpeg,jpg,jpx,pbm,pcx,pgm,png,pnm,ppm,psd,qoi,sgi,spi,tga,tif,tiff,webp,wmf,xbm,xpm", help="image formats to decode using pillow")
+    ap2.add_argument("--th-r-pil", metavar="T,T", type=u, default="avif,avifs,blp,bmp,cbz,dcx,dds,dib,emf,eps,fits,flc,fli,fpx,gif,heic,heics,heif,heifs,icns,ico,im,j2p,j2k,jp2,jpeg,jpg,jpx,pbm,pcx,pgm,png,pnm,ppm,psd,qoi,sgi,spi,tga,tif,tiff,webp,wmf,xbm,xpm", help="image formats to decode using pillow")
     ap2.add_argument("--th-r-vips", metavar="T,T", type=u, default="avif,exr,fit,fits,fts,gif,hdr,heic,jp2,jpeg,jpg,jpx,jxl,nii,pfm,pgm,png,ppm,svg,tif,tiff,webp", help="image formats to decode using pyvips")
-    ap2.add_argument("--th-r-ffi", metavar="T,T", type=u, default="apng,avif,avifs,bmp,dds,dib,fit,fits,fts,gif,hdr,heic,heics,heif,heifs,icns,ico,jp2,jpeg,jpg,jpx,jxl,pbm,pcx,pfm,pgm,png,pnm,ppm,psd,qoi,sgi,tga,tif,tiff,webp,xbm,xpm", help="image formats to decode using ffmpeg")
+    ap2.add_argument("--th-r-ffi", metavar="T,T", type=u, default="apng,avif,avifs,bmp,cbz,dds,dib,fit,fits,fts,gif,hdr,heic,heics,heif,heifs,icns,ico,jp2,jpeg,jpg,jpx,jxl,pbm,pcx,pfm,pgm,png,pnm,ppm,psd,qoi,sgi,tga,tif,tiff,webp,xbm,xpm", help="image formats to decode using ffmpeg")
     ap2.add_argument("--th-r-ffv", metavar="T,T", type=u, default="3gp,asf,av1,avc,avi,flv,h264,h265,hevc,m4v,mjpeg,mjpg,mkv,mov,mp4,mpeg,mpeg2,mpegts,mpg,mpg2,mts,nut,ogm,ogv,rm,ts,vob,webm,wmv", help="video formats to decode using ffmpeg")
     ap2.add_argument("--th-r-ffa", metavar="T,T", type=u, default="aac,ac3,aif,aiff,alac,alaw,amr,apac,ape,au,bonk,dfpwm,dts,flac,gsm,ilbc,it,itgz,itxz,itz,m4a,mdgz,mdxz,mdz,mo3,mod,mp2,mp3,mpc,mptm,mt2,mulaw,ogg,okt,opus,ra,s3m,s3gz,s3xz,s3z,tak,tta,ulaw,wav,wma,wv,xm,xmgz,xmxz,xmz,xpk", help="audio formats to decode using ffmpeg")
-    ap2.add_argument("--au-unpk", metavar="E=F.C", type=u, default="mdz=mod.zip, mdgz=mod.gz, mdxz=mod.xz, s3z=s3m.zip, s3gz=s3m.gz, s3xz=s3m.xz, xmz=xm.zip, xmgz=xm.gz, xmxz=xm.xz, itz=it.zip, itgz=it.gz, itxz=it.xz", help="audio formats to decompress before passing to ffmpeg")
+    ap2.add_argument("--au-unpk", metavar="E=F.C", type=u, default="mdz=mod.zip, mdgz=mod.gz, mdxz=mod.xz, s3z=s3m.zip, s3gz=s3m.gz, s3xz=s3m.xz, xmz=xm.zip, xmgz=xm.gz, xmxz=xm.xz, itz=it.zip, itgz=it.gz, itxz=it.xz, cbz=jpg.cbz", help="audio/image formats to decompress before passing to ffmpeg")
 
 
 def add_transcoding(ap):
     ap2 = ap.add_argument_group('transcoding options')
     ap2.add_argument("--q-opus", metavar="KBPS", type=int, default=128, help="target bitrate for transcoding to opus; set 0 to disable")
     ap2.add_argument("--q-mp3", metavar="QUALITY", type=u, default="q2", help="target quality for transcoding to mp3, for example [\033[32m192k\033[0m] (CBR) or [\033[32mq0\033[0m] (CQ/CRF, q0=maxquality, q9=smallest); set 0 to disable")
+    ap2.add_argument("--no-caf", action="store_true", help="disable transcoding to caf-opus (affects iOS v12~v17), will use mp3 instead")
+    ap2.add_argument("--no-owa", action="store_true", help="disable transcoding to webm-opus (iOS v18 and later), will use mp3 instead")
     ap2.add_argument("--no-acode", action="store_true", help="disable audio transcoding")
     ap2.add_argument("--no-bacode", action="store_true", help="disable batch audio transcoding by folder download (zip/tar)")
     ap2.add_argument("--ac-maxage", metavar="SEC", type=int, default=86400, help="delete cached transcode output after \033[33mSEC\033[0m seconds")
 
 
+def add_rss(ap):
+    ap2 = ap.add_argument_group('RSS options')
+    ap2.add_argument("--rss", action="store_true", help="enable RSS output (experimental) (volflag=rss)")
+    ap2.add_argument("--rss-nf", metavar="HITS", type=int, default=250, help="default number of files to return (url-param 'nf')")
+    ap2.add_argument("--rss-fext", metavar="E,E", type=u, default="", help="default list of file extensions to include (url-param 'fext'); blank=all")
+    ap2.add_argument("--rss-sort", metavar="ORD", type=u, default="m", help="default sort order (url-param 'sort'); [\033[32mm\033[0m]=last-modified [\033[32mu\033[0m]=upload-time [\033[32mn\033[0m]=filename [\033[32ms\033[0m]=filesize; Uppercase=oldest-first. Note that upload-time is 0 for non-uploaded files")
+
+
 def add_db_general(ap, hcores):
     noidx = APPLESAN_TXT if MACOS else ""
     ap2 = ap.add_argument_group('general db options')
-    ap2.add_argument("-e2d", action="store_true", help="enable up2k database, making files searchable + enables upload deduplication")
+    ap2.add_argument("-e2d", action="store_true", help="enable up2k database; this enables file search, upload-undo, improves deduplication")
     ap2.add_argument("-e2ds", action="store_true", help="scan writable folders for new files on startup; sets \033[33m-e2d\033[0m")
     ap2.add_argument("-e2dsa", action="store_true", help="scans all folders on startup; sets \033[33m-e2ds\033[0m")
     ap2.add_argument("-e2v", action="store_true", help="verify file integrity; rehash all files and compare with db")
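The `--au-unpk` default gains `cbz=jpg.cbz` and its help text widens from audio to audio/image formats. A sketch of how an `E=F.C` mapping string can be read (the function name is ours and copyparty's internal parsing may differ): files with extension `E` are decompressed as container `C` and handed to the decoder as format `F`:

```python
def parse_au_unpk(spec: str) -> dict:
    # "mdz=mod.zip, cbz=jpg.cbz" -> {"mdz": ("mod", "zip"), "cbz": ("jpg", "cbz")}
    ret = {}
    for pair in spec.split(","):
        ext, _, fc = pair.strip().partition("=")
        fake_ext, _, container = fc.partition(".")
        ret[ext] = (fake_ext, container)
    return ret

m = parse_au_unpk("mdz=mod.zip, cbz=jpg.cbz")
print(m["cbz"])  # unpack the 'cbz' container, decode contents as 'jpg'
```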
@@ -1352,16 +1415,20 @@ def add_db_general(ap, hcores):
|
|||||||
ap2.add_argument("--hist", metavar="PATH", type=u, default="", help="where to store volume data (db, thumbs); default is a folder named \".hist\" inside each volume (volflag=hist)")
|
ap2.add_argument("--hist", metavar="PATH", type=u, default="", help="where to store volume data (db, thumbs); default is a folder named \".hist\" inside each volume (volflag=hist)")
|
||||||
ap2.add_argument("--no-hash", metavar="PTN", type=u, default="", help="regex: disable hashing of matching absolute-filesystem-paths during e2ds folder scans (volflag=nohash)")
|
ap2.add_argument("--no-hash", metavar="PTN", type=u, default="", help="regex: disable hashing of matching absolute-filesystem-paths during e2ds folder scans (volflag=nohash)")
|
||||||
ap2.add_argument("--no-idx", metavar="PTN", type=u, default=noidx, help="regex: disable indexing of matching absolute-filesystem-paths during e2ds folder scans (volflag=noidx)")
|
ap2.add_argument("--no-idx", metavar="PTN", type=u, default=noidx, help="regex: disable indexing of matching absolute-filesystem-paths during e2ds folder scans (volflag=noidx)")
|
||||||
|
ap2.add_argument("--no-dirsz", action="store_true", help="do not show total recursive size of folders in listings, show inode size instead; slightly faster (volflag=nodirsz)")
|
||||||
|
ap2.add_argument("--re-dirsz", action="store_true", help="if the directory-sizes in the UI are bonkers, use this along with \033[33m-e2dsa\033[0m to rebuild the index from scratch")
|
||||||
ap2.add_argument("--no-dhash", action="store_true", help="disable rescan acceleration; do full database integrity check -- makes the db ~5%% smaller and bootup/rescans 3~10x slower")
|
ap2.add_argument("--no-dhash", action="store_true", help="disable rescan acceleration; do full database integrity check -- makes the db ~5%% smaller and bootup/rescans 3~10x slower")
|
||||||
ap2.add_argument("--re-dhash", action="store_true", help="force a cache rebuild on startup; enable this once if it gets out of sync (should never be necessary)")
|
ap2.add_argument("--re-dhash", action="store_true", help="force a cache rebuild on startup; enable this once if it gets out of sync (should never be necessary)")
|
||||||
ap2.add_argument("--no-forget", action="store_true", help="never forget indexed files, even when deleted from disk -- makes it impossible to ever upload the same file twice -- only useful for offloading uploads to a cloud service or something (volflag=noforget)")
|
ap2.add_argument("--no-forget", action="store_true", help="never forget indexed files, even when deleted from disk -- makes it impossible to ever upload the same file twice -- only useful for offloading uploads to a cloud service or something (volflag=noforget)")
|
||||||
|
+    ap2.add_argument("--forget-ip", metavar="MIN", type=int, default=0, help="remove uploader-IP from database (and make unpost impossible) \033[33mMIN\033[0m minutes after upload, for GDPR reasons. Default [\033[32m0\033[0m] is never-forget. [\033[32m1440\033[0m]=day, [\033[32m10080\033[0m]=week, [\033[32m43200\033[0m]=month. (volflag=forget_ip)")
     ap2.add_argument("--dbd", metavar="PROFILE", default="wal", help="database durability profile; sets the tradeoff between robustness and speed, see \033[33m--help-dbd\033[0m (volflag=dbd)")
-    ap2.add_argument("--xlink", action="store_true", help="on upload: check all volumes for dupes, not just the target volume (volflag=xlink)")
+    ap2.add_argument("--xlink", action="store_true", help="on upload: check all volumes for dupes, not just the target volume (probably buggy, not recommended) (volflag=xlink)")
     ap2.add_argument("--hash-mt", metavar="CORES", type=int, default=hcores, help="num cpu cores to use for file hashing; set 0 or 1 for single-core hashing")
     ap2.add_argument("--re-maxage", metavar="SEC", type=int, default=0, help="rescan filesystem for changes every \033[33mSEC\033[0m seconds; 0=off (volflag=scan)")
     ap2.add_argument("--db-act", metavar="SEC", type=float, default=10.0, help="defer any scheduled volume reindexing until \033[33mSEC\033[0m seconds after last db write (uploads, renames, ...)")
     ap2.add_argument("--srch-time", metavar="SEC", type=int, default=45, help="search deadline -- terminate searches running for more than \033[33mSEC\033[0m seconds")
     ap2.add_argument("--srch-hits", metavar="N", type=int, default=7999, help="max search results to allow clients to fetch; 125 results will be shown initially")
+    ap2.add_argument("--srch-excl", metavar="PTN", type=u, default="", help="regex: exclude files from search results if the file-URL matches \033[33mPTN\033[0m (case-sensitive). Example: [\033[32mpassword|logs/[0-9]\033[0m] any URL containing 'password' or 'logs/DIGIT' (volflag=srch_excl)")
     ap2.add_argument("--dotsrch", action="store_true", help="show dotfiles in search results (volflags: dotsrch | nodotsrch)")

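The new `--srch-excl` option applies an unanchored, case-sensitive regex to each file-URL and hides matching hits. A minimal standalone sketch of that matching rule, using the example pattern from the help text (this is an illustration, not copyparty's actual filter code):

```python
import re

# the example pattern from the --srch-excl help text
ptn = re.compile(r"password|logs/[0-9]")

def excluded(url: str) -> bool:
    # unanchored search: a match anywhere in the URL hides the hit
    return bool(ptn.search(url))

hits = ["docs/passwords.txt", "logs/2024-01.txt", "music/song.mp3"]
visible = [u for u in hits if not excluded(u)]
# "docs/passwords.txt" matches 'password', "logs/2024-01.txt" matches 'logs/[0-9]'
```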
@@ -1418,9 +1485,13 @@ def add_ui(ap, retry):
     ap2.add_argument("--themes", metavar="NUM", type=int, default=8, help="number of themes installed")
     ap2.add_argument("--au-vol", metavar="0-100", type=int, default=50, choices=range(0, 101), help="default audio/video volume percent")
     ap2.add_argument("--sort", metavar="C,C,C", type=u, default="href", help="default sort order, comma-separated column IDs (see header tooltips), prefix with '-' for descending. Examples: \033[32mhref -href ext sz ts tags/Album tags/.tn\033[0m (volflag=sort)")
+    ap2.add_argument("--nsort", action="store_true", help="default-enable natural sort of filenames with leading numbers (volflag=nsort)")
+    ap2.add_argument("--hsortn", metavar="N", type=int, default=2, help="number of sorting rules to include in media URLs by default (volflag=hsortn)")
     ap2.add_argument("--unlist", metavar="REGEX", type=u, default="", help="don't show files matching \033[33mREGEX\033[0m in file list. Purely cosmetic! Does not affect API calls, just the browser. Example: [\033[32m\\.(js|css)$\033[0m] (volflag=unlist)")
     ap2.add_argument("--favico", metavar="TXT", type=u, default="c 000 none" if retry else "🎉 000 none", help="\033[33mfavicon-text\033[0m [ \033[33mforeground\033[0m [ \033[33mbackground\033[0m ] ], set blank to disable")
+    ap2.add_argument("--ext-th", metavar="E=VP", type=u, action="append", help="use thumbnail-image \033[33mVP\033[0m for file-extension \033[33mE\033[0m, example: [\033[32mexe=/.res/exe.png\033[0m] (volflag=ext_th)")
     ap2.add_argument("--mpmc", metavar="URL", type=u, default="", help="change the mediaplayer-toggle mouse cursor; URL to a folder with {2..5}.png inside (or disable with [\033[32m.\033[0m])")
+    ap2.add_argument("--spinner", metavar="TXT", type=u, default="🌲", help="\033[33memoji\033[0m or \033[33memoji,css\033[0m Example: [\033[32m🥖,padding:0\033[0m]")
     ap2.add_argument("--css-browser", metavar="L", type=u, default="", help="URL to additional CSS to include in the filebrowser html")
     ap2.add_argument("--js-browser", metavar="L", type=u, default="", help="URL to additional JS to include in the filebrowser html")
     ap2.add_argument("--js-other", metavar="L", type=u, default="", help="URL to additional JS to include in all other pages")
@@ -1430,12 +1501,15 @@ def add_ui(ap, retry):
     ap2.add_argument("--txt-max", metavar="KiB", type=int, default=64, help="max size of embedded textfiles on ?doc= (anything bigger will be lazy-loaded by JS)")
     ap2.add_argument("--doctitle", metavar="TXT", type=u, default="copyparty @ --name", help="title / service-name to show in html documents")
     ap2.add_argument("--bname", metavar="TXT", type=u, default="--name", help="server name (displayed in filebrowser document title)")
-    ap2.add_argument("--pb-url", metavar="URL", type=u, default="https://github.com/9001/copyparty", help="powered-by link; disable with \033[33m-np\033[0m")
+    ap2.add_argument("--pb-url", metavar="URL", type=u, default=URL_PRJ, help="powered-by link; disable with \033[33m-np\033[0m")
     ap2.add_argument("--ver", action="store_true", help="show version on the control panel (incompatible with \033[33m-nb\033[0m)")
     ap2.add_argument("--k304", metavar="NUM", type=int, default=0, help="configure the option to enable/disable k304 on the controlpanel (workaround for buggy reverse-proxies); [\033[32m0\033[0m] = hidden and default-off, [\033[32m1\033[0m] = visible and default-off, [\033[32m2\033[0m] = visible and default-on")
-    ap2.add_argument("--md-sbf", metavar="FLAGS", type=u, default="downloads forms popups scripts top-navigation-by-user-activation", help="list of capabilities to ALLOW for README.md docs (volflag=md_sbf); see https://developer.mozilla.org/en-US/docs/Web/HTML/Element/iframe#attr-sandbox")
-    ap2.add_argument("--lg-sbf", metavar="FLAGS", type=u, default="downloads forms popups scripts top-navigation-by-user-activation", help="list of capabilities to ALLOW for prologue/epilogue docs (volflag=lg_sbf)")
-    ap2.add_argument("--no-sb-md", action="store_true", help="don't sandbox README.md documents (volflags: no_sb_md | sb_md)")
+    ap2.add_argument("--no304", metavar="NUM", type=int, default=0, help="configure the option to enable/disable no304 on the controlpanel (workaround for buggy caching in browsers); [\033[32m0\033[0m] = hidden and default-off, [\033[32m1\033[0m] = visible and default-off, [\033[32m2\033[0m] = visible and default-on")
+    ap2.add_argument("--md-sbf", metavar="FLAGS", type=u, default="downloads forms popups scripts top-navigation-by-user-activation", help="list of capabilities to allow in the iframe 'sandbox' attribute for README.md docs (volflag=md_sbf); see https://developer.mozilla.org/en-US/docs/Web/HTML/Element/iframe#sandbox")
+    ap2.add_argument("--lg-sbf", metavar="FLAGS", type=u, default="downloads forms popups scripts top-navigation-by-user-activation", help="list of capabilities to allow in the iframe 'sandbox' attribute for prologue/epilogue docs (volflag=lg_sbf)")
+    ap2.add_argument("--md-sba", metavar="TXT", type=u, default="", help="the value of the iframe 'allow' attribute for README.md docs, for example [\033[32mfullscreen\033[0m] (volflag=md_sba)")
+    ap2.add_argument("--lg-sba", metavar="TXT", type=u, default="", help="the value of the iframe 'allow' attribute for prologue/epilogue docs (volflag=lg_sba); see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Permissions-Policy#iframes")
+    ap2.add_argument("--no-sb-md", action="store_true", help="don't sandbox README/PREADME.md documents (volflags: no_sb_md | sb_md)")
     ap2.add_argument("--no-sb-lg", action="store_true", help="don't sandbox prologue/epilogue docs (volflags: no_sb_lg | sb_lg); enables non-js support")

@@ -1459,16 +1533,19 @@ def add_debug(ap):
     ap2.add_argument("--bak-flips", action="store_true", help="[up2k] if a client uploads a bitflipped/corrupted chunk, store a copy according to \033[33m--bf-nc\033[0m and \033[33m--bf-dir\033[0m")
     ap2.add_argument("--bf-nc", metavar="NUM", type=int, default=200, help="bak-flips: stop if there's more than \033[33mNUM\033[0m files at \033[33m--kf-dir\033[0m already; default: 6.3 GiB max (200*32M)")
     ap2.add_argument("--bf-dir", metavar="PATH", type=u, default="bf", help="bak-flips: store corrupted chunks at \033[33mPATH\033[0m; default: folder named 'bf' wherever copyparty was started")
+    ap2.add_argument("--bf-log", metavar="PATH", type=u, default="", help="bak-flips: log corruption info to a textfile at \033[33mPATH\033[0m")
+    ap2.add_argument("--no-cfg-cmt-warn", action="store_true", help=argparse.SUPPRESS)


 # fmt: on

 def run_argparse(
-    argv: list[str], formatter: Any, retry: bool, nc: int
+    argv: list[str], formatter: Any, retry: bool, nc: int, verbose=True
 ) -> argparse.Namespace:
     ap = argparse.ArgumentParser(
         formatter_class=formatter,
+        usage=argparse.SUPPRESS,
         prog="copyparty",
         description="http file sharing hub v{} ({})".format(S_VERSION, S_BUILD_DT),
     )
@@ -1486,7 +1563,7 @@ def run_argparse(

     tty = os.environ.get("TERM", "").lower() == "linux"

-    srvname = get_srvname()
+    srvname = get_srvname(verbose)

     add_general(ap, nc, srvname)
     add_network(ap)
@@ -1505,6 +1582,7 @@ def run_argparse(
     add_db_metadata(ap)
     add_thumbnail(ap)
     add_transcoding(ap)
+    add_rss(ap)
     add_ftp(ap)
     add_webdav(ap)
     add_tftp(ap)
@@ -1553,16 +1631,13 @@ def run_argparse(
     return ret


-def main(argv: Optional[list[str]] = None, rsrc: Optional[str] = None) -> None:
+def main(argv: Optional[list[str]] = None) -> None:
     time.strptime("19970815", "%Y%m%d")  # python#7980
     if WINDOWS:
         os.system("rem")  # enables colors

     init_E(E)

-    if rsrc:  # pyz
-        E.mod = rsrc
-
     if argv is None:
         argv = sys.argv

@@ -1619,6 +1694,7 @@ def main(argv: Optional[list[str]] = None, rsrc: Optional[str] = None) -> None:
         ("--hdr-au-usr", "--idp-h-usr"),
         ("--idp-h-sep", "--idp-gsep"),
         ("--th-no-crop", "--th-crop=n"),
+        ("--never-symlink", "--hardlink-only"),
     ]
     for dk, nk in deprecated:
         idx = -1
@@ -1643,7 +1719,7 @@ def main(argv: Optional[list[str]] = None, rsrc: Optional[str] = None) -> None:
            argv.extend(["--qr"])
            if ANYWIN or not os.geteuid():
                # win10 allows symlinks if admin; can be unexpected
-               argv.extend(["-p80,443,3923", "--ign-ebind", "--no-dedup"])
+               argv.extend(["-p80,443,3923", "--ign-ebind"])
     except:
         pass

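The `deprecated` table above drives a small argv-rewrite loop (`for dk, nk in deprecated: ...`) so the parser only ever sees the new flag names. A minimal standalone sketch of that idea, not the exact copyparty implementation (`remap_deprecated` is a hypothetical name):

```python
deprecated = [
    ("--hdr-au-usr", "--idp-h-usr"),
    ("--idp-h-sep", "--idp-gsep"),
    ("--th-no-crop", "--th-crop=n"),
    ("--never-symlink", "--hardlink-only"),
]

def remap_deprecated(argv: list) -> list:
    # replace each deprecated flag in-place; when the replacement already
    # embeds a value (like --th-crop=n), it stands on its own, otherwise
    # the original '--old=value' suffix is carried over
    out = []
    for a in argv:
        for dk, nk in deprecated:
            if a == dk or a.startswith(dk + "="):
                a = nk + a[len(dk):] if "=" not in nk else nk
                break
        out.append(a)
    return out
```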
@@ -1665,7 +1741,7 @@ def main(argv: Optional[list[str]] = None, rsrc: Optional[str] = None) -> None:
     for fmtr in [RiceFormatter, RiceFormatter, Dodge11874, BasicDodge11874]:
         try:
             al = run_argparse(argv, fmtr, retry, nc)
-            dal = run_argparse([], fmtr, retry, nc)
+            dal = run_argparse([], fmtr, retry, nc, False)
             break
         except SystemExit:
             raise
@@ -1691,7 +1767,7 @@ def main(argv: Optional[list[str]] = None, rsrc: Optional[str] = None) -> None:
         except:
             lprint("\nfailed to disable quick-edit-mode:\n" + min_ex() + "\n")

-    if al.ansi:
+    if not al.ansi:
         al.wintitle = ""

     # propagate implications
@@ -1729,6 +1805,9 @@ def main(argv: Optional[list[str]] = None, rsrc: Optional[str] = None) -> None:
     if al.ihead:
         al.ihead = [x.lower() for x in al.ihead]

+    if al.ohead:
+        al.ohead = [x.lower() for x in al.ohead]
+
     if HAVE_SSL:
         if al.ssl_ver:
             configure_ssl_ver(al)
@@ -1749,7 +1828,7 @@ def main(argv: Optional[list[str]] = None, rsrc: Optional[str] = None) -> None:
         print("error: python2 cannot --smb")
         return

-    if sys.version_info < (3, 6):
+    if not PY36:
         al.no_scandir = True

     if not hasattr(os, "sendfile"):
@@ -1,8 +1,8 @@
 # coding: utf-8

-VERSION = (1, 14, 4)
-CODENAME = "one step forward"
-BUILD_DT = (2024, 9, 2)
+VERSION = (1, 16, 15)
+CODENAME = "COPYparty"
+BUILD_DT = (2025, 2, 25)

 S_VERSION = ".".join(map(str, VERSION))
 S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)
@@ -66,6 +66,7 @@ if PY2:
 LEELOO_DALLAS = "leeloo_dallas"

 SEE_LOG = "see log for details"
+SEESLOG = " (see serverlog for details)"
 SSEELOG = " ({})".format(SEE_LOG)
 BAD_CFG = "invalid config; {}".format(SEE_LOG)
 SBADCFG = " ({})".format(BAD_CFG)
@@ -164,8 +165,11 @@ class Lim(object):
         self.chk_rem(rem)
         if sz != -1:
             self.chk_sz(sz)
-            self.chk_vsz(broker, ptop, sz, volgetter)
-            self.chk_df(abspath, sz)  # side effects; keep last-ish
+        else:
+            sz = 0
+
+        self.chk_vsz(broker, ptop, sz, volgetter)
+        self.chk_df(abspath, sz)  # side effects; keep last-ish

         ap2, vp2 = self.rot(abspath)
         if abspath == ap2:
@@ -205,7 +209,15 @@ class Lim(object):

         if self.dft < time.time():
             self.dft = int(time.time()) + 300
-            self.dfv = get_df(abspath)[0] or 0
+
+            df, du, err = get_df(abspath, True)
+            if err:
+                t = "failed to read disk space usage for %r: %s"
+                self.log(t % (abspath, err), 3)
+                self.dfv = 0xAAAAAAAAA  # 42.6 GiB
+            else:
+                self.dfv = df or 0
+
             for j in list(self.reg.values()) if self.reg else []:
                 self.dfv -= int(j["size"] / (len(j["hash"]) or 999) * len(j["need"]))

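The new `chk_df` logic caches the free-space reading for 300 seconds and, when the filesystem refuses to report usage, falls back to a large optimistic sentinel (`0xAAAAAAAAA`, about 42.6 GiB) instead of 0, so uploads are not spuriously rejected. A standalone sketch of the same pattern, with `shutil.disk_usage` standing in for copyparty's `get_df` and hypothetical names:

```python
import shutil
import time

FALLBACK_FREE = 0xAAAAAAAAA  # ~42.6 GiB; optimistic sentinel when df is unreadable

class DiskFreeCache:
    def __init__(self):
        self.expiry = 0.0  # next refresh deadline (unix time)
        self.free = 0      # cached free bytes

    def get_free(self, path: str) -> int:
        now = time.time()
        if self.expiry < now:
            self.expiry = int(now) + 300  # refresh at most every 5 minutes
            try:
                self.free = shutil.disk_usage(path).free or 0
            except OSError:
                # unreadable filesystem: assume plenty of room rather than none
                self.free = FALLBACK_FREE
        return self.free
```

Treating "could not read" as "plenty of space" is the safer default here: a zero would make every size check fail and block all uploads on e.g. network mounts with broken statvfs.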
@@ -355,18 +367,21 @@ class VFS(object):
         self.ahtml: dict[str, list[str]] = {}
         self.aadmin: dict[str, list[str]] = {}
         self.adot: dict[str, list[str]] = {}
-        self.all_vols: dict[str, VFS] = {}
+        self.js_ls = {}
+        self.js_htm = ""

         if realpath:
             rp = realpath + ("" if realpath.endswith(os.sep) else os.sep)
             vp = vpath + ("/" if vpath else "")
             self.histpath = os.path.join(realpath, ".hist")  # db / thumbcache
             self.all_vols = {vpath: self}  # flattened recursive
+            self.all_nodes = {vpath: self}  # also jumpvols
             self.all_aps = [(rp, self)]
             self.all_vps = [(vp, self)]
         else:
             self.histpath = ""
             self.all_vols = {}
+            self.all_nodes = {}
             self.all_aps = []
             self.all_vps = []

@@ -384,9 +399,11 @@ class VFS(object):
     def get_all_vols(
         self,
         vols: dict[str, "VFS"],
+        nodes: dict[str, "VFS"],
         aps: list[tuple[str, "VFS"]],
         vps: list[tuple[str, "VFS"]],
     ) -> None:
+        nodes[self.vpath] = self
         if self.realpath:
             vols[self.vpath] = self
             rp = self.realpath
@@ -396,7 +413,7 @@ class VFS(object):
         vps.append((vp, self))

         for v in self.nodes.values():
-            v.get_all_vols(vols, aps, vps)
+            v.get_all_vols(vols, nodes, aps, vps)

     def add(self, src: str, dst: str) -> "VFS":
         """get existing, or add new path to the vfs"""
@@ -509,7 +526,7 @@ class VFS(object):
         """returns [vfsnode,fs_remainder] if user has the requested permissions"""
         if relchk(vpath):
             if self.log:
-                self.log("vfs", "invalid relpath [{}]".format(vpath))
+                self.log("vfs", "invalid relpath %r @%s" % (vpath, uname))
             raise Pebkac(422)

         cvpath = undot(vpath)
@@ -526,11 +543,11 @@ class VFS(object):
         if req and uname not in d and uname != LEELOO_DALLAS:
             if vpath != cvpath and vpath != "." and self.log:
                 ap = vn.canonical(rem)
-                t = "{} has no {} in [{}] => [{}] => [{}]"
-                self.log("vfs", t.format(uname, msg, vpath, cvpath, ap), 6)
+                t = "%s has no %s in %r => %r => %r"
+                self.log("vfs", t % (uname, msg, vpath, cvpath, ap), 6)

-            t = 'you don\'t have %s-access in "/%s" or below "/%s"'
-            raise Pebkac(err, t % (msg, cvpath, vn.vpath))
+            t = "you don't have %s-access in %r or below %r"
+            raise Pebkac(err, t % (msg, "/" + cvpath, "/" + vn.vpath))

         return vn, rem

@@ -540,15 +557,14 @@ class VFS(object):
             return self._get_dbv(vrem)

         shv, srem = src
-        return shv, vjoin(srem, vrem)
+        return shv._get_dbv(vjoin(srem, vrem))

     def _get_dbv(self, vrem: str) -> tuple["VFS", str]:
         dbv = self.dbv
         if not dbv:
             return self, vrem

-        tv = [self.vpath[len(dbv.vpath) :].lstrip("/"), vrem]
-        vrem = "/".join([x for x in tv if x])
+        vrem = vjoin(self.vpath[len(dbv.vpath) :].lstrip("/"), vrem)
         return dbv, vrem

     def canonical(self, rem: str, resolve: bool = True) -> str:
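Both `_get_dbv` changes above lean on a `vjoin`-style helper that joins virtual-path fragments while dropping empty parts; the manual two-step `"/".join` it replaces was equivalent. A minimal sketch of such a helper (copyparty's real `vjoin` lives in its util module; this is an illustrative re-implementation):

```python
def vjoin(rd: str, fn: str) -> str:
    # join two virtual-path fragments, skipping blanks so an empty
    # component never produces a stray leading/trailing slash
    if rd and fn:
        return rd + "/" + fn
    return rd or fn

# the old two-step form...
tv = ["music/cd1", "song.mp3"]
old = "/".join([x for x in tv if x])
# ...collapses to one call:
new = vjoin("music/cd1", "song.mp3")
```

Returning `shv._get_dbv(...)` instead of `shv` directly also makes the resolution recursive, so a share that itself sits inside another database-volume resolves all the way down to the node that owns the database.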
@@ -580,10 +596,11 @@ class VFS(object):
         scandir: bool,
         permsets: list[list[bool]],
         lstat: bool = False,
+        throw: bool = False,
     ) -> tuple[str, list[tuple[str, os.stat_result]], dict[str, "VFS"]]:
         """replaces _ls for certain shares (single-file, or file selection)"""
         vn, rem = self.shr_src  # type: ignore
-        abspath, real, _ = vn.ls(rem, "\n", scandir, permsets, lstat)
+        abspath, real, _ = vn.ls(rem, "\n", scandir, permsets, lstat, throw)
         real = [x for x in real if os.path.basename(x[0]) in self.shr_files]
         return abspath, real, {}

@@ -594,11 +611,12 @@ class VFS(object):
         scandir: bool,
         permsets: list[list[bool]],
         lstat: bool = False,
+        throw: bool = False,
     ) -> tuple[str, list[tuple[str, os.stat_result]], dict[str, "VFS"]]:
         """return user-readable [fsdir,real,virt] items at vpath"""
         virt_vis = {}  # nodes readable by user
         abspath = self.canonical(rem)
-        real = list(statdir(self.log, scandir, lstat, abspath))
+        real = list(statdir(self.log, scandir, lstat, abspath, throw))
         real.sort()
         if not rem:
             # no vfs nodes in the list of real inodes
@@ -640,7 +658,7 @@ class VFS(object):
         seen: list[str],
         uname: str,
         permsets: list[list[bool]],
-        wantdots: bool,
+        wantdots: int,
         scandir: bool,
         lstat: bool,
         subvols: bool = True,
@@ -660,6 +678,10 @@ class VFS(object):
         """
         recursively yields from ./rem;
         rel is a unix-style user-defined vpath (not vfs-related)
+
+        NOTE: don't invoke this function from a dbv; subvols are only
+        descended into if rem is blank due to the _ls `if not rem:`
+        which intention is to prevent unintended access to subvols
         """

         fsroot, vfs_ls, vfs_virt = self.ls(rem, uname, scandir, permsets, lstat=lstat)
@@ -671,8 +693,8 @@ class VFS(object):
             and fsroot in seen
         ):
             if self.log:
-                t = "bailing from symlink loop,\n prev: {}\n curr: {}\n from: {}/{}"
-                self.log("vfs.walk", t.format(seen[-1], fsroot, self.vpath, rem), 3)
+                t = "bailing from symlink loop,\n prev: %r\n curr: %r\n from: %r / %r"
+                self.log("vfs.walk", t % (seen[-1], fsroot, self.vpath, rem), 3)
             return

         if "xdev" in self.flags or "xvol" in self.flags:
@@ -684,7 +706,7 @@ class VFS(object):
                 rm1.append(le)
             _ = [vfs_ls.remove(x) for x in rm1]  # type: ignore

-        dots_ok = wantdots and uname in dbv.axs.udot
+        dots_ok = wantdots and (wantdots == 2 or uname in dbv.axs.udot)
         if not dots_ok:
             vfs_ls = [x for x in vfs_ls if "/." not in "/" + x[0]]

@@ -738,7 +760,7 @@ class VFS(object):
         # if single folder: the folder itself is the top-level item
         folder = "" if flt or not wrap else (vpath.split("/")[-1].lstrip(".") or "top")

-        g = self.walk(folder, vrem, [], uname, [[True, False]], True, scandir, False)
+        g = self.walk(folder, vrem, [], uname, [[True, False]], 1, scandir, False)
         for _, _, vpath, apath, files, rd, vd in g:
             if flt:
                 files = [x for x in files if x[0] in flt]
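The `wantdots` parameter changes from a bool to an int tri-state: 0 never lists dotfiles, 1 lists them only for users holding the dot-permission, 2 lists them unconditionally; existing call-sites passing `True` keep working because `True` compares equal to 1. The gate condition from the diff can be exercised in isolation (the function and parameter names here are illustrative, not copyparty's):

```python
def dots_ok(wantdots: int, user_has_udot: bool) -> bool:
    # 0: never; 1: only for users with dot-permission; 2: unconditionally
    return bool(wantdots and (wantdots == 2 or user_has_udot))

# bool call-sites keep working: True coerces like 1
assert dots_ok(True, True) == dots_ok(1, True)
```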
@@ -796,8 +818,8 @@ class VFS(object):

         if vdev != st.st_dev:
             if self.log:
-                t = "xdev: {}[{}] => {}[{}]"
-                self.log("vfs", t.format(vdev, self.realpath, st.st_dev, ap), 3)
+                t = "xdev: %s[%r] => %s[%r]"
+                self.log("vfs", t % (vdev, self.realpath, st.st_dev, ap), 3)

             return None

@@ -807,7 +829,7 @@ class VFS(object):
             return vn

         if self.log:
-            self.log("vfs", "xvol: [{}]".format(ap), 3)
+            self.log("vfs", "xvol: %r" % (ap,), 3)

         return None

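A recurring change throughout this diff swaps `"[{}]".format(path)` for printf-style `%r`, so untrusted path strings are logged via `repr()` with quoting and escapes instead of raw. A small illustration of why that matters for log hygiene (the payload string is made up):

```python
# a hostile filename containing a newline and an ANSI escape sequence
path = "evil\n[2025-01-01] fake log line\x1b[31m"

old_style = "xvol: [{}]".format(path)  # raw newline + ANSI escape reach the log
new_style = "xvol: %r" % (path,)       # repr() quotes and escapes everything

assert "\n" in old_style        # log-injection: the newline splits the entry
assert "\\n" in new_style       # escaped, stays on one line
assert "\x1b" not in new_style  # terminal escape neutralized
```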
@@ -840,8 +862,10 @@ class AuthSrv(object):

         # fwd-decl
         self.vfs = VFS(log_func, "", "", AXS(), {})
-        self.acct: dict[str, str] = {}
-        self.iacct: dict[str, str] = {}
+        self.acct: dict[str, str] = {}  # uname->pw
+        self.iacct: dict[str, str] = {}  # pw->uname
+        self.ases: dict[str, str] = {}  # uname->session
+        self.sesa: dict[str, str] = {}  # session->uname
         self.defpw: dict[str, str] = {}
         self.grps: dict[str, list[str]] = {}
         self.re_pwd: Optional[re.Pattern] = None
@@ -853,6 +877,7 @@ class AuthSrv(object):
         self.idp_accs: dict[str, list[str]] = {}  # username->groupnames
         self.idp_usr_gh: dict[str, str] = {}  # username->group-header-value (cache)

+        self.hid_cache: dict[str, str] = {}
         self.mutex = threading.Lock()
         self.reload()

@@ -889,7 +914,7 @@ class AuthSrv(object):

             self.idp_accs[uname] = gnames

-            t = "reinitializing due to new user from IdP: [%s:%s]"
+            t = "reinitializing due to new user from IdP: [%r:%r]"
             self.log(t % (uname, gnames), 3)

             if not broker:
@@ -897,7 +922,7 @@ class AuthSrv(object):
                 self._reload()
                 return True

-            broker.ask("_reload_blocking", False).get()
+            broker.ask("reload", False, True).get()
             return True

     def _map_volume_idp(
@@ -921,7 +946,7 @@ class AuthSrv(object):

         for un, gn in un_gn:
             # if ap/vp has a user/group placeholder, make sure to keep
-            # track so the same user/gruop is mapped when setting perms;
+            # track so the same user/group is mapped when setting perms;
             # otherwise clear un/gn to indicate it's a regular volume

             src1 = src0.replace("${u}", un or "\n")
@@ -1264,10 +1289,10 @@ class AuthSrv(object):
                 # one or more bools before the final flag; eat them
                 n1, uname = uname.split(",", 1)
                 for _, vp, _, _ in vols:
-                    self._read_volflag(flags[vp], n1, True, False)
+                    self._read_volflag(vp, flags[vp], n1, True, False)

                 for _, vp, _, _ in vols:
-                    self._read_volflag(flags[vp], uname, cval, False)
+                    self._read_volflag(vp, flags[vp], uname, cval, False)

                 return

@@ -1354,20 +1379,42 @@ class AuthSrv(object):
 
     def _read_volflag(
         self,
+        vpath: str,
         flags: dict[str, Any],
         name: str,
         value: Union[str, bool, list[str]],
         is_list: bool,
     ) -> None:
+        if name not in flagdescs:
+            name = name.lower()
+
+        # volflags are snake_case, but a leading dash is the removal operator
+        stripped = name.lstrip("-")
+        zi = len(name) - len(stripped)
+        if zi > 1:
+            t = "WARNING: the config for volume [/%s] specified a volflag with multiple leading hyphens (%s); use one hyphen to remove, or zero hyphens to add a flag. Will now enable flag [%s]"
+            self.log(t % (vpath, name, stripped), 3)
+            name = stripped
+            zi = 0
+
+        if stripped not in flagdescs and "-" in stripped:
+            name = ("-" * zi) + stripped.replace("-", "_")
+
         desc = flagdescs.get(name.lstrip("-"), "?").replace("\n", " ")
+
+        if not name:
+            self._e("└─unreadable-line")
+            t = "WARNING: the config for volume [/%s] indicated that a volflag was to be defined, but the volflag name was blank"
+            self.log(t % (vpath,), 3)
+            return
+
         if re.match("^-[^-]+$", name):
             t = "└─unset volflag [{}] ({})"
             self._e(t.format(name[1:], desc))
             flags[name] = True
             return
 
-        zs = "mtp on403 on404 xbu xau xiu xbr xar xbd xad xm xban"
+        zs = "ext_th mtp on403 on404 xbu xau xiu xbc xac xbr xar xbd xad xm xban"
         if name not in zs.split():
             if value is True:
                 t = "└─add volflag [{}] = {} ({})"
@@ -1490,6 +1537,14 @@ class AuthSrv(object):
         if not mount and not self.args.idp_h_usr:
             # -h says our defaults are CWD at root and read/write for everyone
             axs = AXS(["*"], ["*"], None, None)
+            if os.path.exists("/z/initcfg"):
+                t = "Read-access has been disabled due to failsafe: Docker detected, but the config does not define any volumes. This failsafe is to prevent unintended access if this is due to accidental loss of config. You can override this safeguard and allow read/write to all of /w/ by adding the following arguments to the docker container: -v .::rw"
+                self.log(t, 1)
+                axs = AXS()
+            elif self.args.c:
+                t = "Read-access has been disabled due to failsafe: No volumes were defined by the config-file. This failsafe is to prevent unintended access if this is due to accidental loss of config. You can override this safeguard and allow read/write to the working-directory by adding the following arguments: -v .::rw"
+                self.log(t, 1)
+                axs = AXS()
             vfs = VFS(self.log_func, absreal("."), "", axs, {})
         elif "" not in mount:
             # there's volumes but no root; make root inaccessible
@@ -1515,21 +1570,38 @@ class AuthSrv(object):
 
         assert vfs  # type: ignore
         vfs.all_vols = {}
+        vfs.all_nodes = {}
         vfs.all_aps = []
         vfs.all_vps = []
-        vfs.get_all_vols(vfs.all_vols, vfs.all_aps, vfs.all_vps)
-        for vol in vfs.all_vols.values():
+        vfs.get_all_vols(vfs.all_vols, vfs.all_nodes, vfs.all_aps, vfs.all_vps)
+        for vol in vfs.all_nodes.values():
             vol.all_aps.sort(key=lambda x: len(x[0]), reverse=True)
             vol.all_vps.sort(key=lambda x: len(x[0]), reverse=True)
             vol.root = vfs
 
+        zs = "neversymlink"
+        k_ign = set(zs.split())
+        for vol in vfs.all_vols.values():
+            unknown_flags = set()
+            for k, v in vol.flags.items():
+                stripped = k.lstrip("-")
+                if k != stripped and stripped not in vol.flags:
+                    t = "WARNING: the config for volume [/%s] tried to remove volflag [%s] by specifying [%s] but that volflag was not already set"
+                    self.log(t % (vol.vpath, stripped, k), 3)
+                k = stripped
+                if k not in flagdescs and k not in k_ign:
+                    unknown_flags.add(k)
+            if unknown_flags:
+                t = "WARNING: the config for volume [/%s] has unrecognized volflags; will ignore: '%s'"
+                self.log(t % (vol.vpath, "', '".join(unknown_flags)), 3)
+
         enshare = self.args.shr
         shr = enshare[1:-1]
         shrs = enshare[1:]
         if enshare:
             import sqlite3
 
-            shv = VFS(self.log_func, "", shr, AXS(), {"d2d": True})
+            shv = VFS(self.log_func, "", shr, AXS(), {})
 
             db_path = self.args.shr_db
             db = sqlite3.connect(db_path)
@@ -1542,14 +1614,14 @@ class AuthSrv(object):
                     continue
 
                 if self.args.shr_v:
-                    t = "loading %s share [%s] by [%s] => [%s]"
+                    t = "loading %s share %r by %r => %r"
                     self.log(t % (s_pr, s_k, s_un, s_vp))
 
                 if s_pw:
                     # gotta reuse the "account" for all shares with this pw,
                     # so do a light scramble as this appears in the web-ui
-                    zs = ub64enc(hashlib.sha512(s_pw.encode("utf-8")).digest())[4:16]
-                    sun = "s_%s" % (zs.decode("utf-8"),)
+                    zb = hashlib.sha512(s_pw.encode("utf-8")).digest()
+                    sun = "s_%s" % (ub64enc(zb)[4:16].decode("ascii"),)
                     acct[sun] = s_pw
                 else:
                     sun = "*"
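Pulling the password-to-account scramble out of the hunk above, a standalone sketch; the helper name `share_account_name` is hypothetical, and stdlib `base64.urlsafe_b64encode` stands in for copyparty's `ub64enc` (an assumption):

```python
import base64
import hashlib

def share_account_name(pw: str) -> str:
    # one stable pseudo-account per distinct password: hash it, then keep
    # a short urlsafe-base64 slice so the name shown in the web-ui is
    # filesystem/url-safe and does not reveal the password itself
    zb = hashlib.sha512(pw.encode("utf-8")).digest()
    return "s_%s" % (base64.urlsafe_b64encode(zb)[4:16].decode("ascii"),)
```

the same password always maps to the same account name, so all shares protected by one password can reuse one entry in `acct`.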
@@ -1569,7 +1641,7 @@ class AuthSrv(object):
 
             vfs.nodes[shr] = vfs.all_vols[shr] = shv
             for vol in shv.nodes.values():
-                vfs.all_vols[vol.vpath] = vol
+                vfs.all_vols[vol.vpath] = vfs.all_nodes[vol.vpath] = vol
                 vol.get_dbv = vol._get_share_src
                 vol.ls = vol._ls_nope
 
@@ -1654,8 +1726,12 @@ class AuthSrv(object):
         promote = []
         demote = []
         for vol in vfs.all_vols.values():
-            zb = hashlib.sha512(afsenc(vol.realpath)).digest()
-            hid = base64.b32encode(zb).decode("ascii").lower()
+            hid = self.hid_cache.get(vol.realpath)
+            if not hid:
+                zb = hashlib.sha512(afsenc(vol.realpath)).digest()
+                hid = base64.b32encode(zb).decode("ascii").lower()
+                self.hid_cache[vol.realpath] = hid
+
             vflag = vol.flags.get("hist")
             if vflag == "-":
                 pass
@@ -1708,7 +1784,19 @@ class AuthSrv(object):
 
             self.log("\n\n".join(ta) + "\n", c=3)
 
-        vfs.histtab = {zv.realpath: zv.histpath for zv in vfs.all_vols.values()}
+        rhisttab = {}
+        vfs.histtab = {}
+        for zv in vfs.all_vols.values():
+            histp = zv.histpath
+            is_shr = shr and zv.vpath.split("/")[0] == shr
+            if histp and not is_shr and histp in rhisttab:
+                t = "invalid config; multiple volumes share the same histpath (database location):\n histpath: %s\n volume 1: /%s [%s]\n volume 2: %s [%s]"
+                zv2 = rhisttab[histp]
+                t = t % (histp, zv2.vpath, zv2.realpath, zv.vpath, zv.realpath)
+                self.log(t, 1)
+                raise Exception(t)
+            rhisttab[histp] = zv
+            vfs.histtab[zv.realpath] = histp
 
         for vol in vfs.all_vols.values():
             lim = Lim(self.log_func)
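The histpath collision check introduced above boils down to a reverse map from histpath back to the first volume that claimed it; a minimal sketch with a hypothetical `check_histpaths` helper:

```python
def check_histpaths(vols: dict[str, str]) -> list[tuple[str, str]]:
    # vols maps vpath -> histpath; two volumes sharing one histpath
    # would also share one database, so report each colliding pair
    seen: dict[str, str] = {}
    bad: list[tuple[str, str]] = []
    for vpath, histp in vols.items():
        if histp in seen:
            bad.append((seen[histp], vpath))
        else:
            seen[histp] = vpath
    return bad
```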
@@ -1723,7 +1811,7 @@ class AuthSrv(object):
             use = True
             try:
                 _ = float(zs)
-                zs = "%sg" % (zs)
+                zs = "%sg" % (zs,)
             except:
                 pass
             lim.dfl = unhumanize(zs)
@@ -1767,12 +1855,12 @@ class AuthSrv(object):
             vol.lim = lim
 
         if self.args.no_robots:
-            for vol in vfs.all_vols.values():
+            for vol in vfs.all_nodes.values():
                 # volflag "robots" overrides global "norobots", allowing indexing by search engines for this vol
                 if not vol.flags.get("robots"):
                     vol.flags["norobots"] = True
 
-        for vol in vfs.all_vols.values():
+        for vol in vfs.all_nodes.values():
             if self.args.no_vthumb:
                 vol.flags["dvthumb"] = True
             if self.args.no_athumb:
|
|||||||
vol.flags["dithumb"] = True
|
vol.flags["dithumb"] = True
|
||||||
|
|
||||||
have_fk = False
|
have_fk = False
|
||||||
for vol in vfs.all_vols.values():
|
for vol in vfs.all_nodes.values():
|
||||||
fk = vol.flags.get("fk")
|
fk = vol.flags.get("fk")
|
||||||
fka = vol.flags.get("fka")
|
fka = vol.flags.get("fka")
|
||||||
if fka and not fk:
|
if fka and not fk:
|
||||||
fk = fka
|
fk = fka
|
||||||
if fk:
|
if fk:
|
||||||
vol.flags["fk"] = int(fk) if fk is not True else 8
|
fk = 8 if fk is True else int(fk)
|
||||||
|
if fk > 72:
|
||||||
|
t = "max filekey-length is 72; volume /%s specified %d (anything higher than 16 is pointless btw)"
|
||||||
|
raise Exception(t % (vol.vpath, fk))
|
||||||
|
vol.flags["fk"] = fk
|
||||||
have_fk = True
|
have_fk = True
|
||||||
|
|
||||||
dk = vol.flags.get("dk")
|
dk = vol.flags.get("dk")
|
||||||
@@ -1816,7 +1908,7 @@ class AuthSrv(object):
             zs = os.path.join(E.cfg, "fk-salt.txt")
             self.log(t % (fk_len, 16, zs), 3)
 
-        for vol in vfs.all_vols.values():
+        for vol in vfs.all_nodes.values():
             if "pk" in vol.flags and "gz" not in vol.flags and "xz" not in vol.flags:
                 vol.flags["gz"] = False  # def.pk
 
@@ -1827,7 +1919,7 @@ class AuthSrv(object):
 
         all_mte = {}
         errors = False
-        for vol in vfs.all_vols.values():
+        for vol in vfs.all_nodes.values():
             if (self.args.e2ds and vol.axs.uwrite) or self.args.e2dsa:
                 vol.flags["e2ds"] = True
 
@@ -1838,6 +1930,7 @@ class AuthSrv(object):
                 ["no_hash", "nohash"],
                 ["no_idx", "noidx"],
                 ["og_ua", "og_ua"],
+                ["srch_excl", "srch_excl"],
             ]:
                 if vf in vol.flags:
                     ptn = re.compile(vol.flags.pop(vf))
@@ -1863,11 +1956,8 @@ class AuthSrv(object):
                 if vf not in vol.flags:
                     vol.flags[vf] = getattr(self.args, ga)
 
-            for k in ("nrand",):
-                if k not in vol.flags:
-                    vol.flags[k] = getattr(self.args, k)
-
-            for k in ("nrand", "u2abort"):
+            zs = "forget_ip nrand u2abort u2ow ups_who zip_who"
+            for k in zs.split():
                 if k in vol.flags:
                     vol.flags[k] = int(vol.flags[k])
 
@@ -1891,6 +1981,11 @@ class AuthSrv(object):
                 if len(zs) == 3:  # fc5 => ffcc55
                     vol.flags["tcolor"] = "".join([x * 2 for x in zs])
 
+            if vol.flags.get("neversymlink"):
+                vol.flags["hardlinkonly"] = True  # was renamed
+            if vol.flags.get("hardlinkonly"):
+                vol.flags["hardlink"] = True
+
             for k1, k2 in IMPLICATIONS:
                 if k1 in vol.flags:
                     vol.flags[k2] = True
||||||
@@ -1913,9 +2008,11 @@ class AuthSrv(object):
|
|||||||
vol.flags[k] = odfusion(getattr(self.args, k), vol.flags[k])
|
vol.flags[k] = odfusion(getattr(self.args, k), vol.flags[k])
|
||||||
|
|
||||||
# append additive args from argv to volflags
|
# append additive args from argv to volflags
|
||||||
hooks = "xbu xau xiu xbr xar xbd xad xm xban".split()
|
hooks = "xbu xau xiu xbc xac xbr xar xbd xad xm xban".split()
|
||||||
for name in "mtp on404 on403".split() + hooks:
|
for name in "ext_th mtp on404 on403".split() + hooks:
|
||||||
self._read_volflag(vol.flags, name, getattr(self.args, name), True)
|
self._read_volflag(
|
||||||
|
vol.vpath, vol.flags, name, getattr(self.args, name), True
|
||||||
|
)
|
||||||
|
|
||||||
for hn in hooks:
|
for hn in hooks:
|
||||||
cmds = vol.flags.get(hn)
|
cmds = vol.flags.get(hn)
|
||||||
@@ -1943,6 +2040,16 @@ class AuthSrv(object):
                     ncmds.append(ocmd)
                 vol.flags[hn] = ncmds
 
+            ext_th = vol.flags["ext_th_d"] = {}
+            etv = "(?)"
+            try:
+                for etv in vol.flags.get("ext_th") or []:
+                    k, v = etv.split("=")
+                    ext_th[k] = v
+            except:
+                t = "WARNING: volume [/%s]: invalid value specified for ext-th: %s"
+                self.log(t % (vol.vpath, etv), 3)
+
             # d2d drops all database features for a volume
             for grp, rm in [["d2d", "e2d"], ["d2t", "e2t"], ["d2d", "e2v"]]:
                 if not vol.flags.get(grp, False):
@@ -1995,9 +2102,6 @@ class AuthSrv(object):
             for x in drop:
                 vol.flags.pop(x)
 
-            if vol.flags.get("neversymlink") and not vol.flags.get("hardlink"):
-                vol.flags["copydupes"] = True
-
             # verify tags mentioned by -mt[mp] are used by -mte
             local_mtp = {}
             local_only_mtp = {}
||||||
local_mtp = {}
|
local_mtp = {}
|
||||||
local_only_mtp = {}
|
local_only_mtp = {}
|
||||||
@@ -2042,8 +2146,24 @@ class AuthSrv(object):
|
|||||||
self.log(t.format(mtp), 1)
|
self.log(t.format(mtp), 1)
|
||||||
errors = True
|
errors = True
|
||||||
|
|
||||||
have_daw = False
|
|
||||||
for vol in vfs.all_vols.values():
|
for vol in vfs.all_vols.values():
|
||||||
|
re1: Optional[re.Pattern] = vol.flags.get("srch_excl")
|
||||||
|
excl = [re1.pattern] if re1 else []
|
||||||
|
|
||||||
|
vpaths = []
|
||||||
|
vtop = vol.vpath
|
||||||
|
for vp2 in vfs.all_vols.keys():
|
||||||
|
if vp2.startswith((vtop + "/").lstrip("/")) and vtop != vp2:
|
||||||
|
vpaths.append(re.escape(vp2[len(vtop) :].lstrip("/")))
|
||||||
|
if vpaths:
|
||||||
|
excl.append("^(%s)/" % ("|".join(vpaths),))
|
||||||
|
|
||||||
|
vol.flags["srch_re_dots"] = re.compile("|".join(excl or ["^$"]))
|
||||||
|
excl.extend([r"^\.", r"/\."])
|
||||||
|
vol.flags["srch_re_nodot"] = re.compile("|".join(excl))
|
||||||
|
|
||||||
|
have_daw = False
|
||||||
|
for vol in vfs.all_nodes.values():
|
||||||
daw = vol.flags.get("daw") or self.args.daw
|
daw = vol.flags.get("daw") or self.args.daw
|
||||||
if daw:
|
if daw:
|
||||||
vol.flags["daw"] = True
|
vol.flags["daw"] = True
|
||||||
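The exclusion regex assembled above keeps a search in one volume from descending into nested volumes; a standalone sketch with a hypothetical `search_excl_re` helper using the same string logic as the hunk:

```python
import re

def search_excl_re(vtop: str, all_vpaths: list[str]) -> "re.Pattern":
    # collect vpaths of volumes nested below vtop, expressed relative
    # to vtop, and build one anchored alternation matching anything
    # inside them; "^$" matches nothing when there are no subvolumes
    sub = []
    for vp2 in all_vpaths:
        if vp2.startswith((vtop + "/").lstrip("/")) and vtop != vp2:
            sub.append(re.escape(vp2[len(vtop):].lstrip("/")))
    pattern = "^(%s)/" % ("|".join(sub),) if sub else "^$"
    return re.compile(pattern)
```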
@@ -2058,13 +2178,12 @@ class AuthSrv(object):
             self.log("--smb can only be used when --ah-alg is none", 1)
             errors = True
 
-        for vol in vfs.all_vols.values():
+        for vol in vfs.all_nodes.values():
             for k in list(vol.flags.keys()):
                 if re.match("^-[^-]+$", k):
                     vol.flags.pop(k[1:], None)
                     vol.flags.pop(k)
 
-        for vol in vfs.all_vols.values():
             if vol.flags.get("dots"):
                 for name in vol.axs.uread:
                     vol.axs.udot.add(name)
@@ -2076,6 +2195,8 @@ class AuthSrv(object):
 
         have_e2d = False
         have_e2t = False
+        have_dedup = False
+        unsafe_dedup = []
         t = "volumes and permissions:\n"
         for zv in vfs.all_vols.values():
             if not self.warn_anonwrite or verbosity < 5:
@@ -2108,22 +2229,34 @@ class AuthSrv(object):
             if "e2t" in zv.flags:
                 have_e2t = True
 
+            if "dedup" in zv.flags:
+                have_dedup = True
+                if "e2d" not in zv.flags and "hardlink" not in zv.flags:
+                    unsafe_dedup.append("/" + zv.vpath)
+
             t += "\n"
 
         if self.warn_anonwrite and verbosity > 4:
             if not self.args.no_voldump:
                 self.log(t)
 
-        if have_e2d:
+        if have_e2d or self.args.idp_h_usr:
             t = self.chk_sqlite_threadsafe()
             if t:
                 self.log("\n\033[{}\033[0m\n".format(t))
 
+        if have_e2d:
             if not have_e2t:
-                t = "hint: argument -e2ts enables multimedia indexing (artist/title/...)"
+                t = "hint: enable multimedia indexing (artist/title/...) with argument -e2ts"
                 self.log(t, 6)
             else:
-                t = "hint: argument -e2dsa enables searching, upload-undo, and better deduplication"
+                t = "hint: enable searching and upload-undo with argument -e2dsa"
+                self.log(t, 6)
+
+        if unsafe_dedup:
+            t = "WARNING: symlink-based deduplication is enabled for some volumes, but without indexing. Please enable -e2dsa and/or --hardlink to avoid problems when moving/renaming files. Affected volumes: %s"
+            self.log(t % (", ".join(unsafe_dedup)), 3)
+        elif not have_dedup:
+            t = "hint: enable upload deduplication with --dedup (but see readme for consequences)"
             self.log(t, 6)
 
         zv, _ = vfs.get("/", "*", False, False)
@@ -2165,8 +2298,11 @@ class AuthSrv(object):
         self.grps = grps
         self.iacct = {v: k for k, v in acct.items()}
 
+        self.load_sessions()
+
         self.re_pwd = None
         pwds = [re.escape(x) for x in self.iacct.keys()]
+        pwds.extend(list(self.sesa))
         if pwds:
             if self.ah.on:
                 zs = r"(\[H\] pw:.*|[?&]pw=)([^&]+)"
@@ -2189,6 +2325,11 @@ class AuthSrv(object):
                 for x, y in vfs.all_vols.items()
                 if x != shr and not x.startswith(shrs)
             }
+            vfs.all_nodes = {
+                x: y
+                for x, y in vfs.all_nodes.items()
+                if x != shr and not x.startswith(shrs)
+            }
 
             assert db and cur and cur2 and shv  # type: ignore
             for row in cur.execute("select * from sh"):
@@ -2241,6 +2382,143 @@ class AuthSrv(object):
                 cur.close()
                 db.close()
 
+        self.js_ls = {}
+        self.js_htm = {}
+        for vn in self.vfs.all_nodes.values():
+            vf = vn.flags
+            vn.js_ls = {
+                "idx": "e2d" in vf,
+                "itag": "e2t" in vf,
+                "dnsort": "nsort" in vf,
+                "dhsortn": vf["hsortn"],
+                "dsort": vf["sort"],
+                "dcrop": vf["crop"],
+                "dth3x": vf["th3x"],
+                "u2ts": vf["u2ts"],
+                "frand": bool(vf.get("rand")),
+                "lifetime": vf.get("lifetime") or 0,
+                "unlist": vf.get("unlist") or "",
+                "sb_lg": "" if "no_sb_lg" in vf else (vf.get("lg_sbf") or "y"),
+            }
+            js_htm = {
+                "SPINNER": self.args.spinner,
+                "s_name": self.args.bname,
+                "have_up2k_idx": "e2d" in vf,
+                "have_acode": not self.args.no_acode,
+                "have_shr": self.args.shr,
+                "have_zip": not self.args.no_zip,
+                "have_mv": not self.args.no_mv,
+                "have_del": not self.args.no_del,
+                "have_unpost": int(self.args.unpost),
+                "have_emp": self.args.emp,
+                "ext_th": vf.get("ext_th_d") or {},
+                "sb_md": "" if "no_sb_md" in vf else (vf.get("md_sbf") or "y"),
+                "sba_md": vf.get("md_sba") or "",
+                "sba_lg": vf.get("lg_sba") or "",
+                "txt_ext": self.args.textfiles.replace(",", " "),
+                "def_hcols": list(vf.get("mth") or []),
+                "unlist0": vf.get("unlist") or "",
+                "dgrid": "grid" in vf,
+                "dgsel": "gsel" in vf,
+                "dnsort": "nsort" in vf,
+                "dhsortn": vf["hsortn"],
+                "dsort": vf["sort"],
+                "dcrop": vf["crop"],
+                "dth3x": vf["th3x"],
+                "dvol": self.args.au_vol,
+                "idxh": int(self.args.ih),
+                "themes": self.args.themes,
+                "turbolvl": self.args.turbo,
+                "u2j": self.args.u2j,
+                "u2sz": self.args.u2sz,
+                "u2ts": vf["u2ts"],
+                "u2ow": vf["u2ow"],
+                "frand": bool(vf.get("rand")),
+                "lifetime": vn.js_ls["lifetime"],
+                "u2sort": self.args.u2sort,
+            }
+            vn.js_htm = json.dumps(js_htm)
+
+        vols = list(vfs.all_nodes.values())
+        if enshare:
+            assert shv  # type: ignore  # !rm
+            vols.append(shv)
+            vols.extend(list(shv.nodes.values()))
+
+        for vol in vols:
+            dbv = vol.get_dbv("")[0]
+            vol.js_ls = vol.js_ls or dbv.js_ls or {}
+            vol.js_htm = vol.js_htm or dbv.js_htm or "{}"
+
+            zs = str(vol.flags.get("tcolor") or self.args.tcolor)
+            vol.flags["tcolor"] = zs.lstrip("#")
+
+    def load_sessions(self, quiet=False) -> None:
+        # mutex me
+        if self.args.no_ses:
+            self.ases = {}
+            self.sesa = {}
+            return
+
+        import sqlite3
+
+        ases = {}
+        blen = (self.args.ses_len // 4) * 4  # 3 bytes in 4 chars
+        blen = (blen * 3) // 4  # bytes needed for ses_len chars
+
+        db = sqlite3.connect(self.args.ses_db)
+        cur = db.cursor()
+
+        for uname, sid in cur.execute("select un, si from us"):
+            if uname in self.acct:
+                ases[uname] = sid
+
+        n = []
+        q = "insert into us values (?,?,?)"
+        for uname in self.acct:
+            if uname not in ases:
+                sid = ub64enc(os.urandom(blen)).decode("ascii")
+                cur.execute(q, (uname, sid, int(time.time())))
+                ases[uname] = sid
+                n.append(uname)
+
+        if n:
+            db.commit()
+
+        cur.close()
+        db.close()
+
+        self.ases = ases
+        self.sesa = {v: k for k, v in ases.items()}
+        if n and not quiet:
+            t = ", ".join(n[:3])
+            if len(n) > 3:
+                t += "..."
+            self.log("added %d new sessions (%s)" % (len(n), t))
+
+    def forget_session(self, broker: Optional["BrokerCli"], uname: str) -> None:
+        with self.mutex:
+            self._forget_session(uname)
+
+        if broker:
+            broker.ask("_reload_sessions").get()
+
+    def _forget_session(self, uname: str) -> None:
+        if self.args.no_ses:
+            return
+
+        import sqlite3
+
+        db = sqlite3.connect(self.args.ses_db)
+        cur = db.cursor()
+        cur.execute("delete from us where un = ?", (uname,))
+        db.commit()
+        cur.close()
+        db.close()
+
+        self.sesa.pop(self.ases.get(uname, ""), "")
+        self.ases.pop(uname, "")
+
     def chpw(self, broker: Optional["BrokerCli"], uname, pw) -> tuple[bool, str]:
         if not self.args.chpw:
             return False, "feature disabled in server config"
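The `blen` arithmetic in `load_sessions` above rounds the requested token length down to whole base64 groups (4 chars encode 3 bytes) so the generated session id is exactly that many characters with no `=` padding; a sketch with a hypothetical `new_session_id` helper, using stdlib `base64.urlsafe_b64encode` in place of `ub64enc` (an assumption):

```python
import base64
import os

def new_session_id(ses_len: int = 20) -> str:
    # base64 packs 3 bytes into 4 chars; round the requested length
    # down to a whole number of 4-char groups, then derive how many
    # random bytes produce exactly that many characters, unpadded
    nchars = (ses_len // 4) * 4
    nbytes = (nchars * 3) // 4
    return base64.urlsafe_b64encode(os.urandom(nbytes)).decode("ascii")
```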
@@ -2260,7 +2538,7 @@ class AuthSrv(object):
         if hpw == self.acct[uname]:
             return False, "that's already your password my dude"
 
-        if hpw in self.iacct:
+        if hpw in self.iacct or hpw in self.sesa:
             return False, "password is taken"
 
         with self.mutex:
@@ -2284,7 +2562,7 @@ class AuthSrv(object):
             self._reload()
             return True, "new password OK"
 
-        broker.ask("_reload_blocking", False, False).get()
+        broker.ask("reload", False, False).get()
         return True, "new password OK"
 
     def setup_chpw(self, acct: dict[str, str]) -> None:
@@ -2325,7 +2603,7 @@ class AuthSrv(object):
             return
 
         elif self.args.chpw_v == 2:
-            t = "chpw: %d changed" % (len(uok))
+            t = "chpw: %d changed" % (len(uok),)
             if urst:
                 t += ", \033[0munchanged:\033[35m %s" % (", ".join(list(urst)))
 
@@ -2483,7 +2761,7 @@ class AuthSrv(object):
             [],
             u,
             [[True, False]],
-            True,
+            1,
             not self.args.no_scandir,
             False,
             False,
@@ -2536,10 +2814,12 @@ class AuthSrv(object):
         ]
 
         csv = set("i p th_covers zm_on zm_off zs_on zs_off".split())
-        zs = "c ihead mtm mtp on403 on404 xad xar xau xiu xban xbd xbr xbu xm"
+        zs = "c ihead ohead mtm mtp on403 on404 xac xad xar xau xiu xban xbc xbd xbr xbu xm"
         lst = set(zs.split())
         askip = set("a v c vc cgen exp_lg exp_md theme".split())
-        fskip = set("exp_lg exp_md mv_re_r mv_re_t rm_re_r rm_re_t".split())
+        t = "exp_lg exp_md ext_th_d mv_re_r mv_re_t rm_re_r rm_re_t srch_re_dots srch_re_nodot"
+        fskip = set(t.split())
 
         # keymap from argv to vflag
         amap = vf_bmap()
@@ -2804,6 +3084,19 @@ def expand_config_file(
 
     ret.append("#\033[36m closed{}\033[0m".format(ipath))
 
+    zsl = []
+    for ln in ret:
+        zs = ln.split("  #")[0]
+        if " #" in zs and zs.split("#")[0].strip():
+            zsl.append(ln)
+    if zsl and "no-cfg-cmt-warn" not in "\n".join(ret):
+        t = "\033[33mWARNING: there is less than two spaces before the # in the following config lines, so instead of assuming that this is a comment, the whole line will become part of the config value:\n\n>>> %s\n\nif you are familiar with this and would like to mute this warning, specify the global-option no-cfg-cmt-warn\n\033[0m"
+        t = t % ("\n>>> ".join(zsl),)
+        if log:
+            log(t)
+        else:
+            print(t, file=sys.stderr)
+
 
 def upgrade_cfg_fmt(
     log: Optional["NamedLogger"], args: argparse.Namespace, orig: list[str], cfg_fp: str
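The comment-detection rule warned about in the config-expansion hunk (two spaces before `#` start a comment, a single space leaves the `#` as part of the value) can be sketched standalone; `ambiguous_comment_lines` is a hypothetical name, and the split on the two-space form is an assumption matching the warning text:

```python
def ambiguous_comment_lines(lines: list[str]) -> list[str]:
    # a "  #" (two spaces) starts a real comment, so strip that part
    # first; then flag lines where a lone " #" remains in the value,
    # ignoring lines that are nothing but a comment
    flagged = []
    for ln in lines:
        zs = ln.split("  #")[0]
        if " #" in zs and zs.split("#")[0].strip():
            flagged.append(ln)
    return flagged
```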
@@ -9,14 +9,14 @@ import queue
 
 from .__init__ import CORES, TYPE_CHECKING
 from .broker_mpw import MpWorker
-from .broker_util import ExceptionalQueue, try_exec
+from .broker_util import ExceptionalQueue, NotExQueue, try_exec
 from .util import Daemon, mp
 
 if TYPE_CHECKING:
     from .svchub import SvcHub
 
 if True:  # pylint: disable=using-constant-test
-    from typing import Any
+    from typing import Any, Union
 
 
 class MProcess(mp.Process):
@@ -43,6 +43,9 @@ class BrokerMp(object):
         self.procs = []
         self.mutex = threading.Lock()
 
+        self.retpend: dict[int, Any] = {}
+        self.retpend_mutex = threading.Lock()
+
         self.num_workers = self.args.j or CORES
         self.log("broker", "booting {} subprocesses".format(self.num_workers))
         for n in range(1, self.num_workers + 1):
@@ -54,6 +57,8 @@ class BrokerMp(object):
|
|||||||
self.procs.append(proc)
|
self.procs.append(proc)
|
||||||
proc.start()
|
proc.start()
|
||||||
|
|
||||||
|
Daemon(self.periodic, "mp-periodic")
|
||||||
|
|
||||||
def shutdown(self) -> None:
|
def shutdown(self) -> None:
|
||||||
self.log("broker", "shutting down")
|
self.log("broker", "shutting down")
|
||||||
for n, proc in enumerate(self.procs):
|
for n, proc in enumerate(self.procs):
|
||||||
@@ -76,6 +81,10 @@ class BrokerMp(object):
|
|||||||
for _, proc in enumerate(self.procs):
|
for _, proc in enumerate(self.procs):
|
||||||
proc.q_pend.put((0, "reload", []))
|
proc.q_pend.put((0, "reload", []))
|
||||||
|
|
||||||
|
def reload_sessions(self) -> None:
|
||||||
|
for _, proc in enumerate(self.procs):
|
||||||
|
proc.q_pend.put((0, "reload_sessions", []))
|
||||||
|
|
||||||
def collector(self, proc: MProcess) -> None:
|
def collector(self, proc: MProcess) -> None:
|
||||||
"""receive message from hub in other process"""
|
"""receive message from hub in other process"""
|
||||||
while True:
|
while True:
|
||||||
@@ -86,8 +95,10 @@ class BrokerMp(object):
|
|||||||
self.log(*args)
|
self.log(*args)
|
||||||
|
|
||||||
elif dest == "retq":
|
elif dest == "retq":
|
||||||
# response from previous ipc call
|
with self.retpend_mutex:
|
||||||
raise Exception("invalid broker_mp usage")
|
retq = self.retpend.pop(retq_id)
|
||||||
|
|
||||||
|
retq.put(args[0])
|
||||||
|
|
||||||
else:
|
else:
|
||||||
# new ipc invoking managed service in hub
|
# new ipc invoking managed service in hub
|
||||||
@@ -104,8 +115,7 @@ class BrokerMp(object):
|
|||||||
if retq_id:
|
if retq_id:
|
||||||
proc.q_pend.put((retq_id, "retq", rv))
|
proc.q_pend.put((retq_id, "retq", rv))
|
||||||
|
|
||||||
def ask(self, dest: str, *args: Any) -> ExceptionalQueue:
|
def ask(self, dest: str, *args: Any) -> Union[ExceptionalQueue, NotExQueue]:
|
||||||
|
|
||||||
# new non-ipc invoking managed service in hub
|
# new non-ipc invoking managed service in hub
|
||||||
obj = self.hub
|
obj = self.hub
|
||||||
for node in dest.split("."):
|
for node in dest.split("."):
|
||||||
@@ -117,17 +127,30 @@ class BrokerMp(object):
|
|||||||
retq.put(rv)
|
retq.put(rv)
|
||||||
return retq
|
return retq
|
||||||
|
|
||||||
|
def wask(self, dest: str, *args: Any) -> list[Union[ExceptionalQueue, NotExQueue]]:
|
||||||
|
# call from hub to workers
|
||||||
|
ret = []
|
||||||
|
for p in self.procs:
|
||||||
|
retq = ExceptionalQueue(1)
|
||||||
|
retq_id = id(retq)
|
||||||
|
with self.retpend_mutex:
|
||||||
|
self.retpend[retq_id] = retq
|
||||||
|
|
||||||
|
p.q_pend.put((retq_id, dest, list(args)))
|
||||||
|
ret.append(retq)
|
||||||
|
return ret
|
||||||
|
|
||||||
def say(self, dest: str, *args: Any) -> None:
|
def say(self, dest: str, *args: Any) -> None:
|
||||||
"""
|
"""
|
||||||
send message to non-hub component in other process,
|
send message to non-hub component in other process,
|
||||||
returns a Queue object which eventually contains the response if want_retval
|
returns a Queue object which eventually contains the response if want_retval
|
||||||
(not-impl here since nothing uses it yet)
|
(not-impl here since nothing uses it yet)
|
||||||
"""
|
"""
|
||||||
if dest == "listen":
|
if dest == "httpsrv.listen":
|
||||||
for p in self.procs:
|
for p in self.procs:
|
||||||
p.q_pend.put((0, dest, [args[0], len(self.procs)]))
|
p.q_pend.put((0, dest, [args[0], len(self.procs)]))
|
||||||
|
|
||||||
elif dest == "set_netdevs":
|
elif dest == "httpsrv.set_netdevs":
|
||||||
for p in self.procs:
|
for p in self.procs:
|
||||||
p.q_pend.put((0, dest, list(args)))
|
p.q_pend.put((0, dest, list(args)))
|
||||||
|
|
||||||
@@ -136,3 +159,19 @@ class BrokerMp(object):
|
|||||||
|
|
||||||
else:
|
else:
|
||||||
raise Exception("what is " + str(dest))
|
raise Exception("what is " + str(dest))
|
||||||
|
|
||||||
|
def periodic(self) -> None:
|
||||||
|
while True:
|
||||||
|
time.sleep(1)
|
||||||
|
|
||||||
|
tdli = {}
|
||||||
|
tdls = {}
|
||||||
|
qs = self.wask("httpsrv.read_dls")
|
||||||
|
for q in qs:
|
||||||
|
qr = q.get()
|
||||||
|
dli, dls = qr
|
||||||
|
tdli.update(dli)
|
||||||
|
tdls.update(dls)
|
||||||
|
tdl = (tdli, tdls)
|
||||||
|
for p in self.procs:
|
||||||
|
p.q_pend.put((0, "httpsrv.write_dls", tdl))
|
||||||
|
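The refactor above replaces hardcoded message names like `listen` with dotted destinations such as `httpsrv.listen`, resolved by walking the path with `getattr` and calling the result. A minimal standalone sketch of that dispatch pattern (the `Hub`/`HttpSrv` classes here are toy stand-ins, not copyparty's real ones):

```python
class HttpSrv:
    def read_dls(self):
        # toy stand-in; the real method returns download-state dicts
        return ({"job1": 1}, {"job1": "state"})

class Hub:
    def __init__(self):
        self.httpsrv = HttpSrv()

def dispatch(root, dest, *args):
    # same shape as the new BrokerMp.ask / MpWorker main loop:
    # walk the dotted path, then call whatever it resolves to
    obj = root
    for node in dest.split("."):
        obj = getattr(obj, node)
    return obj(*args)

print(dispatch(Hub(), "httpsrv.read_dls"))
```

This is why the broker no longer needs a branch per message type: any public method reachable from the hub (or worker) becomes addressable by its dotted name.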
diff --git a/copyparty/broker_mpw.py b/copyparty/broker_mpw.py
@@ -11,7 +11,7 @@ import queue

 from .__init__ import ANYWIN
 from .authsrv import AuthSrv
-from .broker_util import BrokerCli, ExceptionalQueue
+from .broker_util import BrokerCli, ExceptionalQueue, NotExQueue
 from .httpsrv import HttpSrv
 from .util import FAKE_MP, Daemon, HMaccas

@@ -82,35 +82,40 @@ class MpWorker(BrokerCli):
         while True:
             retq_id, dest, args = self.q_pend.get()

-            # self.logw("work: [{}]".format(d[0]))
+            if dest == "retq":
+                # response from previous ipc call
+                with self.retpend_mutex:
+                    retq = self.retpend.pop(retq_id)
+
+                retq.put(args)
+                continue
+
             if dest == "shutdown":
                 self.httpsrv.shutdown()
                 self.logw("ok bye")
                 sys.exit(0)
                 return

-            elif dest == "reload":
+            if dest == "reload":
                 self.logw("mpw.asrv reloading")
                 self.asrv.reload()
                 self.logw("mpw.asrv reloaded")
+                continue

-            elif dest == "listen":
-                self.httpsrv.listen(args[0], args[1])
+            if dest == "reload_sessions":
+                with self.asrv.mutex:
+                    self.asrv.load_sessions()
+                continue

-            elif dest == "set_netdevs":
-                self.httpsrv.set_netdevs(args[0])
-
-            elif dest == "retq":
-                # response from previous ipc call
-                with self.retpend_mutex:
-                    retq = self.retpend.pop(retq_id)
-
-                retq.put(args)
-
-            else:
-                raise Exception("what is " + str(dest))
-
-    def ask(self, dest: str, *args: Any) -> ExceptionalQueue:
+            obj = self
+            for node in dest.split("."):
+                obj = getattr(obj, node)
+
+            rv = obj(*args)  # type: ignore
+            if retq_id:
+                self.say("retq", rv, retq_id=retq_id)
+
+    def ask(self, dest: str, *args: Any) -> Union[ExceptionalQueue, NotExQueue]:
         retq = ExceptionalQueue(1)
         retq_id = id(retq)
         with self.retpend_mutex:
@@ -119,5 +124,5 @@ class MpWorker(BrokerCli):
         self.q_yield.put((retq_id, dest, list(args)))
         return retq

-    def say(self, dest: str, *args: Any) -> None:
-        self.q_yield.put((0, dest, list(args)))
+    def say(self, dest: str, *args: Any, retq_id=0) -> None:
+        self.q_yield.put((retq_id, dest, list(args)))
diff --git a/copyparty/broker_thr.py b/copyparty/broker_thr.py
@@ -5,7 +5,7 @@ import os
 import threading

 from .__init__ import TYPE_CHECKING
-from .broker_util import BrokerCli, ExceptionalQueue, try_exec
+from .broker_util import BrokerCli, ExceptionalQueue, NotExQueue
 from .httpsrv import HttpSrv
 from .util import HMaccas

@@ -13,7 +13,7 @@ if TYPE_CHECKING:
     from .svchub import SvcHub

 if True:  # pylint: disable=using-constant-test
-    from typing import Any
+    from typing import Any, Union


 class BrokerThr(BrokerCli):
@@ -34,6 +34,7 @@ class BrokerThr(BrokerCli):
         self.iphash = HMaccas(os.path.join(self.args.E.cfg, "iphash"), 8)
         self.httpsrv = HttpSrv(self, None)
         self.reload = self.noop
+        self.reload_sessions = self.noop

     def shutdown(self) -> None:
         # self.log("broker", "shutting down")
@@ -42,26 +43,21 @@ class BrokerThr(BrokerCli):
     def noop(self) -> None:
         pass

-    def ask(self, dest: str, *args: Any) -> ExceptionalQueue:
+    def ask(self, dest: str, *args: Any) -> Union[ExceptionalQueue, NotExQueue]:

         # new ipc invoking managed service in hub
         obj = self.hub
         for node in dest.split("."):
             obj = getattr(obj, node)

-        rv = try_exec(True, obj, *args)
-
-        # pretend we're broker_mp
-        retq = ExceptionalQueue(1)
-        retq.put(rv)
-        return retq
+        return NotExQueue(obj(*args))  # type: ignore

     def say(self, dest: str, *args: Any) -> None:
-        if dest == "listen":
+        if dest == "httpsrv.listen":
             self.httpsrv.listen(args[0], 1)
             return

-        if dest == "set_netdevs":
+        if dest == "httpsrv.set_netdevs":
             self.httpsrv.set_netdevs(args[0])
             return

@@ -70,4 +66,4 @@ class BrokerThr(BrokerCli):
         for node in dest.split("."):
             obj = getattr(obj, node)

-        try_exec(False, obj, *args)
+        obj(*args)  # type: ignore
diff --git a/copyparty/broker_util.py b/copyparty/broker_util.py
@@ -33,6 +33,18 @@ class ExceptionalQueue(Queue, object):
         return rv


+class NotExQueue(object):
+    """
+    BrokerThr uses this instead of ExceptionalQueue; 7x faster
+    """
+
+    def __init__(self, rv: Any) -> None:
+        self.rv = rv
+
+    def get(self) -> Any:
+        return self.rv
+
+
 class BrokerCli(object):
     """
     helps mypy understand httpsrv.broker but still fails a few levels deeper,
@@ -48,7 +60,7 @@ class BrokerCli(object):
     def __init__(self) -> None:
         pass

-    def ask(self, dest: str, *args: Any) -> ExceptionalQueue:
+    def ask(self, dest: str, *args: Any) -> Union[ExceptionalQueue, NotExQueue]:
         return ExceptionalQueue(1)

     def say(self, dest: str, *args: Any) -> None:
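`NotExQueue` keeps the `.get()` interface of `ExceptionalQueue` while skipping the queue machinery entirely: in the single-process `BrokerThr`, the answer is already computed by the time `ask` returns, so bouncing it through a locking `Queue` is pure overhead. A simplified comparison of the two shapes (this is not the real `ExceptionalQueue`, which also re-raises worker exceptions):

```python
from queue import Queue

class NotExQueue:
    """wraps an already-computed result; get() is a plain attribute read"""
    def __init__(self, rv):
        self.rv = rv

    def get(self):
        return self.rv

def ask_via_queue(rv):
    # old BrokerThr.ask: "pretend we're broker_mp" by pushing
    # the value through a real (lock-protected) Queue
    q = Queue(1)
    q.put(rv)
    return q

def ask_direct(rv):
    # new BrokerThr.ask: same interface for callers, no locking
    return NotExQueue(rv)

assert ask_via_queue(42).get() == ask_direct(42).get() == 42
```

Callers stay oblivious: they call `.get()` either way, which is why `ask` can now return `Union[ExceptionalQueue, NotExQueue]`.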
diff --git a/copyparty/cert.py b/copyparty/cert.py
@@ -7,7 +7,7 @@ import shutil
 import time

 from .__init__ import ANYWIN
-from .util import Netdev, runcmd, wrename, wunlink
+from .util import Netdev, load_resource, runcmd, wrename, wunlink

 HAVE_CFSSL = not os.environ.get("PRTY_NO_CFSSL")

@@ -29,13 +29,15 @@ def ensure_cert(log: "RootLogger", args) -> None:
     i feel awful about this and so should they
     """
-    cert_insec = os.path.join(args.E.mod, "res/insecure.pem")
+    with load_resource(args.E, "res/insecure.pem") as f:
+        cert_insec = f.read()
     cert_appdata = os.path.join(args.E.cfg, "cert.pem")
     if not os.path.isfile(args.cert):
         if cert_appdata != args.cert:
             raise Exception("certificate file does not exist: " + args.cert)

-        shutil.copy(cert_insec, args.cert)
+        with open(args.cert, "wb") as f:
+            f.write(cert_insec)

     with open(args.cert, "rb") as f:
         buf = f.read()
@@ -50,7 +52,9 @@ def ensure_cert(log: "RootLogger", args) -> None:
             raise Exception(m + "private key must appear before server certificate")

     try:
-        if filecmp.cmp(args.cert, cert_insec):
+        with open(args.cert, "rb") as f:
+            active_cert = f.read()
+        if active_cert == cert_insec:
             t = "using default TLS certificate; https will be insecure:\033[36m {}"
             log("cert", t.format(args.cert), 3)
     except:
@@ -151,14 +155,22 @@ def _gen_srv(log: "RootLogger", args, netdevs: dict[str, Netdev]):
             raise Exception("no useable cert found")

         expired = time.time() + args.crt_sdays * 60 * 60 * 24 * 0.5 > expiry
-        cert_insec = os.path.join(args.E.mod, "res/insecure.pem")
+        if expired:
+            raise Exception("old server-cert has expired")

         for n in names:
             if n not in inf["sans"]:
                 raise Exception("does not have {}".format(n))
-        if expired:
-            raise Exception("old server-cert has expired")
-        if not filecmp.cmp(args.cert, cert_insec):
+
+        with load_resource(args.E, "res/insecure.pem") as f:
+            cert_insec = f.read()
+
+        with open(args.cert, "rb") as f:
+            active_cert = f.read()
+
+        if active_cert and active_cert != cert_insec:
             return

     except Exception as ex:
         log("cert", "will create new server-cert; {}".format(ex))
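The cert changes above swap `filecmp.cmp` (which needs two on-disk paths) for an in-memory byte comparison, so the bundled `res/insecure.pem` can come from `load_resource` even when it is packed inside an archive rather than sitting on disk. The comparison itself reduces to something like this sketch (`is_default_cert` is a hypothetical helper and the PEM bytes are toy data):

```python
import os
import tempfile

def is_default_cert(cert_path, bundled_pem):
    # byte-compare the active certificate against the bundled insecure one
    with open(cert_path, "rb") as f:
        return f.read() == bundled_pem

pem = b"-----BEGIN CERTIFICATE-----\nAAAA\n-----END CERTIFICATE-----\n"
with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(pem)
    path = tf.name

print(is_default_cert(path, pem))       # True
print(is_default_cert(path, b"other"))  # False
os.unlink(path)
```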
diff --git a/copyparty/cfg.py b/copyparty/cfg.py
@@ -2,9 +2,12 @@
 from __future__ import print_function, unicode_literals

 # awk -F\" '/add_argument\("-[^-]/{print(substr($2,2))}' copyparty/__main__.py | sort | tr '\n' ' '
-zs = "a c e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vp e2vu ed emp i j lo mcr mte mth mtm mtp nb nc nid nih nw p q s ss sss v z zv"
+zs = "a c e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vp e2vu ed emp i j lo mcr mte mth mtm mtp nb nc nid nih nth nw p q s ss sss v z zv"
 onedash = set(zs.split())

+# verify that all volflags are documented here:
+# grep volflag= __main__.py | sed -r 's/.*volflag=//;s/\).*//' | sort | uniq | while IFS= read -r x; do grep -E "\"$x(=[^ \"]+)?\": \"" cfg.py || printf '%s\n' "$x"; done
+

 def vf_bmap() -> dict[str, str]:
     """argv-to-volflag: simple bools"""
@@ -12,8 +15,9 @@ def vf_bmap() -> dict[str, str]:
         "dav_auth": "davauth",
         "dav_rt": "davrt",
         "ed": "dots",
-        "never_symlink": "neversymlink",
-        "no_dedup": "copydupes",
+        "hardlink_only": "hardlinkonly",
+        "no_clone": "noclone",
+        "no_dirsz": "nodirsz",
         "no_dupe": "nodupe",
         "no_forget": "noforget",
         "no_pipe": "nopipe",
@@ -23,6 +27,7 @@ def vf_bmap() -> dict[str, str]:
         "no_athumb": "dathumb",
     }
     for k in (
+        "dedup",
         "dotsrch",
         "e2d",
         "e2ds",
@@ -38,12 +43,15 @@ def vf_bmap() -> dict[str, str]:
         "gsel",
         "hardlink",
         "magic",
+        "no_db_ip",
         "no_sb_md",
         "no_sb_lg",
+        "nsort",
         "og",
         "og_no_head",
         "og_s_title",
         "rand",
+        "rss",
         "xdev",
         "xlink",
         "xvol",
@@ -58,6 +66,7 @@ def vf_vmap() -> dict[str, str]:
         "no_hash": "nohash",
         "no_idx": "noidx",
         "re_maxage": "scan",
+        "safe_dedup": "safededup",
         "th_convt": "convt",
         "th_size": "thsize",
         "th_crop": "crop",
@@ -65,10 +74,15 @@ def vf_vmap() -> dict[str, str]:
     }
     for k in (
         "dbd",
+        "forget_ip",
+        "hsortn",
         "html_head",
         "lg_sbf",
         "md_sbf",
+        "lg_sba",
+        "md_sba",
         "nrand",
+        "u2ow",
         "og_desc",
         "og_site",
         "og_th",
@@ -85,6 +99,8 @@ def vf_vmap() -> dict[str, str]:
         "unlist",
         "u2abort",
         "u2ts",
+        "ups_who",
+        "zip_who",
     ):
         ret[k] = k
     return ret
@@ -96,13 +112,16 @@ def vf_cmap() -> dict[str, str]:
     for k in (
         "exp_lg",
         "exp_md",
+        "ext_th",
         "mte",
         "mth",
         "mtp",
+        "xac",
         "xad",
         "xar",
         "xau",
         "xban",
+        "xbc",
         "xbd",
         "xbr",
         "xbu",
@@ -129,15 +148,19 @@ permdescs = {

 flagcats = {
     "uploads, general": {
-        "nodupe": "rejects existing files (instead of symlinking them)",
-        "hardlink": "does dedup with hardlinks instead of symlinks",
-        "neversymlink": "disables symlink fallback; full copy instead",
-        "copydupes": "disables dedup, always saves full copies of dupes",
+        "dedup": "enable symlink-based file deduplication",
+        "hardlink": "enable hardlink-based file deduplication,\nwith fallback on symlinks when that is impossible",
+        "hardlinkonly": "dedup with hardlink only, never symlink;\nmake a full copy if hardlink is impossible",
+        "safededup": "verify on-disk data before using it for dedup",
+        "noclone": "take dupe data from clients, even if available on HDD",
+        "nodupe": "rejects existing files (instead of linking/cloning them)",
         "sparse": "force use of sparse files, mainly for s3-backed storage",
+        "nosparse": "deny use of sparse files, mainly for slow storage",
         "daw": "enable full WebDAV write support (dangerous);\nPUT-operations will now \033[1;31mOVERWRITE\033[0;35m existing files",
         "nosub": "forces all uploads into the top folder of the vfs",
         "magic": "enables filetype detection for nameless uploads",
-        "gz": "allows server-side gzip of uploads with ?gz (also c,xz)",
+        "gz": "allows server-side gzip compression of uploads with ?gz",
+        "xz": "allows server-side lzma compression of uploads with ?xz",
         "pk": "forces server-side compression, optional arg: xz,9",
     },
     "upload rules": {
@@ -148,6 +171,7 @@ flagcats = {
         "medialinks": "return medialinks for non-up2k uploads (not hotlinks)",
         "rand": "force randomized filenames, 9 chars long by default",
         "nrand=N": "randomized filenames are N chars long",
+        "u2ow=N": "overwrite existing files? 0=no 1=if-older 2=always",
         "u2ts=fc": "[f]orce [c]lient-last-modified or [u]pload-time",
         "u2abort=1": "allow aborting unfinished uploads? 0=no 1=strict 2=ip-chk 3=acct-chk",
         "sz=1k-3m": "allow filesizes between 1 KiB and 3MiB",
@@ -159,13 +183,16 @@ flagcats = {
         "lifetime=3600": "uploads are deleted after 1 hour",
     },
     "database, general": {
-        "e2d": "enable database; makes files searchable + enables upload dedup",
+        "e2d": "enable database; makes files searchable + enables upload-undo",
         "e2ds": "scan writable folders for new files on startup; also sets -e2d",
         "e2dsa": "scans all folders for new files on startup; also sets -e2d",
         "e2t": "enable multimedia indexing; makes it possible to search for tags",
         "e2ts": "scan existing files for tags on startup; also sets -e2t",
-        "e2tsa": "delete all metadata from DB (full rescan); also sets -e2ts",
+        "e2tsr": "delete all metadata from DB (full rescan); also sets -e2ts",
         "d2ts": "disables metadata collection for existing files",
+        "e2v": "verify integrity on startup by hashing files and comparing to db",
+        "e2vu": "when e2v fails, update the db (assume on-disk files are good)",
+        "e2vp": "when e2v fails, panic and quit copyparty",
         "d2ds": "disables onboot indexing, overrides -e2ds*",
         "d2t": "disables metadata collection, overrides -e2t*",
         "d2v": "disables file verification, overrides -e2v*",
@@ -175,15 +202,20 @@ flagcats = {
         "nohash=\\.iso$": "skips hashing file contents if path matches *.iso",
         "noidx=\\.iso$": "fully ignores the contents at paths matching *.iso",
         "noforget": "don't forget files when deleted from disk",
+        "forget_ip=43200": "forget uploader-IP after 30 days (GDPR)",
+        "no_db_ip": "never store uploader-IP in the db; disables unpost",
         "fat32": "avoid excessive reindexing on android sdcardfs",
         "dbd=[acid|swal|wal|yolo]": "database speed-durability tradeoff",
-        "xlink": "cross-volume dupe detection / linking",
+        "xlink": "cross-volume dupe detection / linking (dangerous)",
         "xdev": "do not descend into other filesystems",
         "xvol": "do not follow symlinks leaving the volume root",
         "dotsrch": "show dotfiles in search results",
         "nodotsrch": "hide dotfiles in search results (default)",
+        "srch_excl": "exclude search results with URL matching this regex",
     },
     'database, audio tags\n"mte", "mth", "mtp", "mtm" all work the same as -mte, -mth, ...': {
+        "mte=artist,title": "media-tags to index/display",
+        "mth=fmt,res,ac": "media-tags to hide by default",
         "mtp=.bpm=f,audio-bpm.py": 'uses the "audio-bpm.py" program to\ngenerate ".bpm" tags from uploads (f = overwrite tags)',
         "mtp=ahash,vhash=media-hash.py": "collects two tags at once",
     },
@@ -197,6 +229,7 @@ flagcats = {
         "crop": "center-cropping (y/n/fy/fn)",
         "th3x": "3x resolution (y/n/fy/fn)",
         "convt": "conversion timeout in seconds",
+        "ext_th=s=/b.png": "use /b.png as thumbnail for file-extension s",
     },
     "handlers\n(better explained in --help-handlers)": {
         "on404=PY": "handle 404s by executing PY file",
@@ -206,6 +239,8 @@ flagcats = {
         "xbu=CMD": "execute CMD before a file upload starts",
         "xau=CMD": "execute CMD after a file upload finishes",
         "xiu=CMD": "execute CMD after all uploads finish and volume is idle",
+        "xbc=CMD": "execute CMD before a file copy",
+        "xac=CMD": "execute CMD after a file copy",
         "xbr=CMD": "execute CMD before a file rename/move",
         "xar=CMD": "execute CMD after a file rename/move",
         "xbd=CMD": "execute CMD before a file delete",
@@ -217,8 +252,12 @@ flagcats = {
         "grid": "show grid/thumbnails by default",
         "gsel": "select files in grid by ctrl-click",
         "sort": "default sort order",
+        "nsort": "natural-sort of leading digits in filenames",
+        "hsortn": "number of sort-rules to add to media URLs",
         "unlist": "dont list files matching REGEX",
         "html_head=TXT": "includes TXT in the <head>, or @PATH for file at PATH",
+        "tcolor=#fc0": "theme color (a hint for webbrowsers, discord, etc.)",
+        "nodirsz": "don't show total folder size",
         "robots": "allows indexing by search engines (default)",
         "norobots": "kindly asks search engines to leave",
         "no_sb_md": "disable js sandbox for markdown files",
@@ -227,12 +266,37 @@ flagcats = {
         "sb_lg": "enable js sandbox for prologue/epilogue (default)",
         "md_sbf": "list of markdown-sandbox safeguards to disable",
         "lg_sbf": "list of *logue-sandbox safeguards to disable",
+        "md_sba": "value of iframe allow-prop for markdown-sandbox",
+        "lg_sba": "value of iframe allow-prop for *logue-sandbox",
         "nohtml": "return html and markdown as text/html",
     },
+    "opengraph (discord embeds)": {
+        "og": "enable OG (disables hotlinking)",
+        "og_site": "sitename; defaults to --name, disable with '-'",
+        "og_desc": "description text for all files; disable with '-'",
+        "og_th=jf": "thumbnail format; j / jf / jf3 / w / w3 / ...",
+        "og_title_a": "audio title format; default: {{ artist }} - {{ title }}",
+        "og_title_v": "video title format; default: {{ title }}",
+        "og_title_i": "image title format; default: {{ title }}",
+        "og_title=foo": "fallback title if there's nothing in the db",
+        "og_s_title": "force default title; do not read from tags",
+        "og_tpl": "custom html; see --og-tpl in --help",
+        "og_no_head": "you want to add tags manually with og_tpl",
+        "og_ua": "if defined: only send OG html if useragent matches this regex",
+    },
+    "textfiles": {
+        "exp": "enable textfile expansion; see --help-exp",
+        "exp_md": "placeholders to expand in markdown files; see --help",
+        "exp_lg": "placeholders to expand in prologue/epilogue; see --help",
+    },
     "others": {
         "dots": "allow all users with read-access to\nenable the option to show dotfiles in listings",
         "fk=8": 'generates per-file accesskeys,\nwhich are then required at the "g" permission;\nkeys are invalidated if filesize or inode changes',
         "fka=8": 'generates slightly weaker per-file accesskeys,\nwhich are then required at the "g" permission;\nnot affected by filesize or inode numbers',
+        "rss": "allow '?rss' URL suffix (experimental)",
+        "ups_who=2": "restrict viewing the list of recent uploads",
+        "zip_who=2": "restrict access to download-as-zip/tar",
+        "nopipe": "disable race-the-beam (download unfinished uploads)",
         "mv_retry": "ms-windows: timeout for renaming busy files",
         "rm_retry": "ms-windows: timeout for deleting busy files",
         "davauth": "ask webdav clients to login for all folders",
@@ -242,3 +306,10 @@ flagcats = {


 flagdescs = {k.split("=")[0]: v for tab in flagcats.values() for k, v in tab.items()}
+
+
+if True:  # so it gets removed in release-builds
+    for fun in [vf_bmap, vf_cmap, vf_vmap]:
+        for k in fun().values():
+            if k not in flagdescs:
+                raise Exception("undocumented volflag: " + k)
|||||||
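The new release-build check relies on `flagdescs` indexing every volflag by its bare name; a minimal sketch of that normalization, using made-up sample flags (the real `flagcats` has many categories):

```python
# made-up sample entries; real flagcats spans many categories
flagcats = {
    "others": {
        "fk=8": "per-file accesskeys",
        "dots": "show dotfiles",
    },
}

# same normalization as above: "fk=8" is indexed as just "fk",
# so a volflag's default value never affects the doc lookup
flagdescs = {k.split("=")[0]: v for tab in flagcats.values() for k, v in tab.items()}
```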
@@ -1,3 +1,6 @@
 # coding: utf-8
 from __future__ import print_function, unicode_literals
+
+import importlib
+import sys
 import xml.etree.ElementTree as ET
@@ -8,6 +11,10 @@ if True:  # pylint: disable=using-constant-test
     from typing import Any, Optional


+class BadXML(Exception):
+    pass
+
+
 def get_ET() -> ET.XMLParser:
     pn = "xml.etree.ElementTree"
     cn = "_elementtree"
@@ -34,7 +41,7 @@ def get_ET() -> ET.XMLParser:
 XMLParser: ET.XMLParser = get_ET()


-class DXMLParser(XMLParser):  # type: ignore
+class _DXMLParser(XMLParser):  # type: ignore
     def __init__(self) -> None:
         tb = ET.TreeBuilder()
         super(DXMLParser, self).__init__(target=tb)
@@ -49,8 +56,12 @@ class DXMLParser(XMLParser):  # type: ignore
         raise BadXML("{}, {}".format(a, ka))


-class BadXML(Exception):
-    pass
+class _NG(XMLParser):  # type: ignore
+    def __int__(self) -> None:
+        raise BadXML("dxml selftest failed")
+
+
+DXMLParser = _DXMLParser


 def parse_xml(txt: str) -> ET.Element:
@@ -59,6 +70,40 @@ def parse_xml(txt: str) -> ET.Element:
     return parser.close()  # type: ignore


+def selftest() -> bool:
+    qbe = r"""<!DOCTYPE d [
+<!ENTITY a "nice_bakuretsu">
+]>
+<root>&a;&a;&a;</root>"""
+
+    emb = r"""<!DOCTYPE d [
+<!ENTITY a SYSTEM "file:///etc/hostname">
+]>
+<root>&a;</root>"""
+
+    # future-proofing; there's never been any known vulns
+    # regarding DTDs and ET.XMLParser, but might as well
+    # block them since webdav-clients don't use them
+    dtd = r"""<!DOCTYPE d SYSTEM "a.dtd">
+<root>a</root>"""
+
+    for txt in (qbe, emb, dtd):
+        try:
+            parse_xml(txt)
+            t = "WARNING: dxml selftest failed:\n%s\n"
+            print(t % (txt,), file=sys.stderr)
+            return False
+        except BadXML:
+            pass
+
+    return True
+
+
+DXML_OK = selftest()
+if not DXML_OK:
+    DXMLParser = _NG


 def mktnod(name: str, text: str) -> ET.Element:
     el = ET.Element(name)
     el.text = text
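The selftest above exercises a parser that refuses DTDs and entity declarations. A standalone sketch of the same idea, attaching refusal handlers directly to an expat parser (copyparty instead swaps in the pure-python ElementTree so the handlers can be attached to its `XMLParser`); the payload mirrors the `qbe` sample:

```python
import xml.parsers.expat


class BadXML(Exception):
    pass


def parse_strict(txt: str) -> None:
    # refuse doctypes/entity declarations before any expansion
    # happens (billion-laughs / local-file disclosure payloads)
    def bad(*a):
        raise BadXML("refusing doctype/entity")

    p = xml.parsers.expat.ParserCreate()
    p.StartDoctypeDeclHandler = bad
    p.EntityDeclHandler = bad
    p.Parse(txt, True)


parse_strict("<root>a</root>")  # plain xml parses fine

qbe = '<!DOCTYPE d [ <!ENTITY a "nice_bakuretsu"> ]><root>&a;&a;&a;</root>'
try:
    parse_strict(qbe)
    ok = False
except BadXML:
    ok = True
```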
@@ -42,14 +42,14 @@ class Fstab(object):
         self.cache = {}

         fs = "ext4"
-        msg = "failed to determine filesystem at [{}]; assuming {}\n{}"
+        msg = "failed to determine filesystem at %r; assuming %s\n%s"

         if ANYWIN:
             fs = "vfat"
             try:
                 path = self._winpath(path)
             except:
-                self.log(msg.format(path, fs, min_ex()), 3)
+                self.log(msg % (path, fs, min_ex()), 3)
                 return fs

         path = undot(path)
@@ -61,11 +61,11 @@ class Fstab(object):
         try:
             fs = self.get_w32(path) if ANYWIN else self.get_unix(path)
         except:
-            self.log(msg.format(path, fs, min_ex()), 3)
+            self.log(msg % (path, fs, min_ex()), 3)

         fs = fs.lower()
         self.cache[path] = fs
-        self.log("found {} at {}".format(fs, path))
+        self.log("found %s at %r" % (fs, path))
         return fs

     def _winpath(self, path: str) -> str:
@@ -119,7 +119,7 @@ class Fstab(object):
         self.srctab = srctab

     def relabel(self, path: str, nval: str) -> None:
-        assert self.tab
+        assert self.tab  # !rm
         self.cache = {}
         if ANYWIN:
             path = self._winpath(path)
@@ -156,7 +156,7 @@ class Fstab(object):
             self.log("failed to build tab:\n{}".format(min_ex()), 3)
             self.build_fallback()

-        assert self.tab
+        assert self.tab  # !rm
         ret = self.tab._find(path)[0]
         if self.trusted or path == ret.vpath:
             return ret.realpath.split("/")[0]
@@ -167,6 +167,6 @@ class Fstab(object):
         if not self.tab:
             self.build_fallback()

-        assert self.tab
+        assert self.tab  # !rm
         ret = self.tab._find(path)[0]
         return ret.realpath
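Several hunks in this range replace `"[{}]".format(path)` with `"%r" % (path,)`; the gain is that `%r` repr-quotes the value, so trailing newlines or stray spaces in a path stay visible in the log line. A small illustration:

```python
# why the log strings moved from "[{}]" to "%r":
path = "c:/mu sic\n"
old = "failed to determine filesystem at [{}]".format(path)
new = "failed to determine filesystem at %r" % (path,)
# old embeds the raw newline and mangles the logline;
# new renders it as a literal \n inside quotes
```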
@@ -76,6 +76,7 @@ class FtpAuth(DummyAuthorizer):
         else:
             raise AuthenticationFailed("banned")

+        args = self.hub.args
         asrv = self.hub.asrv
         uname = "*"
         if username != "anonymous":
@@ -86,6 +87,9 @@ class FtpAuth(DummyAuthorizer):
                     uname = zs
                     break

+        if args.ipu and uname == "*":
+            uname = args.ipu_iu[args.ipu_nm.map(ip)]
+
         if not uname or not (asrv.vfs.aread.get(uname) or asrv.vfs.awrite.get(uname)):
             g = self.hub.gpwd
             if g.lim:
@@ -163,7 +167,7 @@ class FtpFs(AbstractedFS):
             t = "Unsupported characters in [{}]"
             raise FSE(t.format(vpath), 1)

-        fn = sanitize_fn(fn or "", "", [".prologue.html", ".epilogue.html"])
+        fn = sanitize_fn(fn or "", "")
         vpath = vjoin(rd, fn)
         vfs, rem = self.hub.asrv.vfs.get(vpath, self.uname, r, w, m, d)
         if not vfs.realpath:
@@ -292,6 +296,7 @@ class FtpFs(AbstractedFS):
             self.uname,
             not self.args.no_scandir,
             [[True, False], [False, True]],
+            throw=True,
         )
         vfs_ls = [x[0] for x in vfs_ls1]
         vfs_ls.extend(vfs_virt.keys())
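The new `args.ipu` fallback maps an anonymous client's IP to a username through a netmask table. A hedged stand-in for that lookup using stdlib `ipaddress` (the names and the most-specific-subnet rule here are illustrative, not copyparty's actual `NetMap` API):

```python
import ipaddress

# hypothetical stand-in for the ipu_iu / ipu_nm lookup above:
# map a client ip to a username by most-specific subnet match
ipu = {"10.1.2.0/24": "ed", "0.0.0.0/0": "*"}
nets = sorted(
    ((ipaddress.ip_network(k), v) for k, v in ipu.items()),
    key=lambda kv: kv[0].prefixlen,
    reverse=True,  # longest prefix (most specific) wins
)


def map_ip_to_user(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    for net, uname in nets:
        if addr in net:
            return uname
    return "*"
```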
copyparty/httpcli.py: 1904 changed lines (file diff suppressed because it is too large)
@@ -59,6 +59,8 @@ class HttpConn(object):
         self.asrv: AuthSrv = hsrv.asrv  # mypy404
         self.u2fh: Util.FHC = hsrv.u2fh  # mypy404
         self.pipes: Util.CachedDict = hsrv.pipes  # mypy404
+        self.ipu_iu: Optional[dict[str, str]] = hsrv.ipu_iu
+        self.ipu_nm: Optional[NetMap] = hsrv.ipu_nm
         self.ipa_nm: Optional[NetMap] = hsrv.ipa_nm
         self.xff_nm: Optional[NetMap] = hsrv.xff_nm
         self.xff_lan: NetMap = hsrv.xff_lan  # type: ignore
@@ -103,9 +105,6 @@ class HttpConn(object):
             self.log_src = ("%s \033[%dm%d" % (ip, color, self.addr[1])).ljust(26)
         return self.log_src

-    def respath(self, res_name: str) -> str:
-        return os.path.join(self.E.mod, "web", res_name)
-
     def log(self, msg: str, c: Union[int, str] = 0) -> None:
         self.log_func(self.log_src, msg, c)

@@ -165,6 +164,7 @@ class HttpConn(object):

         self.log_src = self.log_src.replace("[36m", "[35m")
         try:
+            assert ssl  # type: ignore  # !rm
             ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
             ctx.load_cert_chain(self.args.cert)
             if self.args.ssl_ver:
@@ -190,7 +190,7 @@ class HttpConn(object):

         if self.args.ssl_dbg and hasattr(self.s, "shared_ciphers"):
             ciphers = self.s.shared_ciphers()
-            assert ciphers
+            assert ciphers  # !rm
             overlap = [str(y[::-1]) for y in ciphers]
             self.log("TLS cipher overlap:" + "\n".join(overlap))
             for k, v in [
@@ -1,7 +1,7 @@
 # coding: utf-8
 from __future__ import print_function, unicode_literals

-import base64
+import hashlib
 import math
 import os
 import re
@@ -67,17 +67,21 @@ from .util import (
     Magician,
     Netdev,
     NetMap,
-    absreal,
     build_netmap,
+    has_resource,
     ipnorm,
+    load_ipu,
+    load_resource,
     min_ex,
     shut_socket,
     spack,
     start_log_thrs,
     start_stackmon,
+    ub64enc,
 )

 if TYPE_CHECKING:
+    from .authsrv import VFS
     from .broker_util import BrokerCli
     from .ssdp import SSDPr

@@ -91,6 +95,11 @@ if not hasattr(socket, "AF_UNIX"):
     setattr(socket, "AF_UNIX", -9001)


+def load_jinja2_resource(E: EnvParams, name: str):
+    with load_resource(E, "web/" + name, "r") as f:
+        return f.read()
+
+
 class HttpSrv(object):
     """
     handles incoming connections using HttpConn to process http,
@@ -122,6 +131,12 @@ class HttpSrv(object):
         self.bans: dict[str, int] = {}
         self.aclose: dict[str, int] = {}

+        dli: dict[str, tuple[float, int, "VFS", str, str]] = {}  # info
+        dls: dict[str, tuple[float, int]] = {}  # state
+        self.dli = self.tdli = dli
+        self.dls = self.tdls = dls
+        self.iiam = '<img src="%s.cpr/iiam.gif?cache=i" />' % (self.args.SRS,)
+
         self.bound: set[tuple[str, int]] = set()
         self.name = "hsrv" + nsuf
         self.mutex = threading.Lock()
@@ -137,6 +152,7 @@ class HttpSrv(object):
         self.t_periodic: Optional[threading.Thread] = None

         self.u2fh = FHC()
+        self.u2sc: dict[str, tuple[int, "hashlib._Hash"]] = {}
         self.pipes = CachedDict(0.2)
         self.metrics = Metrics(self)
         self.nreq = 0
@@ -152,33 +168,33 @@ class HttpSrv(object):
         self.u2idx_free: dict[str, U2idx] = {}
         self.u2idx_n = 0

+        assert jinja2  # type: ignore  # !rm
         env = jinja2.Environment()
-        env.loader = jinja2.FileSystemLoader(os.path.join(self.E.mod, "web"))
+        env.loader = jinja2.FunctionLoader(lambda f: load_jinja2_resource(self.E, f))
         jn = [
-            "splash",
-            "shares",
-            "svcs",
             "browser",
             "browser2",
-            "msg",
+            "cf",
             "md",
             "mde",
-            "cf",
+            "msg",
+            "rups",
+            "shares",
+            "splash",
+            "svcs",
         ]
         self.j2 = {x: env.get_template(x + ".html") for x in jn}
-        zs = os.path.join(self.E.mod, "web", "deps", "prism.js.gz")
-        self.prism = os.path.exists(zs)
+        self.prism = has_resource(self.E, "web/deps/prism.js.gz")

+        if self.args.ipu:
+            self.ipu_iu, self.ipu_nm = load_ipu(self.log, self.args.ipu)
+        else:
+            self.ipu_iu = self.ipu_nm = None

         self.ipa_nm = build_netmap(self.args.ipa)
         self.xff_nm = build_netmap(self.args.xff_src)
         self.xff_lan = build_netmap("lan")

-        self.statics: set[str] = set()
-        self._build_statics()
-
-        self.ptn_cc = re.compile(r"[\x00-\x1f]")
-        self.ptn_hsafe = re.compile(r"[\x00-\x1f<>\"'&]")

         self.mallow = "GET HEAD POST PUT DELETE OPTIONS".split()
         if not self.args.no_dav:
             zs = "PROPFIND PROPPATCH LOCK UNLOCK MKCOL COPY MOVE"
@@ -193,6 +209,9 @@ class HttpSrv(object):
             self.start_threads(4)

         if nid:
+            self.tdli = {}
+            self.tdls = {}
+
             if self.args.stackmon:
                 start_stackmon(self.args.stackmon, nid)

@@ -209,14 +228,6 @@ class HttpSrv(object):
             except:
                 pass

-    def _build_statics(self) -> None:
-        for dp, _, df in os.walk(os.path.join(self.E.mod, "web")):
-            for fn in df:
-                ap = absreal(os.path.join(dp, fn))
-                self.statics.add(ap)
-                if ap.endswith(".gz"):
-                    self.statics.add(ap[:-3])
-
     def set_netdevs(self, netdevs: dict[str, Netdev]) -> None:
         ips = set()
         for ip, _ in self.bound:
@@ -237,7 +248,7 @@ class HttpSrv(object):
         if self.args.log_htp:
             self.log(self.name, "workers -= {} = {}".format(n, self.tp_nthr), 6)

-        assert self.tp_q
+        assert self.tp_q  # !rm
         for _ in range(n):
             self.tp_q.put(None)

@@ -431,7 +442,7 @@ class HttpSrv(object):
         )

     def thr_poolw(self) -> None:
-        assert self.tp_q
+        assert self.tp_q  # !rm
         while True:
             task = self.tp_q.get()
             if not task:
@@ -543,8 +554,8 @@ class HttpSrv(object):
             except:
                 pass

-        v = base64.urlsafe_b64encode(spack(b">xxL", int(v)))
-        self.cb_v = v.decode("ascii")[-4:]
+        # spack gives 4 lsb, take 3 lsb, get 4 ch
+        self.cb_v = ub64enc(spack(b">L", int(v))[1:]).decode("ascii")
         self.cb_ts = time.time()
         return self.cb_v

@@ -575,3 +586,32 @@ class HttpSrv(object):
             ident += "a"

         self.u2idx_free[ident] = u2idx
+
+    def read_dls(
+        self,
+    ) -> tuple[
+        dict[str, tuple[float, int, str, str, str]], dict[str, tuple[float, int]]
+    ]:
+        """
+        mp-broker asking for local dl-info + dl-state;
+        reduce overhead by sending just the vfs vpath
+        """
+        dli = {k: (a, b, c.vpath, d, e) for k, (a, b, c, d, e) in self.dli.items()}
+        return (dli, self.dls)
+
+    def write_dls(
+        self,
+        sdli: dict[str, tuple[float, int, str, str, str]],
+        dls: dict[str, tuple[float, int]],
+    ) -> None:
+        """
+        mp-broker pushing total dl-info + dl-state;
+        swap out the vfs vpath with the vfs node
+        """
+        dli: dict[str, tuple[float, int, "VFS", str, str]] = {}
+        for k, (a, b, c, d, e) in sdli.items():
+            vn = self.asrv.vfs.all_nodes[c]
+            dli[k] = (a, b, vn, d, e)
+
+        self.tdli = dli
+        self.tdls = dls
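The rewritten cachebuster above packs the timestamp as a big-endian u32, keeps the 3 low bytes, and base64-encodes them: 3 bytes always yield exactly 4 urlsafe characters with no `=` padding, which the old `[-4:]` slice only approximated. A sketch assuming `ub64enc` is `base64.urlsafe_b64encode` and `spack` is `struct.pack`:

```python
import base64
import struct

# pack as big-endian u32, keep the 3 low bytes, urlsafe-b64:
# 3 input bytes encode to exactly 4 chars with no '=' padding
def cachebuster(v: float) -> str:
    return base64.urlsafe_b64encode(struct.pack(">L", int(v))[1:]).decode("ascii")


zs = cachebuster(1700000000)  # always 4 chars, changes whenever int(v) does
```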
@@ -25,6 +25,7 @@ from .stolen.dnslib import (
     DNSHeader,
     DNSQuestion,
     DNSRecord,
+    set_avahi_379,
 )
 from .util import CachedSet, Daemon, Netdev, list_ips, min_ex

@@ -72,6 +73,9 @@ class MDNS(MCast):
         self.ngen = ngen
         self.ttl = 300

+        if not self.args.zm_nwa_1:
+            set_avahi_379()
+
         zs = self.args.name + ".local."
         zs = zs.encode("ascii", "replace").decode("ascii", "replace")
         self.hn = "-".join(x for x in zs.split("?") if x) or (
@@ -336,6 +340,9 @@ class MDNS(MCast):
                     self.log("stopped", 2)
                     return

+                if self.args.zm_no_pe:
+                    continue
+
                 t = "{} {} \033[33m|{}| {}\n{}".format(
                     self.srv[sck].name, addr, len(buf), repr(buf)[2:-1], min_ex()
                 )
@@ -18,7 +18,7 @@ class Metrics(object):

     def tx(self, cli: "HttpCli") -> bool:
         if not cli.avol:
-            raise Pebkac(403, "not allowed for user " + cli.uname)
+            raise Pebkac(403, "'stats' not allowed for user " + cli.uname)

         args = cli.args
         if not args.stats:
@@ -72,6 +72,9 @@ class Metrics(object):
         v = "{:.3f}".format(self.hsrv.t0)
         addug("cpp_boot_unixtime", "seconds", v, t)

+        t = "number of active downloads"
+        addg("cpp_active_dl", str(len(self.hsrv.tdls)), t)
+
         t = "number of open http(s) client connections"
         addg("cpp_http_conns", str(self.hsrv.ncli), t)

@@ -88,7 +91,7 @@ class Metrics(object):
         addg("cpp_total_bans", str(self.hsrv.nban), t)

         if not args.nos_vst:
-            x = self.hsrv.broker.ask("up2k.get_state")
+            x = self.hsrv.broker.ask("up2k.get_state", True, "")
             vs = json.loads(x.get())

             nvidle = 0
@@ -128,7 +131,7 @@ class Metrics(object):
         addbh("cpp_disk_size_bytes", "total HDD size of volume")
         addbh("cpp_disk_free_bytes", "free HDD space in volume")
         for vpath, vol in allvols:
-            free, total = get_df(vol.realpath)
+            free, total, _ = get_df(vol.realpath, False)
             if free is None or total is None:
                 continue
@@ -4,6 +4,7 @@ from __future__ import print_function, unicode_literals
 import argparse
 import json
 import os
+import re
 import shutil
 import subprocess as sp
 import sys
@@ -62,6 +63,9 @@ def have_ff(scmd: str) -> bool:
 HAVE_FFMPEG = not os.environ.get("PRTY_NO_FFMPEG") and have_ff("ffmpeg")
 HAVE_FFPROBE = not os.environ.get("PRTY_NO_FFPROBE") and have_ff("ffprobe")

+CBZ_PICS = set("png jpg jpeg gif bmp tga tif tiff webp avif".split())
+CBZ_01 = re.compile(r"(^|[^0-9v])0+[01]\b")
+

 class MParser(object):
     def __init__(self, cmdline: str) -> None:
@@ -126,6 +130,7 @@ def au_unpk(
     log: "NamedLogger", fmt_map: dict[str, str], abspath: str, vn: Optional[VFS] = None
 ) -> str:
     ret = ""
+    maxsz = 1024 * 1024 * 64
     try:
         ext = abspath.split(".")[-1].lower()
         au, pk = fmt_map[ext].split(".")
@@ -148,24 +153,48 @@ def au_unpk(
             zf = zipfile.ZipFile(abspath, "r")
             zil = zf.infolist()
             zil = [x for x in zil if x.filename.lower().split(".")[-1] == au]
+            if not zil:
+                raise Exception("no audio inside zip")
             fi = zf.open(zil[0])

+        elif pk == "cbz":
+            import zipfile
+
+            zf = zipfile.ZipFile(abspath, "r")
+            znil = [(x.filename.lower(), x) for x in zf.infolist()]
+            nf = len(znil)
+            znil = [x for x in znil if x[0].split(".")[-1] in CBZ_PICS]
+            znil = [x for x in znil if "cover" in x[0]] or znil
+            znil = [x for x in znil if CBZ_01.search(x[0])] or znil
+            t = "cbz: %d files, %d hits" % (nf, len(znil))
+            if znil:
+                t += ", using " + znil[0][1].filename
+            log(t)
+            if not znil:
+                raise Exception("no images inside cbz")
+            fi = zf.open(znil[0][1])
+
         else:
             raise Exception("unknown compression %s" % (pk,))

+        fsz = 0
         with os.fdopen(fd, "wb") as fo:
             while True:
                 buf = fi.read(32768)
                 if not buf:
                     break

+                fsz += len(buf)
+                if fsz > maxsz:
+                    raise Exception("zipbomb defused")
+
                 fo.write(buf)

         return ret

     except Exception as ex:
         if ret:
-            t = "failed to decompress audio file [%s]: %r"
+            t = "failed to decompress audio file %r: %r"
             log(t % (abspath, ex))
             wunlink(log, ret, vn.flags if vn else VF_CAREFUL)

@@ -473,7 +502,7 @@ class MTag(object):
             sv = str(zv).split("/")[0].strip().lstrip("0")
             ret[sk] = sv or 0

-        # normalize key notation to rkeobo
+        # normalize key notation to rekobo
         okey = ret.get("key")
         if okey:
             key = str(okey).replace(" ", "").replace("maj", "").replace("min", "m")
@@ -553,7 +582,7 @@ class MTag(object):
                 raise Exception()
             except Exception as ex:
                 if self.args.mtag_v:
-                    self.log("mutagen-err [{}] @ [{}]".format(ex, abspath), "90")
+                    self.log("mutagen-err [%s] @ %r" % (ex, abspath), "90")

         return self.get_ffprobe(abspath) if self.can_ffprobe else {}

@@ -670,8 +699,8 @@ class MTag(object):
                 ret[tag] = zj[tag]
             except:
                 if self.args.mtag_v:
-                    t = "mtag error: tagname {}, parser {}, file {} => {}"
-                    self.log(t.format(tagname, parser.bin, abspath, min_ex()))
+                    t = "mtag error: tagname %r, parser %r, file %r => %r"
+                    self.log(t % (tagname, parser.bin, abspath, min_ex()), 6)

         if ap != abspath:
             wunlink(self.log, ap, VF_CAREFUL)
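The new cbz branch picks a thumbnail source by filtering for image extensions, preferring names containing "cover", then zero-padded page 000/001 while skipping volume tags like "v01". A condensed sketch of that heuristic, reusing `CBZ_PICS` and `CBZ_01` as added in the diff:

```python
import re

# CBZ_PICS / CBZ_01 as added above; pick_cover condenses the
# filtering chain from au_unpk's new cbz branch
CBZ_PICS = set("png jpg jpeg gif bmp tga tif tiff webp avif".split())
CBZ_01 = re.compile(r"(^|[^0-9v])0+[01]\b")


def pick_cover(names):
    znil = [n.lower() for n in names]
    znil = [n for n in znil if n.split(".")[-1] in CBZ_PICS]
    znil = [n for n in znil if "cover" in n] or znil  # prefer covers
    znil = [n for n in znil if CBZ_01.search(n)] or znil  # else page 0/1
    return znil[0] if znil else None
```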
@@ -163,6 +163,7 @@ class MCast(object):
         sck.settimeout(None)
         sck.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
         try:
+            # safe for this purpose; https://lwn.net/Articles/853637/
             sck.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
         except:
             pass
@@ -24,17 +24,13 @@ class PWHash(object):
     def __init__(self, args: argparse.Namespace):
         self.args = args

-        try:
-            alg, ac = args.ah_alg.split(",")
-        except:
-            alg = args.ah_alg
-            ac = {}
+        zsl = args.ah_alg.split(",")
+        alg = zsl[0]

         if alg == "none":
             alg = ""

         self.alg = alg
-        self.ac = ac
+        self.ac = zsl[1:]
         if not alg:
             self.on = False
             self.hash = unicode
@@ -90,17 +86,23 @@ class PWHash(object):
         its = 2
         blksz = 8
         para = 4
+        ramcap = 0  # openssl 1.1 = 32 MiB
         try:
             cost = 2 << int(self.ac[0])
             its = int(self.ac[1])
             blksz = int(self.ac[2])
             para = int(self.ac[3])
+            ramcap = int(self.ac[4]) * 1024 * 1024
         except:
             pass

+        cfg = {"salt": self.salt, "n": cost, "r": blksz, "p": para, "dklen": 24}
+        if ramcap:
+            cfg["maxmem"] = ramcap
+
         ret = plain.encode("utf-8")
         for _ in range(its):
-            ret = hashlib.scrypt(ret, salt=self.salt, n=cost, r=blksz, p=para, dklen=24)
+            ret = hashlib.scrypt(ret, **cfg)

         return "+" + base64.urlsafe_b64encode(ret).decode("utf-8")
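The extra tuning value wired into `PWHash` above maps to `hashlib.scrypt`'s `maxmem`; scrypt needs roughly `128 * n * r` bytes of working memory, and the default cap (about 32 MiB with OpenSSL 1.1, as the diff's comment notes) rejects larger cost factors. A sketch with illustrative parameters:

```python
import base64
import hashlib

# the optional ramcap maps to hashlib.scrypt's maxmem; memory
# use is about 128 * n * r bytes, so n=16384, r=8 needs ~16 MiB
salt = b"0123456789abcdef"
cost, blksz, para = 2 << 13, 8, 1  # n=16384
cfg = {"salt": salt, "n": cost, "r": blksz, "p": para, "dklen": 24}
cfg["maxmem"] = 64 * 1024 * 1024  # headroom above the default cap

ret = hashlib.scrypt(b"hunter2", **cfg)
hashed = "+" + base64.urlsafe_b64encode(ret).decode("utf-8")
```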
@@ -12,7 +12,7 @@ from types import SimpleNamespace
 from .__init__ import ANYWIN, EXE, TYPE_CHECKING
 from .authsrv import LEELOO_DALLAS, VFS
 from .bos import bos
-from .util import Daemon, min_ex, pybin, runhook
+from .util import Daemon, absreal, min_ex, pybin, runhook, vjoin

 if True:  # pylint: disable=using-constant-test
     from typing import Any, Union
@@ -151,6 +151,8 @@ class SMB(object):
     def _uname(self) -> str:
         if self.noacc:
             return LEELOO_DALLAS
+        if not self.asrv.acct:
+            return "*"

         try:
             # you found it! my single worst bit of code so far
@@ -189,7 +191,7 @@ class SMB(object):
         vfs, rem = self.asrv.vfs.get(vpath, uname, *perms)
         if not vfs.realpath:
             raise Exception("unmapped vfs")
-        return vfs, vfs.canonical(rem)
+        return vfs, vjoin(vfs.realpath, rem)

     def _listdir(self, vpath: str, *a: Any, **ka: Any) -> list[str]:
         vpath = vpath.replace("\\", "/").lstrip("/")
@@ -213,7 +215,7 @@ class SMB(object):
         sz = 112 * 2  # ['.', '..']
         for n, fn in enumerate(ls):
             if sz >= 64000:
-                t = "listing only %d of %d files (%d byte) in /%s; see impacket#1433"
+                t = "listing only %d of %d files (%d byte) in /%s for performance; see --smb-nwa-1"
                 warning(t, n, len(ls), sz, vpath)
                 break

@@ -242,6 +244,7 @@ class SMB(object):
             t = "blocked write (no-write-acc %s): /%s @%s"
             yeet(t % (vfs.axs.uwrite, vpath, uname))

+        ap = absreal(ap)
         xbu = vfs.flags.get("xbu")
         if xbu and not runhook(
             self.nlog,
@@ -260,7 +263,7 @@ class SMB(object):
             time.time(),
             "",
         ):
-            yeet("blocked by xbu server config: " + vpath)
+            yeet("blocked by xbu server config: %r" % (vpath,))

         ret = bos.open(ap, flags, *a, mode=chmod, **ka)
         if wr:
@@ -84,7 +84,7 @@ class SSDPr(object):
|
|||||||
name = self.args.doctitle
|
name = self.args.doctitle
|
||||||
zs = zs.strip().format(c(ubase), c(url), c(name), c(self.args.zsid))
|
zs = zs.strip().format(c(ubase), c(url), c(name), c(self.args.zsid))
|
||||||
hc.reply(zs.encode("utf-8", "replace"))
|
hc.reply(zs.encode("utf-8", "replace"))
|
||||||
return False # close connectino
|
return False # close connection
|
||||||
|
|
||||||
|
|
||||||
class SSDPd(MCast):
|
class SSDPd(MCast):
|
||||||
|
|||||||
@@ -8,7 +8,7 @@ from itertools import chain
|
|||||||
from .bimap import Bimap, BimapError
|
from .bimap import Bimap, BimapError
|
||||||
from .bit import get_bits, set_bits
|
from .bit import get_bits, set_bits
|
||||||
from .buffer import BufferError
|
from .buffer import BufferError
|
||||||
from .label import DNSBuffer, DNSLabel
|
from .label import DNSBuffer, DNSLabel, set_avahi_379
|
||||||
from .ranges import IP4, IP6, H, I, check_bytes
|
from .ranges import IP4, IP6, H, I, check_bytes
|
||||||
|
|
||||||
|
|
||||||
@@ -426,7 +426,7 @@ class RR(object):
|
|||||||
if rdlength:
|
if rdlength:
|
||||||
rdata = RDMAP.get(QTYPE.get(rtype), RD).parse(buffer, rdlength)
|
rdata = RDMAP.get(QTYPE.get(rtype), RD).parse(buffer, rdlength)
|
||||||
else:
|
else:
|
||||||
rdata = ""
|
rdata = RD(b"a")
|
||||||
return cls(rname, rtype, rclass, ttl, rdata)
|
return cls(rname, rtype, rclass, ttl, rdata)
|
||||||
except (BufferError, BimapError) as e:
|
except (BufferError, BimapError) as e:
|
||||||
raise DNSError("Error unpacking RR [offset=%d]: %s" % (buffer.offset, e))
|
raise DNSError("Error unpacking RR [offset=%d]: %s" % (buffer.offset, e))
|
||||||
|
|||||||
@@ -11,6 +11,23 @@ LDH = set(range(33, 127))
|
|||||||
ESCAPE = re.compile(r"\\([0-9][0-9][0-9])")
|
ESCAPE = re.compile(r"\\([0-9][0-9][0-9])")
|
||||||
|
|
||||||
|
|
||||||
|
avahi_379 = 0
|
||||||
|
|
||||||
|
|
||||||
|
def set_avahi_379():
|
||||||
|
global avahi_379
|
||||||
|
avahi_379 = 1
|
||||||
|
|
||||||
|
|
||||||
|
def log_avahi_379(args):
|
||||||
|
global avahi_379
|
||||||
|
if avahi_379 == 2:
|
||||||
|
return
|
||||||
|
avahi_379 = 2
|
||||||
|
t = "Invalid pointer in DNSLabel [offset=%d,pointer=%d,length=%d];\n\033[35m NOTE: this is probably avahi-bug #379, packet corruption in Avahi's mDNS-reflection feature. Copyparty has a workaround and is OK, but other devices need either --zm4 or --zm6"
|
||||||
|
raise BufferError(t % args)
|
||||||
|
|
||||||
|
|
||||||
class DNSLabelError(Exception):
|
class DNSLabelError(Exception):
|
||||||
pass
|
pass
|
||||||
|
|
||||||
@@ -96,8 +113,11 @@ class DNSBuffer(Buffer):
                     )
                 if pointer < self.offset:
                     self.offset = pointer
+                elif avahi_379:
+                    log_avahi_379((self.offset, pointer, len(self.data)))
+                    label.extend(b"a")
+                    break
                 else:
-
                     raise BufferError(
                         "Invalid pointer in DNSLabel [offset=%d,pointer=%d,length=%d]"
                         % (self.offset, pointer, len(self.data))
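The label-pointer change above only follows a DNS compression pointer when it points backwards; with the avahi-379 workaround enabled, a forward pointer (the corruption produced by the Avahi bug) is logged once and replaced by a dummy label instead of aborting the parse. A minimal standalone sketch of that guard (not copyparty's actual DNSBuffer; the function name is illustrative):

```python
def follow_pointer(offset: int, pointer: int, workaround: bool) -> int:
    """return the new read offset, or -1 to tell the caller
    to substitute a dummy label and stop parsing"""
    if pointer < offset:
        return pointer  # valid back-reference into already-parsed data
    if workaround:
        return -1  # avahi-379 style forward pointer; degrade gracefully
    raise ValueError(
        "Invalid pointer in DNSLabel [offset=%d,pointer=%d]" % (offset, pointer)
    )
```

Requiring `pointer < offset` guarantees the decompression loop always moves strictly backwards, so it must terminate.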

@@ -594,3 +594,20 @@ def _get_bit(x: int, i: int) -> bool:
 
 class DataTooLongError(ValueError):
     pass
+
+
+def qr2svg(qr: QrCode, border: int) -> str:
+    parts: list[str] = []
+    for y in range(qr.size):
+        sy = border + y
+        for x in range(qr.size):
+            if qr.modules[y][x]:
+                parts.append("M%d,%dh1v1h-1z" % (border + x, sy))
+    t = """\
+<?xml version="1.0" encoding="UTF-8"?>
+<svg xmlns="http://www.w3.org/2000/svg" version="1.1" viewBox="0 0 {0} {0}" stroke="none">
+<rect width="100%" height="100%" fill="#F7F7F7"/>
+<path d="{1}" fill="#111111"/>
+</svg>
+"""
+    return t.format(qr.size + border * 2, " ".join(parts))
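The new qr2svg above emits one `M<x>,<y>h1v1h-1z` SVG path command per dark module (a 1x1 filled square), offset by the border. The same traversal as a standalone sketch over a plain boolean matrix, with no QrCode dependency (`modules_to_path` is a hypothetical name):

```python
def modules_to_path(modules: list[list[bool]], border: int) -> str:
    """render a module matrix as an SVG path: one unit square per dark cell"""
    parts = []
    for y, row in enumerate(modules):
        for x, dark in enumerate(row):
            if dark:
                # move to (border+x, border+y), draw a 1x1 square, close
                parts.append("M%d,%dh1v1h-1z" % (border + x, border + y))
    return " ".join(parts)
```

Emitting one closed subpath per module keeps the generator trivial; SVG renderers merge the adjacent squares into a solid QR pattern since they share a single fill.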

@@ -110,7 +110,7 @@ def errdesc(
     report = ["copyparty failed to add the following files to the archive:", ""]
 
     for fn, err in errors:
-        report.extend([" file: {}".format(fn), "error: {}".format(err), ""])
+        report.extend([" file: %r" % (fn,), "error: %s" % (err,), ""])
 
     btxt = "\r\n".join(report).encode("utf-8", "replace")
     btxt = vol_san(list(vfs.all_vols.values()), btxt)

@@ -2,8 +2,6 @@
 from __future__ import print_function, unicode_literals
 
 import argparse
-import base64
-import calendar
 import errno
 import gzip
 import logging
@@ -16,7 +14,7 @@ import string
 import sys
 import threading
 import time
-from datetime import datetime, timedelta
+from datetime import datetime
 
 # from inspect import currentframe
 # print(currentframe().f_lineno)
@@ -52,6 +50,8 @@ from .util import (
     FFMPEG_URL,
     HAVE_PSUTIL,
     HAVE_SQLITE3,
+    HAVE_ZMQ,
+    URL_BUG,
     UTC,
     VERSIONS,
     Daemon,
@@ -62,12 +62,15 @@ from .util import (
     alltrace,
     ansi_re,
     build_netmap,
+    expat_ver,
+    load_ipu,
     min_ex,
     mp,
     odfusion,
     pybin,
     start_log_thrs,
     start_stackmon,
+    ub64enc,
 )
 
 if TYPE_CHECKING:
@@ -104,6 +107,7 @@ class SvcHub(object):
         self.argv = argv
         self.E: EnvParams = args.E
         self.no_ansi = args.no_ansi
+        self.tz = UTC if args.log_utc else None
        self.logf: Optional[typing.TextIO] = None
         self.logf_base_fn = ""
         self.is_dut = False  # running in unittest; always False
@@ -111,14 +115,15 @@ class SvcHub(object):
         self.stopping = False
         self.stopped = False
         self.reload_req = False
-        self.reloading = 0
+        self.reload_mutex = threading.Lock()
         self.stop_cond = threading.Condition()
         self.nsigs = 3
         self.retcode = 0
         self.httpsrv_up = 0
 
         self.log_mutex = threading.Lock()
-        self.next_day = 0
+        self.cday = 0
+        self.cmon = 0
         self.tstack = 0.0
 
         self.iphash = HMaccas(os.path.join(self.E.cfg, "iphash"), 8)
@@ -209,6 +214,15 @@ class SvcHub(object):
             t = "WARNING: --s-rd-sz (%d) is larger than --iobuf (%d); this may lead to reduced performance"
             self.log("root", t % (args.s_rd_sz, args.iobuf), 3)
 
+        zs = ""
+        if args.th_ram_max < 0.22:
+            zs = "generate thumbnails"
+        elif args.th_ram_max < 1:
+            zs = "generate audio waveforms or spectrograms"
+        if zs:
+            t = "WARNING: --th-ram-max is very small (%.2f GiB); will not be able to %s"
+            self.log("root", t % (args.th_ram_max, zs), 3)
+
         if args.chpw and args.idp_h_usr:
             t = "ERROR: user-changeable passwords is incompatible with IdP/identity-providers; you must disable either --chpw or --idp-h-usr"
             self.log("root", t, 1)
@@ -220,6 +234,15 @@ class SvcHub(object):
             noch.update([x for x in zsl if x])
             args.chpw_no = noch
 
+        if args.ipu:
+            iu, nm = load_ipu(self.log, args.ipu, True)
+            setattr(args, "ipu_iu", iu)
+            setattr(args, "ipu_nm", nm)
+
+        if not self.args.no_ses:
+            self.setup_session_db()
+
+        args.shr1 = ""
         if args.shr:
             self.setup_share_db()
 
@@ -368,6 +391,72 @@ class SvcHub(object):
 
         self.broker = Broker(self)
 
+        # create netmaps early to avoid firewall gaps,
+        # but the mutex blocks multiprocessing startup
+        for zs in "ipu_iu ftp_ipa_nm tftp_ipa_nm".split():
+            try:
+                getattr(args, zs).mutex = threading.Lock()
+            except:
+                pass
+
+    def setup_session_db(self) -> None:
+        if not HAVE_SQLITE3:
+            self.args.no_ses = True
+            t = "WARNING: sqlite3 not available; disabling sessions, will use plaintext passwords in cookies"
+            self.log("root", t, 3)
+            return
+
+        import sqlite3
+
+        create = True
+        db_path = self.args.ses_db
+        self.log("root", "opening sessions-db %s" % (db_path,))
+        for n in range(2):
+            try:
+                db = sqlite3.connect(db_path)
+                cur = db.cursor()
+                try:
+                    cur.execute("select count(*) from us").fetchone()
+                    create = False
+                    break
+                except:
+                    pass
+            except Exception as ex:
+                if n:
+                    raise
+                t = "sessions-db corrupt; deleting and recreating: %r"
+                self.log("root", t % (ex,), 3)
+                try:
+                    cur.close()  # type: ignore
+                except:
+                    pass
+                try:
+                    db.close()  # type: ignore
+                except:
+                    pass
+                os.unlink(db_path)
+
+        sch = [
+            r"create table kv (k text, v int)",
+            r"create table us (un text, si text, t0 int)",
+            # username, session-id, creation-time
+            r"create index us_un on us(un)",
+            r"create index us_si on us(si)",
+            r"create index us_t0 on us(t0)",
+            r"insert into kv values ('sver', 1)",
+        ]
+
+        assert db  # type: ignore  # !rm
+        assert cur  # type: ignore  # !rm
+        if create:
+            for cmd in sch:
+                cur.execute(cmd)
+            self.log("root", "created new sessions-db")
+            db.commit()
+
+        cur.close()
+        db.close()
+
     def setup_share_db(self) -> None:
         al = self.args
         if not HAVE_SQLITE3:
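setup_session_db above uses a two-pass "probe, then recreate" pattern: query the expected table, and if the database file turns out to be corrupt, delete it and retry once with a fresh file. A simplified sketch of that open-or-recreate pattern (`open_or_create` is an illustrative name, not copyparty's API):

```python
import os
import sqlite3
import tempfile  # only used by the demo below


def open_or_create(db_path: str) -> sqlite3.Connection:
    """open db_path; create the schema if missing; recreate the file if corrupt"""
    for n in range(2):
        try:
            db = sqlite3.connect(db_path)
            cur = db.cursor()
            try:
                # probe: does the expected table exist and is the file readable?
                cur.execute("select count(*) from us").fetchone()
                return db  # schema already present
            except sqlite3.OperationalError:
                # file is fine, table is missing: first run
                cur.execute("create table us (un text, si text, t0 int)")
                db.commit()
                return db
        except sqlite3.DatabaseError:
            # "file is not a database" etc; give up on the second pass
            if n:
                raise
            try:
                db.close()
            except Exception:
                pass
            os.unlink(db_path)  # corrupt; delete and retry
    raise RuntimeError("unreachable")
```

The distinction between the two except clauses does the routing: a missing table raises `sqlite3.OperationalError` (handled by creating the schema), while a corrupt file raises the broader `sqlite3.DatabaseError` (handled by deleting and retrying).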
@@ -384,6 +473,7 @@ class SvcHub(object):
             raise Exception(t)
 
         al.shr = "/%s/" % (al.shr,)
+        al.shr1 = al.shr[1:]
 
         create = True
         modified = False
@@ -426,8 +516,8 @@ class SvcHub(object):
             r"create index sh_t1 on sh(t1)",
         ]
 
-        assert db  # type: ignore
-        assert cur  # type: ignore
+        assert db  # type: ignore  # !rm
+        assert cur  # type: ignore  # !rm
         if create:
             dver = 2
             modified = True
@@ -544,7 +634,7 @@ class SvcHub(object):
         fng = []
         t_ff = "transcode audio, create spectrograms, video thumbnails"
         to_check = [
-            (HAVE_SQLITE3, "sqlite", "file and media indexing"),
+            (HAVE_SQLITE3, "sqlite", "sessions and file/media indexing"),
             (HAVE_PIL, "pillow", "image thumbnails (plenty fast)"),
             (HAVE_VIPS, "vips", "image thumbnails (faster, eats more ram)"),
             (HAVE_WEBP, "pillow-webp", "create thumbnails as webp files"),
@@ -552,6 +642,7 @@ class SvcHub(object):
             (HAVE_FFPROBE, "ffprobe", t_ff + ", read audio/media tags"),
             (HAVE_MUTAGEN, "mutagen", "read audio tags (ffprobe is better but slower)"),
             (HAVE_ARGON2, "argon2", "secure password hashing (advanced users only)"),
+            (HAVE_ZMQ, "pyzmq", "send zeromq messages from event-hooks"),
             (HAVE_HEIF, "pillow-heif", "read .heif images with pillow (rarely useful)"),
             (HAVE_AVIF, "pillow-avif", "read .avif images with pillow (rarely useful)"),
         ]
@@ -608,6 +699,15 @@ class SvcHub(object):
         if self.args.bauth_last:
             self.log("root", "WARNING: ignoring --bauth-last due to --no-bauth", 3)
 
+        if not self.args.no_dav:
+            from .dxml import DXML_OK
+
+            if not DXML_OK:
+                if not self.args.no_dav:
+                    self.args.no_dav = True
+                    t = "WARNING:\nDisabling WebDAV support because dxml selftest failed. Please report this bug;\n%s\n...and include the following information in the bug-report:\n%s | expat %s\n"
+                    self.log("root", t % (URL_BUG, VERSIONS, expat_ver()), 1)
+
     def _process_config(self) -> bool:
         al = self.args
 
@@ -669,7 +769,7 @@ class SvcHub(object):
             vs = os.path.expandvars(os.path.expanduser(vs))
             setattr(al, k, vs)
 
-        for k in "sus_urls nonsus_urls".split(" "):
+        for k in "dav_ua1 sus_urls nonsus_urls".split(" "):
             vs = getattr(al, k)
             if not vs or vs == "no":
                 setattr(al, k, None)
@@ -693,8 +793,8 @@ class SvcHub(object):
         al.idp_h_grp = al.idp_h_grp.lower()
         al.idp_h_key = al.idp_h_key.lower()
 
-        al.ftp_ipa_nm = build_netmap(al.ftp_ipa or al.ipa)
-        al.tftp_ipa_nm = build_netmap(al.tftp_ipa or al.ipa)
+        al.ftp_ipa_nm = build_netmap(al.ftp_ipa or al.ipa, True)
+        al.tftp_ipa_nm = build_netmap(al.tftp_ipa or al.ipa, True)
 
         mte = ODict.fromkeys(DEF_MTE.split(","), True)
         al.mte = odfusion(mte, al.mte)
@@ -706,7 +806,7 @@ class SvcHub(object):
         al.exp_md = odfusion(exp, al.exp_md.replace(" ", ","))
         al.exp_lg = odfusion(exp, al.exp_lg.replace(" ", ","))
 
-        for k in ["no_hash", "no_idx", "og_ua"]:
+        for k in ["no_hash", "no_idx", "og_ua", "srch_excl"]:
             ptn = getattr(self.args, k)
             if ptn:
                 setattr(self.args, k, re.compile(ptn))
@@ -741,6 +841,24 @@ class SvcHub(object):
         if len(al.tcolor) == 3:  # fc5 => ffcc55
             al.tcolor = "".join([x * 2 for x in al.tcolor])
 
+        zs = al.u2sz
+        zsl = zs.split(",")
+        if len(zsl) not in (1, 3):
+            t = "invalid --u2sz; must be either one number, or a comma-separated list of three numbers (min,default,max)"
+            raise Exception(t)
+        if len(zsl) < 3:
+            zsl = ["1", zs, zs]
+        zi2 = 1
+        for zs in zsl:
+            zi = int(zs)
+            # arbitrary constraint (anything above 2 GiB is probably unintended)
+            if zi < 1 or zi > 2047:
+                raise Exception("invalid --u2sz; minimum is 1, max is 2047")
+            if zi < zi2:
+                raise Exception("invalid --u2sz; values must be equal or ascending")
+            zi2 = zi
+        al.u2sz = ",".join(zsl)
+
         return True
 
     def _ipa2re(self, txt) -> Optional[re.Pattern]:
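The --u2sz validation added above accepts either a single number or a min,default,max triple, with each value constrained to 1..2047 (MiB) and required to be equal or ascending. The same rules as a standalone function (`parse_u2sz` is a hypothetical name for illustration):

```python
def parse_u2sz(zs: str) -> str:
    """normalize a chunksize spec to the canonical 'min,default,max' form"""
    zsl = zs.split(",")
    if len(zsl) not in (1, 3):
        raise ValueError("must be one number or min,default,max")
    if len(zsl) < 3:
        zsl = ["1", zs, zs]  # single number: min defaults to 1
    prev = 1
    for v in zsl:
        n = int(v)
        # anything above 2 GiB is probably unintended
        if n < 1 or n > 2047:
            raise ValueError("minimum is 1, max is 2047")
        if n < prev:
            raise ValueError("values must be equal or ascending")
        prev = n
    return ",".join(zsl)
```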
@@ -791,7 +909,7 @@ class SvcHub(object):
             self.args.nc = min(self.args.nc, soft // 2)
 
     def _logname(self) -> str:
-        dt = datetime.now(UTC)
+        dt = datetime.now(self.tz)
         fn = str(self.args.lo)
         for fs in "YmdHMS":
             fs = "%" + fs
@@ -908,41 +1026,23 @@ class SvcHub(object):
         except:
             self.log("root", "ssdp startup failed;\n" + min_ex(), 3)
 
-    def reload(self) -> str:
-        with self.up2k.mutex:
-            if self.reloading:
-                return "cannot reload; already in progress"
-            self.reloading = 1
-
-        Daemon(self._reload, "reloading")
-        return "reload initiated"
-
-    def _reload(self, rescan_all_vols: bool = True, up2k: bool = True) -> None:
-        with self.up2k.mutex:
-            if self.reloading != 1:
-                return
-            self.reloading = 2
+    def reload(self, rescan_all_vols: bool, up2k: bool) -> str:
+        t = "config has been reloaded"
+        with self.reload_mutex:
             self.log("root", "reloading config")
             self.asrv.reload(9 if up2k else 4)
             if up2k:
                 self.up2k.reload(rescan_all_vols)
+                t += "; volumes are now reinitializing"
             else:
                 self.log("root", "reload done")
             self.broker.reload()
-        self.reloading = 0
+        return t
 
-    def _reload_blocking(self, rescan_all_vols: bool = True, up2k: bool = True) -> None:
-        while True:
-            with self.up2k.mutex:
-                if self.reloading < 2:
-                    self.reloading = 1
-                    break
-            time.sleep(0.05)
-
-        # try to handle multiple pending IdP reloads at once:
-        time.sleep(0.2)
-
-        self._reload(rescan_all_vols=rescan_all_vols, up2k=up2k)
+    def _reload_sessions(self) -> None:
+        with self.asrv.mutex:
+            self.asrv.load_sessions(True)
+            self.broker.reload_sessions()
 
     def stop_thr(self) -> None:
         while not self.stop_req:
@@ -951,7 +1051,7 @@ class SvcHub(object):
 
         if self.reload_req:
             self.reload_req = False
-            self.reload()
+            self.reload(True, True)
 
         self.shutdown()
 
@@ -1064,12 +1164,12 @@ class SvcHub(object):
             return
 
         with self.log_mutex:
-            zd = datetime.now(UTC)
+            dt = datetime.now(self.tz)
             ts = self.log_dfmt % (
-                zd.year,
-                zd.month * 100 + zd.day,
-                (zd.hour * 100 + zd.minute) * 100 + zd.second,
-                zd.microsecond // self.log_div,
+                dt.year,
+                dt.month * 100 + dt.day,
+                (dt.hour * 100 + dt.minute) * 100 + dt.second,
+                dt.microsecond // self.log_div,
             )
 
             if c and not self.args.no_ansi:
@@ -1090,41 +1190,26 @@ class SvcHub(object):
             if not self.args.no_logflush:
                 self.logf.flush()
 
-        now = time.time()
-        if int(now) >= self.next_day:
-            self._set_next_day()
+        if dt.day != self.cday or dt.month != self.cmon:
+            self._set_next_day(dt)
 
-    def _set_next_day(self) -> None:
-        if self.next_day and self.logf and self.logf_base_fn != self._logname():
+    def _set_next_day(self, dt: datetime) -> None:
+        if self.cday and self.logf and self.logf_base_fn != self._logname():
             self.logf.close()
             self._setup_logfile("")
 
-        dt = datetime.now(UTC)
-
-        # unix timestamp of next 00:00:00 (leap-seconds safe)
-        day_now = dt.day
-        while dt.day == day_now:
-            dt += timedelta(hours=12)
-
-        dt = dt.replace(hour=0, minute=0, second=0)
-        try:
-            tt = dt.utctimetuple()
-        except:
-            # still makes me hella uncomfortable
-            tt = dt.timetuple()
-
-        self.next_day = calendar.timegm(tt)
+        self.cday = dt.day
+        self.cmon = dt.month
 
     def _log_enabled(self, src: str, msg: str, c: Union[int, str] = 0) -> None:
         """handles logging from all components"""
        with self.log_mutex:
-            now = time.time()
-            if int(now) >= self.next_day:
-                dt = datetime.fromtimestamp(now, UTC)
+            dt = datetime.now(self.tz)
+            if dt.day != self.cday or dt.month != self.cmon:
                 zs = "{}\n" if self.no_ansi else "\033[36m{}\033[0m\n"
                 zs = zs.format(dt.strftime("%Y-%m-%d"))
                 print(zs, end="")
-                self._set_next_day()
+                self._set_next_day(dt)
                 if self.logf:
                     self.logf.write(zs)
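The refactor above drops the precomputed next-midnight unix timestamp (and with it the calendar/timedelta imports) in favor of simply comparing the current datetime's day and month against the last-seen cday/cmon values. The rollover test in isolation, as a small sketch (the class name is illustrative):

```python
from datetime import datetime, timezone


class DayTracker:
    """detect day rollovers by comparing day/month against the last call"""

    def __init__(self) -> None:
        self.cday = 0
        self.cmon = 0

    def rolled_over(self, dt: datetime) -> bool:
        if dt.day != self.cday or dt.month != self.cmon:
            self.cday = dt.day
            self.cmon = dt.month
            return True
        return False
```

Checking month as well as day covers the edge where consecutive calls land on the same day-of-month in different months (e.g. a process idle from May 1 to June 1).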
@@ -1143,12 +1228,11 @@ class SvcHub(object):
             else:
                 msg = "%s%s\033[0m" % (c, msg)
 
-            zd = datetime.fromtimestamp(now, UTC)
             ts = self.log_efmt % (
-                zd.hour,
-                zd.minute,
-                zd.second,
-                zd.microsecond // self.log_div,
+                dt.hour,
+                dt.minute,
+                dt.second,
+                dt.microsecond // self.log_div,
             )
             msg = fmt % (ts, src, msg)
             try:
@@ -1246,5 +1330,5 @@ class SvcHub(object):
         zs = "{}\n{}".format(VERSIONS, alltrace())
         zb = zs.encode("utf-8", "replace")
         zb = gzip.compress(zb)
-        zs = base64.b64encode(zb).decode("ascii")
+        zs = ub64enc(zb).decode("ascii")
         self.log("stacks", zs)
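The stack dump above is gzipped and base64-encoded before being logged as a single line; assuming ub64enc is copyparty's urlsafe-base64 helper, the round-trip is roughly equivalent to this stdlib-only sketch:

```python
import base64
import gzip


def pack_for_log(text: str) -> str:
    """gzip + urlsafe-base64: one copy-pastable line, no +/" characters"""
    zb = gzip.compress(text.encode("utf-8", "replace"))
    return base64.urlsafe_b64encode(zb).decode("ascii")


def unpack_from_log(zs: str) -> str:
    return gzip.decompress(base64.urlsafe_b64decode(zs)).decode("utf-8")
```

The urlsafe alphabet (`-` and `_` instead of `+` and `/`) keeps the payload safe to paste into URLs and shell commands when reporting a hang.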
|
|||||||
@@ -100,12 +100,12 @@ def gen_hdr(
|
|||||||
|
|
||||||
# spec says to put zeros when !crc if bit3 (streaming)
|
# spec says to put zeros when !crc if bit3 (streaming)
|
||||||
# however infozip does actual sz and it even works on winxp
|
# however infozip does actual sz and it even works on winxp
|
||||||
# (same reasning for z64 extradata later)
|
# (same reasoning for z64 extradata later)
|
||||||
vsz = 0xFFFFFFFF if z64 else sz
|
vsz = 0xFFFFFFFF if z64 else sz
|
||||||
ret += spack(b"<LL", vsz, vsz)
|
ret += spack(b"<LL", vsz, vsz)
|
||||||
|
|
||||||
# windows support (the "?" replace below too)
|
# windows support (the "?" replace below too)
|
||||||
fn = sanitize_fn(fn, "/", [])
|
fn = sanitize_fn(fn, "/")
|
||||||
bfn = fn.encode("utf-8" if utf8 else "cp437", "replace").replace(b"?", b"_")
|
bfn = fn.encode("utf-8" if utf8 else "cp437", "replace").replace(b"?", b"_")
|
||||||
|
|
||||||
# add ntfs (0x24) and/or unix (0x10) extrafields for utc, add z64 if requested
|
# add ntfs (0x24) and/or unix (0x10) extrafields for utc, add z64 if requested
|
||||||
|
|||||||
@@ -95,7 +95,7 @@ class TcpSrv(object):
|
|||||||
continue
|
continue
|
||||||
|
|
||||||
# binding 0.0.0.0 after :: fails on dualstack
|
# binding 0.0.0.0 after :: fails on dualstack
|
||||||
# but is necessary on non-dualstakc
|
# but is necessary on non-dualstack
|
||||||
if successful_binds:
|
if successful_binds:
|
||||||
continue
|
continue
|
||||||
|
|
||||||
@@ -371,7 +371,7 @@ class TcpSrv(object):
|
|||||||
if self.args.q:
|
if self.args.q:
|
||||||
print(msg)
|
print(msg)
|
||||||
|
|
||||||
self.hub.broker.say("listen", srv)
|
self.hub.broker.say("httpsrv.listen", srv)
|
||||||
|
|
||||||
self.srv = srvs
|
self.srv = srvs
|
||||||
self.bound = bound
|
self.bound = bound
|
||||||
@@ -379,7 +379,7 @@ class TcpSrv(object):
|
|||||||
self._distribute_netdevs()
|
self._distribute_netdevs()
|
||||||
|
|
||||||
def _distribute_netdevs(self):
|
def _distribute_netdevs(self):
|
||||||
self.hub.broker.say("set_netdevs", self.netdevs)
|
self.hub.broker.say("httpsrv.set_netdevs", self.netdevs)
|
||||||
self.hub.start_zeroconf()
|
self.hub.start_zeroconf()
|
||||||
gencert(self.log, self.args, self.netdevs)
|
gencert(self.log, self.args, self.netdevs)
|
||||||
self.hub.restart_ftpd()
|
self.hub.restart_ftpd()
|
||||||
@@ -402,17 +402,17 @@ class TcpSrv(object):
|
|||||||
if not netdevs:
|
if not netdevs:
|
||||||
continue
|
continue
|
||||||
|
|
||||||
added = "nothing"
|
add = []
|
||||||
removed = "nothing"
|
rem = []
|
||||||
for k, v in netdevs.items():
|
for k, v in netdevs.items():
|
||||||
if k not in self.netdevs:
|
if k not in self.netdevs:
|
||||||
added = "{} = {}".format(k, v)
|
add.append("\n\033[32m added %s = %s" % (k, v))
|
||||||
for k, v in self.netdevs.items():
|
for k, v in self.netdevs.items():
|
||||||
if k not in netdevs:
|
if k not in netdevs:
|
||||||
removed = "{} = {}".format(k, v)
|
rem.append("\n\033[33mremoved %s = %s" % (k, v))
|
||||||
|
|
||||||
t = "network change detected:\n added {}\033[0;33m\nremoved {}"
|
t = "network change detected:%s%s"
|
||||||
self.log("tcpsrv", t.format(added, removed), 3)
|
self.log("tcpsrv", t % ("".join(add), "".join(rem)), 3)
|
||||||
self.netdevs = netdevs
|
self.netdevs = netdevs
|
||||||
self._distribute_netdevs()
|
self._distribute_netdevs()
|
||||||
|
|
||||||
|
|||||||
@@ -269,6 +269,7 @@ class Tftpd(object):
|
|||||||
"*",
|
"*",
|
||||||
not self.args.no_scandir,
|
not self.args.no_scandir,
|
||||||
[[True, False]],
|
[[True, False]],
|
||||||
|
throw=True,
|
||||||
)
|
)
|
||||||
dnames = set([x[0] for x in vfs_ls if stat.S_ISDIR(x[1].st_mode)])
|
dnames = set([x[0] for x in vfs_ls if stat.S_ISDIR(x[1].st_mode)])
|
||||||
dirs1 = [(v.st_mtime, v.st_size, k + "/") for k, v in vfs_ls if k in dnames]
|
dirs1 = [(v.st_mtime, v.st_size, k + "/") for k, v in vfs_ls if k in dnames]
|
||||||
@@ -356,7 +357,7 @@ class Tftpd(object):
|
|||||||
time.time(),
|
time.time(),
|
||||||
"",
|
"",
|
||||||
):
|
):
|
||||||
yeet("blocked by xbu server config: " + vpath)
|
yeet("blocked by xbu server config: %r" % (vpath,))
|
||||||
|
|
||||||
if not self.args.tftp_nols and bos.path.isdir(ap):
|
if not self.args.tftp_nols and bos.path.isdir(ap):
|
||||||
     return self._ls(vpath, "", 0, True)

@@ -6,7 +6,7 @@ import os
 from .__init__ import TYPE_CHECKING
 from .authsrv import VFS
 from .bos import bos
-from .th_srv import HAVE_WEBP, thumb_path
+from .th_srv import EXTS_AC, HAVE_WEBP, thumb_path
 from .util import Cooldown
 
 if True:  # pylint: disable=using-constant-test
@@ -57,13 +57,17 @@ class ThumbCli(object):
         if is_vid and "dvthumb" in dbv.flags:
             return None
 
-        want_opus = fmt in ("opus", "caf", "mp3")
+        want_opus = fmt in EXTS_AC
         is_au = ext in self.fmt_ffa
         is_vau = want_opus and ext in self.fmt_ffv
         if is_au or is_vau:
             if want_opus:
                 if self.args.no_acode:
                     return None
+                elif fmt == "caf" and self.args.no_caf:
+                    fmt = "mp3"
+                elif fmt == "owa" and self.args.no_owa:
+                    fmt = "mp3"
             else:
                 if "dathumb" in dbv.flags:
                     return None
@@ -109,13 +113,13 @@ class ThumbCli(object):
             fmt = sfmt
 
         elif fmt[:1] == "p" and not is_au and not is_vid:
-            t = "cannot thumbnail [%s]: png only allowed for waveforms"
-            self.log(t % (rem), 6)
+            t = "cannot thumbnail %r: png only allowed for waveforms"
+            self.log(t % (rem,), 6)
             return None
 
         histpath = self.asrv.vfs.histtab.get(ptop)
         if not histpath:
-            self.log("no histpath for [{}]".format(ptop))
+            self.log("no histpath for %r" % (ptop,))
             return None
 
         tpath = thumb_path(histpath, rem, mtime, fmt, self.fmt_ffa)
@@ -1,7 +1,6 @@
 # coding: utf-8
 from __future__ import print_function, unicode_literals
 
-import base64
 import hashlib
 import logging
 import os
@@ -21,19 +20,19 @@ from .util import (
     FFMPEG_URL,
     Cooldown,
     Daemon,
-    Pebkac,
     afsenc,
     fsenc,
     min_ex,
     runcmd,
     statdir,
+    ub64enc,
     vsplit,
     wrename,
     wunlink,
 )
 
 if True:  # pylint: disable=using-constant-test
-    from typing import Optional, Union
+    from typing import Any, Optional, Union
 
 if TYPE_CHECKING:
     from .svchub import SvcHub
@@ -47,6 +46,9 @@ HAVE_HEIF = False
 HAVE_AVIF = False
 HAVE_WEBP = False
 
+EXTS_TH = set(["jpg", "webp", "png"])
+EXTS_AC = set(["opus", "owa", "caf", "mp3"])
+
 try:
     if os.environ.get("PRTY_NO_PIL"):
         raise Exception()
@@ -109,6 +111,9 @@ except:
     HAVE_VIPS = False
 
 
+th_dir_cache = {}
+
+
 def thumb_path(histpath: str, rem: str, mtime: float, fmt: str, ffa: set[str]) -> str:
     # base16 = 16 = 256
     # b64-lc = 38 = 1444
@@ -122,16 +127,22 @@ def thumb_path(histpath: str, rem: str, mtime: float, fmt: str, ffa: set[str]) -
     if ext in ffa and fmt[:2] in ("wf", "jf"):
         fmt = fmt.replace("f", "")
 
-    rd += "\n" + fmt
-    h = hashlib.sha512(afsenc(rd)).digest()
-    b64 = base64.urlsafe_b64encode(h).decode("ascii")[:24]
-    rd = ("%s/%s/" % (b64[:2], b64[2:4])).lower() + b64
+    dcache = th_dir_cache
+    rd_key = rd + "\n" + fmt
+    rd = dcache.get(rd_key)
+    if not rd:
+        h = hashlib.sha512(afsenc(rd_key)).digest()
+        b64 = ub64enc(h).decode("ascii")[:24]
+        rd = ("%s/%s/" % (b64[:2], b64[2:4])).lower() + b64
+        if len(dcache) > 9001:
+            dcache.clear()
+        dcache[rd_key] = rd
 
     # could keep original filenames but this is safer re pathlen
     h = hashlib.sha512(afsenc(fn)).digest()
-    fn = base64.urlsafe_b64encode(h).decode("ascii")[:24]
+    fn = ub64enc(h).decode("ascii")[:24]
 
-    if fmt in ("opus", "caf", "mp3"):
+    if fmt in EXTS_AC:
         cat = "ac"
     else:
         fc = fmt[:1]
@@ -155,6 +166,7 @@ class ThumbSrv(object):
         self.ram: dict[str, float] = {}
         self.memcond = threading.Condition(self.mutex)
         self.stopping = False
+        self.rm_nullthumbs = True  # forget failed conversions on startup
         self.nthr = max(1, self.args.th_mt)
 
         self.q: Queue[Optional[tuple[str, str, str, VFS]]] = Queue(self.nthr * 4)
@@ -230,7 +242,7 @@ class ThumbSrv(object):
     def get(self, ptop: str, rem: str, mtime: float, fmt: str) -> Optional[str]:
         histpath = self.asrv.vfs.histtab.get(ptop)
         if not histpath:
-            self.log("no histpath for [{}]".format(ptop))
+            self.log("no histpath for %r" % (ptop,))
             return None
 
         tpath = thumb_path(histpath, rem, mtime, fmt, self.fmt_ffa)
@@ -240,7 +252,7 @@ class ThumbSrv(object):
         with self.mutex:
             try:
                 self.busy[tpath].append(cond)
-                self.log("joined waiting room for %s" % (tpath,))
+                self.log("joined waiting room for %r" % (tpath,))
             except:
                 thdir = os.path.dirname(tpath)
                 bos.makedirs(os.path.join(thdir, "w"))
@@ -257,11 +269,11 @@ class ThumbSrv(object):
             allvols = list(self.asrv.vfs.all_vols.values())
             vn = next((x for x in allvols if x.realpath == ptop), None)
             if not vn:
-                self.log("ptop [{}] not in {}".format(ptop, allvols), 3)
+                self.log("ptop %r not in %s" % (ptop, allvols), 3)
                 vn = self.asrv.vfs.all_aps[0][1]
 
             self.q.put((abspath, tpath, fmt, vn))
-            self.log("conv {} :{} \033[0m{}".format(tpath, fmt, abspath), c=6)
+            self.log("conv %r :%s \033[0m%r" % (tpath, fmt, abspath), 6)
 
         while not self.stopping:
             with self.mutex:
@@ -325,9 +337,10 @@ class ThumbSrv(object):
             ap_unpk = abspath
 
         if not bos.path.exists(tpath):
-            want_mp3 = tpath.endswith(".mp3")
-            want_opus = tpath.endswith(".opus") or tpath.endswith(".caf")
-            want_png = tpath.endswith(".png")
+            tex = tpath.rsplit(".", 1)[-1]
+            want_mp3 = tex == "mp3"
+            want_opus = tex in ("opus", "owa", "caf")
+            want_png = tex == "png"
             want_au = want_mp3 or want_opus
             for lib in self.args.th_dec:
                 can_au = lib == "ff" and (
@@ -366,8 +379,8 @@ class ThumbSrv(object):
                     fun(ap_unpk, ttpath, fmt, vn)
                     break
                 except Exception as ex:
-                    msg = "{} could not create thumbnail of {}\n{}"
-                    msg = msg.format(fun.__name__, abspath, min_ex())
+                    msg = "%s could not create thumbnail of %r\n%s"
+                    msg = msg % (fun.__name__, abspath, min_ex())
                     c: Union[str, int] = 1 if "<Signals.SIG" in msg else "90"
                     self.log(msg, c)
                     if getattr(ex, "returncode", 0) != 321:
@@ -479,7 +492,7 @@ class ThumbSrv(object):
                 if c == crops[-1]:
                     raise
 
-        assert img  # type: ignore
+        assert img  # type: ignore  # !rm
         img.write_to_file(tpath, Q=40)
 
     def conv_ffmpeg(self, abspath: str, tpath: str, fmt: str, vn: VFS) -> None:
@@ -745,47 +758,102 @@ class ThumbSrv(object):
         if "ac" not in tags:
             raise Exception("not audio")
 
+        sq = "%dk" % (self.args.q_opus,)
+        bq = sq.encode("ascii")
+        if tags["ac"][1] == "opus":
+            enc = "-c:a copy"
+        else:
+            enc = "-c:a libopus -b:a " + sq
+
+        fun = self._conv_caf if fmt == "caf" else self._conv_owa
+
+        fun(abspath, tpath, tags, rawtags, enc, bq, vn)
+
+    def _conv_owa(
+        self,
+        abspath: str,
+        tpath: str,
+        tags: dict[str, tuple[int, Any]],
+        rawtags: dict[str, list[Any]],
+        enc: str,
+        bq: bytes,
+        vn: VFS,
+    ) -> None:
+        if tpath.endswith(".owa"):
+            container = b"webm"
+            tagset = [b"-map_metadata", b"-1"]
+        else:
+            container = b"opus"
+            tagset = self.big_tags(rawtags)
+
+        self.log("conv2 %s [%s]" % (container, enc), 6)
+        benc = enc.encode("ascii").split(b" ")
+
+        # fmt: off
+        cmd = [
+            b"ffmpeg",
+            b"-nostdin",
+            b"-v", b"error",
+            b"-hide_banner",
+            b"-i", fsenc(abspath),
+        ] + tagset + [
+            b"-map", b"0:a:0",
+        ] + benc + [
+            b"-f", container,
+            fsenc(tpath)
+        ]
+        # fmt: on
+        self._run_ff(cmd, vn, oom=300)
+
+    def _conv_caf(
+        self,
+        abspath: str,
+        tpath: str,
+        tags: dict[str, tuple[int, Any]],
+        rawtags: dict[str, list[Any]],
+        enc: str,
+        bq: bytes,
+        vn: VFS,
+    ) -> None:
+        tmp_opus = tpath + ".opus"
+        try:
+            wunlink(self.log, tmp_opus, vn.flags)
+        except:
+            pass
+
         try:
             dur = tags[".dur"][1]
         except:
             dur = 0
 
-        src_opus = abspath.lower().endswith(".opus") or tags["ac"][1] == "opus"
-        want_caf = tpath.endswith(".caf")
-        tmp_opus = tpath
-        if want_caf:
-            tmp_opus = tpath + ".opus"
-            try:
-                wunlink(self.log, tmp_opus, vn.flags)
-            except:
-                pass
-
-        caf_src = abspath if src_opus else tmp_opus
-        bq = ("%dk" % (self.args.q_opus,)).encode("ascii")
-
-        if not want_caf or not src_opus:
-            # fmt: off
-            cmd = [
-                b"ffmpeg",
-                b"-nostdin",
-                b"-v", b"error",
-                b"-hide_banner",
-                b"-i", fsenc(abspath),
-            ] + self.big_tags(rawtags) + [
-                b"-map", b"0:a:0",
-                b"-c:a", b"libopus",
-                b"-b:a", bq,
-                fsenc(tmp_opus)
-            ]
-            # fmt: on
-            self._run_ff(cmd, vn, oom=300)
+        self.log("conv2 caf-tmp [%s]" % (enc,), 6)
+        benc = enc.encode("ascii").split(b" ")
+
+        # fmt: off
+        cmd = [
+            b"ffmpeg",
+            b"-nostdin",
+            b"-v", b"error",
+            b"-hide_banner",
+            b"-i", fsenc(abspath),
+            b"-map_metadata", b"-1",
+            b"-map", b"0:a:0",
+        ] + benc + [
+            b"-f", b"opus",
+            fsenc(tmp_opus)
+        ]
+        # fmt: on
+        self._run_ff(cmd, vn, oom=300)
 
         # iOS fails to play some "insufficiently complex" files
         # (average file shorter than 8 seconds), so of course we
         # fix that by mixing in some inaudible pink noise :^)
        # 6.3 sec seems like the cutoff so lets do 7, and
         # 7 sec of psyqui-musou.opus @ 3:50 is 174 KiB
-        if want_caf and (dur < 20 or bos.path.getsize(caf_src) < 256 * 1024):
+        sz = bos.path.getsize(tmp_opus)
+        if dur < 20 or sz < 256 * 1024:
+            zs = bq.decode("ascii")
+            self.log("conv2 caf-transcode; dur=%d sz=%d q=%s" % (dur, sz, zs), 6)
             # fmt: off
             cmd = [
                 b"ffmpeg",
@@ -804,15 +872,16 @@ class ThumbSrv(object):
             # fmt: on
             self._run_ff(cmd, vn, oom=300)
 
-        elif want_caf:
+        else:
             # simple remux should be safe
+            self.log("conv2 caf-remux; dur=%d sz=%d" % (dur, sz), 6)
             # fmt: off
             cmd = [
                 b"ffmpeg",
                 b"-nostdin",
                 b"-v", b"error",
                 b"-hide_banner",
-                b"-i", fsenc(abspath if src_opus else tmp_opus),
+                b"-i", fsenc(tmp_opus),
                 b"-map_metadata", b"-1",
                 b"-map", b"0:a:0",
                 b"-c:a", b"copy",
@@ -822,11 +891,10 @@ class ThumbSrv(object):
             # fmt: on
             self._run_ff(cmd, vn, oom=300)
 
-        if tmp_opus != tpath:
-            try:
-                wunlink(self.log, tmp_opus, vn.flags)
-            except:
-                pass
+        try:
+            wunlink(self.log, tmp_opus, vn.flags)
+        except:
+            pass
 
     def big_tags(self, raw_tags: dict[str, list[str]]) -> list[bytes]:
         ret = []
@@ -853,7 +921,6 @@ class ThumbSrv(object):
     def cleaner(self) -> None:
         interval = self.args.th_clean
         while True:
-            time.sleep(interval)
            ndirs = 0
            for vol, histpath in self.asrv.vfs.histtab.items():
                if histpath.startswith(vol):
@@ -867,6 +934,8 @@ class ThumbSrv(object):
                    self.log("\033[Jcln err in %s: %r" % (histpath, ex), 3)

            self.log("\033[Jcln ok; rm {} dirs".format(ndirs))
+            self.rm_nullthumbs = False
+            time.sleep(interval)

     def clean(self, histpath: str) -> int:
         ret = 0
@@ -881,13 +950,15 @@ class ThumbSrv(object):

     def _clean(self, cat: str, thumbpath: str) -> int:
         # self.log("cln {}".format(thumbpath))
-        exts = ["jpg", "webp", "png"] if cat == "th" else ["opus", "caf", "mp3"]
+        exts = EXTS_TH if cat == "th" else EXTS_AC
         maxage = getattr(self.args, cat + "_maxage")
         now = time.time()
         prev_b64 = None
         prev_fp = ""
         try:
-            t1 = statdir(self.log_func, not self.args.no_scandir, False, thumbpath)
+            t1 = statdir(
+                self.log_func, not self.args.no_scandir, False, thumbpath, False
+            )
             ents = sorted(list(t1))
         except:
             return 0
@@ -928,6 +999,10 @@ class ThumbSrv(object):

                continue

+            if self.rm_nullthumbs and not inf.st_size:
+                bos.unlink(fp)
+                continue
+
            if b64 == prev_b64:
                self.log("rm replaced [{}]".format(fp))
                bos.unlink(prev_fp)
@@ -53,6 +53,8 @@ class U2idx(object):
             self.log("your python does not have sqlite3; searching will be disabled")
             return
 
+        assert sqlite3  # type: ignore  # !rm
+
         self.active_id = ""
         self.active_cur: Optional["sqlite3.Cursor"] = None
         self.cur: dict[str, "sqlite3.Cursor"] = {}
@@ -68,6 +70,9 @@ class U2idx(object):
         self.log_func("u2idx", msg, c)
 
     def shutdown(self) -> None:
+        if not HAVE_SQLITE3:
+            return
+
         for cur in self.cur.values():
             db = cur.connection
             try:
@@ -78,6 +83,12 @@ class U2idx(object):
             cur.close()
             db.close()
 
+        for cur in (self.mem_cur, self.sh_cur):
+            if cur:
+                db = cur.connection
+                cur.close()
+                db.close()
+
     def fsearch(
         self, uname: str, vols: list[VFS], body: dict[str, Any]
     ) -> list[dict[str, Any]]:
@@ -93,7 +104,7 @@ class U2idx(object):
         uv: list[Union[str, int]] = [wark[:16], wark]
 
         try:
-            return self.run_query(uname, vols, uq, uv, False, 99999)[0]
+            return self.run_query(uname, vols, uq, uv, False, True, 99999)[0]
         except:
             raise Pebkac(500, min_ex())
 
@@ -104,7 +115,7 @@ class U2idx(object):
         if not HAVE_SQLITE3 or not self.args.shr:
             return None
 
-        assert sqlite3  # type: ignore
+        assert sqlite3  # type: ignore  # !rm
 
         db = sqlite3.connect(self.args.shr_db, timeout=2, check_same_thread=False)
         cur = db.cursor()
@@ -120,12 +131,12 @@ class U2idx(object):
         if not HAVE_SQLITE3 or "e2d" not in vn.flags:
             return None
 
-        assert sqlite3  # type: ignore
+        assert sqlite3  # type: ignore  # !rm
 
         ptop = vn.realpath
         histpath = self.asrv.vfs.histtab.get(ptop)
         if not histpath:
-            self.log("no histpath for [{}]".format(ptop))
+            self.log("no histpath for %r" % (ptop,))
             return None
 
         db_path = os.path.join(histpath, "up2k.db")
@@ -140,7 +151,7 @@ class U2idx(object):
             db = sqlite3.connect(uri, timeout=2, uri=True, check_same_thread=False)
             cur = db.cursor()
             cur.execute('pragma table_info("up")').fetchone()
-            self.log("ro: {}".format(db_path))
+            self.log("ro: %r" % (db_path,))
         except:
             self.log("could not open read-only: {}\n{}".format(uri, min_ex()))
             # may not fail until the pragma so unset it
@@ -150,7 +161,7 @@ class U2idx(object):
             # on windows, this steals the write-lock from up2k.deferred_init --
             # seen on win 10.0.17763.2686, py 3.10.4, sqlite 3.37.2
             cur = sqlite3.connect(db_path, timeout=2, check_same_thread=False).cursor()
-            self.log("opened {}".format(db_path))
+            self.log("opened %r" % (db_path,))
 
         self.cur[ptop] = cur
         return cur
@@ -299,7 +310,7 @@ class U2idx(object):
             q += " lower({}) {} ? ) ".format(field, oper)
 
         try:
-            return self.run_query(uname, vols, q, va, have_mt, lim)
+            return self.run_query(uname, vols, q, va, have_mt, True, lim)
         except Exception as ex:
             raise Pebkac(500, repr(ex))
 
@@ -310,9 +321,11 @@ class U2idx(object):
         uq: str,
         uv: list[Union[str, int]],
         have_mt: bool,
+        sort: bool,
         lim: int,
     ) -> tuple[list[dict[str, Any]], list[str], bool]:
-        if self.args.srch_dbg:
+        dbg = self.args.srch_dbg
+        if dbg:
             t = "searching across all %s volumes in which the user has 'r' (full read access):\n %s"
             zs = "\n ".join(["/%s = %s" % (x.vpath, x.realpath) for x in vols])
             self.log(t % (len(vols), zs), 5)
@@ -355,14 +368,14 @@ class U2idx(object):
             if not cur:
                 continue
 
-            excl = []
-            for vp2 in self.asrv.vfs.all_vols.keys():
-                if vp2.startswith((vtop + "/").lstrip("/")) and vtop != vp2:
-                    excl.append(vp2[len(vtop) :].lstrip("/"))
+            dots = flags.get("dotsrch") and uname in vol.axs.udot
+            zs = "srch_re_dots" if dots else "srch_re_nodot"
+            rex: re.Pattern = flags.get(zs)  # type: ignore
 
-            if self.args.srch_dbg:
-                t = "searching in volume /%s (%s), excludelist %s"
-                self.log(t % (vtop, ptop, excl), 5)
+            if dbg:
+                t = "searching in volume /%s (%s), excluding %s"
+                self.log(t % (vtop, ptop, rex.pattern), 5)
+                rex_cfg: Optional[re.Pattern] = flags.get("srch_excl")
 
             self.active_cur = cur
 
@@ -375,7 +388,6 @@ class U2idx(object):
 
             sret = []
             fk = flags.get("fk")
-            dots = flags.get("dotsrch") and uname in vol.axs.udot
             fk_alg = 2 if "fka" in flags else 1
             c = cur.execute(uq, tuple(vuv))
             for hit in c:
@@ -384,20 +396,23 @@ class U2idx(object):
                 if rd.startswith("//") or fn.startswith("//"):
                     rd, fn = s3dec(rd, fn)
 
-                if rd in excl or any([x for x in excl if rd.startswith(x + "/")]):
-                    if self.args.srch_dbg:
-                        zs = vjoin(vjoin(vtop, rd), fn)
-                        t = "database inconsistency in volume '/%s'; ignoring: %s"
-                        self.log(t % (vtop, zs), 1)
+                vp = vjoin(vjoin(vtop, rd), fn)
+
+                if vp in seen_rps:
                     continue
 
-                rp = quotep("/".join([x for x in [vtop, rd, fn] if x]))
-                if not dots and "/." in ("/" + rp):
+                if rex.search(vp):
+                    if dbg:
+                        if rex_cfg and rex_cfg.search(vp):  # type: ignore
+                            self.log("filtered by srch_excl: %s" % (vp,), 6)
+                        elif not dots and "/." in ("/" + vp):
+                            pass
+                        else:
+                            t = "database inconsistency in volume '/%s'; ignoring: %s"
+                            self.log(t % (vtop, vp), 1)
                     continue
 
-                if rp in seen_rps:
-                    continue
-
+                rp = quotep(vp)
                 if not fk:
                     suf = ""
                 else:
@@ -419,7 +434,7 @@ class U2idx(object):
                 if lim < 0:
                     break
 
-                if self.args.srch_dbg:
+                if dbg:
                     t = "in volume '/%s': hit: %s"
                     self.log(t % (vtop, rp), 5)
 
@@ -449,14 +464,15 @@ class U2idx(object):
             ret.extend(sret)
             # print("[{}] {}".format(ptop, sret))
 
-            if self.args.srch_dbg:
+            if dbg:
                 t = "in volume '/%s': got %d hits, %d total so far"
                 self.log(t % (vtop, len(sret), len(ret)), 5)
 
         done_flag.append(True)
         self.active_id = ""
 
-        ret.sort(key=itemgetter("rp"))
+        if sort:
+            ret.sort(key=itemgetter("rp"))
 
         return ret, list(taglist.keys()), lim < 0 and not clamped
 
@@ -467,5 +483,5 @@ class U2idx(object):
             return
 
         if identifier == self.active_id:
-            assert self.active_cur
+            assert self.active_cur  # !rm
             self.active_cur.connection.interrupt()
copyparty/up2k.py (1373 changed lines): file diff suppressed because it is too large.
@@ -32,7 +32,7 @@ window.baguetteBox = (function () {
         scrollCSS = ['', ''],
         scrollTimer = 0,
         re_i = /^[^?]+\.(a?png|avif|bmp|gif|heif|jpe?g|jfif|svg|webp)(\?|$)/i,
-        re_v = /^[^?]+\.(webm|mkv|mp4)(\?|$)/i,
+        re_v = /^[^?]+\.(webm|mkv|mp4|m4v|mov)(\?|$)/i,
         anims = ['slideIn', 'fadeIn', 'none'],
         data = {}, // all galleries
         imagesElements = [],
@@ -633,6 +633,9 @@ window.baguetteBox = (function () {
         catch (ex) { }
         isFullscreen = false;
 
+        if (toast.tag == 'bb-ded')
+            toast.hide();
+
         if (dtor || overlay.style.display === 'none')
             return;
 
@@ -668,6 +671,7 @@ window.baguetteBox = (function () {
             if (v == keep)
                 continue;
 
+            unbind(v, 'error', lerr);
             v.src = '';
             v.load();
 
@@ -695,6 +699,28 @@ window.baguetteBox = (function () {
         }
     }
 
+    function lerr() {
+        var t;
+        try {
+            t = this.getAttribute('src');
+            t = uricom_dec(t.split('/').pop().split('?')[0]);
+        }
+        catch (ex) { }
+
+        t = 'Failed to open ' + (t?t:'file');
+        console.log('bb-ded', t);
+        t += '\n\nEither the file is corrupt, or your browser does not understand the file format or codec';
+
+        try {
+            t += "\n\nerr#" + this.error.code + ", " + this.error.message;
+        }
+        catch (ex) { }
+
+        this.ded = esc(t);
+        if (this === vidimg())
+            toast.err(20, this.ded, 'bb-ded');
+    }
+
     function loadImage(index, callback) {
         var imageContainer = imagesElements[index];
         var galleryItem = currentGallery[index];
@@ -739,7 +765,8 @@ window.baguetteBox = (function () {
         var image = mknod(is_vid ? 'video' : 'img');
         clmod(imageContainer, 'vid', is_vid);
 
-        image.addEventListener(is_vid ? 'loadedmetadata' : 'load', function () {
+        bind(image, 'error', lerr);
+        bind(image, is_vid ? 'loadedmetadata' : 'load', function () {
             // Remove loader element
             qsr('#baguette-img-' + index + ' .bbox-spinner');
             if (!options.async && callback)
@@ -816,6 +843,12 @@ window.baguetteBox = (function () {
         });
         updateOffset();
 
+        var im = vidimg();
+        if (im && im.ded)
+            toast.err(20, im.ded, 'bb-ded');
+        else if (toast.tag == 'bb-ded')
+            toast.hide();
+
         if (options.animation == 'none')
             unvid(vid());
         else
@@ -64,7 +64,7 @@
     --u2-tab-bg: linear-gradient(to bottom, var(--bg), var(--bg-u1));
     --u2-tab-b1: rgba(128,128,128,0.8);
     --u2-tab-1-fg: #fd7;
-    --u2-tab-1-bg: linear-gradient(to bottom, var(#353), var(--bg) 80%);
+    --u2-tab-1-bg: linear-gradient(to bottom, #353, var(--bg) 80%);
     --u2-tab-1-b1: #7c5;
     --u2-tab-1-b2: #583;
     --u2-tab-1-sh: #280;
@@ -188,7 +188,6 @@ html.y {
     --srv-1: #555;
     --srv-2: #c83;
     --srv-3: #c0a;
-    --srv-3b: rgba(255,68,204,0.6);
 
     --tree-bg: #fff;
 
@@ -286,6 +285,7 @@ html.bz {
     --f-h-b1: #34384e;
     --mp-sh: #11121d;
     /*--mp-b-bg: #2c3044;*/
+    --f-play-bg: var(--btn-1-bg);
 }
 html.by {
     --bg: #f2f2f2;
@@ -345,6 +345,7 @@ html.cz {
     --srv-3: #fff;
 
     --u2-tab-b1: var(--bg-d3);
+    --u2-tab-1-bg: a;
 }
 html.cy {
     --fg: #fff;
@@ -374,17 +375,20 @@ html.cy {
     --btn-bs: 0 .25em 0 #f00;
     --chk-fg: #fd0;
 
+    --txt-bg: #000;
     --srv-1: #f00;
     --srv-3: #fff;
     --op-aa-bg: #fff;
 
     --u2-b1-bg: #f00;
     --u2-b2-bg: #f00;
 
+    --g-sel-fg: #fff;
+    --g-sel-bg: #aaa;
+    --g-fsel-bg: #aaa;
 }
 html.dz {
     --fg: #4d4;
-    --fg-max: #fff;
-    --fg2-max: #fff;
     --fg-weak: #2a2;
 
     --bg-u6: #020;
@@ -394,11 +398,9 @@ html.dz {
     --bg-u2: #020;
     --bg-u1: #020;
     --bg: #010;
-    --bgg: var(--bg);
     --bg-d1: #000;
     --bg-d2: #020;
     --bg-d3: #000;
-    --bg-max: #000;
 
     --tab-alt: #6f6;
     --row-alt: #030;
@@ -411,45 +413,21 @@ html.dz {
     --a-dark: #afa;
     --a-gray: #2a2;
 
-    --btn-fg: var(--a);
     --btn-bg: rgba(64,128,64,0.15);
-    --btn-h-fg: var(--a-hil);
     --btn-h-bg: #050;
     --btn-1-fg: #000;
     --btn-1-bg: #4f4;
-    --btn-1h-fg: var(--btn-1-fg);
     --btn-1h-bg: #3f3;
     --btn-bs: 0 0 0 .1em #080 inset;
     --btn-1-bs: a;
 
-    --chk-fg: var(--tab-alt);
-    --txt-sh: var(--bg-d2);
-    --txt-bg: var(--btn-bg);
-
-    --op-aa-fg: var(--a);
-    --op-aa-bg: var(--bg-d2);
-    --op-a-sh: rgba(0,0,0,0.5);
-
     --u2-btn-b1: var(--fg-weak);
     --u2-sbtn-b1: var(--fg-weak);
-    --u2-txt-bg: var(--bg-u5);
-    --u2-tab-bg: linear-gradient(to bottom, var(--bg), var(--bg-u1));
     --u2-tab-b1: var(--fg-weak);
     --u2-tab-1-fg: #fff;
-    --u2-tab-1-bg: linear-gradient(to bottom, var(#353), var(--bg) 80%);
+    --u2-tab-1-bg: linear-gradient(to bottom, #151, var(--bg) 80%);
-    --u2-tab-1-b1: #7c5;
-    --u2-tab-1-b2: #583;
-    --u2-tab-1-sh: #280;
-    --u2-b-fg: #fff;
     --u2-b1-bg: #3a3;
     --u2-b2-bg: #3a3;
-    --u2-inf-bg: #07a;
-    --u2-inf-b1: #0be;
-    --u2-ok-bg: #380;
-    --u2-ok-b1: #8e4;
-    --u2-err-bg: #900;
-    --u2-err-b1: #d06;
-    --ud-b1: #888;
 
     --sort-1: #fff;
|
||||||
--sort-2: #3f3;
|
--sort-2: #3f3;
|
||||||
@@ -461,47 +439,12 @@ html.dz {
|
|||||||
|
|
||||||
--tree-bg: #010;
|
--tree-bg: #010;
|
||||||
|
|
||||||
--g-play-bg: #750;
|
|
||||||
--g-play-b1: #c90;
|
|
||||||
--g-play-b2: #da4;
|
|
||||||
--g-play-sh: #b83;
|
|
||||||
|
|
||||||
--g-sel-fg: #fff;
|
|
||||||
--g-sel-bg: #925;
|
|
||||||
--g-sel-b1: #c37;
|
--g-sel-b1: #c37;
|
||||||
--g-sel-sh: #b36;
|
--g-sel-sh: #b36;
|
||||||
--g-fsel-bg: #d39;
|
|
||||||
--g-fsel-b1: #d48;
|
--g-fsel-b1: #d48;
|
||||||
--g-fsel-ts: #804;
|
|
||||||
--g-fg: var(--a-hil);
|
|
||||||
--g-bg: var(--bg-u2);
|
|
||||||
--g-b1: var(--bg-u4);
|
|
||||||
--g-b2: var(--bg-u5);
|
|
||||||
--g-g1: var(--bg-u2);
|
|
||||||
--g-g2: var(--bg-u5);
|
|
||||||
--g-f-bg: var(--bg-u4);
|
|
||||||
--g-f-b1: var(--bg-u5);
|
|
||||||
--g-f-fg: var(--a-hil);
|
|
||||||
--g-sh: rgba(0,0,0,0.3);
|
|
||||||
|
|
||||||
--f-sh1: 0.33;
|
|
||||||
--f-sh2: 0.02;
|
|
||||||
--f-sh3: 0.2;
|
|
||||||
--f-h-b1: #3b3;
|
--f-h-b1: #3b3;
|
||||||
|
|
||||||
--f-play-bg: #fc5;
|
|
||||||
--f-play-fg: #000;
|
|
||||||
--f-sel-sh: #fc0;
|
|
||||||
--f-gray: #999;
|
|
||||||
|
|
||||||
--fm-off: #f6c;
|
|
||||||
--mp-sh: var(--bg-d3);
|
|
||||||
|
|
||||||
--err-fg: #fff;
|
|
||||||
--err-bg: #a20;
|
|
||||||
--err-b1: #f00;
|
|
||||||
--err-ts: #500;
|
|
||||||
|
|
||||||
text-shadow: none;
|
text-shadow: none;
|
||||||
font-family: 'scp', monospace, monospace;
|
font-family: 'scp', monospace, monospace;
|
||||||
font-family: var(--font-mono), 'scp', monospace, monospace;
|
font-family: var(--font-mono), 'scp', monospace, monospace;
|
||||||
@@ -598,7 +541,7 @@ html.dy {
|
|||||||
background: var(--sel-bg);
|
background: var(--sel-bg);
|
||||||
text-shadow: none;
|
text-shadow: none;
|
||||||
}
|
}
|
||||||
html,body,tr,th,td,#files,a {
|
html,body,tr,th,td,#files,a,#blogout {
|
||||||
color: inherit;
|
color: inherit;
|
||||||
background: none;
|
background: none;
|
||||||
font-weight: inherit;
|
font-weight: inherit;
|
||||||
@@ -681,11 +624,15 @@ html.y #path {
|
|||||||
#files tbody div a {
|
#files tbody div a {
|
||||||
color: var(--tab-alt);
|
color: var(--tab-alt);
|
||||||
}
|
}
|
||||||
a, #files tbody div a:last-child {
|
a, #blogout, #files tbody div a:last-child {
|
||||||
color: var(--a);
|
color: var(--a);
|
||||||
padding: .2em;
|
padding: .2em;
|
||||||
text-decoration: none;
|
text-decoration: none;
|
||||||
}
|
}
|
||||||
|
#blogout {
|
||||||
|
margin: -.2em;
|
||||||
|
}
|
||||||
|
#blogout:hover,
|
||||||
a:hover {
|
a:hover {
|
||||||
color: var(--a-hil);
|
color: var(--a-hil);
|
||||||
background: var(--a-h-bg);
|
background: var(--a-h-bg);
|
||||||
@@ -886,7 +833,7 @@ html.y #path a:hover {
|
|||||||
max-width: 52em;
|
max-width: 52em;
|
||||||
}
|
}
|
||||||
.mdo.sb,
|
.mdo.sb,
|
||||||
#epi.logue.mdo>iframe {
|
.logue.mdo>iframe {
|
||||||
max-width: 54em;
|
max-width: 54em;
|
||||||
}
|
}
|
||||||
.mdo,
|
.mdo,
|
||||||
@@ -929,6 +876,9 @@ html.y #path a:hover {
|
|||||||
color: var(--srv-3);
|
color: var(--srv-3);
|
||||||
border-bottom: 1px solid var(--srv-3b);
|
border-bottom: 1px solid var(--srv-3b);
|
||||||
}
|
}
|
||||||
|
#flogout {
|
||||||
|
display: inline;
|
||||||
|
}
|
||||||
#goh+span {
|
#goh+span {
|
||||||
color: var(--bg-u5);
|
color: var(--bg-u5);
|
||||||
padding-left: .5em;
|
padding-left: .5em;
|
||||||
@@ -1349,6 +1299,7 @@ html.y #widget.open {
|
|||||||
}
|
}
|
||||||
#widget.cmp #barpos,
|
#widget.cmp #barpos,
|
||||||
#widget.cmp #barbuf {
|
#widget.cmp #barbuf {
|
||||||
|
height: 1.6em;
|
||||||
width: calc(100% - 11em);
|
width: calc(100% - 11em);
|
||||||
border-radius: 0;
|
border-radius: 0;
|
||||||
left: 5em;
|
left: 5em;
|
||||||
@@ -1696,6 +1647,18 @@ html.dz .btn {
|
|||||||
background: var(--btn-1-bg);
|
background: var(--btn-1-bg);
|
||||||
text-shadow: none;
|
text-shadow: none;
|
||||||
}
|
}
|
||||||
|
#tree ul a.ld::before {
|
||||||
|
font-weight: bold;
|
||||||
|
font-family: sans-serif;
|
||||||
|
display: inline-block;
|
||||||
|
text-align: center;
|
||||||
|
width: 1em;
|
||||||
|
margin: 0 .3em 0 -1.3em;
|
||||||
|
color: var(--fg-max);
|
||||||
|
opacity: 0;
|
||||||
|
content: '◠';
|
||||||
|
animation: .5s linear infinite forwards spin, ease .25s 1 forwards fadein;
|
||||||
|
}
|
||||||
#tree ul a.par {
|
#tree ul a.par {
|
||||||
color: var(--fg-max);
|
color: var(--fg-max);
|
||||||
}
|
}
|
||||||
@@ -1732,15 +1695,24 @@ html.y #tree.nowrap .ntree a+a:hover {
|
|||||||
line-height: 0;
|
line-height: 0;
|
||||||
}
|
}
|
||||||
.dumb_loader_thing {
|
.dumb_loader_thing {
|
||||||
display: inline-block;
|
display: block;
|
||||||
margin: 1em .3em 1em 1em;
|
margin: 1em .3em 1em 1em;
|
||||||
padding: 0 1.2em 0 0;
|
padding: 0 1.2em 0 0;
|
||||||
font-size: 4em;
|
font-size: 4em;
|
||||||
|
min-width: 1em;
|
||||||
|
min-height: 1em;
|
||||||
opacity: 0;
|
opacity: 0;
|
||||||
animation: 1s linear .15s infinite forwards spin, .2s ease .15s 1 forwards fadein;
|
animation: 1s linear .15s infinite forwards spin, .2s ease .15s 1 forwards fadein;
|
||||||
position: absolute;
|
position: fixed;
|
||||||
|
top: .3em;
|
||||||
z-index: 9;
|
z-index: 9;
|
||||||
}
|
}
|
||||||
|
#dlt_t {
|
||||||
|
left: 0;
|
||||||
|
}
|
||||||
|
#dlt_f {
|
||||||
|
right: .5em;
|
||||||
|
}
|
||||||
#files .cfg {
|
#files .cfg {
|
||||||
display: none;
|
display: none;
|
||||||
font-size: 2em;
|
font-size: 2em;
|
||||||
@@ -1917,11 +1889,10 @@ html.y #tree.nowrap .ntree a+a:hover {
|
|||||||
#rn_f.m td+td {
|
#rn_f.m td+td {
|
||||||
width: 50%;
|
width: 50%;
|
||||||
}
|
}
|
||||||
#rn_f .err td {
|
#rn_f .err td,
|
||||||
background: var(--err-bg);
|
#rn_f .err input[readonly],
|
||||||
color: var(--fg-max);
|
#rui .ng input[readonly] {
|
||||||
}
|
color: var(--err-fg);
|
||||||
#rn_f .err input[readonly] {
|
|
||||||
background: var(--err-bg);
|
background: var(--err-bg);
|
||||||
}
|
}
|
||||||
#rui input[readonly] {
|
#rui input[readonly] {
|
||||||
@@ -2823,6 +2794,7 @@ html.b #u2conf a.b:hover {
|
|||||||
padding-left: .2em;
|
padding-left: .2em;
|
||||||
}
|
}
|
||||||
.fsearch_explain {
|
.fsearch_explain {
|
||||||
|
color: var(--a-dark);
|
||||||
padding-left: .7em;
|
padding-left: .7em;
|
||||||
font-size: 1.1em;
|
font-size: 1.1em;
|
||||||
line-height: 0;
|
line-height: 0;
|
||||||
@@ -3112,18 +3084,30 @@ html.by #u2cards a.act {
|
|||||||
|
|
||||||
|
|
||||||
|
|
||||||
html.cy #wrap {
|
|
||||||
color: #000;
|
|
||||||
}
|
|
||||||
html.cy .mdo a {
|
html.cy .mdo a {
|
||||||
background: #f00;
|
background: #f00;
|
||||||
}
|
}
|
||||||
|
html.cy #wrap,
|
||||||
|
html.cy #acc_info a,
|
||||||
html.cy #op_up2k,
|
html.cy #op_up2k,
|
||||||
html.cy #files,
|
html.cy #files,
|
||||||
html.cy #files a,
|
html.cy #files a,
|
||||||
html.cy #files tbody div a:last-child {
|
html.cy #files tbody div a:last-child {
|
||||||
color: #000;
|
color: #000;
|
||||||
}
|
}
|
||||||
|
html.cy #u2tab a,
|
||||||
|
html.cy #u2cards a {
|
||||||
|
color: #f00;
|
||||||
|
}
|
||||||
|
html.cy #unpost a {
|
||||||
|
color: #ff0;
|
||||||
|
}
|
||||||
|
html.cy #barbuf {
|
||||||
|
filter: hue-rotate(267deg) brightness(0.8) contrast(4);
|
||||||
|
}
|
||||||
|
html.cy #pvol {
|
||||||
|
filter: hue-rotate(4deg) contrast(2.2);
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|

@@ -108,12 +108,9 @@
 
 {%- for f in files %}
 <tr><td>{{ f.lead }}</td><td><a href="{{ f.href }}">{{ f.name|e }}</a></td><td>{{ f.sz }}</td>
 {%- if f.tags is defined %}
-{%- for k in taglist %}
-<td>{{ f.tags[k] }}</td>
-{%- endfor %}
-{%- endif %}
-<td>{{ f.ext }}</td><td>{{ f.dt }}</td></tr>
+{%- for k in taglist %}<td>{{ f.tags[k] }}</td>{%- endfor %}
+{%- endif %}<td>{{ f.ext }}</td><td>{{ f.dt }}</td></tr>
 {%- endfor %}
 
 </tbody>
@@ -127,24 +124,21 @@
 
 </div>
 
-{%- if srv_info %}
 <div id="srv_info"><span>{{ srv_info }}</span></div>
-{%- endif %}
 
 <div id="widget"></div>
 
 <script>
-var SR = {{ r|tojson }},
+var SR = "{{ r }}",
+CGV1 = {{ cgv1 }},
 CGV = {{ cgv|tojson }},
 TS = "{{ ts }}",
 dtheme = "{{ dtheme }}",
 srvinf = "{{ srv_info }}",
-s_name = "{{ s_name }}",
 lang = "{{ lang }}",
 dfavico = "{{ favico }}",
-have_tags_idx = {{ have_tags_idx|tojson }},
+have_tags_idx = {{ have_tags_idx }},
 sb_lg = "{{ sb_lg }}",
-txt_ext = "{{ txt_ext }}",
 logues = {{ logues|tojson if sb_lg else "[]" }},
 ls0 = {{ ls0|tojson }};
 
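A note on the `{{ r|tojson }}` vs `"{{ r }}"` variants seen in the `<script>` hunks: when the interpolated value is a Python boolean, plain interpolation renders the bare word `True`, which is not valid JavaScript, while `tojson` emits `true`. A minimal illustration of the difference (plain Python, no Jinja required):

```python
import json

have_emp = True

# what a plain "{{ have_emp }}" would interpolate:
# str(True) -> "True", a ReferenceError in JS
plain = "var have_emp = %s;" % have_emp

# what "{{ have_emp|tojson }}" interpolates:
# json.dumps(True) -> "true", valid JS
quoted = "var have_emp = %s;" % json.dumps(have_emp)
```

The `{{ "true" if have_emp else "false" }}` form appearing on the right-hand side of these hunks reaches the same result with an explicit ternary.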

(file diff suppressed because it is too large)

BIN copyparty/web/iiam.gif (new file, 230 B; binary file not shown)
@@ -128,9 +128,9 @@ write markdown (most html is 🙆 too)
 
 <script>
 
-var SR = {{ r|tojson }},
+var SR = "{{ r }}",
 last_modified = {{ lastmod }},
-have_emp = {{ have_emp|tojson }},
+have_emp = {{ "true" if have_emp else "false" }},
 dfavico = "{{ favico }}";
 
 var md_opt = {

@@ -17,14 +17,13 @@ var chromedbg = function () { console.log(arguments); }
 var dbg = function () { };
 
 // replace dbg with the real deal here or in the console:
-// dbg = chromedbg
-// dbg = console.log
+// dbg = chromedbg;
+// dbg = console.log;
 
 
 // dodge browser issues
 (function () {
-var ua = navigator.userAgent;
-if (ua.indexOf(') Gecko/') !== -1 && /Linux| Mac /.exec(ua)) {
+if (UA.indexOf(') Gecko/') !== -1 && /Linux| Mac /.exec(UA)) {
 // necessary on ff-68.7 at least
 var s = mknod('style');
 s.innerHTML = '@page { margin: .5in .6in .8in .6in; }';

@@ -450,7 +450,7 @@ function savechk_cb() {
 
 // firefox bug: initial selection offset isn't cleared properly through js
 var ff_clearsel = (function () {
-if (navigator.userAgent.indexOf(') Gecko/') === -1)
+if (UA.indexOf(') Gecko/') === -1)
 return function () { }
 
 return function () {
@@ -1078,26 +1078,28 @@ action_stack = (function () {
 var p1 = from.length,
 p2 = to.length;
 
-while (p1-- > 0 && p2-- > 0)
+while (p1 --> 0 && p2 --> 0)
 if (from[p1] != to[p2])
 break;
 
-if (car > ++p1) {
+if (car > ++p1)
 car = p1;
-}
 
 var txt = from.substring(car, p1)
 return {
 car: car,
-cdr: ++p2,
+cdr: p2 + (car && 1),
 txt: txt,
 cpos: cpos
 };
 }
 
 var undiff = function (from, change) {
+var t1 = from.substring(0, change.car),
+t2 = from.substring(change.cdr);
+
 return {
-txt: from.substring(0, change.car) + change.txt + from.substring(change.cdr),
+txt: t1 + change.txt + t2,
 cpos: change.cpos
 };
 }
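The `action_stack` hunk above stores undo entries by trimming the common prefix (`car`) and suffix of the before/after text and keeping only the replaced middle; `undiff` then splices that middle back into the edited text. A standalone sketch of the same prefix/suffix-trimming idea (ported to Python for brevity; names are illustrative, not copyparty's):

```python
def mkdiff(src, dst):
    # length of the common prefix
    car = 0
    while car < len(src) and car < len(dst) and src[car] == dst[car]:
        car += 1

    # walk the common suffix from the right (the JS does this with
    # "p1 --> 0", i.e. "p1-- > 0", the joke "goes to" operator)
    p1, p2 = len(src), len(dst)
    while p1 > 0 and p2 > 0 and src[p1 - 1] == dst[p2 - 1]:
        p1 -= 1
        p2 -= 1

    car = min(car, p1)  # prefix and suffix may overlap; clamp the prefix
    # store only the replaced middle of src, plus where it sits in dst
    return {"car": car, "cdr": p2, "txt": src[car:p1]}


def undiff(dst, d):
    # splice the stored middle back into the edited text to restore src
    return dst[:d["car"]] + d["txt"] + dst[d["cdr"]:]
```

The `cdr: p2 + (car && 1)` change in the hunk is the JS equivalent of getting this prefix/suffix-overlap bookkeeping right for edits at the very start of the text.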

@@ -26,9 +26,9 @@
 <a href="#" id="repl">π</a>
 <script>
 
-var SR = {{ r|tojson }},
+var SR = "{{ r }}",
 last_modified = {{ lastmod }},
-have_emp = {{ have_emp|tojson }},
+have_emp = {{ "true" if have_emp else "false" }},
 dfavico = "{{ favico }}";
 
 var md_opt = {

copyparty/web/rups.css (new file, 107 lines)
@@ -0,0 +1,107 @@
+html {
+color: #333;
+background: #f7f7f7;
+font-family: sans-serif;
+font-family: var(--font-main), sans-serif;
+touch-action: manipulation;
+}
+#wrap {
+margin: 2em auto;
+padding: 0 1em 3em 1em;
+line-height: 2.3em;
+}
+a {
+color: #047;
+background: #fff;
+text-decoration: none;
+border-bottom: 1px solid #8ab;
+border-radius: .2em;
+padding: .2em .6em;
+margin: 0 .3em;
+}
+#wrap td a {
+margin: 0;
+line-height: 1em;
+display: inline-block;
+white-space: initial;
+font-family: var(--font-main), sans-serif;
+}
+#repl {
+border: none;
+background: none;
+color: inherit;
+padding: 0;
+position: fixed;
+bottom: .25em;
+left: .2em;
+}
+#wrap table {
+border-collapse: collapse;
+position: relative;
+margin-top: 2em;
+}
+#wrap th {
+top: -1px;
+position: sticky;
+background: #f7f7f7;
+}
+#wrap td {
+font-family: var(--font-mono), monospace, monospace;
+white-space: pre; /*date*/
+overflow: hidden; /*ipv6*/
+}
+#wrap th:first-child,
+#wrap td:first-child {
+text-align: right;
+}
+#wrap td,
+#wrap th {
+text-align: left;
+padding: .3em .6em;
+max-width: 30vw;
+}
+#wrap tr:hover td {
+background: #ddd;
+box-shadow: 0 -1px 0 rgba(128, 128, 128, 0.5) inset;
+}
+#wrap th:first-child,
+#wrap td:first-child {
+border-radius: .5em 0 0 .5em;
+}
+#wrap th:last-child,
+#wrap td:last-child {
+border-radius: 0 .5em .5em 0;
+}
+
+
+
+html.z {
+background: #222;
+color: #ccc;
+}
+html.bz {
+background: #11121d;
+color: #bbd;
+}
+html.z a {
+color: #fff;
+background: #057;
+border-color: #37a;
+}
+html.z input[type=text] {
+color: #ddd;
+background: #223;
+border: none;
+border-bottom: 1px solid #fc5;
+border-radius: .2em;
+padding: .2em .3em;
+}
+html.z #wrap th {
+background: #222;
+}
+html.bz #wrap th {
+background: #223;
+}
+html.z #wrap tr:hover td {
+background: #000;
+}
copyparty/web/rups.html (new file, 50 lines)
@@ -0,0 +1,50 @@
+<!DOCTYPE html>
+<html lang="en">
+
+<head>
+<meta charset="utf-8">
+<title>{{ s_doctitle }}</title>
+<meta http-equiv="X-UA-Compatible" content="IE=edge">
+<meta name="viewport" content="width=device-width, initial-scale=0.8">
+<meta name="robots" content="noindex, nofollow">
+<meta name="theme-color" content="#{{ tcolor }}">
+<link rel="stylesheet" media="screen" href="{{ r }}/.cpr/rups.css?_={{ ts }}">
+<link rel="stylesheet" media="screen" href="{{ r }}/.cpr/ui.css?_={{ ts }}">
+{{ html_head }}
+</head>
+
+<body>
+<div id="wrap">
+<a href="#" id="re">refresh</a>
+<a href="{{ r }}/?h">control-panel</a>
+Filter: <input type="text" id="filter" size="20" placeholder="documents/passwords" />
+<span id="hits"></span>
+<table id="tab"><thead><tr>
+<th>size</th>
+<th>who</th>
+<th>when</th>
+<th>age</th>
+<th>dir</th>
+<th>file</th>
+</tr></thead><tbody id="tb"></tbody></table>
+</div>
+<a href="#" id="repl">π</a>
+<script>
+
+var SR="{{ r }}",
+lang="{{ lang }}",
+dfavico="{{ favico }}";
+
+var STG = window.localStorage;
+document.documentElement.className = (STG && STG.cpp_thm) || "{{ this.args.theme }}";
+
+</script>
+<script src="{{ r }}/.cpr/util.js?_={{ ts }}"></script>
+<script>var V={{ v }};</script>
+<script src="{{ r }}/.cpr/rups.js?_={{ ts }}"></script>
+{%- if js %}
+<script src="{{ js }}_={{ ts }}"></script>
+{%- endif %}
+</body>
+</html>
+
copyparty/web/rups.js (new file, 66 lines)
@@ -0,0 +1,66 @@
+function render() {
+var ups = V.ups, now = V.now, html = [];
+ebi('filter').value = V.filter;
+ebi('hits').innerHTML = 'showing ' + ups.length + ' files';
+
+for (var a = 0; a < ups.length; a++) {
+var f = ups[a],
+vsp = vsplit(f.vp.split('?')[0]),
+dn = esc(uricom_dec(vsp[0])),
+fn = esc(uricom_dec(vsp[1])),
+at = f.at,
+td = now - f.at,
+ts = !at ? '(?)' : unix2iso(at),
+sa = !at ? '(?)' : td > 60 ? shumantime(td) : (td + 's'),
+sz = ('' + f.sz).replace(/\B(?=(\d{3})+(?!\d))/g, " ");
+
+html.push('<tr><td>' + sz +
+'</td><td>' + f.ip +
+'</td><td>' + ts +
+'</td><td>' + sa +
+'</td><td><a href="' + vsp[0] + '">' + dn +
+'</a></td><td><a href="' + f.vp + '">' + fn +
+'</a></td></tr>');
+}
+if (!ups.length) {
+var t = V.filter ? ' matching the filter' : '';
+html = ['<tr><td colspan="6">there are no uploads' + t + '</td></tr>'];
+}
+ebi('tb').innerHTML = html.join('');
+}
+render();
+
+var ti;
+function ask(e) {
+ev(e);
+clearTimeout(ti);
+ebi('hits').innerHTML = 'Loading...';
+
+var xhr = new XHR(),
+filter = unsmart(ebi('filter').value);
+
+hist_replace(get_evpath().split('?')[0] + '?ru&filter=' + uricom_enc(filter));
+
+xhr.onload = xhr.onerror = function () {
+try {
+V = JSON.parse(this.responseText)
+}
+catch (ex) {
+ebi('tb').innerHTML = '<tr><td colspan="6">failed to decode server response as json: <pre>' + esc(this.responseText) + '</pre></td></tr>';
+return;
+}
+render();
+};
+xhr.open('GET', SR + '/?ru&j&filter=' + uricom_enc(filter), true);
+xhr.send();
+}
+ebi('re').onclick = ask;
+ebi('filter').oninput = function () {
+clearTimeout(ti);
+ti = setTimeout(ask, 500);
+ebi('hits').innerHTML = '...';
+};
+ebi('filter').onkeydown = function (e) {
+if (('' + e.key).endsWith('Enter'))
+ask();
+};

@@ -27,7 +27,7 @@ a {
 padding: .2em .6em;
 margin: 0 .3em;
 }
-td a {
+#wrap td a {
 margin: 0;
 }
 #w {
@@ -44,23 +44,33 @@ td a {
 bottom: .25em;
 left: .2em;
 }
-table {
+#wrap table {
 border-collapse: collapse;
 position: relative;
+margin-top: 2em;
 }
 th {
 top: -1px;
 position: sticky;
 background: #f7f7f7;
 }
-td, th {
+#wrap td,
+#wrap th {
 padding: .3em .6em;
 text-align: left;
 white-space: nowrap;
 }
-td+td+td+td+td+td+td+td {
+#wrap td+td+td+td+td+td+td+td {
 font-family: var(--font-mono), monospace, monospace;
 }
+#wrap th:first-child,
+#wrap td:first-child {
+border-radius: .5em 0 0 .5em;
+}
+#wrap th:last-child,
+#wrap td:last-child {
+border-radius: 0 .5em .5em 0;
+}
 
 
 
@@ -80,3 +90,6 @@ html.bz {
 color: #bbd;
 background: #11121d;
 }
+html.bz th {
+background: #223;
+}

@@ -6,6 +6,7 @@
 <title>{{ s_doctitle }}</title>
 <meta http-equiv="X-UA-Compatible" content="IE=edge">
 <meta name="viewport" content="width=device-width, initial-scale=0.8">
+<meta name="robots" content="noindex, nofollow">
 <meta name="theme-color" content="#{{ tcolor }}">
 <link rel="stylesheet" media="screen" href="{{ r }}/.cpr/shares.css?_={{ ts }}">
 <link rel="stylesheet" media="screen" href="{{ r }}/.cpr/ui.css?_={{ ts }}">
@@ -14,16 +15,16 @@
 
 <body>
 <div id="wrap">
-<a id="a" href="{{ r }}/?shares" class="af">refresh</a>
-<a id="a" href="{{ r }}/?h" class="af">control-panel</a>
+<a href="{{ r }}/?shares">refresh</a>
+<a href="{{ r }}/?h">control-panel</a>
 
 <span>axs = perms (read,write,move,delet)</span>
 <span>nf = numFiles (0=dir)</span>
 <span>min/hrs = time left</span>
 
 <table id="tab"><thead><tr>
-<th>delete</th>
 <th>sharekey</th>
+<th>delete</th>
 <th>pw</th>
 <th>source</th>
 <th>axs</th>
@@ -37,10 +38,13 @@
 </tr></thead><tbody>
 {% for k, pw, vp, pr, st, un, t0, t1 in rows %}
 <tr>
+<td>
+<a href="{{ r }}{{ shr }}{{ k }}?qr">qr</a>
+<a href="{{ r }}{{ shr }}{{ k }}">{{ k }}</a>
+</td>
 <td><a href="#" k="{{ k }}">delete</a></td>
-<td><a href="{{ r }}{{ shr }}{{ k }}">{{ k }}</a></td>
-<td>{{ pw }}</td>
-<td><a href="{{ r }}/{{ vp|e }}">{{ vp|e }}</a></td>
+<td>{{ "yes" if pw else "--" }}</td>
+<td><a href="{{ r }}/{{ vp|e }}">/{{ vp|e }}</a></td>
 <td>{{ pr }}</td>
 <td>{{ st }}</td>
 <td>{{ un|e }}</td>
@@ -55,9 +59,11 @@
 {% if not rows %}
 (you don't have any active shares btw)
 {% endif %}
+</div>
+<a href="#" id="repl">π</a>
 <script>
 
-var SR = {{ r|tojson }},
+var SR="{{ r }}",
 shr="{{ shr }}",
 lang="{{ lang }}",
 dfavico="{{ favico }}";

@@ -12,7 +12,7 @@ function rm() {
 }
 
 function bump() {
-var k = this.closest('tr').getElementsByTagName('a')[0].getAttribute('k'),
+var k = this.closest('tr').getElementsByTagName('a')[2].getAttribute('k'),
 u = SR + shr + uricom_enc(k) + '?eshare=' + this.value,
 xhr = new XHR();
 
@@ -28,14 +28,36 @@ function cb() {
 document.location = '?shares';
 }
 
+function qr(e) {
+ev(e);
+var href = this.href,
+pw = this.closest('tr').cells[2].textContent;
+
+if (pw.indexOf('yes') < 0)
+return showqr(href);
+
+modal.prompt("if you want to bypass the password protection by\nembedding the password into the qr-code, then\ntype the password now, otherwise leave this empty", "", function (v) {
+if (v)
+href += "&pw=" + v;
+showqr(href);
+});
+}
+
+function showqr(href) {
+var vhref = href.replace('?qr&', '?').replace('?qr', '');
+modal.alert(esc(vhref) + '<img class="b64" width="100" height="100" src="' + href + '" />');
+}
+
 (function() {
 var tab = ebi('tab').tBodies[0],
 tr = Array.prototype.slice.call(tab.rows, 0);
 
 var buf = [];
-for (var a = 0; a < tr.length; a++)
+for (var a = 0; a < tr.length; a++) {
+tr[a].cells[0].getElementsByTagName('a')[0].onclick = qr;
 for (var b = 7; b < 9; b++)
 buf.push(parseInt(tr[a].cells[b].innerHTML));
+}
 
 var ibuf = 0;
 for (var a = 0; a < tr.length; a++)

@@ -53,7 +53,7 @@ a.r {
 border-color: #c7a;
 }
 a.g {
-color: #2b0;
+color: #0a0;
 border-color: #3a0;
 box-shadow: 0 .3em 1em #4c0;
 }
@@ -90,6 +90,13 @@ table {
 text-align: left;
 white-space: nowrap;
 }
+.vols td:empty,
+.vols th:empty {
+padding: 0;
+}
+.vols img {
+margin: -4px 0;
+}
 .num {
 border-right: 1px solid #bbb;
 }
@@ -152,11 +159,13 @@ pre b,
 code b {
 color: #000;
 font-weight: normal;
-text-shadow: 0 0 .2em #0f0;
+text-shadow: 0 0 .2em #3f3;
+border-bottom: 1px solid #090;
 }
 html.z pre b,
 html.z code b {
 color: #fff;
+border-bottom: 1px solid #9f9;
 }
 
 
@@ -220,3 +229,6 @@ html.bz {
 color: #bbd;
 background: #11121d;
 }
+html.bz .vols img {
+filter: sepia(0.8) hue-rotate(180deg);
+}
|||||||
@@ -32,6 +32,30 @@
 </div>
 {%- endif %}

+{%- if ups %}
+<h1 id="aa">incoming files:</h1>
+<table class="vols">
+<thead><tr><th>%</th><th>speed</th><th>eta</th><th>idle</th><th>dir</th><th>file</th></tr></thead>
+<tbody>
+{% for u in ups %}
+<tr><td>{{ u[0] }}</td><td>{{ u[1] }}</td><td>{{ u[2] }}</td><td>{{ u[3] }}</td><td><a href="{{ u[4] }}">{{ u[5]|e }}</a></td><td>{{ u[6]|e }}</td></tr>
+{% endfor %}
+</tbody>
+</table>
+{%- endif %}
+
+{%- if dls %}
+<h1 id="ae">active downloads:</h1>
+<table class="vols">
+<thead><tr><th>%</th><th>sent</th><th>speed</th><th>eta</th><th>idle</th><th></th><th>dir</th><th>file</th></tr></thead>
+<tbody>
+{% for u in dls %}
+<tr><td>{{ u[0] }}</td><td>{{ u[1] }}</td><td>{{ u[2] }}</td><td>{{ u[3] }}</td><td>{{ u[4] }}</td><td>{{ u[5] }}</td><td><a href="{{ u[6] }}">{{ u[7]|e }}</a></td><td>{{ u[8] }}</td></tr>
+{% endfor %}
+</tbody>
+</table>
+{%- endif %}
+
 {%- if avol %}
 <h1>admin panel:</h1>
 <table><tr><td> <!-- hehehe -->
@@ -117,13 +141,23 @@

 {% if k304 or k304vis %}
 {% if k304 %}
-<li><a id="h" href="{{ r }}/?k304=n">disable k304</a> (currently enabled)
+<li><a id="h" href="{{ r }}/?cc&setck=k304=n">disable k304</a> (currently enabled)
 {%- else %}
-<li><a id="i" href="{{ r }}/?k304=y" class="r">enable k304</a> (currently disabled)
+<li><a id="i" href="{{ r }}/?cc&setck=k304=y" class="r">enable k304</a> (currently disabled)
 {% endif %}
-<blockquote id="j">enabling this will disconnect your client on every HTTP 304, which can prevent some buggy proxies from getting stuck (suddenly not loading pages), <em>but</em> it will also make things slower in general</blockquote></li>
+<blockquote id="j">enabling k304 will disconnect your client on every HTTP 304, which can prevent some buggy proxies from getting stuck (suddenly not loading pages), <em>but</em> it will also make things slower in general</blockquote></li>
 {% endif %}

+{% if no304 or no304vis %}
+{% if no304 %}
+<li><a id="ab" href="{{ r }}/?cc&setck=no304=n">disable no304</a> (currently enabled)
+{%- else %}
+<li><a id="ac" href="{{ r }}/?cc&setck=no304=y" class="r">enable no304</a> (currently disabled)
+{% endif %}
+<blockquote id="ad">enabling no304 will disable all caching; try this if k304 wasn't enough. This will waste a huge amount of network traffic!</blockquote></li>
+{% endif %}
+
+<li><a id="af" href="{{ r }}/?ru">show recent uploads</a></li>
 <li><a id="k" href="{{ r }}/?reset" class="r" onclick="localStorage.clear();return true">reset client settings</a></li>
 </ul>

@@ -134,7 +168,7 @@
 {%- endif %}
 <script>

-var SR = {{ r|tojson }},
+var SR="{{ r }}",
 	lang="{{ lang }}",
 	dfavico="{{ favico }}";
@@ -9,7 +9,7 @@ var Ls = {
 	"e2": "leser inn konfigurasjonsfiler på nytt$N(kontoer, volumer, volumbrytere)$Nog kartlegger alle e2ds-volumer$N$Nmerk: endringer i globale parametere$Nkrever en full restart for å ta gjenge",
 	"f1": "du kan betrakte:",
 	"g1": "du kan laste opp til:",
-	"cc1": "brytere og sånt",
+	"cc1": "brytere og sånt:",
 	"h1": "skru av k304",
 	"i1": "skru på k304",
 	"j1": "k304 bryter tilkoplingen for hver HTTP 304. Dette hjelper mot visse mellomtjenere som kan sette seg fast / plutselig slutter å laste sider, men det reduserer også ytelsen betydelig",
@@ -25,20 +25,26 @@ var Ls = {
 	"t1": "handling",
 	"u2": "tid siden noen sist skrev til serveren$N( opplastning / navneendring / ... )$N$N17d = 17 dager$N1h23 = 1 time 23 minutter$N4m56 = 4 minuter 56 sekunder",
 	"v1": "koble til",
-	"v2": "bruk denne serveren som en lokal harddisk$N$NADVARSEL: kommer til å vise passordet ditt!",
+	"v2": "bruk denne serveren som en lokal harddisk",
 	"w1": "bytt til https",
 	"x1": "bytt passord",
 	"y1": "dine delinger",
-	"z1": "lås opp område",
+	"z1": "lås opp område:",
 	"ta1": "du må skrive et nytt passord først",
 	"ta2": "gjenta for å bekrefte nytt passord:",
 	"ta3": "fant en skrivefeil; vennligst prøv igjen",
+	"aa1": "innkommende:",
+	"ab1": "skru av no304",
+	"ac1": "skru på no304",
+	"ad1": "no304 stopper all bruk av cache. Hvis ikke k304 var nok, prøv denne. Vil mangedoble dataforbruk!",
+	"ae1": "utgående:",
+	"af1": "vis nylig opplastede filer",
 },
 "eng": {
 	"d2": "shows the state of all active threads",
 	"e2": "reload config files (accounts/volumes/volflags),$Nand rescan all e2ds volumes$N$Nnote: any changes to global settings$Nrequire a full restart to take effect",
 	"u2": "time since the last server write$N( upload / rename / ... )$N$N17d = 17 days$N1h23 = 1 hour 23 minutes$N4m56 = 4 minutes 56 seconds",
-	"v2": "use this server as a local HDD$N$NWARNING: this will show your password!",
+	"v2": "use this server as a local HDD",
 	"ta1": "fill in your new password first",
 	"ta2": "repeat to confirm new password:",
 	"ta3": "found a typo; please try again",
@@ -70,7 +76,7 @@ var Ls = {
 	"t1": "操作",
 	"u2": "自上次服务器写入的时间$N( 上传 / 重命名 / ... )$N$N17d = 17 天$N1h23 = 1 小时 23 分钟$N4m56 = 4 分钟 56 秒",
 	"v1": "连接",
-	"v2": "将此服务器用作本地硬盘$N$N警告:这将显示你的密码!",
+	"v2": "将此服务器用作本地硬盘",
 	"w1": "切换到 https",
 	"x1": "更改密码",
 	"y1": "你的分享",
@@ -78,6 +84,12 @@ var Ls = {
 	"ta1": "请先输入新密码",
 	"ta2": "重复以确认新密码:",
 	"ta3": "发现拼写错误;请重试",
+	"aa1": "正在接收的文件:", //m
+	"ab1": "关闭 no304",
+	"ac1": "开启 no304",
+	"ad1": "启用 no304 将禁用所有缓存;如果 k304 不够,可以尝试此选项。这将消耗大量的网络流量!", //m
+	"ae1": "正在下载:", //m
+	"af1": "显示最近上传的文件", //m
 }
};
@@ -9,7 +9,7 @@
 <meta name="theme-color" content="#{{ tcolor }}">
 <link rel="stylesheet" media="screen" href="{{ r }}/.cpr/splash.css?_={{ ts }}">
 <link rel="stylesheet" media="screen" href="{{ r }}/.cpr/ui.css?_={{ ts }}">
-<style>ul{padding-left:1.3em}li{margin:.4em 0}</style>
+<style>ul{padding-left:1.3em}li{margin:.4em 0}.txa{float:right;margin:0 0 0 1em}</style>
 {{ html_head }}
 </head>

@@ -31,15 +31,22 @@
 <br />
 <span class="os win lin mac">placeholders:</span>
 <span class="os win">
-{% if accs %}<code><b>{{ pw }}</b></code>=password, {% endif %}<code><b>W:</b></code>=mountpoint
+{% if accs %}<code><b id="pw0">{{ pw }}</b></code>=password, {% endif %}<code><b>W:</b></code>=mountpoint
 </span>
 <span class="os lin mac">
-{% if accs %}<code><b>{{ pw }}</b></code>=password, {% endif %}<code><b>mp</b></code>=mountpoint
+{% if accs %}<code><b id="pw0">{{ pw }}</b></code>=password, {% endif %}<code><b>mp</b></code>=mountpoint
 </span>
+<a href="#" id="setpw">use real password</a>
 </p>

+{% if args.idp_h_usr %}
+<p style="line-height:2em"><b>WARNING:</b> this server is using IdP-based authentication, so this stuff may not work as advertised. Depending on server config, these commands can probably only be used to access areas which don't require authentication, unless you auth using any non-IdP accounts defined in the copyparty config. Please see <a href="https://github.com/9001/copyparty/blob/hovudstraum/docs/idp.md#connecting-webdav-clients">the IdP docs</a></p>
+{% endif %}
+
 {% if not args.no_dav %}
 <h1>WebDAV</h1>

@@ -53,7 +60,6 @@
 {% if s %}
 <li>running <code>rclone mount</code> on LAN (or just dont have valid certificates)? add <code>--no-check-certificate</code></li>
 {% endif %}
-<li>running <code>rclone mount</code> as root? add <code>--allow-other</code></li>
 <li>old version of rclone? replace all <code>=</code> with <code> </code> (space)</li>
 </ul>

@@ -137,7 +143,6 @@
 {% if args.ftps %}
 <li>running on LAN (or just dont have valid certificates)? add <code>no_check_certificate=true</code> to the config command</li>
 {% endif %}
-<li>running <code>rclone mount</code> as root? add <code>--allow-other</code></li>
 <li>old version of rclone? replace all <code>=</code> with <code> </code> (space)</li>
 </ul>
 <p>if you want to use the native FTP client in windows instead (please dont), press <code>win+R</code> and run this command:</p>
@@ -192,6 +197,7 @@
 <h1>partyfuse</h1>
 <p>
 <a href="{{ r }}/.cpr/a/partyfuse.py">partyfuse.py</a> -- fast, read-only,
+needs <a href="{{ r }}/.cpr/deps/fuse.py">fuse.py</a> in the same folder,
 <span class="os win">needs <a href="https://winfsp.dev/rel/">winfsp</a></span>
 <span class="os lin">doesn't need root</span>
 </p>
@@ -208,7 +214,6 @@

 {% if args.smb %}
 <h1>SMB / CIFS</h1>
-<em><a href="https://github.com/SecureAuthCorp/impacket/issues/1433">bug:</a> max ~300 files in each folder</em>

 <div class="os win">
 <pre>
@@ -231,11 +236,65 @@


+<div class="os win">
+<h1>ShareX</h1>
+
+<p>to upload screenshots using ShareX <a href="https://github.com/ShareX/ShareX/releases/tag/v12.1.1">v12</a> or <a href="https://getsharex.com/">v15+</a>, save this as <code>copyparty.sxcu</code> and run it:</p>
+
+<pre class="dl" name="copyparty.sxcu">
+{ "Name": "copyparty",
+"RequestURL": "http{{ s }}://{{ ep }}/{{ rvp }}",
+"Headers": {
+{% if accs %}"pw": "<b>{{ pw }}</b>",{% endif %}
+"accept": "url"
+},
+"DestinationType": "ImageUploader, TextUploader, FileUploader",
+"FileFormName": "f" }
+</pre>
+</div>
+
+
+<div class="os mac">
+<h1>ishare</h1>
+
+<p>to upload screenshots using <a href="https://isharemac.app/">ishare</a>, save this as <code>copyparty.iscu</code> and run it:</p>
+
+<pre class="dl" name="copyparty.iscu">
+{ "Name": "copyparty",
+"RequestURL": "http{{ s }}://{{ ep }}/{{ rvp }}",
+"Headers": {
+{% if accs %}"pw": "<b>{{ pw }}</b>",{% endif %}
+"accept": "json"
+},
+"ResponseURL": "{{ '{{fileurl}}' }}",
+"FileFormName": "f" }
+</pre>
+</div>
+
+
+<div class="os lin">
+<h1>flameshot</h1>
+
+<p>to upload screenshots using <a href="https://flameshot.org/">flameshot</a>, save this as <code>flameshot.sh</code> and run it:</p>
+
+<pre class="dl" name="flameshot.sh">
+#!/bin/bash
+pw="<b>{{ pw }}</b>"
+url="http{{ s }}://{{ ep }}/{{ rvp }}"
+filename="$(date +%Y-%m%d-%H%M%S).png"
+flameshot gui -s -r | curl -sT- "$url$filename?want=url&pw=$pw" | xsel -ib
+</pre>
+</div>
+
+
 </div>
 <a href="#" id="repl">π</a>
 <script>

-var SR = {{ r|tojson }},
+var SR="{{ r }}",
 	lang="{{ lang }}",
 	dfavico="{{ favico }}";
@@ -1,11 +1,3 @@
-function QSA(x) {
-	return document.querySelectorAll(x);
-}
-var LINUX = /Linux/.test(navigator.userAgent),
-	MACOS = /[^a-z]mac ?os/i.test(navigator.userAgent),
-	WINDOWS = /Windows/.test(navigator.userAgent);
-
-
 var oa = QSA('pre');
 for (var a = 0; a < oa.length; a++) {
 	var html = oa[a].innerHTML,
@@ -15,6 +7,21 @@ for (var a = 0; a < oa.length; a++) {
 	oa[a].innerHTML = html.replace(rd, '$1').replace(/[ \r\n]+$/, '').replace(/\r?\n/g, '<br />');
 }

+function add_dls() {
+	oa = QSA('pre.dl');
+	for (var a = 0; a < oa.length; a++) {
+		var an = 'ta' + a,
+			o = ebi(an) || mknod('a', an, 'download');
+
+		oa[a].setAttribute('id', 'tx' + a);
+		oa[a].parentNode.insertBefore(o, oa[a]);
+		o.setAttribute('download', oa[a].getAttribute('name'));
+		o.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURIComponent(oa[a].innerText));
+		clmod(o, 'txa', 1);
+	}
+}
+add_dls();
+
+
 oa = QSA('.ossel a');
 for (var a = 0; a < oa.length; a++)
@@ -40,3 +47,21 @@ function setos(os) {
 }

 setos(WINDOWS ? 'win' : LINUX ? 'lin' : MACOS ? 'mac' : 'idk');
+
+
+ebi('setpw').onclick = function (e) {
+	ev(e);
+	modal.prompt('password:', '', function (v) {
+		if (!v)
+			return;
+
+		var pw0 = ebi('pw0').innerHTML,
+			oa = QSA('b');
+
+		for (var a = 0; a < oa.length; a++)
+			if (oa[a].innerHTML == pw0)
+				oa[a].textContent = v;
+
+		add_dls();
+	});
+}
@@ -69,6 +69,18 @@ html {
 	top: 2em;
 	bottom: unset;
 }
+#toastt {
+	position: absolute;
+	height: 1px;
+	top: 1px;
+	right: 1px;
+	left: 1px;
+	animation: toastt var(--tmtime) 0.07s steps(var(--tmstep)) forwards;
+	transform-origin: right;
+}
+@keyframes toastt {
+	to {transform: scaleX(0)}
+}
 #toast a {
 	color: inherit;
 	text-shadow: inherit;
@@ -130,6 +142,9 @@ html {
 #toast.inf #toastc {
 	background: #0be;
 }
+#toast.inf #toastt {
+	background: #8ef;
+}
 #toast.ok {
 	background: #380;
 	border-color: #8e4;
@@ -137,6 +152,9 @@ html {
 #toast.ok #toastc {
 	background: #8e4;
 }
+#toast.ok #toastt {
+	background: #cf9;
+}
 #toast.warn {
 	background: #960;
 	border-color: #fc0;
@@ -144,6 +162,9 @@ html {
 #toast.warn #toastc {
 	background: #fc0;
 }
+#toast.warn #toastt {
+	background: #fe9;
+}
 #toast.err {
 	background: #900;
 	border-color: #d06;
@@ -151,6 +172,9 @@ html {
 #toast.err #toastc {
 	background: #d06;
 }
+#toast.err #toastt {
+	background: #f9c;
+}
 #toast code {
 	padding: 0 .2em;
 	background: rgba(0,0,0,0.2);
@@ -293,6 +317,14 @@ html.y #tth {
 #modalc a {
 	color: #07b;
 }
+#modalc .b64 {
+	display: block;
+	margin: .1em auto;
+	width: 60%;
+	height: 60%;
+	background: #999;
+	background: rgba(128,128,128,0.2);
+}
 #modalb {
 	position: sticky;
 	text-align: right;
@@ -17,10 +17,14 @@ function goto_up2k() {
 var up2k = null,
 	up2k_hooks = [],
 	hws = [],
+	hws_ok = 0,
+	hws_ng = false,
 	sha_js = WebAssembly ? 'hw' : 'ac', // ff53,c57,sa11
 	m = 'will use ' + sha_js + ' instead of native sha512 due to';

 try {
+	if (sread('nosubtle') || window.nosubtle)
+		throw 'chickenbit';
 	var cf = crypto.subtle || crypto.webkitSubtle;
 	cf.digest('SHA-512', new Uint8Array(1)).then(
 		function (x) { console.log('sha-ok'); up2k = up2k_init(cf); },
@@ -152,12 +156,13 @@ function U2pvis(act, btns, uc, st) {
 	r.mod0 = null;

 	var markup = {
-		'404': '<span class="err">404</span>',
-		'ERROR': '<span class="err">ERROR</span>',
-		'OS-error': '<span class="err">OS-error</span>',
-		'found': '<span class="inf">found</span>',
-		'YOLO': '<span class="inf">YOLO</span>',
-		'done': '<span class="ok">done</span>',
+		'404': '<span class="err">' + L.utl_404 + '</span>',
+		'ERROR': '<span class="err">' + L.utl_err + '</span>',
+		'OS-error': '<span class="err">' + L.utl_oserr + '</span>',
+		'found': '<span class="inf">' + L.utl_found + '</span>',
+		'defer': '<span class="inf">' + L.utl_defer + '</span>',
+		'YOLO': '<span class="inf">' + L.utl_yolo + '</span>',
+		'done': '<span class="ok">' + L.utl_done + '</span>',
 	};

 	r.addfile = function (entry, sz, draw) {
@@ -241,7 +246,7 @@ function U2pvis(act, btns, uc, st) {
 			p = bd * 100.0 / sz,
 			nb = bd - bd0,
 			spd = nb / (td / 1000),
-			eta = (sz - bd) / spd;
+			eta = spd ? (sz - bd) / spd : 3599;

 		return [p, s2ms(eta), spd / (1024 * 1024)];
 	};
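The `eta` change in the hunk above guards against a measured speed of zero, which would otherwise make `(sz - bd) / spd` evaluate to `Infinity` (or `NaN`). A minimal standalone sketch of the same guard; `eta_s` is a hypothetical helper name, not part of up2k.js, and 3599 seconds is the fallback sentinel taken from the hunk:

```javascript
// hypothetical helper illustrating the guarded ETA computation;
// sz = total bytes, bd = bytes done, spd = bytes per second
function eta_s(sz, bd, spd) {
    // a 0 B/s sample would divide by zero, so fall back to 3599s
    return spd ? (sz - bd) / spd : 3599;
}
```

Since `0` and `NaN` are both falsy, any degenerate speed sample takes the fallback branch instead of propagating into the rendered table.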
@@ -445,9 +450,7 @@ function U2pvis(act, btns, uc, st) {
 			return;

 		r.npotato = 0;
-		var html = [
-			"<p>files: <b>{0}</b> finished, <b>{1}</b> failed, <b>{2}</b> busy, <b>{3}</b> queued</p>".format(
-				r.ctr.ok, r.ctr.ng, r.ctr.bz, r.ctr.q)];
+		var html = [L.u_pott.format(r.ctr.ok, r.ctr.ng, r.ctr.bz, r.ctr.q)];

 		while (r.head < r.tab.length && has(["ok", "ng"], r.tab[r.head].in))
 			r.head++;
@@ -602,7 +605,7 @@ function U2pvis(act, btns, uc, st) {
 		if (nf < 9000)
 			return go();

-		modal.confirm('about to show ' + nf + ' files\n\nthis may crash your browser, are you sure?', go, null);
+		modal.confirm(L.u_bigtab.format(nf), go, null);
 	};
 }

@@ -692,8 +695,9 @@ function Donut(uc, st) {
 	}

 	if (++r.tc >= 10) {
+		var s = r.eta === null ? 'paused' : r.eta > 60 ? shumantime(r.eta) : (r.eta + 's');
 		wintitle("{0}%, {1}, #{2}, ".format(
-			f2f(v * 100 / t, 1), shumantime(r.eta), st.files.length - st.nfile.upload), true);
+			f2f(v * 100 / t, 1), s, st.files.length - st.nfile.upload), true);
 		r.tc = 0;
 	}

@@ -854,8 +858,13 @@ function up2k_init(subtle) {

 	setmsg(suggest_up2k, 'msg');

+	var u2szs = u2sz.split(','),
+		u2sz_min = parseInt(u2szs[0]),
+		u2sz_tgt = parseInt(u2szs[1]),
+		u2sz_max = parseInt(u2szs[2]);
+
 	var parallel_uploads = ebi('nthread').value = icfg_get('nthread', u2j),
-		stitch_tgt = ebi('u2szg').value = icfg_get('u2sz', u2sz.split(',')[1]),
+		stitch_tgt = ebi('u2szg').value = icfg_get('u2sz', u2sz_tgt),
 		uc = {},
 		fdom_ctr = 0,
 		biggest_file = 0;
@@ -872,10 +881,29 @@ function up2k_init(subtle) {
 	bcfg_bind(uc, 'turbo', 'u2turbo', turbolvl > 1, draw_turbo);
 	bcfg_bind(uc, 'datechk', 'u2tdate', turbolvl < 3, null);
 	bcfg_bind(uc, 'az', 'u2sort', u2sort.indexOf('n') + 1, set_u2sort);
-	bcfg_bind(uc, 'hashw', 'hashw', !!WebAssembly && (!subtle || !CHROME || MOBILE || VCHROME >= 107), set_hashw);
+	bcfg_bind(uc, 'hashw', 'hashw', !!WebAssembly && !(CHROME && MOBILE) && (!subtle || !CHROME), set_hashw);
 	bcfg_bind(uc, 'upnag', 'upnag', false, set_upnag);
 	bcfg_bind(uc, 'upsfx', 'upsfx', false, set_upsfx);

+	uc.ow = parseInt(sread('u2ow', ['0', '1', '2']) || u2ow);
+	uc.owt = ['🛡️', '🕒', '♻️'];
+	function set_ow() {
+		QS('label[for="u2ow"]').innerHTML = uc.owt[uc.ow];
+		ebi('u2ow').checked = true; //cosmetic
+	}
+	ebi('u2ow').onclick = function (e) {
+		ev(e);
+		if (++uc.ow > 2)
+			uc.ow = 0;
+		swrite('u2ow', uc.ow);
+		set_ow();
+		if (uc.ow && !has(perms, 'delete'))
+			toast.warn(10, L.u_enoow, 'noow');
+		else if (toast.tag == 'noow')
+			toast.hide();
+	};
+	set_ow();
+
 	var st = {
 		"files": [],
 		"nfile": {
@@ -960,7 +988,7 @@ function up2k_init(subtle) {
 		ud = function () { ebi('dir' + fdom_ctr).click(); };

 	// too buggy on chrome <= 72
-	var m = / Chrome\/([0-9]+)\./.exec(navigator.userAgent);
+	var m = / Chrome\/([0-9]+)\./.exec(UA);
 	if (m && parseInt(m[1]) < 73)
 		return uf();

@@ -1037,7 +1065,7 @@ function up2k_init(subtle) {
 	}
 	catch (ex) {
 		document.body.ondragenter = document.body.ondragleave = document.body.ondragover = null;
-		return modal.alert('your browser does not support drag-and-drop uploading');
+		return modal.alert(L.u_nodrop);
 	}
 	if (btn)
 		return;
@@ -1104,7 +1132,7 @@ function up2k_init(subtle) {
 	}

 	if (!good_files.length && bad_files.length)
-		return toast.err(30, "that's not a folder!\n\nyour browser is too old,\nplease try dragdrop instead");
+		return toast.err(30, L.u_notdir);

 	return read_dirs(null, [], [], good_files, nil_files, bad_files);
 }
@@ -1122,7 +1150,7 @@ function up2k_init(subtle) {
 	if (err)
 		return modal.alert('sorry, ' + err);

-	toast.inf(0, 'Scanning files...');
+	toast.inf(0, L.u_scan);

 	if ((dz == 'up_dz' && uc.fsearch) || (dz == 'srch_dz' && !uc.fsearch))
 		tgl_fsearch();
@@ -1210,7 +1238,7 @@ function up2k_init(subtle) {
 		match = false;

 	if (match) {
-		var msg = ['directory iterator got stuck trying to access the following {0} items; will skip:<ul>'.format(missing.length)];
+		var msg = [L.u_dirstuck.format(missing.length) + '<ul>'];
 		for (var a = 0; a < Math.min(20, missing.length); a++)
 			msg.push('<li>' + esc(missing[a]) + '</li>');

@@ -1281,7 +1309,7 @@ function up2k_init(subtle) {
 	}

 	function gotallfiles(good_files, nil_files, bad_files) {
-		if (toast.txt == 'Scanning files...')
+		if (toast.txt == L.u_scan)
 			toast.hide();

 		if (uc.fsearch && !uc.turbo)
@@ -1291,7 +1319,7 @@ function up2k_init(subtle) {
 		if (bad_files.length) {
 			var msg = L.u_badf.format(bad_files.length, ntot);
 			for (var a = 0, aa = Math.min(20, bad_files.length); a < aa; a++)
-				msg += '-- ' + bad_files[a][1] + '\n';
+				msg += '-- ' + esc(bad_files[a][1]) + '\n';

 			msg += L.u_just1;
 			return modal.alert(msg, function () {
@@ -1303,7 +1331,7 @@ function up2k_init(subtle) {
 		if (nil_files.length) {
 			var msg = L.u_blankf.format(nil_files.length, ntot);
 			for (var a = 0, aa = Math.min(20, nil_files.length); a < aa; a++)
-				msg += '-- ' + nil_files[a][1] + '\n';
+				msg += '-- ' + esc(nil_files[a][1]) + '\n';

 			msg += L.u_just1;
 			return modal.confirm(msg, function () {
@@ -1351,9 +1379,21 @@ function up2k_init(subtle) {
 		draw_each = good_files.length < 50;

 	if (WebAssembly && !hws.length) {
-		for (var a = 0; a < Math.min(navigator.hardwareConcurrency || 4, 16); a++)
+		var nw = Math.min(navigator.hardwareConcurrency || 4, 16);
+
+		if (CHROME) {
+			// chrome-bug 383568268 // #124
+			nw = Math.max(1, (nw > 4 ? 4 : (nw - 1)));
+			nw = (subtle && !MOBILE && nw > 2) ? 2 : nw;
+		}
+
+		for (var a = 0; a < nw; a++)
 			hws.push(new Worker(SR + '/.cpr/w.hash.js?_=' + TS));
+
+		if (!subtle)
+			for (var a = 0; a < hws.length; a++)
+				hws[a].postMessage('nosubtle');

 		console.log(hws.length + " hashers");
 	}
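The last hunk above replaces the plain `hardwareConcurrency` loop bound with a clamped hasher-worker count, apparently to work around chrome-bug 383568268. A standalone sketch of just that clamping arithmetic; the function name and boolean parameters are illustrative stand-ins for the `CHROME`/`MOBILE`/`subtle` globals, not part of up2k.js:

```javascript
// sketch of the hasher worker-count clamp from the hunk above;
// hwc = navigator.hardwareConcurrency (may be undefined)
function num_hashers(hwc, chrome, mobile, subtle) {
    var nw = Math.min(hwc || 4, 16); // default 4, never more than 16
    if (chrome) {
        // cap at 4, and below that leave one core free
        nw = Math.max(1, (nw > 4 ? 4 : (nw - 1)));
        // desktop chrome with webcrypto available: 2 workers max
        nw = (subtle && !mobile && nw > 2) ? 2 : nw;
    }
    return nw;
}
```

So a desktop chrome with 8 cores and working `crypto.subtle` would spawn 2 workers, while other browsers keep the straight `min(cores, 16)`.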
|
|
||||||
@@ -1437,7 +1477,7 @@ function up2k_init(subtle) {
|
|||||||
if (!actx || actx.state != 'suspended' || toast.visible)
|
if (!actx || actx.state != 'suspended' || toast.visible)
|
||||||
return;
|
return;
|
||||||
|
|
||||||
toast.warn(30, "<div onclick=\"start_actx();toast.inf(3,'thanks!')\">please click this text to<br />unlock full upload speed</div>");
|
toast.warn(30, "<div onclick=\"start_actx();toast.inf(3,'thanks!')\">" + L.u_actx + "</div>");
|
||||||
}, 500);
|
}, 500);
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -1479,7 +1519,7 @@ function up2k_init(subtle) {
|
|||||||
ev(e);
|
ev(e);
|
||||||
var txt = linklist();
|
var txt = linklist();
|
||||||
cliptxt(txt + '\n', function () {
|
cliptxt(txt + '\n', function () {
|
||||||
toast.inf(5, txt.split('\n').length + ' links copied to clipboard');
|
toast.inf(5, un_clip.format(txt.split('\n').length));
|
||||||
});
|
});
|
||||||
};
|
};
|
||||||
|
|
||||||
@@ -1544,8 +1584,10 @@ function up2k_init(subtle) {
         if (nhash) {
             st.time.hashing += td;
             t.push(['u2etah', st.bytes.hashed, st.bytes.hashed, st.time.hashing]);
-            if (uc.fsearch)
+            if (uc.fsearch) {
+                st.time.busy += td;
                 t.push(['u2etat', st.bytes.hashed, st.bytes.hashed, st.time.hashing]);
+            }
         }

         var b_up = st.bytes.inflight + st.bytes.uploaded,
@@ -1746,14 +1788,6 @@ function up2k_init(subtle) {

         var mou_ikkai = false;

-        if (st.busy.handshake.length &&
-            st.busy.handshake[0].t_busied < now - 30 * 1000
-        ) {
-            console.log("retrying stuck handshake");
-            var t = st.busy.handshake.shift();
-            st.todo.handshake.unshift(t);
-        }
-
         var nprev = -1;
         for (var a = 0; a < st.todo.upload.length; a++) {
             var nf = st.todo.upload[a].nfile;
@@ -1872,10 +1906,12 @@ function up2k_init(subtle) {

     function chill(t) {
         var now = Date.now();
-        if ((t.coolmul || 0) < 2 || now - t.cooldown < t.coolmul * 700)
+        if ((t.coolmul || 0) < 5 || now - t.cooldown < t.coolmul * 700)
             t.coolmul = Math.min((t.coolmul || 0.5) * 2, 32);

-        t.cooldown = Math.max(t.cooldown || 1, Date.now() + t.coolmul * 1000);
+        var cd = now + 1000 * (t.coolmul + Math.random() * 4 + 2);
+        t.cooldown = Math.floor(Math.max(cd, t.cooldown || 1));
+        return t;
     }

     /////
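The patched `chill()` above grows a per-task cooldown multiplier (capped at 32) and adds 2-6s of random jitter, and now returns the task so callers can write `st.todo.handshake.unshift(chill(t))`. A deterministic sketch of the same logic, with the clock and RNG injected as parameters (those parameters are illustrative; in up2k.js they are `Date.now()` and `Math.random()`):

```javascript
// sketch of the patched chill() backoff; `now` and `rnd` are injected
// so the behavior is testable without real time or randomness
function chill_sketch(t, now, rnd) {
    // keep doubling the multiplier until it passes 5, or while the
    // previous cooldown is still recent; capped at 32
    if ((t.coolmul || 0) < 5 || now - t.cooldown < t.coolmul * 700)
        t.coolmul = Math.min((t.coolmul || 0.5) * 2, 32);

    // multiplier plus 2-6s of jitter; never move the deadline backwards
    var cd = now + 1000 * (t.coolmul + rnd * 4 + 2);
    t.cooldown = Math.floor(Math.max(cd, t.cooldown || 1));
    return t;
}
```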
@@ -1955,38 +1991,90 @@ function up2k_init(subtle) {
             nchunk = 0,
             chunksize = get_chunksize(t.size),
             nchunks = Math.ceil(t.size / chunksize),
+            csz_mib = chunksize / 1048576,
+            tread = t.t_hashing,
+            cache_buf = null,
+            cache_car = 0,
+            cache_cdr = 0,
+            hashers = 0,
             hashtab = {};

+        // resolving subtle.digest w/o worker takes 1sec on blur if the actx hack breaks
+        var use_workers = hws.length && !hws_ng && uc.hashw && (nchunks > 1 || document.visibilityState == 'hidden'),
+            hash_par = (!subtle && !use_workers) ? 0 : csz_mib < 48 ? 2 : csz_mib < 96 ? 1 : 0;
+
         pvis.setab(t.n, nchunks);
         pvis.move(t.n, 'bz');

-        if (hws.length && uc.hashw && (nchunks > 1 || document.visibilityState == 'hidden'))
-            // resolving subtle.digest w/o worker takes 1sec on blur if the actx hack breaks
+        if (use_workers)
             return wexec_hash(t, chunksize, nchunks);

         var segm_next = function () {
             if (nchunk >= nchunks || bpend)
                 return false;

-            var reader = new FileReader(),
-                nch = nchunk++,
+            var nch = nchunk++,
                 car = nch * chunksize,
                 cdr = Math.min(chunksize + car, t.size);

             st.bytes.hashed += cdr - car;
             st.etac.h++;

-            var orz = function (e) {
-                bpend--;
-                segm_next();
-                hash_calc(nch, e.target.result);
+            if (MOBILE && CHROME && st.slow_io === null && nch == 1 && cdr - car >= 1024 * 512) {
+                var spd = Math.floor((cdr - car) / (Date.now() + 1 - tread));
+                st.slow_io = spd < 40 * 1024;
+                console.log('spd {0}, slow: {1}'.format(spd, st.slow_io));
             }
+
+            if (cdr <= cache_cdr && car >= cache_car) {
+                try {
+                    var ofs = car - cache_car,
+                        ofs2 = ofs + (cdr - car),
+                        buf = cache_buf.subarray(ofs, ofs2);
+
+                    hash_calc(nch, buf);
+                }
+                catch (ex) {
+                    vis_exh(ex + '', 'up2k.js', '', '', ex);
+                }
+                return;
+            }
+
+            var reader = new FileReader(),
+                fr_cdr = cdr;
+
+            if (st.slow_io) {
+                var step = cdr - car,
+                    tgt = 48 * 1048576;
+
+                while (step && fr_cdr - car < tgt)
+                    fr_cdr += step;
+                if (fr_cdr - car > tgt && fr_cdr > cdr)
+                    fr_cdr -= step;
+                if (fr_cdr > t.size)
+                    fr_cdr = t.size;
+            }
+
+            var orz = function (e) {
+                bpend = 0;
+                var buf = e.target.result;
+                if (fr_cdr > cdr) {
+                    cache_buf = new Uint8Array(buf);
+                    cache_car = car;
+                    cache_cdr = fr_cdr;
+                    buf = cache_buf.subarray(0, cdr - car);
+                }
+                if (hashers < hash_par)
+                    segm_next();
+
+                hash_calc(nch, buf);
+            };
             reader.onload = function (e) {
                 try { orz(e); } catch (ex) { vis_exh(ex + '', 'up2k.js', '', '', ex); }
             };
             reader.onerror = function () {
-                var err = reader.error + '';
-                var handled = false;
+                var err = esc('' + reader.error),
+                    handled = false;

                 if (err.indexOf('NotReadableError') !== -1 || // win10-chrome defender
                     err.indexOf('NotFoundError') !== -1 // macos-firefox permissions
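On devices flagged `st.slow_io`, the hunk above widens each FileReader read: the read end `fr_cdr` is stepped forward in whole-chunk increments toward a ~48 MiB target, and later reads are served from the cached buffer. The end-extension arithmetic as a pure function (name and signature are illustrative; in up2k.js this runs inline on `car`/`cdr`/`t.size`):

```javascript
// sketch of the slow-io readahead: extend the read end (cdr) toward a
// ~48 MiB target in whole-chunk steps, backing off one step if it
// overshoots the target, and clamping to the end of the file
function extend_read(car, cdr, fsize) {
    var fr_cdr = cdr,
        step = cdr - car,
        tgt = 48 * 1048576;

    while (step && fr_cdr - car < tgt)
        fr_cdr += step;
    if (fr_cdr - car > tgt && fr_cdr > cdr)
        fr_cdr -= step;
    if (fr_cdr > fsize)
        fr_cdr = fsize;
    return fr_cdr;
}
```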
@@ -2007,17 +2095,20 @@ function up2k_init(subtle) {

                 toast.err(0, 'y o u b r o k e i t\nfile: ' + esc(t.name + '') + '\nerror: ' + err);
             };
-            bpend++;
-            reader.readAsArrayBuffer(t.fobj.slice(car, cdr));
+            bpend = 1;
+            tread = Date.now();
+            reader.readAsArrayBuffer(t.fobj.slice(car, fr_cdr));

             return true;
         };

         var hash_calc = function (nch, buf) {
+            hashers++;
             var orz = function (hashbuf) {
                 var hslice = new Uint8Array(hashbuf).subarray(0, 33),
                     b64str = buf2b64(hslice);

+                hashers--;
                 hashtab[nch] = b64str;
                 t.hash.push(nch);
                 pvis.hashed(t);
@@ -2069,16 +2160,27 @@ function up2k_init(subtle) {
             free = [],
             busy = {},
             nbusy = 0,
+            init = 0,
             hashtab = {},
             mem = (MOBILE ? 128 : 256) * 1024 * 1024;

+        if (!hws_ok)
+            init = setTimeout(function() {
+                hws_ng = true;
+                toast.warn(30, 'webworkers failed to start\n\nwill be a bit slower due to\nhashing on main-thread');
+                apop(st.busy.hash, t);
+                st.todo.hash.unshift(t);
+                exec_hash();
+            }, 5000);
+
         for (var a = 0; a < hws.length; a++) {
             var w = hws[a];
-            free.push(w);
             w.onmessage = onmsg;
+            if (init)
+                w.postMessage('ping');
+            if (mem > 0)
+                free.push(w);
             mem -= chunksize;
-            if (mem <= 0)
-                break;
         }

         function go_next() {
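The reworked loop above wires `onmessage` on every worker, but only places a worker on the free-list while the 128/256 MiB memory budget is still positive (previously it broke out of the loop entirely). A sketch of how many workers end up usable under the budget (function name is illustrative):

```javascript
// sketch of the free-list budgeting in the loop above: mem is
// decremented by one chunksize per worker, and a worker is only
// marked free while mem is still positive
function usable_workers(nworkers, mem, chunksize) {
    var free = 0;
    for (var a = 0; a < nworkers; a++) {
        if (mem > 0)
            free++;
        mem -= chunksize;
    }
    return free;
}
```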
@@ -2108,6 +2210,12 @@ function up2k_init(subtle) {
             d = d.data;
             var k = d[0];

+            if (k == "pong")
+                if (++hws_ok == hws.length) {
+                    clearTimeout(init);
+                    go_next();
+                }
+
             if (k == "panic")
                 return vis_exh(d[1], 'up2k.js', '', '', d[1]);
@@ -2170,7 +2278,8 @@ function up2k_init(subtle) {
                 tasker();
             }
         }
-        go_next();
+        if (!init)
+            go_next();
     }

     /////
@@ -2189,7 +2298,7 @@ function up2k_init(subtle) {
         xhr.onerror = xhr.ontimeout = function () {
             console.log('head onerror, retrying', t.name, t);
             if (!toast.visible)
-                toast.warn(9.98, L.u_enethd + "\n\nfile: " + t.name, t);
+                toast.warn(9.98, L.u_enethd + "\n\nfile: " + esc(t.name), t);

             apop(st.busy.head, t);
             st.todo.head.unshift(t);
@@ -2255,18 +2364,20 @@ function up2k_init(subtle) {
         if (keepalive)
             console.log("sending keepalive handshake", t.name, t);

+        if (!t.srch && !t.t_handshake)
+            pvis.seth(t.n, 2, L.u_hs);
+
         var xhr = new XMLHttpRequest();
         xhr.onerror = xhr.ontimeout = function () {
             if (t.t_busied != me) // t.done ok
                 return console.log('zombie handshake onerror', t.name, t);

             if (!toast.visible)
-                toast.warn(9.98, L.u_eneths + "\n\nfile: " + t.name, t);
+                toast.warn(9.98, L.u_eneths + "\n\nfile: " + esc(t.name), t);

             console.log('handshake onerror, retrying', t.name, t);
             apop(st.busy.handshake, t);
-            st.todo.handshake.unshift(t);
-            t.cooldown = Date.now() + 5000 + Math.floor(Math.random() * 3000);
+            st.todo.handshake.unshift(chill(t));
             t.keepalive = keepalive;
         };
         var orz = function (e) {
@@ -2279,9 +2390,9 @@ function up2k_init(subtle) {
             }
             catch (ex) {
                 apop(st.busy.handshake, t);
-                st.todo.handshake.unshift(t);
-                t.cooldown = Date.now() + 5000 + Math.floor(Math.random() * 3000);
-                return toast.err(0, 'Handshake error; will retry...\n\n' + L.badreply + ':\n\n' + unpre(xhr.responseText));
+                st.todo.handshake.unshift(chill(t));
+                var txt = t.t_uploading ? L.u_ehsfin : t.srch ? L.u_ehssrch : L.u_ehsinit;
+                return toast.err(0, txt + '\n\n' + L.badreply + ':\n\n' + unpre(xhr.responseText));
             }

             t.t_handshake = Date.now();
@@ -2367,7 +2478,7 @@ function up2k_init(subtle) {
                 var idx = t.hash.indexOf(missing[a]);
                 if (idx < 0)
                     return modal.alert('wtf negative index for hash "{0}" in task:\n{1}'.format(
-                        missing[a], JSON.stringify(t)));
+                        missing[a], esc(JSON.stringify(t))));

                 t.postlist.push(idx);
                 cbd[idx] = 0;
@@ -2380,6 +2491,9 @@ function up2k_init(subtle) {
                 msg = 'done';

             if (t.postlist.length) {
+                if (t.rechecks && QS('#opa_del.act'))
+                    toast.inf(30, L.u_started, L.u_unpt);
+
                 var arr = st.todo.upload,
                     sort = arr.length && arr[arr.length - 1].nfile > t.n;
@@ -2458,8 +2572,10 @@ function up2k_init(subtle) {
             else {
                 pvis.seth(t.n, 1, "ERROR");
                 pvis.seth(t.n, 2, L.u_ehstmp, t);
+                apop(st.busy.handshake, t);

                 var err = "",
+                    cls = "ERROR",
                     rsp = unpre(xhr.responseText),
                     ofs = rsp.lastIndexOf('\nURL: ');
@@ -2470,7 +2586,6 @@ function up2k_init(subtle) {
                     var penalty = rsp.replace(/.*rate-limit /, "").split(' ')[0];
                     console.log("rate-limit: " + penalty);
                     t.cooldown = Date.now() + parseFloat(penalty) * 1000;
-                    apop(st.busy.handshake, t);
                     st.todo.handshake.unshift(t);
                     return;
                 }
@@ -2489,25 +2604,35 @@ function up2k_init(subtle) {
                 if (!t.rechecks && (err_pend || err_srcb)) {
                     t.rechecks = 0;
                     t.want_recheck = true;
+                    if (st.busy.upload.length || st.busy.handshake.length || st.bytes.uploaded) {
+                        err = L.u_dupdefer;
+                        cls = 'defer';
+                    }
+                }
+                if (err_pend) {
+                    err += ' <a href="#" onclick="toast.inf(60, L.ue_ab);" class="fsearch_explain">(' + L.u_expl + ')</a>';
                 }
             }
-            if (rsp.indexOf('server HDD is full') + 1)
-                return toast.err(0, L.u_ehsdf + "\n\n" + rsp.replace(/.*; /, ''));
-
             if (err != "") {
                 if (!t.t_uploading)
                     st.bytes.finished += t.size;

-                pvis.seth(t.n, 1, "ERROR");
+                pvis.seth(t.n, 1, cls);
                 pvis.seth(t.n, 2, err);
                 pvis.move(t.n, 'ng');

-                apop(st.busy.handshake, t);
                 tasker();
                 return;
             }

+            st.todo.handshake.unshift(chill(t));
+
+            if (rsp.indexOf('server HDD is full') + 1)
+                return toast.err(0, L.u_ehsdf + "\n\n" + rsp.replace(/.*; /, ''));
+
             err = t.t_uploading ? L.u_ehsfin : t.srch ? L.u_ehssrch : L.u_ehsinit;
-            xhrchk(xhr, err + "\n\nfile: " + t.name + "\n\nerror ", "404, target folder not found", "warn", t);
+            xhrchk(xhr, err + "\n\nfile: " + esc(t.name) + "\n\nerror ", "404, target folder not found", "warn", t);
         }
     }
     xhr.onload = function (e) {
@@ -2528,9 +2653,17 @@ function up2k_init(subtle) {
             else if (t.umod)
                 req.umod = true;

+            if (!t.srch) {
+                if (uc.ow == 1)
+                    req.replace = 'mt';
+                if (uc.ow == 2)
+                    req.replace = true;
+            }
+
             xhr.open('POST', t.purl, true);
             xhr.responseType = 'text';
-            xhr.timeout = 42000;
+            xhr.timeout = 42000 + (t.srch || t.t_uploaded ? 0 :
+                (t.size / (1048 * 20))); // safededup 20M/s hdd
             xhr.send(JSON.stringify(req));
         }
@@ -2575,8 +2708,7 @@ function up2k_init(subtle) {
             nparts = upt.nparts,
             pcar = nparts[0],
             pcdr = nparts[nparts.length - 1],
-            snpart = pcar == pcdr ? pcar : ('' + pcar + '~' + pcdr),
-            tries = 0;
+            maxsz = (u2sz_max > 1 ? u2sz_max : 2040) * 1024 * 1024;

         if (t.done)
             return console.log('done; skip chunk', t.name, t);
@@ -2596,6 +2728,30 @@ function up2k_init(subtle) {
         if (cdr >= t.size)
             cdr = t.size;

+        if (cdr - car <= maxsz)
+            return upload_sub(t, upt, pcar, pcdr, car, cdr, chunksize, car, []);
+
+        var car0 = car, subs = [];
+        while (car < cdr) {
+            subs.push([car, Math.min(cdr, car + maxsz)]);
+            car += maxsz;
+        }
+        upload_sub(t, upt, pcar, pcdr, 0, 0, chunksize, car0, subs);
+    }
+
+    function upload_sub(t, upt, pcar, pcdr, car, cdr, chunksize, car0, subs) {
+        var nparts = upt.nparts,
+            is_sub = subs.length;
+
+        if (is_sub) {
+            var x = subs.shift();
+            car = x[0];
+            cdr = x[1];
+        }
+
+        var snpart = is_sub ? ('' + pcar + '(' + (car-car0) +'+'+ (cdr-car)) :
+            pcar == pcdr ? pcar : ('' + pcar + '~' + pcdr);
+
         var orz = function (xhr) {
             st.bytes.inflight -= xhr.bsent;
             var txt = unpre((xhr.response && xhr.response.err) || xhr.responseText);
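The hunk above splits any POST body larger than `maxsz` into sub-slices which `upload_sub` then sends one at a time (each tagged with its offset via the `X-Up2k-Subc` header added further down). The slicing itself is simple interval arithmetic; as a standalone sketch (function name illustrative):

```javascript
// sketch of the sub-slice computation above: cover [car, cdr) with
// consecutive half-open slices of at most maxsz bytes each
function split_subs(car, cdr, maxsz) {
    var subs = [];
    while (car < cdr) {
        subs.push([car, Math.min(cdr, car + maxsz)]);
        car += maxsz;
    }
    return subs;
}
```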
@@ -2609,6 +2765,10 @@ function up2k_init(subtle) {
                 return;
             }
             if (xhr.status == 200) {
+                car = car0;
+                if (subs.length)
+                    return upload_sub(t, upt, pcar, pcdr, 0, 0, chunksize, car0, subs);
+
                 var bdone = cdr - car;
                 for (var a = pcar; a <= pcdr; a++) {
                     pvis.prog(t, a, Math.min(bdone, chunksize));
@@ -2617,6 +2777,7 @@ function up2k_init(subtle) {
                 st.bytes.finished += cdr - car;
                 st.bytes.uploaded += cdr - car;
                 t.bytes_uploaded += cdr - car;
+                t.cooldown = t.coolmul = 0;
                 st.etac.u++;
                 st.etac.t++;
             }
@@ -2628,7 +2789,7 @@ function up2k_init(subtle) {
                 toast.inf(10, L.u_cbusy);
             }
             else {
-                xhrchk(xhr, L.u_cuerr2.format(snpart, Math.ceil(t.size / chunksize), t.name), "404, target folder not found (???)", "warn", t);
+                xhrchk(xhr, L.u_cuerr2.format(snpart, Math.ceil(t.size / chunksize), esc(t.name)), "404, target folder not found (???)", "warn", t);
                 chill(t);
             }
             orz2(xhr);
@@ -2672,10 +2833,10 @@ function up2k_init(subtle) {
             xhr.bsent = 0;

             if (!toast.visible)
-                toast.warn(9.98, L.u_cuerr.format(snpart, Math.ceil(t.size / chunksize), t.name), t);
+                toast.warn(9.98, L.u_cuerr.format(snpart, Math.ceil(t.size / chunksize), esc(t.name)), t);

             t.nojoin = t.nojoin || t.postlist.length; // maybe rproxy postsize limit
-            console.log('chunkpit onerror,', ++tries, t.name, t);
+            console.log('chunkpit onerror,', t.name, t);
             orz2(xhr);
         };
@@ -2693,9 +2854,13 @@ function up2k_init(subtle) {
         xhr.open('POST', t.purl, true);
         xhr.setRequestHeader("X-Up2k-Hash", ctxt);
         xhr.setRequestHeader("X-Up2k-Wark", t.wark);
+        if (is_sub)
+            xhr.setRequestHeader("X-Up2k-Subc", car - car0);
+
         xhr.setRequestHeader("X-Up2k-Stat", "{0}/{1}/{2}/{3} {4}/{5} {6}".format(
             pvis.ctr.ok, pvis.ctr.ng, pvis.ctr.bz, pvis.ctr.q, btot, btot - bfin,
-            st.eta.t.split(' ').pop()));
+            st.eta.t.indexOf('/s, ')+1 ? st.eta.t.split(' ').pop() : 'x'));
+
         xhr.setRequestHeader('Content-Type', 'application/octet-stream');
         if (xhr.overrideMimeType)
             xhr.overrideMimeType('Content-Type', 'application/octet-stream');
@@ -2813,13 +2978,13 @@ function up2k_init(subtle) {
         }

         var read_u2sz = function () {
-            var el = ebi('u2szg'), n = parseInt(el.value), dv = u2sz.split(',');
+            var el = ebi('u2szg'), n = parseInt(el.value);
             stitch_tgt = n = (
-                isNaN(n) ? dv[1] :
-                n < dv[0] ? dv[0] :
-                n > dv[2] ? dv[2] : n
+                isNaN(n) ? u2sz_tgt :
+                n < u2sz_min ? u2sz_min :
+                n > u2sz_max ? u2sz_max : n
             );
-            if (n == dv[1]) sdrop('u2sz'); else swrite('u2sz', n);
+            if (n == u2sz_tgt) sdrop('u2sz'); else swrite('u2sz', n);
             if (el.value != n) el.value = n;
         };
         ebi('u2szg').addEventListener('blur', read_u2sz);
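`read_u2sz` above stops re-splitting the `u2sz` min/default/max csv on every call and reads the precomputed `u2sz_min`/`u2sz_tgt`/`u2sz_max` globals instead; the clamp itself is unchanged. Extracted as a pure function (the function name is illustrative, the parameter names mirror those globals):

```javascript
// sketch of the stitch-size clamp in read_u2sz: NaN input falls back
// to the default target, everything else is clamped to [min, max]
function clamp_u2sz(n, u2sz_min, u2sz_tgt, u2sz_max) {
    return (
        isNaN(n) ? u2sz_tgt :
        n < u2sz_min ? u2sz_min :
        n > u2sz_max ? u2sz_max : n
    );
}
```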
@@ -2960,7 +3125,7 @@ function up2k_init(subtle) {
             new_state = false;
             fixed = true;
         }
-        if (new_state === undefined)
+        if (new_state === undefined && preferred === undefined)
             new_state = can_write ? false : have_up2k_idx ? true : undefined;
     }
@@ -5,10 +5,17 @@ if (!window.console || !console.log)
         "log": function (msg) { }
     };

+if (!Object.assign)
+    Object.assign = function (a, b) {
+        for (var k in b)
+            a[k] = b[k];
+    };
+
+if (window.CGV1)
+    Object.assign(window, window.CGV1);
+
 if (window.CGV)
-    for (var k in CGV)
-        window[k] = CGV[k];
+    Object.assign(window, window.CGV);


 var wah = '',
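Note that the `Object.assign` shim above copies a single source's enumerable keys and, unlike the native, returns undefined; that is sufficient here because it is only used for its side effect of merging the `CGV`/`CGV1` globals into `window`. Reproduced standalone (the standalone name is illustrative):

```javascript
// the single-source shim from the hunk above; relies on mutation of
// `a` only -- it does not return the target like native Object.assign
function assign_shim(a, b) {
    for (var k in b)
        a[k] = b[k];
}
```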
@@ -22,14 +29,17 @@ var wah = '',
     HTTPS = ('' + location).indexOf('https:') === 0,
     TOUCH = 'ontouchstart' in window,
     MOBILE = TOUCH,
-    CHROME = !!window.chrome,
+    CHROME = !!window.chrome, // safari=false
     VCHROME = CHROME ? 1 : 0,
-    IE = /Trident\//.test(navigator.userAgent),
-    FIREFOX = ('netscape' in window) && / rv:/.test(navigator.userAgent),
-    IPHONE = TOUCH && /iPhone|iPad|iPod/i.test(navigator.userAgent),
-    LINUX = /Linux/.test(navigator.userAgent),
-    MACOS = /[^a-z]mac ?os/i.test(navigator.userAgent),
-    WINDOWS = /Windows/.test(navigator.userAgent);
+    UA = '' + navigator.userAgent,
+    IE = /Trident\//.test(UA),
+    FIREFOX = ('netscape' in window) && / rv:/.test(UA),
+    IPHONE = TOUCH && /iPhone|iPad|iPod/i.test(UA),
+    LINUX = /Linux/.test(UA),
+    MACOS = /Macintosh/.test(UA),
+    WINDOWS = /Windows/.test(UA),
+    APPLE = IPHONE || MACOS,
+    APPLEM = TOUCH && APPLE;

 if (!window.WebAssembly || !WebAssembly.Memory)
     window.WebAssembly = false;
@@ -189,7 +199,7 @@ function vis_exh(msg, url, lineNo, columnNo, error) {
         '<p style="font-size:1.3em;margin:0;line-height:2em">try to <a href="#" onclick="localStorage.clear();location.reload();">reset copyparty settings</a> if you are stuck here, or <a href="#" onclick="ignex();">ignore this</a> / <a href="#" onclick="ignex(true);">ignore all</a> / <a href="?b=u">basic</a></p>',
         '<p style="color:#fff">please send me a screenshot arigathanks gozaimuch: <a href="<ghi>" target="_blank">new github issue</a></p>',
         '<p class="b">' + esc(url + ' @' + lineNo + ':' + columnNo), '<br />' + esc(msg).replace(/\n/g, '<br />') + '</p>',
-        '<p><b>UA:</b> ' + esc(navigator.userAgent + '')
+        '<p><b>UA:</b> ' + esc(UA)
     ];

     try {
@@ -424,7 +434,7 @@ function import_js(url, cb, ecb) {


 function unsmart(txt) {
-    return !IPHONE ? txt : (txt.
+    return !APPLEM ? txt : (txt.
         replace(/[\u2014]/g, "--").
         replace(/[\u2022]/g, "*").
         replace(/[\u2018\u2019]/g, "'").
@@ -535,6 +545,14 @@ function clgot(el, cls) {
 }


+function setcvar(k, v) {
+    try {
+        document.documentElement.style.setProperty(k, v);
+    }
+    catch (e) { }
+}
+
+
 var ANIM = true;
 try {
     var mq = window.matchMedia('(prefers-reduced-motion: reduce)');
@@ -563,7 +581,9 @@ function yscroll() {

 function showsort(tab) {
     var v, vn, v1, v2, th = tab.tHead,
-        sopts = jread('fsort', jcp(dsort));
+        sopts = jread('fsort');
+
+    sopts = sopts && sopts.length ? sopts : dsort;

     th && (th = th.rows[0]) && (th = th.cells);
@@ -600,10 +620,13 @@ function sortTable(table, col, cb) {
         tr = Array.prototype.slice.call(tb.rows, 0),
         i, reverse = /s0[^r]/.exec(th[col].className + ' ') ? -1 : 1;

-    var stype = th[col].getAttribute('sort');
+    var kname = th[col].getAttribute('name'),
+        stype = th[col].getAttribute('sort');
     try {
-        var nrules = [], rules = jread("fsort", []);
-        rules.unshift([th[col].getAttribute('name'), reverse, stype || '']);
+        var nrules = [],
+            rules = kname == 'href' ? [] : jread("fsort", []);
+
+        rules.unshift([kname, reverse, stype || '']);
         for (var a = 0; a < rules.length; a++) {
             var add = true;
             for (var b = 0; b < a; b++)
@@ -866,6 +889,11 @@ if (window.Number && Number.isFinite)

 function f2f(val, nd) {
     // 10.toFixed(1) returns 10.00 for certain values of 10
+    if (!isNum(val)) {
+        val = parseFloat(val);
+        if (!isNum(val))
+            val = 999;
+    }
     val = (val * Math.pow(10, nd)).toFixed(0).split('.')[0];
     return nd ? (val.slice(0, -nd) || '0') + '.' + val.slice(-nd) : val;
 }
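`f2f` formats to `nd` decimals by scaling to an integer before `toFixed`, sidestepping the float `toFixed` quirk mentioned in its comment; the patch adds a guard so non-numeric input renders as 999 instead of producing NaN. Runnable standalone with a minimal `isNum` standing in for util.js's helper (the stand-in is an assumption, not the real util.js implementation):

```javascript
// minimal isNum stand-in for util.js's helper (assumption)
function isNum(v) {
    return typeof v == 'number' && isFinite(v);
}

// f2f as patched in the hunk above
function f2f(val, nd) {
    // 10.toFixed(1) returns 10.00 for certain values of 10
    if (!isNum(val)) {
        val = parseFloat(val);
        if (!isNum(val))
            val = 999;
    }
    val = (val * Math.pow(10, nd)).toFixed(0).split('.')[0];
    return nd ? (val.slice(0, -nd) || '0') + '.' + val.slice(-nd) : val;
}
```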
@@ -922,15 +950,18 @@ function shumantime(v, long) {


 function lhumantime(v) {
-    var t = shumantime(v, 1),
-        tp = t.replace(/([a-z])/g, " $1 ").split(/ /g).slice(0, -1);
+    var t = shumantime(v, 1);
+    if (/[0-9]$/.exec(t))
+        t += 's';
+
+    var tp = t.replace(/([a-z])/g, " $1 ").split(/ /g).slice(0, -1);

     if (!L || tp.length < 2 || tp[1].indexOf('$') + 1)
         return t;

     var ret = '';
     for (var a = 0; a < tp.length; a += 2)
-        ret += tp[a] + ' ' + L['ht_' + tp[a + 1]].replace(tp[a] == 1 ? /!.*/ : /!/, '') + L.ht_and;
+        ret += tp[a] + ' ' + L['ht_' + tp[a + 1] + (tp[a]==1?1:2)] + L.ht_and;

     return ret.slice(0, -L.ht_and.length);
 }
@@ -959,11 +990,33 @@ function apop(arr, v) {
 }


-function jcp(obj) {
+function jcp1(obj) {
     return JSON.parse(JSON.stringify(obj));
 }


+function jcp2(src) {
+    if (Array.isArray(src)) {
+        var ret = [];
+        for (var a = 0; a < src.length; ++a) {
+            var sub = src[a];
+            ret.push((sub === null) ? sub : (sub instanceof Date) ? new Date(sub.valueOf()) : (typeof sub === 'object') ? jcp2(sub) : sub);
+        }
+    } else {
+        var ret = {};
+        for (var key in src) {
+            var sub = src[key];
+            ret[key] = sub === null ? sub : (sub instanceof Date) ? new Date(sub.valueOf()) : (typeof sub === 'object') ? jcp2(sub) : sub;
+        }
+    }
+    return ret;
+};
+
+
+// jcp1 50% faster on android-chrome, jcp2 7x everywhere else
+var jcp = MOBILE && CHROME ? jcp1 : jcp2;
+
+
 function sdrop(key) {
     try {
         STG.removeItem(key);
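`jcp` now dispatches between the JSON round-trip (`jcp1`) and the recursive structural copy (`jcp2`), per the benchmark comment in the hunk. Behaviorally the two also differ: JSON round-tripping serializes `Date` objects into ISO strings, while `jcp2` rebuilds them as Dates. A condensed sketch of `jcp2` (the array branch is collapsed into the same `for...in` loop, which is equivalent for dense arrays):

```javascript
// condensed jcp2 sketch: recursive structural copy that keeps Date
// objects intact (jcp1's JSON round-trip would stringify them)
function jcp2(src) {
    var ret = Array.isArray(src) ? [] : {};
    for (var key in src) {
        var sub = src[key];
        ret[key] = sub === null ? sub :
            (sub instanceof Date) ? new Date(sub.valueOf()) :
            (typeof sub === 'object') ? jcp2(sub) : sub;
    }
    return ret;
}
```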
@@ -1306,7 +1359,7 @@ var tt = (function () {
     };

     r.getmsg = function (el) {
-        if (IPHONE && QS('body.bbox-open'))
+        if (APPLEM && QS('body.bbox-open'))
             return;

         var cfg = sread('tooltips');
@@ -1516,13 +1569,26 @@ var toast = (function () {
		if (sec)
			te = setTimeout(r.hide, sec * 1000);

		if (same && delta < 1000) {
			var tb = ebi('toastt');
			if (tb) {
				// restart the countdown-bar animation:
				// remove it, force a reflow, then re-apply it
				tb.style.animation = 'none';
				tb.offsetHeight;
				tb.style.animation = null;
			}
			return;
		}

		if (txt.indexOf('<body>') + 1)
			txt = txt.slice(0, txt.indexOf('<')) + ' [...]';

		var html = '';
		if (sec) {
			setcvar('--tmtime', (sec - 0.15) + 's');
			setcvar('--tmstep', Math.floor(sec * 20));
			html += '<div id="toastt"></div>';
		}
		obj.innerHTML = html + '<a href="#" id="toastc">x</a><div id="toastb">' + lf2br(txt) + '</div>';
		obj.className = cl;
		sec += obj.offsetWidth;
		obj.className += ' vis';
@@ -20,6 +20,7 @@ catch (ex) {
function load_fb() {
	subtle = null;
	importScripts('deps/sha512.hw.js');
	console.log('using fallback hasher');
}

@@ -29,6 +30,12 @@ var reader = null,

onmessage = (d) => {
	if (d.data == 'nosubtle')
		return load_fb();

	if (d.data == 'ping')
		return postMessage(['pong']);

	if (busy)
		return postMessage(["panic", 'worker got another task while busy']);

@@ -57,7 +64,7 @@ onmessage = (d) => {
};
reader.onerror = function () {
	busy = false;
	var err = esc('' + reader.error);

	if (err.indexOf('NotReadableError') !== -1 || // win10-chrome defender
		err.indexOf('NotFoundError') !== -1 // macos-firefox permissions
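The new `nosubtle` and `ping` branches give the worker a tiny control protocol on top of its hashing task. A standalone re-creation of that dispatch logic, so it can be exercised without a real WebWorker (`makeHandler`, `state`, and `post` are illustrative stand-ins for the worker's globals and `postMessage`, not the actual code):

```javascript
// re-creation of the worker's message dispatch for illustration
function makeHandler(state, post) {
    return (d) => {
        if (d.data == 'nosubtle')
            return state.load_fb();  // switch to the fallback hasher

        if (d.data == 'ping')
            return post(['pong']);  // liveness probe

        if (state.busy)
            return post(['panic', 'worker got another task while busy']);

        state.busy = true;  // a real hashing task would start here
    };
}

var sent = [];
var h = makeHandler({ busy: false, load_fb: () => {} }, (m) => sent.push(m));
h({ data: 'ping' });
console.log(sent);  // [ [ 'pong' ] ]
```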
@@ -25,6 +25,9 @@
## [`changelog.md`](changelog.md)
* occasionally grabbed from github release notes

## [`synology-dsm.md`](synology-dsm.md)
* running copyparty on a synology nas

## [`devnotes.md`](devnotes.md)
* technical stuff

@@ -1,3 +1,756 @@
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2025-0219-2309 `v1.16.14` overwrite by upload

## 🧪 new features

* #139 overwrite existing files by uploading over them e9f78ea7
  * default-disabled; a new togglebutton in the upload-UI configures it
  * can optionally compare last-modified-time and only overwrite older files
* [GDPR compliance](https://github.com/9001/copyparty#GDPR-compliance) (maybe/probably) 4be0d426

## 🩹 bugfixes

* some cosmetic volflag stuff, all harmless b190e676
  * disabling a volflag `foo` with `-foo` shows a warning that `-foo` is not a recognized volflag, but it still does the right thing
  * some volflags give the *"unrecognized volflag, will ignore"* warning, but not to worry, they still work just fine:
    * `xz` to allow serverside xz-compression of uploaded files
* the option to customize the loader-spinner would glitch out during the initial page load 7d7d5d6c

## 🔧 other changes

* [randpic.py](https://github.com/9001/copyparty/blob/hovudstraum/bin/handlers/randpic.py), new 404-handler example, returns a random pic from a folder 60d5f271
* readme: [howto permanent cloudflare tunnel](https://github.com/9001/copyparty#permanent-cloudflare-tunnel) for easy hosting from home 2beb2acc
* [synology-dsm](https://github.com/9001/copyparty/blob/hovudstraum/docs/synology-dsm.md): mention how to update the docker image 56ce5919
* spinner improvements 6858cb06

▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2025-0213-2057 `v1.16.13` configure with confidence

## 🧪 new features

* make the config-parser more helpful regarding volflags a255db70
  * if an unrecognized volflag is specified, print a warning instead of silently ignoring it
  * understand volflag-names with Uppercase and/or kebab-case (dashes), and not just snake_case (underscores)
  * improve `--help-flags` to mention and explain all available flags
* #136 WebDAV: support COPY 62ee7f69
  * also support overwrite of existing target files (default-enabled according to the spec)
    * the user must have the delete-permission to actually replace files
* option to specify custom icons for certain file extensions 7e4702cf
  * see `--ext-th` mentioned briefly in the [thumbnails section](https://github.com/9001/copyparty/#thumbnails)
* option to replace the loading-spinner animation 685f0869
  * including how to [make it exceptionally normal-looking](https://github.com/9001/copyparty/tree/hovudstraum/docs/rice#boring-loader-spinner)

## 🩹 bugfixes

* #136 WebDAV fixes 62ee7f69
  * COPY/MOVE/MKCOL: challenge clients to provide the password as necessary
    * most clients only need this in PROPFIND, but KDE-Dolphin is more picky
  * MOVE: support `webdav://` Destination prefix as used by Dolphin, probably others
* #136 WebDAV: improve support for KDE-Dolphin as client 9d769027
  * it masquerades as a graphical browser yet still expects 401, so special-case it with a useragent scan

## 🔧 other changes

* Docker-only: quick hacky fix for the [musl CVE](https://www.openwall.com/lists/musl/2025/02/13/1) until the official fix is out 4d6626b0
  * the docker images will be rebuilt when `musl-1.2.5-r9.apk` is released, in 6~24h or so
  * until then, there is no support for reading korean XML files when running in docker

▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2025-0209-2331 `v1.16.12` RTT

## 🧪 new features

* show rtt (network latency to server, including request processing time) in the top status text d27f1104
  * and log the client-reported RTT to serverlog 20ddeb6e
* remember file selection when changing folders c7db08ed
  * good for when you accidentally navigate elsewhere
* option to restrict download-as-zip/tar to admins-only c87af9e8
* #135 add [bubbleparty](https://github.com/9001/copyparty/blob/hovudstraum/bin/README.md#bubblepartysh), thx @coderofsalvation! 3582a100
  * runs copyparty in a [sandbox](https://github.com/containers/bubblewrap), making it harder to gain unintended access through bugs in python or copyparty
  * better alternative to [prisonparty](https://github.com/9001/copyparty/tree/hovudstraum/bin#prisonpartysh), more similar to [the sandboxing in the nixos package](https://github.com/9001/copyparty/blob/7dda77dcb/contrib/nixos/modules/copyparty.nix#L232-L272)
* new plugin: [quickmove](https://github.com/9001/copyparty/blob/hovudstraum/contrib/plugins/quickmove.js) 46f9e9ef
  * adds hotkey `W` to quickly move selected files into a subfolder
* #133 new plugin: [graft-thumbs.js](https://github.com/9001/copyparty/blob/hovudstraum/contrib/plugins/graft-thumbs.js) 6c202eff
  * in folders with foobar.mp3 and foobar.png, can copy the thumbnail from the png to the mp3 (and then hide the png)
* handlers: add [http-redirect example](https://github.com/9001/copyparty/blob/hovudstraum/bin/handlers/redirect.py) 22cbd2db
* add [ping.html](https://github.com/9001/copyparty/blob/hovudstraum/srv/ping.html) 7de9d15a 910797cc

## 🩹 bugfixes

* improve iPad detection so they get opus instead of mp3 12dcea4f

## 🔧 other changes

* safeguard against accidental config loss cd71b505
  * while no copyparty servers have ended up in this unfortunate situation yet (afaik), be proactive and borrow some experience from other docker-based services
* readme: improve config examples 32e90859
* improve serverlog entries regarding 403s b020fd4a
* #132 mention fuse permissions in readme d9d2a092
* traefik-example: fix disconnect during big uploads 6a9ffe7e
* try to show an appropriate warning for media that the browser doesn't support playing 4ef35263
  * was an attempt at detecting iphones failing to play high-color-precision webm files, but safari doesn't seem to realize itself that playback has failed, ah well
* copyparty.exe: update to python 3.12.9
* update deps: dompurify 3.2.4

▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2025-0127-0140 `v1.16.11` fix no-acode

## 🧪 new features

* u2c (commandline uploader): print download-links for uploaded files 1fe30363
  * `-u` prints a list after all uploads finished
  * `-ud` prints during upload, after each file
  * `-uf a.txt` writes them to `a.txt`

## 🩹 bugfixes

* [previous ver](https://github.com/9001/copyparty/releases/tag/v1.16.10) broke `--no-acode` (disable audio transcoding) by showing javascript errors 54a7256c
  * reported on discord (thx)

▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2025-0125-1809 `v1.16.10` iOS9 is fine too

## 🧪 new features

* support audio playback on *really old* apple devices c9eba39e
  * will now transcode to mp3 when necessary, since iOS didn't support opus-in-caf before iOS 11
* support audio playback on *future* apple devices 28c9de3f 95390b65
  * iOS 17.5 introduced support for opus-in-weba (like webp just audio instead) and, unlike caf, this intentionally supports vbr-opus (awesome)
  * ...but the current code in iOS is too buggy, so this new format is default-disabled and we'll stick to caf for now fff38f48
* ZeroMQ event-hooks can reject uploads 3a5c1d9f
  * see [the example zmq listener](https://github.com/9001/copyparty/blob/1dace720/bin/zmq-recv.py#L26-L28)
* chat with ZeroMQ event-hooks from javascript cdd3b67a
  * replies from ZMQ REP servers are included in the msg-to-log responses
  * which makes [this joke](https://github.com/9001/copyparty/blob/hovudstraum/bin/hooks/usb-eject.py) possible f38c7543

## 🩹 bugfixes

* nope

## 🔧 other changes

* option to restrict the recent-uploads listing to admins-only b8b5214f

▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2025-0122-2326 `v1.16.9` ZeroMQ says hello

## 🧪 new features

* event-hooks can send zeromq / zmq / 0mq messages; see [readme](https://github.com/9001/copyparty#zeromq) or `--help-hooks` for examples d9db1534
* new volflags to specify the [allow-tag](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Permissions-Policy#iframes) of the markdown/logue sandbox, to allow fullscreen and such (see `--help-flags`) 6a0aaaf0
* new volflag `nosparse` for possibly-better performance in very rare and specific scenarios 917380dd
  * only enable this if you're uploading to s3 or something like that, and do plenty of benchmarking to make sure that it actually improved performance instead of making it worse

## 🩹 bugfixes

* restrict max-length of filekeys to 72 characters e0cac6fd
* the hash-calculator mode of the commandline uploader produced incorrect whole-file hashes 4c04798a
  * each chunk (`--chs`) was okay, but the final sum was not

## 🔧 other changes

* selftest the xml-parser on startup with malicious xml b2e8bf6e
  * just in case a future python-version suddenly makes it unsafe somehow
* disable some features if a dangerously misconfigured reverseproxy is detected 3f84b0a0
* the download-as-zip feature now defaults to utf8 filenames 1231ce19

▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2025-0111-1611 `v1.16.8` android boost

## 🧪 new features

* 10x faster file hashing in android-chrome ec507889
  * on a recent pixel, speed went from 13 to 139 MiB/s
  * android's sandboxing makes small reads expensive, so do bigger reads instead
  * so the browser-tab will use more RAM on android now, maybe around 200 MiB
  * this only affects chrome-based browsers on android, not firefox
* PUT/multipart uploads: request-header `Accept: json` makes it return json instead of html, just like `?j` ce0e5be4
* add config examples for [ishare](https://isharemac.app/), a MacOS screenshot utility inspired by ShareX 0c0d6b2b
  * also includes a bug-workaround for [ishare#107](https://github.com/castdrian/ishare/issues/107) - copyparty will now include a toplevel json property `fileurl` in the response if exactly one file was uploaded
  * the [connect-page](https://a.ocv.me/?hc) generates an appropriate `copyparty.iscu` for ishare; [it looks like this](https://github.com/user-attachments/assets/820730ad-2319-4912-8eb2-733755a4cf54)

## 🩹 bugfixes

* fix a potential upload deadlock when...
  * ...the database (`-e2d`) is **not** enabled for any volume, and...
  * ...either the shares feature, or user-changeable passwords, is enabled 9e542cf8
* when loading the partial-uploads registry on startup, a cosmetic desync could occur 467acb47

## 🔧 other changes

* remove some deprecated properties in partial-upload metadata aa2a8fa2
  * v1.15.7 is now the oldest version which still has any chance of reading a modern up2k.snap
* #129 added howto: [using webdav when copyparty is behind IdP](https://github.com/9001/copyparty/blob/hovudstraum/docs/idp.md#connecting-webdav-clients) -- thanks @wuast94 !
* added howto: [install copyparty on a synology nas](https://github.com/9001/copyparty/blob/hovudstraum/docs/synology-dsm.md) 21f93042
* more examples in the connect-page: 278258ee fb139697
  * config-file for sharex on windows
  * config-file for ishare on macos
  * script for flameshot on linux
* #75 add recommendation to use the [kamelåså project](https://github.com/steinuil/kameloso) instead of copyparty's [very-bad-idea.py](https://github.com/9001/copyparty/tree/hovudstraum/bin/mtag#dangerous-plugins) 9f84dc42
* more reverse-proxy examples (haproxy, lighttpd, traefik, caddy) and improved nginx performance ac0a2da3
  * readme has a [performance comparison](https://github.com/9001/copyparty?tab=readme-ov-file#reverse-proxy-performance) -- `haproxy > caddy > traefik > nginx > apache > lighttpd`
* copyparty.exe: updated pillow 244e952f

▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1223-0005 `v1.16.7` an idp fix for xmas

# ☃️🎄 **there is still time** 🎅🎁

❄️❄️❄️ please [enjoy some appropriate music](https://a.ocv.me/pub/demo/music/.bonus/#af-55d4554d) -- you'll probably like this more than the idp thing honestly ❄️❄️❄️

## 🧪 new features

* more improvements to the recent-uploads feature 87598dcd
  * move html rendering to clientside
  * any changes to the filter-text apply in real-time
  * loads 50% faster, reduces server-load by 30%
  * inhibits search engines from indexing it

## 🩹 bugfixes

* using idp without e2d could mess with uploads dd6e9ea7
* u2c (commandline uploader): fix window title 946a8c5b
* mDNS/SSDP: fix incorrect log colors when multiple primary IPs are lost 552897ab

## 🔧 other changes

* ui: make it more obvious that the volume-control is a volume-control 7f044372
* copyparty.exe: update deps (jinja2, markupsafe, pyinstaller) c0dacbc4
* improve safety of custom plugins 988a7223
  * if you've made your own plugins which expect certain values (host-header, filekeys) to be html-safe, then you'll want to upgrade
  * also fixes rss-feed xml if password contains special characters

▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1219-0037 `v1.16.6` merry \x58mas

# ☃️🎄 **it is time** 🎅🎁

❄️❄️❄️ please [enjoy some appropriate music](https://a.ocv.me/pub/demo/music/.bonus/#af-55d4554d) (trust me on this one, you won't regret it) ❄️❄️❄️

## 🧪 new features

* [list of recent uploads](https://a.ocv.me/?ru) eaa4b04a
  * new button in the controlpanel; can be disabled with `--no-ups-page`
  * only users with the dot-permission can see dotfiles
  * only admins can see uploader-ip and upload-times
    * enable `--ups-when` to let all users see upload-times
* #125 log decoded request-URLs 73f7249c
  * non-ascii filenames would make the accesslog a wall of `%E5%B9%BB%E6%83%B3%E9%83%B7`, so print [the decoded URL](https://github.com/user-attachments/assets/9d411183-30f3-4cb2-a880-84cf18011183) in addition to the original one, which is left as-is for debugging purposes

## 🩹 bugfixes

* #126 improve dotfile handling 4c4e48ba
  * it was impossible to delete a folder which contained hidden files if the user did not have the permission to see hidden files
  * this would also affect moving, renaming, and copying folders, in which case the dotfiles would not be carried over to the new location
  * now, dotfiles are always deleted, and always moved/copied into a new destination, on the condition that this is safe -- if the user has the dotfile permission in the target location but not in the source location, the dotfiles will be left behind to avoid accidentally making them browsable
* ux: cosmetic eta/idle-timer fixes 01a3eb29

## 🔧 other changes

* warn on ambiguous comments in config files da5ad2ab
* avoid writing mojibake to the log 3051b131
  * use `\x`-encoding for unprintable text

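The `\x`-encoding idea can be sketched in a few lines; this is a minimal illustration of the technique, not copyparty's actual logger:

```javascript
// escape unprintable characters as \x-sequences so log lines stay
// intact even when a filename contains control characters;
// legitimate non-ascii text is passed through untouched
function vis(txt) {
    var ret = '';
    for (var ch of txt) {
        var cp = ch.codePointAt(0);
        ret += (cp < 32 || cp == 127)
            ? '\\x' + cp.toString(16).padStart(2, '0')
            : ch;
    }
    return ret;
}

console.log(vis('song\x07.mp3'));  // song\x07.mp3  (literal backslash-x)
```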
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1211-2236 `v1.16.5` 4chrome

## 🧪 new features

* #124 add workaround for a chrome bug (crash during upload) 24ce46b3
  * chrome and chromium-based browsers could OOM
  * https://issues.chromium.org/issues/383568268
* #122 "hybrid IdP", regular users can still auth while [IdP](https://github.com/9001/copyparty#identity-providers) is enabled 64501fd7
  * previously, enabling IdP would entirely disable password-based login
  * now, password-auth is attempted for requests without a valid IdP header

## 🩹 bugfixes

* the terminal window title would only change if `--no-ansi` was specified, which is exactly the opposite of what it should be (and now is) doing db3c0b09

## 🔧 other changes

* mDNS: better log messages when several IPs are added/removed a49bf81f
* webdeps: update dompurify 06868606

----

this release includes a build of [copyparty-winpe64.exe](https://github.com/9001/copyparty/releases/download/v1.16.5/copyparty-winpe64.exe) since the last one was [almost a year ago](https://github.com/9001/copyparty/releases/tag/v1.10.1)

* winpe64.exe is only for *very* specific usecases, you almost definitely *do not* want to download it, please just grab the regular [copyparty.exe](https://github.com/9001/copyparty/releases/latest/download/copyparty.exe) instead (works on all 64bit machines running win8 or newer)
* the only difference between winpe64.exe and [copyparty32.exe](https://github.com/9001/copyparty/releases/latest/download/copyparty32.exe) is that winpe64.exe works in the win7x64 PE (rescue-env), which makes it *almost* entirely useless, and every bit as dangerous to use as copyparty32.exe

▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1207-0024 `v1.16.4` ux is hard

## 🧪 new features

* improve the upload ui so it explains how to abort an unfinished upload when someone uploads to the wrong folder by accident be6afe2d
  * also reduces serverload slightly when cloning an incoming file to multiple destinations
* u2c (commandline uploader): windows improvements 91637800
  * now supports globbing (filename wildcards) on windows
  * progressbar in the windows taskbar (requires conemu or the "new windows terminal")

▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1204-0003 `v1.16.3` 120%

## 🧪 new features

* #120 add option `--srch-excl` and volflag `srch_excl` for excluding certain paths from search results 697a4fa8
* mDNS: add workaround for https://github.com/avahi/avahi/issues/379 6c1cf68b 94d1924f
  * Avahi mDNS Reflection, sometimes used in intricate LAN setups, doesn't understand NSEC records and corrupts them
  * the workaround makes copyparty able to read the corrupted packets, but clients without a similar workaround will require either `--zm4` or `--zm6` so copyparty doesn't include the usual NSEC records
    * this is mentioned in a very loud warning in the logs when necessary
* mDNS: option to silently ignore buggy devices instead of spamming the log with parser errors 395af051
* webdav: support listing unmapped root with infinite recursion (Depth:0) 21a3f369
* embed current sort config into media URLs (gallery/music) 0f257c93 4cfdc4c5 01670827
  * ensures that anyone clicking your link will see the files in the same order as you
  * can be configured serverside (`--hsortn`, volflag `hsortn`) and clientside (`#sort` in settings)
* URL and UI options to disable checksum calculation of PUT, bup, basic uploads c5a000d2
  * also allows [choosing either md5, sha1, sha256, or blake2](https://github.com/9001/copyparty/blob/hovudstraum/docs/devnotes.md#write) instead of the default sha512
  * can give uploads a nice speed boost when copyparty is running on a potato

## 🩹 bugfixes

* webdav: more correct login challenge 2ce82339
  * the previous behavior could make some clients reluctant to send the password
* #120 forget metadata of all files (including uploads) when shadowed d168b2ac
  * thanks to @Gremious for all the debugging to narrow this down!
* #120 drop volume caches if relevant config is changed (mainly indexing filters) 2f83c6c7
* #121 couldn't access arbitrary toplevel files from accounts with `h` permission 1f5f42f2

## 🔧 other changes

* exclude thumbnails from accesslog by default 9082c470
* filesearch: show a final summary of time-elapsed and average hashing speed 8a631f04
* improve phrasing of debug messages during indexing at startup 127f414e
* `--license` no longer depends on opensource.org at build time 33c4ccff
* update deps 6cedcfbf
  * copyparty.exe: python 3.12.7 => 3.12.8
  * webdeps: hashwasm, dompurify

▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1123-2336 `v1.16.2` webdav upload fix

## 🧪 new features

* add `--nsort` and volflag `nsort` to default-enable natural sort of filenames with leading digits 8f7ffcf3
* video-player: support `.mov` files which contain browser-native codecs 2d0cbdf1

## 🩹 bugfixes

* #119 v1.16.0 broke webdav uploads from rclone and possibly other clients 7dfbfc72
  * a collection of webdav unittests will be added soon to prevent similar issues in the future
* #118 ip-ranges can be mixed with `lan` when specifying the list of trusted proxies for `x-forwarded-for` with `--xff-src`
  * found and fixed by @codemicro (thx!) 0e31cfa7
* ux:
  * in the grid-view, markdown files would open in the generic text viewer 520ac8f4
  * qr-codes (create-share, view-share) didn't render on chrome db069c3d
  * qr-codes could cause layout-shifting 5afb562a
  * fix layout-shifting for ongoing downloads in controlpanel 9c8507a0
  * cosmetic eta jank b10843d0

## 🔧 other changes

* up to 7% faster folder listings due to refactoring for more ux knobs 0c43b592
* fix resource leaks (only affected tests/debug) 2ab8924e

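Natural sort means numeric runs in a filename compare as numbers rather than character-by-character, so `2.flac` lands before `10.flac`. A sketch of the idea (illustration only; copyparty's actual comparator may differ):

```javascript
// split a name into digit / non-digit runs; numeric runs become ints
function natKey(name) {
    return name.split(/(\d+)/).map(t => /^\d+$/.test(t) ? +t : t);
}

// compare two names run-by-run, numerically where both runs are numbers
function natCmp(a, b) {
    var ka = natKey(a), kb = natKey(b);
    for (var i = 0; i < Math.max(ka.length, kb.length); i++) {
        if (ka[i] === kb[i]) continue;
        if (typeof ka[i] == 'number' && typeof kb[i] == 'number')
            return ka[i] - kb[i];
        return ('' + ka[i]) < ('' + kb[i]) ? -1 : 1;
    }
    return 0;
}

var names = ['10.flac', '2.flac', '1.flac'];
console.log(names.slice().sort());        // [ '1.flac', '10.flac', '2.flac' ]
console.log(names.slice().sort(natCmp));  // [ '1.flac', '2.flac', '10.flac' ]
```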
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1115-2218 `v1.16.1` cbz thumbnails

## 🧪 new features

* thumbnails of .cbz manga archives 4d15dd6e

## 🩹 bugfixes

* when running with `-j0`, download-ETA could break in complex volume layouts 10fc4768
* linking to the image gallery didn't quite work if multiselect was enabled 56a04996
* password-hashing parameters (cpu/ram cost) could not be customized 1f177528
  * the defaults must be perfect considering nobody ever tried changing them ¯\\_(ツ)_/¯

## 🔧 other changes

* add intentional crash on startup if two volumes are configured to use the same histpath 2b63d7d1
  * prevents funky deadlocks and an eventual database loss in case of a no-thoughts-head-empty moment, purely hypothetical of course 🗿

▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1110-1932 `v1.16.0` COPYparty

## 🧪 new features

* #46 #115 copy/paste files and folders cacec9c1
  * cut/paste still exists, but now you can copy too
  * with a UI to rename files in case of filename collisions 56317b00
  * files are created according to the dedup settings in the target volume (either full copies or symlinks/hardlinks)
* show currently active downloads in the controlpanel 8aba5aed
  * can be made admin-only with `--dl-list=1` or disabled with `--dl-list=0`
  * hides filenames of hidden files, and files from volumes where the viewer doesn't have access
* #114 async reinit on new [IdP users](https://github.com/9001/copyparty#identity-providers) 44ee07f0
  * new IdP users can now always auth, even while a filesystem reindex is running
* ux:
  * remember batch-rename settings from last time 6a8d5e17
  * URL parameters to force grid/thumbs on/off 5718caa9

## 🩹 bugfixes

* folders that fail to list due to a corrupt HDD/filesystem will now return a 404 instead of an empty listing 119e88d8
  * also fixes similar issues in u2c and partyfuse
* u2c (commandline uploader): detect and adapt to proxies with short connection keepalives c784e528
* ui/ux:
  * show the "switch-to-https" button in 404-messages too efd8a32e
  * the folder-loading indicator could steal keyboard focus d9962f65
  * hotkey-help was very trigger-happy 71d9e010

## 🔧 other changes

* choose more conservative defaults when server has less than 1 GiB RAM 2bf9055c
  * runs okay down to 128 MiB, but thumbnails die below 256 MiB
* update the [comparison to similar software](https://github.com/9001/copyparty/blob/hovudstraum/docs/versus.md) after years of optimizations on both sides 0ce7cf5e

▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1027-0751 `v1.15.10` temporary upload links

## 🧪 new features

* [shares](https://github.com/9001/copyparty#shares) can now be uploaded into, and unpost works too 4bdcbc1c
  * useful to create temporary URLs for other people to upload to
  * shares can be write-only, so visitors can't browse or see any files
* #110 HTTP 304 (caching):
  * support `If-Range` for HTTP 206 159f51b1
  * add server-side and client-side options to force-disable cache dd6dbdd9
    * `--no304=1` shows a button in the controlpanel to disable caching
    * `--no304=2` makes that button auto-enabled
    * even when `--no304` is not specified, accessing the URL `/?setck=no304=y` force-disables cache
    * when cache is force-disabled, browsers will waste a lot of network traffic / data usage
      * might help to avoid bugs in browsers or proxies, for example if media files suddenly stop loading
      * but such bugs should be exceedingly rare, so do not enable this unless actually necessary

## 🩹 bugfixes

* #110 HTTP 304 (caching):
  * remove `Content-Length` and `Content-Type` response headers from 304 replies 91240236
    * browsers don't need these, and some middlewares might get confused if they're present
* #113 fix crash on startup if `-j0` was combined with `--ipa` or `--ipu` 3a0d882c
* #111 fix javascript crash if `--u2sz` was set to an invalid value b13899c6

## 🔧 other changes

* #110 HTTP 304 (caching):
  * never automatically enable k304 because the `Vary` header killed support for caching in msie anyways 63013cc5
  * change time comparison for `If-Modified-Since` to require an exact timestamp match, instead of the intended "modified since". This technically violates the http-spec, but should be safer for backdating file mtimes 159f51b1
* new option `--ohead` to log response headers 7678a91b
* added [nintendo 3ds](https://github.com/user-attachments/assets/88deab3d-6cad-4017-8841-2f041472b853) to the [list of supported browsers](https://github.com/9001/copyparty#browser-support) cb81f0ad

|
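The two 304 tweaks above (exact-timestamp `If-Modified-Since`, plus the header-free 304 replies from the bugfix section) can be sketched roughly as follows; `maybe_304` is a hypothetical helper for illustration, not copyparty's actual code:

```python
import email.utils
import os

def maybe_304(path, if_modified_since):
    # exact-match variant of If-Modified-Since: only a timestamp identical
    # to the file's current mtime yields a 304, so backdating the mtime
    # makes the resource look fresh again (stricter than the http-spec)
    mtime = int(os.path.getmtime(path))
    if if_modified_since:
        cli = email.utils.parsedate_to_datetime(if_modified_since)
        if int(cli.timestamp()) == mtime:
            # 304 replies carry no Content-Length / Content-Type
            return 304, {}
    # full response; Last-Modified lets the client revalidate next time
    return 200, {"Last-Modified": email.utils.formatdate(mtime, usegmt=True)}
```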
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1018-2342 `v1.15.9` rss server

## 🧪 new features

* #109 [rss feed generator](https://github.com/9001/copyparty#rss-feeds) 7ffd805a
  * monitor folders recursively with RSS readers

## 🩹 bugfixes

* #107 `--df` diskspace limits were incompatible with webdav 2a570bb4
* #108 up2k javascript crash (only affected the Chinese translation) a7e2a0c9

## 🔧 other changes

* up2k: detect buggy webworkers 5ca8f070
* up2k: improve upload retry/timeout logic a9b4436c
  * js: make handshake retries more aggressive
  * u2c: reduce chunks timeout + ^
  * main: reduce tcp timeout to 128sec (js is 42s)
  * httpcli: less confusing log messages


▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

# 2024-1016-2153 `v1.15.8` the sky is the limit

## 🧪 new features

* subchunks; avoid the Cloudflare filesize limit entirely fc8298c4 48147c07
  * the previous max filesize was `383.9 GiB`, now only the sky is the limit
  * if you're using another proxy with a more restrictive limit than Cloudflare's 100 MiB, for example 64 MiB, then use `--u2sz 1,64,64`
* m4v videos can be played in the gallery ff0a71f2

## 🩹 bugfixes

* up2k: uploading duplicate files could initially fail (but would succeed after a few automatic retries) due to a toctou 114b71b7
* [u2c](https://github.com/9001/copyparty/blob/hovudstraum/bin/README.md#u2cpy) / commandline uploader:
  * directory scanner got stuck if it found a FIFO cba1878b
  * excessive number of FDs when uploading large files 65a2b6a2
  * chunksize calculation; only affected files exactly 128 GiB large a2e037d6
  * support filenames with newlines and invalid utf-8 b2770a20
    * invalid utf-8 is replaced by `?` when such filenames hit the server

## 🔧 other changes

* don't show the toast countdown bar if the duration is infinite 22dfc6ec
* chickenbit to disable the browser's built-in sha512 implementation and force the bundled wasm instead d715479e


▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

# 2024-1013-2244 `v1.15.7` the 'a' in "ip address" stands for authentication

## 🧪 new features

* [cidr-based autologin](https://github.com/9001/copyparty#ip-auth) b7f9bf5a
  * map a cidr ip-range to a username; anyone connecting from that ip-range will autologin as that user
  * thx to @byteturtle for the idea!
* [u2c](https://github.com/9001/copyparty/blob/hovudstraum/bin/README.md#u2cpy) / commandline uploader:
  * option `--chs` to list individual chunk hashes cf1b7562
  * fix progress indicator when resuming an upload 53ffd245
* up2k: verbose logging of detected/corrected bitflips ee628363
  * *foreshadowing intensifies* (story still developing)

## 🩹 bugfixes

* up2k with database disabled / running without `-e2d` 705f598b
  * respect `noforget` when loading snaps
  * ...but actually forget deleted files otherwise
  * snap-loader adds empty need/hash entries as necessary

## 🔧 other changes

* authed users can now unpost recent uploads of unauthed users from the same IP 22b58e31
  * would have become problematic now that cidr-based autologin is a thing


▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

# 2024-1011-2256 `v1.15.6` preadme

## 🧪 new features

* #105 files named `preadme.md` appear at the top of directory listings 1d68acf8
* entirely disable dedup with `--no-clone` / volflag `noclone` 3d7facd7 6b7ebdb7
  * even if a file is known to exist on the server HDD, let the client continue uploading instead of reusing the existing data
  * it almost never makes sense to enable this, unless you're using something like S3 Glacier storage where reading is really expensive but writing is cheap

## 🩹 bugfixes

* up2k jank after detecting a bitflip or network glitch 4a4ec88d
  * instead of resuming the interrupted upload like it should, the upload client could get stuck or start over
* #104 support viewing dotfile documents when dotfiles are hidden 9ccd8bb3
* fix a buttload of typos 6adc778d 1e7697b5


▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

# 2024-1005-1803 `v1.15.5` pyz all the cores

## 🩹 bugfixes

* the pkgres / pyz changes in v1.15.4 broke multiprocessing c3985537

## 🔧 other changes

* pyz: drop easymde to save some bytes and make it a tiny bit faster


▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

# 2024-1004-2319 `v1.15.4` hermetic

## 🧪 new features

* [u2c](https://github.com/9001/copyparty/tree/hovudstraum/bin#u2cpy) (commandline uploader):
  * remove all dependencies; now entirely self-contained 9daeed92
    * made it 3x faster for small files, 2x faster in general
  * improve `-x` behavior to not traverse into excluded folders b9c5c7bb
* [partyfuse](https://github.com/9001/copyparty/tree/hovudstraum/bin#partyfusepy) (fuse client; mount a copyparty server as a local filesystem):
  * 9x faster directory listings 03f0f994
  * 4x faster downloads on high-latency connections 847a2bdc
  * embed `fuse.py` (its only dependency) -- can be downloaded from the connect-page 44f2b63e
  * support mounting nginx and iis servers too, not just copyparty c81e8984
* reduce ram usage down to 10% when running without `-e2d` 88a1c5ca
  * does not affect servers with `-e2d` enabled (was already optimal)
* share folders as qr-codes e4542064
  * when creating a share, you get a qr-code for quick access
  * buttons in the shares controlpanel to reshow it, optionally with the password embedded into the qr-code
* #98 read embedded webdeps and templates with `pkg_resources`; thx @shizmob! a462a644 d866841c
  * [copyparty.pyz](https://github.com/9001/copyparty/releases/latest/download/copyparty.pyz) now runs straight from the source file without unpacking anything to disk
  * ...and is now much slower at returning resource GETs, but that is fine
* og / opengraph / discord embeds: support filekeys ae982006
* add option for natural sorting; thx @oshiteku! 9804f25d
* eyecandy timer bar on toasts 0dfe1d5b
* smb-server: impacket 0.12 is out! dc4d0d8e
  * it is now *possible* to list folders with more than 400 files (but it's REALLY slow)

## 🩹 bugfixes

* webdav:
  * support `<allprop/>` in propfind dc157fa2
  * list volumes when root is unmapped 480ac254
    * previously, clients couldn't connect to the root of a copyparty server unless a volume existed at `/`
* #101 show `.prologue.html` and `.epilogue.html` in directory listings even if the user cannot see hidden files 21be82ef
* #100 confusing toast when pressing F2 without selecting anything 2715ee6c
* fix prometheus metrics 678675a9

## 🔧 other changes

* #100 allow uploading `.prologue.html` and `.epilogue.html` 19a5985f
* #102 make translation easier when running in docker


▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

# 2024-0916-0107 `v1.15.3` incoming eta

## 🧪 new features



* incoming uploads (and their ETA) are shown in the controlpanel 609c5921 844194ee
* list total directory sizes 427597b6
  * show the total size and number of files of each directory in listings
  * makes browsing a bit slower (up to 30%) so it can be disabled with `--no-dirsz`
  * sizes are calculated during startup, so it requires `-e2dsa`
  * file-uploads will recalculate the sizes immediately, but a full rescan is necessary to see changes caused by moves/deletes
* optimizations:
  * reduce broker overhead when multiprocessing is disabled 4e75534e
    * should reduce cpu usage from uploads, thumbnails, prometheus metrics
  * reduce cpu usage from downloading thumbnails 7d64879b

## 🩹 bugfixes

* fix sqlite indexes d67e9cc5
  * upload handshakes would get exponentially slow if a volume had more than 200'000 files
  * reindexing on startup can be 150x faster in some rare cases (same filename in MANY folders)
  * the database is now around 10% larger (likely worst-case)
* misc ux: 58835b2b
  * shares: show media tags
  * html hydrator assumed a folder named `foo.txt` was a doc
  * due to sessions, use `pwd` as the password placeholder on services

## 🔧 other changes

* add an [example](https://github.com/9001/copyparty/tree/hovudstraum/contrib#flameshotsh) for uploading screenshots from linux with flameshot 1c2acdc9
* [nginx example](https://github.com/9001/copyparty/blob/hovudstraum/contrib/nginx/copyparty.conf): use unix-sockets for higher performance a5ce1032
* #97 the chinese translation was improved, thx again @ultwcz 7a573caf

## 🗿 known issues

* prometheus metrics are busted
  * **workaround:** disable monitoring of volume status with `--nos-vst`


▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

# 2024-0909-2343 `v1.15.1` session

<img src="https://github.com/9001/copyparty/raw/hovudstraum/docs/logo.svg" width="250" align="right"/>

blessed by ⑨, this release is [certified strong](https://github.com/user-attachments/assets/05459032-736c-4b9a-9ade-a0044461194a) ([artist](https://x.com/hcnone))

## new features

* login sessions b5405174
  * a random session cookie is generated for each known user, replacing the previous plaintext login cookie
  * the logout button will nuke the session on all clients where that user is logged in
  * the sessions are stored in the database at `--ses-db`, default `~/.config/copyparty/sessions.db` (docker uses `/cfg/sessions.db` similar to the other runtime configs)
  * if you run multiple copyparty instances, then much like [shares](https://github.com/9001/copyparty#shares) and [user-changeable passwords](https://github.com/9001/copyparty#user-changeable-passwords) you'll want a separate db for each instance
  * can be mostly disabled with `--no-ses` if it turns out to be buggy

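The session flow roughly behaves like this sketch (the names and the in-memory dict are illustrative; copyparty persists the real sessions in `--ses-db`):

```python
import secrets

sessions = {}  # session-token -> username

def login(username):
    # a random, unguessable token replaces the old plaintext password cookie
    token = secrets.token_urlsafe(32)
    sessions[token] = username
    return token  # handed to the client as its login cookie

def logout(username):
    # dropping every token server-side logs the user out on all clients
    for t in [t for t, u in sessions.items() if u == username]:
        del sessions[t]
```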

## bugfixes

* v1.13.8 broke the u2c `--ow` option to replace/overwrite files on the server during upload 6eee6015


▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

# 2024-0908-1925 `v1.15.0` fill the drives

## recent important news

* [v1.15.0 (2024-09-08)](https://github.com/9001/copyparty/releases/tag/v1.15.0) changed upload deduplication to be default-disabled
* [v1.14.3 (2024-08-30)](https://github.com/9001/copyparty/releases/tag/v1.14.3) fixed a bug that was introduced in v1.13.8 (2024-08-13); this bug could lead to **data loss** -- see the v1.14.3 release-notes for details

# upload deduplication now disabled by default

because many people found the behavior surprising. This also makes it easier to use copyparty together with other software, since there is no risk of damage to symlinks if there are no symlinks to damage

to enable deduplication, use either `--dedup` (old-default, symlink-based), `--hardlink` (uses hardlinks when possible), or `--hardlink-only` (disallows symlinks). To choose the approach that fits your usecase, see [file deduplication](https://github.com/9001/copyparty#file-deduplication) in the readme

verification of local file consistency was also added; this happens when someone uploads a dupe, to ensure that no other software has modified the local file since the last reindex. This unfortunately makes uploading of duplicate files much slower, and can be disabled with `--safe-dedup 1` if you know that only copyparty will be modifying the filesystem

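As a toy model of the modes described above (illustrative only, assuming whole-file hashing; not copyparty's actual code):

```python
import hashlib
import os

def store(data, dst, seen, mode="copy"):
    # write an upload to dst; if identical data was stored before and a
    # dedup mode is enabled, link to the first copy instead of rewriting it
    h = hashlib.sha512(data).hexdigest()
    if h in seen and mode != "copy":
        # "symlink" models --dedup, "hardlink" models --hardlink
        (os.symlink if mode == "symlink" else os.link)(seen[h], dst)
        return "deduped"
    with open(dst, "wb") as f:
        f.write(data)
    seen.setdefault(h, os.path.abspath(dst))
    return "stored"
```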

## new features

* dedup improvements:
  * verify consistency of local files before using them as dedup source 6e671c52
    * if a local file has been altered by other software since the last reindexing, this will now be detected
* u2c (commandline uploader): add a mode to print hashes of local files 08848be7
  * if you've lost a file but you know its `wark` (file identifier), you can now use u2c.exe to scan your whole filesystem for it: `u2c - .`
* #96 use local timezone in log messages b599fbae

## bugfixes

* dedup fixes:
  * symlinks could break if moved/renamed inside a volume where deduplication was disabled after some files within had already been deduplicated 4401de04
  * when moving/renaming, only consider symlinks between volumes if the `xlink` volflag is set b5ad9369
* database consistency verifier (`-e2vp`):
  * support filenames with newlines, and warn about missing files b0de84cb
* opengraph/`--og`: fix viewing textfiles e5a836cb
* up2k.js: fix a confusing message when uploading many copies of the same file f1130db1

## other changes

* disable upload deduplication by default a2e0f986
* up2k.js: increase handshake timeout to several minutes because of the dedup changes c5988a04
* copyparty.exe: update to python 3.12.6


▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

# 2024-0902-0108 `v1.14.4` another

## recent important news

* [v1.14.3 (2024-08-30)](https://github.com/9001/copyparty/releases/tag/v1.14.3) fixed a bug that was introduced in v1.13.8 (2024-08-13); this bug could lead to **data loss** -- see the v1.14.3 release-notes for details

## bugfixes

* a network glitch could cause the uploader UI to panic d9e95262


▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

# 2024-0830-2311 `v1.14.3` important dedup fix

docs/chunksizes.py (new executable file, 48 lines):

```python
#!/usr/bin/env python3

# there's far better ways to do this but its 4am and i dont wanna think

# just pypy it my dude

import math


def humansize(sz, terse=False):
    for unit in ["B", "KiB", "MiB", "GiB", "TiB"]:
        if sz < 1024:
            break

        sz /= 1024.0

    ret = " ".join([str(sz)[:4].rstrip("."), unit])

    if not terse:
        return ret

    return ret.replace("iB", "").replace(" ", "")


def up2k_chunksize(filesize):
    chunksize = 1024 * 1024
    stepsize = 512 * 1024
    while True:
        for mul in [1, 2]:
            nchunks = math.ceil(filesize * 1.0 / chunksize)
            if nchunks <= 256 or (chunksize >= 32 * 1024 * 1024 and nchunks <= 4096):
                return chunksize

            chunksize += stepsize
            stepsize *= mul


def main():
    prev = 1048576
    n = n0 = 524288
    while True:
        csz = up2k_chunksize(n)
        if csz > prev:
            print(f"| {n-n0:>18_} | {humansize(n-n0):>8} | {prev:>13_} | {humansize(prev):>8} |".replace("_", " "))
            prev = csz
        n += n0


main()
```
@@ -6,6 +6,7 @@
* [up2k](#up2k) - quick outline of the up2k protocol
  * [why not tus](#why-not-tus) - I didn't know about [tus](https://tus.io/)
  * [why chunk-hashes](#why-chunk-hashes) - a single sha512 would be better, right?
  * [list of chunk-sizes](#list-of-chunk-sizes) - specific chunksizes are enforced
* [hashed passwords](#hashed-passwords) - regarding the curious decisions
* [http api](#http-api)
  * [read](#read)
@@ -95,6 +96,44 @@ hashwasm would solve the streaming issue but reduces hashing speed for sha512 (x
* blake2 might be a better choice since xxh is non-cryptographic, but that gets ~15 MiB/s on slower androids


### list of chunk-sizes

specific chunksizes are enforced depending on total filesize

each pair of filesize/chunksize is the largest filesize which will use its listed chunksize; a 512 MiB file will use chunksize 2 MiB, but if the file is one byte larger than 512 MiB then it becomes 3 MiB

for the purpose of performance (or dodging arbitrary proxy limitations), it is possible to upload combined and/or partial chunks using stitching and/or subchunks respectively

| filesize (bytes)   | filesize | chunksize (bytes) | chunksize |
| -----------------: | -------: | ----------------: | --------: |
|        268 435 456 |  256 MiB |         1 048 576 |   1.0 MiB |
|        402 653 184 |  384 MiB |         1 572 864 |   1.5 MiB |
|        536 870 912 |  512 MiB |         2 097 152 |   2.0 MiB |
|        805 306 368 |  768 MiB |         3 145 728 |   3.0 MiB |
|      1 073 741 824 |  1.0 GiB |         4 194 304 |   4.0 MiB |
|      1 610 612 736 |  1.5 GiB |         6 291 456 |   6.0 MiB |
|      2 147 483 648 |  2.0 GiB |         8 388 608 |   8.0 MiB |
|      3 221 225 472 |  3.0 GiB |        12 582 912 |    12 MiB |
|      4 294 967 296 |  4.0 GiB |        16 777 216 |    16 MiB |
|      6 442 450 944 |  6.0 GiB |        25 165 824 |    24 MiB |
|    137 438 953 472 |  128 GiB |        33 554 432 |    32 MiB |
|    206 158 430 208 |  192 GiB |        50 331 648 |    48 MiB |
|    274 877 906 944 |  256 GiB |        67 108 864 |    64 MiB |
|    412 316 860 416 |  384 GiB |       100 663 296 |    96 MiB |
|    549 755 813 888 |  512 GiB |       134 217 728 |   128 MiB |
|    824 633 720 832 |  768 GiB |       201 326 592 |   192 MiB |
|  1 099 511 627 776 |  1.0 TiB |       268 435 456 |   256 MiB |
|  1 649 267 441 664 |  1.5 TiB |       402 653 184 |   384 MiB |
|  2 199 023 255 552 |  2.0 TiB |       536 870 912 |   512 MiB |
|  3 298 534 883 328 |  3.0 TiB |       805 306 368 |   768 MiB |
|  4 398 046 511 104 |  4.0 TiB |     1 073 741 824 |   1.0 GiB |
|  6 597 069 766 656 |  6.0 TiB |     1 610 612 736 |   1.5 GiB |
|  8 796 093 022 208 |  8.0 TiB |     2 147 483 648 |   2.0 GiB |
| 13 194 139 533 312 | 12.0 TiB |     3 221 225 472 |   3.0 GiB |
| 17 592 186 044 416 | 16.0 TiB |     4 294 967 296 |   4.0 GiB |
| 26 388 279 066 624 | 24.0 TiB |     6 442 450 944 |   6.0 GiB |
| 35 184 372 088 832 | 32.0 TiB |     8 589 934 592 |   8.0 GiB |
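The table can be spot-checked with the same chunksize calculation that `docs/chunksizes.py` uses:

```python
import math

def up2k_chunksize(filesize):
    # grow the chunksize until the file fits in 256 chunks,
    # or in 4096 chunks once the chunksize has reached 32 MiB
    chunksize = 1024 * 1024
    stepsize = 512 * 1024
    while True:
        for mul in [1, 2]:
            nchunks = math.ceil(filesize * 1.0 / chunksize)
            if nchunks <= 256 or (chunksize >= 32 * 1024 * 1024 and nchunks <= 4096):
                return chunksize
            chunksize += stepsize
            stepsize *= mul

assert up2k_chunksize(268435456) == 1048576        # 256 MiB -> 1.0 MiB
assert up2k_chunksize(536870912) == 2097152        # 512 MiB -> 2.0 MiB
assert up2k_chunksize(137438953472) == 33554432    # 128 GiB -> 32 MiB
```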

# hashed passwords

@@ -133,15 +172,19 @@ authenticate using header `Cookie: cppwd=foo` or url param `&pw=foo`
| GET | `?tar=xz:9` | ...as an xz-level-9 gnu-tar file |
| GET | `?tar=pax` | ...as a pax-tar file |
| GET | `?tar=pax,xz` | ...as an xz-level-1 pax-tar file |
| GET | `?zip` | ...as a zip file |
| GET | `?zip=dos` | ...as a WinXP-compatible zip file |
| GET | `?zip=crc` | ...as an MSDOS-compatible zip file |
| GET | `?tar&w` | pregenerate webp thumbnails |
| GET | `?tar&j` | pregenerate jpg thumbnails |
| GET | `?tar&p` | pregenerate audio waveforms |
| GET | `?shares` | list your shared files/folders |
| GET | `?dls` | show active downloads (do this as admin) |
| GET | `?ups` | show recent uploads from your IP |
| GET | `?ups&filter=f` | ...where URL contains `f` |
| GET | `?ru` | show all recent uploads |
| GET | `?ru&filter=f` | ...where URL contains `f` |
| GET | `?ru&j` | ...as json |
| GET | `?mime=foo` | specify return mimetype `foo` |
| GET | `?v` | render markdown file at URL |
| GET | `?v` | open image/video/audio in mediaplayer |
@@ -163,15 +206,21 @@ authenticate using header `Cookie: cppwd=foo` or url param `&pw=foo`

| method | params | result |
|--|--|--|
| POST | `?copy=/foo/bar` | copy the file/folder at URL to /foo/bar |
| POST | `?move=/foo/bar` | move/rename the file/folder at URL to /foo/bar |

| method | params | body | result |
|--|--|--|--|
| PUT | | (binary data) | upload into file at URL |
| PUT | `?j` | (binary data) | ...and reply with json |
| PUT | `?ck` | (binary data) | upload without checksum gen (faster) |
| PUT | `?ck=md5` | (binary data) | return md5 instead of sha512 |
| PUT | `?gz` | (binary data) | compress with gzip and write into file at URL |
| PUT | `?xz` | (binary data) | compress with xz and write into file at URL |
| mPOST | | `f=FILE` | upload `FILE` into the folder at URL |
| mPOST | `?j` | `f=FILE` | ...and reply with json |
| mPOST | `?ck` | `f=FILE` | ...and disable checksum gen (faster) |
| mPOST | `?ck=md5` | `f=FILE` | ...and return md5 instead of sha512 |
| mPOST | `?replace` | `f=FILE` | ...and overwrite existing files |
| mPOST | `?media` | `f=FILE` | ...and return medialink (not hotlink) |
| mPOST | | `act=mkdir`, `name=foo` | create directory `foo` at URL |
@@ -188,8 +237,15 @@ upload modifiers:

| http-header | url-param | effect |
|--|--|--|
| `Accept: url` | `want=url` | return just the file URL |
| `Accept: json` | `want=json` | return upload info as json; same as `?j` |
| `Rand: 4` | `rand=4` | generate random filename with 4 characters |
| `Life: 30` | `life=30` | delete file after 30 seconds |
| `CK: no` | `ck` | disable serverside checksum (maybe faster) |
| `CK: md5` | `ck=md5` | return md5 checksum instead of sha512 |
| `CK: sha1` | `ck=sha1` | return sha1 checksum |
| `CK: sha256` | `ck=sha256` | return sha256 checksum |
| `CK: b2` | `ck=b2` | return blake2b checksum |
| `CK: b2s` | `ck=b2s` | return blake2s checksum |

* `life` only has an effect if the volume has a lifetime, and the volume lifetime must be greater than the file's
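The `CK` algorithms above all map onto stdlib `hashlib`; the sketch below shows the digests for a sample body (hex-encoded here for illustration; the encoding copyparty actually replies with is not specified in this table):

```python
import hashlib

body = b"hello"
ck = {
    "sha512": hashlib.sha512(body).hexdigest(),  # the default
    "md5": hashlib.md5(body).hexdigest(),
    "sha1": hashlib.sha1(body).hexdigest(),
    "sha256": hashlib.sha256(body).hexdigest(),
    "b2": hashlib.blake2b(body).hexdigest(),
    "b2s": hashlib.blake2s(body).hexdigest(),
}
```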
@@ -208,6 +264,12 @@ upload modifiers:

| method | params | result |
|--|--|--|
| GET | `?pw=x` | logout |
| GET | `?grid` | ui: show grid-view |
| GET | `?imgs` | ui: show grid-view with thumbnails |
| GET | `?grid=0` | ui: show list-view |
| GET | `?imgs=0` | ui: show list-view |
| GET | `?thumb` | ui, grid-mode: show thumbnails |
| GET | `?thumb=0` | ui, grid-mode: show icons |
# event hooks

@@ -279,6 +341,8 @@ the rest is mostly optional; if you need a working env for vscode or similar
python3 -m venv .venv
. .venv/bin/activate
pip install jinja2 strip_hints  # MANDATORY
pip install argon2-cffi  # password hashing
pip install pyzmq  # send 0mq from hooks
pip install mutagen  # audio metadata
pip install pyftpdlib  # ftp server
pip install partftpy  # tftp server

@@ -16,7 +16,7 @@ open up notepad and save the following as `c:\users\you\documents\party.conf` (f
```yaml
[global]
  lo: ~/logs/cpp-%Y-%m%d.xz  # log to c:\users\you\logs\
  e2dsa, e2ts, z             # sets 3 flags; see explanation
  p: 80, 443                 # listen on ports 80 and 443, not 3923
  theme: 2                   # default theme: protonmail-monokai
  lang: nor                  # default language: viking
```

@@ -46,11 +46,10 @@ open up notepad and save the following as `c:\users\you\documents\party.conf` (f

### config explained: [global]

the `[global]` section accepts any config parameters [listed here](https://ocv.me/copyparty/helptext.html), also viewable by running copyparty (either the exe or the sfx.py) with `--help`, so this is the same as running copyparty with arguments `--lo c:\users\you\logs\copyparty-%Y-%m%d.xz -e2dsa -e2ts -z -p 80,443 --theme 2 --lang nor`
* `lo: ~/logs/cpp-%Y-%m%d.xz` writes compressed logs (the compression will make them delayed)
* `e2dsa` enables the file indexer, which enables searching and upload-undo
* `e2ts` enables music metadata indexing, making albums / titles etc. searchable too
  * but the improved upload speed from `e2dsa` is not affected
* `z` enables zeroconf, making the server available at `http://HOSTNAME.local/` from any other machine in the LAN
* `p: 80,443` listens on the ports `80` and `443` instead of the default `3923`
|||||||
22
docs/idp.md
22
docs/idp.md
@@ -20,3 +20,25 @@ this means that, if an IdP volume is located inside a folder that is readable by
|
|||||||
and likewise -- if the IdP volume is inside a folder that is only accessible by certain users, but the IdP volume is configured to allow access from unauthenticated users, then the contents of the volume will NOT be accessible until it is revived
|
and likewise -- if the IdP volume is inside a folder that is only accessible by certain users, but the IdP volume is configured to allow access from unauthenticated users, then the contents of the volume will NOT be accessible until it is revived
|
||||||
|
|
||||||
until this limitation is fixed (if ever), it is recommended to place IdP volumes inside an appropriate parent volume, so they can inherit acceptable permissions until their revival; see the "strategic volumes" at the bottom of [./examples/docker/idp/copyparty.conf](./examples/docker/idp/copyparty.conf)
|
until this limitation is fixed (if ever), it is recommended to place IdP volumes inside an appropriate parent volume, so they can inherit acceptable permissions until their revival; see the "strategic volumes" at the bottom of [./examples/docker/idp/copyparty.conf](./examples/docker/idp/copyparty.conf)

## Connecting webdav clients

If you use only IdP and want to connect via rclone, you have to adapt a few things.

The following steps are for Authelia, but should be easily adaptable to other IdPs and clients. There may be better/smarter ways to do this, but this is a known solution.

1. Add a rule for your domain and set it to one factor:
```yaml
rules:
  - domain: 'sub.domain.tld'
    policy: one_factor
```
2. After you have created your rclone config, find its location with `rclone config file` and add the `headers` option to it; replace the placeholder string with your `username:password` base64-encoded. Make sure to set the right `url`, otherwise you will get a 401 from copyparty:

```
[servername-dav]
type = webdav
url = https://sub.domain.tld/u/user/priv/
vendor = owncloud
pacer_min_sleep = 0.01ms
headers = Proxy-Authorization,basic base64encodedstring==
```
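the base64 string for the `headers` line can be generated in any POSIX shell with the `base64` utility; a quick sketch (`user:pass` is a placeholder for your own credentials):

```shell
# encode "username:password" for the Proxy-Authorization header;
# printf '%s' avoids a trailing newline sneaking into the token
printf '%s' 'user:pass' | base64
```

paste the output after `basic ` in the rclone config.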

```
# create a folder with symlinks to big files
for d in /usr /var; do find $d -type f -size +30M 2>/dev/null; done | while IFS= read -r x; do ln -s "$x" big/; done

# up2k worst-case testfiles: create 64 GiB (256 x 256 MiB) of sparse files; each file takes 1 MiB disk space; each 1 MiB chunk is globally unique
for f in {0..255}; do echo $f; truncate -s 256M $f; b1=$(printf '%02x' $f); for o in {0..255}; do b2=$(printf '%02x' $o); printf "\x$b1\x$b2" | dd of=$f bs=2 seek=$((o*1024*1024)) conv=notrunc 2>/dev/null; done; done

# create 6.06G file with 16 bytes of unique data at start+end of each 32M chunk
sz=6509559808; truncate -s $sz f; csz=33554432; sz=$((sz/16)); step=$((csz/16)); ofs=0; while [ $ofs -lt $sz ]; do dd if=/dev/urandom of=f bs=16 count=2 seek=$ofs conv=notrunc iflag=fullblock; [ $ofs = 0 ] && ofs=$((ofs+step-1)) || ofs=$((ofs+step)); done

# same but for chunksizes 16M (3.1G), 24M (4.1G), 48M (128.1G)
sz=3321225472; csz=16777216;
sz=4394967296; csz=25165824;
sz=6509559808; csz=33554432;
sz=138438953472; csz=50331648;
f=csz-$csz; truncate -s $sz $f; sz=$((sz/16)); step=$((csz/16)); ofs=0; while [ $ofs -lt $sz ]; do dd if=/dev/urandom of=$f bs=16 count=2 seek=$ofs conv=notrunc iflag=fullblock; [ $ofs = 0 ] && ofs=$((ofs+step-1)) || ofs=$((ofs+step)); done
```

```
# py2 on osx
brew install python@2
pip install virtualenv
```
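the chunk-marker one-liner above is dense; expanded for readability it looks like this (a sketch assuming GNU coreutils `truncate` and `dd`, reproducing the 32 MiB-chunk case):

```shell
# write 16 random bytes at the start and end of every 32 MiB chunk
# of a sparse 6.06 GB file, so every up2k chunk hashes uniquely
sz=6509559808          # total filesize in bytes
csz=33554432           # chunksize; 32 MiB
truncate -s $sz f      # create the sparse file
sz=$((sz/16))          # recalculate in 16-byte blocks (dd bs=16)
step=$((csz/16))       # one chunk, in 16-byte blocks
ofs=0
while [ $ofs -lt $sz ]; do
    # count=2 writes 32 bytes: the last 16 bytes of one chunk
    # plus the first 16 bytes of the next
    dd if=/dev/urandom of=f bs=16 count=2 seek=$ofs \
        conv=notrunc iflag=fullblock 2>/dev/null
    if [ $ofs = 0 ]; then
        ofs=$((ofs+step-1))   # back up 16 bytes so later writes straddle chunk borders
    else
        ofs=$((ofs+step))
    fi
done
```

since the file is sparse, only a few hundred KiB of disk space is actually allocated despite the multi-GB apparent size.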
NB: `<textarea id="mt">` and `<div id="mtr">` in the regular markdown editor must have the same font; none of the suggestions above will cause any issues but keep it in mind if you're getting creative


# boring loader spinner

replace the 🌲 with a spinning circle using commandline args:

`--spinner ',padding:0;border-radius:9em;border:.2em solid #444;border-top:.2em solid #fc0'`

or config file example:

```yaml
[global]
  spinner: ,padding:0;border-radius:9em;border:.2em solid #444;border-top:.2em solid #fc0
```

# `<head>`

to add stuff to the html `<head>`, for example a css `<link>` or `<meta>` tags, use either the global-option `--html-head` or the volflag `html_head`

if the value starts with `%` it will assume a jinja2 template and expand it
add your own translations by using the english or norwegian one from `browser.js` as a template

> ⚠ Please do not contribute translations to [RTL (Right-to-Left) languages](https://en.wikipedia.org/wiki/Right-to-left_script) for now; the javascript is [not ready](https://github.com/9001/copyparty/blob/hovudstraum/docs/rice/rtl.patch) to deal with it

the easy way is to open up and modify `browser.js` in your own installation; depending on how you installed copyparty it might be named `browser.js.gz` instead, in which case just decompress it, restart copyparty, and start editing it anyways

you will be delighted to see inline html in the translation strings; to help prevent syntax errors, there is [a very jank linux script](https://github.com/9001/copyparty/blob/hovudstraum/scripts/tlcheck.sh) which is slightly better than nothing -- just beware the false-positives, so even if it complains it's not necessarily wrong/bad

if you're running `copyparty-sfx.py` then you'll find it at `/tmp/pe-copyparty.1000/copyparty/web` (on linux) or `%TEMP%\pe-copyparty\copyparty\web` (on windows)
* make sure to keep backups of your work religiously! since that location is volatile af


## translations (docker-friendly)

if editing `browser.js` is inconvenient in your setup, for example if you're running in docker, then you can instead do this:

* if you have python, go to the `scripts` folder and run `./tl.py fra Français` to generate a `tl.js` which is perfect for translating to French, using the three-letter language code `fra`
* if you do not have python, you can also just grab `tl.js` from the scripts folder, but I'll probably forget to keep that up to date... and then you'll have to find/replace all `"eng"` and `Ls.eng` to your three-letter language code
* put your `tl.js` inside a folder that is being shared by your copyparty, preferably the webroot
* run copyparty with the argument `--html-head='<script src="/tl.js"></script>'`
  * if you placed `tl.js` in the webroot then you're all good, but if you put it somewhere else then change `/tl.js` accordingly
  * if you are running copyparty with config files, you can do this:

```yaml
[global]
  html-head: <script src="/tl.js"></script>
```

you can now edit `tl.js` and press CTRL-SHIFT-R in the browser to see your changes take effect as you go

if you want to contribute your translation back to the project (please do!) then you'll want to...

* grab all of the text inside your `var tl_cpanel = {` and add it to the translations inside `copyparty/web/splash.js` in the repo
* and the text inside your `var tl_browser = {` and add that to the translations inside `copyparty/web/browser.js` in the repo