Compare commits
85 Commits
| SHA1 |
|---|
| cddedd37d5 |
| 4d6626b099 |
| 7a55833bb2 |
| 7e4702cf09 |
| 685f08697a |
| a255db706d |
| 9d76902710 |
| 62ee7f6980 |
| 2f6707825a |
| 7dda77dcb4 |
| ddec22d04c |
| 32e90859f4 |
| 8b8970c787 |
| 03d35ba799 |
| c035d7d88a |
| 46f9e9efff |
| 4fa8d7ed79 |
| cd71b505a9 |
| c7db08ed3e |
| 3582a1004c |
| 22cbd2dbb5 |
| c87af9e85c |
| 6c202effa4 |
| 632f52af22 |
| 46e59529a4 |
| bdf060236a |
| d9d2a09282 |
| b020fd4ad2 |
| 4ef3526354 |
| 20ddeb6e1b |
| d27f110498 |
| 910797ccb6 |
| 7de9d15aef |
| 6a9ffe7e06 |
| 12dcea4f70 |
| b3b39bd8f1 |
| c7caecf77c |
| 1fe30363c7 |
| 54a7256c8d |
| 8e8e4ff132 |
| 1dace72092 |
| 3a5c1d9faf |
| f38c754301 |
| fff38f484d |
| 95390b655f |
| 5967c421ca |
| b8b5214f44 |
| cdd3b67a5c |
| 28c9de3f6a |
| f3b9bfc114 |
| c9eba39edd |
| 40a1c7116e |
| c03af9cfcc |
| c4cbc32cc5 |
| 1231ce199e |
| e0cac6fd99 |
| d9db1534b1 |
| 6a0aaaf069 |
| 4c04798aa5 |
| 3f84b0a015 |
| 917380ddbb |
| d9ae067e52 |
| b2e8bf6e89 |
| 170cbe98c5 |
| c94f662095 |
| 0987dcfb1c |
| 6920c01d4a |
| cc0cc8cdf0 |
| fb13969798 |
| 278258ee9f |
| 9e542cf86b |
| 244e952f79 |
| aa2a8fa223 |
| 467acb47bf |
| 0c0d6b2bfc |
| ce0e5be406 |
| 65ce4c90fa |
| 9897a08d09 |
| f5753ba720 |
| fcf32a935b |
| ec50788987 |
| ac0a2da3b5 |
| 9f84dc42fe |
| 21f9304235 |
| 5cedd22bbd |
.github/ISSUE_TEMPLATE/bug_report.md (32 changes, vendored)
@@ -11,30 +11,38 @@ NOTE:
|
||||
all of the below are optional, consider them as inspiration, delete and rewrite at will, thx md
|
||||
|
||||
|
||||
**Describe the bug**
|
||||
### Describe the bug
|
||||
a description of what the bug is
|
||||
|
||||
**To Reproduce**
|
||||
### To Reproduce
|
||||
List of steps to reproduce the issue, or, if it's hard to reproduce, then at least a detailed explanation of what you did to run into it
|
||||
|
||||
**Expected behavior**
|
||||
### Expected behavior
|
||||
a description of what you expected to happen
|
||||
|
||||
**Screenshots**
|
||||
### Screenshots
|
||||
if applicable, add screenshots to help explain your problem, such as the kickass crashpage :^)
|
||||
|
||||
**Server details**
|
||||
if the issue is possibly on the server-side, then mention some of the following:
|
||||
* server OS / version:
|
||||
* python version:
|
||||
* copyparty arguments:
|
||||
* filesystem (`lsblk -f` on linux):
|
||||
### Server details (if you are using docker/podman)
|
||||
remove the ones that are not relevant:
|
||||
* **server OS / version:**
|
||||
* **how you're running copyparty:** (docker/podman/something-else)
|
||||
* **docker image:** (variant, version, and arch if you know)
|
||||
* **copyparty arguments and/or config-file:**
|
||||
|
||||
**Client details**
|
||||
### Server details (if you're NOT using docker/podman)
|
||||
remove the ones that are not relevant:
|
||||
* **server OS / version:**
|
||||
* **what copyparty did you grab:** (sfx/exe/pip/aur/...)
|
||||
* **how you're running it:** (in a terminal, as a systemd-service, ...)
|
||||
* run copyparty with `--version` and grab the last 3 lines (they start with `copyparty`, `CPython`, `sqlite`) and paste them below this line:
|
||||
* **copyparty arguments and/or config-file:**
|
||||
|
||||
### Client details
|
||||
if the issue is possibly on the client-side, then mention some of the following:
|
||||
* the device type and model:
|
||||
* OS version:
|
||||
* browser version:
|
||||
|
||||
**Additional context**
|
||||
### Additional context
|
||||
any other context about the problem here
|
||||
|
||||
@@ -28,6 +28,8 @@ aside from documentation and ideas, some other things that would be cool to have
|
||||
|
||||
* **translations** -- the copyparty web-UI has translations for english and norwegian at the top of [browser.js](https://github.com/9001/copyparty/blob/hovudstraum/copyparty/web/browser.js); if you'd like to add a translation for another language then that'd be welcome! and if that language has a grammar that doesn't fit into the way the strings are assembled, then we'll fix that as we go :>
|
||||
|
||||
* but please note that support for [RTL (Right-to-Left) languages](https://en.wikipedia.org/wiki/Right-to-left_script) is currently not planned, since the javascript is a bit too jank for that
|
||||
|
||||
* **UI ideas** -- at some point I was thinking of rewriting the UI in react/preact/something-not-vanilla-javascript, but I'll admit the comfiness of not having any build stage combined with raw performance has kinda convinced me otherwise :p but I'd be very open to ideas on how the UI could be improved, or be more intuitive.
|
||||
|
||||
* **docker improvements** -- I don't really know what I'm doing when it comes to containers, so I'm sure there's a *huge* room for improvement here, mainly regarding how you're supposed to use the container with kubernetes / docker-compose / any of the other popular ways to do things. At some point I swear I'll start learning about docker so I can pick up clach04's [docker-compose draft](https://github.com/9001/copyparty/issues/38) and learn how that stuff ticks, unless someone beats me to it!
|
||||
|
||||
README.md (340 changes)
@@ -80,6 +80,7 @@ turn almost any device into a file server with resumable uploads/downloads using
|
||||
* [metadata from audio files](#metadata-from-audio-files) - set `-e2t` to index tags on upload
|
||||
* [file parser plugins](#file-parser-plugins) - provide custom parsers to index additional tags
|
||||
* [event hooks](#event-hooks) - trigger a program on uploads, renames etc ([examples](./bin/hooks/))
|
||||
* [zeromq](#zeromq) - event-hooks can send zeromq messages
|
||||
* [upload events](#upload-events) - the older, more powerful approach ([examples](./bin/mtag/))
|
||||
* [handlers](#handlers) - redefine behavior with plugins ([examples](./bin/handlers/))
|
||||
* [ip auth](#ip-auth) - autologin based on IP range (CIDR)
|
||||
@@ -92,6 +93,7 @@ turn almost any device into a file server with resumable uploads/downloads using
|
||||
* [listen on port 80 and 443](#listen-on-port-80-and-443) - become a *real* webserver
|
||||
* [reverse-proxy](#reverse-proxy) - running copyparty next to other websites
|
||||
* [real-ip](#real-ip) - teaching copyparty how to see client IPs
|
||||
* [reverse-proxy performance](#reverse-proxy-performance)
|
||||
* [prometheus](#prometheus) - metrics/stats can be enabled
|
||||
* [other extremely specific features](#other-extremely-specific-features) - you'll never find a use for these
|
||||
* [custom mimetypes](#custom-mimetypes) - change the association of a file extension
|
||||
@@ -140,7 +142,11 @@ just run **[copyparty-sfx.py](https://github.com/9001/copyparty/releases/latest/
|
||||
* or if you cannot install python, you can use [copyparty.exe](#copypartyexe) instead
|
||||
* or install [on arch](#arch-package) ╱ [on NixOS](#nixos-module) ╱ [through nix](#nix-package)
|
||||
* or if you are on android, [install copyparty in termux](#install-on-android)
|
||||
* or maybe you have a [synology nas / dsm](./docs/synology-dsm.md)
|
||||
* or if your computer is messed up and nothing else works, [try the pyz](#zipapp)
|
||||
* or if you don't trust copyparty yet and want to isolate it a little, then...
|
||||
* ...maybe [prisonparty](./bin/prisonparty.sh) to create a tiny [chroot](https://wiki.archlinux.org/title/Chroot) (very portable),
|
||||
* ...or [bubbleparty](./bin/bubbleparty.sh) to wrap it in [bubblewrap](https://github.com/containers/bubblewrap) (much better)
|
||||
* or if you prefer to [use docker](./scripts/docker/) 🐋 you can do that too
|
||||
* docker has all deps built-in, so skip this step:
|
||||
|
||||
@@ -252,7 +258,7 @@ also see [comparison to similar software](./docs/versus.md)
|
||||
* ☑ search by name/path/date/size
|
||||
* ☑ [search by ID3-tags etc.](#searching)
|
||||
* client support
|
||||
* ☑ [folder sync](#folder-sync)
|
||||
* ☑ [folder sync](#folder-sync) (one-way only; full sync will never be supported)
|
||||
* ☑ [curl-friendly](https://user-images.githubusercontent.com/241032/215322619-ea5fd606-3654-40ad-94ee-2bc058647bb2.png)
|
||||
* ☑ [opengraph](#opengraph) (discord embeds)
|
||||
* markdown
|
||||
@@ -350,10 +356,19 @@ same order here too
|
||||
* iPhones: the volume control doesn't work because [apple doesn't want it to](https://developer.apple.com/library/archive/documentation/AudioVideo/Conceptual/Using_HTML5_Audio_Video/Device-SpecificConsiderations/Device-SpecificConsiderations.html#//apple_ref/doc/uid/TP40009523-CH5-SW11)
|
||||
* `AudioContext` will probably never be a viable workaround as apple introduces new issues faster than they fix current ones
|
||||
|
||||
* iPhones: music volume goes on a rollercoaster during song changes
|
||||
* nothing I can do about it because `AudioContext` is still broken in safari
|
||||
|
||||
* iPhones: the preload feature (in the media-player-options tab) can cause a tiny audio glitch 20sec before the end of each song, but disabling it may cause worse iOS bugs to appear instead
|
||||
* just a hunch, but disabling preloading may cause playback to stop entirely, or possibly mess with bluetooth speakers
|
||||
* tried to add a tooltip regarding this but looks like apple broke my tooltips
|
||||
|
||||
* iPhones: preloaded awo files make safari log MEDIA_ERR_NETWORK errors as playback starts, but the song plays just fine so eh whatever
|
||||
* awo, opus-weba, is apple's new take on opus support, replacing opus-caf which was technically limited to cbr opus
|
||||
|
||||
* iPhones: preloading another awo file may cause playback to stop
|
||||
* can be somewhat mitigated with `mp.au.play()` in `mp.onpreload` but that can hit a race condition in safari that starts playing the same audio object twice in parallel...
|
||||
|
||||
* Windows: folders cannot be accessed if the name ends with `.`
|
||||
* python or windows bug
|
||||
|
||||
@@ -460,6 +475,40 @@ examples:
|
||||
|
||||
anyone trying to bruteforce a password gets banned according to `--ban-pw`; default is 24h ban for 9 failed attempts in 1 hour
|
||||
|
||||
and if you want to use config files instead of commandline args (good!) then here's the same examples as a configfile; save it as `foobar.conf` and use it like this: `python copyparty-sfx.py -c foobar.conf`
|
||||
|
||||
```yaml
|
||||
[accounts]
|
||||
u1: p1 # create account "u1" with password "p1"
|
||||
u2: p2 # (note that comments must have
|
||||
u3: p3 # two spaces before the # sign)
|
||||
|
||||
[/] # this URL will be mapped to...
|
||||
/srv # ...this folder on the server filesystem
|
||||
accs:
|
||||
r: * # read-only for everyone, no account necessary
|
||||
|
||||
[/music] # create another volume at this URL,
|
||||
/mnt/music # which is mapped to this folder
|
||||
accs:
|
||||
r: u1, u2 # only these accounts can read,
|
||||
rw: u3 # and only u3 can read-write
|
||||
|
||||
[/inc]
|
||||
/mnt/incoming
|
||||
accs:
|
||||
w: u1 # u1 can upload but not see/download any files,
|
||||
rm: u2 # u2 can browse + move files out of this volume
|
||||
|
||||
[/i]
|
||||
/mnt/ss
|
||||
accs:
|
||||
rw: u1 # u1 can read-write,
|
||||
g: * # everyone can access files if they know the URL
|
||||
flags:
|
||||
fk: 4 # each file URL will have a 4-character password
|
||||
```
|
||||
|
||||
|
||||
## shadowing
|
||||
|
||||
@@ -467,6 +516,8 @@ hiding specific subfolders by mounting another volume on top of them
|
||||
|
||||
for example `-v /mnt::r -v /var/empty:web/certs:r` mounts the server folder `/mnt` as the webroot, but another volume is mounted at `/web/certs` -- so visitors can only see the contents of `/mnt` and `/mnt/web` (at URLs `/` and `/web`), but not `/mnt/web/certs` because URL `/web/certs` is mapped to `/var/empty`
|
||||
|
||||
the example config file right above this section may explain this better; the first volume `/` is mapped to `/srv` which means http://127.0.0.1:3923/music would try to read `/srv/music` on the server filesystem, but since there's another volume at `/music` mapped to `/mnt/music` then it'll go to `/mnt/music` instead
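following the "config file example" pattern used elsewhere in this README, here is a hedged sketch of the commandline example above (`-v /mnt::r -v /var/empty:web/certs:r`) written as a config file:

```yaml
[/]             # webroot...
  /mnt          # ...is mapped to this folder
  accs:
    r: *        # read-only for everyone

[/web/certs]    # shadows /mnt/web/certs
  /var/empty    # by mapping the URL to an empty folder
  accs:
    r: *
```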
## dotfiles
|
||||
|
||||
@@ -478,6 +529,19 @@ a client can request to see dotfiles in directory listings if global option `-ed
|
||||
|
||||
dotfiles do not appear in search results unless one of the above is true, **and** the global option / volflag `dotsrch` is set
|
||||
|
||||
config file example, where the same permission to see dotfiles is given in two different ways just for reference:
|
||||
|
||||
```yaml
|
||||
[/foo]
|
||||
/srv/foo
|
||||
accs:
|
||||
r.: ed # user "ed" has read-access + dot-access in this volume;
|
||||
# dotfiles are visible in listings, but not in searches
|
||||
flags:
|
||||
dotsrch # dotfiles will now appear in search results too
|
||||
dots # another way to let everyone see dotfiles in this vol
|
||||
```
|
||||
|
||||
|
||||
# the browser
|
||||
|
||||
@@ -600,6 +664,26 @@ enabling `multiselect` lets you click files to select them, and then shift-click
|
||||
* `multiselect` is mostly intended for phones/tablets, but the `sel` option in the `[⚙️] settings` tab is better suited for desktop use, allowing selection by CTRL-clicking and range-selection with SHIFT-click, all without affecting regular clicking
|
||||
* the `sel` option can be made default globally with `--gsel` or per-volume with volflag `gsel`
|
||||
|
||||
to show `/icons/exe.png` as the thumbnail for all .exe files, `--ext-th=exe=/icons/exe.png` (optionally as a volflag)
|
||||
|
||||
config file example:
|
||||
|
||||
```yaml
|
||||
[global]
|
||||
no-thumb # disable ALL thumbnails and audio transcoding
|
||||
no-vthumb # only disable video thumbnails
|
||||
|
||||
[/music]
|
||||
/mnt/nas/music
|
||||
accs:
|
||||
r: * # everyone can read
|
||||
flags:
|
||||
dthumb # disable ALL thumbnails and audio transcoding
|
||||
dvthumb # only disable video thumbnails
|
||||
ext-th: exe=/ico/exe.png # /ico/exe.png is the thumbnail of *.exe
|
||||
th-covers: folder.png,folder.jpg,cover.png,cover.jpg # the default
|
||||
```
|
||||
|
||||
|
||||
## zip downloads
|
||||
|
||||
@@ -613,8 +697,8 @@ select which type of archive you want in the `[⚙️] config` tab:
|
||||
| `pax` | `?tar=pax` | pax-format tar, futureproof, not as fast |
|
||||
| `tgz` | `?tar=gz` | gzip compressed gnu-tar (slow), for `curl \| tar -xvz` |
|
||||
| `txz` | `?tar=xz` | gnu-tar with xz / lzma compression (v.slow) |
|
||||
| `zip` | `?zip=utf8` | works everywhere, glitchy filenames on win7 and older |
|
||||
| `zip_dos` | `?zip` | traditional cp437 (no unicode) to fix glitchy filenames |
|
||||
| `zip` | `?zip` | works everywhere, glitchy filenames on win7 and older |
|
||||
| `zip_dos` | `?zip=dos` | traditional cp437 (no unicode) to fix glitchy filenames |
|
||||
| `zip_crc` | `?zip=crc` | cp437 with crc32 computed early for truly ancient software |
|
||||
|
||||
* gzip default level is `3` (0=fast, 9=best), change with `?tar=gz:9`
|
||||
@@ -622,7 +706,7 @@ select which type of archive you want in the `[⚙️] config` tab:
|
||||
* bz2 default level is `2` (1=fast, 9=best), change with `?tar=bz2:9`
|
||||
* hidden files ([dotfiles](#dotfiles)) are excluded unless account is allowed to list them
|
||||
* `up2k.db` and `dir.txt` are always excluded
|
||||
* bsdtar supports streaming unzipping: `curl foo?zip=utf8 | bsdtar -xv`
|
||||
* bsdtar supports streaming unzipping: `curl foo?zip | bsdtar -xv`
|
||||
* good, because copyparty's zip is faster than tar on small files
|
||||
* `zip_crc` will take longer to download since the server has to read each file twice
|
||||
* this is only to support MS-DOS PKZIP v2.04g (october 1993) and older
|
||||
@@ -646,7 +730,7 @@ dragdrop is the recommended way, but you may also:
|
||||
|
||||
* select some files (not folders) in your file explorer and press CTRL-V inside the browser window
|
||||
* use the [command-line uploader](https://github.com/9001/copyparty/tree/hovudstraum/bin#u2cpy)
|
||||
* upload using [curl or sharex](#client-examples)
|
||||
* upload using [curl, sharex, ishare, ...](#client-examples)
|
||||
|
||||
when uploading files through dragdrop or CTRL-V, this initiates an upload using `up2k`; there are two browser-based uploaders available:
|
||||
* `[🎈] bup`, the basic uploader, supports almost every browser since netscape 4.0
|
||||
@@ -724,6 +808,14 @@ undo/delete accidental uploads using the `[🧯]` tab in the UI
|
||||
|
||||
you can unpost even if you don't have regular move/delete access, however only for files uploaded within the past `--unpost` seconds (default 12 hours) and the server must be running with `-e2d`
|
||||
|
||||
config file example:
|
||||
|
||||
```yaml
|
||||
[global]
|
||||
e2d # enable up2k database (remember uploads)
|
||||
unpost: 43200 # 12 hours (default)
|
||||
```
|
||||
|
||||
|
||||
### self-destruct
|
||||
|
||||
@@ -885,8 +977,19 @@ will show uploader IP and upload-time if the visitor has the admin permission
|
||||
|
||||
* global-option `--ups-when` makes upload-time visible to all users, and not just admins
|
||||
|
||||
* global-option `--ups-who` (volflag `ups_who`) specifies who gets access (0=nobody, 1=admins, 2=everyone), default=2
|
||||
|
||||
note that the [🧯 unpost](#unpost) feature is better suited for viewing *your own* recent uploads, as it includes the option to undo/delete them
|
||||
|
||||
config file example:
|
||||
|
||||
```yaml
|
||||
[global]
|
||||
ups-when # everyone can see upload times
|
||||
ups-who: 1 # but only admins can see the list,
|
||||
# so ups-when doesn't take effect
|
||||
```
|
||||
|
||||
|
||||
## media player
|
||||
|
||||
@@ -924,6 +1027,11 @@ open the `[🎺]` media-player-settings tab to configure it,
|
||||
* `[aac]` converts `aac` and `m4a` files into opus (if supported by browser) or mp3
|
||||
* `[oth]` converts all other known formats into opus (if supported by browser) or mp3
|
||||
* `aac|ac3|aif|aiff|alac|alaw|amr|ape|au|dfpwm|dts|flac|gsm|it|m4a|mo3|mod|mp2|mp3|mpc|mptm|mt2|mulaw|ogg|okt|opus|ra|s3m|tak|tta|ulaw|wav|wma|wv|xm|xpk`
|
||||
* "transcode to":
|
||||
* `[opus]` produces an `opus` whenever transcoding is necessary (the best choice on Android and PCs)
|
||||
* `[awo]` is `opus` in a `weba` file, good for iPhones (iOS 17.5 and newer) but Apple is still fixing some state-confusion bugs as of iOS 18.2.1
|
||||
* `[caf]` is `opus` in a `caf` file, good for iPhones (iOS 11 through 17), technically unsupported by Apple but works for the most part
|
||||
* `[mp3]` -- the myth, the legend, the undying master of mediocre sound quality that definitely works everywhere
|
||||
* "tint" reduces the contrast of the playback bar
|
||||
|
||||
|
||||
@@ -1026,7 +1134,16 @@ using arguments or config files, or a mix of both:
|
||||
|
||||
announce enabled services on the LAN ([pic](https://user-images.githubusercontent.com/241032/215344737-0eae8d98-9496-4256-9aa8-cd2f6971810d.png)) -- `-z` enables both [mdns](#mdns) and [ssdp](#ssdp)
|
||||
|
||||
* `--z-on` / `--z-off`' limits the feature to certain networks
|
||||
* `--z-on` / `--z-off` limits the feature to certain networks
|
||||
|
||||
config file example:
|
||||
|
||||
```yaml
|
||||
[global]
|
||||
z # enable all zeroconf features (mdns, ssdp)
|
||||
zm # only enables mdns (does nothing since we already have z)
|
||||
z-on: 192.168.0.0/16, 10.1.2.0/24 # restrict to certain subnets
|
||||
```
|
||||
|
||||
|
||||
### mdns
|
||||
@@ -1104,6 +1221,8 @@ on macos, connect from finder:
|
||||
|
||||
in order to grant full write-access to webdav clients, the volflag `daw` must be set and the account must also have delete-access (otherwise the client won't be allowed to replace the contents of existing files, which is how webdav works)
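for reference, a hedged config-file sketch of the above (the volume path and account name are placeholders, and the compound permission string is assumed to mean read/write/move/delete):

```yaml
[/dav]
  /srv/dav
  accs:
    rwmd: ed   # "ed" gets read/write/move/delete; delete-access is required for daw
  flags:
    daw        # let webdav clients replace the contents of existing files
```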
> note: if you have enabled [IdP authentication](#identity-providers) then that may cause issues for some/most webdav clients; see [the webdav section in the IdP docs](https://github.com/9001/copyparty/blob/hovudstraum/docs/idp.md#connecting-webdav-clients)
|
||||
|
||||
|
||||
### connecting to webdav from windows
|
||||
|
||||
@@ -1165,7 +1284,7 @@ dependencies: `python3 -m pip install --user -U impacket==0.11.0`
|
||||
|
||||
some **BIG WARNINGS** specific to SMB/CIFS, in decreasing importance:
|
||||
* not entirely confident that read-only is read-only
|
||||
* the smb backend is not fully integrated with vfs, meaning there could be security issues (path traversal). Please use `--smb-port` (see below) and [prisonparty](./bin/prisonparty.sh)
|
||||
* the smb backend is not fully integrated with vfs, meaning there could be security issues (path traversal). Please use `--smb-port` (see below) and [prisonparty](./bin/prisonparty.sh) or [bubbleparty](./bin/bubbleparty.sh)
|
||||
* account passwords work per-volume as expected, and so do account permissions (read/write/move/delete), but `--smbw` must be given to allow write-access from smb
|
||||
* [shadowing](#shadowing) probably works as expected but no guarantees
|
||||
|
||||
@@ -1251,6 +1370,18 @@ advantages of using symlinks (default):
|
||||
|
||||
global-option `--xlink` / volflag `xlink` additionally enables deduplication across volumes, but this is probably buggy and not recommended
|
||||
|
||||
config file example:
|
||||
|
||||
```yaml
|
||||
[global]
|
||||
e2dsa # scan and index filesystem on startup
|
||||
dedup # symlink-based deduplication for all volumes
|
||||
|
||||
[/media]
|
||||
/mnt/nas/media
|
||||
flags:
|
||||
hardlinkonly # this vol does hardlinks instead of symlinks
|
||||
```
|
||||
|
||||
|
||||
## file indexing
|
||||
@@ -1282,6 +1413,14 @@ note:
|
||||
* `e2tsr` is probably always overkill, since `e2ds`/`e2dsa` would pick up any file modifications and `e2ts` would then reindex those, unless there is a new copyparty version with new parsers and the release note says otherwise
|
||||
* the rescan button in the admin panel has no effect unless the volume has `-e2ds` or higher
|
||||
|
||||
config file example (these options are recommended btw):
|
||||
|
||||
```yaml
|
||||
[global]
|
||||
e2dsa # scan and index all files in all volumes on startup
|
||||
e2ts # check newly-discovered or uploaded files for media tags
|
||||
```
|
||||
|
||||
### exclude-patterns
|
||||
|
||||
to save some time, you can provide a regex pattern for filepaths to only index by filename/path/size/last-modified (and not the hash of the file contents) by setting `--no-hash '\.iso$'` or the volflag `:c,nohash=\.iso$`, this has the following consequences:
|
||||
@@ -1291,12 +1430,24 @@ to save some time, you can provide a regex pattern for filepaths to only index
|
||||
|
||||
similarly, you can fully ignore files/folders using `--no-idx [...]` and `:c,noidx=\.iso$`
|
||||
|
||||
NOTE: `no-idx` and/or `no-hash` prevents deduplication of those files
|
||||
|
||||
* when running on macos, all the usual apple metadata files are excluded by default
|
||||
|
||||
if you set `--no-hash [...]` globally, you can enable hashing for specific volumes using flag `:c,nohash=`
|
||||
|
||||
to exclude certain filepaths from search-results, use `--srch-excl` or volflag `srch_excl` instead of `--no-idx`, for example `--srch-excl 'password|logs/[0-9]'`
|
||||
|
||||
config file example:
|
||||
|
||||
```yaml
|
||||
[/games]
|
||||
/mnt/nas/games
|
||||
flags:
|
||||
noidx: \.iso$ # skip indexing iso-files
|
||||
srch_excl: password|logs/[0-9] # filter search results
|
||||
```
|
||||
|
||||
### filesystem guards
|
||||
|
||||
avoid traversing into other filesystems using `--xdev` / volflag `:c,xdev`, skipping any symlinks or bind-mounts to another HDD for example
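a hedged volflag sketch of the above (the volume is a placeholder):

```yaml
[/data]
  /mnt/data
  flags:
    xdev   # don't descend into symlinks / bind-mounts leading to another filesystem
```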
@@ -1317,6 +1468,20 @@ argument `--re-maxage 60` will rescan all volumes every 60 sec, same as volflag
|
||||
|
||||
uploads are disabled while a rescan is happening, so rescans will be delayed by `--db-act` (default 10 sec) when there is write-activity going on (uploads, renames, ...)
|
||||
|
||||
note: folder-thumbnails are selected during filesystem indexing, so periodic rescans can be used to keep them accurate as images are uploaded/deleted (or manually do a rescan with the `reload` button in the controlpanel)
|
||||
|
||||
config file example:
|
||||
|
||||
```yaml
|
||||
[global]
|
||||
re-maxage: 3600
|
||||
|
||||
[/pics]
|
||||
/mnt/nas/pics
|
||||
flags:
|
||||
scan: 900
|
||||
```
|
||||
|
||||
|
||||
## upload rules
|
||||
|
||||
@@ -1342,6 +1507,26 @@ you can also set transaction limits which apply per-IP and per-volume, but these
|
||||
notes:
|
||||
* `vmaxb` and `vmaxn` require either the `e2ds` volflag or `-e2dsa` global-option
|
||||
|
||||
config file example:
|
||||
|
||||
```yaml
|
||||
[/inc]
|
||||
/mnt/nas/uploads
|
||||
accs:
|
||||
w: * # anyone can upload here
|
||||
rw: ed # only user "ed" can read-write
|
||||
flags:
|
||||
e2ds: # filesystem indexing is required for many of these:
|
||||
sz: 1k-3m # accept upload only if filesize in this range
|
||||
df: 4g # free disk space cannot go lower than this
|
||||
vmaxb: 1g # volume can never exceed 1 GiB
|
||||
vmaxn: 4k # ...or 4000 files, whichever comes first
|
||||
nosub # must upload to toplevel folder
|
||||
lifetime: 300 # uploads are deleted after 5min
|
||||
maxn: 250,3600 # each IP can upload 250 files in 1 hour
|
||||
maxb: 1g,300 # each IP can upload 1 GiB over 5 minutes
|
||||
```
|
||||
|
||||
|
||||
## compress uploads
|
||||
|
||||
@@ -1387,10 +1572,24 @@ this can instead be kept in a single place using the `--hist` argument, or the `
|
||||
* `--hist ~/.cache/copyparty -v ~/music::r:c,hist=-` sets `~/.cache/copyparty` as the default place to put volume info, but `~/music` gets the regular `.hist` subfolder (`-` restores default behavior)
|
||||
|
||||
note:
|
||||
* putting the hist-folders on an SSD is strongly recommended for performance
|
||||
* markdown edits are always stored in a local `.hist` subdirectory
|
||||
* on windows the volflag path is cyglike, so `/c/temp` means `C:\temp` but use regular paths for `--hist`
|
||||
* you can use cygpaths for volumes too, `-v C:\Users::r` and `-v /c/users::r` both work
|
||||
|
||||
config file example:
|
||||
|
||||
```yaml
|
||||
[global]
|
||||
hist: ~/.cache/copyparty # put db/thumbs/etc. here by default
|
||||
|
||||
[/pics]
|
||||
/mnt/nas/pics
|
||||
flags:
|
||||
hist: - # restore the default (/mnt/nas/pics/.hist/)
|
||||
hist: /mnt/nas/cache/pics/ # can be absolute path
|
||||
```
|
||||
|
||||
|
||||
## metadata from audio files
|
||||
|
||||
@@ -1442,6 +1641,18 @@ copyparty can invoke external programs to collect additional metadata for files
|
||||
|
||||
if something doesn't work, try `--mtag-v` for verbose error messages
|
||||
|
||||
config file example; note that `mtp` is an additive option so all of the mtp options will take effect:
|
||||
|
||||
```yaml
|
||||
[/music]
|
||||
/mnt/nas/music
|
||||
flags:
|
||||
mtp: .bpm=~/bin/audio-bpm.py # assign ".bpm" (numeric) with script
|
||||
mtp: key=f,t5,~/bin/audio-key.py # force/overwrite, 5sec timeout
|
||||
mtp: ext=an,~/bin/file-ext.py # will only run on non-audio files
|
||||
mtp: arch,built,ver,orig=an,eexe,edll,~/bin/exe.py # only exe/dll
|
||||
```
|
||||
|
||||
|
||||
## event hooks
|
||||
|
||||
@@ -1454,12 +1665,51 @@ there's a bunch of flags and stuff, see `--help-hooks`
|
||||
if you want to write your own hooks, see [devnotes](./docs/devnotes.md#event-hooks)
|
||||
|
||||
|
||||
### zeromq
|
||||
|
||||
event-hooks can send zeromq messages instead of running programs
|
||||
|
||||
to send a 0mq message every time a file is uploaded,
|
||||
|
||||
* `--xau zmq:pub:tcp://*:5556` sends a PUB to any/all connected SUB clients
|
||||
* `--xau t3,zmq:push:tcp://*:5557` sends a PUSH to exactly one connected PULL client
|
||||
* `--xau t3,j,zmq:req:tcp://localhost:5555` sends a REQ to the connected REP client
|
||||
|
||||
the PUSH and REQ examples have `t3` (timeout after 3 seconds) because they block if there's no clients to talk to
|
||||
|
||||
* the REQ example does `t3,j` to send extended upload-info as json instead of just the filesystem-path
|
||||
|
||||
see [zmq-recv.py](https://github.com/9001/copyparty/blob/hovudstraum/bin/zmq-recv.py) if you need something to receive the messages with
|
||||
|
||||
config file example; note that the hooks are additive options, so all of the xau options will take effect:
|
||||
|
||||
```yaml
|
||||
[global]
|
||||
xau: zmq:pub:tcp://*:5556 # send a PUB to any/all connected SUB clients
|
||||
xau: t3,zmq:push:tcp://*:5557 # send PUSH to exactly one connected PULL cli
|
||||
xau: t3,j,zmq:req:tcp://localhost:5555 # send REQ to the connected REP cli
|
||||
```
|
||||
|
||||
|
||||
### upload events
|
||||
|
||||
the older, more powerful approach ([examples](./bin/mtag/)):
|
||||
|
||||
```
|
||||
-v /mnt/inc:inc:w:c,mte=+x1:c,mtp=x1=ad,kn,/usr/bin/notify-send
|
||||
-v /mnt/inc:inc:w:c,e2d,e2t,mte=+x1:c,mtp=x1=ad,kn,/usr/bin/notify-send
|
||||
```
|
||||
|
||||
that was the commandline example; here's the config file example:
|
||||
|
||||
```yaml
|
||||
[/inc]
|
||||
/mnt/inc
|
||||
accs:
|
||||
w: *
|
||||
flags:
|
||||
e2d, e2t # enable indexing of uploaded files and their tags
|
||||
mte: +x1
|
||||
mtp: x1=ad,kn,/usr/bin/notify-send
|
||||
```
|
||||
|
||||
so filesystem location `/mnt/inc` shared at `/inc`, write-only for everyone, appending `x1` to the list of tags to index (`mte`), and using `/usr/bin/notify-send` to "provide" tag `x1` for any filetype (`ad`) with kill-on-timeout disabled (`kn`)
|
||||
@@ -1473,6 +1723,8 @@ note that this is way more complicated than the new [event hooks](#event-hooks)
|
||||
|
||||
note that it will occupy the parsing threads, so fork anything expensive (or set `kn` to have copyparty fork it for you) -- otoh if you want to intentionally queue/singlethread you can combine it with `--mtag-mt 1`
|
||||
|
||||
for reference, if you were to do this using event hooks instead, it would be like this: `-e2d --xau notify-send,hello,--`
|
||||
|
||||
|
||||
## handlers
|
||||
|
||||
@@ -1480,6 +1732,8 @@ redefine behavior with plugins ([examples](./bin/handlers/))
|
||||
|
||||
replace 404 and 403 errors with something completely different (that's it for now)
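as a hedged illustration, attaching the bundled redirect handler to a volume would presumably look something like the sketch below; the flag name `on404` is assumed from the handlers README further down in this diff, so verify it against `--help` before relying on it:

```yaml
[/pub]
  /srv/pub
  flags:
    on404: ~/dev/copyparty/bin/handlers/redirect.py # assumed flag name and path
```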
as for client-side stuff, there is [plugins for modifying UI/UX](./contrib/plugins/)
|
||||
|
||||
|
||||
## ip auth
|
||||
|
||||
@@ -1541,12 +1795,18 @@ connecting to an aws s3 bucket and similar
|
||||
|
||||
there is no built-in support for this, but you can use FUSE-software such as [rclone](https://rclone.org/) / [geesefs](https://github.com/yandex-cloud/geesefs) / [JuiceFS](https://juicefs.com/en/) to first mount your cloud storage as a local disk, and then let copyparty use (a folder in) that disk as a volume
|
||||
|
||||
you may experience poor upload performance this way, but that can sometimes be fixed by specifying the volflag `sparse` to force the use of sparse files; this has improved the upload speeds from `1.5 MiB/s` to over `80 MiB/s` in one case, but note that you are also more likely to discover funny bugs in your FUSE software this way, so buckle up
|
||||
if copyparty is unable to access the local folder that rclone/geesefs/JuiceFS provides (for example if it looks invisible) then you may need to run rclone with `--allow-other` and/or enable `user_allow_other` in `/etc/fuse.conf`
|
||||
|
||||
you will probably get decent speeds with the default config, however most likely restricted to using one TCP connection per file, so the upload-client won't be able to send multiple chunks in parallel
|
||||
|
||||
> before [v1.13.5](https://github.com/9001/copyparty/releases/tag/v1.13.5) it was recommended to use the volflag `sparse` to force-allow multiple chunks in parallel; this would improve the upload-speed from `1.5 MiB/s` to over `80 MiB/s` at the risk of provoking latent bugs in S3 or JuiceFS. But v1.13.5 added chunk-stitching, so this is now probably much less important. On the contrary, `nosparse` *may* now increase performance in some cases. Please try all three options (default, `sparse`, `nosparse`) as the optimal choice depends on your network conditions and software stack (both the FUSE-driver and cloud-server)
|
||||
|
||||
someone has also tested geesefs in combination with [gocryptfs](https://nuetzlich.net/gocryptfs/) with surprisingly good results, getting 60 MiB/s upload speeds on a gbit line, but JuiceFS won with 80 MiB/s using its built-in encryption
|
||||
|
||||
you may improve performance by specifying larger values for `--iobuf` / `--s-rd-sz` / `--s-wr-sz`
|
||||
|
||||
> if you've experimented with this and made interesting observations, please share your findings so we can add a section with specific recommendations :-)
|
||||
|
||||
|
||||
## hiding from google
|
||||
|
||||
@@ -1669,10 +1929,16 @@ some reverse proxies (such as [Caddy](https://caddyserver.com/)) can automatical
|
||||
|
||||
for improved security (and a 10% performance boost) consider listening on a unix-socket with `-i unix:770:www:/tmp/party.sock` (permission `770` means only members of group `www` can access it)
|
||||
|
||||
example webserver configs:
|
||||
example webserver / reverse-proxy configs:
|
||||
|
||||
* [nginx config](contrib/nginx/copyparty.conf) -- entire domain/subdomain
|
||||
* [apache2 config](contrib/apache/copyparty.conf) -- location-based
|
||||
* [apache config](contrib/apache/copyparty.conf)
|
||||
* caddy uds: `caddy reverse-proxy --from :8080 --to unix///dev/shm/party.sock`
|
||||
* caddy tcp: `caddy reverse-proxy --from :8081 --to http://127.0.0.1:3923`
|
||||
* [haproxy config](contrib/haproxy/copyparty.conf)
|
||||
* [lighttpd subdomain](contrib/lighttpd/subdomain.conf) -- entire domain/subdomain
|
||||
* [lighttpd subpath](contrib/lighttpd/subpath.conf) -- location-based (not optimal, but in case you need it)
|
||||
* [nginx config](contrib/nginx/copyparty.conf) -- recommended
|
||||
* [traefik config](contrib/traefik/copyparty.yaml)
|
||||
|
||||
|
||||
### real-ip
|
||||
@@ -1684,6 +1950,38 @@ if you (and maybe everybody else) keep getting a message that says `thank you fo
|
||||
for most common setups, there should be a helpful message in the server-log explaining what to do, but see [docs/xff.md](docs/xff.md) if you want to learn more, including a quick hack to **just make it work** (which is **not** recommended, but hey...)
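as a generic (not copyparty-specific) example of the usual first step, the reverse-proxy must forward the client address; in nginx that is typically:

```nginx
# inside the location/server block that proxies to copyparty
proxy_set_header X-Real-IP       $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
```

copyparty must then be told to trust that proxy; the exact option (probably `--xff-src`) is covered in [docs/xff.md](docs/xff.md)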
### reverse-proxy performance
|
||||
|
||||
most reverse-proxies support connecting to copyparty either using uds/unix-sockets (`/dev/shm/party.sock`, faster/recommended) or using tcp (`127.0.0.1`)
|
||||
|
||||
with copyparty listening on a uds / unix-socket / unix-domain-socket and the reverse-proxy connecting to that:
|
||||
|
||||
| index.html | upload | download | software |
|
||||
| ------------ | ----------- | ----------- | -------- |
|
||||
| 28'900 req/s | 6'900 MiB/s | 7'400 MiB/s | no-proxy |
|
||||
| 18'750 req/s | 3'500 MiB/s | 2'370 MiB/s | haproxy |
|
||||
| 9'900 req/s | 3'750 MiB/s | 2'200 MiB/s | caddy |
|
||||
| 18'700 req/s | 2'200 MiB/s | 1'570 MiB/s | nginx |
|
||||
| 9'700 req/s | 1'750 MiB/s | 1'830 MiB/s | apache |
|
||||
| 9'900 req/s | 1'300 MiB/s | 1'470 MiB/s | lighttpd |
|
||||
|
||||
when connecting the reverse-proxy to `127.0.0.1` instead (the basic and/or old-fashioned way), speeds are a bit worse:
|
||||
|
||||
| index.html | upload | download | software |
|
||||
| ------------ | ----------- | ----------- | -------- |
|
||||
| 21'200 req/s | 5'700 MiB/s | 6'700 MiB/s | no-proxy |
|
||||
| 14'500 req/s | 1'700 MiB/s | 2'170 MiB/s | haproxy |
|
||||
| 11'100 req/s | 2'750 MiB/s | 2'000 MiB/s | traefik |
|
||||
| 8'400 req/s | 2'300 MiB/s | 1'950 MiB/s | caddy |
|
||||
| 13'400 req/s | 1'100 MiB/s | 1'480 MiB/s | nginx |
|
||||
| 8'400 req/s | 1'000 MiB/s | 1'000 MiB/s | apache |
|
||||
| 6'500 req/s | 1'270 MiB/s | 1'500 MiB/s | lighttpd |
|
||||
|
||||
in summary, `haproxy > caddy > traefik > nginx > apache > lighttpd`, and use uds when possible (traefik does not support it yet)
|
||||
|
||||
* if these results are bullshit because my config examples are bad, please submit corrections!
|
||||
|
||||
|
||||
## prometheus
|
||||
|
||||
metrics/stats can be enabled at URL `/.cpr/metrics` for grafana / prometheus / etc (openmetrics 1.0.0)
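assuming metrics have been enabled and the endpoint is reachable without auth, a minimal prometheus scrape-config sketch would look something like this (port 3923 is the copyparty default):

```yaml
scrape_configs:
  - job_name: copyparty
    metrics_path: /.cpr/metrics
    static_configs:
      - targets: ["127.0.0.1:3923"]
```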
@@ -1759,7 +2057,7 @@ change the association of a file extension
|
||||
|
||||
using commandline args, you can do something like `--mime gif=image/jif` and `--mime ts=text/x.typescript` (can be specified multiple times)
|
||||
|
||||
in a config-file, this is the same as:
|
||||
in a config file, this is the same as:
|
||||
|
||||
```yaml
|
||||
[global]
|
||||
@@ -1997,7 +2295,8 @@ interact with copyparty using non-browser clients
|
||||
* can be downloaded from copyparty: controlpanel -> connect -> [partyfuse.py](http://127.0.0.1:3923/.cpr/a/partyfuse.py)
|
||||
* [rclone](https://rclone.org/) as client can give ~5x performance, see [./docs/rclone.md](docs/rclone.md)
|
||||
|
||||
* sharex (screenshot utility): see [./contrib/sharex.sxcu](contrib/#sharexsxcu)
|
||||
* sharex (screenshot utility): see [./contrib/sharex.sxcu](./contrib/#sharexsxcu)
|
||||
* and for screenshots on macos, see [./contrib/ishare.iscu](./contrib/#ishareiscu)
|
||||
* and for screenshots on linux, see [./contrib/flameshot.sh](./contrib/flameshot.sh)
|
||||
|
||||
* contextlet (web browser integration); see [contrib contextlet](contrib/#send-to-cppcontextletjson)
|
||||
@@ -2018,6 +2317,8 @@ NOTE: curl will not send the original filename if you use `-T` combined with url
|
||||
|
||||
sync folders to/from copyparty
|
||||
|
||||
NOTE: full bidirectional sync, like what [nextcloud](https://docs.nextcloud.com/server/latest/user_manual/sv/files/desktop_mobile_sync.html) and [syncthing](https://syncthing.net/) does, will never be supported! Only single-direction sync (server-to-client, or client-to-server) is possible with copyparty
|
||||
|
||||
the commandline uploader [u2c.py](https://github.com/9001/copyparty/tree/hovudstraum/bin#u2cpy) with `--dr` is the best way to sync a folder to copyparty; verifies checksums and does files in parallel, and deletes unexpected files on the server after upload has finished which makes file-renames really cheap (it'll rename serverside and skip uploading)
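a hedged invocation sketch (the URL, password, and folder are placeholders, and the `-a` password flag is an assumption; check `u2c.py --help` for the authoritative flags):

```sh
# mirror ~/Music into the /music volume; --dr deletes files on the
# server that no longer exist locally
python3 u2c.py --dr -a hunter2 https://example.com/music/ ~/Music
```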
alternatively there is [rclone](./docs/rclone.md) which allows for bidirectional sync and is *way* more flexible (stream files straight from sftp/s3/gcs to copyparty, ...), although there is no integrity check and it won't work with files over 100 MiB if copyparty is behind cloudflare
|
||||
@@ -2261,13 +2562,13 @@ mandatory deps:
|
||||
|
||||
install these to enable bonus features
|
||||
|
||||
enable hashed passwords in config: `argon2-cffi`
|
||||
enable [hashed passwords](#password-hashing) in config: `argon2-cffi`
|
||||
|
||||
enable ftp-server:
|
||||
enable [ftp-server](#ftp-server):
|
||||
* for just plaintext FTP, `pyftpdlib` (is built into the SFX)
|
||||
* with TLS encryption, `pyftpdlib pyopenssl`
|
||||
|
||||
enable music tags:
|
||||
enable [music tags](#metadata-from-audio-files):
|
||||
* either `mutagen` (fast, pure-python, skips a few tags, makes copyparty GPL? idk)
|
||||
* or `ffprobe` (20x slower, more accurate, possibly dangerous depending on your distro and users)
|
||||
|
||||
@@ -2278,8 +2579,9 @@ enable [thumbnails](#thumbnails) of...
|
||||
* **AVIF pictures:** `pyvips` or `ffmpeg` or `pillow-avif-plugin`
|
||||
* **JPEG XL pictures:** `pyvips` or `ffmpeg`
|
||||
|
||||
enable [smb](#smb-server) support (**not** recommended):
|
||||
* `impacket==0.12.0`
|
||||
enable sending [zeromq messages](#zeromq) from event-hooks: `pyzmq`
|
||||
|
||||
enable [smb](#smb-server) support (**not** recommended): `impacket==0.12.0`
|
||||
|
||||
`pyvips` gives higher quality thumbnails than `Pillow` and is 320% faster, using 270% more ram: `sudo apt install libvips42 && python3 -m pip install --user -U pyvips`
|
||||
|
||||
|
||||
@@ -78,3 +78,6 @@ cd /mnt/nas/music/.hist
|
||||
# [`prisonparty.sh`](prisonparty.sh)
|
||||
* run copyparty in a chroot, preventing any accidental file access
|
||||
* creates bindmounts for /bin, /lib, and so on, see `sysdirs=`
|
||||
|
||||
# [`bubbleparty.sh`](bubbleparty.sh)
|
||||
* run copyparty in an isolated process, preventing any accidental file access and more
|
||||
|
||||
bin/bubbleparty.sh (19 changes, new executable file)
@@ -0,0 +1,19 @@
|
||||
#!/bin/sh
|
||||
# usage: ./bubbleparty.sh ./copyparty-sfx.py ....
|
||||
bwrap \
|
||||
--unshare-all \
|
||||
--ro-bind /usr /usr \
|
||||
--ro-bind /bin /bin \
|
||||
--ro-bind /lib /lib \
|
||||
--ro-bind /etc/resolv.conf /etc/resolv.conf \
|
||||
--dev-bind /dev /dev \
|
||||
--dir /tmp \
|
||||
--dir /var \
|
||||
--bind $(pwd) $(pwd) \
|
||||
--share-net \
|
||||
--die-with-parent \
|
||||
--file 11 /etc/passwd \
|
||||
--file 12 /etc/group \
|
||||
"$@" \
|
||||
11< <(getent passwd $(id -u) 65534) \
|
||||
12< <(getent group $(id -g) 65534)
|
||||
@@ -20,6 +20,7 @@ each plugin must define a `main()` which takes 3 arguments;
|
||||
|
||||
## on404
|
||||
|
||||
* [redirect.py](redirect.py) sends an HTTP 301 or 302, redirecting the client to another page/file
|
||||
* [sorry.py](answer.py) replies with a custom message instead of the usual 404
|
||||
* [nooo.py](nooo.py) replies with an endless noooooooooooooo
|
||||
* [never404.py](never404.py) 100% guarantee that 404 will never be a thing again as it automatically creates dummy files whenever necessary
|
||||
|
||||
bin/handlers/redirect.py (52 changes, new file)
@@ -0,0 +1,52 @@
|
||||
# if someone hits a 404, redirect them to another location
|
||||
|
||||
|
||||
def send_http_302_temporary_redirect(cli, new_path):
|
||||
"""
|
||||
replies with an HTTP 302, which is a temporary redirect;
|
||||
"new_path" can be any of the following:
|
||||
- "http://a.com/" would redirect to another website,
|
||||
- "/foo/bar" would redirect to /foo/bar on the same server;
|
||||
note the leading '/' in the location which is important
|
||||
"""
|
||||
cli.reply(b"redirecting...", 302, headers={"Location": new_path})
|
||||
|
||||
|
||||
def send_http_301_permanent_redirect(cli, new_path):
|
||||
"""
|
||||
replies with an HTTP 301, which is a permanent redirect;
|
||||
otherwise identical to send_http_302_temporary_redirect
|
||||
"""
|
||||
cli.reply(b"redirecting...", 301, headers={"Location": new_path})
|
||||
|
||||
|
||||
def send_errorpage_with_redirect_link(cli, new_path):
|
||||
"""
|
||||
replies with a website explaining that the page has moved;
|
||||
"new_path" must be an absolute location on the same server
|
||||
but without a leading '/', so for example "foo/bar"
|
||||
would redirect to "/foo/bar"
|
||||
"""
|
||||
cli.redirect(new_path, click=False, msg="this page has moved")
|
||||
|
||||
|
||||
def main(cli, vn, rem):
|
||||
"""
|
||||
this is the function that gets called by copyparty;
|
||||
note that vn.vpath and cli.vpath does not have a leading '/'
|
||||
so we're adding the slash in the debug messages below
|
||||
"""
|
||||
print(f"this client just hit a 404: {cli.ip}")
|
||||
print(f"they were accessing this volume: /{vn.vpath}")
|
||||
print(f"and the original request-path (straight from the URL) was /{cli.vpath}")
|
||||
print(f"...which resolves to the following filesystem path: {vn.canonical(rem)}")
|
||||
|
||||
new_path = "/foo/bar/"
|
||||
print(f"will now redirect the client to {new_path}")
|
||||
|
||||
# uncomment one of these:
|
||||
send_http_302_temporary_redirect(cli, new_path)
|
||||
#send_http_301_permanent_redirect(cli, new_path)
|
||||
#send_errorpage_with_redirect_link(cli, new_path)
|
||||
|
||||
return "true"
|
||||
@@ -30,4 +30,5 @@ these are `--xiu` hooks; unlike `xbu` and `xau` (which get executed on every sin
|
||||
# on message
|
||||
* [wget.py](wget.py) lets you download files by POSTing URLs to copyparty
|
||||
* [qbittorrent-magnet.py](qbittorrent-magnet.py) starts downloading a torrent if you post a magnet url
|
||||
* [usb-eject.py](usb-eject.py) adds web-UI buttons to safe-remove usb flashdrives shared through copyparty
|
||||
* [msg-log.py](msg-log.py) is a guestbook; logs messages to a doc in the same folder
|
||||
|
||||
bin/hooks/usb-eject.js (57 changes, new file)
@@ -0,0 +1,57 @@
|
||||
// see usb-eject.py for usage
|
||||
|
||||
function usbclick() {
|
||||
QS('#treeul a[href="/usb/"]').click();
|
||||
}
|
||||
|
||||
function eject_cb() {
|
||||
var t = this.responseText;
|
||||
if (t.indexOf('can be safely unplugged') < 0 && t.indexOf('Device can be removed') < 0)
|
||||
return toast.err(30, 'usb eject failed:\n\n' + t);
|
||||
|
||||
toast.ok(5, esc(t.replace(/ - /g, '\n\n')).trim());
|
||||
usbclick(); setTimeout(usbclick, 10);
|
||||
};
|
||||
|
||||
function add_eject_2(a) {
|
||||
var aw = a.getAttribute('href').split(/\//g);
|
||||
if (aw.length != 4 || aw[3])
|
||||
return;
|
||||
|
||||
var v = aw[2],
|
||||
k = 'umount_' + v,
|
||||
o = ebi(k);
|
||||
|
||||
if (o)
|
||||
o.parentNode.removeChild(o);
|
||||
|
||||
a.appendChild(mknod('span', k, '⏏'), a);
|
||||
o = ebi(k);
|
||||
o.style.cssText = 'position:absolute; right:1em; margin-top:-.2em; font-size:1.3em';
|
||||
o.onclick = function (e) {
|
||||
ev(e);
|
||||
var xhr = new XHR();
|
||||
xhr.open('POST', get_evpath(), true);
|
||||
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded;charset=UTF-8');
|
||||
xhr.send('msg=' + uricom_enc(':usb-eject:' + v + ':'));
|
||||
xhr.onload = xhr.onerror = eject_cb;
|
||||
toast.inf(10, "ejecting " + v + "...");
|
||||
};
|
||||
};
|
||||
|
||||
function add_eject() {
|
||||
var o = QSA('#treeul a[href^="/usb/"]');
|
||||
for (var a = o.length - 1; a > 0; a--)
|
||||
add_eject_2(o[a]);
|
||||
};
|
||||
|
||||
(function() {
|
||||
var f0 = treectl.rendertree;
|
||||
treectl.rendertree = function (res, ts, top0, dst, rst) {
|
||||
var ret = f0(res, ts, top0, dst, rst);
|
||||
add_eject();
|
||||
return ret;
|
||||
};
|
||||
})();
|
||||
|
||||
setTimeout(add_eject, 50);
|
||||
bin/hooks/usb-eject.py (58 changes, new file)
@@ -0,0 +1,58 @@
|
||||
#!/usr/bin/env python3
|
||||
|
||||
import os
|
||||
import stat
|
||||
import subprocess as sp
|
||||
import sys
|
||||
|
||||
|
||||
"""
|
||||
if you've found yourself using copyparty to serve flashdrives on a LAN
|
||||
and your only wish is that the web-UI had a button to unmount / safely
|
||||
remove those flashdrives, then boy howdy are you in the right place :D
|
||||
|
||||
put usb-eject.js in the webroot (or somewhere else http-accessible)
|
||||
then run copyparty with these args:
|
||||
|
||||
-v /run/media/egon:/usb:A:c,hist=/tmp/junk
|
||||
--xm=c1,bin/hooks/usb-eject.py
|
||||
--js-browser=/usb-eject.js
|
||||
|
||||
which does the following respectively,
|
||||
|
||||
* share all of /run/media/egon as /usb with admin for everyone
|
||||
and put the histpath somewhere it won't cause trouble
|
||||
* run the usb-eject hook with stdout redirect to the web-ui
|
||||
* add the complementary usb-eject.js to the browser
|
||||
|
||||
"""
|
||||
|
||||
|
||||
def main():
|
||||
try:
|
||||
label = sys.argv[1].split(":usb-eject:")[1].split(":")[0]
|
||||
mp = "/run/media/egon/" + label
|
||||
# print("ejecting [%s]... " % (mp,), end="")
|
||||
mp = os.path.abspath(os.path.realpath(mp.encode("utf-8")))
|
||||
st = os.lstat(mp)
|
||||
if not stat.S_ISDIR(st.st_mode):
|
||||
raise Exception("not a regular directory")
|
||||
|
||||
# if you're running copyparty as root (thx for the faith)
|
||||
# you'll need something like this to make dbus talkative
|
||||
cmd = b"sudo -u egon DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus gio mount -e"
|
||||
|
||||
# but if copyparty and the ui-session is running
|
||||
# as the same user (good) then this is plenty
|
||||
cmd = b"gio mount -e"
|
||||
|
||||
cmd = cmd.split(b" ") + [mp]
|
||||
ret = sp.check_output(cmd).decode("utf-8", "replace")
|
||||
print(ret.strip() or (label + " can be safely unplugged"))
|
||||
|
||||
except Exception as ex:
|
||||
print("unmount failed: %r" % (ex,))
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
@@ -31,6 +31,9 @@ plugins in this section should only be used with appropriate precautions:
|
||||
* [very-bad-idea.py](./very-bad-idea.py) combined with [meadup.js](https://github.com/9001/copyparty/blob/hovudstraum/contrib/plugins/meadup.js) converts copyparty into a janky yet extremely flexible chromecast clone
|
||||
* also adds a virtual keyboard by @steinuil to the basic-upload tab for comfy couch crowd control
|
||||
* anything uploaded through the [android app](https://github.com/9001/party-up) (files or links) is executed on the server, meaning anyone can infect your PC with malware... so protect this with a password and keep it on a LAN!
|
||||
* [kamelåså](https://github.com/steinuil/kameloso) is a much better (and MUCH safer) alternative to this plugin
|
||||
* powered by [chicken-curry-banana-pineapple-peanut pizza](https://a.ocv.me/pub/g/i/2025/01/298437ce-8351-4c8c-861c-fa131d217999.jpg?cache) so you know it's good
|
||||
* and, unlike this plugin, kamelåså even has windows support (nice)
|
||||
|
||||
|
||||
# dependencies
|
||||
|
||||
@@ -6,6 +6,11 @@ WARNING -- DANGEROUS PLUGIN --
|
||||
running this plugin, they can execute malware on your machine
|
||||
so please keep this on a LAN and protect it with a password
|
||||
|
||||
here is a MUCH BETTER ALTERNATIVE (which also works on Windows):
|
||||
https://github.com/steinuil/kameloso
|
||||
|
||||
----------------------------------------------------------------------
|
||||
|
||||
use copyparty as a chromecast replacement:
|
||||
* post a URL and it will open in the default browser
|
||||
* upload a file and it will open in the default application
|
||||
|
||||
bin/u2c.py (55 changes)
@@ -1,8 +1,8 @@
|
||||
#!/usr/bin/env python3
|
||||
from __future__ import print_function, unicode_literals
|
||||
|
||||
S_VERSION = "2.7"
|
||||
S_BUILD_DT = "2024-12-06"
|
||||
S_VERSION = "2.9"
|
||||
S_BUILD_DT = "2025-01-27"
|
||||
|
||||
"""
|
||||
u2c.py: upload to copyparty
|
||||
@@ -234,6 +234,10 @@ CLEN = "Content-Length"
|
||||
|
||||
web = None # type: HCli
|
||||
|
||||
links = [] # type: list[str]
|
||||
linkmtx = threading.Lock()
|
||||
linkfile = None
|
||||
|
||||
|
||||
class File(object):
|
||||
"""an up2k upload task; represents a single file"""
|
||||
@@ -761,6 +765,29 @@ def get_hashlist(file, pcb, mth):
|
||||
file.kchunks[k] = [v1, v2]
|
||||
|
||||
|
||||
def printlink(ar, purl, name, fk):
|
||||
if not name:
|
||||
url = purl # srch
|
||||
else:
|
||||
name = quotep(name.encode("utf-8", WTF8)).decode("utf-8")
|
||||
if fk:
|
||||
url = "%s%s?k=%s" % (purl, name, fk)
|
||||
else:
|
||||
url = "%s%s" % (purl, name)
|
||||
|
||||
url = "%s/%s" % (ar.burl, url.lstrip("/"))
|
||||
|
||||
with linkmtx:
|
||||
if ar.u:
|
||||
links.append(url)
|
||||
if ar.ud:
|
||||
print(url)
|
||||
if linkfile:
|
||||
zs = "%s\n" % (url,)
|
||||
zb = zs.encode("utf-8", "replace")
|
||||
linkfile.write(zb)
|
||||
|
||||
|
||||
def handshake(ar, file, search):
|
||||
# type: (argparse.Namespace, File, bool) -> tuple[list[str], bool]
|
||||
"""
|
||||
@@ -832,12 +859,17 @@ def handshake(ar, file, search):
|
||||
raise Exception(txt)
|
||||
|
||||
if search:
|
||||
if ar.uon and r["hits"]:
|
||||
printlink(ar, r["hits"][0]["rp"], "", "")
|
||||
return r["hits"], False
|
||||
|
||||
file.url = quotep(r["purl"].encode("utf-8", WTF8)).decode("utf-8")
|
||||
file.name = r["name"]
|
||||
file.wark = r["wark"]
|
||||
|
||||
if ar.uon and not r["hash"]:
|
||||
printlink(ar, file.url, r["name"], r.get("fk"))
|
||||
|
||||
return r["hash"], r["sprs"]
|
||||
|
||||
|
||||
@@ -1249,7 +1281,7 @@ class Ctl(object):
|
||||
for n, zsii in enumerate(file.cids)
|
||||
]
|
||||
print("chs: %s\n%s" % (vp, "\n".join(zsl)))
|
||||
zsl = [self.ar.wsalt, str(file.size)] + [x[0] for x in file.kchunks]
|
||||
zsl = [self.ar.wsalt, str(file.size)] + [x[0] for x in file.cids]
|
||||
zb = hashlib.sha512("\n".join(zsl).encode("utf-8")).digest()[:33]
|
||||
wark = ub64enc(zb).decode("utf-8")
|
||||
if self.ar.jw:
|
||||
@@ -1472,7 +1504,7 @@ class APF(argparse.ArgumentDefaultsHelpFormatter, argparse.RawDescriptionHelpFor
|
||||
|
||||
|
||||
def main():
|
||||
global web
|
||||
global web, linkfile
|
||||
|
||||
time.strptime("19970815", "%Y%m%d") # python#7980
|
||||
"".encode("idna") # python#29288
|
||||
@@ -1509,6 +1541,11 @@ source file/folder selection uses rsync syntax, meaning that:
|
||||
ap.add_argument("--spd", action="store_true", help="print speeds for each file")
|
||||
ap.add_argument("--version", action="store_true", help="show version and exit")
|
||||
|
||||
ap = app.add_argument_group("print links")
|
||||
ap.add_argument("-u", action="store_true", help="print list of download-links after all uploads finished")
|
||||
ap.add_argument("-ud", action="store_true", help="print download-link after each upload finishes")
|
||||
ap.add_argument("-uf", type=unicode, metavar="PATH", help="print list of download-links to file")
|
||||
|
||||
ap = app.add_argument_group("compatibility")
|
||||
ap.add_argument("--cls", action="store_true", help="clear screen before start")
|
||||
ap.add_argument("--rh", type=int, metavar="TRIES", default=0, help="resolve server hostname before upload (good for buggy networks, but TLS certs will break)")
|
||||
@@ -1594,6 +1631,10 @@ source file/folder selection uses rsync syntax, meaning that:
|
||||
ar.x = "|".join(ar.x or [])
|
||||
|
||||
setattr(ar, "wlist", ar.url == "-")
|
||||
setattr(ar, "uon", ar.u or ar.ud or ar.uf)
|
||||
|
||||
if ar.uf:
|
||||
linkfile = open(ar.uf, "wb")
|
||||
|
||||
for k in "dl dr drd wlist".split():
|
||||
errs = []
|
||||
@@ -1656,6 +1697,12 @@ source file/folder selection uses rsync syntax, meaning that:
|
||||
ar.z = True
|
||||
ctl = Ctl(ar, ctl.stats)
|
||||
|
||||
if links:
|
||||
print()
|
||||
print("\n".join(links))
|
||||
if linkfile:
|
||||
linkfile.close()
|
||||
|
||||
if ctl.errs:
|
||||
print("WARNING: %d errors" % (ctl.errs))
|
||||
|
||||
|
||||
bin/zmq-recv.py (76 changes, new executable file)
@@ -0,0 +1,76 @@
|
||||
#!/usr/bin/env python3
|
||||
|
||||
import sys
|
||||
import zmq
|
||||
|
||||
"""
|
||||
zmq-recv.py: demo zmq receiver
|
||||
2025-01-22, v1.0, ed <irc.rizon.net>, MIT-Licensed
|
||||
https://github.com/9001/copyparty/blob/hovudstraum/bin/zmq-recv.py
|
||||
|
||||
basic zmq-server to receive events from copyparty; try one of
|
||||
the below and then "send a message to serverlog" in the web-ui:
|
||||
|
||||
1) dumb fire-and-forget to any and all listeners;
|
||||
run this script with "sub" and run copyparty with this:
|
||||
--xm zmq:pub:tcp://*:5556
|
||||
|
||||
2) one lucky listener gets the message, blocks if no listeners:
|
||||
run this script with "pull" and run copyparty with this:
|
||||
--xm t3,zmq:push:tcp://*:5557
|
||||
|
||||
3) blocking syn/ack mode, client must ack each message;
|
||||
run this script with "rep" and run copyparty with this:
|
||||
--xm t3,zmq:req:tcp://localhost:5555
|
||||
|
||||
note: to conditionally block uploads based on message contents,
|
||||
use rep_server to answer with "return 1" and run copyparty with
|
||||
--xau t3,c,zmq:req:tcp://localhost:5555
|
||||
"""
|
||||
|
||||
|
||||
ctx = zmq.Context()
|
||||
|
||||
|
||||
def sub_server():
|
||||
# PUB/SUB allows any number of servers/clients, and
|
||||
# messages are fire-and-forget
|
||||
sck = ctx.socket(zmq.SUB)
|
||||
sck.connect("tcp://localhost:5556")
|
||||
sck.setsockopt_string(zmq.SUBSCRIBE, "")
|
||||
while True:
|
||||
print("copyparty says %r" % (sck.recv_string(),))
|
||||
|
||||
|
||||
def pull_server():
|
||||
# PUSH/PULL allows any number of servers/clients, and
|
||||
# each message is sent to exactly one PULL client
|
||||
sck = ctx.socket(zmq.PULL)
|
||||
sck.connect("tcp://localhost:5557")
|
||||
while True:
|
||||
print("copyparty says %r" % (sck.recv_string(),))
|
||||
|
||||
|
||||
def rep_server():
|
||||
# REP/REQ is a server/client pair where each message must be
|
||||
# acked by the other before another message can be sent, so
|
||||
# copyparty will do a blocking-wait for the ack
|
||||
sck = ctx.socket(zmq.REP)
|
||||
sck.bind("tcp://*:5555")
|
||||
while True:
|
||||
print("copyparty says %r" % (sck.recv_string(),))
|
||||
reply = b"thx"
|
||||
# reply = b"return 1" # non-zero to block an upload
|
||||
sck.send(reply)
|
||||
|
||||
|
||||
mode = sys.argv[1].lower() if len(sys.argv) > 1 else ""
|
||||
|
||||
if mode == "sub":
|
||||
sub_server()
|
||||
elif mode == "pull":
|
||||
pull_server()
|
||||
elif mode == "rep":
|
||||
rep_server()
|
||||
else:
|
||||
print("specify mode as first argument: SUB | PULL | REP")
|
||||
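A quick way to try the "sub" mode without starting copyparty is to stand in for the server with a throwaway publisher; this is only a sketch, and the one-second pause for ZeroMQ's slow-joiner behavior plus the script name are assumptions, not anything copyparty ships:

#!/usr/bin/env python3
# zmq-send-test.py (hypothetical helper): publishes one message the same
# way "--xm zmq:pub:tcp://*:5556" would, so zmq-recv.py "sub" can hear it
import time
import zmq

ctx = zmq.Context()
sck = ctx.socket(zmq.PUB)
sck.bind("tcp://*:5556")
time.sleep(1)  # let SUB clients finish connecting (slow-joiner)
sck.send_string("hello from a pretend copyparty")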
@@ -12,14 +12,19 @@
* assumes the webserver and copyparty is running on the same server/IP
* modify `10.13.1.1` as necessary if you wish to support browsers without javascript

### [`sharex.sxcu`](sharex.sxcu)
* sharex config file to upload screenshots and grab the URL
### [`sharex.sxcu`](sharex.sxcu) - Windows screenshot uploader
* [sharex](https://getsharex.com/) config file to upload screenshots and grab the URL
* `RequestURL`: full URL to the target folder
* `pw`: password (remove the `pw` line if anon-write)
* the `act:bput` thing is optional since copyparty v1.9.29
* using an older sharex version, maybe sharex v12.1.1 for example? dw fam i got your back 👉😎👉 [`sharex12.sxcu`](sharex12.sxcu)

### [`flameshot.sh`](flameshot.sh)
### [`ishare.iscu`](ishare.iscu) - MacOS screenshot uploader
* [ishare](https://isharemac.app/) config file to upload screenshots and grab the URL
* `RequestURL`: full URL to the target folder
* `pw`: password (remove the `pw` line if anon-write)

### [`flameshot.sh`](flameshot.sh) - Linux screenshot uploader
* takes a screenshot with [flameshot](https://flameshot.org/) on Linux, uploads it, and writes the URL to clipboard

### [`send-to-cpp.contextlet.json`](send-to-cpp.contextlet.json)
@@ -53,5 +58,10 @@ init-scripts to start copyparty as a service
* [`openrc/copyparty`](openrc/copyparty)

# Reverse-proxy
copyparty has basic support for running behind another webserver
* [`nginx/copyparty.conf`](nginx/copyparty.conf)
copyparty supports running behind another webserver
* [`apache/copyparty.conf`](apache/copyparty.conf)
* [`haproxy/copyparty.conf`](haproxy/copyparty.conf)
* [`lighttpd/subdomain.conf`](lighttpd/subdomain.conf)
* [`lighttpd/subpath.conf`](lighttpd/subpath.conf)
* [`nginx/copyparty.conf`](nginx/copyparty.conf) -- recommended
* [`traefik/copyparty.yaml`](traefik/copyparty.yaml)
@@ -1,14 +1,29 @@
# when running copyparty behind a reverse proxy,
# the following arguments are recommended:
# if you would like to use unix-sockets (recommended),
# you must run copyparty with one of the following:
#
# -i 127.0.0.1             only accept connections from nginx
# -i unix:777:/dev/shm/party.sock
# -i unix:777:/dev/shm/party.sock,127.0.0.1
#
# if you are doing location-based proxying (such as `/stuff` below)
# you must run copyparty with --rp-loc=stuff
#
# on fedora/rhel, remember to setsebool -P httpd_can_network_connect 1

LoadModule proxy_module modules/mod_proxy.so
ProxyPass "/stuff" "http://127.0.0.1:3923/stuff"
# do not specify ProxyPassReverse

RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
# NOTE: do not specify ProxyPassReverse


##
## then, enable one of the below:

# use subdomain proxying to unix-socket (best)
ProxyPass "/" "unix:///dev/shm/party.sock|http://whatever/"

# use subdomain proxying to 127.0.0.1 (slower)
#ProxyPass "/" "http://127.0.0.1:3923/"

# use subpath proxying to 127.0.0.1 (slow and maybe buggy)
#ProxyPass "/stuff" "http://127.0.0.1:3923/stuff"
24 contrib/haproxy/copyparty.conf (Normal file)
@@ -0,0 +1,24 @@

# this config is essentially two separate examples;
#
# foo1 connects to copyparty using tcp, and
# foo2 uses unix-sockets for 27% higher performance
#
# to use foo2 you must run copyparty with one of the following:
#
# -i unix:777:/dev/shm/party.sock
# -i unix:777:/dev/shm/party.sock,127.0.0.1

defaults
    mode http
    option forwardfor
    timeout connect 1s
    timeout client 610s
    timeout server 610s

listen foo1
    bind *:8081
    server srv1 127.0.0.1:3923 maxconn 512

listen foo2
    bind *:8082
    server srv1 /dev/shm/party.sock maxconn 512
10 contrib/ishare.iscu (Normal file)
@@ -0,0 +1,10 @@

{
    "Name": "copyparty",
    "RequestURL": "http://127.0.0.1:3923/screenshots/",
    "Headers": {
        "pw": "PUT_YOUR_PASSWORD_HERE_MY_DUDE",
        "accept": "json"
    },
    "FileFormName": "f",
    "ResponseURL": "{{fileurl}}"
}
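Before pointing ishare at the server it can be handy to replay the same request from a script and confirm the folder URL and password are right; a rough sketch using the requests library, with placeholder URL, password and filename:

# upload-test.py (hypothetical): mimics the request described by ishare.iscu
import requests

r = requests.post(
    "http://127.0.0.1:3923/screenshots/",  # RequestURL
    headers={"pw": "PUT_YOUR_PASSWORD_HERE_MY_DUDE", "accept": "json"},
    files={"f": open("screenshot.png", "rb")},  # FileFormName is "f"
)
print(r.status_code, r.text)  # with accept=json, the reply should contain the fileurl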
24 contrib/lighttpd/subdomain.conf (Normal file)
@@ -0,0 +1,24 @@

# example usage for benchmarking:
#
# taskset -c 1 lighttpd -Df ~/dev/copyparty/contrib/lighttpd/subdomain.conf
#
# lighttpd can connect to copyparty using either tcp (127.0.0.1)
# or a unix-socket, but unix-sockets are 37% faster because
# lighttpd doesn't reuse tcp connections, so we're doing unix-sockets
#
# this means we must run copyparty with one of the following:
#
# -i unix:777:/dev/shm/party.sock
# -i unix:777:/dev/shm/party.sock,127.0.0.1
#
# on fedora/rhel, remember to setsebool -P httpd_can_network_connect 1

server.port = 80
server.document-root = "/var/empty"
server.upload-dirs = ( "/dev/shm", "/tmp" )
server.modules = ( "mod_proxy" )
proxy.forwarded = ( "for" => 1, "proto" => 1 )
proxy.server = ( "" => ( ( "host" => "/dev/shm/party.sock" ) ) )

# if you really need to use tcp instead of unix-sockets, do this instead:
#proxy.server = ( "" => ( ( "host" => "127.0.0.1", "port" => "3923" ) ) )
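Before wiring lighttpd up, it may be worth confirming that copyparty actually answers on the unix-socket; a minimal stdlib-only sketch (the socket path matches the examples above, the rest is just a plain HTTP/1.1 request):

# uds-check.py (hypothetical): one GET request over the unix-socket
import socket

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/dev/shm/party.sock")
s.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
print(s.recv(4096).decode("utf-8", "replace").split("\r\n")[0])  # expect "HTTP/1.1 200 OK"
s.close()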
31 contrib/lighttpd/subpath.conf (Normal file)
@@ -0,0 +1,31 @@

# example usage for benchmarking:
#
# taskset -c 1 lighttpd -Df ~/dev/copyparty/contrib/lighttpd/subpath.conf
#
# lighttpd can connect to copyparty using either tcp (127.0.0.1)
# or a unix-socket, but unix-sockets are 37% faster because
# lighttpd doesn't reuse tcp connections, so we're doing unix-sockets
#
# this means we must run copyparty with one of the following:
#
# -i unix:777:/dev/shm/party.sock
# -i unix:777:/dev/shm/party.sock,127.0.0.1
#
# also since this example proxies a subpath instead of the
# recommended subdomain-proxying, we must also specify this:
#
# --rp-loc files
#
# on fedora/rhel, remember to setsebool -P httpd_can_network_connect 1

server.port = 80
server.document-root = "/var/empty"
server.upload-dirs = ( "/dev/shm", "/tmp" )
server.modules = ( "mod_proxy" )
$HTTP["url"] =~ "^/files" {
    proxy.forwarded = ( "for" => 1, "proto" => 1 )
    proxy.server = ( "" => ( ( "host" => "/dev/shm/party.sock" ) ) )

    # if you really need to use tcp instead of unix-sockets, do this instead:
    #proxy.server = ( "" => ( ( "host" => "127.0.0.1", "port" => "3923" ) ) )
}
@@ -36,9 +36,9 @@ upstream cpp_uds {
    # but there must be at least one unix-group which both
    # nginx and copyparty is a member of; if that group is
    # "www" then run copyparty with the following args:
    # -i unix:770:www:/tmp/party.sock
    # -i unix:770:www:/dev/shm/party.sock

    server unix:/tmp/party.sock fail_timeout=1s;
    server unix:/dev/shm/party.sock fail_timeout=1s;
    keepalive 1;
}

@@ -61,6 +61,10 @@ server {
    client_max_body_size 0;
    proxy_buffering off;
    proxy_request_buffering off;
    # improve download speed from 600 to 1500 MiB/s
    proxy_buffers 32 8k;
    proxy_buffer_size 16k;
    proxy_busy_buffers_size 24k;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
@@ -1,6 +1,6 @@
# Maintainer: icxes <dev.null@need.moe>
pkgname=copyparty
pkgver="1.16.6"
pkgver="1.16.12"
pkgrel=1
pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
arch=("any")
@@ -16,12 +16,13 @@ optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tag
"libkeyfinder-git: detection of musical keys"
"qm-vamp-plugins: BPM detection"
"python-pyopenssl: ftps functionality"
"python-argon2_cffi: hashed passwords in config"
"python-pyzmq: send zeromq messages from event-hooks"
"python-argon2-cffi: hashed passwords in config"
"python-impacket-git: smb support (bad idea)"
)
source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
backup=("etc/${pkgname}.d/init" )
sha256sums=("29a119f7e238c44b0697e5858da8154d883a97ae20ecbb10393904406fa4fe06")
sha256sums=("b5b65103198a3dd8a3f9b15c3d6aff6c21147bf87627ceacc64205493c248997")

build() {
cd "${srcdir}/${pkgname}-${pkgver}"
@@ -1,4 +1,4 @@
{ lib, stdenv, makeWrapper, fetchurl, utillinux, python, jinja2, impacket, pyftpdlib, pyopenssl, argon2-cffi, pillow, pyvips, ffmpeg, mutagen,
{ lib, stdenv, makeWrapper, fetchurl, utillinux, python, jinja2, impacket, pyftpdlib, pyopenssl, argon2-cffi, pillow, pyvips, pyzmq, ffmpeg, mutagen,

# use argon2id-hashed passwords in config files (sha2 is always available)
withHashedPasswords ? true,
@@ -21,6 +21,9 @@ withMediaProcessing ? true,
# if MediaProcessing is not enabled, you probably want this instead (less accurate, but much safer and faster)
withBasicAudioMetadata ? false,

# send ZeroMQ messages from event-hooks
withZeroMQ ? true,

# enable FTPS support in the FTP server
withFTPS ? false,

@@ -43,6 +46,7 @@ let
++ lib.optional withMediaProcessing ffmpeg
++ lib.optional withBasicAudioMetadata mutagen
++ lib.optional withHashedPasswords argon2-cffi
++ lib.optional withZeroMQ pyzmq
);
in stdenv.mkDerivation {
pname = "copyparty";
@@ -1,5 +1,5 @@
{
  "url": "https://github.com/9001/copyparty/releases/download/v1.16.6/copyparty-sfx.py",
  "version": "1.16.6",
  "hash": "sha256-gs2jSaXa0XbVbvpW1H4i/Vzovg68Usry0iHWfbddBCc="
  "url": "https://github.com/9001/copyparty/releases/download/v1.16.12/copyparty-sfx.py",
  "version": "1.16.12",
  "hash": "sha256-gZZqd88/8PEseVtWspocqrWV7Ck8YQAhcsa4ED3F4JU="
}
@@ -15,6 +15,7 @@ save one of these as `.epilogue.html` inside a folder to customize it:
point `--js-browser` to one of these by URL:

* [`minimal-up2k.js`](minimal-up2k.js) is similar to the above `minimal-up2k.html` except it applies globally to all write-only folders
* [`quickmove.js`](quickmove.js) adds a hotkey to move selected files into a subfolder
* [`up2k-hooks.js`](up2k-hooks.js) lets you specify a ruleset for files to skip uploading
* [`up2k-hook-ytid.js`](up2k-hook-ytid.js) is a more specific example checking youtube-IDs against some API

117
contrib/plugins/graft-thumbs.js
Normal file
117
contrib/plugins/graft-thumbs.js
Normal file
@@ -0,0 +1,117 @@
|
||||
// USAGE:
|
||||
// place this file somewhere in the webroot and then
|
||||
// python3 -m copyparty --js-browser /.res/graft-thumbs.js
|
||||
//
|
||||
// DESCRIPTION:
|
||||
// this is a gridview plugin which, for each file in a folder,
|
||||
// looks for another file with the same filename (but with a
|
||||
// different file extension)
|
||||
//
|
||||
// if one of those files is an image and the other is not,
|
||||
// then this plugin assumes the image is a "sidecar thumbnail"
|
||||
// for the other file, and it will graft the image thumbnail
|
||||
// onto the non-image file (for example an mp3)
|
||||
//
|
||||
// optional feature 1, default-enabled:
|
||||
// the image-file is then hidden from the directory listing
|
||||
//
|
||||
// optional feature 2, default-enabled:
|
||||
// when clicking the audio file, the image will also open
|
||||
|
||||
|
||||
(function() {
|
||||
|
||||
// `graft_thumbs` assumes the gridview has just been rendered;
|
||||
// it looks for sidecars, and transplants those thumbnails onto
|
||||
// the other file with the same basename (filename sans extension)
|
||||
|
||||
var graft_thumbs = function () {
|
||||
if (!thegrid.en)
|
||||
return; // not in grid mode
|
||||
|
||||
var files = msel.getall(),
|
||||
pairs = {};
|
||||
|
||||
console.log(files);
|
||||
|
||||
for (var a = 0; a < files.length; a++) {
|
||||
var file = files[a],
|
||||
is_pic = /\.(jpe?g|png|gif|webp)$/i.exec(file.vp),
|
||||
is_audio = re_au_all.exec(file.vp),
|
||||
basename = file.vp.replace(/\.[^\.]+$/, ""),
|
||||
entry = pairs[basename];
|
||||
|
||||
if (!entry)
|
||||
// first time seeing this basename; create a new entry in pairs
|
||||
entry = pairs[basename] = {};
|
||||
|
||||
if (is_pic)
|
||||
entry.thumb = file;
|
||||
else if (is_audio)
|
||||
entry.audio = file;
|
||||
}
|
||||
|
||||
var basenames = Object.keys(pairs);
|
||||
for (var a = 0; a < basenames.length; a++)
|
||||
(function(a) {
|
||||
var pair = pairs[basenames[a]];
|
||||
|
||||
if (!pair.thumb || !pair.audio)
|
||||
return; // not a matching pair of files
|
||||
|
||||
var img_thumb = QS('#ggrid a[ref="' + pair.thumb.id + '"] img[onload]'),
|
||||
img_audio = QS('#ggrid a[ref="' + pair.audio.id + '"] img[onload]');
|
||||
|
||||
if (!img_thumb || !img_audio)
|
||||
return; // something's wrong... let's bail
|
||||
|
||||
// alright, graft the thumb...
|
||||
img_audio.src = img_thumb.src;
|
||||
|
||||
// ...and hide the sidecar
|
||||
img_thumb.closest('a').style.display = 'none';
|
||||
|
||||
// ...and add another onclick-handler to the audio,
|
||||
// so it also opens the pic while playing the song
|
||||
img_audio.addEventListener('click', function() {
|
||||
img_thumb.click();
|
||||
return false; // let it bubble to the next listener
|
||||
});
|
||||
|
||||
})(a);
|
||||
};
|
||||
|
||||
// ...and then the trick! near the end of loadgrid,
|
||||
// thegrid.bagit is called to initialize the baguettebox
|
||||
// (image/video gallery); this is the perfect function to
|
||||
// "hook" (hijack) so we can run our code :^)
|
||||
|
||||
// need to grab a backup of the original function first,
|
||||
var orig_func = thegrid.bagit;
|
||||
|
||||
// and then replace it with our own:
|
||||
thegrid.bagit = function (isrc) {
|
||||
|
||||
if (isrc !== '#ggrid')
|
||||
// we only want to modify the grid, so
|
||||
// let the original function handle this one
|
||||
return orig_func(isrc);
|
||||
|
||||
graft_thumbs();
|
||||
|
||||
// when changing directories, the grid is
|
||||
// rendered before msel returns the correct
|
||||
// filenames, so schedule another run:
|
||||
setTimeout(graft_thumbs, 1);
|
||||
|
||||
// and finally, call the original thegrid.bagit function
|
||||
return orig_func(isrc);
|
||||
};
|
||||
|
||||
if (ls0) {
|
||||
// the server included an initial listing json (ls0),
|
||||
// so the grid has already been rendered without our hook
|
||||
graft_thumbs();
|
||||
}
|
||||
|
||||
})();
|
||||
140
contrib/plugins/quickmove.js
Normal file
140
contrib/plugins/quickmove.js
Normal file
@@ -0,0 +1,140 @@
|
||||
"use strict";
|
||||
|
||||
|
||||
// USAGE:
|
||||
// place this file somewhere in the webroot,
|
||||
// for example in a folder named ".res" to hide it, and then
|
||||
// python3 copyparty-sfx.py -v .::A --js-browser /.res/quickmove.js
|
||||
//
|
||||
// DESCRIPTION:
|
||||
// the command above launches copyparty with one single volume;
|
||||
// ".::A" = current folder as webroot, and everyone has Admin
|
||||
//
|
||||
// the plugin adds hotkey "W" which moves all selected files
|
||||
// into a subfolder named "foobar" inside the current folder
|
||||
|
||||
|
||||
(function() {
|
||||
|
||||
var action_to_perform = ask_for_confirmation_and_then_move;
|
||||
// this decides what the new hotkey should do;
|
||||
// ask_for_confirmation_and_then_move = show a yes/no box,
|
||||
// move_selected_files = just move the files immediately
|
||||
|
||||
var move_destination = "foobar";
|
||||
// this is the target folder to move files to;
|
||||
// by default it is a subfolder of the current folder,
|
||||
// but it can also be an absolute path like "/foo/bar"
|
||||
|
||||
// ===
|
||||
// === END OF CONFIG
|
||||
// ===
|
||||
|
||||
var main_hotkey_handler, // copyparty's original hotkey handler
|
||||
plugin_enabler, // timer to engage this plugin when safe
|
||||
files_to_move; // list of files to move
|
||||
|
||||
function ask_for_confirmation_and_then_move() {
|
||||
var num_files = msel.getsel().length,
|
||||
msg = "move the selected " + num_files + " files?";
|
||||
|
||||
if (!num_files)
|
||||
return toast.warn(2, 'no files were selected to be moved');
|
||||
|
||||
modal.confirm(msg, move_selected_files, null);
|
||||
}
|
||||
|
||||
function move_selected_files() {
|
||||
var selection = msel.getsel();
|
||||
|
||||
if (!selection.length)
|
||||
return toast.warn(2, 'no files were selected to be moved');
|
||||
|
||||
if (thegrid.bbox) {
|
||||
// close image/video viewer
|
||||
thegrid.bbox = null;
|
||||
baguetteBox.destroy();
|
||||
}
|
||||
|
||||
files_to_move = [];
|
||||
for (var a = 0; a < selection.length; a++)
|
||||
files_to_move.push(selection[a].vp);
|
||||
|
||||
move_next_file();
|
||||
}
|
||||
|
||||
function move_next_file() {
|
||||
var num_files = files_to_move.length,
|
||||
filepath = files_to_move.pop(),
|
||||
filename = vsplit(filepath)[1];
|
||||
|
||||
toast.inf(10, "moving " + num_files + " files...\n\n" + filename);
|
||||
|
||||
var dst = move_destination;
|
||||
|
||||
if (!dst.endsWith('/'))
|
||||
// must have a trailing slash, so add it
|
||||
dst += '/';
|
||||
|
||||
if (!dst.startsWith('/'))
|
||||
// destination is a relative path, so prefix current folder path
|
||||
dst = get_evpath() + dst;
|
||||
|
||||
// and finally append the filename
|
||||
dst += '/' + filename;
|
||||
|
||||
// prepare the move-request to be sent
|
||||
var xhr = new XHR();
|
||||
xhr.onload = xhr.onerror = function() {
|
||||
if (this.status !== 201)
|
||||
return toast.err(30, 'move failed: ' + esc(this.responseText));
|
||||
|
||||
if (files_to_move.length)
|
||||
return move_next_file(); // still more files to go
|
||||
|
||||
toast.ok(1, 'move OK');
|
||||
treectl.goto(); // reload the folder contents
|
||||
};
|
||||
xhr.open('POST', filepath + '?move=' + dst);
|
||||
xhr.send();
|
||||
}
|
||||
|
||||
function our_hotkey_handler(e) {
|
||||
// bail if either ALT, CTRL, or SHIFT is pressed
|
||||
if (e.altKey || e.shiftKey || e.isComposing || ctrl(e))
|
||||
return main_hotkey_handler(e); // let copyparty handle this keystroke
|
||||
|
||||
var key_name = (e.code || e.key) + '',
|
||||
ae = document.activeElement,
|
||||
aet = ae && ae != document.body ? ae.nodeName.toLowerCase() : '';
|
||||
|
||||
// check the current aet (active element type),
|
||||
// only continue if one of the following currently has input focus:
|
||||
// nothing | link | button | table-row | table-cell | div | text
|
||||
if (aet && !/^(a|button|tr|td|div|pre)$/.test(aet))
|
||||
return main_hotkey_handler(e); // let copyparty handle this keystroke
|
||||
|
||||
if (key_name == 'KeyW') {
|
||||
// okay, this one's for us... do the thing
|
||||
action_to_perform();
|
||||
return ev(e);
|
||||
}
|
||||
|
||||
return main_hotkey_handler(e); // let copyparty handle this keystroke
|
||||
}
|
||||
|
||||
function enable_plugin() {
|
||||
if (!window.hotkeys_attached)
|
||||
return console.log('quickmove is waiting for the page to finish loading');
|
||||
|
||||
clearInterval(plugin_enabler);
|
||||
main_hotkey_handler = document.onkeydown;
|
||||
document.onkeydown = our_hotkey_handler;
|
||||
console.log('quickmove is now enabled');
|
||||
}
|
||||
|
||||
// copyparty doesn't enable its hotkeys until the page
|
||||
// has finished loading, so we'll wait for that too
|
||||
plugin_enabler = setInterval(enable_plugin, 100);
|
||||
|
||||
})();
|
||||
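the plugin above drives copyparty's move API one file at a time (a POST to the file's URL with a ?move= destination, success being HTTP 201); the same call can be scripted outside the browser, sketched here with the requests library and made-up paths:

# move-test.py (hypothetical): the same request quickmove.js sends per file
import requests

src = "http://127.0.0.1:3923/inbox/song.mp3"  # file to move
dst = "/inbox/foobar/song.mp3"                # destination vpath

r = requests.post(src, params={"move": dst})
# add headers={"pw": "..."} if the volume is not anon-writable
print(r.status_code)  # 201 on success, per the plugin's check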
25 contrib/traefik/copyparty.yaml (Normal file)
@@ -0,0 +1,25 @@

# ./traefik --configFile=copyparty.yaml

entryPoints:
  web:
    address: :8080
    transport:
      # don't disconnect during big uploads
      respondingTimeouts:
        readTimeout: "0s"
log:
  level: DEBUG
providers:
  file:
    # WARNING: must be same filename as current file
    filename: "copyparty.yaml"
http:
  services:
    service-cpp:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:3923/"
  routers:
    my-router:
      rule: "PathPrefix(`/`)"
      service: service-cpp
@@ -54,6 +54,8 @@ from .util import (
    RAM_TOTAL,
    SQLITE_VER,
    UNPLICATIONS,
    URL_BUG,
    URL_PRJ,
    Daemon,
    align_tab,
    ansi_re,
@@ -332,17 +334,16 @@ def ensure_webdeps() -> None:
    if has_resource(E, "web/deps/mini-fa.woff"):
        return

    warn(
        """could not find webdeps;
    t = """could not find webdeps;
if you are running the sfx, or exe, or pypi package, or docker image,
then this is a bug! Please let me know so I can fix it, thanks :-)
https://github.com/9001/copyparty/issues/new?labels=bug&template=bug_report.md
%s

however, if you are a dev, or running copyparty from source, and you want
full client functionality, you will need to build or obtain the webdeps:
https://github.com/9001/copyparty/blob/hovudstraum/docs/devnotes.md#building
%s/blob/hovudstraum/docs/devnotes.md#building
"""
    )
    warn(t % (URL_BUG, URL_PRJ))


def configure_ssl_ver(al: argparse.Namespace) -> None:
@@ -739,6 +740,10 @@ def get_sects():
|
||||
the \033[33m,,\033[35m stops copyparty from reading the rest as flags and
|
||||
the \033[33m--\033[35m stops notify-send from reading the message as args
|
||||
and the alert will be "hey" followed by the messagetext
|
||||
|
||||
\033[36m--xau zmq:pub:tcp://*:5556\033[35m announces uploads on zeromq;
|
||||
\033[36m--xau t3,zmq:push:tcp://*:5557\033[35m also works, and you can
|
||||
\033[36m--xau t3,j,zmq:req:tcp://localhost:5555\033[35m too for example
|
||||
\033[0m
|
||||
each hook is executed once for each event, except for \033[36mxiu\033[0m
|
||||
which builds up a backlog of uploads, running the hook just once
|
||||
@@ -770,11 +775,22 @@ def get_sects():
|
||||
values for --urlform:
|
||||
\033[36mstash\033[35m dumps the data to file and returns length + checksum
|
||||
\033[36msave,get\033[35m dumps to file and returns the page like a GET
|
||||
\033[36mprint,get\033[35m prints the data in the log and returns GET
|
||||
(leave out the ",get" to return an error instead)\033[0m
|
||||
\033[36mprint \033[35m prints the data to log and returns an error
|
||||
\033[36mprint,xm \033[35m prints the data to log and returns --xm output
|
||||
\033[36mprint,get\033[35m prints the data to log and returns GET\033[0m
|
||||
|
||||
note that the \033[35m--xm\033[0m hook will only run if \033[35m--urlform\033[0m
|
||||
is either \033[36mprint\033[0m or the default \033[36mprint,get\033[0m
|
||||
note that the \033[35m--xm\033[0m hook will only run if \033[35m--urlform\033[0m is
|
||||
either \033[36mprint\033[0m or \033[36mprint,get\033[0m or the default \033[36mprint,xm\033[0m
|
||||
|
||||
if an \033[35m--xm\033[0m hook returns text, then
|
||||
the response code will be HTTP 202;
|
||||
http/get responses will be HTTP 200
|
||||
|
||||
if there are multiple \033[35m--xm\033[0m hooks defined, then
|
||||
the first hook that produced output is returned
|
||||
|
||||
if there are no \033[35m--xm\033[0m hooks defined, then the default
|
||||
\033[36mprint,xm\033[0m behaves like \033[36mprint,get\033[0m (returning html)
|
||||
"""
|
||||
),
|
||||
],
|
||||
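with print,xm as the new default, a plain url-encoded POST is handed to the --xm hook and whatever the hook prints comes back with HTTP 202 (otherwise the normal GET page with 200); a quick sketch, where the "msg" field name is an assumption based on the web-ui's message box:

# msg-test.py (hypothetical): exercises --urlform print,xm
import requests

r = requests.post("http://127.0.0.1:3923/", data={"msg": "hello serverlog"})
print(r.status_code)  # 202 if an --xm hook returned text, else 200
print(r.text[:200])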
@@ -955,7 +971,7 @@ def add_general(ap, nc, srvname):
|
||||
ap2.add_argument("-v", metavar="VOL", type=u, action="append", help="add volume, \033[33mSRC\033[0m:\033[33mDST\033[0m:\033[33mFLAG\033[0m; examples [\033[32m.::r\033[0m], [\033[32m/mnt/nas/music:/music:r:aed\033[0m], see --help-accounts")
|
||||
ap2.add_argument("--grp", metavar="G:N,N", type=u, action="append", help="add group, \033[33mNAME\033[0m:\033[33mUSER1\033[0m,\033[33mUSER2\033[0m,\033[33m...\033[0m; example [\033[32madmins:ed,foo,bar\033[0m]")
|
||||
ap2.add_argument("-ed", action="store_true", help="enable the ?dots url parameter / client option which allows clients to see dotfiles / hidden files (volflag=dots)")
|
||||
ap2.add_argument("--urlform", metavar="MODE", type=u, default="print,get", help="how to handle url-form POSTs; see \033[33m--help-urlform\033[0m")
|
||||
ap2.add_argument("--urlform", metavar="MODE", type=u, default="print,xm", help="how to handle url-form POSTs; see \033[33m--help-urlform\033[0m")
|
||||
ap2.add_argument("--wintitle", metavar="TXT", type=u, default="cpp @ $pub", help="server terminal title, for example [\033[32m$ip-10.1.2.\033[0m] or [\033[32m$ip-]")
|
||||
ap2.add_argument("--name", metavar="TXT", type=u, default=srvname, help="server name (displayed topleft in browser and in mDNS)")
|
||||
ap2.add_argument("--mime", metavar="EXT=MIME", type=u, action="append", help="map file \033[33mEXT\033[0mension to \033[33mMIME\033[0mtype, for example [\033[32mjpg=image/jpeg\033[0m]")
|
||||
@@ -1167,6 +1183,7 @@ def add_webdav(ap):
|
||||
ap2.add_argument("--dav-mac", action="store_true", help="disable apple-garbage filter -- allow macos to create junk files (._* and .DS_Store, .Spotlight-*, .fseventsd, .Trashes, .AppleDouble, __MACOS)")
|
||||
ap2.add_argument("--dav-rt", action="store_true", help="show symlink-destination's lastmodified instead of the link itself; always enabled for recursive listings (volflag=davrt)")
|
||||
ap2.add_argument("--dav-auth", action="store_true", help="force auth for all folders (required by davfs2 when only some folders are world-readable) (volflag=davauth)")
|
||||
ap2.add_argument("--dav-ua1", metavar="PTN", type=u, default=r" kioworker/", help="regex of tricky user-agents which expect 401 from GET requests; disable with [\033[32mno\033[0m] or blank")
|
||||
|
||||
|
||||
def add_tftp(ap):
|
||||
@@ -1247,7 +1264,8 @@ def add_optouts(ap):
|
||||
ap2.add_argument("-nih", action="store_true", help="no info hostname -- don't show in UI")
|
||||
ap2.add_argument("-nid", action="store_true", help="no info disk-usage -- don't show in UI")
|
||||
ap2.add_argument("-nb", action="store_true", help="no powered-by-copyparty branding in UI")
|
||||
ap2.add_argument("--no-zip", action="store_true", help="disable download as zip/tar")
|
||||
ap2.add_argument("--zip-who", metavar="LVL", type=int, default=3, help="who can download as zip/tar? [\033[32m0\033[0m]=nobody, [\033[32m1\033[0m]=admins, [\033[32m2\033[0m]=authenticated-with-read-access, [\033[32m3\033[0m]=everyone-with-read-access (volflag=zip_who)\n\033[1;31mWARNING:\033[0m if a nested volume has a more restrictive value than a parent volume, then this will be \033[33mignored\033[0m if the download is initiated from the parent, more lenient volume")
|
||||
ap2.add_argument("--no-zip", action="store_true", help="disable download as zip/tar; same as \033[33m--zip-who=0\033[0m")
|
||||
ap2.add_argument("--no-tarcmp", action="store_true", help="disable download as compressed tar (?tar=gz, ?tar=bz2, ?tar=xz, ?tar=gz:9, ...)")
|
||||
ap2.add_argument("--no-lifetime", action="store_true", help="do not allow clients (or server config) to schedule an upload to be deleted after a given time")
|
||||
ap2.add_argument("--no-pipe", action="store_true", help="disable race-the-beam (lockstep download of files which are currently being uploaded) (volflag=nopipe)")
|
||||
@@ -1328,6 +1346,7 @@ def add_admin(ap):
|
||||
ap2.add_argument("--no-ups-page", action="store_true", help="disable ?ru (list of recent uploads)")
|
||||
ap2.add_argument("--no-up-list", action="store_true", help="don't show list of incoming files in controlpanel")
|
||||
ap2.add_argument("--dl-list", metavar="LVL", type=int, default=2, help="who can see active downloads in the controlpanel? [\033[32m0\033[0m]=nobody, [\033[32m1\033[0m]=admins, [\033[32m2\033[0m]=everyone")
|
||||
ap2.add_argument("--ups-who", metavar="LVL", type=int, default=2, help="who can see recent uploads on the ?ru page? [\033[32m0\033[0m]=nobody, [\033[32m1\033[0m]=admins, [\033[32m2\033[0m]=everyone (volflag=ups_who)")
|
||||
ap2.add_argument("--ups-when", action="store_true", help="let everyone see upload timestamps on the ?ru page, not just admins")
|
||||
|
||||
|
||||
@@ -1368,6 +1387,8 @@ def add_transcoding(ap):
|
||||
ap2 = ap.add_argument_group('transcoding options')
|
||||
ap2.add_argument("--q-opus", metavar="KBPS", type=int, default=128, help="target bitrate for transcoding to opus; set 0 to disable")
|
||||
ap2.add_argument("--q-mp3", metavar="QUALITY", type=u, default="q2", help="target quality for transcoding to mp3, for example [\033[32m192k\033[0m] (CBR) or [\033[32mq0\033[0m] (CQ/CRF, q0=maxquality, q9=smallest); set 0 to disable")
|
||||
ap2.add_argument("--no-caf", action="store_true", help="disable transcoding to caf-opus (affects iOS v12~v17), will use mp3 instead")
|
||||
ap2.add_argument("--no-owa", action="store_true", help="disable transcoding to webm-opus (iOS v18 and later), will use mp3 instead")
|
||||
ap2.add_argument("--no-acode", action="store_true", help="disable audio transcoding")
|
||||
ap2.add_argument("--no-bacode", action="store_true", help="disable batch audio transcoding by folder download (zip/tar)")
|
||||
ap2.add_argument("--ac-maxage", metavar="SEC", type=int, default=86400, help="delete cached transcode output after \033[33mSEC\033[0m seconds")
|
||||
@@ -1375,7 +1396,7 @@ def add_transcoding(ap):
|
||||
|
||||
def add_rss(ap):
|
||||
ap2 = ap.add_argument_group('RSS options')
|
||||
ap2.add_argument("--rss", action="store_true", help="enable RSS output (experimental)")
|
||||
ap2.add_argument("--rss", action="store_true", help="enable RSS output (experimental) (volflag=rss)")
|
||||
ap2.add_argument("--rss-nf", metavar="HITS", type=int, default=250, help="default number of files to return (url-param 'nf')")
|
||||
ap2.add_argument("--rss-fext", metavar="E,E", type=u, default="", help="default list of file extensions to include (url-param 'fext'); blank=all")
|
||||
ap2.add_argument("--rss-sort", metavar="ORD", type=u, default="m", help="default sort order (url-param 'sort'); [\033[32mm\033[0m]=last-modified [\033[32mu\033[0m]=upload-time [\033[32mn\033[0m]=filename [\033[32ms\033[0m]=filesize; Uppercase=oldest-first. Note that upload-time is 0 for non-uploaded files")
|
||||
@@ -1466,7 +1487,9 @@ def add_ui(ap, retry):
|
||||
ap2.add_argument("--hsortn", metavar="N", type=int, default=2, help="number of sorting rules to include in media URLs by default (volflag=hsortn)")
|
||||
ap2.add_argument("--unlist", metavar="REGEX", type=u, default="", help="don't show files matching \033[33mREGEX\033[0m in file list. Purely cosmetic! Does not affect API calls, just the browser. Example: [\033[32m\\.(js|css)$\033[0m] (volflag=unlist)")
|
||||
ap2.add_argument("--favico", metavar="TXT", type=u, default="c 000 none" if retry else "🎉 000 none", help="\033[33mfavicon-text\033[0m [ \033[33mforeground\033[0m [ \033[33mbackground\033[0m ] ], set blank to disable")
|
||||
ap2.add_argument("--ext-th", metavar="E=VP", type=u, action="append", help="use thumbnail-image \033[33mVP\033[0m for file-extension \033[33mE\033[0m, example: [\033[32mexe=/.res/exe.png\033[0m] (volflag=ext_th)")
|
||||
ap2.add_argument("--mpmc", metavar="URL", type=u, default="", help="change the mediaplayer-toggle mouse cursor; URL to a folder with {2..5}.png inside (or disable with [\033[32m.\033[0m])")
|
||||
ap2.add_argument("--spinner", metavar="TXT", type=u, default="🌲", help="\033[33memoji\033[0m or \033[33memoji,css\033[0m Example: [\033[32m🥖,padding:0\033[0m]")
|
||||
ap2.add_argument("--css-browser", metavar="L", type=u, default="", help="URL to additional CSS to include in the filebrowser html")
|
||||
ap2.add_argument("--js-browser", metavar="L", type=u, default="", help="URL to additional JS to include in the filebrowser html")
|
||||
ap2.add_argument("--js-other", metavar="L", type=u, default="", help="URL to additional JS to include in all other pages")
|
||||
@@ -1476,12 +1499,14 @@ def add_ui(ap, retry):
|
||||
ap2.add_argument("--txt-max", metavar="KiB", type=int, default=64, help="max size of embedded textfiles on ?doc= (anything bigger will be lazy-loaded by JS)")
|
||||
ap2.add_argument("--doctitle", metavar="TXT", type=u, default="copyparty @ --name", help="title / service-name to show in html documents")
|
||||
ap2.add_argument("--bname", metavar="TXT", type=u, default="--name", help="server name (displayed in filebrowser document title)")
|
||||
ap2.add_argument("--pb-url", metavar="URL", type=u, default="https://github.com/9001/copyparty", help="powered-by link; disable with \033[33m-np\033[0m")
|
||||
ap2.add_argument("--pb-url", metavar="URL", type=u, default=URL_PRJ, help="powered-by link; disable with \033[33m-np\033[0m")
|
||||
ap2.add_argument("--ver", action="store_true", help="show version on the control panel (incompatible with \033[33m-nb\033[0m)")
|
||||
ap2.add_argument("--k304", metavar="NUM", type=int, default=0, help="configure the option to enable/disable k304 on the controlpanel (workaround for buggy reverse-proxies); [\033[32m0\033[0m] = hidden and default-off, [\033[32m1\033[0m] = visible and default-off, [\033[32m2\033[0m] = visible and default-on")
|
||||
ap2.add_argument("--no304", metavar="NUM", type=int, default=0, help="configure the option to enable/disable no304 on the controlpanel (workaround for buggy caching in browsers); [\033[32m0\033[0m] = hidden and default-off, [\033[32m1\033[0m] = visible and default-off, [\033[32m2\033[0m] = visible and default-on")
|
||||
ap2.add_argument("--md-sbf", metavar="FLAGS", type=u, default="downloads forms popups scripts top-navigation-by-user-activation", help="list of capabilities to ALLOW for README.md docs (volflag=md_sbf); see https://developer.mozilla.org/en-US/docs/Web/HTML/Element/iframe#attr-sandbox")
|
||||
ap2.add_argument("--lg-sbf", metavar="FLAGS", type=u, default="downloads forms popups scripts top-navigation-by-user-activation", help="list of capabilities to ALLOW for prologue/epilogue docs (volflag=lg_sbf)")
|
||||
ap2.add_argument("--md-sbf", metavar="FLAGS", type=u, default="downloads forms popups scripts top-navigation-by-user-activation", help="list of capabilities to allow in the iframe 'sandbox' attribute for README.md docs (volflag=md_sbf); see https://developer.mozilla.org/en-US/docs/Web/HTML/Element/iframe#sandbox")
|
||||
ap2.add_argument("--lg-sbf", metavar="FLAGS", type=u, default="downloads forms popups scripts top-navigation-by-user-activation", help="list of capabilities to allow in the iframe 'sandbox' attribute for prologue/epilogue docs (volflag=lg_sbf)")
|
||||
ap2.add_argument("--md-sba", metavar="TXT", type=u, default="", help="the value of the iframe 'allow' attribute for README.md docs, for example [\033[32mfullscreen\033[0m] (volflag=md_sba)")
|
||||
ap2.add_argument("--lg-sba", metavar="TXT", type=u, default="", help="the value of the iframe 'allow' attribute for prologue/epilogue docs (volflag=lg_sba); see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Permissions-Policy#iframes")
|
||||
ap2.add_argument("--no-sb-md", action="store_true", help="don't sandbox README/PREADME.md documents (volflags: no_sb_md | sb_md)")
|
||||
ap2.add_argument("--no-sb-lg", action="store_true", help="don't sandbox prologue/epilogue docs (volflags: no_sb_lg | sb_lg); enables non-js support")
|
||||
|
||||
|
||||
@@ -1,8 +1,8 @@
# coding: utf-8

VERSION = (1, 16, 7)
VERSION = (1, 16, 13)
CODENAME = "COPYparty"
BUILD_DT = (2024, 12, 23)
BUILD_DT = (2025, 2, 13)

S_VERSION = ".".join(map(str, VERSION))
S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)
@@ -1289,10 +1289,10 @@ class AuthSrv(object):
    # one or more bools before the final flag; eat them
    n1, uname = uname.split(",", 1)
    for _, vp, _, _ in vols:
        self._read_volflag(flags[vp], n1, True, False)
        self._read_volflag(vp, flags[vp], n1, True, False)

    for _, vp, _, _ in vols:
        self._read_volflag(flags[vp], uname, cval, False)
        self._read_volflag(vp, flags[vp], uname, cval, False)

    return
@@ -1379,20 +1379,34 @@ class AuthSrv(object):
|
||||
|
||||
def _read_volflag(
|
||||
self,
|
||||
vpath: str,
|
||||
flags: dict[str, Any],
|
||||
name: str,
|
||||
value: Union[str, bool, list[str]],
|
||||
is_list: bool,
|
||||
) -> None:
|
||||
if name not in flagdescs:
|
||||
name = name.lower()
|
||||
|
||||
# volflags are snake_case, but a leading dash is the removal operator
|
||||
if name not in flagdescs and "-" in name[1:]:
|
||||
name = name[:1] + name[1:].replace("-", "_")
|
||||
|
||||
desc = flagdescs.get(name.lstrip("-"), "?").replace("\n", " ")
|
||||
|
||||
if not name:
|
||||
self._e("└─unreadable-line")
|
||||
t = "WARNING: the config for volume [/%s] indicated that a volflag was to be defined, but the volflag name was blank"
|
||||
self.log(t % (vpath,), 3)
|
||||
return
|
||||
|
||||
if re.match("^-[^-]+$", name):
|
||||
t = "└─unset volflag [{}] ({})"
|
||||
self._e(t.format(name[1:], desc))
|
||||
flags[name] = True
|
||||
return
|
||||
|
||||
zs = "mtp on403 on404 xbu xau xiu xbc xac xbr xar xbd xad xm xban"
|
||||
zs = "ext_th mtp on403 on404 xbu xau xiu xbc xac xbr xar xbd xad xm xban"
|
||||
if name not in zs.split():
|
||||
if value is True:
|
||||
t = "└─add volflag [{}] = {} ({})"
|
||||
@@ -1515,6 +1529,14 @@ class AuthSrv(object):
|
||||
if not mount and not self.args.idp_h_usr:
|
||||
# -h says our defaults are CWD at root and read/write for everyone
|
||||
axs = AXS(["*"], ["*"], None, None)
|
||||
if os.path.exists("/z/initcfg"):
|
||||
t = "Read-access has been disabled due to failsafe: Docker detected, but the config does not define any volumes. This failsafe is to prevent unintended access if this is due to accidental loss of config. You can override this safeguard and allow read/write to all of /w/ by adding the following arguments to the docker container: -v .::rw"
|
||||
self.log(t, 1)
|
||||
axs = AXS()
|
||||
elif self.args.c:
|
||||
t = "Read-access has been disabled due to failsafe: No volumes were defined by the config-file. This failsafe is to prevent unintended access if this is due to accidental loss of config. You can override this safeguard and allow read/write to the working-directory by adding the following arguments: -v .::rw"
|
||||
self.log(t, 1)
|
||||
axs = AXS()
|
||||
vfs = VFS(self.log_func, absreal("."), "", axs, {})
|
||||
elif "" not in mount:
|
||||
# there's volumes but no root; make root inaccessible
|
||||
@@ -1549,6 +1571,17 @@ class AuthSrv(object):
|
||||
vol.all_vps.sort(key=lambda x: len(x[0]), reverse=True)
|
||||
vol.root = vfs
|
||||
|
||||
zs = "neversymlink"
|
||||
k_ign = set(zs.split())
|
||||
for vol in vfs.all_vols.values():
|
||||
unknown_flags = set()
|
||||
for k, v in vol.flags.items():
|
||||
if k not in flagdescs and k not in k_ign:
|
||||
unknown_flags.add(k)
|
||||
if unknown_flags:
|
||||
t = "WARNING: the config for volume [/%s] has unrecognized volflags; will ignore: '%s'"
|
||||
self.log(t % (vol.vpath, "', '".join(unknown_flags)), 3)
|
||||
|
||||
enshare = self.args.shr
|
||||
shr = enshare[1:-1]
|
||||
shrs = enshare[1:]
|
||||
@@ -1832,7 +1865,11 @@ class AuthSrv(object):
|
||||
if fka and not fk:
|
||||
fk = fka
|
||||
if fk:
|
||||
vol.flags["fk"] = int(fk) if fk is not True else 8
|
||||
fk = 8 if fk is True else int(fk)
|
||||
if fk > 72:
|
||||
t = "max filekey-length is 72; volume /%s specified %d (anything higher than 16 is pointless btw)"
|
||||
raise Exception(t % (vol.vpath, fk))
|
||||
vol.flags["fk"] = fk
|
||||
have_fk = True
|
||||
|
||||
dk = vol.flags.get("dk")
|
||||
@@ -1910,7 +1947,7 @@ class AuthSrv(object):
|
||||
if k not in vol.flags:
|
||||
vol.flags[k] = getattr(self.args, k)
|
||||
|
||||
for k in ("nrand", "u2abort"):
|
||||
for k in ("nrand", "u2abort", "ups_who", "zip_who"):
|
||||
if k in vol.flags:
|
||||
vol.flags[k] = int(vol.flags[k])
|
||||
|
||||
@@ -1962,8 +1999,10 @@ class AuthSrv(object):
|
||||
|
||||
# append additive args from argv to volflags
|
||||
hooks = "xbu xau xiu xbc xac xbr xar xbd xad xm xban".split()
|
||||
for name in "mtp on404 on403".split() + hooks:
|
||||
self._read_volflag(vol.flags, name, getattr(self.args, name), True)
|
||||
for name in "ext_th mtp on404 on403".split() + hooks:
|
||||
self._read_volflag(
|
||||
vol.vpath, vol.flags, name, getattr(self.args, name), True
|
||||
)
|
||||
|
||||
for hn in hooks:
|
||||
cmds = vol.flags.get(hn)
|
||||
@@ -1991,6 +2030,16 @@ class AuthSrv(object):
|
||||
ncmds.append(ocmd)
|
||||
vol.flags[hn] = ncmds
|
||||
|
||||
ext_th = vol.flags["ext_th_d"] = {}
|
||||
etv = "(?)"
|
||||
try:
|
||||
for etv in vol.flags.get("ext_th") or []:
|
||||
k, v = etv.split("=")
|
||||
ext_th[k] = v
|
||||
except:
|
||||
t = "WARNING: volume [/%s]: invalid value specified for ext-th: %s"
|
||||
self.log(t % (vol.vpath, etv), 3)
|
||||
|
||||
# d2d drops all database features for a volume
|
||||
for grp, rm in [["d2d", "e2d"], ["d2t", "e2t"], ["d2d", "e2v"]]:
|
||||
if not vol.flags.get(grp, False):
|
||||
@@ -2339,8 +2388,10 @@ class AuthSrv(object):
|
||||
"frand": bool(vf.get("rand")),
|
||||
"lifetime": vf.get("lifetime") or 0,
|
||||
"unlist": vf.get("unlist") or "",
|
||||
"sb_lg": "" if "no_sb_lg" in vf else (vf.get("lg_sbf") or "y"),
|
||||
}
|
||||
js_htm = {
|
||||
"SPINNER": self.args.spinner,
|
||||
"s_name": self.args.bname,
|
||||
"have_up2k_idx": "e2d" in vf,
|
||||
"have_acode": not self.args.no_acode,
|
||||
@@ -2350,7 +2401,10 @@ class AuthSrv(object):
|
||||
"have_del": not self.args.no_del,
|
||||
"have_unpost": int(self.args.unpost),
|
||||
"have_emp": self.args.emp,
|
||||
"ext_th": vf.get("ext_th_d") or {},
|
||||
"sb_md": "" if "no_sb_md" in vf else (vf.get("md_sbf") or "y"),
|
||||
"sba_md": vf.get("md_sba") or "",
|
||||
"sba_lg": vf.get("lg_sba") or "",
|
||||
"txt_ext": self.args.textfiles.replace(",", " "),
|
||||
"def_hcols": list(vf.get("mth") or []),
|
||||
"unlist0": vf.get("unlist") or "",
|
||||
@@ -2752,7 +2806,9 @@ class AuthSrv(object):
|
||||
zs = "c ihead ohead mtm mtp on403 on404 xac xad xar xau xiu xban xbc xbd xbr xbu xm"
|
||||
lst = set(zs.split())
|
||||
askip = set("a v c vc cgen exp_lg exp_md theme".split())
|
||||
fskip = set("exp_lg exp_md mv_re_r mv_re_t rm_re_r rm_re_t".split())
|
||||
|
||||
t = "exp_lg exp_md ext_th_d mv_re_r mv_re_t rm_re_r rm_re_t srch_re_dots srch_re_nodot"
|
||||
fskip = set(t.split())
|
||||
|
||||
# keymap from argv to vflag
|
||||
amap = vf_bmap()
|
||||
|
||||
@@ -5,6 +5,9 @@ from __future__ import print_function, unicode_literals
|
||||
zs = "a c e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vp e2vu ed emp i j lo mcr mte mth mtm mtp nb nc nid nih nth nw p q s ss sss v z zv"
|
||||
onedash = set(zs.split())
|
||||
|
||||
# verify that all volflags are documented here:
|
||||
# grep volflag= __main__.py | sed -r 's/.*volflag=//;s/\).*//' | sort | uniq | while IFS= read -r x; do grep -E "\"$x(=[^ \"]+)?\": \"" cfg.py || printf '%s\n' "$x"; done
|
||||
|
||||
|
||||
def vf_bmap() -> dict[str, str]:
|
||||
"""argv-to-volflag: simple bools"""
|
||||
@@ -74,6 +77,8 @@ def vf_vmap() -> dict[str, str]:
|
||||
"html_head",
|
||||
"lg_sbf",
|
||||
"md_sbf",
|
||||
"lg_sba",
|
||||
"md_sba",
|
||||
"nrand",
|
||||
"og_desc",
|
||||
"og_site",
|
||||
@@ -91,6 +96,8 @@ def vf_vmap() -> dict[str, str]:
|
||||
"unlist",
|
||||
"u2abort",
|
||||
"u2ts",
|
||||
"ups_who",
|
||||
"zip_who",
|
||||
):
|
||||
ret[k] = k
|
||||
return ret
|
||||
@@ -102,6 +109,7 @@ def vf_cmap() -> dict[str, str]:
|
||||
for k in (
|
||||
"exp_lg",
|
||||
"exp_md",
|
||||
"ext_th",
|
||||
"mte",
|
||||
"mth",
|
||||
"mtp",
|
||||
@@ -144,6 +152,7 @@ flagcats = {
|
||||
"noclone": "take dupe data from clients, even if available on HDD",
|
||||
"nodupe": "rejects existing files (instead of linking/cloning them)",
|
||||
"sparse": "force use of sparse files, mainly for s3-backed storage",
|
||||
"nosparse": "deny use of sparse files, mainly for slow storage",
|
||||
"daw": "enable full WebDAV write support (dangerous);\nPUT-operations will now \033[1;31mOVERWRITE\033[0;35m existing files",
|
||||
"nosub": "forces all uploads into the top folder of the vfs",
|
||||
"magic": "enables filetype detection for nameless uploads",
|
||||
@@ -174,8 +183,11 @@ flagcats = {
|
||||
"e2dsa": "scans all folders for new files on startup; also sets -e2d",
|
||||
"e2t": "enable multimedia indexing; makes it possible to search for tags",
|
||||
"e2ts": "scan existing files for tags on startup; also sets -e2t",
|
||||
"e2tsa": "delete all metadata from DB (full rescan); also sets -e2ts",
|
||||
"e2tsr": "delete all metadata from DB (full rescan); also sets -e2ts",
|
||||
"d2ts": "disables metadata collection for existing files",
|
||||
"e2v": "verify integrity on startup by hashing files and comparing to db",
|
||||
"e2vu": "when e2v fails, update the db (assume on-disk files are good)",
|
||||
"e2vp": "when e2v fails, panic and quit copyparty",
|
||||
"d2ds": "disables onboot indexing, overrides -e2ds*",
|
||||
"d2t": "disables metadata collection, overrides -e2t*",
|
||||
"d2v": "disables file verification, overrides -e2v*",
|
||||
@@ -195,6 +207,8 @@ flagcats = {
|
||||
"srch_excl": "exclude search results with URL matching this regex",
|
||||
},
|
||||
'database, audio tags\n"mte", "mth", "mtp", "mtm" all work the same as -mte, -mth, ...': {
|
||||
"mte=artist,title": "media-tags to index/display",
|
||||
"mth=fmt,res,ac": "media-tags to hide by default",
|
||||
"mtp=.bpm=f,audio-bpm.py": 'uses the "audio-bpm.py" program to\ngenerate ".bpm" tags from uploads (f = overwrite tags)',
|
||||
"mtp=ahash,vhash=media-hash.py": "collects two tags at once",
|
||||
},
|
||||
@@ -208,6 +222,7 @@ flagcats = {
|
||||
"crop": "center-cropping (y/n/fy/fn)",
|
||||
"th3x": "3x resolution (y/n/fy/fn)",
|
||||
"convt": "conversion timeout in seconds",
|
||||
"ext_th=s=/b.png": "use /b.png as thumbnail for file-extension s",
|
||||
},
|
||||
"handlers\n(better explained in --help-handlers)": {
|
||||
"on404=PY": "handle 404s by executing PY file",
|
||||
@@ -230,8 +245,12 @@ flagcats = {
|
||||
"grid": "show grid/thumbnails by default",
|
||||
"gsel": "select files in grid by ctrl-click",
|
||||
"sort": "default sort order",
|
||||
"nsort": "natural-sort of leading digits in filenames",
|
||||
"hsortn": "number of sort-rules to add to media URLs",
|
||||
"unlist": "dont list files matching REGEX",
|
||||
"html_head=TXT": "includes TXT in the <head>, or @PATH for file at PATH",
|
||||
"tcolor=#fc0": "theme color (a hint for webbrowsers, discord, etc.)",
|
||||
"nodirsz": "don't show total folder size",
|
||||
"robots": "allows indexing by search engines (default)",
|
||||
"norobots": "kindly asks search engines to leave",
|
||||
"no_sb_md": "disable js sandbox for markdown files",
|
||||
@@ -240,12 +259,37 @@ flagcats = {
|
||||
"sb_lg": "enable js sandbox for prologue/epilogue (default)",
|
||||
"md_sbf": "list of markdown-sandbox safeguards to disable",
|
||||
"lg_sbf": "list of *logue-sandbox safeguards to disable",
|
||||
"md_sba": "value of iframe allow-prop for markdown-sandbox",
|
||||
"lg_sba": "value of iframe allow-prop for *logue-sandbox",
|
||||
"nohtml": "return html and markdown as text/html",
|
||||
},
|
||||
"opengraph (discord embeds)": {
|
||||
"og": "enable OG (disables hotlinking)",
|
||||
"og_site": "sitename; defaults to --name, disable with '-'",
|
||||
"og_desc": "description text for all files; disable with '-'",
|
||||
"og_th=jf": "thumbnail format; j / jf / jf3 / w / w3 / ...",
|
||||
"og_title_a": "audio title format; default: {{ artist }} - {{ title }}",
|
||||
"og_title_v": "video title format; default: {{ title }}",
|
||||
"og_title_i": "image title format; default: {{ title }}",
|
||||
"og_title=foo": "fallback title if there's nothing in the db",
|
||||
"og_s_title": "force default title; do not read from tags",
|
||||
"og_tpl": "custom html; see --og-tpl in --help",
|
||||
"og_no_head": "you want to add tags manually with og_tpl",
|
||||
"og_ua": "if defined: only send OG html if useragent matches this regex",
|
||||
},
|
||||
"textfiles": {
|
||||
"exp": "enable textfile expansion; see --help-exp",
|
||||
"exp_md": "placeholders to expand in markdown files; see --help",
|
||||
"exp_lg": "placeholders to expand in prologue/epilogue; see --help",
|
||||
},
|
||||
"others": {
|
||||
"dots": "allow all users with read-access to\nenable the option to show dotfiles in listings",
|
||||
"fk=8": 'generates per-file accesskeys,\nwhich are then required at the "g" permission;\nkeys are invalidated if filesize or inode changes',
|
||||
"fka=8": 'generates slightly weaker per-file accesskeys,\nwhich are then required at the "g" permission;\nnot affected by filesize or inode numbers',
|
||||
"rss": "allow '?rss' URL suffix (experimental)",
|
||||
"ups_who=2": "restrict viewing the list of recent uploads",
|
||||
"zip_who=2": "restrict access to download-as-zip/tar",
|
||||
"nopipe": "disable race-the-beam (download unfinished uploads)",
|
||||
"mv_retry": "ms-windows: timeout for renaming busy files",
|
||||
"rm_retry": "ms-windows: timeout for deleting busy files",
|
||||
"davauth": "ask webdav clients to login for all folders",
|
||||
@@ -255,3 +299,10 @@ flagcats = {
|
||||
|
||||
|
||||
flagdescs = {k.split("=")[0]: v for tab in flagcats.values() for k, v in tab.items()}
|
||||
|
||||
|
||||
if True: # so it gets removed in release-builds
|
||||
for fun in [vf_bmap, vf_cmap, vf_vmap]:
|
||||
for k in fun().values():
|
||||
if k not in flagdescs:
|
||||
raise Exception("undocumented volflag: " + k)
|
||||
|
||||
@@ -1,3 +1,6 @@
|
||||
# coding: utf-8
|
||||
from __future__ import print_function, unicode_literals
|
||||
|
||||
import importlib
|
||||
import sys
|
||||
import xml.etree.ElementTree as ET
|
||||
@@ -8,6 +11,10 @@ if True: # pylint: disable=using-constant-test
|
||||
from typing import Any, Optional
|
||||
|
||||
|
||||
class BadXML(Exception):
|
||||
pass
|
||||
|
||||
|
||||
def get_ET() -> ET.XMLParser:
|
||||
pn = "xml.etree.ElementTree"
|
||||
cn = "_elementtree"
|
||||
@@ -34,7 +41,7 @@ def get_ET() -> ET.XMLParser:
|
||||
XMLParser: ET.XMLParser = get_ET()
|
||||
|
||||
|
||||
class DXMLParser(XMLParser): # type: ignore
|
||||
class _DXMLParser(XMLParser): # type: ignore
|
||||
def __init__(self) -> None:
|
||||
tb = ET.TreeBuilder()
|
||||
super(DXMLParser, self).__init__(target=tb)
|
||||
@@ -49,8 +56,12 @@ class DXMLParser(XMLParser): # type: ignore
|
||||
raise BadXML("{}, {}".format(a, ka))
|
||||
|
||||
|
||||
class BadXML(Exception):
|
||||
pass
|
||||
class _NG(XMLParser): # type: ignore
|
||||
def __int__(self) -> None:
|
||||
raise BadXML("dxml selftest failed")
|
||||
|
||||
|
||||
DXMLParser = _DXMLParser
|
||||
|
||||
|
||||
def parse_xml(txt: str) -> ET.Element:
|
||||
@@ -59,6 +70,40 @@ def parse_xml(txt: str) -> ET.Element:
|
||||
return parser.close() # type: ignore
|
||||
|
||||
|
||||
def selftest() -> bool:
|
||||
qbe = r"""<!DOCTYPE d [
|
||||
<!ENTITY a "nice_bakuretsu">
|
||||
]>
|
||||
<root>&a;&a;&a;</root>"""
|
||||
|
||||
emb = r"""<!DOCTYPE d [
|
||||
<!ENTITY a SYSTEM "file:///etc/hostname">
|
||||
]>
|
||||
<root>&a;</root>"""
|
||||
|
||||
# future-proofing; there's never been any known vulns
|
||||
# regarding DTDs and ET.XMLParser, but might as well
|
||||
# block them since webdav-clients don't use them
|
||||
dtd = r"""<!DOCTYPE d SYSTEM "a.dtd">
|
||||
<root>a</root>"""
|
||||
|
||||
for txt in (qbe, emb, dtd):
|
||||
try:
|
||||
parse_xml(txt)
|
||||
t = "WARNING: dxml selftest failed:\n%s\n"
|
||||
print(t % (txt,), file=sys.stderr)
|
||||
return False
|
||||
except BadXML:
|
||||
pass
|
||||
|
||||
return True
|
||||
|
||||
|
||||
DXML_OK = selftest()
|
||||
if not DXML_OK:
|
||||
DXMLParser = _NG
|
||||
|
||||
|
||||
def mktnod(name: str, text: str) -> ET.Element:
|
||||
el = ET.Element(name)
|
||||
el.text = text
|
||||
|
||||
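the selftest above pins down the policy that DTDs and entity definitions are rejected; the same check can be repeated by hand against the module, sketched here (the copyparty.dxml import path is an assumption based on the filename):

# dxml-check.py (hypothetical): the hardened parser must refuse entity tricks
from copyparty.dxml import BadXML, parse_xml

try:
    parse_xml('<!DOCTYPE d [ <!ENTITY a "x"> ]><root>&a;</root>')
    print("WARNING: entity expansion was allowed")
except BadXML:
    print("ok: entities were rejected")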
@@ -132,6 +132,8 @@ NO_CACHE = {"Cache-Control": "no-cache"}
|
||||
|
||||
ALL_COOKIES = "k304 no304 js idxh dots cppwd cppws".split()
|
||||
|
||||
BADXFF = " due to dangerous misconfiguration (the http-header specified by --xff-hdr was received from an untrusted reverse-proxy)"
|
||||
|
||||
H_CONN_KEEPALIVE = "Connection: Keep-Alive"
|
||||
H_CONN_CLOSE = "Connection: Close"
|
||||
|
||||
@@ -162,6 +164,8 @@ class HttpCli(object):
|
||||
def __init__(self, conn: "HttpConn") -> None:
|
||||
assert conn.sr # !rm
|
||||
|
||||
empty_stringlist: list[str] = []
|
||||
|
||||
self.t0 = time.time()
|
||||
self.conn = conn
|
||||
self.u2mutex = conn.u2mutex # mypy404
|
||||
@@ -187,7 +191,7 @@ class HttpCli(object):
|
||||
self.is_vproxied = False
|
||||
self.in_hdr_recv = True
|
||||
self.headers: dict[str, str] = {}
|
||||
self.mode = " "
|
||||
self.mode = " " # http verb
|
||||
self.req = " "
|
||||
self.http_ver = ""
|
||||
self.hint = ""
|
||||
@@ -207,9 +211,7 @@ class HttpCli(object):
|
||||
self.trailing_slash = True
|
||||
self.uname = " "
|
||||
self.pw = " "
|
||||
self.rvol = [" "]
|
||||
self.wvol = [" "]
|
||||
self.avol = [" "]
|
||||
self.rvol = self.wvol = self.avol = empty_stringlist
|
||||
self.do_log = True
|
||||
self.can_read = False
|
||||
self.can_write = False
|
||||
@@ -390,6 +392,7 @@ class HttpCli(object):
|
||||
) + "0.0/16"
|
||||
zs2 = ' or "--xff-src=lan"' if self.conn.xff_lan.map(pip) else ""
|
||||
self.log(t % (self.args.xff_hdr, pip, cli_ip, zso, zs, zs2), 3)
|
||||
self.bad_xff = True
|
||||
else:
|
||||
self.ip = cli_ip
|
||||
self.is_vproxied = bool(self.args.R)
|
||||
@@ -510,7 +513,7 @@ class HttpCli(object):
|
||||
return False
|
||||
|
||||
if "k" in uparam:
|
||||
m = RE_K.search(uparam["k"])
|
||||
m = re_k.search(uparam["k"])
|
||||
if m:
|
||||
zs = uparam["k"]
|
||||
t = "malicious user; illegal filekey; req(%r) k(%r) => %r"
|
||||
@@ -728,10 +731,10 @@ class HttpCli(object):
|
||||
return self.handle_unlock() and self.keepalive
|
||||
elif self.mode == "MKCOL":
|
||||
return self.handle_mkcol() and self.keepalive
|
||||
elif self.mode == "MOVE":
|
||||
return self.handle_move() and self.keepalive
|
||||
elif self.mode in ("MOVE", "COPY"):
|
||||
return self.handle_cpmv() and self.keepalive
|
||||
else:
|
||||
raise Pebkac(400, 'invalid HTTP mode "{0}"'.format(self.mode))
|
||||
raise Pebkac(400, 'invalid HTTP verb "{0}"'.format(self.mode))
|
||||
|
||||
except Exception as ex:
|
||||
if not isinstance(ex, Pebkac):
|
||||
@@ -1230,6 +1233,20 @@ class HttpCli(object):
|
||||
else:
|
||||
return self.tx_404(True)
|
||||
else:
|
||||
vfs = self.asrv.vfs
|
||||
if (
|
||||
not vfs.nodes
|
||||
and not vfs.axs.uread
|
||||
and not vfs.axs.uwrite
|
||||
and not vfs.axs.uget
|
||||
and not vfs.axs.uhtml
|
||||
and not vfs.axs.uadmin
|
||||
):
|
||||
t = "<h2>access denied due to failsafe; check server log</h2>"
|
||||
html = self.j2s("splash", this=self, msg=t)
|
||||
self.reply(html.encode("utf-8", "replace"), 500)
|
||||
return True
|
||||
|
||||
if self.vpath:
|
||||
ptn = self.args.nonsus_urls
|
||||
if not ptn or not ptn.search(self.vpath):
|
||||
@@ -1759,6 +1776,12 @@ class HttpCli(object):
if "%" in self.req:
self.log(" `-- %r" % (self.vpath,))

if self.args.no_dav:
raise Pebkac(405, "WebDAV is disabled in server config")

if not self.can_write:
raise Pebkac(401, "authenticate")

try:
return self._mkdir(self.vpath, True)
except Pebkac as ex:
@@ -1768,14 +1791,35 @@ class HttpCli(object):
self.reply(b"", ex.code)
return True

def handle_move(self) -> bool:
def handle_cpmv(self) -> bool:
dst = self.headers["destination"]
dst = re.sub("^https?://[^/]+", "", dst).lstrip()
dst = unquotep(dst)
if not self._mv(self.vpath, dst.lstrip("/")):
return False

return True
# dolphin (kioworker/6.10) "webdav://127.0.0.1:3923/a/b.txt"
dst = re.sub("^[a-zA-Z]+://[^/]+", "", dst).lstrip()

if self.is_vproxied and dst.startswith(self.args.SRS):
dst = dst[len(self.args.RS) :]

if self.do_log:
self.log("%s %s --//> %s @%s" % (self.mode, self.req, dst, self.uname))
if "%" in self.req:
self.log(" `-- %r" % (self.vpath,))

if self.args.no_dav:
raise Pebkac(405, "WebDAV is disabled in server config")

dst = unquotep(dst)

# overwrite=True is default; rfc4918 9.8.4
overwrite = self.headers.get("overwrite", "").lower() != "f"

try:
fun = self._cp if self.mode == "COPY" else self._mv
return fun(self.vpath, dst.lstrip("/"), overwrite)
except Pebkac as ex:
if ex.code == 403:
ex.code = 401
raise
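Side note on the hunk above: handle_cpmv now honors the WebDAV Overwrite header and defaults to overwriting, as rfc4918 9.8.4 requires. A minimal client-side sketch for exercising it follows; the host, port, paths and the omitted authentication are placeholder assumptions, not taken from this diff:

import http.client

# COPY /a/b.txt to /a/c.txt; "Overwrite: F" asks the server NOT to replace an existing target
conn = http.client.HTTPConnection("127.0.0.1", 3923)
conn.request(
    "COPY",
    "/a/b.txt",
    headers={
        "Destination": "/a/c.txt",  # may also be a full URL; the scheme+host prefix is stripped
        "Overwrite": "F",
    },
)
rsp = conn.getresponse()
print(rsp.status, rsp.read().decode("utf-8", "replace"))

The same request with MOVE instead of COPY goes through the _mv branch of the new fun selector.
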
def _applesan(self) -> bool:
|
||||
if self.args.dav_mac or "Darwin/" not in self.ua:
|
||||
@@ -1896,8 +1940,11 @@ class HttpCli(object):
|
||||
if "stash" in opt:
|
||||
return self.handle_stash(False)
|
||||
|
||||
xm = []
|
||||
xm_rsp = {}
|
||||
|
||||
if "save" in opt:
|
||||
post_sz, _, _, _, path, _ = self.dump_to_file(False)
|
||||
post_sz, _, _, _, _, path, _ = self.dump_to_file(False)
|
||||
self.log("urlform: %d bytes, %r" % (post_sz, path))
|
||||
elif "print" in opt:
|
||||
reader, _ = self.get_body_reader()
|
||||
@@ -1918,7 +1965,7 @@ class HttpCli(object):
|
||||
plain = plain[4:]
|
||||
xm = self.vn.flags.get("xm")
|
||||
if xm:
|
||||
runhook(
|
||||
xm_rsp = runhook(
|
||||
self.log,
|
||||
self.conn.hsrv.broker,
|
||||
None,
|
||||
@@ -1942,6 +1989,13 @@ class HttpCli(object):
|
||||
except Exception as ex:
|
||||
self.log(repr(ex))
|
||||
|
||||
if "xm" in opt:
|
||||
if xm:
|
||||
self.loud_reply(xm_rsp.get("stdout") or "", status=202)
|
||||
return True
|
||||
else:
|
||||
return self.handle_get()
|
||||
|
||||
if "get" in opt:
|
||||
return self.handle_get()
|
||||
|
||||
@@ -1978,11 +2032,11 @@ class HttpCli(object):
|
||||
else:
|
||||
return read_socket(self.sr, bufsz, remains), remains
|
||||
|
||||
def dump_to_file(self, is_put: bool) -> tuple[int, str, str, int, str, str]:
|
||||
# post_sz, sha_hex, sha_b64, remains, path, url
|
||||
def dump_to_file(self, is_put: bool) -> tuple[int, str, str, str, int, str, str]:
|
||||
# post_sz, halg, sha_hex, sha_b64, remains, path, url
|
||||
reader, remains = self.get_body_reader()
|
||||
vfs, rem = self.asrv.vfs.get(self.vpath, self.uname, False, True)
|
||||
rnd, _, lifetime, xbu, xau = self.upload_flags(vfs)
|
||||
rnd, lifetime, xbu, xau = self.upload_flags(vfs)
|
||||
lim = vfs.get_dbv(rem)[0].lim
|
||||
fdir = vfs.canonical(rem)
|
||||
if lim:
|
||||
@@ -2132,12 +2186,14 @@ class HttpCli(object):
|
||||
# small toctou, but better than clobbering a hardlink
|
||||
wunlink(self.log, path, vfs.flags)
|
||||
|
||||
halg = "sha512"
|
||||
hasher = None
|
||||
copier = hashcopy
|
||||
if "ck" in self.ouparam or "ck" in self.headers:
|
||||
zs = self.ouparam.get("ck") or self.headers.get("ck") or ""
|
||||
halg = zs = self.ouparam.get("ck") or self.headers.get("ck") or ""
|
||||
if not zs or zs == "no":
|
||||
copier = justcopy
|
||||
halg = ""
|
||||
elif zs == "md5":
|
||||
hasher = hashlib.md5(**USED4SEC)
|
||||
elif zs == "sha1":
|
||||
@@ -2171,7 +2227,7 @@ class HttpCli(object):
|
||||
raise
|
||||
|
||||
if self.args.nw:
|
||||
return post_sz, sha_hex, sha_b64, remains, path, ""
|
||||
return post_sz, halg, sha_hex, sha_b64, remains, path, ""
|
||||
|
||||
at = mt = time.time() - lifetime
|
||||
cli_mt = self.headers.get("x-oc-mtime")
|
||||
@@ -2282,19 +2338,30 @@ class HttpCli(object):
self.args.RS + vpath + vsuf,
)

return post_sz, sha_hex, sha_b64, remains, path, url
return post_sz, halg, sha_hex, sha_b64, remains, path, url

def handle_stash(self, is_put: bool) -> bool:
post_sz, sha_hex, sha_b64, remains, path, url = self.dump_to_file(is_put)
post_sz, halg, sha_hex, sha_b64, remains, path, url = self.dump_to_file(is_put)
spd = self._spd(post_sz)
t = "%s wrote %d/%d bytes to %r # %s"
self.log(t % (spd, post_sz, remains, path, sha_b64[:28])) # 21

ac = self.uparam.get(
"want", self.headers.get("accept", "").lower().split(";")[-1]
)
mime = "text/plain; charset=utf-8"
ac = self.uparam.get("want") or self.headers.get("accept") or ""
if ac:
ac = ac.split(";", 1)[0].lower()
if ac == "application/json":
ac = "json"
if ac == "url":
t = url
elif ac == "json" or "j" in self.uparam:
jmsg = {"fileurl": url, "filesz": post_sz}
if halg:
jmsg[halg] = sha_hex[:56]
jmsg["sha_b64"] = sha_b64

mime = "application/json"
t = json.dumps(jmsg, indent=2, sort_keys=True)
else:
t = "{}\n{}\n{}\n{}\n".format(post_sz, sha_b64, sha_hex[:56], url)

@@ -2304,7 +2371,7 @@ class HttpCli(object):
h["X-OC-MTime"] = "accepted"
t = "" # some webdav clients expect/prefer this

self.reply(t.encode("utf-8"), 201, headers=h)
self.reply(t.encode("utf-8", "replace"), 201, mime=mime, headers=h)
return True

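The hunk above changes how handle_stash negotiates its reply format (?want=url, ?want=json or an Accept: application/json header, otherwise the plaintext default) and reports whichever checksum algorithm was chosen via ?ck. A hedged sketch of a PUT upload requesting the json reply; the target path and the omitted authentication are assumptions, not taken from this diff:

import http.client, json

conn = http.client.HTTPConnection("127.0.0.1", 3923)
# ck=md5 switches the checksum from the sha512 default; want=json selects the json reply
conn.request("PUT", "/inc/hello.txt?want=json&ck=md5", body=b"hello world\n")
rsp = conn.getresponse()
jmsg = json.loads(rsp.read().decode("utf-8"))
print(rsp.status, jmsg["fileurl"], jmsg["filesz"], jmsg.get("md5"), jmsg["sha_b64"])

On success the status is 201 and, per the hunk, the md5 key is present because halg was set from ck.
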
def bakflip(
|
||||
@@ -2983,7 +3050,7 @@ class HttpCli(object):
|
||||
self.redirect(vpath, "?edit")
|
||||
return True
|
||||
|
||||
def upload_flags(self, vfs: VFS) -> tuple[int, bool, int, list[str], list[str]]:
|
||||
def upload_flags(self, vfs: VFS) -> tuple[int, int, list[str], list[str]]:
|
||||
if self.args.nw:
|
||||
rnd = 0
|
||||
else:
|
||||
@@ -2991,10 +3058,6 @@ class HttpCli(object):
|
||||
if vfs.flags.get("rand"): # force-enable
|
||||
rnd = max(rnd, vfs.flags["nrand"])
|
||||
|
||||
ac = self.uparam.get(
|
||||
"want", self.headers.get("accept", "").lower().split(";")[-1]
|
||||
)
|
||||
want_url = ac == "url"
|
||||
zs = self.uparam.get("life", self.headers.get("life", ""))
|
||||
if zs:
|
||||
vlife = vfs.flags.get("lifetime") or 0
|
||||
@@ -3004,7 +3067,6 @@ class HttpCli(object):
|
||||
|
||||
return (
|
||||
rnd,
|
||||
want_url,
|
||||
lifetime,
|
||||
vfs.flags.get("xbu") or [],
|
||||
vfs.flags.get("xau") or [],
|
||||
@@ -3057,7 +3119,14 @@ class HttpCli(object):
|
||||
if not nullwrite:
|
||||
bos.makedirs(fdir_base)
|
||||
|
||||
rnd, want_url, lifetime, xbu, xau = self.upload_flags(vfs)
|
||||
rnd, lifetime, xbu, xau = self.upload_flags(vfs)
|
||||
zs = self.uparam.get("want") or self.headers.get("accept") or ""
|
||||
if zs:
|
||||
zs = zs.split(";", 1)[0].lower()
|
||||
if zs == "application/json":
|
||||
zs = "json"
|
||||
want_url = zs == "url"
|
||||
want_json = zs == "json" or "j" in self.uparam
|
||||
|
||||
files: list[tuple[int, str, str, str, str, str]] = []
|
||||
# sz, sha_hex, sha_b64, p_file, fname, abspath
|
||||
@@ -3379,7 +3448,9 @@ class HttpCli(object):
|
||||
msg += "\n" + errmsg
|
||||
|
||||
self.reply(msg.encode("utf-8", "replace"), status=sc)
|
||||
elif "j" in self.uparam:
|
||||
elif want_json:
|
||||
if len(jmsg["files"]) == 1:
|
||||
jmsg["fileurl"] = jmsg["files"][0]["url"]
|
||||
jtxt = json.dumps(jmsg, indent=2, sort_keys=True).encode("utf-8", "replace")
|
||||
self.reply(jtxt, mime="application/json", status=sc)
|
||||
else:
|
||||
@@ -4253,8 +4324,14 @@ class HttpCli(object):
|
||||
rem: str,
|
||||
items: list[str],
|
||||
) -> bool:
|
||||
if self.args.no_zip:
|
||||
raise Pebkac(400, "not enabled in server config")
|
||||
lvl = vn.flags["zip_who"]
|
||||
if self.args.no_zip or not lvl:
|
||||
raise Pebkac(400, "download-as-zip/tar is disabled in server config")
|
||||
elif lvl <= 1 and not self.can_admin:
|
||||
raise Pebkac(400, "download-as-zip/tar is admin-only on this server")
|
||||
elif lvl <= 2 and self.uname in ("", "*"):
|
||||
t = "you must be authenticated to download-as-zip/tar on this server"
|
||||
raise Pebkac(400, t)
|
||||
|
||||
logmsg = "{:4} {} ".format("", self.req)
|
||||
self.keepalive = False
|
||||
@@ -4337,7 +4414,7 @@ class HttpCli(object):
|
||||
self.log,
|
||||
self.asrv,
|
||||
fgen,
|
||||
utf8="utf" in uarg,
|
||||
utf8="utf" in uarg or not uarg,
|
||||
pre_crc="crc" in uarg,
|
||||
cmp=uarg if cancmp or uarg == "pax" else "",
|
||||
)
|
||||
@@ -4551,12 +4628,12 @@ class HttpCli(object):
|
||||
else self.conn.hsrv.nm.map(self.ip) or host
|
||||
)
|
||||
# safer than html_escape/quotep since this avoids both XSS and shell-stuff
|
||||
pw = re.sub(r"[<>&$?`\"']", "_", self.pw or "pw")
|
||||
pw = re.sub(r"[<>&$?`\"']", "_", self.pw or "hunter2")
|
||||
vp = re.sub(r"[<>&$?`\"']", "_", self.uparam["hc"] or "").lstrip("/")
|
||||
pw = pw.replace(" ", "%20")
|
||||
vp = vp.replace(" ", "%20")
|
||||
if pw in self.asrv.sesa:
|
||||
pw = "pwd"
|
||||
pw = "hunter2"
|
||||
|
||||
html = self.j2s(
|
||||
"svcs",
|
||||
@@ -4759,9 +4836,12 @@ class HttpCli(object):
|
||||
# that the client is not a graphical browser
|
||||
if (
|
||||
rc == 403
|
||||
and not self.pw
|
||||
and not self.ua.startswith("Mozilla/")
|
||||
and self.uname == "*"
|
||||
and "sec-fetch-site" not in self.headers
|
||||
and (
|
||||
not self.ua.startswith("Mozilla/")
|
||||
or (self.args.dav_ua1 and self.args.dav_ua1.search(self.ua))
|
||||
)
|
||||
):
|
||||
rc = 401
|
||||
self.out_headers["WWW-Authenticate"] = 'Basic realm="a"'
|
||||
@@ -4795,7 +4875,7 @@ class HttpCli(object):
|
||||
|
||||
def scanvol(self) -> bool:
|
||||
if not self.can_admin:
|
||||
raise Pebkac(403, "not allowed for user " + self.uname)
|
||||
raise Pebkac(403, "'scanvol' not allowed for user " + self.uname)
|
||||
|
||||
if self.args.no_rescan:
|
||||
raise Pebkac(403, "the rescan feature is disabled in server config")
|
||||
@@ -4818,7 +4898,7 @@ class HttpCli(object):
|
||||
raise Pebkac(400, "only config files ('cfg') can be reloaded rn")
|
||||
|
||||
if not self.avol:
|
||||
raise Pebkac(403, "not allowed for user " + self.uname)
|
||||
raise Pebkac(403, "'reload' not allowed for user " + self.uname)
|
||||
|
||||
if self.args.no_reload:
|
||||
raise Pebkac(403, "the reload feature is disabled in server config")
|
||||
@@ -4828,7 +4908,7 @@ class HttpCli(object):
|
||||
|
||||
def tx_stack(self) -> bool:
|
||||
if not self.avol and not [x for x in self.wvol if x in self.rvol]:
|
||||
raise Pebkac(403, "not allowed for user " + self.uname)
|
||||
raise Pebkac(403, "'stack' not allowed for user " + self.uname)
|
||||
|
||||
if self.args.no_stack:
|
||||
raise Pebkac(403, "the stackdump feature is disabled in server config")
|
||||
@@ -4986,8 +5066,16 @@ class HttpCli(object):
|
||||
and (self.uname in vol.axs.uread or self.uname in vol.axs.upget)
|
||||
}
|
||||
|
||||
bad_xff = hasattr(self, "bad_xff")
|
||||
if bad_xff:
|
||||
allvols = []
|
||||
t = "will not return list of recent uploads" + BADXFF
|
||||
self.log(t, 1)
|
||||
if self.avol:
|
||||
raise Pebkac(500, t)
|
||||
|
||||
x = self.conn.hsrv.broker.ask(
|
||||
"up2k.get_unfinished_by_user", self.uname, self.ip
|
||||
"up2k.get_unfinished_by_user", self.uname, "" if bad_xff else self.ip
|
||||
)
|
||||
uret = x.get()
|
||||
|
||||
@@ -5114,6 +5202,12 @@ class HttpCli(object):
|
||||
adm = "*" in vol.axs.uadmin or self.uname in vol.axs.uadmin
|
||||
dots = "*" in vol.axs.udot or self.uname in vol.axs.udot
|
||||
|
||||
lvl = vol.flags["ups_who"]
|
||||
if not lvl:
|
||||
continue
|
||||
elif lvl == 1 and not adm:
|
||||
continue
|
||||
|
||||
n = 1000
|
||||
q = "select sz, rd, fn, ip, at from up where at>0 order by at desc"
|
||||
for sz, rd, fn, ip, at in cur.execute(q):
|
||||
@@ -5377,17 +5471,23 @@ class HttpCli(object):
|
||||
|
||||
def handle_rm(self, req: list[str]) -> bool:
|
||||
if not req and not self.can_delete:
|
||||
raise Pebkac(403, "not allowed for user " + self.uname)
|
||||
if self.mode == "DELETE" and self.uname == "*":
|
||||
raise Pebkac(401, "authenticate") # webdav
|
||||
raise Pebkac(403, "'delete' not allowed for user " + self.uname)
|
||||
|
||||
if self.args.no_del:
|
||||
raise Pebkac(403, "the delete feature is disabled in server config")
|
||||
|
||||
unpost = "unpost" in self.uparam
|
||||
if unpost and hasattr(self, "bad_xff"):
|
||||
self.log("unpost was denied" + BADXFF, 1)
|
||||
raise Pebkac(403, "the delete feature is disabled in server config")
|
||||
|
||||
if not req:
|
||||
req = [self.vpath]
|
||||
elif self.is_vproxied:
|
||||
req = [x[len(self.args.SR) :] for x in req]
|
||||
|
||||
unpost = "unpost" in self.uparam
|
||||
nlim = int(self.uparam.get("lim") or 0)
|
||||
lim = [nlim, nlim] if nlim else []
|
||||
|
||||
@@ -5407,14 +5507,22 @@ class HttpCli(object):
|
||||
if not dst:
|
||||
raise Pebkac(400, "need dst vpath")
|
||||
|
||||
return self._mv(self.vpath, dst.lstrip("/"))
|
||||
return self._mv(self.vpath, dst.lstrip("/"), False)
|
||||
|
||||
def _mv(self, vsrc: str, vdst: str) -> bool:
|
||||
def _mv(self, vsrc: str, vdst: str, overwrite: bool) -> bool:
|
||||
if self.args.no_mv:
|
||||
raise Pebkac(403, "the rename/move feature is disabled in server config")
|
||||
|
||||
self.asrv.vfs.get(vsrc, self.uname, True, False, True)
|
||||
self.asrv.vfs.get(vdst, self.uname, False, True)
|
||||
# `handle_cpmv` will catch 403 from these and raise 401
|
||||
svn, srem = self.asrv.vfs.get(vsrc, self.uname, True, False, True)
|
||||
dvn, drem = self.asrv.vfs.get(vdst, self.uname, False, True)
|
||||
|
||||
if overwrite:
|
||||
dabs = dvn.canonical(drem)
|
||||
if bos.path.exists(dabs):
|
||||
self.log("overwriting %s" % (dabs,))
|
||||
self.asrv.vfs.get(vdst, self.uname, False, True, False, True)
|
||||
wunlink(self.log, dabs, dvn.flags)
|
||||
|
||||
x = self.conn.hsrv.broker.ask("up2k.handle_mv", self.uname, self.ip, vsrc, vdst)
|
||||
self.loud_reply(x.get(), status=201)
|
||||
@@ -5430,14 +5538,21 @@ class HttpCli(object):
|
||||
if not dst:
|
||||
raise Pebkac(400, "need dst vpath")
|
||||
|
||||
return self._cp(self.vpath, dst.lstrip("/"))
|
||||
return self._cp(self.vpath, dst.lstrip("/"), False)
|
||||
|
||||
def _cp(self, vsrc: str, vdst: str) -> bool:
|
||||
def _cp(self, vsrc: str, vdst: str, overwrite: bool) -> bool:
|
||||
if self.args.no_cp:
|
||||
raise Pebkac(403, "the copy feature is disabled in server config")
|
||||
|
||||
self.asrv.vfs.get(vsrc, self.uname, True, False)
|
||||
self.asrv.vfs.get(vdst, self.uname, False, True)
|
||||
svn, srem = self.asrv.vfs.get(vsrc, self.uname, True, False)
|
||||
dvn, drem = self.asrv.vfs.get(vdst, self.uname, False, True)
|
||||
|
||||
if overwrite:
|
||||
dabs = dvn.canonical(drem)
|
||||
if bos.path.exists(dabs):
|
||||
self.log("overwriting %s" % (dabs,))
|
||||
self.asrv.vfs.get(vdst, self.uname, False, True, False, True)
|
||||
wunlink(self.log, dabs, dvn.flags)
|
||||
|
||||
x = self.conn.hsrv.broker.ask("up2k.handle_cp", self.uname, self.ip, vsrc, vdst)
|
||||
self.loud_reply(x.get(), status=201)
|
||||
@@ -5787,7 +5902,7 @@ class HttpCli(object):
|
||||
"taglist": [],
|
||||
"have_tags_idx": int(e2t),
|
||||
"have_b_u": (self.can_write and self.uparam.get("b") == "u"),
|
||||
"sb_lg": "" if "no_sb_lg" in vf else (vf.get("lg_sbf") or "y"),
|
||||
"sb_lg": vn.js_ls["sb_lg"],
|
||||
"url_suf": url_suf,
|
||||
"title": html_escape("%s %s" % (self.args.bname, self.vpath), crlf=True),
|
||||
"srv_info": srv_infot,
|
||||
|
||||
@@ -18,7 +18,7 @@ class Metrics(object):

def tx(self, cli: "HttpCli") -> bool:
if not cli.avol:
raise Pebkac(403, "not allowed for user " + cli.uname)
raise Pebkac(403, "'stats' not allowed for user " + cli.uname)

args = cli.args
if not args.stats:

@@ -163,6 +163,7 @@ class MCast(object):
sck.settimeout(None)
sck.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
try:
# safe for this purpose; https://lwn.net/Articles/853637/
sck.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
except:
pass

@@ -50,6 +50,8 @@ from .util import (
|
||||
FFMPEG_URL,
|
||||
HAVE_PSUTIL,
|
||||
HAVE_SQLITE3,
|
||||
HAVE_ZMQ,
|
||||
URL_BUG,
|
||||
UTC,
|
||||
VERSIONS,
|
||||
Daemon,
|
||||
@@ -60,6 +62,7 @@ from .util import (
|
||||
alltrace,
|
||||
ansi_re,
|
||||
build_netmap,
|
||||
expat_ver,
|
||||
load_ipu,
|
||||
min_ex,
|
||||
mp,
|
||||
@@ -639,6 +642,7 @@ class SvcHub(object):
|
||||
(HAVE_FFPROBE, "ffprobe", t_ff + ", read audio/media tags"),
|
||||
(HAVE_MUTAGEN, "mutagen", "read audio tags (ffprobe is better but slower)"),
|
||||
(HAVE_ARGON2, "argon2", "secure password hashing (advanced users only)"),
|
||||
(HAVE_ZMQ, "pyzmq", "send zeromq messages from event-hooks"),
|
||||
(HAVE_HEIF, "pillow-heif", "read .heif images with pillow (rarely useful)"),
|
||||
(HAVE_AVIF, "pillow-avif", "read .avif images with pillow (rarely useful)"),
|
||||
]
|
||||
@@ -695,6 +699,15 @@ class SvcHub(object):
if self.args.bauth_last:
self.log("root", "WARNING: ignoring --bauth-last due to --no-bauth", 3)

if not self.args.no_dav:
from .dxml import DXML_OK

if not DXML_OK:
if not self.args.no_dav:
self.args.no_dav = True
t = "WARNING:\nDisabling WebDAV support because dxml selftest failed. Please report this bug;\n%s\n...and include the following information in the bug-report:\n%s | expat %s\n"
self.log("root", t % (URL_BUG, VERSIONS, expat_ver()), 1)

def _process_config(self) -> bool:
al = self.args

@@ -756,7 +769,7 @@ class SvcHub(object):
|
||||
vs = os.path.expandvars(os.path.expanduser(vs))
|
||||
setattr(al, k, vs)
|
||||
|
||||
for k in "sus_urls nonsus_urls".split(" "):
|
||||
for k in "dav_ua1 sus_urls nonsus_urls".split(" "):
|
||||
vs = getattr(al, k)
|
||||
if not vs or vs == "no":
|
||||
setattr(al, k, None)
|
||||
|
||||
@@ -6,7 +6,7 @@ import os
|
||||
from .__init__ import TYPE_CHECKING
|
||||
from .authsrv import VFS
|
||||
from .bos import bos
|
||||
from .th_srv import HAVE_WEBP, thumb_path
|
||||
from .th_srv import EXTS_AC, HAVE_WEBP, thumb_path
|
||||
from .util import Cooldown
|
||||
|
||||
if True: # pylint: disable=using-constant-test
|
||||
@@ -57,13 +57,17 @@ class ThumbCli(object):
|
||||
if is_vid and "dvthumb" in dbv.flags:
|
||||
return None
|
||||
|
||||
want_opus = fmt in ("opus", "caf", "mp3")
|
||||
want_opus = fmt in EXTS_AC
|
||||
is_au = ext in self.fmt_ffa
|
||||
is_vau = want_opus and ext in self.fmt_ffv
|
||||
if is_au or is_vau:
|
||||
if want_opus:
|
||||
if self.args.no_acode:
|
||||
return None
|
||||
elif fmt == "caf" and self.args.no_caf:
|
||||
fmt = "mp3"
|
||||
elif fmt == "owa" and self.args.no_owa:
|
||||
fmt = "mp3"
|
||||
else:
|
||||
if "dathumb" in dbv.flags:
|
||||
return None
|
||||
|
||||
@@ -32,7 +32,7 @@ from .util import (
|
||||
)
|
||||
|
||||
if True: # pylint: disable=using-constant-test
|
||||
from typing import Optional, Union
|
||||
from typing import Any, Optional, Union
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from .svchub import SvcHub
|
||||
@@ -46,6 +46,9 @@ HAVE_HEIF = False
|
||||
HAVE_AVIF = False
|
||||
HAVE_WEBP = False
|
||||
|
||||
EXTS_TH = set(["jpg", "webp", "png"])
|
||||
EXTS_AC = set(["opus", "owa", "caf", "mp3"])
|
||||
|
||||
try:
|
||||
if os.environ.get("PRTY_NO_PIL"):
|
||||
raise Exception()
|
||||
@@ -139,7 +142,7 @@ def thumb_path(histpath: str, rem: str, mtime: float, fmt: str, ffa: set[str]) -
|
||||
h = hashlib.sha512(afsenc(fn)).digest()
|
||||
fn = ub64enc(h).decode("ascii")[:24]
|
||||
|
||||
if fmt in ("opus", "caf", "mp3"):
|
||||
if fmt in EXTS_AC:
|
||||
cat = "ac"
|
||||
else:
|
||||
fc = fmt[:1]
|
||||
@@ -334,9 +337,10 @@ class ThumbSrv(object):
|
||||
ap_unpk = abspath
|
||||
|
||||
if not bos.path.exists(tpath):
|
||||
want_mp3 = tpath.endswith(".mp3")
|
||||
want_opus = tpath.endswith(".opus") or tpath.endswith(".caf")
|
||||
want_png = tpath.endswith(".png")
|
||||
tex = tpath.rsplit(".", 1)[-1]
|
||||
want_mp3 = tex == "mp3"
|
||||
want_opus = tex in ("opus", "owa", "caf")
|
||||
want_png = tex == "png"
|
||||
want_au = want_mp3 or want_opus
|
||||
for lib in self.args.th_dec:
|
||||
can_au = lib == "ff" and (
|
||||
@@ -754,25 +758,37 @@ class ThumbSrv(object):
|
||||
if "ac" not in tags:
|
||||
raise Exception("not audio")
|
||||
|
||||
try:
|
||||
dur = tags[".dur"][1]
|
||||
except:
|
||||
dur = 0
|
||||
sq = "%dk" % (self.args.q_opus,)
|
||||
bq = sq.encode("ascii")
|
||||
if tags["ac"][1] == "opus":
|
||||
enc = "-c:a copy"
|
||||
else:
|
||||
enc = "-c:a libopus -b:a " + sq
|
||||
|
||||
src_opus = abspath.lower().endswith(".opus") or tags["ac"][1] == "opus"
|
||||
want_caf = tpath.endswith(".caf")
|
||||
tmp_opus = tpath
|
||||
if want_caf:
|
||||
tmp_opus = tpath + ".opus"
|
||||
try:
|
||||
wunlink(self.log, tmp_opus, vn.flags)
|
||||
except:
|
||||
pass
|
||||
fun = self._conv_caf if fmt == "caf" else self._conv_owa
|
||||
|
||||
caf_src = abspath if src_opus else tmp_opus
|
||||
bq = ("%dk" % (self.args.q_opus,)).encode("ascii")
|
||||
fun(abspath, tpath, tags, rawtags, enc, bq, vn)
|
||||
|
||||
def _conv_owa(
|
||||
self,
|
||||
abspath: str,
|
||||
tpath: str,
|
||||
tags: dict[str, tuple[int, Any]],
|
||||
rawtags: dict[str, list[Any]],
|
||||
enc: str,
|
||||
bq: bytes,
|
||||
vn: VFS,
|
||||
) -> None:
|
||||
if tpath.endswith(".owa"):
|
||||
container = b"webm"
|
||||
tagset = [b"-map_metadata", b"-1"]
|
||||
else:
|
||||
container = b"opus"
|
||||
tagset = self.big_tags(rawtags)
|
||||
|
||||
self.log("conv2 %s [%s]" % (container, enc), 6)
|
||||
benc = enc.encode("ascii").split(b" ")
|
||||
|
||||
if not want_caf or not src_opus:
|
||||
# fmt: off
|
||||
cmd = [
|
||||
b"ffmpeg",
|
||||
@@ -780,10 +796,50 @@ class ThumbSrv(object):
|
||||
b"-v", b"error",
|
||||
b"-hide_banner",
|
||||
b"-i", fsenc(abspath),
|
||||
] + self.big_tags(rawtags) + [
|
||||
] + tagset + [
|
||||
b"-map", b"0:a:0",
|
||||
b"-c:a", b"libopus",
|
||||
b"-b:a", bq,
|
||||
] + benc + [
|
||||
b"-f", container,
|
||||
fsenc(tpath)
|
||||
]
|
||||
# fmt: on
|
||||
self._run_ff(cmd, vn, oom=300)
|
||||
|
||||
def _conv_caf(
|
||||
self,
|
||||
abspath: str,
|
||||
tpath: str,
|
||||
tags: dict[str, tuple[int, Any]],
|
||||
rawtags: dict[str, list[Any]],
|
||||
enc: str,
|
||||
bq: bytes,
|
||||
vn: VFS,
|
||||
) -> None:
|
||||
tmp_opus = tpath + ".opus"
|
||||
try:
|
||||
wunlink(self.log, tmp_opus, vn.flags)
|
||||
except:
|
||||
pass
|
||||
|
||||
try:
|
||||
dur = tags[".dur"][1]
|
||||
except:
|
||||
dur = 0
|
||||
|
||||
self.log("conv2 caf-tmp [%s]" % (enc,), 6)
|
||||
benc = enc.encode("ascii").split(b" ")
|
||||
|
||||
# fmt: off
|
||||
cmd = [
|
||||
b"ffmpeg",
|
||||
b"-nostdin",
|
||||
b"-v", b"error",
|
||||
b"-hide_banner",
|
||||
b"-i", fsenc(abspath),
|
||||
b"-map_metadata", b"-1",
|
||||
b"-map", b"0:a:0",
|
||||
] + benc + [
|
||||
b"-f", b"opus",
|
||||
fsenc(tmp_opus)
|
||||
]
|
||||
# fmt: on
|
||||
@@ -794,7 +850,10 @@ class ThumbSrv(object):
|
||||
# fix that by mixing in some inaudible pink noise :^)
|
||||
# 6.3 sec seems like the cutoff so lets do 7, and
|
||||
# 7 sec of psyqui-musou.opus @ 3:50 is 174 KiB
|
||||
if want_caf and (dur < 20 or bos.path.getsize(caf_src) < 256 * 1024):
|
||||
sz = bos.path.getsize(tmp_opus)
|
||||
if dur < 20 or sz < 256 * 1024:
|
||||
zs = bq.decode("ascii")
|
||||
self.log("conv2 caf-transcode; dur=%d sz=%d q=%s" % (dur, sz, zs), 6)
|
||||
# fmt: off
|
||||
cmd = [
|
||||
b"ffmpeg",
|
||||
@@ -813,15 +872,16 @@ class ThumbSrv(object):
|
||||
# fmt: on
|
||||
self._run_ff(cmd, vn, oom=300)
|
||||
|
||||
elif want_caf:
|
||||
else:
|
||||
# simple remux should be safe
|
||||
self.log("conv2 caf-remux; dur=%d sz=%d" % (dur, sz), 6)
|
||||
# fmt: off
|
||||
cmd = [
|
||||
b"ffmpeg",
|
||||
b"-nostdin",
|
||||
b"-v", b"error",
|
||||
b"-hide_banner",
|
||||
b"-i", fsenc(abspath if src_opus else tmp_opus),
|
||||
b"-i", fsenc(tmp_opus),
|
||||
b"-map_metadata", b"-1",
|
||||
b"-map", b"0:a:0",
|
||||
b"-c:a", b"copy",
|
||||
@@ -831,7 +891,6 @@ class ThumbSrv(object):
|
||||
# fmt: on
|
||||
self._run_ff(cmd, vn, oom=300)
|
||||
|
||||
if tmp_opus != tpath:
|
||||
try:
|
||||
wunlink(self.log, tmp_opus, vn.flags)
|
||||
except:
|
||||
@@ -891,7 +950,7 @@ class ThumbSrv(object):
|
||||
|
||||
def _clean(self, cat: str, thumbpath: str) -> int:
|
||||
# self.log("cln {}".format(thumbpath))
|
||||
exts = ["jpg", "webp", "png"] if cat == "th" else ["opus", "caf", "mp3"]
|
||||
exts = EXTS_TH if cat == "th" else EXTS_AC
|
||||
maxage = getattr(self.args, cat + "_maxage")
|
||||
now = time.time()
|
||||
prev_b64 = None
|
||||
|
||||
@@ -795,7 +795,7 @@ class Up2k(object):
|
||||
continue
|
||||
|
||||
self.log("xiu: %d# %r" % (len(wrfs), cmd))
|
||||
runihook(self.log, cmd, vol, ups)
|
||||
runihook(self.log, self.args.hook_v, cmd, vol, ups)
|
||||
|
||||
def _vis_job_progress(self, job: dict[str, Any]) -> str:
|
||||
perc = 100 - (len(job["need"]) * 100.0 / (len(job["hash"]) or 1))
|
||||
@@ -856,7 +856,7 @@ class Up2k(object):
|
||||
self.iacct = self.asrv.iacct
|
||||
self.grps = self.asrv.grps
|
||||
|
||||
have_e2d = self.args.idp_h_usr
|
||||
have_e2d = self.args.idp_h_usr or self.args.chpw or self.args.shr
|
||||
vols = list(all_vols.values())
|
||||
t0 = time.time()
|
||||
|
||||
@@ -1119,6 +1119,7 @@ class Up2k(object):
|
||||
reg = {}
|
||||
drp = None
|
||||
emptylist = []
|
||||
dotpart = "." if self.args.dotpart else ""
|
||||
snap = os.path.join(histpath, "up2k.snap")
|
||||
if bos.path.exists(snap):
|
||||
with gzip.GzipFile(snap, "rb") as f:
|
||||
@@ -1131,6 +1132,8 @@ class Up2k(object):
|
||||
except:
|
||||
pass
|
||||
|
||||
reg = reg2 # diff-golf
|
||||
|
||||
if reg2 and "dwrk" not in reg2[next(iter(reg2))]:
|
||||
for job in reg2.values():
|
||||
job["dwrk"] = job["wark"]
|
||||
@@ -1138,7 +1141,8 @@ class Up2k(object):
|
||||
rm = []
|
||||
for k, job in reg2.items():
|
||||
job["ptop"] = ptop
|
||||
if "done" in job:
|
||||
is_done = "done" in job
|
||||
if is_done:
|
||||
job["need"] = job["hash"] = emptylist
|
||||
else:
|
||||
if "need" not in job:
|
||||
@@ -1146,10 +1150,13 @@ class Up2k(object):
|
||||
if "hash" not in job:
|
||||
job["hash"] = []
|
||||
|
||||
if is_done:
|
||||
fp = djoin(ptop, job["prel"], job["name"])
|
||||
else:
|
||||
fp = djoin(ptop, job["prel"], dotpart + job["name"] + ".PARTIAL")
|
||||
|
||||
if bos.path.exists(fp):
|
||||
reg[k] = job
|
||||
if "done" in job:
|
||||
if is_done:
|
||||
continue
|
||||
job["poke"] = time.time()
|
||||
job["busy"] = {}
|
||||
@@ -1157,11 +1164,18 @@ class Up2k(object):
|
||||
self.log("ign deleted file in snap: %r" % (fp,))
|
||||
if not n4g:
|
||||
rm.append(k)
|
||||
continue
|
||||
|
||||
for x in rm:
|
||||
del reg2[x]
|
||||
|
||||
# optimize pre-1.15.4 entries
|
||||
if next((x for x in reg.values() if "done" in x and "poke" in x), None):
|
||||
zsl = "host tnam busy sprs poke t0c".split()
|
||||
for job in reg.values():
|
||||
if "done" in job:
|
||||
for k in zsl:
|
||||
job.pop(k, None)
|
||||
|
||||
if drp is None:
|
||||
drp = [k for k, v in reg.items() if not v["need"]]
|
||||
else:
|
||||
@@ -3004,7 +3018,7 @@ class Up2k(object):
|
||||
if wark in reg:
|
||||
del reg[wark]
|
||||
job["hash"] = job["need"] = []
|
||||
job["done"] = True
|
||||
job["done"] = 1
|
||||
job["busy"] = {}
|
||||
|
||||
if lost:
|
||||
@@ -4614,12 +4628,12 @@ class Up2k(object):
|
||||
Optional[str],
|
||||
Optional[int],
|
||||
Optional[int],
|
||||
Optional[str],
|
||||
str,
|
||||
Optional[int],
|
||||
]:
|
||||
cur = self.cur.get(ptop)
|
||||
if not cur:
|
||||
return None, None, None, None, None, None
|
||||
return None, None, None, None, "", None
|
||||
|
||||
rd, fn = vsplit(vrem)
|
||||
q = "select w, mt, sz, ip, at from up where rd=? and fn=? limit 1"
|
||||
@@ -4633,7 +4647,7 @@ class Up2k(object):
|
||||
if hit:
|
||||
wark, ftime, fsize, ip, at = hit
|
||||
return cur, wark, ftime, fsize, ip, at
|
||||
return cur, None, None, None, None, None
|
||||
return cur, None, None, None, "", None
|
||||
|
||||
def _forget_file(
|
||||
self,
|
||||
@@ -4891,7 +4905,8 @@ class Up2k(object):
|
||||
except:
|
||||
pass
|
||||
|
||||
xbu = self.flags[job["ptop"]].get("xbu")
|
||||
vf = self.flags[job["ptop"]]
|
||||
xbu = vf.get("xbu")
|
||||
ap_chk = djoin(pdir, job["name"])
|
||||
vp_chk = djoin(job["vtop"], job["prel"], job["name"])
|
||||
if xbu:
|
||||
@@ -4921,7 +4936,7 @@ class Up2k(object):
|
||||
if x:
|
||||
zvfs = vfs
|
||||
pdir, _, job["name"], (vfs, rem) = x
|
||||
job["vcfg"] = vfs.flags
|
||||
job["vcfg"] = vf = vfs.flags
|
||||
job["ptop"] = vfs.realpath
|
||||
job["vtop"] = vfs.vpath
|
||||
job["prel"] = rem
|
||||
@@ -4971,8 +4986,13 @@ class Up2k(object):
|
||||
fs = self.fstab.get(pdir)
|
||||
if fs == "ok":
|
||||
pass
|
||||
elif "sparse" in self.flags[job["ptop"]]:
|
||||
t = "volflag 'sparse' is forcing use of sparse files for uploads to [%s]"
|
||||
elif "nosparse" in vf:
|
||||
t = "volflag 'nosparse' is preventing creation of sparse files for uploads to [%s]"
|
||||
self.log(t % (job["ptop"],))
|
||||
relabel = True
|
||||
sprs = False
|
||||
elif "sparse" in vf:
|
||||
t = "volflag 'sparse' is forcing creation of sparse files for uploads to [%s]"
|
||||
self.log(t % (job["ptop"],))
|
||||
relabel = True
|
||||
else:
|
||||
|
||||
@@ -120,6 +120,13 @@ try:
except:
HAVE_SQLITE3 = False

try:
import importlib.util

HAVE_ZMQ = bool(importlib.util.find_spec("zmq"))
except:
HAVE_ZMQ = False

try:
if os.environ.get("PRTY_NO_PSUTIL"):
raise Exception()

@@ -229,9 +236,14 @@ META_NOBOTS = '<meta name="robots" content="noindex, nofollow">\n'

FFMPEG_URL = "https://www.gyan.dev/ffmpeg/builds/ffmpeg-git-full.7z"

URL_PRJ = "https://github.com/9001/copyparty"

URL_BUG = URL_PRJ + "/issues/new?labels=bug&template=bug_report.md"

HTTPCODE = {
200: "OK",
201: "Created",
202: "Accepted",
204: "No Content",
206: "Partial Content",
207: "Multi-Status",

@@ -319,6 +331,7 @@ DAV_ALLPROPS = set(DAV_ALLPROP_L)

MIMES = {
"opus": "audio/ogg; codecs=opus",
"owa": "audio/webm; codecs=opus",
}


@@ -491,6 +504,15 @@ def py_desc() -> str:
)


def expat_ver() -> str:
try:
import pyexpat

return ".".join([str(x) for x in pyexpat.version_info])
except:
return "?"


def _sqlite_ver() -> str:
assert sqlite3 # type: ignore # !rm
try:
@@ -3386,6 +3408,7 @@ def _parsehook(
|
||||
|
||||
def runihook(
|
||||
log: Optional["NamedLogger"],
|
||||
verbose: bool,
|
||||
cmd: str,
|
||||
vol: "VFS",
|
||||
ups: list[tuple[str, int, int, str, str, str, int]],
|
||||
@@ -3415,6 +3438,17 @@ def runihook(
|
||||
else:
|
||||
sp_ka["sin"] = b"\n".join(fsenc(x) for x in aps)
|
||||
|
||||
if acmd[0].startswith("zmq:"):
|
||||
try:
|
||||
msg = sp_ka["sin"].decode("utf-8", "replace")
|
||||
_zmq_hook(log, verbose, "xiu", acmd[0][4:].lower(), msg, wait, sp_ka)
|
||||
if verbose and log:
|
||||
log("hook(xiu) %r OK" % (cmd,), 6)
|
||||
except Exception as ex:
|
||||
if log:
|
||||
log("zeromq failed: %r" % (ex,))
|
||||
return True
|
||||
|
||||
t0 = time.time()
|
||||
if fork:
|
||||
Daemon(runcmd, cmd, bcmd, ka=sp_ka)
|
||||
@@ -3424,6 +3458,7 @@ def runihook(
|
||||
retchk(rc, bcmd, err, log, 5)
|
||||
return False
|
||||
|
||||
if wait:
|
||||
wait -= time.time() - t0
|
||||
if wait > 0:
|
||||
time.sleep(wait)
|
||||
@@ -3431,8 +3466,118 @@ def runihook(
return True


ZMQ = {}
ZMQ_DESC = {
"pub": "fire-and-forget to all/any connected SUB-clients",
"push": "fire-and-forget to one of the connected PULL-clients",
"req": "send messages to a REP-server and blocking-wait for ack",
}

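ZMQ_DESC above documents the three supported hook targets (zmq:pub:..., zmq:push:..., zmq:req:...). A hedged sketch of a matching REP-side listener for the req mode follows; the bind address is a placeholder, and the "return N" ack format mirrors what _zmq_hook (below) parses, where a nonzero N is reported back as a hook failure:

import zmq

ctx = zmq.Context()
sck = ctx.socket(zmq.REP)
sck.bind("tcp://*:5555")  # copyparty side would point a hook at e.g. zmq:req:tcp://localhost:5555

while True:
    msg = sck.recv_string()      # the message body sent by the hook
    print("hook says:", msg)
    sck.send_string("return 0")  # ack; a nonzero value is raised as an error by _runhook
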
def _zmq_hook(
|
||||
log: Optional["NamedLogger"],
|
||||
verbose: bool,
|
||||
src: str,
|
||||
cmd: str,
|
||||
msg: str,
|
||||
wait: float,
|
||||
sp_ka: dict[str, Any],
|
||||
) -> tuple[int, str]:
|
||||
import zmq
|
||||
|
||||
try:
|
||||
mtx = ZMQ["mtx"]
|
||||
except:
|
||||
ZMQ["mtx"] = threading.Lock()
|
||||
time.sleep(0.1)
|
||||
mtx = ZMQ["mtx"]
|
||||
|
||||
ret = ""
|
||||
nret = 0
|
||||
t0 = time.time()
|
||||
if verbose and log:
|
||||
log("hook(%s) %r entering zmq-main-lock" % (src, cmd), 6)
|
||||
|
||||
with mtx:
|
||||
try:
|
||||
mode, sck, mtx = ZMQ[cmd]
|
||||
except:
|
||||
mode, uri = cmd.split(":", 1)
|
||||
try:
|
||||
desc = ZMQ_DESC[mode]
|
||||
if log:
|
||||
t = "libzmq(%s) pyzmq(%s) init(%s); %s"
|
||||
log(t % (zmq.zmq_version(), zmq.__version__, cmd, desc))
|
||||
except:
|
||||
raise Exception("the only supported ZMQ modes are REQ PUB PUSH")
|
||||
|
||||
try:
|
||||
ctx = ZMQ["ctx"]
|
||||
except:
|
||||
ctx = ZMQ["ctx"] = zmq.Context()
|
||||
|
||||
timeout = sp_ka["timeout"]
|
||||
|
||||
if mode == "pub":
|
||||
sck = ctx.socket(zmq.PUB)
|
||||
sck.setsockopt(zmq.LINGER, 0)
|
||||
sck.bind(uri)
|
||||
time.sleep(1) # give clients time to connect; avoids losing first msg
|
||||
elif mode == "push":
|
||||
sck = ctx.socket(zmq.PUSH)
|
||||
if timeout:
|
||||
sck.SNDTIMEO = int(timeout * 1000)
|
||||
sck.setsockopt(zmq.LINGER, 0)
|
||||
sck.bind(uri)
|
||||
elif mode == "req":
|
||||
sck = ctx.socket(zmq.REQ)
|
||||
if timeout:
|
||||
sck.RCVTIMEO = int(timeout * 1000)
|
||||
sck.setsockopt(zmq.LINGER, 0)
|
||||
sck.connect(uri)
|
||||
else:
|
||||
raise Exception()
|
||||
|
||||
mtx = threading.Lock()
|
||||
ZMQ[cmd] = (mode, sck, mtx)
|
||||
|
||||
if verbose and log:
|
||||
log("hook(%s) %r entering socket-lock" % (src, cmd), 6)
|
||||
|
||||
with mtx:
|
||||
if verbose and log:
|
||||
log("hook(%s) %r sending |%d|" % (src, cmd, len(msg)), 6)
|
||||
|
||||
sck.send_string(msg) # PUSH can safely timeout here
|
||||
|
||||
if mode == "req":
|
||||
if verbose and log:
|
||||
log("hook(%s) %r awaiting ack from req" % (src, cmd), 6)
|
||||
try:
|
||||
ret = sck.recv().decode("utf-8", "replace")
|
||||
if ret.startswith("return "):
|
||||
m = re.search("^return ([0-9]+)", ret[:12])
|
||||
if m:
|
||||
nret = int(m.group(1))
|
||||
except:
|
||||
sck.close()
|
||||
del ZMQ[cmd] # bad state; must reset
|
||||
raise Exception("ack timeout; zmq socket killed")
|
||||
|
||||
if ret and log:
|
||||
log("hook(%s) %r ACK: %r" % (src, cmd, ret), 6)
|
||||
|
||||
if wait:
|
||||
wait -= time.time() - t0
|
||||
if wait > 0:
|
||||
time.sleep(wait)
|
||||
|
||||
return nret, ret
|
||||
|
||||
|
||||
def _runhook(
|
||||
log: Optional["NamedLogger"],
|
||||
verbose: bool,
|
||||
src: str,
|
||||
cmd: str,
|
||||
ap: str,
|
||||
@@ -3473,6 +3618,12 @@ def _runhook(
|
||||
else:
|
||||
arg = txt or ap
|
||||
|
||||
if acmd[0].startswith("zmq:"):
|
||||
zi, zs = _zmq_hook(log, verbose, src, acmd[0][4:].lower(), arg, wait, sp_ka)
|
||||
if zi:
|
||||
raise Exception("zmq says %d" % (zi,))
|
||||
return {"rc": 0, "stdout": zs}
|
||||
|
||||
acmd += [arg]
|
||||
if acmd[0].endswith(".py"):
|
||||
acmd = [pybin] + acmd
|
||||
@@ -3501,6 +3652,7 @@ def _runhook(
|
||||
except:
|
||||
ret = {"rc": rc, "stdout": v}
|
||||
|
||||
if wait:
|
||||
wait -= time.time() - t0
|
||||
if wait > 0:
|
||||
time.sleep(wait)
|
||||
@@ -3527,14 +3679,15 @@ def runhook(
|
||||
) -> dict[str, Any]:
|
||||
assert broker or up2k # !rm
|
||||
args = (broker or up2k).args
|
||||
verbose = args.hook_v
|
||||
vp = vp.replace("\\", "/")
|
||||
ret = {"rc": 0}
|
||||
for cmd in cmds:
|
||||
try:
|
||||
hr = _runhook(
|
||||
log, src, cmd, ap, vp, host, uname, perms, mt, sz, ip, at, txt
|
||||
log, verbose, src, cmd, ap, vp, host, uname, perms, mt, sz, ip, at, txt
|
||||
)
|
||||
if log and args.hook_v:
|
||||
if verbose and log:
|
||||
log("hook(%s) %r => \033[32m%s" % (src, cmd, hr), 6)
|
||||
if not hr:
|
||||
return {}
|
||||
@@ -3550,6 +3703,8 @@ def runhook(
|
||||
elif k in ret:
|
||||
if k == "rc" and v:
|
||||
ret[k] = v
|
||||
elif k == "stdout" and v and not ret[k]:
|
||||
ret[k] = v
|
||||
else:
|
||||
ret[k] = v
|
||||
except Exception as ex:
|
||||
|
||||
@@ -633,6 +633,9 @@ window.baguetteBox = (function () {
|
||||
catch (ex) { }
|
||||
isFullscreen = false;
|
||||
|
||||
if (toast.tag == 'bb-ded')
|
||||
toast.hide();
|
||||
|
||||
if (dtor || overlay.style.display === 'none')
|
||||
return;
|
||||
|
||||
@@ -668,6 +671,7 @@ window.baguetteBox = (function () {
|
||||
if (v == keep)
|
||||
continue;
|
||||
|
||||
unbind(v, 'error', lerr);
|
||||
v.src = '';
|
||||
v.load();
|
||||
|
||||
@@ -695,6 +699,28 @@ window.baguetteBox = (function () {
|
||||
}
|
||||
}
|
||||
|
||||
function lerr() {
|
||||
var t;
|
||||
try {
|
||||
t = this.getAttribute('src');
|
||||
t = uricom_dec(t.split('/').pop().split('?')[0]);
|
||||
}
|
||||
catch (ex) { }
|
||||
|
||||
t = 'Failed to open ' + (t?t:'file');
|
||||
console.log('bb-ded', t);
|
||||
t += '\n\nEither the file is corrupt, or your browser does not understand the file format or codec';
|
||||
|
||||
try {
|
||||
t += "\n\nerr#" + this.error.code + ", " + this.error.message;
|
||||
}
|
||||
catch (ex) { }
|
||||
|
||||
this.ded = esc(t);
|
||||
if (this === vidimg())
|
||||
toast.err(20, this.ded, 'bb-ded');
|
||||
}
|
||||
|
||||
function loadImage(index, callback) {
|
||||
var imageContainer = imagesElements[index];
|
||||
var galleryItem = currentGallery[index];
|
||||
@@ -739,7 +765,8 @@ window.baguetteBox = (function () {
|
||||
var image = mknod(is_vid ? 'video' : 'img');
|
||||
clmod(imageContainer, 'vid', is_vid);
|
||||
|
||||
image.addEventListener(is_vid ? 'loadedmetadata' : 'load', function () {
|
||||
bind(image, 'error', lerr);
|
||||
bind(image, is_vid ? 'loadedmetadata' : 'load', function () {
|
||||
// Remove loader element
|
||||
qsr('#baguette-img-' + index + ' .bbox-spinner');
|
||||
if (!options.async && callback)
|
||||
@@ -816,6 +843,12 @@ window.baguetteBox = (function () {
|
||||
});
|
||||
updateOffset();
|
||||
|
||||
var im = vidimg();
|
||||
if (im && im.ded)
|
||||
toast.err(20, im.ded, 'bb-ded');
|
||||
else if (toast.tag == 'bb-ded')
|
||||
toast.hide();
|
||||
|
||||
if (options.animation == 'none')
|
||||
unvid(vid());
|
||||
else
|
||||
|
||||
@@ -1699,6 +1699,8 @@ html.y #tree.nowrap .ntree a+a:hover {
|
||||
margin: 1em .3em 1em 1em;
|
||||
padding: 0 1.2em 0 0;
|
||||
font-size: 4em;
|
||||
min-width: 1em;
|
||||
min-height: 1em;
|
||||
opacity: 0;
|
||||
animation: 1s linear .15s infinite forwards spin, .2s ease .15s 1 forwards fadein;
|
||||
position: absolute;
|
||||
|
||||
@@ -124,9 +124,7 @@
|
||||
|
||||
</div>
|
||||
|
||||
{%- if srv_info %}
|
||||
<div id="srv_info"><span>{{ srv_info }}</span></div>
|
||||
{%- endif %}
|
||||
|
||||
<div id="widget"></div>
|
||||
|
||||
|
||||
@@ -62,6 +62,7 @@ var Ls = {
|
||||
["U/O", "skip 10sec back/fwd"],
|
||||
["0..9", "jump to 0%..90%"],
|
||||
["P", "play/pause (also initiates)"],
|
||||
["S", "select playing song"],
|
||||
["Y", "download song"],
|
||||
], [
|
||||
"image-viewer",
|
||||
@@ -70,6 +71,7 @@ var Ls = {
|
||||
["F", "fullscreen"],
|
||||
["R", "rotate clockwise"],
|
||||
["🡅 R", "rotate ccw"],
|
||||
["S", "select pic"],
|
||||
["Y", "download pic"],
|
||||
], [
|
||||
"video-player",
|
||||
@@ -241,7 +243,7 @@ var Ls = {
|
||||
"cut_nag": "OS notification when upload completes$N(only if the browser or tab is not active)",
|
||||
"cut_sfx": "audible alert when upload completes$N(only if the browser or tab is not active)",
|
||||
|
||||
"cut_mt": "use multithreading to accelerate file hashing$N$Nthis uses web-workers and requires$Nmore RAM (up to 512 MiB extra)$N$N30% faster https, 4.5x faster http,$Nand 5.3x faster on android phones\">mt",
|
||||
"cut_mt": "use multithreading to accelerate file hashing$N$Nthis uses web-workers and requires$Nmore RAM (up to 512 MiB extra)$N$Nmakes https 30% faster, http 4.5x faster\">mt",
|
||||
|
||||
"cft_text": "favicon text (blank and refresh to disable)",
|
||||
"cft_fg": "foreground color",
|
||||
@@ -263,6 +265,7 @@ var Ls = {
|
||||
"ml_pmode": "at end of folder...",
|
||||
"ml_btns": "cmds",
|
||||
"ml_tcode": "transcode",
|
||||
"ml_tcode2": "transcode to",
|
||||
"ml_tint": "tint",
|
||||
"ml_eq": "audio equalizer",
|
||||
"ml_drc": "dynamic range compressor",
|
||||
@@ -286,6 +289,14 @@ var Ls = {
|
||||
"mt_cflac": "convert flac / wav to opus\">flac",
|
||||
"mt_caac": "convert aac / m4a to opus\">aac",
|
||||
"mt_coth": "convert all others (not mp3) to opus\">oth",
|
||||
"mt_c2opus": "best choice for desktops, laptops, android\">opus",
|
||||
"mt_c2owa": "opus-weba, for iOS 17.5 and newer\">owa",
|
||||
"mt_c2caf": "opus-caf, for iOS 11 through 17\">caf",
|
||||
"mt_c2mp3": "use this on very old devices\">mp3",
|
||||
"mt_c2ok": "nice, good choice",
|
||||
"mt_c2nd": "that's not the recommended output format for your device, but that's fine",
|
||||
"mt_c2ng": "your device does not seem to support this output format, but let's try anyways",
|
||||
"mt_xowa": "there are bugs in iOS preventing background playback using this format; please use caf or mp3 instead",
|
||||
"mt_tint": "background level (0-100) on the seekbar$Nto make buffering less distracting",
|
||||
"mt_eq": "enables the equalizer and gain control;$N$Nboost <code>0</code> = standard 100% volume (unmodified)$N$Nwidth <code>1 </code> = standard stereo (unmodified)$Nwidth <code>0.5</code> = 50% left-right crossfeed$Nwidth <code>0 </code> = mono$N$Nboost <code>-0.8</code> & width <code>10</code> = vocal removal :^)$N$Nenabling the equalizer makes gapless albums fully gapless, so leave it on with all the values at zero (except width = 1) if you care about that",
|
||||
"mt_drc": "enables the dynamic range compressor (volume flattener / brickwaller); will also enable EQ to balance the spaghetti, so set all EQ fields except for 'width' to 0 if you don't want it$N$Nlowers the volume of audio above THRESHOLD dB; for every RATIO dB past THRESHOLD there is 1 dB of output, so default values of tresh -24 and ratio 12 means it should never get louder than -22 dB and it is safe to increase the equalizer boost to 0.8, or even 1.8 with ATK 0 and a huge RLS like 90 (only works in firefox; RLS is max 1 in other browsers)$N$N(see wikipedia, they explain it much better)",
|
||||
@@ -650,6 +661,7 @@ var Ls = {
|
||||
["U/O", "hopp 10sek bak/frem"],
|
||||
["0..9", "hopp til 0%..90%"],
|
||||
["P", "pause, eller start / fortsett"],
|
||||
["S", "marker spillende sang"],
|
||||
["Y", "last ned sang"],
|
||||
], [
|
||||
"bildeviser",
|
||||
@@ -658,6 +670,7 @@ var Ls = {
|
||||
["F", "fullskjermvisning"],
|
||||
["R", "rotere mot høyre"],
|
||||
["🡅 R", "rotere mot venstre"],
|
||||
["S", "marker bilde"],
|
||||
["Y", "last ned bilde"],
|
||||
], [
|
||||
"videospiller",
|
||||
@@ -672,7 +685,7 @@ var Ls = {
|
||||
["I/K", "forr./neste fil"],
|
||||
["M", "lukk tekstdokument"],
|
||||
["E", "rediger tekstdokument"],
|
||||
["S", "velg fil (for F2/ctrl-x/...)"],
|
||||
["S", "marker fil (for F2/ctrl-x/...)"],
|
||||
["Y", "last ned tekstfil"],
|
||||
]
|
||||
],
|
||||
@@ -830,7 +843,7 @@ var Ls = {
|
||||
"cut_nag": "meldingsvarsel når opplastning er ferdig$N(kun on nettleserfanen ikke er synlig)",
|
||||
"cut_sfx": "lydvarsel når opplastning er ferdig$N(kun on nettleserfanen ikke er synlig)",
|
||||
|
||||
"cut_mt": "raskere befaring ved å bruke hele CPU'en$N$Ndenne funksjonen anvender web-workers$Nog krever mer RAM (opptil 512 MiB ekstra)$N$N30% raskere https, 4.5x raskere http,$Nog 5.3x raskere på android-telefoner\">mt",
|
||||
"cut_mt": "raskere befaring ved å bruke hele CPU'en$N$Ndenne funksjonen anvender web-workers$Nog krever mer RAM (opptil 512 MiB ekstra)$N$Ngjør https 30% raskere, http 4.5x raskere\">mt",
|
||||
|
||||
"cft_text": "ikontekst (blank ut og last siden på nytt for å deaktivere)",
|
||||
"cft_fg": "farge",
|
||||
@@ -852,6 +865,7 @@ var Ls = {
|
||||
"ml_pmode": "ved enden av mappen",
|
||||
"ml_btns": "knapper",
|
||||
"ml_tcode": "konvertering",
|
||||
"ml_tcode2": "konverter til",
|
||||
"ml_tint": "tint",
|
||||
"ml_eq": "audio equalizer (tonejustering)",
|
||||
"ml_drc": "compressor (volum-utjevning)",
|
||||
@@ -875,6 +889,14 @@ var Ls = {
|
||||
"mt_cflac": "konverter flac / wav-filer til opus\">flac",
|
||||
"mt_caac": "konverter aac / m4a-filer til to opus\">aac",
|
||||
"mt_coth": "konverter alt annet (men ikke mp3) til opus\">andre",
|
||||
"mt_c2opus": "det beste valget for alle PCer og Android\">opus",
|
||||
"mt_c2owa": "opus-weba, for iOS 17.5 og nyere\">owa",
|
||||
"mt_c2caf": "opus-caf, for iOS 11 tilogmed 17\">caf",
|
||||
"mt_c2mp3": "bra valg for steinalder-utstyr (slår aldri feil)\">mp3",
|
||||
"mt_c2ok": "bra valg!",
|
||||
"mt_c2nd": "ikke det foretrukne valget for din enhet, men funker sikkert greit",
|
||||
"mt_c2ng": "ser virkelig ikke ut som enheten din takler dette formatet... men ok, vi prøver",
|
||||
"mt_xowa": "iOS har fortsatt problemer med avspilling av owa-musikk i bakgrunnen. Bruk caf eller mp3 istedenfor",
|
||||
"mt_tint": "nivå av bakgrunnsfarge på søkestripa (0-100),$Ngjør oppdateringer mindre distraherende",
|
||||
"mt_eq": "aktiver tonekontroll og forsterker;$N$Nboost <code>0</code> = normal volumskala$N$Nwidth <code>1 </code> = normal stereo$Nwidth <code>0.5</code> = 50% blanding venstre-høyre$Nwidth <code>0 </code> = mono$N$Nboost <code>-0.8</code> & width <code>10</code> = instrumental :^)$N$Nreduserer også dødtid imellom sangfiler",
|
||||
"mt_drc": "aktiver volum-utjevning (dynamic range compressor); vil også aktivere tonejustering, så sett alle EQ-feltene bortsett fra 'width' til 0 hvis du ikke vil ha noe EQ$N$Nfilteret vil dempe volumet på alt som er høyere enn TRESH dB; for hver RATIO dB over grensen er det 1dB som treffer høyttalerne, så standardverdiene tresh -24 og ratio 12 skal bety at volumet ikke går høyere enn -22 dB, slik at man trygt kan øke boost-verdien i equalizer'n til rundt 0.8, eller 1.8 kombinert med ATK 0 og RLS 90 (bare mulig i firefox; andre nettlesere tar ikke høyere RLS enn 1)$N$Nwikipedia forklarer dette mye bedre forresten",
|
||||
@@ -1240,6 +1262,7 @@ var Ls = {
|
||||
["U/O", "跳过10秒向前/向后"],
|
||||
["0..9", "跳转到0%..90%"],
|
||||
["P", "播放/暂停(也可以启动)"],
|
||||
["S", "选择正在播放的歌曲"], //m
|
||||
["Y", "下载歌曲"]
|
||||
], [
|
||||
"image-viewer",
|
||||
@@ -1248,6 +1271,7 @@ var Ls = {
|
||||
["F", "全屏"],
|
||||
["R", "顺时针旋转"],
|
||||
["🡅 R", "逆时针旋转"],
|
||||
["S", "选择图片"], //m
|
||||
["Y", "下载图片"]
|
||||
], [
|
||||
"video-player",
|
||||
@@ -1419,7 +1443,7 @@ var Ls = {
|
||||
"cut_nag": "上传完成时的操作系统通知$N(仅当浏览器或标签页不活跃时)",
|
||||
"cut_sfx": "上传完成时的声音警报$N(仅当浏览器或标签页不活跃时)",
|
||||
|
||||
"cut_mt": "使用多线程加速文件哈希$N$N这使用 Web Worker 并且需要更多内存(额外最多 512 MiB)$N$N比https快30%,http快4.5倍,比Android 手机快5.3倍\">mt",
|
||||
"cut_mt": "使用多线程加速文件哈希$N$N这使用 Web Worker 并且需要更多内存(额外最多 512 MiB)$N$N这使得 https 快 30%,http 快 4.5 倍\">mt",
|
||||
|
||||
"cft_text": "网站图标文本(为空并刷新以禁用)",
|
||||
"cft_fg": "前景色",
|
||||
@@ -1441,6 +1465,7 @@ var Ls = {
|
||||
"ml_pmode": "在文件夹末尾时...",
|
||||
"ml_btns": "命令",
|
||||
"ml_tcode": "转码",
|
||||
"ml_tcode2": "转换为", //m
|
||||
"ml_tint": "透明度",
|
||||
"ml_eq": "音频均衡器",
|
||||
"ml_drc": "动态范围压缩器",
|
||||
@@ -1464,6 +1489,14 @@ var Ls = {
|
||||
"mt_cflac": "将 flac / wav 转换为 opus\">flac",
|
||||
"mt_caac": "将 aac / m4a 转换为 opus\">aac",
|
||||
"mt_coth": "将所有其他(不是 mp3)转换为 opus\">oth",
|
||||
"mt_c2opus": "适合桌面电脑、笔记本电脑和安卓设备的最佳选择\">opus", //m
|
||||
"mt_c2owa": "opus-weba(适用于 iOS 17.5 及更新版本)\">owa", //m
|
||||
"mt_c2caf": "opus-caf(适用于 iOS 11 到 iOS 17)\">caf", //m
|
||||
"mt_c2mp3": "适用于非常旧的设备\">mp3", //m
|
||||
"mt_c2ok": "不错的选择!", //m
|
||||
"mt_c2nd": "这不是您的设备推荐的输出格式,但应该没问题。", //m
|
||||
"mt_c2ng": "您的设备似乎不支持此输出格式,不过我们还是试试看吧。", //m
|
||||
"mt_xowa": "iOS 系统仍存在无法后台播放 owa 音乐的错误,请改用 caf 或 mp3 格式。", //m
|
||||
"mt_tint": "在进度条上设置背景级别(0-100)",
|
||||
"mt_eq": "启用均衡器和增益控制;$N$Nboost <code>0</code> = 标准 100% 音量(默认)$N$Nwidth <code>1 </code> = 标准立体声(默认)$Nwidth <code>0.5</code> = 50% 左右交叉反馈$Nwidth <code>0 </code> = 单声道$N$Nboost <code>-0.8</code> & width <code>10</code> = 人声移除 )$N$N启用均衡器使无缝专辑完全无缝,所以如果你在乎这一点,请保持启用,所有值设为零(除了宽度 = 1)",
|
||||
"mt_drc": "启用动态范围压缩器(音量平滑器 / 限幅器);还会启用均衡器以平衡音频,因此如果你不想要它,请将均衡器字段除了 '宽度' 外的所有字段设置为 0$N$N降低 THRESHOLD dB 以上的音频的音量;每超过 THRESHOLD dB 的 RATIO 会有 1 dB 输出,所以默认值 tresh -24 和 ratio 12 意味着它的音量不应超过 -22 dB,可以安全地将均衡器增益提高到 0.8,甚至在 ATK 0 和 RLS 如 90 的情况下提高到 1.8(仅在 Firefox 中有效;其他浏览器中 RLS 最大为 1)$N$N(见维基百科,他们解释得更好)",
|
||||
@@ -2193,6 +2226,7 @@ var ACtx = !IPHONE && (window.AudioContext || window.webkitAudioContext),
|
||||
hash0 = location.hash,
|
||||
sloc0 = '' + location,
|
||||
noih = /[?&]v\b/.exec(sloc0),
|
||||
rtt = null,
|
||||
ldks = [],
|
||||
dks = {},
|
||||
dk, mp;
|
||||
@@ -2269,6 +2303,12 @@ var mpl = (function () {
|
||||
'<a href="#" id="ac_flac" class="tgl btn" tt="' + L.mt_cflac + '</a>' +
|
||||
'<a href="#" id="ac_aac" class="tgl btn" tt="' + L.mt_caac + '</a>' +
|
||||
'<a href="#" id="ac_oth" class="tgl btn" tt="' + L.mt_coth + '</a>' +
|
||||
'</div></div>' +
|
||||
'<div><h3>' + L.ml_tcode2 + '</h3><div>' +
|
||||
'<a href="#" id="ac2opus" class="tgl btn" tt="' + L.mt_c2opus + '</a>' +
|
||||
'<a href="#" id="ac2owa" class="tgl btn" tt="' + L.mt_c2owa + '</a>' +
|
||||
'<a href="#" id="ac2caf" class="tgl btn" tt="' + L.mt_c2caf + '</a>' +
|
||||
'<a href="#" id="ac2mp3" class="tgl btn" tt="' + L.mt_c2mp3 + '</a>' +
|
||||
'</div></div>'
|
||||
) : '') +
|
||||
|
||||
@@ -2376,7 +2416,7 @@ var mpl = (function () {
|
||||
c = r.ac_flac;
|
||||
else if (/\.(aac|m4a)$/i.exec(cs))
|
||||
c = r.ac_aac;
|
||||
else if (/\.(ogg|opus)$/i.exec(cs) && !can_ogg)
|
||||
else if (/\.(ogg|opus)$/i.exec(cs) && (!can_ogg || mpl.ac2 == 'mp3'))
|
||||
c = true;
|
||||
else if (re_au_native.exec(cs))
|
||||
c = false;
|
||||
@@ -2384,7 +2424,56 @@ var mpl = (function () {
|
||||
if (!c)
|
||||
return url;
|
||||
|
||||
return addq(url, 'th=' + (can_ogg ? 'opus' : (IPHONE || MACOS) ? 'caf' : 'mp3'));
|
||||
return addq(url, 'th=' + r.ac2);
|
||||
};
|
||||
|
||||
r.set_ac2 = function () {
|
||||
r.init_ac2(this.getAttribute('id').split('ac2')[1]);
|
||||
};
|
||||
|
||||
r.init_ac2 = function (v) {
|
||||
if (!window.have_acode) {
|
||||
r.ac2 = 'opus';
|
||||
return;
|
||||
}
|
||||
|
||||
var dv = can_ogg ? 'opus' : can_caf ? 'caf' : 'mp3',
|
||||
fmts = ['opus', 'owa', 'caf', 'mp3'],
|
||||
btns = [];
|
||||
|
||||
if (v === dv)
|
||||
toast.ok(5, L.mt_c2ok);
|
||||
else if (v)
|
||||
toast.inf(10, L.mt_c2nd);
|
||||
|
||||
if ((v == 'opus' && !can_ogg) ||
|
||||
(v == 'caf' && !can_caf) ||
|
||||
(v == 'owa' && !can_owa))
|
||||
toast.warn(15, L.mt_c2ng);
|
||||
|
||||
if (v == 'owa' && IPHONE)
|
||||
toast.err(30, L.mt_xowa);
|
||||
|
||||
for (var a = 0; a < fmts.length; a++) {
|
||||
var btn = ebi('ac2' + fmts[a]);
|
||||
if (!btn)
|
||||
return console.log('!btn', fmts[a]);
|
||||
btn.onclick = r.set_ac2;
|
||||
btns.push(btn);
|
||||
}
|
||||
if (!IPHONE)
|
||||
btns[1].style.display = btns[2].style.display = 'none';
|
||||
|
||||
if (v)
|
||||
swrite('acode2', v);
|
||||
else
|
||||
v = dv;
|
||||
|
||||
v = sread('acode2', fmts) || v;
|
||||
for (var a = 0; a < fmts.length; a++)
|
||||
clmod(btns[a], 'on', fmts[a] == v)
|
||||
|
||||
r.ac2 = v;
|
||||
};
|
||||
|
||||
r.pp = function () {
|
||||
@@ -2483,6 +2572,7 @@ var mpl = (function () {
|
||||
r.unbuffer = function (url) {
|
||||
if (mp.au2 && (!url || mp.au2.rsrc == url)) {
|
||||
mp.au2.src = mp.au2.rsrc = '';
|
||||
mp.au2.ld = 0; //owa
|
||||
mp.au2.load();
|
||||
}
|
||||
if (!url)
|
||||
@@ -2493,11 +2583,23 @@ var mpl = (function () {
|
||||
})();
|
||||
|
||||
|
||||
var can_ogg = true;
|
||||
var za,
|
||||
can_ogg = true,
|
||||
can_owa = false,
|
||||
can_caf = APPLE && !/ OS ([1-9]|1[01])_/.test(UA);
|
||||
try {
|
||||
can_ogg = new Audio().canPlayType('audio/ogg; codecs=opus') === 'probably';
|
||||
za = new Audio();
|
||||
can_ogg = za.canPlayType('audio/ogg; codecs=opus') === 'probably';
|
||||
can_owa = za.canPlayType('audio/webm; codecs=opus') === 'probably';
|
||||
can_caf = za.canPlayType('audio/x-caf') && can_caf; //'maybe'
|
||||
}
|
||||
catch (ex) { }
|
||||
za = null;
|
||||
|
||||
if (can_owa && APPLE && / OS ([1-9]|1[0-7])_/.test(UA))
|
||||
can_owa = false;
|
||||
|
||||
mpl.init_ac2();
|
||||
|
||||
|
||||
var re_au_native = (can_ogg || have_acode) ? /\.(aac|flac|m4a|mp3|ogg|opus|wav)$/i : /\.(aac|flac|m4a|mp3|wav)$/i,
|
||||
@@ -2670,7 +2772,8 @@ function MPlayer() {
|
||||
});
|
||||
|
||||
r.nopause();
|
||||
r.au2.onloadeddata = r.au2.onloadedmetadata = r.nopause;
|
||||
r.au2.ld = 0; //owa
|
||||
r.au2.onloadeddata = r.au2.onloadedmetadata = r.onpreload;
|
||||
r.au2.preload = "auto";
|
||||
r.au2.src = r.au2.rsrc = url;
|
||||
|
||||
@@ -2685,6 +2788,11 @@ function MPlayer() {
|
||||
r.cd_pause = Date.now();
|
||||
};
|
||||
|
||||
r.onpreload = function () {
|
||||
r.nopause();
|
||||
this.ld++;
|
||||
};
|
||||
|
||||
r.init_fau = function () {
|
||||
if (r.fau || !mpl.fau)
|
||||
return;
|
||||
@@ -2985,7 +3093,7 @@ var pbar = (function () {
|
||||
return;
|
||||
|
||||
pctx.fillStyle = light ? 'rgba(0,0,0,0.5)' : 'rgba(255,255,255,0.5)';
|
||||
var m = /[?&]th=(opus|caf|mp3)/.exec('' + mp.au.rsrc),
|
||||
var m = /[?&]th=(opus|owa|caf|mp3)/.exec('' + mp.au.rsrc),
|
||||
txt = mp.au.ded ? L.mm_playerr.replace(':', ' ;_;') :
|
||||
m ? L.mm_bconv.format(m[1]) : L.mm_bload;
|
||||
|
||||
@@ -3249,6 +3357,14 @@ function dl_song() {
|
||||
var url = addq(mp.au.osrc, 'cache=987&_=' + ACB);
|
||||
dl_file(url);
|
||||
}
|
||||
function sel_song() {
|
||||
var o = QS('#files tr.play');
|
||||
if (!o)
|
||||
return;
|
||||
clmod(o, 'sel', 't');
|
||||
msel.origin_tr(o);
|
||||
msel.selui();
|
||||
}
|
||||
|
||||
|
||||
function playpause(e) {
|
||||
@@ -3941,16 +4057,20 @@ function play(tid, is_ev, seek) {
|
||||
mp.au = mp.au2;
|
||||
mp.au2 = t;
|
||||
t.onerror = t.onprogress = t.onended = null;
|
||||
t.ld = 0; //owa
|
||||
mp.au.onerror = evau_error;
|
||||
mp.au.onprogress = pbar.drawpos;
|
||||
mp.au.onplaying = mpui.progress_updater;
|
||||
mp.au.onloadeddata = mp.au.onloadedmetadata = mp.nopause;
|
||||
mp.au.onended = next_song;
|
||||
t = mp.au.currentTime;
|
||||
if (isNum(t) && t > 0.1)
|
||||
mp.au.currentTime = 0;
|
||||
}
|
||||
else
|
||||
else {
|
||||
console.log('get ' + url.split('/').pop());
|
||||
mp.au.src = mp.au.rsrc = url;
|
||||
}
|
||||
|
||||
mp.au.osrc = mp.tracks[tid];
|
||||
afilt.apply();
|
||||
@@ -4036,6 +4156,14 @@ function evau_error(e) {
|
||||
err = L.mm_eabrt;
|
||||
break;
|
||||
case eplaya.error.MEDIA_ERR_NETWORK:
|
||||
if (IPHONE && eplaya.ld === 1 && mpl.ac2 == 'owa' && !eplaya.paused && !eplaya.currentTime) {
|
||||
eplaya.ded = 0;
|
||||
if (!mpl.owaw) {
|
||||
mpl.owaw = 1;
|
||||
console.log('ignored iOS bug; spurious error sent in parallel with preloaded songs starting to play just fine');
|
||||
}
|
||||
return;
|
||||
}
|
||||
err = L.mm_enet;
|
||||
break;
|
||||
case eplaya.error.MEDIA_ERR_DECODE:
|
||||
@@ -4145,8 +4273,11 @@ function scan_hash(v) {
|
||||
|
||||
|
||||
function eval_hash() {
|
||||
if (!window.hotkeys_attached) {
|
||||
window.hotkeys_attached = true;
|
||||
document.onkeydown = ahotkeys;
|
||||
window.onpopstate = treectl.onpopfun;
|
||||
}
|
||||
|
||||
if (hash0 && window.og_fn) {
|
||||
var all = msel.getall(), mi;
|
||||
@@ -6163,19 +6294,40 @@ var thegrid = (function () {
|
||||
var html = [],
|
||||
svgs = new Set(),
|
||||
max_svgs = CHROME ? 500 : 5000,
|
||||
need_ext = !r.thumbs || !!ext_th,
|
||||
use_ext_th = r.thumbs && ext_th,
|
||||
files = QSA('#files>tbody>tr>td:nth-child(2) a[id]');
|
||||
|
||||
for (var a = 0, aa = files.length; a < aa; a++) {
|
||||
var ao = files[a],
|
||||
ohref = esc(ao.getAttribute('href')),
|
||||
href = ohref.split('?')[0],
|
||||
ext = '',
|
||||
name = uricom_dec(vsplit(href)[1]),
|
||||
ref = ao.getAttribute('id'),
|
||||
isdir = href.endsWith('/'),
|
||||
ac = isdir ? ' class="dir"' : '',
|
||||
ihref = ohref;
|
||||
|
||||
if (r.thumbs) {
|
||||
if (need_ext && href != "#") {
|
||||
var ar = href.split('.');
|
||||
if (ar.length > 1)
|
||||
ar.shift();
|
||||
|
||||
ar.reverse();
|
||||
for (var b = 0; b < Math.min(2, ar.length); b++) {
|
||||
if (ar[b].length > 7)
|
||||
break;
|
||||
|
||||
ext = ar[b] + '.' + ext;
|
||||
}
|
||||
ext = (ext || 'unk.').slice(0, -1);
|
||||
}
|
||||
|
||||
if (use_ext_th && ext_th[ext]) {
|
||||
ihref = ext_th[ext];
|
||||
}
|
||||
else if (r.thumbs) {
|
||||
ihref = addq(ihref, 'th=' + (have_webp ? 'w' : 'j'));
|
||||
if (!r.crop)
|
||||
ihref += 'f';
|
||||
@@ -6188,22 +6340,6 @@ var thegrid = (function () {
|
||||
ihref = SR + '/.cpr/ico/folder';
|
||||
}
|
||||
else {
|
||||
var ar = href.split('.');
|
||||
if (ar.length > 1)
|
||||
ar = ar.slice(1);
|
||||
|
||||
ihref = '';
|
||||
ar.reverse();
|
||||
for (var b = 0; b < ar.length; b++) {
|
||||
if (ar[b].length > 7)
|
||||
break;
|
||||
|
||||
ihref = ar[b] + '.' + ihref;
|
||||
}
|
||||
if (!ihref) {
|
||||
ihref = 'unk.';
|
||||
}
|
||||
var ext = ihref.slice(0, -1);
|
||||
if (!svgs.has(ext)) {
|
||||
if (svgs.size < max_svgs)
|
||||
svgs.add(ext);
|
||||
@@ -6692,6 +6828,11 @@ var ahotkeys = function (e) {
|
||||
ebi('editdoc').click();
|
||||
}
|
||||
|
||||
if (mp && mp.au && !mp.au.paused) {
|
||||
if (k == 'KeyS')
|
||||
return sel_song();
|
||||
}
|
||||
|
||||
if (thegrid.en) {
|
||||
if (k == 'KeyS' || k == 's')
|
||||
return ebi('gridsel').click();
|
||||
@@ -7355,12 +7496,12 @@ var treectl = (function () {
|
||||
xhr.rst = rst;
|
||||
xhr.ts = Date.now();
|
||||
xhr.open('GET', addq(dst, 'tree=' + top + (r.dots ? '&dots' : '') + k), true);
|
||||
xhr.onload = xhr.onerror = recvtree;
|
||||
xhr.onload = xhr.onerror = r.recvtree;
|
||||
xhr.send();
|
||||
enspin('#tree');
|
||||
}
|
||||
|
||||
function recvtree() {
|
||||
r.recvtree = function () {
|
||||
if (!xhrchk(this, L.tl_xe1, L.tl_xe2))
|
||||
return;
|
||||
|
||||
@@ -7370,10 +7511,10 @@ var treectl = (function () {
|
||||
catch (ex) {
|
||||
return toast.err(30, "bad <code>?tree</code> reply;\nexpected json, got this:\n\n" + esc(this.responseText + ''));
|
||||
}
|
||||
rendertree(res, this.ts, this.top, this.dst, this.rst);
|
||||
}
|
||||
r.rendertree(res, this.ts, this.top, this.dst, this.rst);
|
||||
};
|
||||
|
||||
function rendertree(res, ts, top0, dst, rst) {
|
||||
r.rendertree = function (res, ts, top0, dst, rst) {
|
||||
var cur = ebi('treeul').getAttribute('ts');
|
||||
if (cur && parseInt(cur) > ts + 20 && QS('#treeul>li>a+a')) {
|
||||
console.log("reject tree; " + cur + " / " + (ts - cur));
|
||||
@@ -7427,7 +7568,7 @@ var treectl = (function () {
|
||||
console.log("dir_cb failed", ex);
|
||||
}
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
function reload_tree() {
|
||||
var cdir = r.nextdir || get_vpath(),
|
||||
@@ -7546,14 +7687,18 @@ var treectl = (function () {
|
||||
|
||||
var xhr = new XHR(),
|
||||
m = /[?&](k=[^&#]+)/.exec(url),
|
||||
k = m ? '&' + m[1] : dk ? '&k=' + dk : '';
|
||||
k = m ? '&' + m[1] : dk ? '&k=' + dk : '',
|
||||
uq = (r.dots ? '&dots' : '') + k;
|
||||
|
||||
if (rtt !== null)
|
||||
uq += '&rtt=' + rtt;
|
||||
|
||||
xhr.top = url.split('?')[0];
|
||||
xhr.back = back
|
||||
xhr.hpush = hpush;
|
||||
xhr.hydrate = hydrate;
|
||||
xhr.ts = Date.now();
|
||||
xhr.open('GET', xhr.top + '?ls' + (r.dots ? '&dots' : '') + k, true);
|
||||
xhr.open('GET', xhr.top + '?ls' + uq, true);
|
||||
xhr.onload = xhr.onerror = recvls;
|
||||
xhr.send();
|
||||
|
||||
@@ -7585,6 +7730,8 @@ var treectl = (function () {
|
||||
if (!xhrchk(this, L.fl_xe1, L.fl_xe2))
|
||||
return;
|
||||
|
||||
rtt = Date.now() - this.ts;
|
||||
|
||||
r.nextdir = null;
|
||||
var cdir = get_evpath(),
|
||||
lfiles = ebi('files'),
|
||||
@@ -7628,10 +7775,12 @@ var treectl = (function () {
|
||||
dk = res.dk;
|
||||
|
||||
srvinf = res.srvinf;
|
||||
try {
|
||||
ebi('srv_info').innerHTML = ebi('srv_info2').innerHTML = '<span>' + res.srvinf + '</span>';
|
||||
}
|
||||
catch (ex) { }
|
||||
if (rtt !== null)
|
||||
srvinf += (srvinf ? '</span> // <span>rtt: ' : 'rtt: ') + rtt;
|
||||
|
||||
var o = ebi('srv_info2');
|
||||
if (o)
|
||||
o.innerHTML = ebi('srv_info').innerHTML = '<span>' + srvinf + '</span>';
|
||||
|
||||
if (this.hpush && !showfile.active())
|
||||
hist_push(this.top + (dk ? '?k=' + dk : ''));
|
||||
@@ -7649,7 +7798,7 @@ var treectl = (function () {
|
||||
dirs.push(dn);
|
||||
}
|
||||
|
||||
rendertree({ "a": dirs }, this.ts, ".", get_evpath() + (dk ? '?k=' + dk : ''));
|
||||
r.rendertree({ "a": dirs }, this.ts, ".", get_evpath() + (dk ? '?k=' + dk : ''));
|
||||
}
|
||||
|
||||
r.gentab(this.top, res);
|
||||
@@ -7667,9 +7816,9 @@ var treectl = (function () {
|
||||
if (lg1 === Ls.eng.f_empty)
|
||||
lg1 = L.f_empty;
|
||||
|
||||
sandbox(ebi('pro'), sb_lg, '', lg0);
|
||||
sandbox(ebi('pro'), sb_lg, sba_lg,'', lg0);
|
||||
if (dirchg)
|
||||
sandbox(ebi('epi'), sb_lg, '', lg1);
|
||||
sandbox(ebi('epi'), sb_lg, sba_lg, '', lg1);
|
||||
|
||||
clmod(ebi('pro'), 'mdo');
|
||||
clmod(ebi('epi'), 'mdo');
|
||||
@@ -7680,7 +7829,7 @@ var treectl = (function () {
|
||||
if (md1)
|
||||
show_readme(md1, 1);
|
||||
else if (!dirchg)
|
||||
sandbox(ebi('epi'), sb_lg, '', lg1);
|
||||
sandbox(ebi('epi'), sb_lg, sba_lg, '', lg1);
|
||||
|
||||
if (this.hpush && !this.back) {
|
||||
var ofs = ebi('wrap').offsetTop;
|
||||
@@ -7713,7 +7862,7 @@ var treectl = (function () {
|
||||
r.gentab = function (top, res) {
|
||||
var nodes = res.dirs.concat(res.files),
|
||||
html = mk_files_header(res.taglist),
|
||||
sel = r.lsc === res ? msel.getsel() : [],
|
||||
sel = msel.hist[top],
|
||||
ae = document.activeElement,
|
||||
cid = null,
|
||||
plain = [],
|
||||
@@ -7824,10 +7973,7 @@ var treectl = (function () {
|
||||
apply_perms(res);
|
||||
fileman.render();
|
||||
}
|
||||
if (sel.length)
|
||||
msel.loadsel(sel);
|
||||
else
|
||||
msel.origin_id(null);
|
||||
msel.loadsel(top, sel);
|
||||
|
||||
if (cid) try {
|
||||
ebi(cid).closest('tr').focus();
|
||||
@@ -8047,11 +8193,18 @@ var treectl = (function () {
|
||||
})();
|
||||
|
||||
|
||||
var m = SPINNER.split(','),
|
||||
SPINNER_CSS = m.length < 2 ? '' : SPINNER.slice(m[0].length + 1);
|
||||
SPINNER = m[0];
|
||||
|
||||
|
||||
function enspin(sel) {
|
||||
despin(sel);
|
||||
var d = mknod('div');
|
||||
d.className = 'dumb_loader_thing';
|
||||
d.innerHTML = '🌲';
|
||||
d.innerHTML = SPINNER;
|
||||
if (SPINNER_CSS)
|
||||
d.style.cssText = SPINNER_CSS;
|
||||
var tgt = QS(sel);
|
||||
tgt.insertBefore(d, tgt.childNodes[0]);
|
||||
}
|
||||
@@ -8600,12 +8753,12 @@ var settheme = (function () {
|
||||
(function () {
|
||||
function freshen() {
|
||||
lang = sread("cpp_lang", LANGS) || lang;
|
||||
var html = [];
|
||||
for (var k in Ls)
|
||||
if (Ls.hasOwnProperty(k))
|
||||
var k, html = [];
|
||||
for (var a = 0; a < LANGS.length; a++) {
|
||||
k = LANGS[a];
|
||||
html.push('<a href="#" class="btn tgl' + (k == lang ? ' on' : '') +
|
||||
'" tt="' + Ls[k].tt + '">' + k + '</a>');
|
||||
|
||||
}
|
||||
ebi('langs').innerHTML = html.join('');
|
||||
var btns = QSA('#langs a');
|
||||
for (var a = 0, aa = btns.length; a < aa; a++)
|
||||
@@ -8635,8 +8788,8 @@ var arcfmt = (function () {
|
||||
["pax", "tar=pax", L.fz_pax],
|
||||
["tgz", "tar=gz", L.fz_targz],
|
||||
["txz", "tar=xz", L.fz_tarxz],
|
||||
["zip", "zip=utf8", L.fz_zip8],
|
||||
["zip_dos", "zip", L.fz_zipd],
|
||||
["zip", "zip", L.fz_zip8],
|
||||
["zip_dos", "zip=dos", L.fz_zipd],
|
||||
["zip_crc", "zip=crc", L.fz_zipc]
|
||||
];
|
||||
|
||||
@@ -8709,6 +8862,7 @@ var msel = (function () {
|
||||
var r = {};
|
||||
r.sel = null;
|
||||
r.all = null;
|
||||
r.hist = {};
|
||||
r.so = null; // selection origin
|
||||
r.pr = null; // previous range
|
||||
|
||||
@@ -8757,10 +8911,14 @@ var msel = (function () {
|
||||
}
|
||||
};
|
||||
|
||||
r.loadsel = function (sel) {
|
||||
r.loadsel = function (vp, sel) {
|
||||
if (!sel || !r.so || !ebi(r.so))
|
||||
r.so = r.pr = null;
|
||||
|
||||
if (!sel)
|
||||
return r.origin_id(null);
|
||||
|
||||
r.hist[vp] = sel;
|
||||
r.sel = [];
|
||||
r.load();
|
||||
|
||||
@@ -8792,6 +8950,11 @@ var msel = (function () {
|
||||
thegrid.loadsel();
|
||||
fileman.render();
|
||||
showfile.updtree();
|
||||
|
||||
if (r.sel.length)
|
||||
r.hist[get_evpath()] = r.sel;
|
||||
else
|
||||
delete r.hist[get_evpath()];
|
||||
};
|
||||
r.seltgl = function (e) {
|
||||
ev(e);
|
||||
@@ -9028,14 +9191,18 @@ var msel = (function () {
|
||||
function cb() {
|
||||
xhrchk(this, L.fsm_xe1, L.fsm_xe2);
|
||||
|
||||
if (this.status < 200 || this.status > 201) {
|
||||
if (this.status < 200 || this.status > 202) {
|
||||
sf.textContent = 'error: ' + hunpre(this.responseText);
|
||||
return;
|
||||
}
|
||||
|
||||
tb.value = '';
|
||||
clmod(sf, 'vis');
|
||||
sf.textContent = 'sent: "' + this.msg + '"';
|
||||
var txt = 'sent: <code>' + esc(this.msg) + '</code>';
|
||||
if (this.status == 202)
|
||||
txt += '<br /> got: <code>' + esc(this.responseText) + '</code>';
|
||||
|
||||
sf.innerHTML = txt;
|
||||
setTimeout(function () {
|
||||
treectl.goto();
|
||||
}, 100);
|
||||
@@ -9156,7 +9323,7 @@ function show_md(md, name, div, url, depth) {
|
||||
if (!have_emp)
|
||||
md_html = DOMPurify.sanitize(md_html);
|
||||
|
||||
if (sandbox(div, sb_md, 'mdo', md_html))
|
||||
if (sandbox(div, sb_md, sba_md, 'mdo', md_html))
|
||||
return;
|
||||
|
||||
ext = md_plug.post;
|
||||
@@ -9207,7 +9374,7 @@ function show_readme(md, n) {
|
||||
var tgt = ebi(n ? 'epi' : 'pro');
|
||||
|
||||
if (!treectl.ireadme)
|
||||
return sandbox(tgt, '', '', 'a');
|
||||
return sandbox(tgt, '', '', '', 'a');
|
||||
|
||||
show_md(md, n ? 'README.md' : 'PREADME.md', tgt);
|
||||
}
|
||||
@@ -9216,7 +9383,7 @@ for (var a = 0; a < readmes.length; a++)
|
||||
show_readme(readmes[a], a);
|
||||
|
||||
|
||||
function sandbox(tgt, rules, cls, html) {
|
||||
function sandbox(tgt, rules, allow, cls, html) {
|
||||
if (!treectl.ireadme) {
|
||||
tgt.innerHTML = html ? L.md_off : '';
|
||||
return;
|
||||
@@ -9272,6 +9439,7 @@ function sandbox(tgt, rules, cls, html) {
|
||||
var fr = mknod('iframe');
|
||||
fr.setAttribute('title', 'folder ' + tid + 'logue');
|
||||
fr.setAttribute('sandbox', rules ? 'allow-' + rules.replace(/ /g, ' allow-') : '');
|
||||
fr.setAttribute('allow', allow);
|
||||
fr.setAttribute('srcdoc', html);
|
||||
tgt.appendChild(fr);
|
||||
treectl.sb_msg = true;
|
||||
@@ -9321,8 +9489,8 @@ if (sb_lg && logues.length) {
|
||||
if (logues[1] === Ls.eng.f_empty)
|
||||
logues[1] = L.f_empty;
|
||||
|
||||
sandbox(ebi('pro'), sb_lg, '', logues[0]);
|
||||
sandbox(ebi('epi'), sb_lg, '', logues[1]);
|
||||
sandbox(ebi('pro'), sb_lg, sba_lg, '', logues[0]);
|
||||
sandbox(ebi('epi'), sb_lg, sba_lg, '', logues[1]);
|
||||
}
|
||||
|
||||
|
||||
|
||||
@@ -23,8 +23,7 @@ var dbg = function () { };
|
||||
|
||||
// dodge browser issues
|
||||
(function () {
|
||||
var ua = navigator.userAgent;
|
||||
if (ua.indexOf(') Gecko/') !== -1 && /Linux| Mac /.exec(ua)) {
|
||||
if (UA.indexOf(') Gecko/') !== -1 && /Linux| Mac /.exec(UA)) {
|
||||
// necessary on ff-68.7 at least
|
||||
var s = mknod('style');
|
||||
s.innerHTML = '@page { margin: .5in .6in .8in .6in; }';
|
||||
|
||||
@@ -450,7 +450,7 @@ function savechk_cb() {
|
||||
|
||||
// firefox bug: initial selection offset isn't cleared properly through js
|
||||
var ff_clearsel = (function () {
|
||||
if (navigator.userAgent.indexOf(') Gecko/') === -1)
|
||||
if (UA.indexOf(') Gecko/') === -1)
|
||||
return function () { }
|
||||
|
||||
return function () {
|
||||
|
||||
@@ -9,7 +9,7 @@
|
||||
<meta name="theme-color" content="#{{ tcolor }}">
|
||||
<link rel="stylesheet" media="screen" href="{{ r }}/.cpr/splash.css?_={{ ts }}">
|
||||
<link rel="stylesheet" media="screen" href="{{ r }}/.cpr/ui.css?_={{ ts }}">
|
||||
<style>ul{padding-left:1.3em}li{margin:.4em 0}</style>
|
||||
<style>ul{padding-left:1.3em}li{margin:.4em 0}.txa{float:right;margin:0 0 0 1em}</style>
|
||||
{{ html_head }}
|
||||
</head>
|
||||
|
||||
@@ -31,15 +31,22 @@
|
||||
<br />
|
||||
<span class="os win lin mac">placeholders:</span>
|
||||
<span class="os win">
|
||||
{% if accs %}<code><b>{{ pw }}</b></code>=password, {% endif %}<code><b>W:</b></code>=mountpoint
|
||||
{% if accs %}<code><b id="pw0">{{ pw }}</b></code>=password, {% endif %}<code><b>W:</b></code>=mountpoint
|
||||
</span>
|
||||
<span class="os lin mac">
|
||||
{% if accs %}<code><b>{{ pw }}</b></code>=password, {% endif %}<code><b>mp</b></code>=mountpoint
|
||||
{% if accs %}<code><b id="pw0">{{ pw }}</b></code>=password, {% endif %}<code><b>mp</b></code>=mountpoint
|
||||
</span>
|
||||
<a href="#" id="setpw">use real password</a>
|
||||
</p>
|
||||
|
||||
|
||||
|
||||
{% if args.idp_h_usr %}
|
||||
<p style="line-height:2em"><b>WARNING:</b> this server is using IdP-based authentication, so this stuff may not work as advertised. Depending on server config, these commands can probably only be used to access areas which don't require authentication, unless you auth using any non-IdP accounts defined in the copyparty config. Please see <a href="https://github.com/9001/copyparty/blob/hovudstraum/docs/idp.md#connecting-webdav-clients">the IdP docs</a></p>
|
||||
{% endif %}
|
||||
|
||||
|
||||
|
||||
{% if not args.no_dav %}
|
||||
<h1>WebDAV</h1>
|
||||
|
||||
@@ -229,6 +236,60 @@
|
||||
|
||||
|
||||
|
||||
<div class="os win">
|
||||
<h1>ShareX</h1>
|
||||
|
||||
<p>to upload screenshots using ShareX <a href="https://github.com/ShareX/ShareX/releases/tag/v12.1.1">v12</a> or <a href="https://getsharex.com/">v15+</a>, save this as <code>copyparty.sxcu</code> and run it:</p>
|
||||
|
||||
<pre class="dl" name="copyparty.sxcu">
|
||||
{ "Name": "copyparty",
|
||||
"RequestURL": "http{{ s }}://{{ ep }}/{{ rvp }}",
|
||||
"Headers": {
|
||||
{% if accs %}"pw": "<b>{{ pw }}</b>",{% endif %}
|
||||
"accept": "url"
|
||||
},
|
||||
"DestinationType": "ImageUploader, TextUploader, FileUploader",
|
||||
"FileFormName": "f" }
|
||||
</pre>
|
||||
</div>
|
||||
|
||||
|
||||
|
||||
<div class="os mac">
|
||||
<h1>ishare</h1>
|
||||
|
||||
<p>to upload screenshots using <a href="https://isharemac.app/">ishare</a>, save this as <code>copyparty.iscu</code> and run it:</p>
|
||||
|
||||
<pre class="dl" name="copyparty.iscu">
|
||||
{ "Name": "copyparty",
|
||||
"RequestURL": "http{{ s }}://{{ ep }}/{{ rvp }}",
|
||||
"Headers": {
|
||||
{% if accs %}"pw": "<b>{{ pw }}</b>",{% endif %}
|
||||
"accept": "json"
|
||||
},
|
||||
"ResponseURL": "{{ '{{fileurl}}' }}",
|
||||
"FileFormName": "f" }
|
||||
</pre>
|
||||
</div>
|
||||
|
||||
|
||||
|
||||
<div class="os lin">
|
||||
<h1>flameshot</h1>
|
||||
|
||||
<p>to upload screenshots using <a href="https://flameshot.org/">flameshot</a>, save this as <code>flameshot.sh</code> and run it:</p>
|
||||
|
||||
<pre class="dl" name="flameshot.sh">
|
||||
#!/bin/bash
|
||||
pw="<b>{{ pw }}</b>"
|
||||
url="http{{ s }}://{{ ep }}/{{ rvp }}"
|
||||
filename="$(date +%Y-%m%d-%H%M%S).png"
|
||||
flameshot gui -s -r | curl -sT- "$url$filename?want=url&pw=$pw" | xsel -ib
|
||||
</pre>
|
||||
</div>
|
||||
|
||||
|
||||
|
||||
</div>
|
||||
<a href="#" id="repl">π</a>
|
||||
<script>
|
||||
|
||||
@@ -1,11 +1,3 @@
|
||||
function QSA(x) {
|
||||
return document.querySelectorAll(x);
|
||||
}
|
||||
var LINUX = /Linux/.test(navigator.userAgent),
|
||||
MACOS = /[^a-z]mac ?os/i.test(navigator.userAgent),
|
||||
WINDOWS = /Windows/.test(navigator.userAgent);
|
||||
|
||||
|
||||
var oa = QSA('pre');
|
||||
for (var a = 0; a < oa.length; a++) {
|
||||
var html = oa[a].innerHTML,
|
||||
@@ -15,6 +7,21 @@ for (var a = 0; a < oa.length; a++) {
|
||||
oa[a].innerHTML = html.replace(rd, '$1').replace(/[ \r\n]+$/, '').replace(/\r?\n/g, '<br />');
|
||||
}
|
||||
|
||||
function add_dls() {
|
||||
oa = QSA('pre.dl');
|
||||
for (var a = 0; a < oa.length; a++) {
|
||||
var an = 'ta' + a,
|
||||
o = ebi(an) || mknod('a', an, 'download');
|
||||
|
||||
oa[a].setAttribute('id', 'tx' + a);
|
||||
oa[a].parentNode.insertBefore(o, oa[a]);
|
||||
o.setAttribute('download', oa[a].getAttribute('name'));
|
||||
o.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURIComponent(oa[a].innerText));
|
||||
clmod(o, 'txa', 1);
|
||||
}
|
||||
}
|
||||
add_dls();
|
||||
|
||||
|
||||
oa = QSA('.ossel a');
|
||||
for (var a = 0; a < oa.length; a++)
|
||||
@@ -40,3 +47,21 @@ function setos(os) {
|
||||
}
|
||||
|
||||
setos(WINDOWS ? 'win' : LINUX ? 'lin' : MACOS ? 'mac' : 'idk');
|
||||
|
||||
|
||||
ebi('setpw').onclick = function (e) {
|
||||
ev(e);
|
||||
modal.prompt('password:', '', function (v) {
|
||||
if (!v)
|
||||
return;
|
||||
|
||||
var pw0 = ebi('pw0').innerHTML,
|
||||
oa = QSA('b');
|
||||
|
||||
for (var a = 0; a < oa.length; a++)
|
||||
if (oa[a].innerHTML == pw0)
|
||||
oa[a].textContent = v;
|
||||
|
||||
add_dls();
|
||||
});
|
||||
}
|
||||
|
||||
@@ -881,7 +881,7 @@ function up2k_init(subtle) {
|
||||
bcfg_bind(uc, 'turbo', 'u2turbo', turbolvl > 1, draw_turbo);
|
||||
bcfg_bind(uc, 'datechk', 'u2tdate', turbolvl < 3, null);
|
||||
bcfg_bind(uc, 'az', 'u2sort', u2sort.indexOf('n') + 1, set_u2sort);
|
||||
bcfg_bind(uc, 'hashw', 'hashw', !!WebAssembly && (!subtle || !CHROME || MOBILE || VCHROME >= 107), set_hashw);
|
||||
bcfg_bind(uc, 'hashw', 'hashw', !!WebAssembly && !(CHROME && MOBILE) && (!subtle || !CHROME), set_hashw);
|
||||
bcfg_bind(uc, 'upnag', 'upnag', false, set_upnag);
|
||||
bcfg_bind(uc, 'upsfx', 'upsfx', false, set_upsfx);
|
||||
|
||||
@@ -969,7 +969,7 @@ function up2k_init(subtle) {
|
||||
ud = function () { ebi('dir' + fdom_ctr).click(); };
|
||||
|
||||
// too buggy on chrome <= 72
|
||||
var m = / Chrome\/([0-9]+)\./.exec(navigator.userAgent);
|
||||
var m = / Chrome\/([0-9]+)\./.exec(UA);
|
||||
if (m && parseInt(m[1]) < 73)
|
||||
return uf();
|
||||
|
||||
@@ -1972,32 +1972,84 @@ function up2k_init(subtle) {
|
||||
nchunk = 0,
|
||||
chunksize = get_chunksize(t.size),
|
||||
nchunks = Math.ceil(t.size / chunksize),
|
||||
csz_mib = chunksize / 1048576,
|
||||
tread = t.t_hashing,
|
||||
cache_buf = null,
|
||||
cache_car = 0,
|
||||
cache_cdr = 0,
|
||||
hashers = 0,
|
||||
hashtab = {};
|
||||
|
||||
// resolving subtle.digest w/o worker takes 1sec on blur if the actx hack breaks
|
||||
var use_workers = hws.length && !hws_ng && uc.hashw && (nchunks > 1 || document.visibilityState == 'hidden'),
|
||||
hash_par = (!subtle && !use_workers) ? 0 : csz_mib < 48 ? 2 : csz_mib < 96 ? 1 : 0;
|
||||
|
||||
pvis.setab(t.n, nchunks);
|
||||
pvis.move(t.n, 'bz');
|
||||
|
||||
if (hws.length && !hws_ng && uc.hashw && (nchunks > 1 || document.visibilityState == 'hidden'))
|
||||
// resolving subtle.digest w/o worker takes 1sec on blur if the actx hack breaks
|
||||
if (use_workers)
|
||||
return wexec_hash(t, chunksize, nchunks);
|
||||
|
||||
var segm_next = function () {
|
||||
if (nchunk >= nchunks || bpend)
|
||||
return false;
|
||||
|
||||
var reader = new FileReader(),
|
||||
nch = nchunk++,
|
||||
var nch = nchunk++,
|
||||
car = nch * chunksize,
|
||||
cdr = Math.min(chunksize + car, t.size);
|
||||
|
||||
st.bytes.hashed += cdr - car;
|
||||
st.etac.h++;
|
||||
|
||||
var orz = function (e) {
|
||||
bpend--;
|
||||
segm_next();
|
||||
hash_calc(nch, e.target.result);
|
||||
if (MOBILE && CHROME && st.slow_io === null && nch == 1 && cdr - car >= 1024 * 512) {
|
||||
var spd = Math.floor((cdr - car) / (Date.now() + 1 - tread));
|
||||
st.slow_io = spd < 40 * 1024;
|
||||
console.log('spd {0}, slow: {1}'.format(spd, st.slow_io));
|
||||
}
|
||||
|
||||
if (cdr <= cache_cdr && car >= cache_car) {
|
||||
try {
|
||||
var ofs = car - cache_car,
|
||||
ofs2 = ofs + (cdr - car),
|
||||
buf = cache_buf.subarray(ofs, ofs2);
|
||||
|
||||
hash_calc(nch, buf);
|
||||
}
|
||||
catch (ex) {
|
||||
vis_exh(ex + '', 'up2k.js', '', '', ex);
|
||||
}
|
||||
return;
|
||||
}
|
||||
|
||||
var reader = new FileReader(),
|
||||
fr_cdr = cdr;
|
||||
|
||||
if (st.slow_io) {
|
||||
var step = cdr - car,
|
||||
tgt = 48 * 1048576;
|
||||
|
||||
while (step && fr_cdr - car < tgt)
|
||||
fr_cdr += step;
|
||||
if (fr_cdr - car > tgt && fr_cdr > cdr)
|
||||
fr_cdr -= step;
|
||||
if (fr_cdr > t.size)
|
||||
fr_cdr = t.size;
|
||||
}
|
||||
|
||||
var orz = function (e) {
|
||||
bpend = 0;
|
||||
var buf = e.target.result;
|
||||
if (fr_cdr > cdr) {
|
||||
cache_buf = new Uint8Array(buf);
|
||||
cache_car = car;
|
||||
cache_cdr = fr_cdr;
|
||||
buf = cache_buf.subarray(0, cdr - car);
|
||||
}
|
||||
if (hashers < hash_par)
|
||||
segm_next();
|
||||
|
||||
hash_calc(nch, buf);
|
||||
};
|
||||
reader.onload = function (e) {
|
||||
try { orz(e); } catch (ex) { vis_exh(ex + '', 'up2k.js', '', '', ex); }
|
||||
};
|
||||
@@ -2024,17 +2076,20 @@ function up2k_init(subtle) {
|
||||
|
||||
toast.err(0, 'y o u b r o k e i t\nfile: ' + esc(t.name + '') + '\nerror: ' + err);
|
||||
};
|
||||
bpend++;
|
||||
reader.readAsArrayBuffer(t.fobj.slice(car, cdr));
|
||||
bpend = 1;
|
||||
tread = Date.now();
|
||||
reader.readAsArrayBuffer(t.fobj.slice(car, fr_cdr));
|
||||
|
||||
return true;
|
||||
};
|
||||
|
||||
var hash_calc = function (nch, buf) {
|
||||
hashers++;
|
||||
var orz = function (hashbuf) {
|
||||
var hslice = new Uint8Array(hashbuf).subarray(0, 33),
|
||||
b64str = buf2b64(hslice);
|
||||
|
||||
hashers--;
|
||||
hashtab[nch] = b64str;
|
||||
t.hash.push(nch);
|
||||
pvis.hashed(t);
|
||||
@@ -3044,7 +3099,7 @@ function up2k_init(subtle) {
|
||||
new_state = false;
|
||||
fixed = true;
|
||||
}
|
||||
if (new_state === undefined)
|
||||
if (new_state === undefined && preferred === undefined)
|
||||
new_state = can_write ? false : have_up2k_idx ? true : undefined;
|
||||
}
|
||||
|
||||
|
||||
@@ -29,14 +29,17 @@ var wah = '',
|
||||
HTTPS = ('' + location).indexOf('https:') === 0,
|
||||
TOUCH = 'ontouchstart' in window,
|
||||
MOBILE = TOUCH,
|
||||
CHROME = !!window.chrome,
|
||||
CHROME = !!window.chrome, // safari=false
|
||||
VCHROME = CHROME ? 1 : 0,
|
||||
IE = /Trident\//.test(navigator.userAgent),
|
||||
FIREFOX = ('netscape' in window) && / rv:/.test(navigator.userAgent),
|
||||
IPHONE = TOUCH && /iPhone|iPad|iPod/i.test(navigator.userAgent),
|
||||
LINUX = /Linux/.test(navigator.userAgent),
|
||||
MACOS = /[^a-z]mac ?os/i.test(navigator.userAgent),
|
||||
WINDOWS = /Windows/.test(navigator.userAgent);
|
||||
UA = '' + navigator.userAgent,
|
||||
IE = /Trident\//.test(UA),
|
||||
FIREFOX = ('netscape' in window) && / rv:/.test(UA),
|
||||
IPHONE = TOUCH && /iPhone|iPad|iPod/i.test(UA),
|
||||
LINUX = /Linux/.test(UA),
|
||||
MACOS = /Macintosh/.test(UA),
|
||||
WINDOWS = /Windows/.test(UA),
|
||||
APPLE = IPHONE || MACOS,
|
||||
APPLEM = TOUCH && APPLE;
|
||||
|
||||
if (!window.WebAssembly || !WebAssembly.Memory)
|
||||
window.WebAssembly = false;
|
||||
@@ -196,7 +199,7 @@ function vis_exh(msg, url, lineNo, columnNo, error) {
|
||||
'<p style="font-size:1.3em;margin:0;line-height:2em">try to <a href="#" onclick="localStorage.clear();location.reload();">reset copyparty settings</a> if you are stuck here, or <a href="#" onclick="ignex();">ignore this</a> / <a href="#" onclick="ignex(true);">ignore all</a> / <a href="?b=u">basic</a></p>',
|
||||
'<p style="color:#fff">please send me a screenshot arigathanks gozaimuch: <a href="<ghi>" target="_blank">new github issue</a></p>',
|
||||
'<p class="b">' + esc(url + ' @' + lineNo + ':' + columnNo), '<br />' + esc(msg).replace(/\n/g, '<br />') + '</p>',
|
||||
'<p><b>UA:</b> ' + esc(navigator.userAgent + '')
|
||||
'<p><b>UA:</b> ' + esc(UA)
|
||||
];
|
||||
|
||||
try {
|
||||
@@ -431,7 +434,7 @@ function import_js(url, cb, ecb) {
|
||||
|
||||
|
||||
function unsmart(txt) {
|
||||
return !IPHONE ? txt : (txt.
|
||||
return !APPLEM ? txt : (txt.
|
||||
replace(/[\u2014]/g, "--").
|
||||
replace(/[\u2022]/g, "*").
|
||||
replace(/[\u2018\u2019]/g, "'").
|
||||
@@ -1356,7 +1359,7 @@ var tt = (function () {
|
||||
};
|
||||
|
||||
r.getmsg = function (el) {
|
||||
if (IPHONE && QS('body.bbox-open'))
|
||||
if (APPLEM && QS('body.bbox-open'))
|
||||
return;
|
||||
|
||||
var cfg = sread('tooltips');
|
||||
|
||||
@@ -25,6 +25,9 @@
|
||||
## [`changelog.md`](changelog.md)
|
||||
* occasionally grabbed from github release notes
|
||||
|
||||
## [`synology-dsm.md`](synology-dsm.md)
|
||||
* running copyparty on a synology nas
|
||||
|
||||
## [`devnotes.md`](devnotes.md)
|
||||
* technical stuff
|
||||
|
||||
|
||||
@@ -1,3 +1,180 @@
|
||||
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
|
||||
# 2025-0209-2331 `v1.16.12` RTT
|
||||
|
||||
## 🧪 new features
|
||||
|
||||
* show rtt (network latency to server, including request processing time) in the top status text d27f1104
|
||||
* and log the client-reported RTT to serverlog 20ddeb6e
|
||||
* remember file selection when changing folders c7db08ed
|
||||
* good for when you accidentally navigate elsewhere
|
||||
* option to restrict download-as-zip/tar to admins-only c87af9e8
|
||||
* #135 add [bubbleparty](https://github.com/9001/copyparty/blob/hovudstraum/bin/README.md#bubblepartysh), thx @coderofsalvation! 3582a100
|
||||
* runs copyparty in a [sandbox](https://github.com/containers/bubblewrap), making it harder to gain unintended access through bugs in python or copyparty
|
||||
* better alternative to [prisonparty](https://github.com/9001/copyparty/tree/hovudstraum/bin#prisonpartysh), more similar to [the sandboxing in the nixos package](https://github.com/9001/copyparty/blob/7dda77dcb/contrib/nixos/modules/copyparty.nix#L232-L272)
|
||||
* new plugin: [quickmove](https://github.com/9001/copyparty/blob/hovudstraum/contrib/plugins/quickmove.js) 46f9e9ef
|
||||
* adds hotkey `W` to quickly move selected files into a subfolder
|
||||
* #133 new plugin: [graft-thumbs.js](https://github.com/9001/copyparty/blob/hovudstraum/contrib/plugins/graft-thumbs.js) 6c202eff
|
||||
* in folders with foobar.mp3 and foobar.png, it can copy the thumbnail from the png onto the mp3 (and then hide the png)
|
||||
* handlers: add [http-redirect example](https://github.com/9001/copyparty/blob/hovudstraum/bin/handlers/redirect.py) 22cbd2db
|
||||
* add [ping.html](https://github.com/9001/copyparty/blob/hovudstraum/srv/ping.html) 7de9d15a 910797cc
|
||||
|
||||
## 🩹 bugfixes
|
||||
|
||||
* improve iPad detection so they get opus instead of mp3 12dcea4f
|
||||
|
||||
## 🔧 other changes
|
||||
|
||||
* safeguard against accidental config loss cd71b505
|
||||
* while no copyparty servers have ended up in this unfortunate situation yet (afaik), be proactive and borrow some experience from other docker-based services
|
||||
* readme: improve config examples 32e90859
|
||||
* improve serverlog entries regarding 403s b020fd4a
|
||||
* #132 mention fuse permissions in readme d9d2a092
|
||||
* traefik-example: fix disconnect during big uploads 6a9ffe7e
|
||||
* try to show an appropriate warning for media that the browser doesn't support playing 4ef35263
|
||||
* was an attempt at detecting iphones failing to play high-color-precision webm files, but safari doesn't seem to realize itself that playback has failed, ah well
|
||||
* copyparty.exe: update to python 3.12.9
|
||||
* update deps: dompurify 3.2.4
|
||||
|
||||
|
||||
|
||||
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
|
||||
# 2025-0127-0140 `v1.16.11` fix no-acode
|
||||
|
||||
## 🧪 new features
|
||||
|
||||
* u2c (commandline uploader): print download-links for uploaded files 1fe30363
|
||||
* `-u` prints a list after all uploads finished
|
||||
* `-ud` print during upload, after each file
|
||||
* `-uf a.txt` writes them to `a.txt`
|
||||
|
||||
## 🩹 bugfixes
|
||||
|
||||
* [previous ver](https://github.com/9001/copyparty/releases/tag/v1.16.10) broke `--no-acode` (disable audio transcoding) by showing javascript errors 54a7256c
|
||||
* reported on discord (thx)
|
||||
|
||||
|
||||
|
||||
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
|
||||
# 2025-0125-1809 `v1.16.10` iOS9 is fine too
|
||||
|
||||
## 🧪 new features
|
||||
|
||||
* support audio playback on *really old* apple devices c9eba39e
|
||||
* will now transcode to mp3 when necessary, since iOS didn't support opus-in-caf before iOS 11
|
||||
* support audio playback on *future* apple devices 28c9de3f 95390b65
|
||||
* iOS 17.5 introduced support for opus-in-weba (like webp, just audio instead) and, unlike caf, this intentionally supports vbr-opus (awesome)
|
||||
* ...but the current code in iOS is too buggy, so this new format is default-disabled and we'll stick to caf for now fff38f48
|
||||
* ZeroMQ event-hooks can reject uploads 3a5c1d9f
|
||||
* see [the example zmq listener](https://github.com/9001/copyparty/blob/1dace720/bin/zmq-recv.py#L26-L28)
|
||||
* chat with ZeroMQ event-hooks from javascript cdd3b67a
|
||||
* replies from ZMQ REP servers are included in the msg-to-log responses
|
||||
* which makes [this joke](https://github.com/9001/copyparty/blob/hovudstraum/bin/hooks/usb-eject.py) possible f38c7543
|
||||
|
||||
## 🩹 bugfixes
|
||||
|
||||
* nope
|
||||
|
||||
## 🔧 other changes
|
||||
|
||||
* option to restrict the recent-uploads listing to admins-only b8b5214f
|
||||
|
||||
|
||||
|
||||
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
|
||||
# 2025-0122-2326 `v1.16.9` ZeroMQ says hello
|
||||
|
||||
## 🧪 new features
|
||||
|
||||
* event-hooks can send zeromq / zmq / 0mq messages; see [readme](https://github.com/9001/copyparty#zeromq) or `--help-hooks` for examples d9db1534
|
||||
* new volflags to specify the [allow-tag](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Permissions-Policy#iframes) of the markdown/logue sandbox, to allow fullscreen and such (see `--help-flags`) 6a0aaaf0
|
||||
* new volflag `nosparse` for possibly-better performance in very rare and specific scenarios 917380dd
|
||||
* only enable this if you're uploading to s3 or something like that, and do plenty of benchmarking to make sure that it actually improves performance instead of making it worse
|
||||
|
||||
## 🩹 bugfixes
|
||||
|
||||
* restrict max-length of filekeys to 72 characters e0cac6fd
|
||||
* the hash-calculator mode of the commandline uploader produced incorrect whole-file hashes 4c04798a
|
||||
* each chunk (`--chs`) was okay, but the final sum was not
|
||||
|
||||
## 🔧 other changes
|
||||
|
||||
* selftest the xml-parser on startup with malicious xml b2e8bf6e
|
||||
* just in case a future python-version suddenly makes it unsafe somehow
|
||||
* disable some features if a dangerously misconfigured reverseproxy is detected 3f84b0a0
|
||||
* the download-as-zip feature now defaults to utf8 filenames 1231ce19
|
||||
|
||||
|
||||
|
||||
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
|
||||
# 2025-0111-1611 `v1.16.8` android boost
|
||||
|
||||
## 🧪 new features
|
||||
|
||||
* 10x faster file hashing in android-chrome ec507889
|
||||
* on a recent pixel, speed went from 13 to 139 MiB/s
|
||||
* android's sandboxing makes small reads expensive, so do bigger reads instead
|
||||
* so the browser-tab will use more RAM on android now, maybe around 200 MiB
|
||||
* this only affects chrome-based browsers on android, not firefox
|
||||
* PUT/multipart uploads: request-header `Accept: json` makes it return json instead of html, just like `?j` ce0e5be4
|
||||
* add config examples for [ishare](https://isharemac.app/), a MacOS screenshot utility inspired by ShareX 0c0d6b2b
|
||||
* also includes a bug-workaround for [ishare#107](https://github.com/castdrian/ishare/issues/107) - copyparty will now include a toplevel json property `fileurl` in the response if exactly one file was uploaded
|
||||
* the [connect-page](https://a.ocv.me/?hc) generates an appropriate `copyparty.iscu` for ishare; [it looks like this](https://github.com/user-attachments/assets/820730ad-2319-4912-8eb2-733755a4cf54)
|
||||
|
||||
## 🩹 bugfixes
|
||||
|
||||
* fix a potential upload deadlock when...
|
||||
* ...the database (`-e2d`) is **not** enabled for any volume, and...
|
||||
* ...either the shares feature, or user-changeable passwords, is enabled 9e542cf8
|
||||
* when loading the partial-uploads registry on startup, a cosmetic desync could occur 467acb47
|
||||
|
||||
## 🔧 other changes
|
||||
|
||||
* remove some deprecated properties in partial-upload metadata aa2a8fa2
|
||||
* v1.15.7 is now the oldest version which still has any chance of reading a modern up2k.snap
|
||||
* #129 added howto: [using webdav when copyparty is behind IdP](https://github.com/9001/copyparty/blob/hovudstraum/docs/idp.md#connecting-webdav-clients) -- thanks @wuast94 !
|
||||
* added howto: [install copyparty on a synology nas](https://github.com/9001/copyparty/blob/hovudstraum/docs/synology-dsm.md) 21f93042
|
||||
* more examples in the connect-page: 278258ee fb139697
|
||||
* config-file for sharex on windows
|
||||
* config-file for ishare on macos
|
||||
* script for flameshot on linux
|
||||
* #75 add recommendation to use the [kamelåså project](https://github.com/steinuil/kameloso) instead of copyparty's [very-bad-idea.py](https://github.com/9001/copyparty/tree/hovudstraum/bin/mtag#dangerous-plugins) 9f84dc42
|
||||
* more reverse-proxy examples (haproxy, lighttpd, traefik, caddy) and improved nginx performance ac0a2da3
|
||||
* readme has a [performance comparison](https://github.com/9001/copyparty?tab=readme-ov-file#reverse-proxy-performance) -- `haproxy > caddy > traefik > nginx > apache > lighttpd`
|
||||
* copyparty.exe: updated pillow 244e952f
|
||||
|
||||
|
||||
|
||||
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
|
||||
# 2024-1223-0005 `v1.16.7` an idp fix for xmas
|
||||
|
||||
# ☃️🎄 **there is still time** 🎅🎁
|
||||
|
||||
❄️❄️❄️ please [enjoy some appropriate music](https://a.ocv.me/pub/demo/music/.bonus/#af-55d4554d) -- you'll probably like this more than the idp thing honestly ❄️❄️❄️
|
||||
|
||||
## 🧪 new features
|
||||
|
||||
* more improvements to the recent-uploads feature 87598dcd
|
||||
* move html rendering to clientside
|
||||
* any changes to the filter-text applies in real-time
|
||||
* loads 50% faster, reduces server-load by 30%
|
||||
* inhibits search engines from indexing it
|
||||
|
||||
## 🩹 bugfixes
|
||||
|
||||
* using idp without e2d could mess with uploads dd6e9ea7
|
||||
* u2c (commandline uploader): fix window title 946a8c5b
|
||||
* mDNS/SSDP: fix incorrect log colors when multiple primary IPs are lost 552897ab
|
||||
|
||||
## 🔧 other changes
|
||||
|
||||
* ui: make it more obvious that the volume-control is a volume-control 7f044372
|
||||
* copyparty.exe: update deps (jinja2, markupsafe, pyinstaller) c0dacbc4
|
||||
* improve safety of custom plugins 988a7223
|
||||
* if you've made your own plugins which expect certain values (host-header, filekeys) to be html-safe, then you'll want to upgrade
|
||||
* also fixes rss-feed xml if password contains special characters
|
||||
|
||||
|
||||
|
||||
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
|
||||
# 2024-1219-0037 `v1.16.6` merry \x58mas
|
||||
|
||||
|
||||
48
docs/chunksizes.py
Executable file
@@ -0,0 +1,48 @@
|
||||
#!/usr/bin/env python3
|
||||
|
||||
# there's far better ways to do this but its 4am and i dont wanna think
|
||||
|
||||
# just pypy it my dude
|
||||
|
||||
import math
|
||||
|
||||
def humansize(sz, terse=False):
|
||||
for unit in ["B", "KiB", "MiB", "GiB", "TiB"]:
|
||||
if sz < 1024:
|
||||
break
|
||||
|
||||
sz /= 1024.0
|
||||
|
||||
ret = " ".join([str(sz)[:4].rstrip("."), unit])
|
||||
|
||||
if not terse:
|
||||
return ret
|
||||
|
||||
return ret.replace("iB", "").replace(" ", "")
|
||||
|
||||
|
||||
def up2k_chunksize(filesize):
|
||||
chunksize = 1024 * 1024
|
||||
stepsize = 512 * 1024
|
||||
while True:
|
||||
for mul in [1, 2]:
|
||||
nchunks = math.ceil(filesize * 1.0 / chunksize)
|
||||
if nchunks <= 256 or (chunksize >= 32 * 1024 * 1024 and nchunks <= 4096):
|
||||
return chunksize
|
||||
|
||||
chunksize += stepsize
|
||||
stepsize *= mul
|
||||
|
||||
|
||||
def main():
|
||||
prev = 1048576
|
||||
n = n0 = 524288
|
||||
while True:
|
||||
csz = up2k_chunksize(n)
|
||||
if csz > prev:
|
||||
print(f"| {n-n0:>18_} | {humansize(n-n0):>8} | {prev:>13_} | {humansize(prev):>8} |".replace("_", " "))
|
||||
prev = csz
|
||||
n += n0
|
||||
|
||||
|
||||
main()
|
||||
@@ -6,6 +6,7 @@
|
||||
* [up2k](#up2k) - quick outline of the up2k protocol
|
||||
* [why not tus](#why-not-tus) - I didn't know about [tus](https://tus.io/)
|
||||
* [why chunk-hashes](#why-chunk-hashes) - a single sha512 would be better, right?
|
||||
* [list of chunk-sizes](#list-of-chunk-sizes) - specific chunksizes are enforced
|
||||
* [hashed passwords](#hashed-passwords) - regarding the curious decisions
|
||||
* [http api](#http-api)
|
||||
* [read](#read)
|
||||
@@ -95,6 +96,44 @@ hashwasm would solve the streaming issue but reduces hashing speed for sha512 (x
|
||||
|
||||
* blake2 might be a better choice since xxh is non-cryptographic, but that gets ~15 MiB/s on slower androids
|
||||
|
||||
### list of chunk-sizes
|
||||
|
||||
specific chunksizes are enforced depending on total filesize
|
||||
|
||||
each pair of filesize/chunksize is the largest filesize which will use its listed chunksize; a 512 MiB file will use chunksize 2 MiB, but if the file is one byte larger than 512 MiB then it becomes 3 MiB
|
||||
|
||||
for the purpose of performance (or dodging arbitrary proxy limitations), it is possible to upload combined and/or partial chunks using stitching and/or subchunks respectively
|
||||
|
||||
| filesize (bytes) | filesize | chunksize (bytes) | chunksize |
|
||||
| -----------------: | -------: | ------------: | ------: |
|
||||
| 268 435 456 | 256 MiB | 1 048 576 | 1.0 MiB |
|
||||
| 402 653 184 | 384 MiB | 1 572 864 | 1.5 MiB |
|
||||
| 536 870 912 | 512 MiB | 2 097 152 | 2.0 MiB |
|
||||
| 805 306 368 | 768 MiB | 3 145 728 | 3.0 MiB |
|
||||
| 1 073 741 824 | 1.0 GiB | 4 194 304 | 4.0 MiB |
|
||||
| 1 610 612 736 | 1.5 GiB | 6 291 456 | 6.0 MiB |
|
||||
| 2 147 483 648 | 2.0 GiB | 8 388 608 | 8.0 MiB |
|
||||
| 3 221 225 472 | 3.0 GiB | 12 582 912 | 12 MiB |
|
||||
| 4 294 967 296 | 4.0 GiB | 16 777 216 | 16 MiB |
|
||||
| 6 442 450 944 | 6.0 GiB | 25 165 824 | 24 MiB |
|
||||
| 137 438 953 472 | 128 GiB | 33 554 432 | 32 MiB |
|
||||
| 206 158 430 208 | 192 GiB | 50 331 648 | 48 MiB |
|
||||
| 274 877 906 944 | 256 GiB | 67 108 864 | 64 MiB |
|
||||
| 412 316 860 416 | 384 GiB | 100 663 296 | 96 MiB |
|
||||
| 549 755 813 888 | 512 GiB | 134 217 728 | 128 MiB |
|
||||
| 824 633 720 832 | 768 GiB | 201 326 592 | 192 MiB |
|
||||
| 1 099 511 627 776 | 1.0 TiB | 268 435 456 | 256 MiB |
|
||||
| 1 649 267 441 664 | 1.5 TiB | 402 653 184 | 384 MiB |
|
||||
| 2 199 023 255 552 | 2.0 TiB | 536 870 912 | 512 MiB |
|
||||
| 3 298 534 883 328 | 3.0 TiB | 805 306 368 | 768 MiB |
|
||||
| 4 398 046 511 104 | 4.0 TiB | 1 073 741 824 | 1.0 GiB |
|
||||
| 6 597 069 766 656 | 6.0 TiB | 1 610 612 736 | 1.5 GiB |
|
||||
| 8 796 093 022 208 | 8.0 TiB | 2 147 483 648 | 2.0 GiB |
|
||||
| 13 194 139 533 312 | 12.0 TiB | 3 221 225 472 | 3.0 GiB |
|
||||
| 17 592 186 044 416 | 16.0 TiB | 4 294 967 296 | 4.0 GiB |
|
||||
| 26 388 279 066 624 | 24.0 TiB | 6 442 450 944 | 6.0 GiB |
|
||||
| 35 184 372 088 832 | 32.0 TiB | 8 589 934 592 | 8.0 GiB |
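a quick way to spot-check a row in the table above is to reuse the `up2k_chunksize` helper from `docs/chunksizes.py`; the following is just a standalone verification sketch with the same logic:

```python
import math

def up2k_chunksize(filesize):
    # same logic as docs/chunksizes.py: grow the chunksize until the
    # chunk-count drops to 256 or less (4096 once chunks reach 32 MiB)
    chunksize = 1024 * 1024
    stepsize = 512 * 1024
    while True:
        for mul in [1, 2]:
            nchunks = math.ceil(filesize * 1.0 / chunksize)
            if nchunks <= 256 or (chunksize >= 32 * 1024 * 1024 and nchunks <= 4096):
                return chunksize
            chunksize += stepsize
            stepsize *= mul

# a 512 MiB file gets 2 MiB chunks; one byte more bumps it to 3 MiB
assert up2k_chunksize(536870912) == 2097152
assert up2k_chunksize(536870913) == 3145728
```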
|
||||
|
||||
|
||||
# hashed passwords
|
||||
|
||||
@@ -133,8 +172,8 @@ authenticate using header `Cookie: cppwd=foo` or url param `&pw=foo`
|
||||
| GET | `?tar=xz:9` | ...as an xz-level-9 gnu-tar file |
|
||||
| GET | `?tar=pax` | ...as a pax-tar file |
|
||||
| GET | `?tar=pax,xz` | ...as an xz-level-1 pax-tar file |
|
||||
| GET | `?zip=utf-8` | ...as a zip file |
|
||||
| GET | `?zip` | ...as a WinXP-compatible zip file |
|
||||
| GET | `?zip` | ...as a zip file |
|
||||
| GET | `?zip=dos` | ...as a WinXP-compatible zip file |
|
||||
| GET | `?zip=crc` | ...as an MSDOS-compatible zip file |
|
||||
| GET | `?tar&w` | pregenerate webp thumbnails |
|
||||
| GET | `?tar&j` | pregenerate jpg thumbnails |
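as a rough usage sketch for the archive rows above (the server address, folder, and password are made-up placeholders, not part of the API):

```python
import urllib.request

base = "http://127.0.0.1:3923/music/"  # placeholder server + folder
pw = "hunter2"                         # placeholder password

# grab the folder as a zip; swap "zip" for "tar=gz", "tar=xz:9" etc. for other formats
url = base + "?zip&pw=" + pw
with urllib.request.urlopen(url) as resp, open("music.zip", "wb") as f:
    f.write(resp.read())
```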
|
||||
@@ -173,6 +212,7 @@ authenticate using header `Cookie: cppwd=foo` or url param `&pw=foo`
|
||||
| method | params | body | result |
|
||||
|--|--|--|--|
|
||||
| PUT | | (binary data) | upload into file at URL |
|
||||
| PUT | `?j` | (binary data) | ...and reply with json |
|
||||
| PUT | `?ck` | (binary data) | upload without checksum gen (faster) |
|
||||
| PUT | `?ck=md5` | (binary data) | return md5 instead of sha512 |
|
||||
| PUT | `?gz` | (binary data) | compress with gzip and write into file at URL |
|
||||
@@ -197,6 +237,7 @@ upload modifiers:
|
||||
| http-header | url-param | effect |
|
||||
|--|--|--|
|
||||
| `Accept: url` | `want=url` | return just the file URL |
|
||||
| `Accept: json` | `want=json` | return upload info as json; same as `?j` |
|
||||
| `Rand: 4` | `rand=4` | generate random filename with 4 characters |
|
||||
| `Life: 30` | `life=30` | delete file after 30 seconds |
|
||||
| `CK: no` | `ck` | disable serverside checksum (maybe faster) |
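a minimal sketch of a PUT upload combining a couple of the modifiers above; the host, folder, filename, and password are placeholders:

```python
import urllib.request

# upload into /inc/notes.txt on a placeholder server, authenticating with ?pw=
req = urllib.request.Request(
    "http://127.0.0.1:3923/inc/notes.txt?pw=hunter2",
    data=b"hello copyparty\n",
    method="PUT",
    headers={
        "Accept": "url",  # reply with just the file URL
        "Life": "30",     # delete the file after 30 seconds
    },
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # the URL of the uploaded file
```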
|
||||
@@ -301,6 +342,7 @@ python3 -m venv .venv
|
||||
. .venv/bin/activate
|
||||
pip install jinja2 strip_hints # MANDATORY
|
||||
pip install argon2-cffi # password hashing
|
||||
pip install pyzmq # send 0mq from hooks
|
||||
pip install mutagen # audio metadata
|
||||
pip install pyftpdlib # ftp server
|
||||
pip install partftpy # tftp server
|
||||
|
||||
22
docs/idp.md
@@ -20,3 +20,25 @@ this means that, if an IdP volume is located inside a folder that is readable by
|
||||
and likewise -- if the IdP volume is inside a folder that is only accessible by certain users, but the IdP volume is configured to allow access from unauthenticated users, then the contents of the volume will NOT be accessible until it is revived
|
||||
|
||||
until this limitation is fixed (if ever), it is recommended to place IdP volumes inside an appropriate parent volume, so they can inherit acceptable permissions until their revival; see the "strategic volumes" at the bottom of [./examples/docker/idp/copyparty.conf](./examples/docker/idp/copyparty.conf)
|
||||
|
||||
|
||||
## Connecting webdav clients
|
||||
|
||||
If you only use IdP and want to connect via rclone, you have to adapt a few things.
|
||||
The following steps are for Authelia, but should be easily adaptable to other IdPs and clients. There may be better/smarter ways to do this, but this is a known working solution.
|
||||
|
||||
1. Add a rule for your domain and set it to one factor
|
||||
```
|
||||
rules:
|
||||
- domain: 'sub.domain.tld'
|
||||
policy: one_factor
|
||||
```
|
||||
2. After you have created your rclone config, find its location with `rclone config file` and add the `headers` option to it; replace the placeholder string with your `username:password` encoded as base64 (a quick way to generate it is sketched below the config example). Make sure to set the right url, otherwise you will get a 401 from copyparty.
|
||||
```
|
||||
[servername-dav]
|
||||
type = webdav
|
||||
url = https://sub.domain.tld/u/user/priv/
|
||||
vendor = owncloud
|
||||
pacer_min_sleep = 0.01ms
|
||||
headers = Proxy-Authorization,basic base64encodedstring==
|
||||
```
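if you don't have a base64 tool handy, this small sketch prints the value for the `headers` line (the `user:password` below is a placeholder for your actual credentials):

```python
import base64

cred = "user:password"  # placeholder credentials
print("headers = Proxy-Authorization,basic " + base64.b64encode(cred.encode()).decode())
```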
|
||||
@@ -259,6 +259,12 @@ for d in /usr /var; do find $d -type f -size +30M 2>/dev/null; done | while IFS=
|
||||
for f in {0..255}; do echo $f; truncate -s 256M $f; b1=$(printf '%02x' $f); for o in {0..255}; do b2=$(printf '%02x' $o); printf "\x$b1\x$b2" | dd of=$f bs=2 seek=$((o*1024*1024)) conv=notrunc 2>/dev/null; done; done
|
||||
# create 6.06G file with 16 bytes of unique data at start+end of each 32M chunk
|
||||
sz=6509559808; truncate -s $sz f; csz=33554432; sz=$((sz/16)); step=$((csz/16)); ofs=0; while [ $ofs -lt $sz ]; do dd if=/dev/urandom of=f bs=16 count=2 seek=$ofs conv=notrunc iflag=fullblock; [ $ofs = 0 ] && ofs=$((ofs+step-1)) || ofs=$((ofs+step)); done
|
||||
# same but for chunksizes 16M (3.1G), 24M (4.1G), 48M (128.1G)
|
||||
sz=3321225472; csz=16777216;
|
||||
sz=4394967296; csz=25165824;
|
||||
sz=6509559808; csz=33554432;
|
||||
sz=138438953472; csz=50331648;
|
||||
f=csz-$csz; truncate -s $sz $f; sz=$((sz/16)); step=$((csz/16)); ofs=0; while [ $ofs -lt $sz ]; do dd if=/dev/urandom of=$f bs=16 count=2 seek=$ofs conv=notrunc iflag=fullblock; [ $ofs = 0 ] && ofs=$((ofs+step-1)) || ofs=$((ofs+step)); done
|
||||
|
||||
# py2 on osx
|
||||
brew install python@2
|
||||
|
||||
@@ -48,6 +48,20 @@ and if you want to have a monospace font in the fancy markdown editor, do this:
|
||||
NB: `<textarea id="mt">` and `<div id="mtr">` in the regular markdown editor must have the same font; none of the suggestions above will cause any issues but keep it in mind if you're getting creative
|
||||
|
||||
|
||||
# boring loader spinner
|
||||
|
||||
replace the 🌲 with a spinning circle using commandline args:
|
||||
|
||||
`--spinner ',padding:0;border-radius:9em;border:.2em solid #444;border-top:.2em solid #fc0'`
|
||||
|
||||
or config file example:
|
||||
|
||||
```yaml
|
||||
[global]
|
||||
spinner: ,padding:0;border-radius:9em;border:.2em solid #444;border-top:.2em solid #fc0
|
||||
```
|
||||
|
||||
|
||||
# `<head>`
|
||||
|
||||
to add stuff to the html `<head>`, for example a css `<link>` or `<meta>` tags, use either the global-option `--html-head` or the volflag `html_head`
|
||||
@@ -61,6 +75,8 @@ if the value starts with `%` it will assume a jinja2 template and expand it; the
|
||||
|
||||
add your own translations by using the english or norwegian one from `browser.js` as a template
|
||||
|
||||
> ⚠ Please do not contribute translations to [RTL (Right-to-Left) languages](https://en.wikipedia.org/wiki/Right-to-left_script) for now; the javascript is [not ready](https://github.com/9001/copyparty/blob/hovudstraum/docs/rice/rtl.patch) to deal with it
|
||||
|
||||
the easy way is to open up and modify `browser.js` in your own installation; depending on how you installed copyparty it might be named `browser.js.gz` instead, in which case just decompress it, restart copyparty, and start editing it anyways
|
||||
|
||||
you will be delighted to see inline html in the translation strings; to help prevent syntax errors, there is [a very jank linux script](https://github.com/9001/copyparty/blob/hovudstraum/scripts/tlcheck.sh) which is slightly better than nothing -- just beware the false-positives, so even if it complains it's not necessarily wrong/bad
|
||||
|
||||
79
docs/rice/rtl.patch
Normal file
@@ -0,0 +1,79 @@
|
||||
RTL support is not planned, but it would be
|
||||
something like this (just a whole lot more)
|
||||
|
||||
diff --git a/copyparty/web/browser.css b/copyparty/web/browser.css
|
||||
index e66279d4..2888be56 100644
|
||||
--- a/copyparty/web/browser.css
|
||||
+++ b/copyparty/web/browser.css
|
||||
@@ -653,12 +653,10 @@ a:hover {
|
||||
.s0:after,
|
||||
.s1:after {
|
||||
content: '⌄';
|
||||
- margin-left: -.15em;
|
||||
}
|
||||
.s0r:after,
|
||||
.s1r:after {
|
||||
content: '⌃';
|
||||
- margin-left: -.15em;
|
||||
}
|
||||
.s0:after,
|
||||
.s0r:after {
|
||||
@@ -668,6 +666,19 @@ a:hover {
|
||||
.s1r:after {
|
||||
color: var(--sort-2);
|
||||
}
|
||||
+.ltr .s0:after,
|
||||
+.ltr .s1:after,
|
||||
+.ltr .s0r:after,
|
||||
+.ltr .s1r:after {
|
||||
+ margin-left: -.15em;
|
||||
+}
|
||||
+.rtl .s0:after,
|
||||
+.rtl .s1:after,
|
||||
+.rtl .s0r:after,
|
||||
+.rtl .s1r:after {
|
||||
+ margin-left: -.5em;
|
||||
+ padding: 0 .25em 0 0;
|
||||
+}
|
||||
#files thead th:after {
|
||||
margin-right: -.5em;
|
||||
}
|
||||
diff --git a/copyparty/web/browser.js b/copyparty/web/browser.js
|
||||
index 33965a70..bf425cc7 100644
|
||||
--- a/copyparty/web/browser.js
|
||||
+++ b/copyparty/web/browser.js
|
||||
@@ -1797,9 +1797,13 @@ var Ls = {
|
||||
|
||||
"lang_set": "刷新以使更改生效?",
|
||||
},
|
||||
+ "foo": {
|
||||
+ "tt": "Foobar",
|
||||
+ "rtl": "rtl",
|
||||
+ },
|
||||
};
|
||||
|
||||
-var LANGS = ["eng", "nor", "chi"];
|
||||
+var LANGS = ["eng", "nor", "chi", "foo"];
|
||||
|
||||
if (window.langmod)
|
||||
langmod();
|
||||
@@ -1819,7 +1823,7 @@ for (var a = 0; a < LANGS.length; a++) {
|
||||
t2 = Ls[LANGS[i2]];
|
||||
|
||||
for (var k in t1)
|
||||
- if (!t2[k]) {
|
||||
+ if (!t2[k] && k != 'rtl') {
|
||||
console.log("E missing TL", LANGS[i2], k);
|
||||
t2[k] = t1[k];
|
||||
}
|
||||
@@ -1829,6 +1833,10 @@ for (var a = 0; a < LANGS.length; a++) {
|
||||
if (!has(LANGS, lang))
|
||||
alert('unsupported --lang "' + lang + '" specified in server args;\nplease use one of these: ' + LANGS);
|
||||
|
||||
+if (L.rtl)
|
||||
+ document.documentElement.setAttribute('dir', L.rtl);
|
||||
+document.documentElement.className = L.rtl || 'ltr';
|
||||
+
|
||||
modal.load();
|
||||
|
||||
|
||||
140
docs/synology-dsm.md
Normal file
@@ -0,0 +1,140 @@
|
||||
# running copyparty on synology dsm nas
|
||||
|
||||

|
||||
|
||||
this has been tested on a `Synology ds218+` NAS with 1 SHR storage-pool and 1 volume, but the same steps should work in more advanced setups too
|
||||
|
||||
verified on DSM 7.1 and 7.2, but not on 6.x since my flea-market ds218+ refuses to install it for some reason
|
||||
|
||||
|
||||
|
||||
# ok let's go
|
||||
|
||||
go to controlpanel -> shared-folders, and create the following shared-folders if you don't already have appropriate ones:
|
||||
|
||||
* a shared-folder for configuration files, preferably on SSD if you have one
|
||||
|
||||
* one or more shared-folders for your actual data/media to share
|
||||
|
||||
(btw, when you create the shared-folders, it asks whether you want to enable data checksum and file compression, i would recommend both)
|
||||
|
||||
the rest of this doc assumes that these two shared-folders are named `configs` and `media1`, and that you made an empty folder inside the `configs` shared-folder named `cpp`
|
||||
|
||||
* your copyparty config file (see below) should be named `something.conf` directly inside that cpp folder, for example `/configs/cpp/copyparty.conf`
|
||||
|
||||
* during first start, copyparty will create a folder there named `copyparty`, in other words `/configs/cpp/copyparty` which you should leave alone; that's where copyparty stores its indexes and other runtime config
|
||||
|
||||
|
||||
|
||||
## recommended copyparty config
|
||||
|
||||
open the Package Center and install `Text Editor` (by Synology Inc.) to create and edit your copyparty config:
|
||||
|
||||

|
||||
|
||||
* note the `copyparty` and `hist` folders in that screenshot which are autogenerated by copyparty and to be left alone
|
||||
|
||||
```yaml
|
||||
[global]
|
||||
e2d, e2t # remember uploads & read media tags
|
||||
rss, daw, ver # some other nice-to-have features
|
||||
#dedup # you may want this, or maybe not
|
||||
hist: /cfg/hist # don't pollute the shared-folder
|
||||
name: synology # shows in the browser, can be anything
|
||||
|
||||
[accounts]
|
||||
ed: wark # username ed, password wark
|
||||
|
||||
[/] # share the following at the webroot:
|
||||
/w # the "/w" docker-volume (the shared-folder)
|
||||
accs:
|
||||
A: ed # give Admin to username ed
|
||||
|
||||
# hide the synology system files by creating a hidden volume
|
||||
[/@eaDir]
|
||||
/w/@eaDir
|
||||
```
|
||||
|
||||
if you ever change the copyparty config file, then [restart the container](https://ocv.me/copyparty/doc/pics/dsm71-02.png) to make the changes take effect
|
||||
|
||||
okay now continue with one of these:
|
||||
|
||||
* [DSM v7.2 or newer](#dsm-v72-or-newer)
|
||||
|
||||
* [all older DSM versions](#dsm-v6x-dsm-v71x-or-older)
|
||||
|
||||
|
||||
|
||||
# DSM v7.2 or newer
|
||||
|
||||
`Docker` was replaced by `Container Manager` in DSM v7.2 but they're almost the same thing;
|
||||
|
||||
* open the `Package Center` and install the [Container Manager package](https://ocv.me/copyparty/doc/pics/dsm72-01.png) by `Docker Inc.`
|
||||
* open the `Container Manager` app
|
||||
* go to the `Registry` tab and search for `copyparty`
|
||||
* [doubleclick copyparty/ac](https://ocv.me/copyparty/doc/pics/dsm72-02.png) and keep the [default `latest`](https://ocv.me/copyparty/doc/pics/dsm72-03.png) when it asks you which tag to use
|
||||
* switch to the `Container` tab and click `Create`
|
||||
* [choose `copyparty/ac:latest`](https://ocv.me/copyparty/doc/pics/dsm72-04.png) and click `Next`
|
||||
|
||||
finally, in the [Advanced Settings](https://ocv.me/copyparty/doc/pics/dsm72-05.png) window,
|
||||
|
||||
* under `Port Settings`, type `3923` into the `Local Port` textbox
|
||||
* click `Add Folder` and select `/configs/cpp` on your nas (the `cpp` folder in the `configs` shared-folder), and change `Mount path` to `/cfg`
|
||||
* click `Add Folder` and select `/media1` on your nas (the shared-folder that copyparty can share in its web-UI) and change `Mount path` to `/w`
|
||||
* if you are adding multiple shared-folders for media, then the `Mount path` of the 2nd folder should be something like `/w/share2` or `/w/music`
|
||||
|
||||
copyparty will launch and become available at http://192.168.1.9:3923/ (assuming `192.168.1.9` is your nas ip)
|
||||
|
||||
|
||||
# DSM v6.x, DSM v7.1.x or older
|
||||
|
||||
if you're using DSM 7.1 or older, then you don't have [Container Manager](https://www.synology.com/en-global/dsm/packages/ContainerManager) yet and you'll have to use [Docker](https://www.synology.com/en-global/dsm/packages/Docker?os_ver=6.2&search=docker) instead. Here's how:
|
||||
|
||||
* open the `Package Center` and install the [Docker package](https://ocv.me/copyparty/doc/pics/dsm71-01.png) by `Docker Inc.`
|
||||
* open the `Docker` app
|
||||
* go to the `Registry` tab and search for `copyparty`
|
||||
* [doubleclick copyparty/ac](https://ocv.me/copyparty/doc/pics/dsm71-02.png) and keep the [default `latest`](https://ocv.me/copyparty/doc/pics/dsm71-03.png) when it asks you which tag to use
|
||||
* switch to the `Container` tab and click `Create`
|
||||
* [choose `copyparty/ac:latest`](https://ocv.me/copyparty/doc/pics/dsm71-04.png) and `Next`
|
||||
* in the [Network](https://ocv.me/copyparty/doc/pics/dsm71-05.png) window, keep the default `Use the selected networks: [x] bridge`
|
||||
* in the [General Settings](https://ocv.me/copyparty/doc/pics/dsm71-06.png) window, just keep everything default (in other words, everything disabled)
|
||||
* in the [Port Settings](https://ocv.me/copyparty/doc/pics/dsm71-07.png) window, change `Local Port` to `3923` (or choose something else, but it cannot be the default `Auto`)
|
||||
|
||||
finally, in the [Volume Settings](https://ocv.me/copyparty/doc/pics/dsm71-08.png) window, add a docker volume for copyparty config, and at least one volume for media-files which copyparty can share in its web-UI
|
||||
|
||||
* click `Add Folder` and select `/configs/cpp` on your nas (the `cpp` folder in the `configs` shared-folder), and change `Mount path` to `/cfg`
|
||||
* click `Add Folder` and select `/media1` on your nas (the shared-folder that copyparty can share in its web-UI) and change `Mount path` to `/w`
|
||||
* if you are adding multiple shared-folders for media, then the `Mount path` of the 2nd folder should be something like `/w/share2` or `/w/music`
|
||||
|
||||
copyparty will launch and become available at http://192.168.1.9:3923/ (assuming `192.168.1.9` is your nas ip)
|
||||
|
||||
|
||||
# misc notes
|
||||
|
||||
note that if you only want to share some folders inside your data volume, and not all of it, then you can either give copyparty the whole shared-folder anyways and control/restrict access in the copyparty config file (recommended), or you can add each folder as a new docker volume (not as flexible)
|
||||
|
||||
|
||||
|
||||
## regarding ram usage
|
||||
|
||||
the ram usage indicator in both `Docker` and `Container Manager` is misleading because it also counts the kernel disk cache which makes the number insanely high -- the synology resource monitor shows the correct values, usually less than 100 MiB
|
||||
|
||||
to see the actual memory usage by copyparty, see `Resource Monitor` -> `Task Manager` -> `Processes` and look at the `Private Memory` of `python3` which is probably copyparty
|
||||
|
||||
|
||||
|
||||
## regarding performance
|
||||
|
||||
when uploading files to the synology nas with the respective web-UIs,
|
||||
|
||||
* `File Station` does about 16 MiB/s,
|
||||
|
||||
* `Synology Drive Server` does about 50 MiB/s; deceptively fast upload speeds at first, but once the file is fully uploaded there is a lengthy "processing" step at the end, reducing the average speed to about 50% of the initial
|
||||
|
||||
* copyparty maxes the HDD write-speeds, 99 MiB/s
|
||||
|
||||
when uploading to the synology nas over webdav,
|
||||
|
||||
* `WebDAV Server` by `Synology Inc.` in the Package Center does 86 MiB/s
|
||||
|
||||
* copyparty does 79 MiB/s; the NAS CPU is a bottleneck because copyparty verifies the upload checksum while `WebDAV Server` doesn't

@@ -279,7 +279,7 @@ symbol legend,
| per-file passwords | █ | | | █ | █ | | █ | | █ | | | | █ |
| unmap subfolders | █ | | █ | | | | █ | | | █ | ╱ | • | |
| index.html blocks list | ╱ | | | | | | █ | | | • | | | |
| write-only folders | █ | | █ | | | | | | | | █ | █ | |
| write-only folders | █ | | █ | | █ | | | | | | █ | █ | |
| files stored as-is | █ | █ | █ | █ | | █ | █ | | | █ | █ | █ | █ |
| file versioning | | | | █ | █ | | | | | | | | |
| file encryption | | | | █ | █ | █ | | | | | | █ | |

@@ -507,7 +507,6 @@ symbol legend,
* ⚠️ uploads not resumable / accelerated / integrity-checked
* ⚠️ on cloudflare: max upload size 100 MiB
* ⚠️ uploading small files is slow; `4.7` files per sec (copyparty does `670`/sec, 140x faster)
* ⚠️ no write-only / upload-only folders
* ⚠️ big folders cannot be zip-downloaded
* ⚠️ http/webdav only; no ftp, zeroconf
* ⚠️ less awesome music player

@@ -593,6 +592,7 @@ symbol legend,
* ✅ user signup
* ✅ command runner / remote shell
* ✅ more efficient; can handle around twice as much simultaneous traffic
* note: keep an eye on [gtsteffaniak's fork](https://github.com/gtsteffaniak/filebrowser)

## [filegator](https://github.com/filegator/filegator)
* php; cross-platform (windows, linux, mac)

@@ -52,6 +52,7 @@ ftpd = ["pyftpdlib"]
ftps = ["pyftpdlib", "pyopenssl"]
tftpd = ["partftpy>=0.4.0"]
pwhash = ["argon2-cffi"]
zeromq = ["pyzmq"]

[project.scripts]
copyparty = "copyparty.__main__:main"

@@ -3,7 +3,7 @@ WORKDIR /z
ENV ver_asmcrypto=c72492f4a66e17a0e5dd8ad7874de354f3ccdaa5 \
ver_hashwasm=4.12.0 \
ver_marked=4.3.0 \
ver_dompf=3.2.3 \
ver_dompf=3.2.4 \
ver_mde=2.18.0 \
ver_codemirror=5.65.18 \
ver_fontawesome=5.13.0 \

@@ -9,7 +9,7 @@ ENV XDG_CONFIG_HOME=/cfg

RUN apk --no-cache add !pyc \
tzdata wget \
py3-jinja2 py3-argon2-cffi py3-pillow \
py3-jinja2 py3-argon2-cffi py3-pyzmq py3-pillow \
ffmpeg

COPY i/dist/copyparty-sfx.py innvikler.sh ./

@@ -12,7 +12,8 @@ COPY i/bin/mtag/audio-bpm.py /mtag/
COPY i/bin/mtag/audio-key.py /mtag/
RUN apk add -U !pyc \
tzdata wget \
py3-jinja2 py3-argon2-cffi py3-pillow py3-pip py3-cffi \
py3-jinja2 py3-argon2-cffi py3-pyzmq py3-pillow \
py3-pip py3-cffi \
ffmpeg \
vips-jxl vips-heif vips-poppler vips-magick \
py3-numpy fftw libsndfile \

@@ -9,7 +9,8 @@ ENV XDG_CONFIG_HOME=/cfg

RUN apk add -U !pyc \
tzdata wget \
py3-jinja2 py3-argon2-cffi py3-pillow py3-pip py3-cffi \
py3-jinja2 py3-argon2-cffi py3-pyzmq py3-pillow \
py3-pip py3-cffi \
ffmpeg \
vips-jxl vips-heif vips-poppler vips-magick \
&& apk add -t .bd \

@@ -57,6 +57,8 @@ most editions support `x86`, `x86_64`, `armhf`, `aarch64`, `ppc64le`, `s390x`
* `dj` doesn't run on `ppc64le`, `s390x`, `armhf`
* `iv` doesn't run on `ppc64le`, `s390x`

> NOTE: the following editions are unfinished experiments, and not published anywhere: djd djf djff dju


## detecting bpm and musical key

@@ -1,6 +1,13 @@
#!/bin/ash
set -ex

# patch musl cve https://www.openwall.com/lists/musl/2025/02/13/1
apk add -U grep
grep -aobRE 'euckr[^\w]ksc5601[^\w]ksx1001[^\w]cp949[^\w]' /lib/ | awk -F: '$2>999{printf "%d %s\n",$2,$1}' | while read ofs fn
do printf -- '-----\0-------\0-------\0-----\0' | dd bs=1 iflag=fullblock conv=notrunc seek=$ofs of=$fn; done 2>&1 |
tee /dev/stderr | grep -E copied, | wc -l | grep '^2$'
apk del grep

# cleanup for flavors with python build steps (dj/iv)
rm -rf /var/cache/apk/* /root/.cache

@@ -32,6 +32,7 @@ def readclip():
def cnv(src):
hostname = str(socket.gethostname()).split(".")[0]

yield '<!DOCTYPE html>'
yield '<html style="background:#222;color:#fff"><body>'
skip_sfx = False
in_sfx = 0

@@ -27,7 +27,7 @@ ac96786e5d35882e0c5b724794329c9125c2b86ae7847f17acfc49f0d294312c6afc1c3f248655de
6df21f0da408a89f6504417c7cdf9aaafe4ed88cfa13e9b8fa8414f604c0401f885a04bbad0484dc51a29284af5d1548e33c6cc6bfb9896d9992c1b1074f332d MarkupSafe-3.0.2-cp312-cp312-win_amd64.whl
8a6e2b13a2ec4ef914a5d62aad3db6464d45e525a82e07f6051ed10474eae959069e165dba011aefb8207cdfd55391d73d6f06362c7eb247b08763106709526e mutagen-1.47.0-py3-none-any.whl
0203ec2551c4836696cfab0b2c9fff603352f03fa36e7476e2e1ca7ec57a3a0c24bd791fcd92f342bf817f0887854d9f072e0271c643de4b313d8c9569ba8813 packaging-24.1-py3-none-any.whl
2be320b4191f208cdd6af183c77ba2cf460ea52164ee45ac3ff17d6dfa57acd9deff016636c2dd42a21f4f6af977d5f72df7dacf599bebcf41757272354d14c1 pillow-10.4.0-cp312-cp312-win_amd64.whl
12d7921dc7dfd8a4b0ea0fa2bae8f1354fcdd59ece3d7f4e075aed631f9ba791dc142c70b1ccd1e6287c43139df1db26bd57a7a217c8da3a77326036495cdb57 pillow-11.1.0-cp312-cp312-win_amd64.whl
f0463895e9aee97f31a2003323de235fed1b26289766dc0837261e3f4a594a31162b69e9adbb0e9a31e2e2d4b5f25c762ed1669553df7dc89a8ba4f85d297873 pyinstaller-6.11.1-py3-none-win_amd64.whl
d550a0a14428386945533de2220c4c2e37c0c890fc51a600f626c6ca90a32d39572c121ec04c157ba3a8d6601cb021f8433d871b5c562a3d342c804fffec90c1 pyinstaller_hooks_contrib-2024.11-py3-none-any.whl
0f623c9ab52d050283e97a986ba626d86b04cd02fa7ffdf352740576940b142b264709abadb5d875c90f625b28103d7210b900e0d77f12c1c140108bd2a159aa python-3.12.8-amd64.exe
17b64ff6744004a05d475c8f6de3e48286db4069afad4cae690f83b3555f8e35ceafb210eeba69a11e983d0da3001099de284b6696ed0f1bf9cd791938a7f2cd python-3.12.9-amd64.exe

@@ -38,10 +38,10 @@ fns=(
MarkupSafe-2.1.5-cp312-cp312-win_amd64.whl
mutagen-1.47.0-py3-none-any.whl
packaging-24.1-py3-none-any.whl
pillow-10.4.0-cp312-cp312-win_amd64.whl
pillow-11.1.0-cp312-cp312-win_amd64.whl
pyinstaller-6.10.0-py3-none-win_amd64.whl
pyinstaller_hooks_contrib-2024.8-py3-none-any.whl
python-3.12.7-amd64.exe
python-3.12.9-amd64.exe
)
[ $w7 ] && fns+=(
future-1.0.0-py3-none-any.whl

@@ -20,7 +20,7 @@ cat $f | awk '
o{next}
/^#/{s=1;rs=0;pr()}
/^#* *(nix package)/{rs=1}
/^#* *(themes|install on android|dev env setup|just the sfx|complete release|optional gpl stuff|nixos module)|```/{s=rs}
/^#* *(themes|install on android|dev env setup|just the sfx|complete release|optional gpl stuff|nixos module|reverse-proxy perf)|```/{s=rs}
/^#/{
lv=length($1);
sub(/[^ ]+ /,"");

setup.py
@@ -144,6 +144,7 @@ args = {
"ftps": ["pyftpdlib", "pyopenssl"],
"tftpd": ["partftpy>=0.4.0"],
"pwhash": ["argon2-cffi"],
"zeromq": ["pyzmq"],
},
"entry_points": {"console_scripts": ["copyparty = copyparty.__main__:main"]},
"scripts": ["bin/partyfuse.py", "bin/u2c.py"],

srv/ping.html
@@ -0,0 +1,199 @@
<!DOCTYPE html><html><head>
<meta charset="utf-8">
<title>partyping</title>
<!--
// ping.html - when icmp is not an option
// 2014, ed <irc.rizon.net>, MIT-licensed
// https://github.com/9001/copyparty/blob/hovudstraum/contrib/ping.html
-->
<style>
html {
color: #000;
background: #eee;
font-family: sans-serif;
overflow-x: hidden;
overflow-y: scroll;
}
#wrap {
margin: 0 auto;
max-width: 30em;
text-align: center;
}
table {
font-family: monospace, monospace;
margin: 1em auto;
font-size: 1.5em;
}
td {
padding: .2em .4em;
}
.conf td {
text-align: left;
}
.conf td:first-child {
text-align: right;
}
.stats td {
padding: .2em .7em;
border: 1px solid #bbb;
border-width: 0 1px 1px 0;
}
#start {
width: 100%;
}
#log {
margin: 2em;
font-size: .9em;
text-align: left;
}
input {
font: inherit;
}
</style></head><body><div id="wrap">
<table class="conf">
<tr>
<td>interval (msec):</td>
<td><input id="delay" type="text" value="1000" size="7" /></td>
</tr>
<tr>
<td>num pings:</td>
<td><input id="more" type="text" value="100" size="7" /></td>
</tr>
<tr>
<td> </td>
<td><input id="start" type="button" value="start" onclick="okgo();return false" /></td>
</table>
<table class="stats">
<tr>
<td>min</td>
<td>avg</td>
<td>med</td>
<td>max</td>
<td>mdev</td>
</tr>
<tr>
<td id="min">x</td>
<td id="avg">x</td>
<td id="med">x</td>
<td id="max">x</td>
<td id="mdv">x</td>
</tr>
</table>
<div id="log">
Log goes here
</div>
</div>
<script>

// ping.html - when icmp is not an option
// 2014, ed <irc.rizon.net>, MIT-licensed
// https://github.com/9001/copyparty/blob/hovudstraum/contrib/ping.html

function ebi(k) {
return document.getElementById(k);
}
var log = [],
srt = [],
delay,
t0 = -1,
nbad = 0,
min = 9999999,
max = 0,
sum = 0,
sum2 = 0,
omin = ebi('min'),
omax = ebi('max'),
oavg = ebi('avg'),
omed = ebi('med'),
omdv = ebi('mdv'),
olog = ebi('log');

function insert(t, v) {
var lo = 0, hi = t.length;
while (lo < hi) {
var mid = Math.floor((lo+hi)/2);
if (t[mid] < v)
lo = mid + 1;
else
hi = mid;
}
t.splice(lo, 0, v);
}
function f2f(val, nd) {
val = (parseFloat(val) * Math.pow(10, nd)).toFixed(0).split('.')[0];
return nd ? (val.slice(0, -nd) || '0') + '.' + val.slice(-nd) : val;
}
function okgo() {
if (t0 < 0)
ping();
}
function ping() {
var xh,
more = parseInt(ebi('more').value) - 1,
stats = [omin.innerHTML, omed.innerHTML, omax.innerHTML, omdv.innerHTML];

if (more < 0)
return;

if (more > 499)
more = 499;

delay = parseInt(ebi('delay').value);
if (delay < 100)
delay = 100;

if (window.XMLHttpRequest)
xh = new XMLHttpRequest();
else
xh = new ActiveXObject("Microsoft.XMLHTTP");

xh.onreadystatechange = function() {
if (xh.readyState != 4)
return;

var t = new Date().getTime() - t0,
ok = xh.status == 200,
rsp = xh.responseText;

if (ok)
ok = rsp.indexOf('o7') === 0;

if (!ok)
nbad++;

sum += t;
sum2 += t * t;
log.push(t);
insert(srt, t);

if (min > t)
min = t;

if (max < t)
max = t;

var avg = sum / log.length,
smean = sum2 / log.length,
med = srt[Math.floor(srt.length/2)],
mdev = Math.sqrt(smean-(avg*avg));

omin.innerHTML = min;
omax.innerHTML = max;
oavg.innerHTML = Math.round(avg);
omed.innerHTML = med;
omdv.innerHTML = f2f(mdev, 2);
olog.innerHTML = log.join(', ') + '<br /><br />' + srt.join(', ') + (
nbad ? "<br /><br />invalid/corrupted ping responses: " + nbad : '');
setTimeout(ping, delay);
};
t0 = new Date().getTime();
stats.push(t0);
xh.open("GET", "/?setck=a=x&ping="+stats.join(","), true);
xh.send();

ebi('more').value = more;
olog.innerHTML += '<br /><br />ping...';
}
</script>
</body></html>

@@ -20,7 +20,8 @@ def _parse(txt):


class TestDXML(unittest.TestCase):
def test1(self):
def test_qbe(self):
# allowed by default; verify that we stopped it
txt = r"""<!DOCTYPE qbe [
<!ENTITY a "nice_bakuretsu">
]>
@@ -28,7 +29,8 @@ class TestDXML(unittest.TestCase):
_parse(txt)
ET.fromstring(txt)

def test2(self):
def test_ent_file(self):
# NOT allowed by default; should still be blocked
txt = r"""<!DOCTYPE ext [
<!ENTITY ee SYSTEM "file:///bin/bash">
]>
@@ -40,6 +42,25 @@ class TestDXML(unittest.TestCase):
except ET.ParseError:
pass

def test_ent_ext(self):
# NOT allowed by default; should still be blocked
txt = r"""<!DOCTYPE ext [
<!ENTITY ee SYSTEM "http://example.com/a.xml">
]>
<root>&ee;</root>"""
_parse(txt)

def test_dtd(self):
# allowed by default; verify that we stopped it
txt = r"""<!DOCTYPE d SYSTEM "a.dtd">
<root>a</root>"""
_parse(txt)
ET.fromstring(txt)

##
## end of negative/security tests; the rest is functional
##

def test3(self):
txt = r"""<?xml version="1.0" ?>
<propfind xmlns="DAV:">

@@ -141,19 +141,19 @@ class Cfg(Namespace):
ex = "hash_mt hsortn safe_dedup srch_time u2abort u2j u2sz"
ka.update(**{k: 1 for k in ex.split()})

ex = "au_vol dl_list mtab_age reg_cap s_thead s_tbody th_convt"
ex = "au_vol dl_list mtab_age reg_cap s_thead s_tbody th_convt ups_who zip_who"
ka.update(**{k: 9 for k in ex.split()})

ex = "db_act k304 loris no304 re_maxage rproxy rsp_jtr rsp_slp s_wr_slp snap_wri theme themes turbo"
ka.update(**{k: 0 for k in ex.split()})

ex = "ah_alg bname chpw_db doctitle df exit favico idp_h_usr ipa html_head lg_sbf log_fk md_sbf name og_desc og_site og_th og_title og_title_a og_title_v og_title_i shr tcolor textfiles unlist vname xff_src R RS SR"
ex = "ah_alg bname chpw_db doctitle df exit favico idp_h_usr ipa html_head lg_sba lg_sbf log_fk md_sba md_sbf name og_desc og_site og_th og_title og_title_a og_title_v og_title_i shr tcolor textfiles unlist vname xff_src R RS SR"
ka.update(**{k: "" for k in ex.split()})

ex = "ban_403 ban_404 ban_422 ban_pw ban_url"
ex = "ban_403 ban_404 ban_422 ban_pw ban_url spinner"
ka.update(**{k: "no" for k in ex.split()})

ex = "grp on403 on404 xac xad xar xau xban xbc xbd xbr xbu xiu xm"
ex = "ext_th grp on403 on404 xac xad xar xau xban xbc xbd xbr xbu xiu xm"
ka.update(**{k: [] for k in ex.split()})

ex = "exp_lg exp_md"