Compare commits

59 Commits

| SHA1 |
|---|
| b3cecabca3 |
| 662541c64c |
| 225bd80ea8 |
| 85e54980cc |
| a19a0fa9f3 |
| 9bb6e0dc62 |
| 15ddcf53e7 |
| 6b54972ec0 |
| 0219eada23 |
| 8916bce306 |
| 99edba4fd9 |
| 64de3e01e8 |
| 8222ccc40b |
| dc449bf8b0 |
| ef0ecf878b |
| 53f1e3c91d |
| eeef80919f |
| 987bce2182 |
| b511d686f0 |
| 132a83501e |
| e565ad5f55 |
| f955d2bd58 |
| 5953399090 |
| d26a944d95 |
| 50dac15568 |
| ac1e11e4ce |
| d749683d48 |
| 84e8e1ddfb |
| 6e58514b84 |
| 803e156509 |
| c06aa683eb |
| 6644ceef49 |
| bd3b3863ae |
| ffd4f9c8b9 |
| 760ff2db72 |
| f37187a041 |
| 1cdb170290 |
| d5de3f2fe0 |
| d76673e62d |
| c549f367c1 |
| 927c3bce96 |
| d75a2c77da |
| e6c55d7ff9 |
| 4c2cb26991 |
| dfe7f1d9af |
| 666297f6fb |
| 55a011b9c1 |
| 27aff12a1e |
| 9a87ee2fe4 |
| 0a9f4c6074 |
| 7219331057 |
| 2fd12a839c |
| 8c73e0cbc2 |
| 52e06226a2 |
| 452592519d |
| c9281f8912 |
| 36d6d29a0c |
| db6059e100 |
| aab57cb24b |
**README.md** (51 changed lines)

```diff
@@ -83,6 +83,8 @@ turn almost any device into a file server with resumable uploads/downloads using
 * [reverse-proxy](#reverse-proxy) - running copyparty next to other websites
 * [real-ip](#real-ip) - teaching copyparty how to see client IPs
 * [prometheus](#prometheus) - metrics/stats can be enabled
+* [other extremely specific features](#other-extremely-specific-features) - you'll never find a use for these
+  * [custom mimetypes](#custom-mimetypes) - change the association of a file extension
 * [packages](#packages) - the party might be closer than you think
   * [arch package](#arch-package) - now [available on aur](https://aur.archlinux.org/packages/copyparty) maintained by [@icxes](https://github.com/icxes)
   * [fedora package](#fedora-package) - does not exist yet
@@ -207,7 +209,7 @@ also see [comparison to similar software](./docs/versus.md)
 * upload
   * ☑ basic: plain multipart, ie6 support
   * ☑ [up2k](#uploading): js, resumable, multithreaded
-    * unaffected by cloudflare's max-upload-size (100 MiB)
+    * **no filesize limit!** ...unless you use Cloudflare, then it's 383.9 GiB
   * ☑ stash: simple PUT filedropper
   * ☑ filename randomizer
   * ☑ write-only folders
@@ -223,6 +225,7 @@ also see [comparison to similar software](./docs/versus.md)
 * ☑ [navpane](#navpane) (directory tree sidebar)
 * ☑ file manager (cut/paste, delete, [batch-rename](#batch-rename))
 * ☑ audio player (with [OS media controls](https://user-images.githubusercontent.com/241032/215347492-b4250797-6c90-4e09-9a4c-721edf2fb15c.png) and opus/mp3 transcoding)
+* ☑ play video files as audio (converted on server)
 * ☑ image gallery with webm player
 * ☑ textfile browser with syntax hilighting
 * ☑ [thumbnails](#thumbnails)
@@ -573,6 +576,7 @@ it does static images with Pillow / pyvips / FFmpeg, and uses FFmpeg for video f
 audio files are converted into spectrograms using FFmpeg unless you `--no-athumb` (and some FFmpeg builds may need `--th-ff-swr`)

 images with the following names (see `--th-covers`) become the thumbnail of the folder they're in: `folder.png`, `folder.jpg`, `cover.png`, `cover.jpg`
+* the order is significant, so if both `cover.png` and `folder.jpg` exist in a folder, it will pick the first matching `--th-covers` entry (`folder.jpg`)
 * and, if you enable [file indexing](#file-indexing), it will also try those names as dotfiles (`.folder.jpg` and so), and then fallback on the first picture in the folder (if it has any pictures at all)

 in the grid/thumbnail view, if the audio player panel is open, songs will start playing when clicked
@@ -580,6 +584,7 @@ in the grid/thumbnail view, if the audio player panel is open, songs will start

 enabling `multiselect` lets you click files to select them, and then shift-click another file for range-select
 * `multiselect` is mostly intended for phones/tablets, but the `sel` option in the `[⚙️] settings` tab is better suited for desktop use, allowing selection by CTRL-clicking and range-selection with SHIFT-click, all without affecting regular clicking
+  * the `sel` option can be made default globally with `--gsel` or per-volume with volflag `gsel`


 ## zip downloads
@@ -642,6 +647,7 @@ up2k has several advantages:
 * uploads resume if you reboot your browser or pc, just upload the same files again
 * server detects any corruption; the client reuploads affected chunks
 * the client doesn't upload anything that already exists on the server
+* no filesize limit unless imposed by a proxy, for example Cloudflare, which blocks uploads over 383.9 GiB
 * much higher speeds than ftp/scp/tarpipe on some internet connections (mainly american ones) thanks to parallel connections
 * the last-modified timestamp of the file is preserved

@@ -709,7 +715,7 @@ uploads can be given a lifetime, after which they expire / self-destruct

 the feature must be enabled per-volume with the `lifetime` [upload rule](#upload-rules) which sets the upper limit for how long a file gets to stay on the server

-clients can specify a shorter expiration time using the [up2k ui](#uploading) -- the relevant options become visible upon navigating into a folder with `lifetimes` enabled -- or by using the `life` [upload modifier](#write)
+clients can specify a shorter expiration time using the [up2k ui](#uploading) -- the relevant options become visible upon navigating into a folder with `lifetimes` enabled -- or by using the `life` [upload modifier](./docs/devnotes.md#write)

 specifying a custom expiration time client-side will affect the timespan in which unposts are permitted, so keep an eye on the estimates in the up2k ui

@@ -796,6 +802,7 @@ some hilights:
 * OS integration; control playback from your phone's lockscreen ([windows](https://user-images.githubusercontent.com/241032/233213022-298a98ba-721a-4cf1-a3d4-f62634bc53d5.png) // [iOS](https://user-images.githubusercontent.com/241032/142711926-0700be6c-3e31-47b3-9928-53722221f722.png) // [android](https://user-images.githubusercontent.com/241032/233212311-a7368590-08c7-4f9f-a1af-48ccf3f36fad.png))
 * shows the audio waveform in the seekbar
 * not perfectly gapless but can get really close (see settings + eq below); good enough to enjoy gapless albums as intended
+* videos can be played as audio, without wasting bandwidth on the video

 click the `play` link next to an audio file, or copy the link target to [share it](https://a.ocv.me/pub/demo/music/Ubiktune%20-%20SOUNDSHOCK%202%20-%20FM%20FUNK%20TERRROR!!/#af-1fbfba61&t=18) (optionally with a timestamp to start playing from, like that example does)
@@ -877,6 +884,8 @@ see [./srv/expand/](./srv/expand/) for usage and examples

 * files named `.prologue.html` / `.epilogue.html` will be rendered before/after directory listings unless `--no-logues`

+* files named `descript.ion` / `DESCRIPT.ION` are parsed and displayed in the file listing, or as the epilogue if nonstandard
+
 * files named `README.md` / `readme.md` will be rendered after directory listings unless `--no-readme` (but `.epilogue.html` takes precedence)

 * `README.md` and `*logue.html` can contain placeholder values which are replaced server-side before embedding into directory listings; see `--help-exp`
@@ -981,7 +990,7 @@ some recommended FTP / FTPS clients; `wark` = example password:

 ## webdav server

-with read-write support, supports winXP and later, macos, nautilus/gvfs
+with read-write support, supports winXP and later, macos, nautilus/gvfs ... a great way to [access copyparty straight from the file explorer in your OS](#mount-as-drive)

 click the [connect](http://127.0.0.1:3923/?hc) button in the control-panel to see connection instructions for windows, linux, macos

@@ -1445,8 +1454,9 @@ you can either:
 * or do location-based proxying, using `--rp-loc=/stuff` to tell copyparty where it is mounted -- has a slight performance cost and higher chance of bugs
   * if copyparty says `incorrect --rp-loc or webserver config; expected vpath starting with [...]` it's likely because the webserver is stripping away the proxy location from the request URLs -- see the `ProxyPass` in the apache example below

-some reverse proxies (such as [Caddy](https://caddyserver.com/)) can automatically obtain a valid https/tls certificate for you, and some support HTTP/2 and QUIC which could be a nice speed boost
-* **warning:** nginx-QUIC is still experimental and can make uploads much slower, so HTTP/2 is recommended for now
+some reverse proxies (such as [Caddy](https://caddyserver.com/)) can automatically obtain a valid https/tls certificate for you, and some support HTTP/2 and QUIC which *could* be a nice speed boost, depending on a lot of factors
+* **warning:** nginx-QUIC (HTTP/3) is still experimental and can make uploads much slower, so HTTP/1.1 is recommended for now
+  * depending on server/client, HTTP/1.1 can also be 5x faster than HTTP/2

 example webserver configs:
```
````diff
@@ -1526,6 +1536,28 @@ the following options are available to disable some of the metrics:
 note: the following metrics are counted incorrectly if multiprocessing is enabled with `-j`: `cpp_http_conns`, `cpp_http_reqs`, `cpp_sus_reqs`, `cpp_active_bans`, `cpp_total_bans`


+## other extremely specific features
+
+you'll never find a use for these:
+
+
+### custom mimetypes
+
+change the association of a file extension
+
+using commandline args, you can do something like `--mime gif=image/jif` and `--mime ts=text/x.typescript` (can be specified multiple times)
+
+in a config-file, this is the same as:
+
+```yaml
+[global]
+  mime: gif=image/jif
+  mime: ts=text/x.typescript
+```
+
+run copyparty with `--mimes` to list all the default mappings
+
+
 # packages

 the party might be closer than you think
````
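One way to verify the new `--mime` mapping from the outside is to fetch an affected file and inspect the response header; a minimal sketch, assuming a local copyparty on port 3923 started with `--mime gif=image/jif`, and a hypothetical `demo.gif` in a readable volume:

```python
# probe the served mimetype of a file; the hostname, port, and demo.gif
# filename are assumptions for illustration, not part of the diff above
from urllib.request import urlopen

with urlopen("http://127.0.0.1:3923/demo.gif") as r:
    print(r.headers.get("Content-Type"))  # expect "image/jif"
```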
```diff
@@ -1769,12 +1801,14 @@ alternatively, some alternatives roughly sorted by speed (unreproducible benchma
 * [rclone-http](./docs/rclone.md) (26s), read-only
 * [partyfuse.py](./bin/#partyfusepy) (35s), read-only
 * [rclone-ftp](./docs/rclone.md) (47s), read/WRITE
-* davfs2 (103s), read/WRITE, *very fast* on small files
+* davfs2 (103s), read/WRITE
 * [win10-webdav](#webdav-server) (138s), read/WRITE
 * [win10-smb2](#smb-server) (387s), read/WRITE

 most clients will fail to mount the root of a copyparty server unless there is a root volume (so you get the admin-panel instead of a browser when accessing it) -- in that case, mount a specific volume instead

+if you have volumes that are accessible without a password, then some webdav clients (such as davfs2) require the global-option `--dav-auth` to access any password-protected areas


 # android app

@@ -1803,6 +1837,7 @@ defaults are usually fine - expect `8 GiB/s` download, `1 GiB/s` upload

 below are some tweaks roughly ordered by usefulness:

+* disabling HTTP/2 and HTTP/3 can make uploads 5x faster, depending on server/client software
 * `-q` disables logging and can help a bunch, even when combined with `-lo` to redirect logs to file
 * `--hist` pointing to a fast location (ssd) will make directory listings and searches faster when `-e2d` or `-e2t` is set
   * and also makes thumbnails load faster, regardless of e2d/e2t
@@ -1918,7 +1953,7 @@ volflag `dk` generates dirkeys (per-directory accesskeys) for all folders, grant

 volflag `dky` disables the actual key-check, meaning anyone can see the contents of a folder where they have `g` access, but not its subdirectories

-* `dk` + `dky` gives the same behavior as if all users with `g` access have full read-access, but subfolders are hidden files (their names start with a dot), so `dky` is an alternative to renaming all the folders for that purpose, maybe just for some users
+* `dk` + `dky` gives the same behavior as if all users with `g` access have full read-access, but subfolders are hidden files (as if their names start with a dot), so `dky` is an alternative to renaming all the folders for that purpose, maybe just for some users

 volflag `dks` lets people enter subfolders as well, and also enables download-as-zip/tar

@@ -1943,7 +1978,7 @@ the default configs take about 0.4 sec and 256 MiB RAM to process a new password

 both HTTP and HTTPS are accepted by default, but letting a [reverse proxy](#reverse-proxy) handle the https/tls/ssl would be better (probably more secure by default)

-copyparty doesn't speak HTTP/2 or QUIC, so using a reverse proxy would solve that as well
+copyparty doesn't speak HTTP/2 or QUIC, so using a reverse proxy would solve that as well -- but note that HTTP/1 is usually faster than both HTTP/2 and HTTP/3

 if [cfssl](https://github.com/cloudflare/cfssl/releases/latest) is installed, copyparty will automatically create a CA and server-cert on startup
 * the certs are written to `--crt-dir` for distribution, see `--help` for the other `--crt` options
```
**bin/hooks/README.md**

```diff
@@ -2,7 +2,7 @@ standalone programs which are executed by copyparty when an event happens (uploa

 these programs either take zero arguments, or a filepath (the affected file), or a json message with filepath + additional info

-run copyparty with `--help-hooks` for usage details / hook type explanations (xbu/xau/xiu/xbr/xar/xbd/xad)
+run copyparty with `--help-hooks` for usage details / hook type explanations (xm/xbu/xau/xiu/xbr/xar/xbd/xad/xban)

 > **note:** in addition to event hooks (the stuff described here), copyparty has another api to run your programs/scripts while providing way more information such as audio tags / video codecs / etc and optionally daisychaining data between scripts in a processing pipeline; if that's what you want then see [mtp plugins](../mtag/) instead

@@ -13,6 +13,7 @@ run copyparty with `--help-hooks` for usage details / hook type explanations (xb
 * [image-noexif.py](image-noexif.py) removes image exif by overwriting / directly editing the uploaded file
 * [discord-announce.py](discord-announce.py) announces new uploads on discord using webhooks ([example](https://user-images.githubusercontent.com/241032/215304439-1c1cb3c8-ec6f-4c17-9f27-81f969b1811a.png))
 * [reject-mimetype.py](reject-mimetype.py) rejects uploads unless the mimetype is acceptable
+* [into-the-cache-it-goes.py](into-the-cache-it-goes.py) avoids bugs in caching proxies by immediately downloading each file that is uploaded


 # upload batches
@@ -27,3 +28,5 @@ these are `--xiu` hooks; unlike `xbu` and `xau` (which get executed on every sin

 # on message
 * [wget.py](wget.py) lets you download files by POSTing URLs to copyparty
+* [qbittorrent-magnet.py](qbittorrent-magnet.py) starts downloading a torrent if you post a magnet url
+* [msg-log.py](msg-log.py) is a guestbook; logs messages to a doc in the same folder
```
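For orientation: a hook registered with the `j` flag receives one command-line argument containing a JSON description of the event, which is how every script in this changeset consumes it. A minimal sketch of such a hook, using the field names that the bundled hooks below read (`ap` = absolute path, `vp` = volume path, `sz` = size):

```python
#!/usr/bin/env python3
# minimal copyparty event-hook skeleton for the `j` flag; the field
# names (ap/vp/sz) are the ones used by the bundled hooks in this diff
import sys
import json


def main():
    inf = json.loads(sys.argv[1])
    print("hook fired for %s (%s, %d bytes)" % (inf["ap"], inf["vp"], inf["sz"]))


if __name__ == "__main__":
    main()
```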
**bin/hooks/discord-announce.py**

```diff
@@ -12,19 +12,28 @@ announces a new upload on discord
 example usage as global config:
     --xau f,t5,j,bin/hooks/discord-announce.py

+parameters explained,
+    xau = execute after upload
+    f = fork; don't delay other hooks while this is running
+    t5 = timeout if it's still running after 5 sec
+    j = this hook needs upload information as json (not just the filename)
+
 example usage as a volflag (per-volume config):
     -v srv/inc:inc:r:rw,ed:c,xau=f,t5,j,bin/hooks/discord-announce.py
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
     (share filesystem-path srv/inc as volume /inc,
      readable by everyone, read-write for user 'ed',
-     running this plugin on all uploads with the params listed below)
+     running this plugin on all uploads with the params explained above)

-parameters explained,
-    xbu = execute after upload
-    f = fork; don't wait for it to finish
-    t5 = timeout if it's still running after 5 sec
-    j = provide upload information as json; not just the filename
+example usage as a volflag in a copyparty config file:
+    [/inc]
+      srv/inc
+      accs:
+        r: *
+        rw: ed
+      flags:
+        xau: f,t5,j,bin/hooks/discord-announce.py

 replace "xau" with "xbu" to announce Before upload starts instead of After completion
```
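The hook spec shown in both examples is the same comma-separated list: zero or more flags followed by the script path; roughly (an illustration only, not copyparty's actual parser):

```python
# rough decomposition of a hook spec into flags + script path;
# illustration only -- copyparty's real parser handles more edge cases
spec = "f,t5,j,bin/hooks/discord-announce.py"
*flags, script = spec.split(",")
print(flags)   # ['f', 't5', 'j']
print(script)  # bin/hooks/discord-announce.py
```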
**bin/hooks/into-the-cache-it-goes.py** (new file, 140 lines)

```python
#!/usr/bin/env python3

import sys
import json
import shutil
import platform
import subprocess as sp
from urllib.parse import quote


_ = r"""
try to avoid race conditions in caching proxies
(primarily cloudflare, but probably others too)
by means of the most obvious solution possible:

just as each file has finished uploading, use
the server's external URL to download the file
so that it ends up in the cache, warm and snug

this intentionally delays the upload response
as it waits for the file to finish downloading
before copyparty is allowed to return the URL

NOTE: you must edit this script before use,
replacing https://example.com with your URL

NOTE: if the files are only accessible with a
password and/or filekey, you must also add
a cromulent password in the PASSWORD field

NOTE: needs either wget, curl, or "requests":
python3 -m pip install --user -U requests


example usage as global config:
    --xau j,t10,bin/hooks/into-the-cache-it-goes.py

parameters explained,
    xau = execute after upload
    j = this hook needs upload information as json (not just the filename)
    t10 = abort download and continue if it takes longer than 10sec

example usage as a volflag (per-volume config):
    -v srv/inc:inc:r:rw,ed:xau=j,t10,bin/hooks/into-the-cache-it-goes.py
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    (share filesystem-path srv/inc as volume /inc,
     readable by everyone, read-write for user 'ed',
     running this plugin on all uploads with params explained above)

example usage as a volflag in a copyparty config file:
    [/inc]
      srv/inc
      accs:
        r: *
        rw: ed
      flags:
        xau: j,t10,bin/hooks/into-the-cache-it-goes.py
"""


# replace this with your site's external URL
# (including the :portnumber if necessary)
SITE_URL = "https://example.com"

# if downloading is protected by passwords or filekeys,
# specify a valid password between the quotes below:
PASSWORD = ""

# if file is larger than this, skip download
MAX_MEGABYTES = 8

# =============== END OF CONFIG ===============


WINDOWS = platform.system() == "Windows"


def main():
    fun = download_with_python
    if shutil.which("curl"):
        fun = download_with_curl
    elif shutil.which("wget"):
        fun = download_with_wget

    inf = json.loads(sys.argv[1])

    if inf["sz"] > 1024 * 1024 * MAX_MEGABYTES:
        print("[into-the-cache] file is too large; will not download")
        return

    file_url = "/"
    if inf["vp"]:
        file_url += inf["vp"] + "/"
    file_url += inf["ap"].replace("\\", "/").split("/")[-1]
    file_url = SITE_URL.rstrip("/") + quote(file_url, safe=b"/")

    print("[into-the-cache] %s(%s)" % (fun.__name__, file_url))
    fun(file_url, PASSWORD.strip())

    print("[into-the-cache] Download OK")


def download_with_curl(url, pw):
    cmd = ["curl"]

    if pw:
        cmd += ["-HPW:%s" % (pw,)]

    nah = sp.DEVNULL
    sp.check_call(cmd + [url], stdout=nah, stderr=nah)


def download_with_wget(url, pw):
    cmd = ["wget", "-O"]
    cmd += ["nul" if WINDOWS else "/dev/null"]

    if pw:
        cmd += ["--header=PW:%s" % (pw,)]

    nah = sp.DEVNULL
    sp.check_call(cmd + [url], stdout=nah, stderr=nah)


def download_with_python(url, pw):
    import requests

    headers = {}
    if pw:
        headers["PW"] = pw

    with requests.get(url, headers=headers, stream=True) as r:
        r.raise_for_status()
        for _ in r.iter_content(chunk_size=1024 * 256):
            pass


if __name__ == "__main__":
    main()
```
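Since the hook just reads a JSON blob from `argv[1]`, it can be dry-run outside copyparty by faking that payload; a sketch (the field values are made up, and the download will only succeed once `SITE_URL` points at a reachable server):

```python
# hypothetical dry-run of the hook above, faking the JSON argv
# that copyparty's `j` flag would normally provide
import json
import subprocess

payload = {"ap": "/srv/inc/file.bin", "vp": "inc", "sz": 1024}
subprocess.run(
    ["python3", "bin/hooks/into-the-cache-it-goes.py", json.dumps(payload)]
)
```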
**bin/hooks/msg-log.py**

```diff
@@ -14,19 +14,32 @@ except:
 from datetime import datetime


-"""
+_ = r"""
 use copyparty as a dumb messaging server / guestbook thing;
+accepts guestbook entries from 📟 (message-to-server-log) in the web-ui
 initially contributed by @clach04 in https://github.com/9001/copyparty/issues/35 (thanks!)

-Sample usage:
+example usage as global config:

     python copyparty-sfx.py --xm j,bin/hooks/msg-log.py

-Where:
+parameters explained,
+    xm = execute on message (📟)
+    j = this hook needs message information as json (not just the message-text)

-    xm = execute on message-to-server-log
-    j = provide message information as json; not just the text - this script REQUIRES json
-    t10 = timeout and kill download after 10 secs
+example usage as a volflag (per-volume config):
+    python copyparty-sfx.py -v srv/log:log:r:c,xm=j,bin/hooks/msg-log.py
+                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+    (share filesystem-path srv/log as volume /log, readable by everyone,
+     running this plugin on all messages with the params explained above)
+
+example usage as a volflag in a copyparty config file:
+    [/log]
+      srv/log
+      accs:
+        r: *
+      flags:
+        xm: j,bin/hooks/msg-log.py
 """
```
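An `xm` hook like this one fires on the 📟 message-to-server-log function, which is a plain form POST; a sketch of triggering it from Python, assuming the `msg` form field (the one copyparty's documented `curl -d msg=...` example uses) and the `/log` volume configured above:

```python
# post a guestbook entry the same way the web-ui's 📟 field does;
# the `msg` form field and the /log volume are assumptions from context
from urllib.parse import urlencode
from urllib.request import urlopen

data = urlencode({"msg": "hello guestbook"}).encode()
with urlopen("http://127.0.0.1:3923/log/", data=data) as r:
    print(r.status)
```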
**bin/hooks/qbittorrent-magnet.py** (new executable file, 128 lines)

```python
#!/usr/bin/env python3
# coding: utf-8

import os
import sys
import json
import shutil
import subprocess as sp


_ = r"""
start downloading a torrent by POSTing a magnet URL to copyparty,
for example using 📟 (message-to-server-log) in the web-ui

by default it will download the torrent to the folder you were in
when you pasted the magnet into the message-to-server-log field

you can optionally specify another location by adding a whitespace
after the magnet URL followed by the name of the subfolder to DL into;
for example "anime/airing" would download to /srv/media/anime/airing
because the keyword "anime" is in the DESTS config below

needs python3

example usage as global config (not a good idea):
    python copyparty-sfx.py --xm aw,f,j,t60,bin/hooks/qbittorrent-magnet.py

parameters explained,
    xm = execute on message (📟)
    aw = only users with write-access can use this
    f = fork; don't delay other hooks while this is running
    j = provide message information as json (not just the text)
    t60 = abort if qbittorrent has to think about it for more than 1 min

example usage as a volflag (per-volume config, much better):
    -v srv/qb:qb:A,ed:c,xm=aw,f,j,t60,bin/hooks/qbittorrent-magnet.py
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    (share filesystem-path srv/qb as volume /qb with Admin for user 'ed',
     running this plugin on all messages with the params explained above)

example usage as a volflag in a copyparty config file:
    [/qb]
      srv/qb
      accs:
        A: ed
      flags:
        xm: aw,f,j,t60,bin/hooks/qbittorrent-magnet.py

the volflag examples only kick in if you send the torrent magnet
while you're in the /qb folder (or any folder below there)
"""


# list of usernames to allow
ALLOWLIST = ["ed", "morpheus"]


# list of destination aliases to translate into full filesystem
# paths; takes effect if the first folder component in the
# custom download location matches anything in this dict
DESTS = {
    "iso": "/srv/pub/linux-isos",
    "anime": "/srv/media/anime",
}


def main():
    inf = json.loads(sys.argv[1])
    url = inf["txt"]
    if not url.lower().startswith("magnet:?"):
        # not a magnet, abort
        return

    if inf["user"] not in ALLOWLIST:
        print("🧲 denied for user", inf["user"])
        return

    # might as well run the command inside the filesystem folder
    # which matches the URL that the magnet message was sent to
    os.chdir(inf["ap"])

    # is there a custom download location in the url?
    dst = ""
    if " " in url:
        url, dst = url.split(" ", 1)

    # is the location in the predefined list of locations?
    parts = dst.replace("\\", "/").split("/")
    if parts[0] in DESTS:
        dst = os.path.join(DESTS[parts[0]], *(parts[1:]))
    else:
        # nope, so download to the current folder instead;
        # comment the dst line below to instead use the default
        # download location from your qbittorrent settings
        dst = inf["ap"]
        pass

    # archlinux has a -nox suffix for qbittorrent if headless
    # so check if we should be using that
    if shutil.which("qbittorrent-nox"):
        torrent_bin = "qbittorrent-nox"
    else:
        torrent_bin = "qbittorrent"

    # the command to add a new torrent, adjust if necessary
    cmd = [torrent_bin, url]
    if dst:
        cmd += ["--save-path=%s" % (dst,)]

    # if copyparty and qbittorrent are running as different users
    # you may have to do something like the following
    # (assuming qbittorrent* is nopasswd-allowed in sudoers):
    #
    # cmd = ["sudo", "-u", "qbitter"] + cmd

    print("🧲", cmd)

    try:
        sp.check_call(cmd)
    except:
        print("🧲 FAILED TO ADD", url)


if __name__ == "__main__":
    main()
```
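To make the DESTS lookup in `main()` concrete: a message like `magnet:?xt=... anime/airing` splits into the magnet and a destination hint, whose first path component selects the alias:

```python
# worked example of the DESTS alias translation used by main() above
import os

DESTS = {"iso": "/srv/pub/linux-isos", "anime": "/srv/media/anime"}
parts = "anime/airing".replace("\\", "/").split("/")
if parts[0] in DESTS:
    print(os.path.join(DESTS[parts[0]], *parts[1:]))  # /srv/media/anime/airing
```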
**bin/hooks/wget.py**

```diff
@@ -9,25 +9,38 @@ import subprocess as sp
 _ = r"""
 use copyparty as a file downloader by POSTing URLs as
 application/x-www-form-urlencoded (for example using the
-message/pager function on the website)
+📟 message-to-server-log in the web-ui)

 example usage as global config:
-    --xm f,j,t3600,bin/hooks/wget.py
-
-example usage as a volflag (per-volume config):
-    -v srv/inc:inc:r:rw,ed:c,xm=f,j,t3600,bin/hooks/wget.py
-                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-    (share filesystem-path srv/inc as volume /inc,
-     readable by everyone, read-write for user 'ed',
-     running this plugin on all messages with the params listed below)
+    --xm aw,f,j,t3600,bin/hooks/wget.py

 parameters explained,
     xm = execute on message-to-server-log
-    f = fork so it doesn't block uploads
-    j = provide message information as json; not just the text
+    aw = only users with write-access can use this
+    f = fork; don't delay other hooks while this is running
+    j = provide message information as json (not just the text)
     c3 = mute all output
-    t3600 = timeout and kill download after 1 hour
+    t3600 = timeout and abort download after 1 hour
+
+example usage as a volflag (per-volume config):
+    -v srv/inc:inc:r:rw,ed:c,xm=aw,f,j,t3600,bin/hooks/wget.py
+                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+    (share filesystem-path srv/inc as volume /inc,
+     readable by everyone, read-write for user 'ed',
+     running this plugin on all messages with the params explained above)
+
+example usage as a volflag in a copyparty config file:
+    [/inc]
+      srv/inc
+      accs:
+        r: *
+        rw: ed
+      flags:
+        xm: aw,f,j,t3600,bin/hooks/wget.py
+
+the volflag examples only kick in if you send the message
+while you're in the /inc folder (or any folder below there)
 """
```
**bin/u2c.py** (161 changed lines)

```diff
@@ -1,8 +1,8 @@
 #!/usr/bin/env python3
 from __future__ import print_function, unicode_literals

-S_VERSION = "1.18"
-S_BUILD_DT = "2024-06-01"
+S_VERSION = "1.21"
+S_BUILD_DT = "2024-07-26"

 """
 u2c.py: upload to copyparty
@@ -20,6 +20,7 @@ import sys
 import stat
 import math
 import time
+import json
 import atexit
 import signal
 import socket
@@ -79,7 +80,7 @@ req_ses = requests.Session()


 class Daemon(threading.Thread):
-    def __init__(self, target, name = None, a = None):
+    def __init__(self, target, name=None, a=None):
         threading.Thread.__init__(self, name=name)
         self.a = a or ()
         self.fun = target
@@ -110,18 +111,22 @@ class File(object):
         # set by get_hashlist
         self.cids = []  # type: list[tuple[str, int, int]]  # [ hash, ofs, sz ]
         self.kchunks = {}  # type: dict[str, tuple[int, int]]  # hash: [ ofs, sz ]
+        self.t_hash = 0.0  # type: float

         # set by handshake
         self.recheck = False  # duplicate; redo handshake after all files done
         self.ucids = []  # type: list[str]  # chunks which need to be uploaded
         self.wark = ""  # type: str
         self.url = ""  # type: str
-        self.nhs = 0
+        self.nhs = 0  # type: int

         # set by upload
+        self.t0_up = 0.0  # type: float
+        self.t1_up = 0.0  # type: float
+        self.nojoin = 0  # type: int
         self.up_b = 0  # type: int
         self.up_c = 0  # type: int
-        self.cd = 0
+        self.cd = 0  # type: int

         # t = "size({}) lmod({}) top({}) rel({}) abs({}) name({})\n"
         # eprint(t.format(self.size, self.lmod, self.top, self.rel, self.abs, self.name))
@@ -130,10 +135,20 @@ class File(object):
 class FileSlice(object):
     """file-like object providing a fixed window into a file"""

-    def __init__(self, file, cid):
+    def __init__(self, file, cids):
         # type: (File, str) -> None

-        self.car, self.len = file.kchunks[cid]
+        self.file = file
+        self.cids = cids
+
+        self.car, tlen = file.kchunks[cids[0]]
+        for cid in cids[1:]:
+            ofs, clen = file.kchunks[cid]
+            if ofs != self.car + tlen:
+                raise Exception(9)
+            tlen += clen
+
+        self.len = tlen
         self.cdr = self.car + self.len
         self.ofs = 0  # type: int
         self.f = open(file.abs, "rb", 512 * 1024)
```
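The reworked `FileSlice` above now accepts a list of chunk-hashes and only joins them when they are physically adjacent in the file; a standalone sketch of that contiguity check (hashes and sizes are made up for illustration):

```python
# standalone model of the new FileSlice contiguity check; kchunks maps
# chunk-hash -> [offset, size], and joining requires each chunk to start
# exactly where the previous one ended
kchunks = {"aaa": [0, 16], "bbb": [16, 16], "ccc": [48, 16]}


def join_len(cids):
    car, tlen = kchunks[cids[0]]
    for cid in cids[1:]:
        ofs, clen = kchunks[cid]
        if ofs != car + tlen:
            raise Exception("only sibling chunks can be joined")
        tlen += clen
    return tlen


print(join_len(["aaa", "bbb"]))  # 32: contiguous, ok
# join_len(["bbb", "ccc"]) would raise: "ccc" starts at 48, not 32
```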
```diff
@@ -357,7 +372,7 @@ def undns(url):
     usp = urlsplit(url)
     hn = usp.hostname
     gai = None
-    eprint("resolving host [{0}] ...".format(hn), end="")
+    eprint("resolving host [%s] ..." % (hn,))
     try:
         gai = socket.getaddrinfo(hn, None)
         hn = gai[0][4][0]
@@ -375,7 +390,7 @@ def undns(url):

     usp = usp._replace(netloc=hn)
     url = urlunsplit(usp)
-    eprint(" {0}".format(url))
+    eprint(" %s\n" % (url,))
     return url


@@ -518,6 +533,8 @@ def get_hashlist(file, pcb, mth):
     file_ofs = 0
     ret = []
     with open(file.abs, "rb", 512 * 1024) as f:
+        t0 = time.time()
+
         if mth and file.size >= 1024 * 512:
             ret = mth.hash(f, file.size, chunk_sz, pcb, file)
             file_rem = 0
@@ -544,10 +561,12 @@ def get_hashlist(file, pcb, mth):
         if pcb:
             pcb(file, file_ofs)

+    file.t_hash = time.time() - t0
     file.cids = ret
     file.kchunks = {}
     for k, v1, v2 in ret:
-        file.kchunks[k] = [v1, v2]
+        if k not in file.kchunks:
+            file.kchunks[k] = [v1, v2]


 def handshake(ar, file, search):
@@ -589,7 +608,8 @@ def handshake(ar, file, search):
     sc = 600
     txt = ""
     try:
-        r = req_ses.post(url, headers=headers, json=req)
+        zs = json.dumps(req, separators=(",\n", ": "))
+        r = req_ses.post(url, headers=headers, data=zs)
         sc = r.status_code
         txt = r.text
         if sc < 400:
@@ -636,13 +656,13 @@ def handshake(ar, file, search):
     return r["hash"], r["sprs"]


-def upload(file, cid, pw, stats):
-    # type: (File, str, str, str) -> None
-    """upload one specific chunk, `cid` (a chunk-hash)"""
+def upload(fsl, pw, stats):
+    # type: (FileSlice, str, str) -> None
+    """upload a range of file data, defined by one or more `cid` (chunk-hash)"""

     headers = {
-        "X-Up2k-Hash": cid,
-        "X-Up2k-Wark": file.wark,
+        "X-Up2k-Hash": ",".join(fsl.cids),
+        "X-Up2k-Wark": fsl.file.wark,
         "Content-Type": "application/octet-stream",
     }
@@ -652,15 +672,24 @@ def upload(file, cid, pw, stats):
     if pw:
         headers["Cookie"] = "=".join(["cppwd", pw])

-    f = FileSlice(file, cid)
     try:
-        r = req_ses.post(file.url, headers=headers, data=f)
+        r = req_ses.post(fsl.file.url, headers=headers, data=fsl)
+
+        if r.status_code == 400:
+            txt = r.text
+            if (
+                "already being written" in txt
+                or "already got that" in txt
+                or "only sibling chunks" in txt
+            ):
+                fsl.file.nojoin = 1
+
         if not r:
             raise Exception(repr(r))

         _ = r.content
     finally:
-        f.f.close()
+        fsl.f.close()


 class Ctl(object):
@@ -724,6 +753,9 @@ class Ctl(object):
         if ar.safe:
             self._safe()
         else:
+            self.at_hash = 0.0
+            self.at_up = 0.0
+            self.at_upr = 0.0
             self.hash_f = 0
             self.hash_c = 0
             self.hash_b = 0
@@ -743,7 +775,7 @@ class Ctl(object):

         self.mutex = threading.Lock()
         self.q_handshake = Queue()  # type: Queue[File]
-        self.q_upload = Queue()  # type: Queue[tuple[File, str]]
+        self.q_upload = Queue()  # type: Queue[FileSlice]

         self.st_hash = [None, "(idle, starting...)"]  # type: tuple[File, int]
         self.st_up = [None, "(idle, starting...)"]  # type: tuple[File, int]
@@ -788,7 +820,8 @@ class Ctl(object):
             for nc, cid in enumerate(hs):
                 print(" {0} up {1}".format(ncs - nc, cid))
                 stats = "{0}/0/0/{1}".format(nf, self.nfiles - nf)
-                upload(file, cid, self.ar.a, stats)
+                fslice = FileSlice(file, [cid])
+                upload(fslice, self.ar.a, stats)

             print(" ok!")
             if file.recheck:
@@ -797,7 +830,7 @@ class Ctl(object):
         if not self.recheck:
             return

-        eprint("finalizing {0} duplicate files".format(len(self.recheck)))
+        eprint("finalizing %d duplicate files\n" % (len(self.recheck),))
         for file in self.recheck:
             handshake(self.ar, file, search)

@@ -871,10 +904,17 @@ class Ctl(object):
             t = "{0} eta @ {1}/s, {2}, {3}# left".format(self.eta, spd, sleft, nleft)
             eprint(txt + "\033]0;{0}\033\\\r{0}{1}".format(t, tail))

+        if self.hash_b and self.at_hash:
+            spd = humansize(self.hash_b / self.at_hash)
+            eprint("\nhasher: %.2f sec, %s/s\n" % (self.at_hash, spd))
+        if self.up_b and self.at_up:
+            spd = humansize(self.up_b / self.at_up)
+            eprint("upload: %.2f sec, %s/s\n" % (self.at_up, spd))
+
         if not self.recheck:
             return

-        eprint("finalizing {0} duplicate files".format(len(self.recheck)))
+        eprint("finalizing %d duplicate files\n" % (len(self.recheck),))
         for file in self.recheck:
             handshake(self.ar, file, False)

@@ -1060,21 +1100,62 @@ class Ctl(object):
                 self.handshaker_busy -= 1

             if not hs:
-                kw = "uploaded" if file.up_b else " found"
-                print("{0} {1}".format(kw, upath))
-            for cid in hs:
-                self.q_upload.put([file, cid])
+                self.at_hash += file.t_hash
+
+                if self.ar.spd:
+                    if VT100:
+                        c1 = "\033[36m"
+                        c2 = "\033[0m"
+                    else:
+                        c1 = c2 = ""
+
+                    spd_h = humansize(file.size / file.t_hash, True)
+                    if file.up_b:
+                        t_up = file.t1_up - file.t0_up
+                        spd_u = humansize(file.size / t_up, True)
+
+                        t = "uploaded %s %s(h:%.2fs,%s/s,up:%.2fs,%s/s)%s"
+                        print(t % (upath, c1, file.t_hash, spd_h, t_up, spd_u, c2))
+                    else:
+                        t = " found %s %s(%.2fs,%s/s)%s"
+                        print(t % (upath, c1, file.t_hash, spd_h, c2))
+                else:
+                    kw = "uploaded" if file.up_b else " found"
+                    print("{0} {1}".format(kw, upath))
+
+            chunksz = up2k_chunksize(file.size)
+            njoin = (self.ar.sz * 1024 * 1024) // chunksz
+            cs = hs[:]
+            while cs:
+                fsl = FileSlice(file, cs[:1])
+                try:
+                    if file.nojoin:
+                        raise Exception()
+                    for n in range(2, min(len(cs), njoin + 1)):
+                        fsl = FileSlice(file, cs[:n])
+                except:
+                    pass
+                cs = cs[len(fsl.cids) :]
+                self.q_upload.put(fsl)

     def uploader(self):
         while True:
-            task = self.q_upload.get()
-            if not task:
+            fsl = self.q_upload.get()
+            if not fsl:
                 self.st_up = [None, "(finished)"]
                 break

+            file = fsl.file
+            cids = fsl.cids
+
             with self.mutex:
+                if not self.uploader_busy:
+                    self.at_upr = time.time()
                 self.uploader_busy += 1
-                self.t0_up = self.t0_up or time.time()
+                if not file.t0_up:
+                    file.t0_up = time.time()
+                if not self.t0_up:
+                    self.t0_up = file.t0_up

             stats = "%d/%d/%d/%d %d/%d %s" % (
                 self.up_f,
@@ -1086,28 +1167,30 @@ class Ctl(object):
                 self.eta,
             )

-            file, cid = task
             try:
-                upload(file, cid, self.ar.a, stats)
+                upload(fsl, self.ar.a, stats)
             except Exception as ex:
-                t = "upload failed, retrying: {0} #{1} ({2})\n"
-                eprint(t.format(file.name, cid[:8], ex))
+                t = "upload failed, retrying: %s #%s+%d (%s)\n"
+                eprint(t % (file.name, cids[0][:8], len(cids) - 1, ex))
                 file.cd = time.time() + self.ar.cd
                 # handshake will fix it

             with self.mutex:
-                sz = file.kchunks[cid][1]
-                file.ucids = [x for x in file.ucids if x != cid]
+                sz = fsl.len
+                file.ucids = [x for x in file.ucids if x not in cids]
                 if not file.ucids:
+                    file.t1_up = time.time()
                     self.q_handshake.put(file)

-                self.st_up = [file, cid]
+                self.st_up = [file, cids[0]]
                 file.up_b += sz
                 self.up_b += sz
                 self.up_br += sz
                 file.up_c += 1
                 self.up_c += 1
                 self.uploader_busy -= 1
+                if not self.uploader_busy:
+                    self.at_up += time.time() - self.at_upr

     def up_done(self, file):
         if self.ar.dl:
@@ -1150,6 +1233,7 @@ source file/folder selection uses rsync syntax, meaning that:
     ap.add_argument("--ok", action="store_true", help="continue even if some local files are inaccessible")
     ap.add_argument("--touch", action="store_true", help="if last-modified timestamps differ, push local to server (need write+delete perms)")
     ap.add_argument("--ow", action="store_true", help="overwrite existing files instead of autorenaming")
+    ap.add_argument("--spd", action="store_true", help="print speeds for each file")
     ap.add_argument("--version", action="store_true", help="show version and exit")

     ap = app.add_argument_group("compatibility")
@@ -1164,6 +1248,7 @@ source file/folder selection uses rsync syntax, meaning that:
     ap = app.add_argument_group("performance tweaks")
     ap.add_argument("-j", type=int, metavar="CONNS", default=2, help="parallel connections")
     ap.add_argument("-J", type=int, metavar="CORES", default=hcores, help="num cpu-cores to use for hashing; set 0 or 1 for single-core hashing")
+    ap.add_argument("--sz", type=int, metavar="MiB", default=64, help="try to make each POST this big")
     ap.add_argument("-nh", action="store_true", help="disable hashing while uploading")
     ap.add_argument("-ns", action="store_true", help="no status panel (for slow consoles and macos)")
     ap.add_argument("--cd", type=float, metavar="SEC", default=5, help="delay before reattempting a failed handshake/upload")
@@ -1208,7 +1293,7 @@ source file/folder selection uses rsync syntax, meaning that:
     ar.url = ar.url.rstrip("/") + "/"
     if "://" not in ar.url:
         ar.url = "http://" + ar.url

     if "https://" in ar.url.lower():
         try:
             import ssl, zipfile
```
|
|||||||
@@ -1,6 +1,6 @@
 # Maintainer: icxes <dev.null@need.moe>
 pkgname=copyparty
-pkgver="1.13.2"
+pkgver="1.13.5"
 pkgrel=1
 pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
 arch=("any")
@@ -21,7 +21,7 @@ optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tag
 )
 source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
 backup=("etc/${pkgname}.d/init" )
-sha256sums=("39937526aab77f4d78f19d16ed5c0eae9ac28f658e772ae9b54fff5281161c77")
+sha256sums=("83bf52ac03256ee6fe405a912e2767578692760f9554f821dfcab0700dd58082")

 build() {
     cd "${srcdir}/${pkgname}-${pkgver}"

@@ -1,5 +1,5 @@
 {
-  "url": "https://github.com/9001/copyparty/releases/download/v1.13.2/copyparty-sfx.py",
-  "version": "1.13.2",
-  "hash": "sha256-5vvhbiZtgW/km9csq9iYezCaS6wAsLn1qVXDjl6gvwU="
+  "url": "https://github.com/9001/copyparty/releases/download/v1.13.5/copyparty-sfx.py",
+  "version": "1.13.5",
+  "hash": "sha256-I+dqsiScYPcX6JpLgwVoLs7l0FlbXabc/Ofqye9RQI0="
 }
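Both packaging files above pin fresh checksums for the 1.13.5 artifacts; the PKGBUILD uses a hex digest while the manifest uses an SRI-style base64 form of the same sha256. A quick standard-library way to reproduce either locally (the filename is just an example):

    import base64
    import hashlib

    def sha256_hex(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for blk in iter(lambda: f.read(1024 * 1024), b""):  # stream in 1 MiB blocks
                h.update(blk)
        return h.hexdigest()

    digest = sha256_hex("copyparty-1.13.5.tar.gz")
    print(digest)  # compare against the value in sha256sums=(...)
    print("sha256-" + base64.b64encode(bytes.fromhex(digest)).decode())  # SRI form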
@@ -4,7 +4,7 @@
 #
 # installation:
 # wget https://github.com/9001/copyparty/releases/latest/download/copyparty-sfx.py -O /usr/local/bin/copyparty-sfx.py
-# useradd -r -s /sbin/nologin -d /var/lib/copyparty copyparty
+# useradd -r -s /sbin/nologin -m -d /var/lib/copyparty copyparty
 # firewall-cmd --permanent --add-port=3923/tcp # --zone=libvirt
 # firewall-cmd --reload
 # cp -pv copyparty.service /etc/systemd/system/
@@ -12,11 +12,18 @@
 # restorecon -vr /etc/systemd/system/copyparty.service # on fedora/rhel
 # systemctl daemon-reload && systemctl enable --now copyparty
 #
+# every time you edit this file, you must "systemctl daemon-reload"
+# for the changes to take effect and then "systemctl restart copyparty"
+#
 # if it fails to start, first check this: systemctl status copyparty
 # then try starting it while viewing logs:
 # journalctl -fan 100
 # tail -Fn 100 /var/log/copyparty/$(date +%Y-%m%d.log)
 #
+# if you run into any issues, for example thumbnails not working,
+# try removing the "some quick hardening" section and then please
+# let me know if that actually helped so we can look into it
+#
 # you may want to:
 # - change "User=copyparty" and "/var/lib/copyparty/" to another user
 # - edit /etc/copyparty.conf to configure copyparty

118  contrib/themes/bsod.css  Normal file
@@ -0,0 +1,118 @@
+/* copy bsod.* into a folder named ".themes" in your webroot and then
+--themes=10 --theme=9 --css-browser=/.themes/bsod.css
+*/
+
+html.ey {
+    --w2: #3d7bbc;
+    --w3: #5fcbec;
+
+    --fg: #fff;
+    --fg-max: #fff;
+    --fg-weak: var(--w3);
+
+    --bg: #2067b2;
+    --bg-d3: var(--bg);
+    --bg-d2: var(--w2);
+    --bg-d1: var(--fg-weak);
+    --bg-u2: var(--bg);
+    --bg-u3: var(--bg);
+    --bg-u5: var(--w2);
+
+    --tab-alt: var(--fg-weak);
+    --row-alt: var(--w2);
+
+    --scroll: var(--w3);
+
+    --a: #fff;
+    --a-b: #fff;
+    --a-hil: #fff;
+    --a-h-bg: var(--fg-weak);
+    --a-dark: var(--a);
+    --a-gray: var(--fg-weak);
+
+    --btn-fg: var(--a);
+    --btn-bg: var(--w2);
+    --btn-h-fg: var(--w2);
+    --btn-1-fg: var(--bg);
+    --btn-1-bg: var(--a);
+    --txt-sh: a;
+    --txt-bg: var(--w2);
+
+    --u2-b1-bg: var(--w2);
+    --u2-b2-bg: var(--w2);
+    --u2-o-bg: var(--w2);
+    --u2-o-1-bg: var(--a);
+    --u2-txt-bg: var(--w2);
+    --u2-tab-bg: a;
+    --u2-tab-1-bg: var(--w2);
+
+    --sort-1: var(--a);
+    --sort-1: var(--fg-weak);
+
+    --tree-bg: var(--bg);
+
+    --g-b1: a;
+    --g-b2: a;
+    --g-f-bg: var(--w2);
+
+    --f-sh1: 0.1;
+    --f-sh2: 0.02;
+    --f-sh3: 0.1;
+    --f-h-b1: a;
+
+    --srv-1: var(--a);
+    --srv-3: var(--a);
+
+    --mp-sh: a;
+}
+
+html.ey {
+    background: url('bsod.png') top 5em right 4.5em no-repeat fixed var(--bg);
+}
+html.ey body#b {
+    background: var(--bg); /*sandbox*/
+}
+html.ey #ops {
+    margin: 1.7em 1.5em 0 1.5em;
+    border-radius: .3em;
+    border-width: 1px 0;
+}
+html.ey #ops a {
+    text-shadow: 1px 1px 0 rgba(0,0,0,0.5);
+}
+html.ey .opbox {
+    margin: 1.5em 0 0 0;
+}
+html.ey #tree {
+    box-shadow: none;
+}
+html.ey #tt {
+    border-color: var(--w2);
+    background: var(--w2);
+}
+html.ey .mdo a {
+    background: none;
+    text-decoration: underline;
+}
+html.ey .mdo pre,
+html.ey .mdo code {
+    color: #fff;
+    background: var(--w2);
+    border: none;
+}
+html.ey .mdo h1,
+html.ey .mdo h2 {
+    background: none;
+    border-color: var(--w2);
+}
+html.ey .mdo ul ul,
+html.ey .mdo ul ol,
+html.ey .mdo ol ul,
+html.ey .mdo ol ol {
+    border-color: var(--w2);
+}
+html.ey .mdo p>em,
+html.ey .mdo li>em,
+html.ey .mdo td>em {
+    color: #fd0;
+}
BIN  contrib/themes/bsod.png  Normal file (1.2 KiB; binary file not shown)
@@ -42,6 +42,7 @@ from .util import (
     DEF_EXP,
     DEF_MTE,
     DEF_MTH,
+    HAVE_IPV6,
     IMPLICATIONS,
     JINJA_VER,
     MIMES,
@@ -293,6 +294,9 @@ def get_ah_salt() -> str:


 def ensure_locale() -> None:
+    if ANYWIN and PY2:
+        return  # maybe XP, so busted 65001
+
     safe = "en_US.UTF-8"
     for x in [
         safe,
@@ -487,11 +491,17 @@ def disable_quickedit() -> None:


 def sfx_tpoke(top: str):
-    files = [os.path.join(dp, p) for dp, dd, df in os.walk(top) for p in dd + df]
+    files = [top] + [
+        os.path.join(dp, p) for dp, dd, df in os.walk(top) for p in dd + df
+    ]
     while True:
         t = int(time.time())
-        for f in [top] + files:
-            os.utime(f, (t, t))
+        for f in list(files):
+            try:
+                os.utime(f, (t, t))
+            except Exception as ex:
+                lprint("<TPOKE> [%s] %r" % (f, ex))
+                files.remove(f)

         time.sleep(78123)

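Two details make the reworked sfx_tpoke loop safe: it iterates over a snapshot (`list(files)`) so that `files.remove(f)` cannot skip entries mid-iteration, and a path is dropped after its first `utime` failure so the same dead file is not re-logged on every cycle. The snapshot pattern in isolation:

    files = ["a", "b", "c"]
    for f in list(files):    # iterate over a copy...
        if f == "b":
            files.remove(f)  # ...so mutating the original list is safe
    print(files)  # ['a', 'c']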
@@ -630,12 +640,12 @@ def get_sects():
     \033[36mxban\033[35m executes CMD if someone gets banned
     \033[0m
     can be defined as --args or volflags; for example \033[36m
-    --xau notify-send
-    -v .::r:c,xau=notify-send
+    --xau foo.py
+    -v .::r:c,xau=bar.py
     \033[0m
-    commands specified as --args are appended to volflags;
-    each --arg and volflag can be specified multiple times,
-    each command will execute in order unless one returns non-zero
+    hooks specified as commandline --args are appended to volflags;
+    each commandline --arg and volflag can be specified multiple times,
+    each hook will execute in order unless one returns non-zero

     optionally prefix the command with comma-sep. flags similar to -mtp:

@@ -646,6 +656,10 @@ def get_sects():
     \033[36mtN\033[35m sets an N sec timeout before the command is abandoned
     \033[36miN\033[35m xiu only: volume must be idle for N sec (default = 5)

+    \033[36mar\033[35m only run hook if user has read-access
+    \033[36marw\033[35m only run hook if user has read-write-access
+    \033[36marwmd\033[35m ...and so on... (doesn't work for xiu or xban)
+
     \033[36mkt\033[35m kills the entire process tree on timeout (default),
     \033[36mkm\033[35m kills just the main process
     \033[36mkn\033[35m lets it continue running until copyparty is terminated
@@ -655,6 +669,21 @@ def get_sects():
     \033[36mc2\033[35m show only stdout
     \033[36mc3\033[35m mute all process otput
     \033[0m
+    examples:
+
+    \033[36m--xm some.py\033[35m runs \033[33msome.py msgtxt\033[35m on each 📟 message;
+    \033[33mmsgtxt\033[35m is the message that was written into the web-ui
+
+    \033[36m--xm j,some.py\033[35m runs \033[33msome.py jsontext\033[35m on each 📟 message;
+    \033[33mjsontext\033[35m is the message info (ip, user, ..., msg-text)
+
+    \033[36m--xm aw,j,some.py\033[35m requires user to have write-access
+
+    \033[36m--xm aw,,notify-send,hey,--\033[35m shows an OS alert on linux;
+    the \033[33m,,\033[35m stops copyparty from reading the rest as flags and
+    the \033[33m--\033[35m stops notify-send from reading the message as args
+    and the alert will be "hey" followed by the messagetext
+    \033[0m
     each hook is executed once for each event, except for \033[36mxiu\033[0m
     which builds up a backlog of uploads, running the hook just once
     as soon as the volume has been idle for iN seconds (5 by default)
@@ -681,7 +710,10 @@ def get_sects():
     \033[36mstash\033[35m dumps the data to file and returns length + checksum
     \033[36msave,get\033[35m dumps to file and returns the page like a GET
     \033[36mprint,get\033[35m prints the data in the log and returns GET
-    (leave out the ",get" to return an error instead)
+    (leave out the ",get" to return an error instead)\033[0m
+
+    note that the \033[35m--xm\033[0m hook will only run if \033[35m--urlform\033[0m
+    is either \033[36mprint\033[0m or the default \033[36mprint,get\033[0m
     """
     ),
     ],
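To make the new help text concrete, a `--xm j,some.py` hook could look roughly like this; a sketch only, since the help text names just a few payload fields (ip, user, msg-text) and the exact json key names here are assumptions:

    #!/usr/bin/env python3
    # hypothetical message hook for:  --xm j,some.py
    # with the j flag, copyparty passes one json argument instead of raw text
    import json
    import sys

    def main() -> int:
        msg = json.loads(sys.argv[1])
        # key names below are illustrative guesses based on the help text
        print("msg from %s@%s: %s" % (msg.get("user"), msg.get("ip"), msg.get("txt")))
        return 0  # a non-zero exit would stop the remaining hooks in the chain

    if __name__ == "__main__":
        sys.exit(main())

Note the constraint added at the end of the hunk above: the `--xm` hook only fires when `--urlform` is `print` or the default `print,get`.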
@@ -907,7 +939,7 @@ def add_upload(ap):
     ap2.add_argument("--no-dupe", action="store_true", help="reject duplicate files during upload; only matches within the same volume (volflag=nodupe)")
     ap2.add_argument("--no-snap", action="store_true", help="disable snapshots -- forget unfinished uploads on shutdown; don't create .hist/up2k.snap files -- abandoned/interrupted uploads must be cleaned up manually")
     ap2.add_argument("--snap-wri", metavar="SEC", type=int, default=300, help="write upload state to ./hist/up2k.snap every \033[33mSEC\033[0m seconds; allows resuming incomplete uploads after a server crash")
-    ap2.add_argument("--snap-drop", metavar="MIN", type=float, default=1440, help="forget unfinished uploads after \033[33mMIN\033[0m minutes; impossible to resume them after that (360=6h, 1440=24h)")
+    ap2.add_argument("--snap-drop", metavar="MIN", type=float, default=1440.0, help="forget unfinished uploads after \033[33mMIN\033[0m minutes; impossible to resume them after that (360=6h, 1440=24h)")
     ap2.add_argument("--u2ts", metavar="TXT", type=u, default="c", help="how to timestamp uploaded files; [\033[32mc\033[0m]=client-last-modified, [\033[32mu\033[0m]=upload-time, [\033[32mfc\033[0m]=force-c, [\033[32mfu\033[0m]=force-u (volflag=u2ts)")
     ap2.add_argument("--rand", action="store_true", help="force randomized filenames, \033[33m--nrand\033[0m chars long (volflag=rand)")
     ap2.add_argument("--nrand", metavar="NUM", type=int, default=9, help="randomized filenames length (volflag=nrand)")
@@ -916,6 +948,7 @@ def add_upload(ap):
     ap2.add_argument("--sparse", metavar="MiB", type=int, default=4, help="windows-only: minimum size of incoming uploads through up2k before they are made into sparse files")
     ap2.add_argument("--turbo", metavar="LVL", type=int, default=0, help="configure turbo-mode in up2k client; [\033[32m-1\033[0m] = forbidden/always-off, [\033[32m0\033[0m] = default-off and warn if enabled, [\033[32m1\033[0m] = default-off, [\033[32m2\033[0m] = on, [\033[32m3\033[0m] = on and disable datecheck")
     ap2.add_argument("--u2j", metavar="JOBS", type=int, default=2, help="web-client: number of file chunks to upload in parallel; 1 or 2 is good for low-latency (same-country) connections, 4-8 for android clients, 16 for cross-atlantic (max=64)")
+    ap2.add_argument("--u2sz", metavar="N,N,N", type=u, default="1,64,96", help="web-client: default upload chunksize (MiB); sets \033[33mmin,default,max\033[0m in the settings gui. Each HTTP POST will aim for this size. Cloudflare max is 96. Big values are good for cross-atlantic but may increase HDD fragmentation on some FS. Disable this optimization with [\033[32m1,1,1\033[0m]")
     ap2.add_argument("--u2sort", metavar="TXT", type=u, default="s", help="upload order; [\033[32ms\033[0m]=smallest-first, [\033[32mn\033[0m]=alphabetical, [\033[32mfs\033[0m]=force-s, [\033[32mfn\033[0m]=force-n -- alphabetical is a bit slower on fiber/LAN but makes it easier to eyeball if everything went fine")
     ap2.add_argument("--write-uplog", action="store_true", help="write POST reports to textfiles in working-directory")

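The new `--u2sz` takes a `min,default,max` triple, so the web-client's chunk-size preference is clamped into a server-approved range. A hedged sketch of that interpretation (not copyparty's actual parser; the function name is illustrative):

    def parse_u2sz(spec: str, requested: int = 0) -> int:
        # spec is "min,default,max" in MiB, e.g. the default "1,64,96"
        lo, df, hi = (int(x) for x in spec.split(","))
        if not requested:
            return df                       # no client preference: use the default
        return max(lo, min(hi, requested))  # clamp the preference into [lo, hi]

    print(parse_u2sz("1,64,96"))       # 64
    print(parse_u2sz("1,64,96", 128))  # 96 -- capped at the Cloudflare-safe max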
@@ -935,12 +968,12 @@ def add_network(ap):
     else:
         ap2.add_argument("--freebind", action="store_true", help="allow listening on IPs which do not yet exist, for example if the network interfaces haven't finished going up. Only makes sense for IPs other than '0.0.0.0', '127.0.0.1', '::', and '::1'. May require running as root (unless net.ipv6.ip_nonlocal_bind)")
     ap2.add_argument("--s-thead", metavar="SEC", type=int, default=120, help="socket timeout (read request header)")
-    ap2.add_argument("--s-tbody", metavar="SEC", type=float, default=186, help="socket timeout (read/write request/response bodies). Use 60 on fast servers (default is extremely safe). Disable with 0 if reverse-proxied for a 2%% speed boost")
+    ap2.add_argument("--s-tbody", metavar="SEC", type=float, default=186.0, help="socket timeout (read/write request/response bodies). Use 60 on fast servers (default is extremely safe). Disable with 0 if reverse-proxied for a 2%% speed boost")
     ap2.add_argument("--s-rd-sz", metavar="B", type=int, default=256*1024, help="socket read size in bytes (indirectly affects filesystem writes; recommendation: keep equal-to or lower-than \033[33m--iobuf\033[0m)")
     ap2.add_argument("--s-wr-sz", metavar="B", type=int, default=256*1024, help="socket write size in bytes")
-    ap2.add_argument("--s-wr-slp", metavar="SEC", type=float, default=0, help="debug: socket write delay in seconds")
-    ap2.add_argument("--rsp-slp", metavar="SEC", type=float, default=0, help="debug: response delay in seconds")
-    ap2.add_argument("--rsp-jtr", metavar="SEC", type=float, default=0, help="debug: response delay, random duration 0..\033[33mSEC\033[0m")
+    ap2.add_argument("--s-wr-slp", metavar="SEC", type=float, default=0.0, help="debug: socket write delay in seconds")
+    ap2.add_argument("--rsp-slp", metavar="SEC", type=float, default=0.0, help="debug: response delay in seconds")
+    ap2.add_argument("--rsp-jtr", metavar="SEC", type=float, default=0.0, help="debug: response delay, random duration 0..\033[33mSEC\033[0m")


 def add_tls(ap, cert_path):
@@ -948,10 +981,10 @@ def add_tls(ap, cert_path):
     ap2.add_argument("--http-only", action="store_true", help="disable ssl/tls -- force plaintext")
     ap2.add_argument("--https-only", action="store_true", help="disable plaintext -- force tls")
     ap2.add_argument("--cert", metavar="PATH", type=u, default=cert_path, help="path to TLS certificate")
-    ap2.add_argument("--ssl-ver", metavar="LIST", type=u, help="set allowed ssl/tls versions; [\033[32mhelp\033[0m] shows available versions; default is what your python version considers safe")
-    ap2.add_argument("--ciphers", metavar="LIST", type=u, help="set allowed ssl/tls ciphers; [\033[32mhelp\033[0m] shows available ciphers")
+    ap2.add_argument("--ssl-ver", metavar="LIST", type=u, default="", help="set allowed ssl/tls versions; [\033[32mhelp\033[0m] shows available versions; default is what your python version considers safe")
+    ap2.add_argument("--ciphers", metavar="LIST", type=u, default="", help="set allowed ssl/tls ciphers; [\033[32mhelp\033[0m] shows available ciphers")
     ap2.add_argument("--ssl-dbg", action="store_true", help="dump some tls info")
-    ap2.add_argument("--ssl-log", metavar="PATH", type=u, help="log master secrets for later decryption in wireshark")
+    ap2.add_argument("--ssl-log", metavar="PATH", type=u, default="", help="log master secrets for later decryption in wireshark")


 def add_cert(ap, cert_path):
@@ -964,12 +997,12 @@ def add_cert(ap, cert_path):
     ap2.add_argument("--crt-nolo", action="store_true", help="do not add 127.0.0.1 / localhost into cert")
     ap2.add_argument("--crt-nohn", action="store_true", help="do not add mDNS names / hostname into cert")
     ap2.add_argument("--crt-dir", metavar="PATH", default=cert_dir, help="where to save the CA cert")
-    ap2.add_argument("--crt-cdays", metavar="D", type=float, default=3650, help="ca-certificate expiration time in days")
-    ap2.add_argument("--crt-sdays", metavar="D", type=float, default=365, help="server-cert expiration time in days")
+    ap2.add_argument("--crt-cdays", metavar="D", type=float, default=3650.0, help="ca-certificate expiration time in days")
+    ap2.add_argument("--crt-sdays", metavar="D", type=float, default=365.0, help="server-cert expiration time in days")
     ap2.add_argument("--crt-cn", metavar="TXT", type=u, default="partyco", help="CA/server-cert common-name")
     ap2.add_argument("--crt-cnc", metavar="TXT", type=u, default="--crt-cn", help="override CA name")
     ap2.add_argument("--crt-cns", metavar="TXT", type=u, default="--crt-cn cpp", help="override server-cert name")
-    ap2.add_argument("--crt-back", metavar="HRS", type=float, default=72, help="backdate in hours")
+    ap2.add_argument("--crt-back", metavar="HRS", type=float, default=72.0, help="backdate in hours")
     ap2.add_argument("--crt-alg", metavar="S-N", type=u, default="ecdsa-256", help="algorithm and keysize; one of these: \033[32mecdsa-256 rsa-4096 rsa-2048\033[0m")


@@ -1010,7 +1043,7 @@ def add_zc_mdns(ap):
     ap2.add_argument("--zm-mnic", action="store_true", help="merge NICs which share subnets; assume that same subnet means same network")
     ap2.add_argument("--zm-msub", action="store_true", help="merge subnets on each NIC -- always enabled for ipv6 -- reduces network load, but gnome-gvfs clients may stop working, and clients cannot be in subnets that the server is not")
     ap2.add_argument("--zm-noneg", action="store_true", help="disable NSEC replies -- try this if some clients don't see copyparty")
-    ap2.add_argument("--zm-spam", metavar="SEC", type=float, default=0, help="send unsolicited announce every \033[33mSEC\033[0m; useful if clients have IPs in a subnet which doesn't overlap with the server, or to avoid some firewall issues")
+    ap2.add_argument("--zm-spam", metavar="SEC", type=float, default=0.0, help="send unsolicited announce every \033[33mSEC\033[0m; useful if clients have IPs in a subnet which doesn't overlap with the server, or to avoid some firewall issues")


 def add_zc_ssdp(ap):
@@ -1025,14 +1058,15 @@ def add_zc_ssdp(ap):

 def add_ftp(ap):
     ap2 = ap.add_argument_group('FTP options (TCP only)')
-    ap2.add_argument("--ftp", metavar="PORT", type=int, help="enable FTP server on \033[33mPORT\033[0m, for example \033[32m3921")
-    ap2.add_argument("--ftps", metavar="PORT", type=int, help="enable FTPS server on \033[33mPORT\033[0m, for example \033[32m3990")
+    ap2.add_argument("--ftp", metavar="PORT", type=int, default=0, help="enable FTP server on \033[33mPORT\033[0m, for example \033[32m3921")
+    ap2.add_argument("--ftps", metavar="PORT", type=int, default=0, help="enable FTPS server on \033[33mPORT\033[0m, for example \033[32m3990")
     ap2.add_argument("--ftpv", action="store_true", help="verbose")
     ap2.add_argument("--ftp4", action="store_true", help="only listen on IPv4")
     ap2.add_argument("--ftp-ipa", metavar="CIDR", type=u, default="", help="only accept connections from IP-addresses inside \033[33mCIDR\033[0m; specify [\033[32many\033[0m] to disable inheriting \033[33m--ipa\033[0m. Examples: [\033[32mlan\033[0m] or [\033[32m10.89.0.0/16, 192.168.33.0/24\033[0m]")
+    ap2.add_argument("--ftp-no-ow", action="store_true", help="if target file exists, reject upload instead of overwrite")
     ap2.add_argument("--ftp-wt", metavar="SEC", type=int, default=7, help="grace period for resuming interrupted uploads (any client can write to any file last-modified more recently than \033[33mSEC\033[0m seconds ago)")
-    ap2.add_argument("--ftp-nat", metavar="ADDR", type=u, help="the NAT address to use for passive connections")
-    ap2.add_argument("--ftp-pr", metavar="P-P", type=u, help="the range of TCP ports to use for passive connections, for example \033[32m12000-13000")
+    ap2.add_argument("--ftp-nat", metavar="ADDR", type=u, default="", help="the NAT address to use for passive connections")
+    ap2.add_argument("--ftp-pr", metavar="P-P", type=u, default="", help="the range of TCP ports to use for passive connections, for example \033[32m12000-13000")


 def add_webdav(ap):
@@ -1046,14 +1080,15 @@ def add_webdav(ap):

 def add_tftp(ap):
     ap2 = ap.add_argument_group('TFTP options (UDP only)')
-    ap2.add_argument("--tftp", metavar="PORT", type=int, help="enable TFTP server on \033[33mPORT\033[0m, for example \033[32m69 \033[0mor \033[32m3969")
+    ap2.add_argument("--tftp", metavar="PORT", type=int, default=0, help="enable TFTP server on \033[33mPORT\033[0m, for example \033[32m69 \033[0mor \033[32m3969")
+    ap2.add_argument("--tftp4", action="store_true", help="only listen on IPv4")
     ap2.add_argument("--tftpv", action="store_true", help="verbose")
     ap2.add_argument("--tftpvv", action="store_true", help="verboser")
     ap2.add_argument("--tftp-no-fast", action="store_true", help="debug: disable optimizations")
     ap2.add_argument("--tftp-lsf", metavar="PTN", type=u, default="\\.?(dir|ls)(\\.txt)?", help="return a directory listing if a file with this name is requested and it does not exist; defaults matches .ls, dir, .dir.txt, ls.txt, ...")
     ap2.add_argument("--tftp-nols", action="store_true", help="if someone tries to download a directory, return an error instead of showing its directory listing")
     ap2.add_argument("--tftp-ipa", metavar="CIDR", type=u, default="", help="only accept connections from IP-addresses inside \033[33mCIDR\033[0m; specify [\033[32many\033[0m] to disable inheriting \033[33m--ipa\033[0m. Examples: [\033[32mlan\033[0m] or [\033[32m10.89.0.0/16, 192.168.33.0/24\033[0m]")
-    ap2.add_argument("--tftp-pr", metavar="P-P", type=u, help="the range of UDP ports to use for data transfer, for example \033[32m12000-13000")
+    ap2.add_argument("--tftp-pr", metavar="P-P", type=u, default="", help="the range of UDP ports to use for data transfer, for example \033[32m12000-13000")


 def add_smb(ap):
@@ -1129,7 +1164,7 @@ def add_safety(ap):
     ap2.add_argument("-s", action="count", default=0, help="increase safety: Disable thumbnails / potentially dangerous software (ffmpeg/pillow/vips), hide partial uploads, avoid crawlers.\n └─Alias of\033[32m --dotpart --no-thumb --no-mtag-ff --no-robots --force-js")
     ap2.add_argument("-ss", action="store_true", help="further increase safety: Prevent js-injection, accidental move/delete, broken symlinks, webdav, 404 on 403, ban on excessive 404s.\n └─Alias of\033[32m -s --unpost=0 --no-del --no-mv --hardlink --vague-403 -nih")
     ap2.add_argument("-sss", action="store_true", help="further increase safety: Enable logging to disk, scan for dangerous symlinks.\n └─Alias of\033[32m -ss --no-dav --no-logues --no-readme -lo=cpp-%%Y-%%m%%d-%%H%%M%%S.txt.xz --ls=**,*,ln,p,r")
-    ap2.add_argument("--ls", metavar="U[,V[,F]]", type=u, help="do a sanity/safety check of all volumes on startup; arguments \033[33mUSER\033[0m,\033[33mVOL\033[0m,\033[33mFLAGS\033[0m (see \033[33m--help-ls\033[0m); example [\033[32m**,*,ln,p,r\033[0m]")
+    ap2.add_argument("--ls", metavar="U[,V[,F]]", type=u, default="", help="do a sanity/safety check of all volumes on startup; arguments \033[33mUSER\033[0m,\033[33mVOL\033[0m,\033[33mFLAGS\033[0m (see \033[33m--help-ls\033[0m); example [\033[32m**,*,ln,p,r\033[0m]")
     ap2.add_argument("--xvol", action="store_true", help="never follow symlinks leaving the volume root, unless the link is into another volume where the user has similar access (volflag=xvol)")
     ap2.add_argument("--xdev", action="store_true", help="stay within the filesystem of the volume root; do not descend into other devices (symlink or bind-mount to another HDD, ...) (volflag=xdev)")
     ap2.add_argument("--no-dot-mv", action="store_true", help="disallow moving dotfiles; makes it impossible to move folders containing dotfiles")
@@ -1139,7 +1174,7 @@ def add_safety(ap):
     ap2.add_argument("--vague-403", action="store_true", help="send 404 instead of 403 (security through ambiguity, very enterprise)")
     ap2.add_argument("--force-js", action="store_true", help="don't send folder listings as HTML, force clients to use the embedded json instead -- slight protection against misbehaving search engines which ignore \033[33m--no-robots\033[0m")
     ap2.add_argument("--no-robots", action="store_true", help="adds http and html headers asking search engines to not index anything (volflag=norobots)")
-    ap2.add_argument("--logout", metavar="H", type=float, default="8086", help="logout clients after \033[33mH\033[0m hours of inactivity; [\033[32m0.0028\033[0m]=10sec, [\033[32m0.1\033[0m]=6min, [\033[32m24\033[0m]=day, [\033[32m168\033[0m]=week, [\033[32m720\033[0m]=month, [\033[32m8760\033[0m]=year)")
+    ap2.add_argument("--logout", metavar="H", type=float, default=8086.0, help="logout clients after \033[33mH\033[0m hours of inactivity; [\033[32m0.0028\033[0m]=10sec, [\033[32m0.1\033[0m]=6min, [\033[32m24\033[0m]=day, [\033[32m168\033[0m]=week, [\033[32m720\033[0m]=month, [\033[32m8760\033[0m]=year)")
     ap2.add_argument("--ban-pw", metavar="N,W,B", type=u, default="9,60,1440", help="more than \033[33mN\033[0m wrong passwords in \033[33mW\033[0m minutes = ban for \033[33mB\033[0m minutes; disable with [\033[32mno\033[0m]")
     ap2.add_argument("--ban-404", metavar="N,W,B", type=u, default="50,60,1440", help="hitting more than \033[33mN\033[0m 404's in \033[33mW\033[0m minutes = ban for \033[33mB\033[0m minutes; only affects users who cannot see directory listings because their access is either g/G/h")
     ap2.add_argument("--ban-403", metavar="N,W,B", type=u, default="9,2,1440", help="hitting more than \033[33mN\033[0m 403's in \033[33mW\033[0m minutes = ban for \033[33mB\033[0m minutes; [\033[32m1440\033[0m]=day, [\033[32m10080\033[0m]=week, [\033[32m43200\033[0m]=month")
@@ -1175,7 +1210,7 @@ def add_shutdown(ap):
 def add_logging(ap):
     ap2 = ap.add_argument_group('logging options')
     ap2.add_argument("-q", action="store_true", help="quiet; disable most STDOUT messages")
-    ap2.add_argument("-lo", metavar="PATH", type=u, help="logfile, example: \033[32mcpp-%%Y-%%m%%d-%%H%%M%%S.txt.xz\033[0m (NB: some errors may appear on STDOUT only)")
+    ap2.add_argument("-lo", metavar="PATH", type=u, default="", help="logfile, example: \033[32mcpp-%%Y-%%m%%d-%%H%%M%%S.txt.xz\033[0m (NB: some errors may appear on STDOUT only)")
     ap2.add_argument("--no-ansi", action="store_true", default=not VT100, help="disable colors; same as environment-variable NO_COLOR")
     ap2.add_argument("--ansi", action="store_true", help="force colors; overrides environment-variable NO_COLOR")
     ap2.add_argument("--no-logflush", action="store_true", help="don't flush the logfile after each write; tiny bit faster")
@@ -1202,10 +1237,10 @@ def add_thumbnail(ap):
     ap2.add_argument("--no-athumb", action="store_true", help="disable audio thumbnails (spectrograms) (volflag=dathumb)")
     ap2.add_argument("--th-size", metavar="WxH", default="320x256", help="thumbnail res (volflag=thsize)")
     ap2.add_argument("--th-mt", metavar="CORES", type=int, default=CORES, help="num cpu cores to use for generating thumbnails")
-    ap2.add_argument("--th-convt", metavar="SEC", type=float, default=60, help="conversion timeout in seconds (volflag=convt)")
-    ap2.add_argument("--th-ram-max", metavar="GB", type=float, default=6, help="max memory usage (GiB) permitted by thumbnailer; not very accurate")
-    ap2.add_argument("--th-crop", metavar="TXT", type=u, default="y", help="crop thumbnails to 4:3 or keep dynamic height; client can override in UI unless force. [\033[32mfy\033[0m]=crop, [\033[32mfn\033[0m]=nocrop, [\033[32mfy\033[0m]=force-y, [\033[32mfn\033[0m]=force-n (volflag=crop)")
-    ap2.add_argument("--th-x3", metavar="TXT", type=u, default="n", help="show thumbs at 3x resolution; client can override in UI unless force. [\033[32mfy\033[0m]=yes, [\033[32mfn\033[0m]=no, [\033[32mfy\033[0m]=force-yes, [\033[32mfn\033[0m]=force-no (volflag=th3x)")
+    ap2.add_argument("--th-convt", metavar="SEC", type=float, default=60.0, help="conversion timeout in seconds (volflag=convt)")
+    ap2.add_argument("--th-ram-max", metavar="GB", type=float, default=6.0, help="max memory usage (GiB) permitted by thumbnailer; not very accurate")
+    ap2.add_argument("--th-crop", metavar="TXT", type=u, default="y", help="crop thumbnails to 4:3 or keep dynamic height; client can override in UI unless force. [\033[32my\033[0m]=crop, [\033[32mn\033[0m]=nocrop, [\033[32mfy\033[0m]=force-y, [\033[32mfn\033[0m]=force-n (volflag=crop)")
+    ap2.add_argument("--th-x3", metavar="TXT", type=u, default="n", help="show thumbs at 3x resolution; client can override in UI unless force. [\033[32my\033[0m]=yes, [\033[32mn\033[0m]=no, [\033[32mfy\033[0m]=force-yes, [\033[32mfn\033[0m]=force-no (volflag=th3x)")
     ap2.add_argument("--th-dec", metavar="LIBS", default="vips,pil,ff", help="image decoders, in order of preference")
     ap2.add_argument("--th-no-jpg", action="store_true", help="disable jpg output")
     ap2.add_argument("--th-no-webp", action="store_true", help="disable webp output")
@@ -1244,8 +1279,8 @@ def add_db_general(ap, hcores):
     ap2.add_argument("-e2v", action="store_true", help="verify file integrity; rehash all files and compare with db")
     ap2.add_argument("-e2vu", action="store_true", help="on hash mismatch: update the database with the new hash")
     ap2.add_argument("-e2vp", action="store_true", help="on hash mismatch: panic and quit copyparty")
-    ap2.add_argument("--hist", metavar="PATH", type=u, help="where to store volume data (db, thumbs) (volflag=hist)")
-    ap2.add_argument("--no-hash", metavar="PTN", type=u, help="regex: disable hashing of matching absolute-filesystem-paths during e2ds folder scans (volflag=nohash)")
+    ap2.add_argument("--hist", metavar="PATH", type=u, default="", help="where to store volume data (db, thumbs); default is a folder named \".hist\" inside each volume (volflag=hist)")
+    ap2.add_argument("--no-hash", metavar="PTN", type=u, default="", help="regex: disable hashing of matching absolute-filesystem-paths during e2ds folder scans (volflag=nohash)")
     ap2.add_argument("--no-idx", metavar="PTN", type=u, default=noidx, help="regex: disable indexing of matching absolute-filesystem-paths during e2ds folder scans (volflag=noidx)")
     ap2.add_argument("--no-dhash", action="store_true", help="disable rescan acceleration; do full database integrity check -- makes the db ~5%% smaller and bootup/rescans 3~10x slower")
     ap2.add_argument("--re-dhash", action="store_true", help="force a cache rebuild on startup; enable this once if it gets out of sync (should never be necessary)")
@@ -1254,7 +1289,7 @@ def add_db_general(ap, hcores):
     ap2.add_argument("--xlink", action="store_true", help="on upload: check all volumes for dupes, not just the target volume (volflag=xlink)")
     ap2.add_argument("--hash-mt", metavar="CORES", type=int, default=hcores, help="num cpu cores to use for file hashing; set 0 or 1 for single-core hashing")
     ap2.add_argument("--re-maxage", metavar="SEC", type=int, default=0, help="rescan filesystem for changes every \033[33mSEC\033[0m seconds; 0=off (volflag=scan)")
-    ap2.add_argument("--db-act", metavar="SEC", type=float, default=10, help="defer any scheduled volume reindexing until \033[33mSEC\033[0m seconds after last db write (uploads, renames, ...)")
+    ap2.add_argument("--db-act", metavar="SEC", type=float, default=10.0, help="defer any scheduled volume reindexing until \033[33mSEC\033[0m seconds after last db write (uploads, renames, ...)")
     ap2.add_argument("--srch-time", metavar="SEC", type=int, default=45, help="search deadline -- terminate searches running for more than \033[33mSEC\033[0m seconds")
     ap2.add_argument("--srch-hits", metavar="N", type=int, default=7999, help="max search results to allow clients to fetch; 125 results will be shown initially")
     ap2.add_argument("--dotsrch", action="store_true", help="show dotfiles in search results (volflags: dotsrch | nodotsrch)")
@@ -1307,6 +1342,7 @@ def add_og(ap):
 def add_ui(ap, retry):
     ap2 = ap.add_argument_group('ui options')
     ap2.add_argument("--grid", action="store_true", help="show grid/thumbnails by default (volflag=grid)")
+    ap2.add_argument("--gsel", action="store_true", help="select files in grid by ctrl-click (volflag=gsel)")
     ap2.add_argument("--lang", metavar="LANG", type=u, default="eng", help="language; one of the following: \033[32meng nor\033[0m")
     ap2.add_argument("--theme", metavar="NUM", type=int, default=0, help="default theme to use (0..7)")
     ap2.add_argument("--themes", metavar="NUM", type=int, default=8, help="number of themes installed")
@@ -1315,8 +1351,8 @@ def add_ui(ap, retry):
     ap2.add_argument("--unlist", metavar="REGEX", type=u, default="", help="don't show files matching \033[33mREGEX\033[0m in file list. Purely cosmetic! Does not affect API calls, just the browser. Example: [\033[32m\\.(js|css)$\033[0m] (volflag=unlist)")
     ap2.add_argument("--favico", metavar="TXT", type=u, default="c 000 none" if retry else "🎉 000 none", help="\033[33mfavicon-text\033[0m [ \033[33mforeground\033[0m [ \033[33mbackground\033[0m ] ], set blank to disable")
     ap2.add_argument("--mpmc", metavar="URL", type=u, default="", help="change the mediaplayer-toggle mouse cursor; URL to a folder with {2..5}.png inside (or disable with [\033[32m.\033[0m])")
-    ap2.add_argument("--js-browser", metavar="L", type=u, help="URL to additional JS to include")
-    ap2.add_argument("--css-browser", metavar="L", type=u, help="URL to additional CSS to include")
+    ap2.add_argument("--js-browser", metavar="L", type=u, default="", help="URL to additional JS to include")
+    ap2.add_argument("--css-browser", metavar="L", type=u, default="", help="URL to additional CSS to include")
     ap2.add_argument("--html-head", metavar="TXT", type=u, default="", help="text to append to the <head> of all HTML pages; can be @PATH to send the contents of a file at PATH, and/or begin with %% to render as jinja2 template (volflag=html_head)")
     ap2.add_argument("--ih", action="store_true", help="if a folder contains index.html, show that instead of the directory listing by default (can be changed in the client settings UI, or add ?v to URL for override)")
     ap2.add_argument("--textfiles", metavar="CSV", type=u, default="txt,nfo,diz,cue,readme", help="file extensions to present as plaintext")
@@ -1344,8 +1380,8 @@ def add_debug(ap):
     ap2.add_argument("--no-htp", action="store_true", help="disable httpserver threadpool, create threads as-needed instead")
     ap2.add_argument("--srch-dbg", action="store_true", help="explain search processing, and do some extra expensive sanity checks")
     ap2.add_argument("--rclone-mdns", action="store_true", help="use mdns-domain instead of server-ip on /?hc")
-    ap2.add_argument("--stackmon", metavar="P,S", type=u, help="write stacktrace to \033[33mP\033[0math every \033[33mS\033[0m second, for example --stackmon=\033[32m./st/%%Y-%%m/%%d/%%H%%M.xz,60")
-    ap2.add_argument("--log-thrs", metavar="SEC", type=float, help="list active threads every \033[33mSEC\033[0m")
+    ap2.add_argument("--stackmon", metavar="P,S", type=u, default="", help="write stacktrace to \033[33mP\033[0math every \033[33mS\033[0m second, for example --stackmon=\033[32m./st/%%Y-%%m/%%d/%%H%%M.xz,60")
+    ap2.add_argument("--log-thrs", metavar="SEC", type=float, default=0.0, help="list active threads every \033[33mSEC\033[0m")
     ap2.add_argument("--log-fk", metavar="REGEX", type=u, default="", help="log filekey params for files where path matches \033[33mREGEX\033[0m; [\033[32m.\033[0m] (a single dot) = all files")
     ap2.add_argument("--bak-flips", action="store_true", help="[up2k] if a client uploads a bitflipped/corrupted chunk, store a copy according to \033[33m--bf-nc\033[0m and \033[33m--bf-dir\033[0m")
     ap2.add_argument("--bf-nc", metavar="NUM", type=int, default=200, help="bak-flips: stop if there's more than \033[33mNUM\033[0m files at \033[33m--kf-dir\033[0m already; default: 6.3 GiB max (200*32M)")
@@ -1593,6 +1629,9 @@ def main(argv: Optional[list[str]] = None, rsrc: Optional[str] = None) -> None:
         if getattr(al, k1):
             setattr(al, k2, False)

+    if not HAVE_IPV6 and al.i == "::":
+        al.i = "0.0.0.0"
+
     al.i = al.i.split(",")
     try:
         if "-" in al.p:

@@ -1,8 +1,8 @@
|
|||||||
# coding: utf-8
|
# coding: utf-8
|
||||||
|
|
||||||
VERSION = (1, 13, 3)
|
VERSION = (1, 13, 6)
|
||||||
CODENAME = "race the beam"
|
CODENAME = "race the beam"
|
||||||
BUILD_DT = (2024, 6, 1)
|
BUILD_DT = (2024, 7, 29)
|
||||||
|
|
||||||
S_VERSION = ".".join(map(str, VERSION))
|
S_VERSION = ".".join(map(str, VERSION))
|
||||||
S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)
|
S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)
|
||||||
|
|||||||
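For reference, the two bumped tuples flow into the display strings through the format expressions kept as context in the hunk above; a quick sanity check of what v1.13.6 renders as:

```py
# the version strings derived from the tuples bumped above
VERSION = (1, 13, 6)
BUILD_DT = (2024, 7, 29)

S_VERSION = ".".join(map(str, VERSION))                   # "1.13.6"
S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)  # "2024-07-29"
print(S_VERSION, S_BUILD_DT)
```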
@@ -17,6 +17,8 @@ from .bos import bos
 from .cfg import flagdescs, permdescs, vf_bmap, vf_cmap, vf_vmap
 from .pwhash import PWHash
 from .util import (
+DEF_MTE,
+DEF_MTH,
 EXTS,
 IMPLICATIONS,
 MIMES,

@@ -475,6 +477,13 @@ class VFS(object):
 )
 # skip uhtml because it's rarely needed

+def get_perms(self, vpath: str, uname: str) -> str:
+zbl = self.can_access(vpath, uname)
+ret = "".join(ch for ch, ok in zip("rwmdgGa.", zbl) if ok)
+if "rwmd" in ret and "a." in ret:
+ret += "A"
+return ret
+
 def get(
 self,
 vpath: str,
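The new `VFS.get_perms` builds a compact permission string by zipping the per-permission booleans from `can_access` against the alphabet `rwmdgGa.`, then appends `A` as an "all access" shorthand. A standalone sketch of the same trick (the boolean list stands in for what `can_access()` returns in copyparty):

```py
# standalone sketch of the zip/join trick in the new VFS.get_perms
def perm_str(zbl):
    ret = "".join(ch for ch, ok in zip("rwmdgGa.", zbl) if ok)
    if "rwmd" in ret and "a." in ret:
        ret += "A"  # r/w/move/delete plus admin and dotfiles => "all"
    return ret

print(perm_str([True] * 8))                      # rwmdgGa.A
print(perm_str([True, True, False, False] * 2))  # rwgG
```

Because `ret` is always built in alphabet order, the two substring checks (`"rwmd"` and `"a."`) can only match when those permissions are all present and contiguous.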
@@ -2267,10 +2276,11 @@ class AuthSrv(object):
 "",
 ]

-csv = set("i p".split())
+csv = set("i p th_covers zm_on zm_off zs_on zs_off".split())
 zs = "c ihead mtm mtp on403 on404 xad xar xau xiu xban xbd xbr xbu xm"
 lst = set(zs.split())
-askip = set("a v c vc cgen theme".split())
+askip = set("a v c vc cgen exp_lg exp_md theme".split())
+fskip = set("exp_lg exp_md mv_re_r mv_re_t rm_re_r rm_re_t".split())

 # keymap from argv to vflag
 amap = vf_bmap()

@@ -2291,11 +2301,35 @@ class AuthSrv(object):
 for k, v in args.items():
 if k in askip:
 continue

+try:
+v = v.pattern
+if k in ("idp_gsep", "tftp_lsf"):
+v = v[1:-1]  # close enough
+except:
+pass
+
+skip = False
+for k2, defstr in (("mte", DEF_MTE), ("mth", DEF_MTH)):
+if k != k2:
+continue
+s1 = list(sorted(list(v)))
+s2 = list(sorted(defstr.split(",")))
+if s1 == s2:
+skip = True
+break
+v = ",".join(s1)
+
+if skip:
+continue
+
 if k in csv:
 v = ", ".join([str(za) for za in v])
 try:
 v2 = getattr(self.dargs, k)
-if v == v2:
+if k == "tcolor" and len(v2) == 3:
+v2 = "".join([x * 2 for x in v2])
+
+if v == v2 or v.replace(", ", ",") == v2:
 continue
 except:
 continue
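The added `mte`/`mth` loop keeps the generated config clean: a value that merely reorders the built-in default is not worth serializing, so both sides are canonicalized by sorting before the comparison. A minimal sketch of that idea (`DEF_MTE` below is a shortened stand-in for the real default in copyparty.util):

```py
# sketch of the order-insensitive default comparison added above
DEF_MTE = "circle,album,.tn,artist,title"  # stand-in; the real CSV is longer

def is_default(v):
    s1 = list(sorted(list(v)))              # current value (set or list of tags)
    s2 = list(sorted(DEF_MTE.split(",")))   # canonical default, parsed from CSV
    return s1 == s2

print(is_default({"album", "artist", "title", ".tn", "circle"}))  # True
print(is_default({"album", "artist"}))                            # False
```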
@@ -2354,6 +2388,7 @@ class AuthSrv(object):
 pstr += pchar
 if "g" in pstr and "G" in pstr:
 pstr = pstr.replace("g", "")
+pstr = pstr.replace("rwmd.a", "A")
 try:
 vperms[pstr].append(uname)
 except:

@@ -2363,24 +2398,48 @@ class AuthSrv(object):
 trues = []
 vals = []
 for k, v in sorted(vol.flags.items()):
+if k in fskip:
+continue
+
+try:
+v = v.pattern
+except:
+pass
+
 try:
 ak = vmap[k]
-if getattr(self.args, ak) is v:
+v2 = getattr(self.args, ak)
+
+try:
+v2 = v2.pattern
+except:
+pass
+
+if v2 is v:
 continue
 except:
 pass
+
+skip = False
+for k2, defstr in (("mte", DEF_MTE), ("mth", DEF_MTH)):
+if k != k2:
+continue
+s1 = list(sorted(list(v)))
+s2 = list(sorted(defstr.split(",")))
+if s1 == s2:
+skip = True
+break
+v = ",".join(s1)
+
+if skip:
+continue
+
 if k in lst:
 for ve in v:
 vals.append("{}: {}".format(k, ve))
 elif v is True:
 trues.append(k)
 elif v is not False:
-try:
-v = v.pattern
-except:
-pass
-
 vals.append("{}: {}".format(k, v))
 pops = []
 for k1, k2 in IMPLICATIONS:

@@ -28,7 +28,7 @@ class ExceptionalQueue(Queue, object):
 if rv[1] == "pebkac":
 raise Pebkac(*rv[2:])
 else:
-raise Exception(rv[2])
+raise rv[2]

 return rv

@@ -65,8 +65,8 @@ def try_exec(want_retval: Union[bool, int], func: Any, *args: list[Any]) -> Any:

 return ["exception", "pebkac", ex.code, str(ex)]

-except:
+except Exception as ex:
 if not want_retval:
 raise

-return ["exception", "stack", traceback.format_exc()]
+return ["exception", "stack", ex]
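The `ExceptionalQueue`/`try_exec` change swaps the formatted traceback string for the live exception object, so the consumer side can `raise rv[2]` and surface the original exception type instead of a generic `Exception` wrapper. A self-contained sketch of that pattern:

```py
# sketch of the pattern this change adopts: push the live exception
# object through a queue, then re-raise it with its original type
import queue
import threading

q = queue.Queue()

def worker():
    try:
        1 // 0
    except Exception as ex:
        q.put(["exception", "stack", ex])  # the object, not format_exc() text

threading.Thread(target=worker).start()
rv = q.get()
if rv[0] == "exception":
    raise rv[2]  # ZeroDivisionError, not a generic Exception wrapper
```

The re-raised exception keeps its type and message; note that its traceback starts at the new `raise` site unless `with_traceback` is used.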
@@ -12,7 +12,7 @@ from .util import Netdev, runcmd, wrename, wunlink
 HAVE_CFSSL = True

 if True:  # pylint: disable=using-constant-test
-from .util import RootLogger
+from .util import NamedLogger, RootLogger

 if ANYWIN:

@@ -83,6 +83,8 @@ def _read_crt(args, fn):

 def _gen_ca(log: "RootLogger", args):
+nlog: "NamedLogger" = lambda msg, c=0: log("cert-gen-ca", msg, c)
+
 expiry = _read_crt(args, "ca.pem")[0]
 if time.time() + args.crt_cdays * 60 * 60 * 24 * 0.1 < expiry:
 return

@@ -113,16 +115,18 @@ def _gen_ca(log: "RootLogger", args):

 bname = os.path.join(args.crt_dir, "ca")
 try:
-wunlink(log, bname + ".key", VF)
+wunlink(nlog, bname + ".key", VF)
 except:
 pass
-wrename(log, bname + "-key.pem", bname + ".key", VF)
+wrename(nlog, bname + "-key.pem", bname + ".key", VF)
-wunlink(log, bname + ".csr", VF)
+wunlink(nlog, bname + ".csr", VF)

 log("cert", "new ca OK", 2)


 def _gen_srv(log: "RootLogger", args, netdevs: dict[str, Netdev]):
+nlog: "NamedLogger" = lambda msg, c=0: log("cert-gen-srv", msg, c)
+
 names = args.crt_ns.split(",") if args.crt_ns else []
 if not args.crt_exact:
 for n in names[:]:

@@ -196,11 +200,11 @@ def _gen_srv(log: "RootLogger", args, netdevs: dict[str, Netdev]):

 bname = os.path.join(args.crt_dir, "srv")
 try:
-wunlink(log, bname + ".key", VF)
+wunlink(nlog, bname + ".key", VF)
 except:
 pass
-wrename(log, bname + "-key.pem", bname + ".key", VF)
+wrename(nlog, bname + "-key.pem", bname + ".key", VF)
-wunlink(log, bname + ".csr", VF)
+wunlink(nlog, bname + ".csr", VF)

 with open(os.path.join(args.crt_dir, "ca.pem"), "rb") as f:
 ca = f.read()

@@ -35,6 +35,7 @@ def vf_bmap() -> dict[str, str]:
 "e2vp",
 "exp",
 "grid",
+"gsel",
 "hardlink",
 "magic",
 "no_sb_md",

@@ -144,6 +145,7 @@ flagcats = {
 "maxb=1g,300": "max 1 GiB over 5min (suffixes: b, k, m, g, t)",
 "vmaxb=1g": "total volume size max 1 GiB (suffixes: b, k, m, g, t)",
 "vmaxn=4k": "max 4096 files in volume (suffixes: b, k, m, g, t)",
+"medialinks": "return medialinks for non-up2k uploads (not hotlinks)",
 "rand": "force randomized filenames, 9 chars long by default",
 "nrand=N": "randomized filenames are N chars long",
 "u2ts=fc": "[f]orce [c]lient-last-modified or [u]pload-time",

@@ -213,6 +215,7 @@ flagcats = {
 },
 "client and ux": {
 "grid": "show grid/thumbnails by default",
+"gsel": "select files in grid by ctrl-click",
 "sort": "default sort order",
 "unlist": "dont list files matching REGEX",
 "html_head=TXT": "includes TXT in the <head>, or @PATH for file at PATH",

@@ -19,6 +19,7 @@ from .__init__ import PY2, TYPE_CHECKING
 from .authsrv import VFS
 from .bos import bos
 from .util import (
+VF_CAREFUL,
 Daemon,
 ODict,
 Pebkac,

@@ -30,6 +31,7 @@ from .util import (
 runhook,
 sanitize_fn,
 vjoin,
+wunlink,
 )

 if TYPE_CHECKING:

@@ -37,7 +39,7 @@ if TYPE_CHECKING:

 if True:  # pylint: disable=using-constant-test
 import typing
-from typing import Any, Optional
+from typing import Any, Optional, Union


 class FSE(FilesystemError):

@@ -139,6 +141,9 @@ class FtpFs(AbstractedFS):
 self.listdirinfo = self.listdir
 self.chdir(".")

+def log(self, msg: str, c: Union[int, str] = 0) -> None:
+self.hub.log("ftpd", msg, c)
+
 def v2a(
 self,
 vpath: str,

@@ -207,17 +212,37 @@ class FtpFs(AbstractedFS):
 w = "w" in mode or "a" in mode or "+" in mode

 ap = self.rv2a(filename, r, w)[0]
+self.validpath(ap)
 if w:
 try:
 st = bos.stat(ap)
 td = time.time() - st.st_mtime
+need_unlink = True
 except:
+need_unlink = False
 td = 0

-if td < -1 or td > self.args.ftp_wt:
-raise FSE("Cannot open existing file for writing")
+if w and need_unlink:
+if td >= -1 and td <= self.args.ftp_wt:
+# within permitted timeframe; unlink and accept
+do_it = True
+elif self.args.no_del or self.args.ftp_no_ow:
+# file too old, or overwrite not allowed; reject
+do_it = False
+else:
+# allow overwrite if user has delete permission
+# (avoids win2000 freaking out and deleting the server copy without uploading its own)
+try:
+self.rv2a(filename, False, True, False, True)
+do_it = True
+except:
+do_it = False
+
+if not do_it:
+raise FSE("File already exists")
+
+wunlink(self.log, ap, VF_CAREFUL)

-self.validpath(ap)
 return open(fsenc(ap), mode, self.args.iobuf)

 def chdir(self, path: str) -> None:
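The rewritten `FtpFs.open` decides overwrites in three tiers: a file re-sent within the `--ftp-wt` window is always replaced (treated as a resumed upload), `--no-del` or `--ftp-no-ow` rejects outright, and otherwise the overwrite is allowed only if the user also holds delete permission. A condensed decision-table sketch (the args and permission check are stubbed booleans instead of the real calls):

```py
# condensed sketch of the new overwrite decision in FtpFs.open()
def may_overwrite(td, ftp_wt, no_del, ftp_no_ow, can_delete):
    if -1 <= td <= ftp_wt:
        return True      # same file re-sent moments ago: replace it
    if no_del or ftp_no_ow:
        return False     # server config forbids replacing old files
    return can_delete    # fall back to the user's delete permission

print(may_overwrite(td=3, ftp_wt=7, no_del=False, ftp_no_ow=False, can_delete=False))   # True
print(may_overwrite(td=999, ftp_wt=7, no_del=False, ftp_no_ow=False, can_delete=True))  # True
print(may_overwrite(td=999, ftp_wt=7, no_del=True, ftp_no_ow=False, can_delete=True))   # False
```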
@@ -282,9 +307,20 @@ class FtpFs(AbstractedFS):
 # display write-only folders as empty
 return []

-# return list of volumes
-r = {x.split("/")[0]: 1 for x in self.hub.asrv.vfs.all_vols.keys()}
-return list(sorted(list(r.keys())))
+# return list of accessible volumes
+ret = []
+for vn in self.hub.asrv.vfs.all_vols.values():
+if "/" in vn.vpath or not vn.vpath:
+continue  # only include toplevel-mounted vols
+
+try:
+self.hub.asrv.vfs.get(vn.vpath, self.uname, True, False)
+ret.append(vn.vpath)
+except:
+pass
+
+ret.sort()
+return ret

 def rmdir(self, path: str) -> None:
 ap = self.rv2a(path, d=True)[0]

@@ -434,9 +470,10 @@ class FtpHandler(FTPHandler):
 None,
 xbu,
 ap,
-vfs.canonical(rem),
+vp,
 "",
 self.uname,
+self.hub.asrv.vfs.get_perms(vp, self.uname),
 0,
 0,
 self.cli_ip,

@@ -648,6 +648,7 @@ class HttpCli(object):

 em = str(ex)
 msg = em if pex is ex else min_ex()

 if pex.code != 404 or self.do_log:
 self.log(
 "%s\033[0m, %s" % (msg, self.vpath),

@@ -695,6 +696,7 @@ class HttpCli(object):
 self.vpath,
 self.host,
 self.uname,
+"",
 time.time(),
 0,
 self.ip,

@@ -744,7 +746,7 @@ class HttpCli(object):
 or ("; Trident/" in self.ua and not k304)
 )

-def _build_html_head(self, maybe_html: Any, kv: dict[str, Any]) -> bool:
+def _build_html_head(self, maybe_html: Any, kv: dict[str, Any]) -> None:
 html = str(maybe_html)
 is_jinja = html[:2] in "%@%"
 if is_jinja:

@@ -1631,6 +1633,7 @@ class HttpCli(object):
 self.vpath,
 self.host,
 self.uname,
+self.asrv.vfs.get_perms(self.vpath, self.uname),
 time.time(),
 len(buf),
 self.ip,

@@ -1674,6 +1677,8 @@ class HttpCli(object):
 remains = int(self.headers.get("content-length", -1))
 if remains == -1:
 self.keepalive = False
+self.in_hdr_recv = True
+self.s.settimeout(max(self.args.s_tbody // 20, 1))
 return read_socket_unbounded(self.sr, bufsz), remains
 else:
 return read_socket(self.sr, bufsz, remains), remains

@@ -1778,6 +1783,7 @@ class HttpCli(object):
 self.vpath,
 self.host,
 self.uname,
+self.asrv.vfs.get_perms(self.vpath, self.uname),
 at,
 remains,
 self.ip,

@@ -1868,6 +1874,7 @@ class HttpCli(object):
 self.vpath,
 self.host,
 self.uname,
+self.asrv.vfs.get_perms(self.vpath, self.uname),
 mt,
 post_sz,
 self.ip,

@@ -1904,6 +1911,9 @@ class HttpCli(object):
 0 if ANYWIN else bos.stat(path).st_ino,
 )[: vfs.flags["fk"]]

+if "media" in self.uparam or "medialinks" in vfs.flags:
+vsuf += "&v" if vsuf else "?v"
+
 vpath = "/".join([x for x in [vfs.vpath, rem, fn] if x])
 vpath = quotep(vpath)

@@ -2025,7 +2035,7 @@ class HttpCli(object):

 v = self.uparam[k]

-if self._use_dirkey():
+if self._use_dirkey(self.vn, ""):
 vn = self.vn
 rem = self.rem
 else:

@@ -2189,33 +2199,39 @@ class HttpCli(object):

 def handle_post_binary(self) -> bool:
 try:
-remains = int(self.headers["content-length"])
+postsize = remains = int(self.headers["content-length"])
 except:
 raise Pebkac(400, "you must supply a content-length for binary POST")

 try:
-chash = self.headers["x-up2k-hash"]
+chashes = self.headers["x-up2k-hash"].split(",")
 wark = self.headers["x-up2k-wark"]
 except KeyError:
 raise Pebkac(400, "need hash and wark headers for binary POST")

+chashes = [x.strip() for x in chashes]
+
 vfs, _ = self.asrv.vfs.get(self.vpath, self.uname, False, True)
 ptop = (vfs.dbv or vfs).realpath

-x = self.conn.hsrv.broker.ask("up2k.handle_chunk", ptop, wark, chash)
+x = self.conn.hsrv.broker.ask("up2k.handle_chunks", ptop, wark, chashes)
 response = x.get()
-chunksize, cstart, path, lastmod, sprs = response
+chunksize, cstarts, path, lastmod, sprs = response
+maxsize = chunksize * len(chashes)
+cstart0 = cstarts[0]

 try:
 if self.args.nw:
 path = os.devnull

-if remains > chunksize:
-raise Pebkac(400, "your chunk is too big to fit")
+if remains > maxsize:
+t = "your client is sending %d bytes which is too much (server expected %d bytes at most)"
+raise Pebkac(400, t % (remains, maxsize))

-self.log("writing {} #{} @{} len {}".format(path, chash, cstart, remains))
-reader = read_socket(self.sr, self.args.s_rd_sz, remains)
+t = "writing %s %s+%d #%d+%d %s"
+chunkno = cstart0[0] // chunksize
+zs = " ".join([chashes[0][:15]] + [x[:9] for x in chashes[1:]])
+self.log(t % (path, cstart0, remains, chunkno, len(chashes), zs))

 f = None
 fpool = not self.args.no_fpool and sprs
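This is the protocol half of chunk-stitching: one binary POST may now carry several consecutive chunks, with `x-up2k-hash` becoming a comma-separated list and the server's size budget growing to `chunksize * len(chashes)`. A sketch of what such a request looks like from the client side (the header names are from the diff; the hash values and chunksize are made up — the real size comes from `up2k_chunksize(filesize)`):

```py
# sketch of the batched-chunk POST the new code accepts
chashes = ["hash0aaaaaaaaaa", "hash1bbbbbbbbbb", "hash2cccccccccc"]
headers = {
    "x-up2k-hash": ",".join(chashes),    # several sibling chunks per POST
    "x-up2k-wark": "whole-file-identifier",
}
chunksize = 16 * 1024 * 1024             # stand-in; derived from file size in copyparty
maxsize = chunksize * len(chashes)       # server rejects bodies larger than this
print(headers["x-up2k-hash"])
print("accepting up to %d bytes" % (maxsize,))
```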
@@ -2229,37 +2245,43 @@ class HttpCli(object):
 f = f or open(fsenc(path), "rb+", self.args.iobuf)

 try:
-f.seek(cstart[0])
-post_sz, _, sha_b64 = hashcopy(reader, f, self.args.s_wr_slp)
-
-if sha_b64 != chash:
-try:
-self.bakflip(f, cstart[0], post_sz, sha_b64, vfs.flags)
-except:
-self.log("bakflip failed: " + min_ex())
-
-t = "your chunk got corrupted somehow (received {} bytes); expected vs received hash:\n{}\n{}"
-raise Pebkac(400, t.format(post_sz, chash, sha_b64))
-
-if len(cstart) > 1 and path != os.devnull:
-self.log(
-"clone {} to {}".format(
-cstart[0], " & ".join(unicode(x) for x in cstart[1:])
-)
-)
-ofs = 0
-while ofs < chunksize:
-bufsz = max(4 * 1024 * 1024, self.args.iobuf)
-bufsz = min(chunksize - ofs, bufsz)
-f.seek(cstart[0] + ofs)
-buf = f.read(bufsz)
-for wofs in cstart[1:]:
-f.seek(wofs + ofs)
-f.write(buf)
-
-ofs += len(buf)
-
-self.log("clone {} done".format(cstart[0]))
+for chash, cstart in zip(chashes, cstarts):
+f.seek(cstart[0])
+reader = read_socket(
+self.sr, self.args.s_rd_sz, min(remains, chunksize)
+)
+post_sz, _, sha_b64 = hashcopy(reader, f, self.args.s_wr_slp)
+
+if sha_b64 != chash:
+try:
+self.bakflip(f, cstart[0], post_sz, sha_b64, vfs.flags)
+except:
+self.log("bakflip failed: " + min_ex())
+
+t = "your chunk got corrupted somehow (received {} bytes); expected vs received hash:\n{}\n{}"
+raise Pebkac(400, t.format(post_sz, chash, sha_b64))
+
+remains -= chunksize
+
+if len(cstart) > 1 and path != os.devnull:
+self.log(
+"clone {} to {}".format(
+cstart[0], " & ".join(unicode(x) for x in cstart[1:])
+)
+)
+ofs = 0
+while ofs < chunksize:
+bufsz = max(4 * 1024 * 1024, self.args.iobuf)
+bufsz = min(chunksize - ofs, bufsz)
+f.seek(cstart[0] + ofs)
+buf = f.read(bufsz)
+for wofs in cstart[1:]:
+f.seek(wofs + ofs)
+f.write(buf)
+
+ofs += len(buf)
+
+self.log("clone {} done".format(cstart[0]))

 if not fpool:
 f.close()

@@ -2271,10 +2293,10 @@ class HttpCli(object):
 f.close()
 raise
 finally:
-x = self.conn.hsrv.broker.ask("up2k.release_chunk", ptop, wark, chash)
+x = self.conn.hsrv.broker.ask("up2k.release_chunks", ptop, wark, chashes)
 x.get()  # block client until released

-x = self.conn.hsrv.broker.ask("up2k.confirm_chunk", ptop, wark, chash)
+x = self.conn.hsrv.broker.ask("up2k.confirm_chunks", ptop, wark, chashes)
 ztis = x.get()
 try:
 num_left, fin_path = ztis

@@ -2293,7 +2315,7 @@ class HttpCli(object):

 cinf = self.headers.get("x-up2k-stat", "")

-spd = self._spd(post_sz)
+spd = self._spd(postsize)
 self.log("{:70} thank {}".format(spd, cinf))
 self.reply(b"thank")
 return True

@@ -2547,6 +2569,7 @@ class HttpCli(object):
 self.vpath,
 self.host,
 self.uname,
+self.asrv.vfs.get_perms(self.vpath, self.uname),
 at,
 0,
 self.ip,

@@ -2610,6 +2633,7 @@ class HttpCli(object):
 self.vpath,
 self.host,
 self.uname,
+self.asrv.vfs.get_perms(self.vpath, self.uname),
 at,
 sz,
 self.ip,

@@ -2685,6 +2709,9 @@ class HttpCli(object):
 0 if ANYWIN or not ap else bos.stat(ap).st_ino,
 )[: vfs.flags["fk"]]

+if "media" in self.uparam or "medialinks" in vfs.flags:
+vsuf += "&v" if vsuf else "?v"
+
 vpath = "{}/{}".format(upload_vpath, lfn).strip("/")
 rel_url = quotep(self.args.RS + vpath) + vsuf
 msg += 'sha512: {} // {} // {} bytes // <a href="/{}">{}</a> {}\n'.format(

@@ -2851,6 +2878,7 @@ class HttpCli(object):
 self.vpath,
 self.host,
 self.uname,
+self.asrv.vfs.get_perms(self.vpath, self.uname),
 time.time(),
 0,
 self.ip,

@@ -2889,6 +2917,7 @@ class HttpCli(object):
 self.vpath,
 self.host,
 self.uname,
+self.asrv.vfs.get_perms(self.vpath, self.uname),
 new_lastmod,
 sz,
 self.ip,

@@ -2943,22 +2972,24 @@ class HttpCli(object):

 return file_lastmod, True

-def _use_dirkey(self, ap: str = "") -> bool:
+def _use_dirkey(self, vn: VFS, ap: str) -> bool:
 if self.can_read or not self.can_get:
 return False

-if self.vn.flags.get("dky"):
+if vn.flags.get("dky"):
 return True

 req = self.uparam.get("k") or ""
 if not req:
 return False

-dk_len = self.vn.flags.get("dk")
+dk_len = vn.flags.get("dk")
 if not dk_len:
 return False

-ap = ap or self.vn.canonical(self.rem)
+if not ap:
+ap = vn.canonical(self.rem)

 zs = self.gen_fk(2, self.args.dk_salt, ap, 0, 0)[:dk_len]
 if req == zs:
 return True

@@ -2967,6 +2998,71 @@ class HttpCli(object):
 self.log(t % (zs, req, self.req, ap), 6)
 return False

+def _use_filekey(self, vn: VFS, ap: str, st: os.stat_result) -> bool:
+if self.can_read or not self.can_get:
+return False
+
+req = self.uparam.get("k") or ""
+if not req:
+return False
+
+fk_len = vn.flags.get("fk")
+if not fk_len:
+return False
+
+if not ap:
+ap = self.vn.canonical(self.rem)
+
+alg = 2 if "fka" in vn.flags else 1
+
+zs = self.gen_fk(
+alg, self.args.fk_salt, ap, st.st_size, 0 if ANYWIN else st.st_ino
+)[:fk_len]
+
+if req == zs:
+return True
+
+t = "wrong filekey, want %s, got %s\n vp: %s\n ap: %s"
+self.log(t % (zs, req, self.req, ap), 6)
+return False
+
+def _add_logues(
+self, vn: VFS, abspath: str, lnames: Optional[dict[str, str]]
+) -> tuple[list[str], str]:
+logues = ["", ""]
+if not self.args.no_logues:
+for n, fn in enumerate([".prologue.html", ".epilogue.html"]):
+if lnames is not None and fn not in lnames:
+continue
+fn = os.path.join(abspath, fn)
+if bos.path.exists(fn):
+with open(fsenc(fn), "rb") as f:
+logues[n] = f.read().decode("utf-8")
+if "exp" in vn.flags:
+logues[n] = self._expand(
+logues[n], vn.flags.get("exp_lg") or []
+)
+
+readme = ""
+if not self.args.no_readme and not logues[1]:
+if lnames is None:
+fns = ["README.md", "readme.md"]
+elif "readme.md" in lnames:
+fns = [lnames["readme.md"]]
+else:
+fns = []
+
+for fn in fns:
+fn = os.path.join(abspath, fn)
+if bos.path.isfile(fn):
+with open(fsenc(fn), "rb") as f:
+readme = f.read().decode("utf-8")
+break
+if readme and "exp" in vn.flags:
+readme = self._expand(readme, vn.flags.get("exp_md") or [])
+
+return logues, readme
+
 def _expand(self, txt: str, phs: list[str]) -> str:
 for ph in phs:
 if ph.startswith("hdr."):
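The new `_use_filekey` factors the filekey check out of `tx_file`: derive the expected key from the salt, absolute path, size and inode, truncate to the volume's `fk` length, and compare against the `?k=` query parameter. A sketch of that comparison flow — `gen_fk` is stood in by a plain sha512 here, copyparty's real derivation differs:

```py
# sketch of the filekey comparison flow factored into _use_filekey()
import hashlib

def gen_fk(salt, ap, sz, inode):
    zs = "%s %s %d %d" % (salt, ap, sz, inode)
    return hashlib.sha512(zs.encode("utf-8")).hexdigest()

def use_filekey(req, salt, ap, sz, inode, fk_len):
    if not req or not fk_len:
        return False  # no ?k= param, or volume has no fk flag
    return req == gen_fk(salt, ap, sz, inode)[:fk_len]

good = gen_fk("s", "/f.jpg", 123, 9)[:16]
print(use_filekey(good, "s", "/f.jpg", 123, 9, 16))    # True
print(use_filekey("nope", "s", "/f.jpg", 123, 9, 16))  # False
```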
@@ -2996,6 +3092,7 @@ class HttpCli(object):
 logtail = ""

 if ptop is not None:
+ap_data = "<%s>" % (req_path,)
 try:
 dp, fn = os.path.split(req_path)
 tnam = fn + ".PARTIAL"

@@ -3342,7 +3439,16 @@
 if lower < upper and not broken:
 with open(req_path, "rb") as f:
-remains = sendfile_py(self.log, lower, upper, f, self.s, wr_sz, wr_slp)
+remains = sendfile_py(
+self.log,
+lower,
+upper,
+f,
+self.s,
+wr_sz,
+wr_slp,
+not self.args.no_poll,
+)

 spd = self._spd((upper - lower) - remains)
 if self.do_log:

@@ -3854,7 +3960,7 @@ class HttpCli(object):
 dk_sz = False
 if dk:
 vn, rem = vfs.get(top, self.uname, False, False)
-if vn.flags.get("dks") and self._use_dirkey(vn.canonical(rem)):
+if vn.flags.get("dks") and self._use_dirkey(vn, vn.canonical(rem)):
 dk_sz = vn.flags.get("dk")

 dots = False

@@ -4163,6 +4269,9 @@
 add_og = True
 og_fn = ""

+if "v" in self.uparam:
+add_og = og_ua = True
+
 if "b" in self.uparam:
 self.out_headers["X-Robots-Tag"] = "noindex, nofollow"

@@ -4175,9 +4284,20 @@
 if idx and hasattr(idx, "p_end"):
 icur = idx.get_cur(dbv)

+if "k" in self.uparam or "dky" in vn.flags:
+if is_dir:
+use_dirkey = self._use_dirkey(vn, abspath)
+use_filekey = False
+else:
+use_filekey = self._use_filekey(vn, abspath, st)
+use_dirkey = False
+else:
+use_dirkey = use_filekey = False
+
 th_fmt = self.uparam.get("th")
 if self.can_read or (
-self.can_get and (vn.flags.get("dk") or "fk" not in vn.flags)
+self.can_get
+and (use_filekey or use_dirkey or (not is_dir and "fk" not in vn.flags))
 ):
 if th_fmt is not None:
 nothumb = "dthumb" in dbv.flags

@@ -4194,18 +4314,21 @@
 if cfn:
 fn = cfn[0]
 fp = os.path.join(abspath, fn)
-if bos.path.exists(fp):
+st = bos.stat(fp)
 vrem = "{}/{}".format(vrem, fn).strip("/")
 is_dir = False
 except:
 pass
 else:
 for fn in self.args.th_covers:
 fp = os.path.join(abspath, fn)
-if bos.path.exists(fp):
+try:
+st = bos.stat(fp)
 vrem = "{}/{}".format(vrem, fn).strip("/")
 is_dir = False
 break
+except:
+pass

 if is_dir:
 return self.tx_svg("folder")

@@ -4257,21 +4380,10 @@

 if not is_dir and (self.can_read or self.can_get):
 if not self.can_read and not fk_pass and "fk" in vn.flags:
-alg = 2 if "fka" in vn.flags else 1
-correct = self.gen_fk(
-alg,
-self.args.fk_salt,
-abspath,
-st.st_size,
-0 if ANYWIN else st.st_ino,
-)[: vn.flags["fk"]]
-got = self.uparam.get("k")
-if got != correct:
-t = "wrong filekey, want %s, got %s\n vp: %s\n ap: %s"
-self.log(t % (correct, got, self.req, abspath), 6)
+if not use_filekey:
 return self.tx_404()

-if add_og:
+if add_og and not abspath.lower().endswith(".md"):
 if og_ua or self.host not in self.headers.get("referer", ""):
 self.vpath, og_fn = vsplit(self.vpath)
 vpath = self.vpath

@@ -4286,7 +4398,7 @@
 (abspath.endswith(".md") or self.can_delete)
 and "nohtml" not in vn.flags
 and (
-"v" in self.uparam
+("v" in self.uparam and abspath.endswith(".md"))
 or "edit" in self.uparam
 or "edit2" in self.uparam
 )

@@ -4299,7 +4411,7 @@
 )

 elif is_dir and not self.can_read:
-if self._use_dirkey(abspath):
+if use_dirkey:
 is_dk = True
 elif not self.can_write:
 return self.tx_404(True)

@@ -4356,29 +4468,6 @@
 tpl = "browser2"
 is_js = False

-logues = ["", ""]
-if not self.args.no_logues:
-for n, fn in enumerate([".prologue.html", ".epilogue.html"]):
-fn = os.path.join(abspath, fn)
-if bos.path.exists(fn):
-with open(fsenc(fn), "rb") as f:
-logues[n] = f.read().decode("utf-8")
-if "exp" in vn.flags:
-logues[n] = self._expand(
-logues[n], vn.flags.get("exp_lg") or []
-)
-
-readme = ""
-if not self.args.no_readme and not logues[1]:
-for fn in ["README.md", "readme.md"]:
-fn = os.path.join(abspath, fn)
-if bos.path.isfile(fn):
-with open(fsenc(fn), "rb") as f:
-readme = f.read().decode("utf-8")
-break
-if readme and "exp" in vn.flags:
-readme = self._expand(readme, vn.flags.get("exp_md") or [])

 vf = vn.flags
 unlist = vf.get("unlist", "")
 ls_ret = {

@@ -4397,8 +4486,6 @@
 "frand": bool(vn.flags.get("rand")),
 "unlist": unlist,
 "perms": perms,
-"logues": logues,
-"readme": readme,
 }
 cgv = {
 "ls0": None,

@@ -4416,8 +4503,8 @@
 "have_zip": (not self.args.no_zip),
 "have_unpost": int(self.args.unpost),
 "sb_md": "" if "no_sb_md" in vf else (vf.get("md_sbf") or "y"),
-"readme": readme,
 "dgrid": "grid" in vf,
+"dgsel": "gsel" in vf,
 "dsort": vf["sort"],
 "dcrop": vf["crop"],
 "dth3x": vf["th3x"],

@@ -4425,6 +4512,7 @@
 "themes": self.args.themes,
 "turbolvl": self.args.turbo,
 "u2j": self.args.u2j,
+"u2sz": self.args.u2sz,
 "idxh": int(self.args.ih),
 "u2sort": self.args.u2sort,
 }

@@ -4438,7 +4526,6 @@
 "have_b_u": (self.can_write and self.uparam.get("b") == "u"),
 "sb_lg": "" if "no_sb_lg" in vf else (vf.get("lg_sbf") or "y"),
 "url_suf": url_suf,
-"logues": logues,
 "title": html_escape("%s %s" % (self.args.bname, self.vpath), crlf=True),
 "srv_info": srv_infot,
 "dtheme": self.args.theme,

@@ -4458,6 +4545,10 @@
 j2a["no_prism"] = True

 if not self.can_read and not is_dk:
+logues, readme = self._add_logues(vn, abspath, None)
+ls_ret["logues"] = j2a["logues"] = logues
+ls_ret["readme"] = cgv["readme"] = readme
+
 if is_ls:
 return self.tx_ls(ls_ret)

@@ -4514,6 +4605,8 @@
 ):
 ls_names = exclude_dotfiles(ls_names)

+lnames = {x.lower(): x for x in ls_names}
+
 add_dk = vf.get("dk")
 add_fk = vf.get("fk")
 fk_alg = 2 if "fka" in vf else 1

@@ -4703,9 +4796,45 @@
 else:
 taglist = list(tagset)

+logues, readme = self._add_logues(vn, abspath, lnames)
+ls_ret["logues"] = j2a["logues"] = logues
+ls_ret["readme"] = cgv["readme"] = readme
+
 if not files and not dirs and not readme and not logues[0] and not logues[1]:
 logues[1] = "this folder is empty"

+if "descript.ion" in lnames and os.path.isfile(
+os.path.join(abspath, lnames["descript.ion"])
+):
+rem = []
+with open(os.path.join(abspath, lnames["descript.ion"]), "rb") as f:
+for bln in [x.strip() for x in f]:
+try:
+if bln.endswith(b"\x04\xc2"):
+# multiline comment; replace literal r"\n" with " // "
+bln = bln.replace(br"\\n", b" // ")[:-2]
+ln = bln.decode("utf-8", "replace")
+if ln.startswith('"'):
+fn, desc = ln.split('" ', 1)
+fn = fn[1:]
+else:
+fn, desc = ln.split(" ", 1)
+fe = next(
+(x for x in files if x["name"].lower() == fn.lower()), None
+)
+if fe:
+fe["tags"]["descript.ion"] = desc
+else:
+t = "<li><code>%s</code> %s</li>"
+rem.append(t % (html_escape(fn), html_escape(desc)))
+except:
+pass
+if "descript.ion" not in taglist:
+taglist.insert(0, "descript.ion")
+if rem and not logues[1]:
+t = "<h3>descript.ion</h3><ul>\n"
+logues[1] = t + "\n".join(rem) + "</ul>"
+
 if is_ls:
 ls_ret["dirs"] = dirs
 ls_ret["files"] = files
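descript.ion is the classic 4DOS/4NT per-folder description file that the hunk above maps onto the listing: one line per file, with the filename quoted if it contains spaces, followed by the comment. A minimal parser of that two-shape line format, matching the split logic used above:

```py
# minimal sketch of the descript.ion line format handled above:
# 'name comment'  or  '"name with spaces" comment'
def parse_dion_line(ln):
    if ln.startswith('"'):
        fn, desc = ln.split('" ', 1)
        return fn[1:], desc       # drop the opening quote
    return tuple(ln.split(" ", 1))

print(parse_dion_line('track01.flac opening theme'))       # ('track01.flac', 'opening theme')
print(parse_dion_line('"cover art.jpg" scanned booklet'))  # ('cover art.jpg', 'scanned booklet')
```

Comments for names that match a listed file become a `descript.ion` tag on that file; leftovers are rendered as an HTML list in the epilogue slot.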
@@ -4775,14 +4904,13 @@ class HttpCli(object):
 self.conn.hsrv.j2[tpl] = j2env.get_template(tname)
 thumb = ""
 is_pic = is_vid = is_au = False
-covernames = self.args.th_coversd
-for fn in ls_names:
-if fn.lower() in covernames:
-thumb = fn
+for fn in self.args.th_coversd:
+if fn in lnames:
+thumb = lnames[fn]
 break
 if og_fn:
 ext = og_fn.split(".")[-1].lower()
-if ext in self.thumbcli.thumbable:
+if self.thumbcli and ext in self.thumbcli.thumbable:
 is_pic = (
 ext in self.thumbcli.fmt_pil
 or ext in self.thumbcli.fmt_vips

@@ -12,7 +12,7 @@ import time

 import queue

-from .__init__ import ANYWIN, CORES, EXE, MACOS, TYPE_CHECKING, EnvParams
+from .__init__ import ANYWIN, CORES, EXE, MACOS, TYPE_CHECKING, EnvParams, unicode

 try:
 MNFE = ModuleNotFoundError

@@ -335,11 +335,11 @@ class HttpSrv(object):

 try:
 sck, saddr = srv_sck.accept()
-cip, cport = saddr[:2]
+cip = unicode(saddr[0])
 if cip.startswith("::ffff:"):
 cip = cip[7:]

-addr = (cip, cport)
+addr = (cip, saddr[1])
 except (OSError, socket.error) as ex:
 if self.stopping:
 break
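On dual-stack listening sockets, IPv4 peers surface as IPv4-mapped IPv6 addresses; the accept loop now coerces the address to a plain string and strips the `::ffff:` prefix so logs and ACLs see the bare IPv4 form. A sketch of that normalization:

```py
# sketch of the ::ffff: normalization done in the accept loop
def norm_addr(saddr):
    cip = str(saddr[0])
    if cip.startswith("::ffff:"):
        cip = cip[7:]  # "::ffff:203.0.113.7" -> "203.0.113.7"
    return (cip, saddr[1])

print(norm_addr(("::ffff:203.0.113.7", 54321)))  # ('203.0.113.7', 54321)
print(norm_addr(("2001:db8::1", 54321)))         # unchanged for real IPv6
```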
@@ -361,7 +361,7 @@ class MDNS(MCast):
 except:
 pass

-self.srv = {}
+self.srv.clear()

 def eat(self, buf: bytes, addr: tuple[str, int], sck: socket.socket) -> None:
 cip = addr[0]

@@ -139,6 +139,9 @@ def au_unpk(
 zil = [x for x in zil if x.filename.lower().split(".")[-1] == au]
 fi = zf.open(zil[0])

+else:
+raise Exception("unknown compression %s" % (pk,))
+
 with os.fdopen(fd, "wb") as fo:
 while True:
 buf = fi.read(32768)

@@ -240,7 +240,7 @@ class SMB(object):

 xbu = vfs.flags.get("xbu")
 if xbu and not runhook(
-self.nlog, xbu, ap, vpath, "", "", 0, 0, "1.7.6.2", 0, ""
+self.nlog, xbu, ap, vpath, "", "", "", 0, 0, "1.7.6.2", 0, ""
 ):
 yeet("blocked by xbu server config: " + vpath)

@@ -188,7 +188,7 @@ class SSDPd(MCast):
 except:
 pass

-self.srv = {}
+self.srv.clear()

 def eat(self, buf: bytes, addr: tuple[str, int]) -> None:
 cip = addr[0]

@@ -479,8 +479,10 @@ class SvcHub(object):
 zsl = al.th_covers.split(",")
 zsl = [x.strip() for x in zsl]
 zsl = [x for x in zsl if x]
-al.th_covers = set(zsl)
-al.th_coversd = set(zsl + ["." + x for x in zsl])
+al.th_covers = zsl
+al.th_coversd = zsl + ["." + x for x in zsl]
+al.th_covers_set = set(al.th_covers)
+al.th_coversd_set = set(al.th_coversd)

 for k in "c".split(" "):
 vl = getattr(al, k)

@@ -15,6 +15,7 @@ from .util import (
 E_ADDR_IN_USE,
 E_ADDR_NOT_AVAIL,
 E_UNREACH,
+HAVE_IPV6,
 IP6ALL,
 Netdev,
 min_ex,

@@ -111,8 +112,10 @@ class TcpSrv(object):

 eps = {
 "127.0.0.1": Netdev("127.0.0.1", 0, "", "local only"),
-"::1": Netdev("::1", 0, "", "local only"),
 }
+if HAVE_IPV6:
+eps["::1"] = Netdev("::1", 0, "", "local only")
+
 nonlocals = [x for x in self.args.i if x not in [k.split("/")[0] for k in eps]]
 if nonlocals:
 try:

@@ -33,7 +33,7 @@ from partftpy import (
 )
 from partftpy.TftpShared import TftpException

-from .__init__ import EXE, TYPE_CHECKING
+from .__init__ import EXE, PY2, TYPE_CHECKING
 from .authsrv import VFS
 from .bos import bos
 from .util import BytesIO, Daemon, ODict, exclude_dotfiles, min_ex, runhook, undot

@@ -95,7 +95,7 @@ class Tftpd(object):
 TftpServer,
 ]
 cbak = []
-if not self.args.tftp_no_fast and not EXE:
+if not self.args.tftp_no_fast and not EXE and not PY2:
 try:
 ptn = re.compile(r"(^\s*)log\.debug\(.*\)$")
 for C in Cs:

@@ -105,7 +105,7 @@ class Tftpd(object):
 cfn = C.__spec__.origin
 exec (compile(src2, filename=cfn, mode="exec"), C.__dict__)
 except Exception:
-t = "failed to optimize tftp code; run with --tftp-noopt if there are issues:\n"
+t = "failed to optimize tftp code; run with --tftp-no-fast if there are issues:\n"
 self.log("tftp", t + min_ex(), 3)
 for n, zd in enumerate(cbak):
 Cs[n].__dict__ = zd

@@ -150,11 +150,6 @@ class Tftpd(object):

 self._disarm(fos)

-ip = next((x for x in self.args.i if ":" not in x), None)
-if not ip:
-self.log("tftp", "IPv6 not supported for tftp; listening on 0.0.0.0", 3)
-ip = "0.0.0.0"
-
 self.port = int(self.args.tftp)
 self.srv = []
 self.ips = []

@@ -168,7 +163,7 @@ class Tftpd(object):
 if "::" in ips:
 ips.append("0.0.0.0")

-if self.args.ftp4:
+if self.args.tftp4:
 ips = [x for x in ips if ":" not in x]

 ips = list(ODict.fromkeys(ips))  # dedup

@@ -333,7 +328,7 @@ class Tftpd(object):

 xbu = vfs.flags.get("xbu")
 if xbu and not runhook(
-self.nlog, xbu, ap, vpath, "", "", 0, 0, "8.3.8.7", 0, ""
+self.nlog, xbu, ap, vpath, "", "", "", 0, 0, "8.3.8.7", 0, ""
 ):
 yeet("blocked by xbu server config: " + vpath)

@@ -59,7 +59,8 @@ class ThumbCli(object):

 want_opus = fmt in ("opus", "caf", "mp3")
 is_au = ext in self.fmt_ffa
-if is_au:
+is_vau = want_opus and ext in self.fmt_ffv
+if is_au or is_vau:
 if want_opus:
 if self.args.no_acode:
 return None

@@ -107,7 +108,7 @@ class ThumbCli(object):

 fmt = sfmt

-elif fmt[:1] == "p" and not is_au:
+elif fmt[:1] == "p" and not is_au and not is_vid:
 t = "cannot thumbnail [%s]: png only allowed for waveforms"
 self.log(t % (rem), 6)
 return None

@@ -304,23 +304,31 @@ class ThumbSrv(object):
 ap_unpk = abspath

 if not bos.path.exists(tpath):
+want_mp3 = tpath.endswith(".mp3")
+want_opus = tpath.endswith(".opus") or tpath.endswith(".caf")
+want_png = tpath.endswith(".png")
+want_au = want_mp3 or want_opus
 for lib in self.args.th_dec:
+can_au = lib == "ff" and (
+ext in self.fmt_ffa or ext in self.fmt_ffv
+)
+
 if lib == "pil" and ext in self.fmt_pil:
 funs.append(self.conv_pil)
 elif lib == "vips" and ext in self.fmt_vips:
 funs.append(self.conv_vips)
-elif lib == "ff" and ext in self.fmt_ffi or ext in self.fmt_ffv:
-funs.append(self.conv_ffmpeg)
-elif lib == "ff" and ext in self.fmt_ffa:
-if tpath.endswith(".opus") or tpath.endswith(".caf"):
+elif can_au and (want_png or want_au):
+if want_opus:
 funs.append(self.conv_opus)
-elif tpath.endswith(".mp3"):
+elif want_mp3:
 funs.append(self.conv_mp3)
-elif tpath.endswith(".png"):
+elif want_png:
 funs.append(self.conv_waves)
 png_ok = True
-else:
-funs.append(self.conv_spec)
+elif lib == "ff" and (ext in self.fmt_ffi or ext in self.fmt_ffv):
+funs.append(self.conv_ffmpeg)
+elif lib == "ff" and ext in self.fmt_ffa and not want_au:
+funs.append(self.conv_spec)

 tdir, tfn = os.path.split(tpath)
 ttpath = os.path.join(tdir, "w", tfn)

@@ -545,7 +545,7 @@ class Up2k(object):
 nrm += 1

 if nrm:
-self.log("{} files graduated in {}".format(nrm, vp))
+self.log("%d files graduated in /%s" % (nrm, vp))

 if timeout < 10:
 continue

@@ -654,7 +654,7 @@ class Up2k(object):
 return False, flags

 ret = {k: v for k, v in flags.items() if not k.startswith("e2t")}
-if ret.keys() == flags.keys():
+if len(ret) == len(flags):
 return False, flags

 return True, ret

@@ -1195,6 +1195,9 @@ class Up2k(object):
 fat32 = True
 cv = ""

+th_cvd = self.args.th_coversd
+th_cvds = self.args.th_coversd_set
+
 assert self.pp and self.mem_cur
 self.pp.msg = "a%d %s" % (self.pp.n, cdir)

@@ -1279,12 +1282,21 @@ class Up2k(object):

 files.append((sz, lmod, iname))
 liname = iname.lower()
-if sz and (
-iname in self.args.th_coversd
-or (
+if (
+sz
+and (
+liname in th_cvds
+or (
+not cv
+and liname.rsplit(".", 1)[-1] in CV_EXTS
+and not iname.startswith(".")
+)
+)
+and (
 not cv
-and liname.rsplit(".", 1)[-1] in CV_EXTS
-and not iname.startswith(".")
+or liname not in th_cvds
+or cv.lower() not in th_cvds
+or th_cvd.index(liname) < th_cvd.index(cv.lower())
 )
 ):
 cv = iname
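The rewritten condition turns cover selection into a priority scan: any name from `th_coversd` beats a loose extension match, and among configured names a lower index in the ordered `th_coversd` list outranks a higher one — which is why svchub now keeps both the ordered list and the matching set. A sketch of that ranking rule (the list contents and `CV_EXTS` here are stand-ins; copyparty builds them from `--th-covers`):

```py
# sketch of the cover-priority rule encoded by the new condition
th_cvd = ["folder.png", "folder.jpg", "cover.png", "cover.jpg"]  # ordered
th_cvds = set(th_cvd)                                            # fast lookup
CV_EXTS = {"jpg", "png", "webp"}                                 # stand-in

def better_cover(cv, iname):
    li = iname.lower()
    if li in th_cvds:
        lc = cv.lower()
        # configured names beat everything; earlier entries beat later ones
        return not cv or lc not in th_cvds or th_cvd.index(li) < th_cvd.index(lc)
    # fallback: first non-hidden image when nothing was picked yet
    return not cv and li.rsplit(".", 1)[-1] in CV_EXTS and not iname.startswith(".")

print(better_cover("cover.jpg", "Folder.jpg"))  # True: folder.jpg ranks higher
print(better_cover("folder.jpg", "cover.jpg"))  # False: already have the best
```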
@@ -2758,6 +2770,7 @@ class Up2k(object):
 job["vtop"],
 job["host"],
 job["user"],
+self.asrv.vfs.get_perms(job["vtop"], job["user"]),
 job["lmod"],
 job["size"],
 job["addr"],

@@ -3000,9 +3013,9 @@ class Up2k(object):
 times = (int(time.time()), int(lmod))
 bos.utime(dst, times, False)

-def handle_chunk(
-self, ptop: str, wark: str, chash: str
-) -> tuple[int, list[int], str, float, bool]:
+def handle_chunks(
+self, ptop: str, wark: str, chashes: list[str]
+) -> tuple[list[int], list[list[int]], str, float, bool]:
 with self.mutex, self.reg_mutex:
 self.db_act = self.vol_act[ptop] = time.time()
 job = self.registry[ptop].get(wark)

@@ -3011,26 +3024,37 @@ class Up2k(object):
 self.log("unknown wark [{}], known: {}".format(wark, known))
 raise Pebkac(400, "unknown wark" + SSEELOG)

-if chash not in job["need"]:
-msg = "chash = {} , need:\n".format(chash)
-msg += "\n".join(job["need"])
-self.log(msg)
-raise Pebkac(400, "already got that but thanks??")
-
-nchunk = [n for n, v in enumerate(job["hash"]) if v == chash]
-if not nchunk:
-raise Pebkac(400, "unknown chunk")
-
-if chash in job["busy"]:
-nh = len(job["hash"])
-idx = job["hash"].index(chash)
-t = "that chunk is already being written to:\n {}\n {} {}/{}\n {}"
-raise Pebkac(400, t.format(wark, chash, idx, nh, job["name"]))
-
-path = djoin(job["ptop"], job["prel"], job["tnam"])
+for chash in chashes:
+if chash not in job["need"]:
+msg = "chash = {} , need:\n".format(chash)
+msg += "\n".join(job["need"])
+self.log(msg)
+raise Pebkac(400, "already got that (%s) but thanks??" % (chash,))
+
+if chash in job["busy"]:
+nh = len(job["hash"])
+idx = job["hash"].index(chash)
+t = "that chunk is already being written to:\n {}\n {} {}/{}\n {}"
+raise Pebkac(400, t.format(wark, chash, idx, nh, job["name"]))

 chunksize = up2k_chunksize(job["size"])
-ofs = [chunksize * x for x in nchunk]
+
+coffsets = []
+for chash in chashes:
+nchunk = [n for n, v in enumerate(job["hash"]) if v == chash]
+if not nchunk:
+raise Pebkac(400, "unknown chunk %s" % (chash))
+
+ofs = [chunksize * x for x in nchunk]
+coffsets.append(ofs)
+
+for ofs1, ofs2 in zip(coffsets, coffsets[1:]):
+gap = (ofs2[0] - ofs1[0]) - chunksize
+if gap:
+t = "only sibling chunks can be stitched; gap of %d bytes between offsets %d and %d in %s"
+raise Pebkac(400, t % (gap, ofs1[0], ofs2[0], job["name"]))
+
+path = djoin(job["ptop"], job["prel"], job["tnam"])

 if not job["sprs"]:
 cur_sz = bos.path.getsize(path)
```diff
@@ -3043,17 +3067,20 @@ class Up2k(object):

            job["poke"] = time.time()

-           return chunksize, ofs, path, job["lmod"], job["sprs"]
+           return chunksize, coffsets, path, job["lmod"], job["sprs"]

-   def release_chunk(self, ptop: str, wark: str, chash: str) -> bool:
+   def release_chunks(self, ptop: str, wark: str, chashes: list[str]) -> bool:
        with self.reg_mutex:
            job = self.registry[ptop].get(wark)
            if job:
-               job["busy"].pop(chash, None)
+               for chash in chashes:
+                   job["busy"].pop(chash, None)

        return True

-   def confirm_chunk(self, ptop: str, wark: str, chash: str) -> tuple[int, str]:
+   def confirm_chunks(
+       self, ptop: str, wark: str, chashes: list[str]
+   ) -> tuple[int, str]:
        with self.mutex, self.reg_mutex:
            self.db_act = self.vol_act[ptop] = time.time()
            try:

@@ -3062,14 +3089,16 @@ class Up2k(object):
                src = djoin(pdir, job["tnam"])
                dst = djoin(pdir, job["name"])
            except Exception as ex:
-               return "confirm_chunk, wark, " + repr(ex)  # type: ignore
+               return "confirm_chunk, wark(%r)" % (ex,)  # type: ignore

-           job["busy"].pop(chash, None)
+           for chash in chashes:
+               job["busy"].pop(chash, None)

            try:
-               job["need"].remove(chash)
+               for chash in chashes:
+                   job["need"].remove(chash)
            except Exception as ex:
-               return "confirm_chunk, chash, " + repr(ex)  # type: ignore
+               return "confirm_chunk, chash(%s) %r" % (chash, ex)  # type: ignore

            ret = len(job["need"])
            if ret > 0:

@@ -3080,7 +3109,7 @@ class Up2k(object):

        return ret, dst

-   def finish_upload(self, ptop: str, wark: str, busy_aps: set[str]) -> None:
+   def finish_upload(self, ptop: str, wark: str, busy_aps: dict[str, int]) -> None:
        self.busy_aps = busy_aps
        with self.mutex, self.reg_mutex:
            self._finish_upload(ptop, wark)
```
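The rewritten `handle_chunks` accepts a whole batch of chunk hashes, but only if they are physically adjacent in the file: each chunk's first offset must begin exactly `chunksize` bytes after the previous one. A minimal standalone sketch of that invariant, assuming the same `coffsets` layout as above (illustrative, not part of the diff):

```python
# sibling-chunk check used when stitching a batch of up2k chunks into
# one write; `coffsets` holds, per chunk, the byte offsets at which
# that chunk occurs in the file
def assert_stitchable(coffsets: list[list[int]], chunksize: int) -> None:
    for ofs1, ofs2 in zip(coffsets, coffsets[1:]):
        gap = (ofs2[0] - ofs1[0]) - chunksize
        if gap:
            raise ValueError(
                "gap of %d bytes between offsets %d and %d"
                % (gap, ofs1[0], ofs2[0])
            )

chunksize = 1024 * 1024
assert_stitchable([[0], [chunksize], [chunksize * 2]], chunksize)  # ok: adjacent
try:
    assert_stitchable([[0], [chunksize * 2]], chunksize)  # hole in the middle
except ValueError as ex:
    print(ex)
```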
@@ -3080,7 +3109,7 @@ class Up2k(object):
|
|||||||
|
|
||||||
return ret, dst
|
return ret, dst
|
||||||
|
|
||||||
def finish_upload(self, ptop: str, wark: str, busy_aps: set[str]) -> None:
|
def finish_upload(self, ptop: str, wark: str, busy_aps: dict[str, int]) -> None:
|
||||||
self.busy_aps = busy_aps
|
self.busy_aps = busy_aps
|
||||||
with self.mutex, self.reg_mutex:
|
with self.mutex, self.reg_mutex:
|
||||||
self._finish_upload(ptop, wark)
|
self._finish_upload(ptop, wark)
|
||||||
@@ -3285,6 +3314,7 @@ class Up2k(object):
|
|||||||
djoin(vtop, rd, fn),
|
djoin(vtop, rd, fn),
|
||||||
host,
|
host,
|
||||||
usr,
|
usr,
|
||||||
|
self.asrv.vfs.get_perms(djoin(vtop, rd, fn), usr),
|
||||||
int(ts),
|
int(ts),
|
||||||
sz,
|
sz,
|
||||||
ip,
|
ip,
|
||||||
@@ -3319,15 +3349,29 @@ class Up2k(object):
|
|||||||
with self.rescan_cond:
|
with self.rescan_cond:
|
||||||
self.rescan_cond.notify_all()
|
self.rescan_cond.notify_all()
|
||||||
|
|
||||||
if rd and sz and fn.lower() in self.args.th_coversd:
|
if rd and sz and fn.lower() in self.args.th_coversd_set:
|
||||||
# wasteful; db_add will re-index actual covers
|
# wasteful; db_add will re-index actual covers
|
||||||
# but that won't catch existing files
|
# but that won't catch existing files
|
||||||
crd, cdn = rd.rsplit("/", 1) if "/" in rd else ("", rd)
|
crd, cdn = rd.rsplit("/", 1) if "/" in rd else ("", rd)
|
||||||
try:
|
try:
|
||||||
db.execute("delete from cv where rd=? and dn=?", (crd, cdn))
|
q = "select fn from cv where rd=? and dn=?"
|
||||||
db.execute("insert into cv values (?,?,?)", (crd, cdn, fn))
|
db_cv = db.execute(q, (crd, cdn)).fetchone()[0]
|
||||||
|
db_lcv = db_cv.lower()
|
||||||
|
if db_lcv in self.args.th_coversd_set:
|
||||||
|
idx_db = self.args.th_coversd.index(db_lcv)
|
||||||
|
idx_fn = self.args.th_coversd.index(fn.lower())
|
||||||
|
add_cv = idx_fn < idx_db
|
||||||
|
else:
|
||||||
|
add_cv = True
|
||||||
except:
|
except:
|
||||||
pass
|
add_cv = True
|
||||||
|
|
||||||
|
if add_cv:
|
||||||
|
try:
|
||||||
|
db.execute("delete from cv where rd=? and dn=?", (crd, cdn))
|
||||||
|
db.execute("insert into cv values (?,?,?)", (crd, cdn, fn))
|
||||||
|
except:
|
||||||
|
pass
|
||||||
|
|
||||||
def handle_rm(
|
def handle_rm(
|
||||||
self,
|
self,
|
||||||
```diff
@@ -3470,6 +3514,7 @@ class Up2k(object):
            vpath,
            "",
            uname,
+           self.asrv.vfs.get_perms(vpath, uname),
            stl.st_mtime,
            st.st_size,
            ip,

@@ -3503,6 +3548,7 @@ class Up2k(object):
            vpath,
            "",
            uname,
+           self.asrv.vfs.get_perms(vpath, uname),
            stl.st_mtime,
            st.st_size,
            ip,

@@ -3635,7 +3681,18 @@ class Up2k(object):
        xar = dvn.flags.get("xar")
        if xbr:
            if not runhook(
-               self.log, xbr, sabs, svp, "", uname, stl.st_mtime, st.st_size, "", 0, ""
+               self.log,
+               xbr,
+               sabs,
+               svp,
+               "",
+               uname,
+               self.asrv.vfs.get_perms(svp, uname),
+               stl.st_mtime,
+               st.st_size,
+               "",
+               0,
+               "",
            ):
                t = "move blocked by xbr server config: {}".format(svp)
                self.log(t, 1)

@@ -3660,7 +3717,20 @@ class Up2k(object):
                self.rescan_cond.notify_all()

            if xar:
-               runhook(self.log, xar, dabs, dvp, "", uname, 0, 0, "", 0, "")
+               runhook(
+                   self.log,
+                   xar,
+                   dabs,
+                   dvp,
+                   "",
+                   uname,
+                   self.asrv.vfs.get_perms(dvp, uname),
+                   0,
+                   0,
+                   "",
+                   0,
+                   "",
+               )

            return "k"

@@ -3759,7 +3829,20 @@ class Up2k(object):
            wunlink(self.log, sabs, svn.flags)

            if xar:
-               runhook(self.log, xar, dabs, dvp, "", uname, 0, 0, "", 0, "")
+               runhook(
+                   self.log,
+                   xar,
+                   dabs,
+                   dvp,
+                   "",
+                   uname,
+                   self.asrv.vfs.get_perms(dvp, uname),
+                   0,
+                   0,
+                   "",
+                   0,
+                   "",
+               )

            return "k"

@@ -4048,6 +4131,7 @@ class Up2k(object):
            vp_chk,
            job["host"],
            job["user"],
+           self.asrv.vfs.get_perms(vp_chk, job["user"]),
            int(job["lmod"]),
            job["size"],
            job["addr"],
```
```diff
@@ -158,6 +158,18 @@ else:
    from urllib import unquote  # type: ignore # pylint: disable=no-name-in-module


+try:
+    socket.inet_pton(socket.AF_INET6, "::1")
+    HAVE_IPV6 = True
+except:
+
+    def inet_pton(fam, ip):
+        return socket.inet_aton(ip)
+
+    socket.inet_pton = inet_pton
+    HAVE_IPV6 = False
+
+
 try:
    struct.unpack(b">i", b"idgi")
    spack = struct.pack  # type: ignore

@@ -231,6 +243,7 @@ IMPLICATIONS = [
    ["e2vu", "e2v"],
    ["e2vp", "e2v"],
    ["e2v", "e2d"],
+   ["tftpvv", "tftpv"],
    ["smbw", "smb"],
    ["smb1", "smb"],
    ["smbvvv", "smbvv"],

@@ -1364,7 +1377,7 @@ def vol_san(vols: list["VFS"], txt: bytes) -> bytes:
 def min_ex(max_lines: int = 8, reverse: bool = False) -> str:
    et, ev, tb = sys.exc_info()
    stb = traceback.extract_tb(tb) if tb else traceback.extract_stack()[:-1]
-   fmt = "%s @ %d <%s>: %s"
+   fmt = "%s:%d <%s>: %s"
    ex = [fmt % (fp.split(os.sep)[-1], ln, fun, txt) for fp, ln, fun, txt in stb]
    if et or ev or tb:
        ex.append("[%s] %s" % (et.__name__ if et else "(anonymous)", ev))

@@ -2458,6 +2471,9 @@ def build_netmap(csv: str):
        csv += ", 127.0.0.0/8, ::1/128"  # loopback

    srcs = [x.strip() for x in csv.split(",") if x.strip()]
+   if not HAVE_IPV6:
+       srcs = [x for x in srcs if ":" not in x]
+
    cidrs = []
    for zs in srcs:
        if not zs.endswith("."):
```
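The probe-once pattern above (try `inet_pton` at import time, remember the result, then filter IPv6 sources out of the netmap) keeps every later code path branch-free on platforms without IPv6. Roughly, and purely as illustration:

```python
import socket

# probe once: does this python/OS build support IPv6 address parsing?
try:
    socket.inet_pton(socket.AF_INET6, "::1")
    HAVE_IPV6 = True
except (AttributeError, OSError):
    HAVE_IPV6 = False

def usable_cidrs(csv: str) -> list[str]:
    # drop IPv6 entries up front so later parsing never sees them
    srcs = [x.strip() for x in csv.split(",") if x.strip()]
    if not HAVE_IPV6:
        srcs = [x for x in srcs if ":" not in x]
    return srcs

print(usable_cidrs("10.0.0.0/8, ::1/128, 192.168.1.0/24"))
```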
```diff
@@ -2495,7 +2511,7 @@ def yieldfile(fn: str, bufsz: int) -> Generator[bytes, None, None]:
 def hashcopy(
    fin: Generator[bytes, None, None],
    fout: Union[typing.BinaryIO, typing.IO[Any]],
-   slp: int = 0,
+   slp: float = 0,
    max_sz: int = 0,
 ) -> tuple[int, str, str]:
    hashobj = hashlib.sha512()

@@ -2523,7 +2539,7 @@ def sendfile_py(
    f: typing.BinaryIO,
    s: socket.socket,
    bufsz: int,
-   slp: int,
+   slp: float,
    use_poll: bool,
 ) -> int:
    remains = upper - lower

@@ -2552,7 +2568,7 @@ def sendfile_kern(
    f: typing.BinaryIO,
    s: socket.socket,
    bufsz: int,
-   slp: int,
+   slp: float,
    use_poll: bool,
 ) -> int:
    out_fd = s.fileno()

@@ -2976,7 +2992,8 @@ def retchk(

 def _parsehook(
    log: Optional["NamedLogger"], cmd: str
-) -> tuple[bool, bool, bool, float, dict[str, Any], str]:
+) -> tuple[str, bool, bool, bool, float, dict[str, Any], list[str]]:
+   areq = ""
    chk = False
    fork = False
    jtxt = False
```
```diff
@@ -3001,8 +3018,12 @@ def _parsehook(
            cap = int(arg[1:])  # 0=none 1=stdout 2=stderr 3=both
        elif arg.startswith("k"):
            kill = arg[1:]  # [t]ree [m]ain [n]one
+       elif arg.startswith("a"):
+           areq = arg[1:]  # required perms
        elif arg.startswith("i"):
            pass
+       elif not arg:
+           break
        else:
            t = "hook: invalid flag {} in {}"
            (log or print)(t.format(arg, ocmd))

@@ -3029,9 +3050,11 @@ def _parsehook(
        "capture": cap,
    }

-   cmd = os.path.expandvars(os.path.expanduser(cmd))
+   argv = cmd.split(",") if "," in cmd else [cmd]

-   return chk, fork, jtxt, wait, sp_ka, cmd
+   argv[0] = os.path.expandvars(os.path.expanduser(argv[0]))
+
+   return areq, chk, fork, jtxt, wait, sp_ka, argv


 def runihook(
```
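After this change a hook spec is a comma-separated argv with leading single-letter flags, so one string can carry an interpreter script plus its arguments. A rough sketch of the parse shape (simplified; `parse_hook` is a hypothetical stand-in for `_parsehook`, and the example spec is invented):

```python
# illustrative hook-spec parsing: leading single-letter flags, then a
# comma-separated command line; "a" collects the permission letters a
# user must hold for the hook to run
def parse_hook(spec: str):
    areq, fork = "", False
    while True:
        arg, _, rest = spec.partition(",")
        if arg == "f":
            fork = True
        elif arg.startswith("a") and len(arg) > 1:
            areq = arg[1:]  # e.g. "rw"
        else:
            break  # not a flag; the remainder is the command
        spec = rest
    argv = spec.split(",")
    return areq, fork, argv

print(parse_hook("f,aw,notify.py,--quiet"))
# -> ('w', True, ['notify.py', '--quiet'])
```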
```diff
@@ -3040,10 +3063,9 @@ def runihook(
    vol: "VFS",
    ups: list[tuple[str, int, int, str, str, str, int]],
 ) -> bool:
-   ocmd = cmd
-   chk, fork, jtxt, wait, sp_ka, cmd = _parsehook(log, cmd)
-   bcmd = [sfsenc(cmd)]
-   if cmd.endswith(".py"):
+   _, chk, fork, jtxt, wait, sp_ka, acmd = _parsehook(log, cmd)
+   bcmd = [sfsenc(x) for x in acmd]
+   if acmd[0].endswith(".py"):
        bcmd = [sfsenc(pybin)] + bcmd

    vps = [vjoin(*list(s3dec(x[3], x[4]))) for x in ups]

@@ -3068,7 +3090,7 @@ def runihook(

    t0 = time.time()
    if fork:
-       Daemon(runcmd, ocmd, [bcmd], ka=sp_ka)
+       Daemon(runcmd, cmd, bcmd, ka=sp_ka)
    else:
        rc, v, err = runcmd(bcmd, **sp_ka)  # type: ignore
        if chk and rc:

@@ -3089,14 +3111,20 @@ def _runhook(
    vp: str,
    host: str,
    uname: str,
+   perms: str,
    mt: float,
    sz: int,
    ip: str,
    at: float,
    txt: str,
 ) -> bool:
-   ocmd = cmd
-   chk, fork, jtxt, wait, sp_ka, cmd = _parsehook(log, cmd)
+   areq, chk, fork, jtxt, wait, sp_ka, acmd = _parsehook(log, cmd)
+   if areq:
+       for ch in areq:
+           if ch not in perms:
+               t = "user %s not allowed to run hook %s; need perms %s, have %s"
+               log(t % (uname, cmd, areq, perms))
+               return True  # fallthrough to next hook
    if jtxt:
        ja = {
            "ap": ap,

@@ -3107,21 +3135,22 @@ def _runhook(
            "at": at or time.time(),
            "host": host,
            "user": uname,
+           "perms": perms,
            "txt": txt,
        }
        arg = json.dumps(ja)
    else:
        arg = txt or ap

-   acmd = [cmd, arg]
-   if cmd.endswith(".py"):
+   acmd += [arg]
+   if acmd[0].endswith(".py"):
        acmd = [pybin] + acmd

    bcmd = [fsenc(x) if x == ap else sfsenc(x) for x in acmd]

    t0 = time.time()
    if fork:
-       Daemon(runcmd, ocmd, [bcmd], ka=sp_ka)
+       Daemon(runcmd, cmd, [bcmd], ka=sp_ka)
    else:
        rc, v, err = runcmd(bcmd, **sp_ka)  # type: ignore
        if chk and rc:

@@ -3142,6 +3171,7 @@ def runhook(
    vp: str,
    host: str,
    uname: str,
+   perms: str,
    mt: float,
    sz: int,
    ip: str,

@@ -3151,7 +3181,7 @@ def runhook(
    vp = vp.replace("\\", "/")
    for cmd in cmds:
        try:
-           if not _runhook(log, cmd, ap, vp, host, uname, mt, sz, ip, at, txt):
+           if not _runhook(log, cmd, ap, vp, host, uname, perms, mt, sz, ip, at, txt):
                return False
        except Exception as ex:
            (log or print)("hook: {}".format(ex))
```
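The new `a` flag makes each hook permission-gated: a user lacking any required permission letter skips that hook, but the chain continues (note the `return True  # fallthrough to next hook` above). Reduced to its core (hypothetical helper and example data, not the actual `_runhook`):

```python
# simplified permission gate for event hooks: `areq` is the string of
# permission letters a hook demands, `perms` what the user holds;
# a skipped hook does not abort the rest of the chain
def hook_allowed(areq: str, perms: str) -> bool:
    return all(ch in perms for ch in areq)

hooks = [("notify.py", ""), ("archive.py", "rw"), ("purge.py", "a")]
for hook, areq in hooks:
    if not hook_allowed(areq, perms="rw"):
        print("skipping %s (need perms %s)" % (hook, areq))
        continue
    print("running", hook)
```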
```diff
@@ -29,6 +29,7 @@ window.baguetteBox = (function () {
        isOverlayVisible = false,
        touch = {}, // start-pos
        touchFlag = false, // busy
+       scrollCSS = ['', ''],
        scrollTimer = 0,
        re_i = /^[^?]+\.(a?png|avif|bmp|gif|heif|jpe?g|jfif|svg|webp)(\?|$)/i,
        re_v = /^[^?]+\.(webm|mkv|mp4)(\?|$)/i,

@@ -567,6 +568,12 @@ window.baguetteBox = (function () {

    function showOverlay(chosenImageIndex) {
        if (options.noScrollbars) {
+           var a = document.documentElement.style.overflowY,
+               b = document.body.style.overflowY;
+
+           if (a != 'hidden' || b != 'scroll')
+               scrollCSS = [a, b];
+
            document.documentElement.style.overflowY = 'hidden';
            document.body.style.overflowY = 'scroll';
        }

@@ -615,8 +622,8 @@ window.baguetteBox = (function () {
        playvid(false);
        removeFromCache('#files');
        if (options.noScrollbars) {
-           document.documentElement.style.overflowY = 'auto';
-           document.body.style.overflowY = 'auto';
+           document.documentElement.style.overflowY = scrollCSS[0];
+           document.body.style.overflowY = scrollCSS[1];
        }

        try {

@@ -1060,7 +1067,7 @@ window.baguetteBox = (function () {
        return showNextImage();

    show_buttons('t');

    if (Date.now() - ctime <= 500 && !IPHONE)
        tglfull();

```
```diff
@@ -210,6 +210,8 @@ var Ls = {

        "cut_datechk": "has no effect unless the turbo button is enabled$N$Nreduces the yolo factor by a tiny amount; checks whether the file timestamps on the server matches yours$N$Nshould <em>theoretically</em> catch most unfinished / corrupted uploads, but is not a substitute for doing a verification pass with turbo disabled afterwards\">date-chk",

+       "cut_u2sz": "size (in MiB) of each upload chunk; big values fly better across the atlantic. Try low values on very unreliable connections",
+
        "cut_flag": "ensure only one tab is uploading at a time $N -- other tabs must have this enabled too $N -- only affects tabs on the same domain",

        "cut_az": "upload files in alphabetical order, rather than smallest-file-first$N$Nalphabetical order can make it easier to eyeball if something went wrong on the server, but it makes uploading slightly slower on fiber / LAN",

@@ -268,6 +270,8 @@ var Ls = {
        "mb_play": "play",
        "mm_hashplay": "play this audio file?",
        "mp_breq": "need firefox 82+ or chrome 73+ or iOS 15+",
+       "mm_bload": "now loading...",
+       "mm_bconv": "converting to {0}, please wait...",
        "mm_opusen": "your browser cannot play aac / m4a files;\ntranscoding to opus is now enabled",
        "mm_playerr": "playback failed: ",
        "mm_eabrt": "The playback attempt was cancelled",

@@ -358,6 +362,7 @@ var Ls = {
        "tvt_sel": "select file ( for cut / delete / ... )$NHotkey: S\">sel",
        "tvt_edit": "open file in text editor$NHotkey: E\">✏️ edit",

+       "gt_vau": "don't show videos, just play the audio\">🎧",
        "gt_msel": "enable file selection; ctrl-click a file to override$N$N<em>when active: doubleclick a file / folder to open it</em>$N$NHotkey: S\">multiselect",
        "gt_crop": "center-crop thumbnails\">crop",
        "gt_3x": "hi-res thumbnails\">3x",

@@ -400,6 +405,7 @@ var Ls = {
        "badreply": "Failed to parse reply from server",

        "xhr403": "403: Access denied\n\ntry pressing F5, maybe you got logged out",
+       "xhr0": "unknown (probably lost connection to server, or server is offline)",
        "cf_ok": "sorry about that -- DD" + wah + "oS protection kicked in\n\nthings should resume in about 30 sec\n\nif nothing happens, hit F5 to reload the page",
        "tl_xe1": "could not list subfolders:\n\nerror ",
        "tl_xe2": "404: Folder not found",

@@ -460,7 +466,7 @@ var Ls = {
        "u_badf": 'These {0} files (of {1} total) were skipped, possibly due to filesystem permissions:\n\n',
        "u_blankf": 'These {0} files (of {1} total) are blank / empty; upload them anyways?\n\n',
        "u_just1": '\nMaybe it works better if you select just one file',
-       "u_ff_many": "This amount of files <em>may</em> cause Firefox to skip some files, or crash.\nPlease try again with fewer files (or use Chrome) if that happens.",
+       "u_ff_many": "if you're using <b>Linux / MacOS / Android,</b> then this amount of files <a href=\"https://bugzilla.mozilla.org/show_bug.cgi?id=1790500\" target=\"_blank\"><em>may</em> crash Firefox!</a>\nif that happens, please try again (or use Chrome).",
        "u_up_life": "This upload will be deleted from the server\n{0} after it completes",
        "u_asku": 'upload these {0} files to <code>{1}</code>',
        "u_unpt": "you can undo / delete this upload using the top-left 🧯",

@@ -477,12 +483,13 @@ var Ls = {
        "u_ehsinit": "server rejected the request to initiate upload; retrying...",
        "u_eneths": "network error while performing upload handshake; retrying...",
        "u_enethd": "network error while testing target existence; retrying...",
+       "u_cbusy": "waiting for server to trust us again after a network glitch...",
        "u_ehsdf": "server ran out of disk space!\n\nwill keep retrying, in case someone\nfrees up enough space to continue",
        "u_emtleak1": "it looks like your webbrowser may have a memory leak;\nplease",
        "u_emtleak2": ' <a href="{0}">switch to https (recommended)</a> or ',
        "u_emtleak3": ' ',
-       "u_emtleakc": 'try the following:\n<ul><li>hit <code>F5</code> to refresh the page</li><li>then disable the <code>mt</code> button in the <code>⚙️ settings</code></li><li>and try that upload again</li></ul>Uploads will be a bit slower, but oh well.\nSorry for the trouble !\n\nPS: chrome v107 <a href="https://bugs.chromium.org/p/chromium/issues/detail?id=1354816">has a bugfix</a> for this',
-       "u_emtleakf": 'try the following:\n<ul><li>hit <code>F5</code> to refresh the page</li><li>then enable <code>🥔</code> (potato) in the upload UI<li>and try that upload again</li></ul>\nPS: firefox <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1790500">will hopefully have a bugfix</a> at some point',
+       "u_emtleakc": 'try the following:\n<ul><li>hit <code>F5</code> to refresh the page</li><li>then disable the <code>mt</code> button in the <code>⚙️ settings</code></li><li>and try that upload again</li></ul>Uploads will be a bit slower, but oh well.\nSorry for the trouble !\n\nPS: chrome v107 <a href="https://bugs.chromium.org/p/chromium/issues/detail?id=1354816" target="_blank">has a bugfix</a> for this',
+       "u_emtleakf": 'try the following:\n<ul><li>hit <code>F5</code> to refresh the page</li><li>then enable <code>🥔</code> (potato) in the upload UI<li>and try that upload again</li></ul>\nPS: firefox <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1790500" target="_blank">will hopefully have a bugfix</a> at some point',
        "u_s404": "not found on server",
        "u_expl": "explain",
        "u_maxconn": "most browsers limit this to 6, but firefox lets you raise it with <code>connections-per-server</code> in <code>about:config</code>",

@@ -720,6 +727,8 @@ var Ls = {

        "cut_datechk": "har ingen effekt dersom turbo er avslått$N$Ngjør turbo bittelitt tryggere ved å sjekke datostemplingen på filene (i tillegg til filstørrelse)$N$N<em>burde</em> oppdage og gjenoppta de fleste ufullstendige opplastninger, men er <em>ikke</em> en fullverdig erstatning for å deaktivere turbo og gjøre en skikkelig sjekk\">date-chk",

+       "cut_u2sz": "størrelse i megabyte for hvert bruddstykke for opplastning. Store verdier flyr bedre over atlanteren. Små verdier kan være bedre på særdeles ustabile forbindelser",
+
        "cut_flag": "samkjører nettleserfaner slik at bare én $N kan holde på med befaring / opplastning $N -- andre faner må også ha denne skrudd på $N -- fungerer kun innenfor samme domene",

        "cut_az": "last opp filer i alfabetisk rekkefølge, istedenfor minste-fil-først$N$Nalfabetisk kan gjøre det lettere å anslå om alt gikk bra, men er bittelitt tregere på fiber / LAN",

@@ -778,6 +787,8 @@ var Ls = {
        "mb_play": "lytt",
        "mm_hashplay": "spill denne sangen?",
        "mp_breq": "krever firefox 82+, chrome 73+, eller iOS 15+",
+       "mm_bload": "laster inn...",
+       "mm_bconv": "konverterer til {0}, vent litt...",
        "mm_opusen": "nettleseren din forstår ikke aac / m4a;\nkonvertering til opus er nå aktivert",
        "mm_playerr": "avspilling feilet: ",
        "mm_eabrt": "Avspillingsforespørselen ble avbrutt",

@@ -868,6 +879,7 @@ var Ls = {
        "tvt_sel": "markér filen ( for utklipp / sletting / ... )$NSnarvei: S\">merk",
        "tvt_edit": "redigér filen$NSnarvei: E\">✏️ endre",

+       "gt_vau": "ikke vis videofiler, bare spill lyden\">🎧",
        "gt_msel": "markér filer istedenfor å åpne dem; ctrl-klikk filer for å overstyre$N$N<em>når aktiv: dobbelklikk en fil / mappe for å åpne</em>$N$NSnarvei: S\">markering",
        "gt_crop": "beskjær ikonene så de passer bedre\">✂",
        "gt_3x": "høyere oppløsning på ikoner\">3x",

@@ -910,6 +922,7 @@ var Ls = {
        "badreply": "Ugyldig svar ifra serveren",

        "xhr403": "403: Tilgang nektet\n\nkanskje du ble logget ut? prøv å trykk F5",
+       "xhr0": "ukjent (enten nettverksproblemer eller serverkrasj)",
        "cf_ok": "beklager -- liten tilfeldig kontroll, alt OK\n\nting skal fortsette om ca. 30 sekunder\n\nhvis ikkeno skjer, trykk F5 for å laste siden på nytt",
        "tl_xe1": "kunne ikke hente undermapper:\n\nfeil ",
        "tl_xe2": "404: Mappen finnes ikke",

@@ -970,7 +983,7 @@ var Ls = {
        "u_badf": 'Disse {0} filene (av totalt {1}) kan ikke leses, kanskje pga rettighetsproblemer i filsystemet på datamaskinen din:\n\n',
        "u_blankf": 'Disse {0} filene (av totalt {1}) er blanke / uten innhold; ønsker du å laste dem opp uansett?\n\n',
        "u_just1": '\nFunker kanskje bedre hvis du bare tar én fil om gangen',
-       "u_ff_many": "Det var mange filer! Mulig at Firefox kommer til å krasje, eller\nhoppe over et par av dem. Smart å ha Chrome på lur i tilfelle.",
+       "u_ff_many": 'Hvis du bruker <b>Linux / MacOS / Android,</b> så kan dette antallet filer<br /><a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1790500" target="_blank"><em>kanskje</em> krasje Firefox!</a> Hvis det skjer, så prøv igjen (eller bruk Chrome).',
        "u_up_life": "Filene slettes fra serveren {0}\netter at opplastningen er fullført",
        "u_asku": 'Laste opp disse {0} filene til <code>{1}</code>',
        "u_unpt": "Du kan angre / slette opplastningen med 🧯 oppe til venstre",

@@ -987,12 +1000,13 @@ var Ls = {
        "u_ehsinit": "server nektet forespørselen om å begynne en ny opplastning; prøver igjen...",
        "u_eneths": "et problem med nettverket gjorde at avtale om opplastning ikke kunne inngås; prøver igjen...",
        "u_enethd": "et problem med nettverket gjorde at filsjekk ikke kunne utføres; prøver igjen...",
+       "u_cbusy": "venter på klarering ifra server etter et lite nettverksglipp...",
        "u_ehsdf": "serveren er full!\n\nprøver igjen regelmessig,\ni tilfelle noen rydder litt...",
        "u_emtleak1": "uff, det er mulig at nettleseren din har en minnelekkasje...\nForeslår",
        "u_emtleak2": ' helst at du <a href="{0}">bytter til https</a>, eller ',
        "u_emtleak3": ' at du ',
-       "u_emtleakc": 'prøver følgende:\n<ul><li>trykk F5 for å laste siden på nytt</li><li>så skru av <code>mt</code> bryteren under <code>⚙️ innstillinger</code></li><li>og forsøk den samme opplastningen igjen</li></ul>Opplastning vil gå litt tregere, men det får så være.\nBeklager bryderiet !\n\nPS: feilen <a href="<a href="https://bugs.chromium.org/p/chromium/issues/detail?id=1354816">skal være fikset</a> i chrome v107',
-       "u_emtleakf": 'prøver følgende:\n<ul><li>trykk F5 for å laste siden på nytt</li><li>så skru på <code>🥔</code> ("enkelt UI") i opplasteren</li><li>og forsøk den samme opplastningen igjen</li></ul>\nPS: Firefox <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1790500">fikser forhåpentligvis feilen</a> en eller annen gang',
+       "u_emtleakc": 'prøver følgende:\n<ul><li>trykk F5 for å laste siden på nytt</li><li>så skru av <code>mt</code> bryteren under <code>⚙️ innstillinger</code></li><li>og forsøk den samme opplastningen igjen</li></ul>Opplastning vil gå litt tregere, men det får så være.\nBeklager bryderiet !\n\nPS: feilen <a href="https://bugs.chromium.org/p/chromium/issues/detail?id=1354816" target="_blank">skal være fikset</a> i chrome v107',
+       "u_emtleakf": 'prøver følgende:\n<ul><li>trykk F5 for å laste siden på nytt</li><li>så skru på <code>🥔</code> ("enkelt UI") i opplasteren</li><li>og forsøk den samme opplastningen igjen</li></ul>\nPS: Firefox <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1790500" target="_blank">fikser forhåpentligvis feilen</a> en eller annen gang',
        "u_s404": "ikke funnet på serveren",
        "u_expl": "forklar",
        "u_maxconn": "de fleste nettlesere tillater ikke mer enn 6, men firefox lar deg øke grensen med <code>connections-per-server</code> i <code>about:config</code>",

@@ -1062,7 +1076,7 @@ ebi('ops').innerHTML = (
    '<a href="#" data-perm="write" data-dest="bup" tt="' + L.ot_bup + '">🎈</a>' +
    '<a href="#" data-perm="write" data-dest="mkdir" tt="' + L.ot_mkdir + '">📂</a>' +
    '<a href="#" data-perm="read write" data-dest="new_md" tt="' + L.ot_md + '">📝</a>' +
-   '<a href="#" data-perm="write" data-dest="msg" tt="' + L.ot_msg + '">📟</a>' +
+   '<a href="#" data-dest="msg" tt="' + L.ot_msg + '">📟</a>' +
    '<a href="#" data-dest="player" tt="' + L.ot_mp + '">🎺</a>' +
    '<a href="#" data-dest="cfg" tt="' + L.ot_cfg + '">⚙️</a>' +
    (IE ? '<span id="noie">' + L.ot_noie + '</span>' : '') +

@@ -1249,6 +1263,7 @@ ebi('op_cfg').innerHTML = (
    ' <a id="hashw" class="tgl btn" href="#" tt="' + L.cut_mt + '</a>\n' +
    ' <a id="u2turbo" class="tgl btn ttb" href="#" tt="' + L.cut_turbo + '</a>\n' +
    ' <a id="u2tdate" class="tgl btn ttb" href="#" tt="' + L.cut_datechk + '</a>\n' +
+   ' <input type="text" id="u2szg" value="" ' + NOAC + ' style="width:3em" tt="' + L.cut_u2sz + '" />' +
    ' <a id="flag_en" class="tgl btn" href="#" tt="' + L.cut_flag + '">💤</a>\n' +
    ' <a id="u2sort" class="tgl btn" href="#" tt="' + L.cut_az + '">az</a>\n' +
    ' <a id="upnag" class="tgl btn" href="#" tt="' + L.cut_nag + '">🔔</a>\n' +
```
```diff
@@ -1591,6 +1606,8 @@ var mpl = (function () {
    r.pp = function () {
        var adur, apos, playing = mp.au && !mp.au.paused;

+       clearTimeout(mpl.t_eplay);
+
        clmod(ebi('np_inf'), 'playing', playing);

        if (mp.au && isNum(adur = mp.au.duration) && isNum(apos = mp.au.currentTime) && apos >= 0)

@@ -1700,7 +1717,7 @@ catch (ex) { }


 var re_au_native = (can_ogg || have_acode) ? /\.(aac|flac|m4a|mp3|ogg|opus|wav)$/i : /\.(aac|flac|m4a|mp3|wav)$/i,
-   re_au_all = /\.(aac|ac3|aif|aiff|alac|alaw|amr|ape|au|dfpwm|dts|flac|gsm|it|itgz|itxz|itz|m4a|mdgz|mdxz|mdz|mo3|mod|mp2|mp3|mpc|mptm|mt2|mulaw|ogg|okt|opus|ra|s3m|s3gz|s3xz|s3z|tak|tta|ulaw|wav|wma|wv|xm|xmgz|xmxz|xmz|xpk)$/i;
+   re_au_all = /\.(aac|ac3|aif|aiff|alac|alaw|amr|ape|au|dfpwm|dts|flac|gsm|it|itgz|itxz|itz|m4a|mdgz|mdxz|mdz|mo3|mod|mp2|mp3|mpc|mptm|mt2|mulaw|ogg|okt|opus|ra|s3m|s3gz|s3xz|s3z|tak|tta|ulaw|wav|wma|wv|xm|xmgz|xmxz|xmz|xpk|3gp|asf|avi|flv|m4v|mkv|mov|mp4|mpeg|mpeg2|mpegts|mpg|mpg2|nut|ogm|ogv|rm|ts|vob|webm|wmv)$/i;


 // extract songs + add play column

@@ -2176,8 +2193,21 @@ var pbar = (function () {
        }
        pctx.clearRect(0, 0, pc.w, pc.h);

-       if (!mp || !mp.au || !isNum(adur = mp.au.duration) || !isNum(apos = mp.au.currentTime) || apos < 0 || adur < apos)
+       if (!mp || !mp.au)
+           return; // not-init
+
+       if (!isNum(adur = mp.au.duration) || !isNum(apos = mp.au.currentTime) || apos < 0 || adur < apos) {
+           if (Date.now() - mp.au.pt0 < 500)
+               return;
+
+           pctx.fillStyle = light ? 'rgba(0,0,0,0.5)' : 'rgba(255,255,255,0.5)';
+           var m = /[?&]th=(opus|caf|mp3)/.exec('' + mp.au.rsrc),
+               txt = mp.au.ded ? L.mm_playerr.replace(':', ' ;_;') :
+                   m ? L.mm_bconv.format(m[1]) : L.mm_bload;
+
+           pctx.fillText(txt, 16, pc.h / 1.5);
            return; // not-init || unsupp-codec
+       }

        if (bau != mp.au)
            r.drawbuf();

@@ -2431,7 +2461,7 @@ function dl_song() {
        return toast.inf(10, L.f_dls);
    }

-   var url = addq(mp.tracks[mp.au.tid], 'cache=987&_=' + ACB);
+   var url = addq(mp.au.osrc, 'cache=987&_=' + ACB);
    dl_file(url);
 }

@@ -2493,7 +2523,7 @@ function mpause(e) {
        seek_au_mul(x * 1.0 / rect.width);
    };

-   if (!TOUCH)
+   if (!TOUCH) {
        bar.onwheel = function (e) {
            var dist = Math.sign(e.deltaY) * 10;
            if (Math.abs(e.deltaY) < 30 && !e.deltaMode)

@@ -2505,6 +2535,19 @@ function mpause(e) {
            seek_au_rel(dist);
            ev(e);
        };
+       ebi('pvol').onwheel = function (e) {
+           var dist = Math.sign(e.deltaY) * 10;
+           if (Math.abs(e.deltaY) < 30 && !e.deltaMode)
+               dist = e.deltaY;
+
+           if (!dist || !mp.au)
+               return true;
+
+           mp.setvol(mp.vol + dist / 500);
+           vbar.draw();
+           ev(e);
+       };
+   }
 })();


@@ -2589,6 +2632,12 @@ var mpui = (function () {
            if (mpl.prescan_evp == evp)
                throw "evp match";

+           if (mpl.traversals++ > 4) {
+               mpl.prescan_evp = null;
+               toast.inf(10, L.mm_nof);
+               throw L.mm_nof;
+           }
+
            mpl.prescan_evp = evp;
            toast.inf(10, L.mm_prescan);
            treectl.ls_cb = repreload;
```
```diff
@@ -3040,6 +3089,7 @@ var afilt = (function () {

 // plays the tid'th audio file on the page
 function play(tid, is_ev, seek) {
+   clearTimeout(mpl.t_eplay);
    if (mp.order.length == 0)
        return console.log('no audio found wait what');

@@ -3116,13 +3166,16 @@ function play(tid, is_ev, seek) {
    else
        mp.au.src = mp.au.rsrc = url;

+   mp.au.osrc = mp.tracks[tid];
    afilt.apply();

    setTimeout(function () {
        mpl.unbuffer(url);
    }, 500);

+   mp.au.ded = 0;
    mp.au.tid = tid;
+   mp.au.pt0 = Date.now();
    mp.au.evp = get_evpath();
    mp.au.volume = mp.expvol(mp.vol);
    var trs = QSA('#files tr.play');

@@ -3172,7 +3225,7 @@ function play(tid, is_ev, seek) {
        toast.err(0, esc(L.mm_playerr + basenames(ex)));
    }
    clmod(ebi(oid), 'act');
-   setTimeout(next_song, 5000);
+   mpl.t_eplay = setTimeout(next_song, 5000);
 }


@@ -3190,6 +3243,8 @@ function evau_error(e) {
    var err = '',
        eplaya = (e && e.target) || (window.event && window.event.srcElement);

+   eplaya.ded = 1;
+
    switch (eplaya.error.code) {
        case eplaya.error.MEDIA_ERR_ABORTED:
            err = L.mm_eabrt;

@@ -3250,7 +3305,7 @@ function evau_error(e) {
        return;
    }

-   setTimeout(next_song, 15000);
+   mpl.t_eplay = setTimeout(next_song, 15000);
 }


@@ -3304,6 +3359,7 @@ function scan_hash(v) {


 function eval_hash() {
+   document.onkeydown = ahotkeys;
    window.onpopstate = treectl.onpopfun;

    if (hash0 && window.og_fn) {

@@ -4380,7 +4436,7 @@ var showfile = (function () {

        var td = ebi(link.id).closest('tr').getElementsByTagName('td')[0];

-       if (lang == 'md' && td.textContent != '-')
+       if (lang == 'ts' || (lang == 'md' && td.textContent != '-'))
            continue;

        td.innerHTML = '<a href="#" id="t' +

@@ -4647,6 +4703,7 @@ var thegrid = (function () {
    gfiles.style.display = 'none';
    gfiles.innerHTML = (
        '<div id="ghead" class="ghead">' +
+       '<a href="#" class="tgl btn" id="gridvau" tt="' + L.gt_vau + '</a> ' +
        '<a href="#" class="tgl btn" id="gridsel" tt="' + L.gt_msel + '</a> ' +
        '<a href="#" class="tgl btn" id="gridcrop" tt="' + L.gt_crop + '</a> ' +
        '<a href="#" class="tgl btn" id="grid3x" tt="' + L.gt_3x + '</a> ' +

@@ -4808,7 +4865,7 @@ var thegrid = (function () {
        else if (oth.hasAttribute('download'))
            oth.click();

-       else if (widget.is_open && aplay)
+       else if (aplay && (r.vau || !is_img))
            aplay.click();

        else if (is_dir && !have_sel)

@@ -5099,6 +5156,7 @@ var thegrid = (function () {

    bcfg_bind(r, 'thumbs', 'thumbs', true, r.setdirty);
    bcfg_bind(r, 'ihop', 'ihop', true);
+   bcfg_bind(r, 'vau', 'gridvau', false);
    bcfg_bind(r, 'crop', 'gridcrop', !dcrop.endsWith('n'), r.set_crop);
    bcfg_bind(r, 'x3', 'grid3x', dth3x.endsWith('y'), r.set_x3);
    bcfg_bind(r, 'sel', 'gridsel', false, r.loadsel);

@@ -5198,7 +5256,9 @@ function tree_up(justgo) {
        if (!justgo)
            return;
    }
-   act.parentNode.parentNode.parentNode.getElementsByTagName('a')[1].click();
+   var a = act.parentNode.parentNode.parentNode.getElementsByTagName('a')[1];
+   if (a.parentNode.tagName == 'LI')
+       a.click();
 }


@@ -5261,7 +5321,7 @@ function fselfunw(e, ae, d, rem) {
    }
    selfun();
 }
-document.onkeydown = function (e) {
+var ahotkeys = function (e) {
    if (e.altKey || e.isComposing)
        return;

@@ -5846,7 +5906,7 @@ var treectl = (function () {
    bcfg_bind(r, 'ireadme', 'ireadme', true);
    bcfg_bind(r, 'idxh', 'idxh', idxh, setidxh);
    bcfg_bind(r, 'dyn', 'dyntree', true, onresize);
-   bcfg_bind(r, 'csel', 'csel', false);
+   bcfg_bind(r, 'csel', 'csel', dgsel);
    bcfg_bind(r, 'dots', 'dotfiles', false, function (v) {
        r.goto();
        var xhr = new XHR();

@@ -6302,6 +6362,7 @@ var treectl = (function () {
        r.nvis = r.lim;
        r.sb_msg = false;
        r.nextdir = xhr.top;
+       clearTimeout(mpl.t_eplay);
        enspin('#tree');
        enspin(thegrid.en ? '#gfiles' : '#files');
        window.removeEventListener('scroll', r.tscroll);

@@ -6341,8 +6402,15 @@ var treectl = (function () {
            var res = JSON.parse(this.responseText);
        }
        catch (ex) {
-           if (!this.hydrate)
+           if (r.ls_cb) {
+               r.ls_cb = null;
+               return toast.inf(10, L.mm_nof);
+           }
+
+           if (!this.hydrate) {
                location = this.top;
+               return;
+           }

            return toast.err(30, "bad <code>?ls</code> reply;\nexpected json, got this:\n\n" + esc(this.responseText + ''));
        }
```
```diff
@@ -7961,7 +8029,8 @@ function sandbox(tgt, rules, cls, html) {
        env = js.split(/\blogues *=/)[0] + 'a;';
    }

-   html = '<html class="iframe ' + document.documentElement.className + '"><head><style>' + globalcss() +
+   html = '<html class="iframe ' + document.documentElement.className +
+       '"><head><style>html{background:#eee;color:#000}\n' + globalcss() +
        '</style><base target="_parent"></head><body id="b" class="logue ' + cls + '">' + html +
        '<script>' + env + '</script>' + sandboxjs() +
        '<script>var d=document.documentElement,TS="' + TS + '",' +
```
```diff
@@ -56,7 +56,7 @@
   <li>running <code>rclone mount</code> as root? add <code>--allow-other</code></li>
   <li>old version of rclone? replace all <code>=</code> with <code> </code> (space)</li>
 </ul>

 <p>if you want to use the native WebDAV client in windows instead (slow and buggy), first run <a href="{{ r }}/.cpr/a/webdav-cfg.bat">webdav-cfg.bat</a> to remove the 47 MiB filesize limit (also fixes latency and password login), then connect:</p>
 <pre>
 net use <b>w:</b> http{{ s }}://{{ ep }}/{{ rvp }}{% if accs %} k /user:<b>{{ pw }}</b>{% endif %}

@@ -64,16 +64,7 @@
 </div>

 <div class="os lin">
-<pre>
-yum install davfs2
-{% if accs %}printf '%s\n' <b>{{ pw }}</b> k | {% endif %}mount -t davfs -ouid=1000 http{{ s }}://{{ ep }}/{{ rvp }} <b>mp</b>
-</pre>
-<p>make it automount on boot:</p>
-<pre>
-printf '%s\n' "http{{ s }}://{{ ep }}/{{ rvp }} <b>{{ pw }}</b> k" >> /etc/davfs2/secrets
-printf '%s\n' "http{{ s }}://{{ ep }}/{{ rvp }} <b>mp</b> davfs rw,user,uid=1000,noauto 0 0" >> /etc/fstab
-</pre>
-<p>or you can use rclone instead, which is much slower but doesn't require root (plus it keeps lastmodified on upload):</p>
+<p>rclone (v1.63 or later) is recommended:</p>
 <pre>
 rclone config create {{ aname }}-dav webdav url=http{{ s }}://{{ rip }}{{ hport }} vendor=owncloud pacer_min_sleep=0.01ms{% if accs %} user=k pass=<b>{{ pw }}</b>{% endif %}
 rclone mount --vfs-cache-mode writes --dir-cache-time 5s {{ aname }}-dav:{{ rvp }} <b>mp</b>

@@ -85,6 +76,16 @@
   <li>running <code>rclone mount</code> as root? add <code>--allow-other</code></li>
   <li>old version of rclone? replace all <code>=</code> with <code> </code> (space)</li>
 </ul>
+<p>alternatively use davfs2 (requires root, is slower, forgets lastmodified-timestamp on upload):</p>
+<pre>
+yum install davfs2
+{% if accs %}printf '%s\n' <b>{{ pw }}</b> k | {% endif %}mount -t davfs -ouid=1000 http{{ s }}://{{ ep }}/{{ rvp }} <b>mp</b>
+</pre>
+<p>make davfs2 automount on boot:</p>
+<pre>
+printf '%s\n' "http{{ s }}://{{ ep }}/{{ rvp }} <b>{{ pw }}</b> k" >> /etc/davfs2/secrets
+printf '%s\n' "http{{ s }}://{{ ep }}/{{ rvp }} <b>mp</b> davfs rw,user,uid=1000,noauto 0 0" >> /etc/fstab
+</pre>
 <p>or the emergency alternative (gnome/gui-only):</p>
 <!-- gnome-bug: ignores vp -->
 <pre>

@@ -104,7 +105,7 @@
 <pre>
 http{{ s }}://k:<b>{{ pw }}</b>@{{ ep }}/{{ rvp }}
 </pre>

 {% if s %}
 <p><em>replace <code>https</code> with <code>http</code> if it doesn't work</em></p>
 {% endif %}
```
@@ -184,6 +184,7 @@ html {
 padding: 1.5em 2em;
 border-width: .5em 0;
 }
+.logue code,
 #modalc code,
 #tt code {
 color: #eee;
@@ -264,7 +265,11 @@ html.y #tth {
 box-shadow: 0 .3em 3em rgba(0,0,0,0.5);
 max-width: 50em;
 max-height: 30em;
-overflow: auto;
+overflow-x: auto;
+overflow-y: scroll;
+}
+#modalc.yk {
+overflow-y: auto;
 }
 #modalc td {
 text-align: unset;

@@ -658,7 +658,9 @@ function Donut(uc, st) {
 }

 function pos() {
-    return uc.fsearch ? Math.max(st.bytes.hashed, st.bytes.finished) : st.bytes.finished;
+    return uc.fsearch ?
+        Math.max(st.bytes.hashed, st.bytes.finished) :
+        st.bytes.inflight + st.bytes.finished;
 }

 r.on = function (ya) {
@@ -853,6 +855,7 @@ function up2k_init(subtle) {
 setmsg(suggest_up2k, 'msg');

 var parallel_uploads = ebi('nthread').value = icfg_get('nthread', u2j),
+    stitch_tgt = ebi('u2szg').value = icfg_get('u2sz', u2sz.split(',')[1]),
     uc = {},
     fdom_ctr = 0,
     biggest_file = 0;
@@ -1207,7 +1210,7 @@ function up2k_init(subtle) {
 match = false;

 if (match) {
-    var msg = ['directory iterator got stuck on the following {0} items; good chance your browser is about to spinlock:<ul>'.format(missing.length)];
+    var msg = ['directory iterator got stuck trying to access the following {0} items; will skip:<ul>'.format(missing.length)];
     for (var a = 0; a < Math.min(20, missing.length); a++)
         msg.push('<li>' + esc(missing[a]) + '</li>');

@@ -1736,6 +1739,11 @@ function up2k_init(subtle) {
 }
 }

+if (st.bytes.inflight && (st.bytes.inflight < 0 || !st.busy.upload.length)) {
+    console.log('insane inflight ' + st.bytes.inflight);
+    st.bytes.inflight = 0;
+}
+
 var mou_ikkai = false;

 if (st.busy.handshake.length &&
@@ -2178,7 +2186,7 @@ function up2k_init(subtle) {
 st.busy.head.push(t);

 var xhr = new XMLHttpRequest();
-xhr.onerror = function () {
+xhr.onerror = xhr.ontimeout = function () {
     console.log('head onerror, retrying', t.name, t);
     if (!toast.visible)
         toast.warn(9.98, L.u_enethd + "\n\nfile: " + t.name, t);
@@ -2222,6 +2230,7 @@ function up2k_init(subtle) {
 try { orz(e); } catch (ex) { vis_exh(ex + '', 'up2k.js', '', '', ex); }
 };

+xhr.timeout = 34000;
 xhr.open('HEAD', t.purl + uricom_enc(t.name), true);
 xhr.send();
 }
@@ -2247,7 +2256,7 @@ function up2k_init(subtle) {
 console.log("sending keepalive handshake", t.name, t);

 var xhr = new XMLHttpRequest();
-xhr.onerror = function () {
+xhr.onerror = xhr.ontimeout = function () {
     if (t.t_busied != me) // t.done ok
         return console.log('zombie handshake onerror', t.name, t);

@@ -2374,11 +2383,23 @@ function up2k_init(subtle) {
 var arr = st.todo.upload,
     sort = arr.length && arr[arr.length - 1].nfile > t.n;

-for (var a = 0; a < t.postlist.length; a++)
+for (var a = 0; a < t.postlist.length; a++) {
+    var nparts = [], tbytes = 0, stitch = stitch_tgt;
+    if (t.nojoin && t.nojoin - t.postlist.length < 6)
+        stitch = 1;
+
+    --a;
+    for (var b = 0; b < stitch; b++) {
+        nparts.push(t.postlist[++a]);
+        tbytes += chunksize;
+        if (tbytes + chunksize > stitch * 1024 * 1024 || t.postlist[a + 1] - t.postlist[a] !== 1)
+            break;
+    }
     arr.push({
         'nfile': t.n,
-        'npart': t.postlist[a]
+        'nparts': nparts
     });
+}

 msg = null;
 done = false;
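
as a rough standalone illustration of the grouping rule in the hunk above (not the actual client code; a python sketch with made-up chunk indices, where the size budget mirrors the `stitch * 1024 * 1024` check):

```python
# hypothetical sketch of the chunk-stitching rule above: group consecutive
# chunk indices into one POST, capped by a per-POST size budget
def stitch_groups(postlist, chunksize, stitch):
    groups, a = [], 0
    while a < len(postlist):
        nparts, tbytes = [], 0
        for _ in range(stitch):
            nparts.append(postlist[a])
            tbytes += chunksize
            a += 1
            if (a >= len(postlist)
                    or tbytes + chunksize > stitch * 1024 * 1024
                    or postlist[a] - postlist[a - 1] != 1):
                break
        groups.append(nparts)
    return groups

# chunks 5,6,7 are consecutive so they join; 9 is not, so it goes alone:
print(stitch_groups([5, 6, 7, 9], 1024 * 1024, 64))  # [[5, 6, 7], [9]]
```
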
@@ -2387,7 +2408,7 @@ function up2k_init(subtle) {
 arr.sort(function (a, b) {
     return a.nfile < b.nfile ? -1 :
     /*  */ a.nfile > b.nfile ? 1 :
-           a.npart < b.npart ? -1 : 1;
+    /*  */ a.nparts[0] < b.nparts[0] ? -1 : 1;
 });
 }

@@ -2493,6 +2514,7 @@ function up2k_init(subtle) {

 xhr.open('POST', t.purl, true);
 xhr.responseType = 'text';
+xhr.timeout = 42000;
 xhr.send(JSON.stringify(req));
 }

@@ -2534,7 +2556,10 @@ function up2k_init(subtle) {
 function exec_upload() {
 var upt = st.todo.upload.shift(),
     t = st.files[upt.nfile],
-    npart = upt.npart,
+    nparts = upt.nparts,
+    pcar = nparts[0],
+    pcdr = nparts[nparts.length - 1],
+    snpart = pcar == pcdr ? pcar : ('' + pcar + '~' + pcdr),
     tries = 0;

 if (t.done)
@@ -2549,8 +2574,8 @@ function up2k_init(subtle) {
 pvis.seth(t.n, 1, "🚀 send");

 var chunksize = get_chunksize(t.size),
-    car = npart * chunksize,
-    cdr = car + chunksize;
+    car = pcar * chunksize,
+    cdr = (pcdr + 1) * chunksize;

 if (cdr >= t.size)
     cdr = t.size;
@@ -2560,14 +2585,19 @@ function up2k_init(subtle) {
 var txt = unpre((xhr.response && xhr.response.err) || xhr.responseText);
 if (txt.indexOf('upload blocked by x') + 1) {
     apop(st.busy.upload, upt);
-    apop(t.postlist, npart);
+    for (var a = pcar; a <= pcdr; a++)
+        apop(t.postlist, a);
     pvis.seth(t.n, 1, "ERROR");
     pvis.seth(t.n, 2, txt.split(/\n/)[0]);
     pvis.move(t.n, 'ng');
     return;
 }
 if (xhr.status == 200) {
-    pvis.prog(t, npart, cdr - car);
+    var bdone = cdr - car;
+    for (var a = pcar; a <= pcdr; a++) {
+        pvis.prog(t, a, Math.min(bdone, chunksize));
+        bdone -= chunksize;
+    }
     st.bytes.finished += cdr - car;
     st.bytes.uploaded += cdr - car;
     t.bytes_uploaded += cdr - car;
@@ -2576,18 +2606,21 @@ function up2k_init(subtle) {
 }
 else if (txt.indexOf('already got that') + 1 ||
     txt.indexOf('already being written') + 1) {
-    console.log("ignoring dupe-segment error", t.name, t);
+    t.nojoin = t.postlist.length;
+    console.log("ignoring dupe-segment with backoff", t.nojoin, t.name, t);
+    if (!toast.visible && st.todo.upload.length < 4)
+        toast.msg(10, L.u_cbusy);
 }
 else {
-    xhrchk(xhr, L.u_cuerr2.format(npart, Math.ceil(t.size / chunksize), t.name), "404, target folder not found (???)", "warn", t);
+    xhrchk(xhr, L.u_cuerr2.format(snpart, Math.ceil(t.size / chunksize), t.name), "404, target folder not found (???)", "warn", t);

     chill(t);
 }
 orz2(xhr);
 }
 var orz2 = function (xhr) {
 apop(st.busy.upload, upt);
-apop(t.postlist, npart);
+for (var a = pcar; a <= pcdr; a++)
+    apop(t.postlist, a);
 if (!t.postlist.length) {
     t.t_uploaded = Date.now();
     pvis.seth(t.n, 1, 'verifying');
@@ -2601,28 +2634,38 @@ function up2k_init(subtle) {
 btot = Math.floor(st.bytes.total / 1024 / 1024);

 xhr.upload.onprogress = function (xev) {
-    var nb = xev.loaded;
-    st.bytes.inflight += nb - xhr.bsent;
+    var nb = xev.loaded,
+        db = nb - xhr.bsent;
+
+    if (!db)
+        return;
+
+    st.bytes.inflight += db;
     xhr.bsent = nb;
-    pvis.prog(t, npart, nb);
+    xhr.timeout = 64000 + Date.now() - xhr.t0;
+    pvis.prog(t, pcar, nb);
 };
 xhr.onload = function (xev) {
     try { orz(xhr); } catch (ex) { vis_exh(ex + '', 'up2k.js', '', '', ex); }
 };
-xhr.onerror = function (xev) {
+xhr.onerror = xhr.ontimeout = function (xev) {
     if (crashed)
         return;

     st.bytes.inflight -= (xhr.bsent || 0);

     if (!toast.visible)
-        toast.warn(9.98, L.u_cuerr.format(npart, Math.ceil(t.size / chunksize), t.name), t);
+        toast.warn(9.98, L.u_cuerr.format(snpart, Math.ceil(t.size / chunksize), t.name), t);

     console.log('chunkpit onerror,', ++tries, t.name, t);
     orz2(xhr);
 };
+
+var chashes = [];
+for (var a = pcar; a <= pcdr; a++)
+    chashes.push(t.hash[a]);
+
 xhr.open('POST', t.purl, true);
-xhr.setRequestHeader("X-Up2k-Hash", t.hash[npart]);
+xhr.setRequestHeader("X-Up2k-Hash", chashes.join(","));
 xhr.setRequestHeader("X-Up2k-Wark", t.wark);
 xhr.setRequestHeader("X-Up2k-Stat", "{0}/{1}/{2}/{3} {4}/{5} {6}".format(
     pvis.ctr.ok, pvis.ctr.ng, pvis.ctr.bz, pvis.ctr.q, btot, btot - bfin,
@@ -2632,6 +2675,8 @@ function up2k_init(subtle) {
 xhr.overrideMimeType('Content-Type', 'application/octet-stream');

 xhr.bsent = 0;
+xhr.t0 = Date.now();
+xhr.timeout = 42000;
 xhr.responseType = 'text';
 xhr.send(t.fobj.slice(car, cdr));
 }
@@ -2732,13 +2777,34 @@ function up2k_init(subtle) {
 if (parallel_uploads > 16)
     parallel_uploads = 16;

-if (parallel_uploads > 7)
+if (parallel_uploads > 6)
     toast.warn(10, L.u_maxconn);
+else if (toast.txt == L.u_maxconn)
+    toast.hide();

 obj.value = parallel_uploads;
 bumpthread({ "target": 1 });
 }

+var read_u2sz = function () {
+    var el = ebi('u2szg'), n = parseInt(el.value), dv = u2sz.split(',');
+    stitch_tgt = n = (
+        isNaN(n) ? dv[1] :
+        n < dv[0] ? dv[0] :
+        n > dv[2] ? dv[2] : n
+    );
+    if (n == dv[1]) sdrop('u2sz'); else swrite('u2sz', n);
+    if (el.value != n) el.value = n;
+};
+ebi('u2szg').addEventListener('blur', read_u2sz);
+ebi('u2szg').onkeydown = function (e) {
+    if (anymod(e)) return;
+    var n = e.code == 'ArrowUp' ? 1 : e.code == 'ArrowDown' ? -1 : 0;
+    if (!n) return;
+    this.value = parseInt(this.value) + n;
+    read_u2sz();
+}
+
 function tgl_fsearch() {
 set_fsearch(!uc.fsearch);
 }

@@ -127,13 +127,13 @@ if ((document.location + '').indexOf(',rej,') + 1)

 try {
 console.hist = [];
-var CMAXHIST = 1000;
+var CMAXHIST = MOBILE ? 9000 : 44000;
 var hook = function (t) {
     var orig = console[t].bind(console),
         cfun = function () {
             console.hist.push(Date.now() + ' ' + t + ': ' + Array.from(arguments).join(', '));
             if (console.hist.length > CMAXHIST)
-                console.hist = console.hist.slice(CMAXHIST / 2);
+                console.hist = console.hist.slice(CMAXHIST / 4);

             orig.apply(console, arguments);
         };
@@ -1396,10 +1396,10 @@ var tt = (function () {
 o = ctr.querySelectorAll('*[tt]');

 for (var a = o.length - 1; a >= 0; a--) {
-    o[a].onfocus = _cshow;
-    o[a].onblur = _hide;
-    o[a].onmouseenter = _dshow;
-    o[a].onmouseleave = _hide;
+    o[a].addEventListener('focus', _cshow);
+    o[a].addEventListener('blur', _hide);
+    o[a].addEventListener('mouseenter', _dshow);
+    o[a].addEventListener('mouseleave', _hide);
 }
 r.hide();
 }
@@ -1536,6 +1536,7 @@ var modal = (function () {
 var r = {},
     q = [],
     o = null,
+    scrolling = null,
     cb_up = null,
     cb_ok = null,
     cb_ng = null,
@@ -1556,6 +1557,7 @@ var modal = (function () {
 r.nofocus = 0;

 r.show = function (html) {
+    tt.hide();
     o = mknod('div', 'modal');
     o.innerHTML = '<table><tr><td><div id="modalc">' + html + '</div></td></tr></table>';
     document.body.appendChild(o);
@@ -1579,6 +1581,7 @@ var modal = (function () {

 document.addEventListener('focus', onfocus);
 document.addEventListener('selectionchange', onselch);
+timer.add(scrollchk, 1);
 timer.add(onfocus);
 if (cb_up)
     setTimeout(cb_up, 1);
@@ -1586,6 +1589,8 @@ var modal = (function () {

 r.hide = function () {
     timer.rm(onfocus);
+    timer.rm(scrollchk);
+    scrolling = null;
     try {
         ebi('modal-ok').removeEventListener('blur', onblur);
     }
@@ -1604,13 +1609,28 @@ var modal = (function () {
 r.hide();
 if (cb_ok)
     cb_ok(v);
-}
+};
 var ng = function (e) {
     ev(e);
     r.hide();
     if (cb_ng)
         cb_ng(null);
-}
+};

+var scrollchk = function () {
+    if (scrolling === true)
+        return;
+
+    var o = ebi('modalc'),
+        vis = o.offsetHeight,
+        all = o.scrollHeight,
+        nsc = 8 + vis < all;
+
+    if (scrolling !== nsc)
+        clmod(o, 'yk', !nsc);
+
+    scrolling = nsc;
+};
+
 var onselch = function () {
     try {
@@ -2024,6 +2044,9 @@ function xhrchk(xhr, prefix, e404, lvl, tag) {
 if (xhr.status == 404)
     return toast.err(0, prefix + e404 + suf, tag);

+if (!xhr.status && !errtxt)
+    return toast.err(0, prefix + L.xhr0);
+
 if (is_cf && (xhr.status == 403 || xhr.status == 503)) {
     var now = Date.now(), td = now - cf_cha_t;
     if (td < 15000)

@@ -1,3 +1,117 @@
+▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
+# 2024-0722-2323 `v1.13.5` american sized
+
+## new features
+
+* long-distance uploads are now **twice as fast** on average 132a8350
+  * boost tcp windowsize scaling by stitching together smaller chunks into bigger chonks so they fly better across the atlantic
+  * i'm not kidding, on the two routes we've tested this on we gained 1.6x / 160% (from US-West to Finland) and **2.6x / 260%** (Norway to US-East)
+  * files that are between 4 MiB and 256 MiB see the biggest improvement; 70% faster <= 768 MiB, 40% <= 1.5 GiB, 10% <= 6G
+  * if this turns out to be buggy, disable it serverside with `--u2sz 1,1,1` or clientside in the browser-ui: `[⚙️]` -> `up2k switches` -> change `64` to `1`
+* u2c.py (CLI uploader): support stitching (☝️) + print a summary with hashing and upload speeds 987bce21
+* video files can play as audio 53f1e3c9
+  * audio is extracted serverside to avoid wasting bandwidth
+  * extraction is lossy (converted to opus or mp3 depending on browser)
+  * togglebutton `🎧` in the gridview toolbar to enable/disable
+* new hook: [into-the-cache-it-goes.py](https://github.com/9001/copyparty/tree/hovudstraum/bin/hooks#after-upload) d26a944d
+  * avoids a cloudflare bug (race condition?) where it will send truncated files to visitors on the very first load if several people simultaneously access a file that hasn't been viewed before
+
+## bugfixes
+
+* inline markdown/logues rendered black-on-black in firefox 54 and some other browsers from 2017 and older eeef8091
+* unintuitive folder thumbnail selection if folder contains both `Cover.jpg` and `cover.jpg` f955d2bd
+* the gridview toolbar got undocked after viewing a pic/vid dc449bf8
+
+## other changes
+
+* #90 recommend rclone in favor of davfs2 ef0ecf87
+* improved some error messages e565ad5f
+* added helptext exporters to generate the online [html](https://ocv.me/copyparty/helptext.html) and [txt](https://ocv.me/copyparty/helptext.txt) editions 59533990
+* mention that cloudflare is incompatible with uploading files larger than 383.9 GiB
+
+
+▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
+# 2024-0716-0457 `v1.13.4` descript.ion
+
+## new features
+
+* "medialinks"; instead of the usual hotlink, the basic-uploader (as used by sharex and such) can return a link that opens the file in the media viewer c9281f89
+  * enable for all uploads with volflag `medialinks`, or just for one upload by adding `?media` to the post url
+* thumbnails are now fully compatible with dirkeys/filekeys 52e06226
+* `--th-covers` will respect filename order, selecting the first matching filename as the folder thumbnail 1cdb1702
+* new hook: [bittorrent downloader](https://github.com/9001/copyparty/tree/hovudstraum/bin/hooks#on-message) bd3b3863 803e1565
+* hooks: d749683d
+  * can be restricted to only run when user has specific permissions
+  * user permissions are also included in the json message to the hook
+  * new syntax to prepend args to the hook's command
+  * (all this will be better documented after some additional upcoming hook-related features, see `--help-hooks` for now)
+* support `descript.ion` usenet metadata; will parse and render into directory listings when possible 927c3bce
+  * directory listings are now 2% slower, eh who's keeping count anyways
+* tftp-server: 45259251
+  * improved support for buggy clients
+  * improved ipv6 support, especially on macos
+  * improved robustness on unreliable networks
+* #85 new option `--gsel` to default-enable the client setting to select files by ctrl-clicking them in the grid 9a87ee2f
+* music player: set audio volume by scrollwheel 36d6d29a
+
+## bugfixes
+
+* race-the-beam (downloading an unfinished upload) could get interrupted near the end, requiring a manual resume in the browser's download manager to finish f37187a0
+* ftp-server: when accessing the root folder of servers without a root folder, it could mention inaccessible folders 84e8e1dd
+* ftp-server: uploads will automatically replace existing files if user has delete perms 0a9f4c60
+  * windows 2000 expects this behavior, otherwise it'll freak out and delete stuff and then not actually upload it, nice
+  * new option `--ftp-no-ow` restores old default behavior of rejecting upload if target filename exists
+* music player:
+  * stop trying to recover from a corrupted file if the user already fixed it manually 55a011b9
+  * support downloading the currently playing song regardless of current folder c06aa683
+* music player preloader: db6059e1
+  * stop searching after 5 folders of nothing
+  * don't crash playback by walking into error-pages
+* `--og` (rich discord embeds) was incompatible with viewing markdown docs d75a2c77
+* `--cgen` (configfile generator) much less jank d5de3f2f
+
+## other changes
+
+* mention that HTTP/2 is still usually slower than HTTP/1.1 dfe7f1d9
+* give up much sooner if a client is supposed to send a request body but isn't c549f367
+* support running copyparty as a server on windows 2000 and winXP 8c73e0cb 2fd12a83
+* updated deps 6e58514b
+  * copyparty.exe: python 3.12, pillow 10.4, pyinstaller 6.9
+  * dompurify 3.1.6
+
+
+▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
+# 2024-0601-2324 `v1.13.3` 700+
+
+## new features
+
+* keep tags when transcoding music to opus/mp3 07ea629c
+  * useful for batch-downloading folders with [on-the-fly transcoding](https://github.com/9001/copyparty#zip-downloads)
+  * excessively large tags will be individually dropped (traktor beatmaps, cover-art, xmp)
+
+## bugfixes
+
+* optimization for large amounts (700+) of tcp connections / clients 07b2bf11
+  * `select()` was used for non-https downloads and mdns/ssdp initialization, which would start spinning at more than 1024 FDs, so now they `poll()` when possible (so not on windows)
+  * default max number of connections on windows was lowered to 486 since windows maxes out at 512 FDs
+* the markdown editor autoindent would duplicate `<hr>` 692175f5
+
+## other changes
+
+* #83: more intuitive behavior for `--df` and the `df` volflag 5ad65450
+* print helpful warning if OS restrictions make it impossible to persist config b629d18d
+* censor filesystem paths in the download-as-zip error summary 5919607a
+* `u2c.exe`: explain that https is disabled bef96176
+* ux: 60c96f99
+  * hide lightbox buttons when a video is playing
+  * move audio seekbar text down a bit so it hides less of the waveform and minute-markers
+* updated dompurify to 3.1.5 f00b9394
+* updated docker images to alpine 3.20
+
+
 ▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
 # 2024-0510-1431 `v1.13.2` s3xmodit.zip

@@ -55,8 +55,8 @@ quick outline of the up2k protocol, see [uploading](https://github.com/9001/cop
 * server creates the `wark`, an identifier for this upload
   * `sha512( salt + filesize + chunk_hashes )`
   * and a sparse file is created for the chunks to drop into
-* client uploads each chunk
-  * header entries for the chunk-hash and wark
+* client sends a series of POSTs, with one or more consecutive chunks in each
+  * header entries for the chunk-hashes (comma-separated) and wark
 * server writes chunks into place based on the hash
 * client does another handshake with the hashlist; server replies with OK or a list of chunks to reupload

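to make the revised wording concrete, here is a rough python sketch of a single chunk-POST; the header names and the comma-separated hash list match what up2k.js sends in the hunks above, while the host, path, wark and hashes are made-up placeholder values:

```python
# hypothetical sketch of one up2k chunk-POST (values are placeholders;
# the wark and the chunk-hashes come from the preceding handshake)
import http.client

chashes = ["hash04", "hash05", "hash06"]  # hashes of consecutive chunks
body = b"..."                             # the corresponding chunk bytes

conn = http.client.HTTPConnection("127.0.0.1", 3923)
conn.request("POST", "/inc/bigfile.bin", body, {
    "X-Up2k-Hash": ",".join(chashes),     # comma-separated, as described above
    "X-Up2k-Wark": "the-wark-from-the-handshake",
    "Content-Type": "application/octet-stream",
})
print(conn.getresponse().status)          # 200 if the chunks were accepted
```
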
@@ -141,6 +141,7 @@ authenticate using header `Cookie: cppwd=foo` or url param `&pw=foo`
 | GET | `?ups&filter=f` | ...where URL contains `f` |
 | GET | `?mime=foo` | specify return mimetype `foo` |
 | GET | `?v` | render markdown file at URL |
+| GET | `?v` | open image/video/audio in mediaplayer |
 | GET | `?txt` | get file at URL as plaintext |
 | GET | `?txt=iso-8859-1` | ...with specific charset |
 | GET | `?th` | get image/video at URL as thumbnail |
@@ -169,6 +170,7 @@ authenticate using header `Cookie: cppwd=foo` or url param `&pw=foo`
 | mPOST | | `f=FILE` | upload `FILE` into the folder at URL |
 | mPOST | `?j` | `f=FILE` | ...and reply with json |
 | mPOST | `?replace` | `f=FILE` | ...and overwrite existing files |
+| mPOST | `?media` | `f=FILE` | ...and return medialink (not hotlink) |
 | mPOST | | `act=mkdir`, `name=foo` | create directory `foo` at URL |
 | POST | `?delete` | | delete URL recursively |
 | jPOST | `?delete` | `["/foo","/bar"]` | delete `/foo` and `/bar` recursively |
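
as a hypothetical illustration of the new `?media` flag combined with `?j` for a json reply; the hostname, filename and password are made-up, and the `requests` library is assumed purely for brevity:

```python
# hypothetical example: basic-upload a file and get a medialink back
import requests

r = requests.post(
    "https://example.com/music/?j&media",  # ?j = json reply, ?media = medialink
    headers={"Cookie": "cppwd=hunter2"},   # or append &pw=hunter2 to the url
    files={"f": open("song.opus", "rb")},  # the mPOST `f=FILE` field
)
print(r.json())
```
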
@@ -325,10 +327,6 @@ can be reproduced with `--no-sendfile --s-wr-sz 8192 --s-wr-slp 0.3 --rsp-slp 6`
 * remove brokers / multiprocessing stuff; https://github.com/9001/copyparty/tree/no-broker
 * reduce the nesting / indirections in `HttpCli` / `httpcli.py`
   * nearly zero benefit from stuff like replacing all the `self.conn.hsrv` with a local `hsrv` variable
-* reduce up2k roundtrips
-  * start from a chunk index and just go
-  * terminate client on bad data
-  * not worth the effort, just throw enough conncetions at it
 * single sha512 across all up2k chunks?
   * crypto.subtle cannot into streaming, would have to use hashwasm, expensive
 * separate sqlite table per tag

@@ -20,6 +20,7 @@ currently up to date with [awesome-selfhosted](https://github.com/awesome-selfho
 * 💾 = what copyparty offers as an alternative
 * 🔵 = similarities
 * ⚠️ = disadvantages (something copyparty does "better")
+* 🔥 = hazards


 ## toc
@@ -37,7 +38,7 @@ currently up to date with [awesome-selfhosted](https://github.com/awesome-selfho
 * [another matrix](#another-matrix)
 * [reviews](#reviews)
   * [copyparty](#copyparty)
-  * [hfs2](#hfs2)
+  * [hfs2](#hfs2) 🔥
   * [hfs3](#hfs3)
   * [nextcloud](#nextcloud)
   * [seafile](#seafile)
@@ -83,8 +84,8 @@ the table headers in the matrixes below are the different softwares, with a quic

 the softwares,
 * `a` = [copyparty](https://github.com/9001/copyparty)
-* `b` = [hfs2](https://rejetto.com/hfs/)
-* `c` = [hfs3](https://github.com/rejetto/hfs)
+* `b` = [hfs2](https://github.com/rejetto/hfs2/) 🔥
+* `c` = [hfs3](https://rejetto.com/hfs/)
 * `d` = [nextcloud](https://github.com/nextcloud/server)
 * `e` = [seafile](https://github.com/haiwen/seafile)
 * `f` = [rclone](https://github.com/rclone/rclone), specifically `rclone serve webdav .`
@@ -152,19 +153,20 @@ symbol legend,
 | feature / software | a | b | c | d | e | f | g | h | i | j | k | l | m |
 | ----------------------- | - | - | - | - | - | - | - | - | - | - | - | - | - |
 | download folder as zip | █ | █ | █ | █ | ╱ | | █ | | █ | █ | ╱ | █ | ╱ |
-| download folder as tar | █ | | | | | | | | | █ | | | |
+| download folder as tar | █ | | | | | | | | | | | | |
-| upload | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ |
+| upload | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ | ╱ | █ | █ |
-| parallel uploads | █ | | | █ | █ | | • | | █ | | █ | | █ |
+| parallel uploads | █ | | | █ | █ | | • | | █ | █ | █ | | █ |
-| resumable uploads | █ | | | | | | | | █ | | █ | ╱ | |
+| resumable uploads | █ | | █ | | | | | | █ | █ | █ | ╱ | |
-| upload segmenting | █ | | | | | | | █ | █ | | █ | ╱ | █ |
+| upload segmenting | █ | | | | | | | █ | █ | █ | █ | ╱ | █ |
 | upload acceleration | █ | | | | | | | | █ | | █ | | |
 | upload verification | █ | | | █ | █ | | | | █ | | | | |
 | upload deduplication | █ | | | | █ | | | | █ | | | | |
 | upload a 999 TiB file | █ | | | | █ | █ | • | | █ | | █ | ╱ | ╱ |
-| race the beam ("p2p") | █ | | | | | | | | | • | | | |
+| CTRL-V from device | █ | | | █ | | | | | | | | | |
+| race the beam ("p2p") | █ | | | | | | | | | | | | |
 | keep last-modified time | █ | | | █ | █ | █ | | | | | | █ | |
 | upload rules | ╱ | ╱ | ╱ | ╱ | ╱ | | | ╱ | ╱ | | ╱ | ╱ | ╱ |
-| ┗ max disk usage | █ | █ | | | █ | | | | █ | | | █ | █ |
+| ┗ max disk usage | █ | █ | █ | | █ | | | | █ | | | █ | █ |
 | ┗ max filesize | █ | | | | | | | █ | | | █ | █ | █ |
 | ┗ max items in folder | █ | | | | | | | | | | | ╱ | |
 | ┗ max file age | █ | | | | | | | | █ | | | | |
@@ -182,6 +184,8 @@ symbol legend,
 * `upload verification` = uploads are checksummed or otherwise confirmed to have been transferred correctly

+* `CTRL-V from device` = press CTRL-C in Windows Explorer (or whatever) and paste into the webbrowser to upload it
+
 * `race the beam` = files can be downloaded while they're still uploading; downloaders are slowed down such that the uploader is always ahead

 * `checksums provided` = when downloading a file from the server, the file's checksum is provided for verification client-side
@@ -213,7 +217,7 @@ symbol legend,
 | serve sftp (ssh) | | | | | | █ | | | | | | █ | █ |
 | serve smb/cifs | ╱ | | | | | █ | | | | | | | |
 | serve dlna | | | | | | █ | | | | | | | |
-| listen on unix-socket | | | | █ | █ | | █ | █ | █ | | █ | █ | |
+| listen on unix-socket | | | | █ | █ | | █ | █ | █ | █ | █ | █ | |
 | zeroconf | █ | | | | | | | | | | | | █ |
 | supports netscape 4 | ╱ | | | | | █ | | | | | • | | ╱ |
 | ...internet explorer 6 | ╱ | █ | | █ | | █ | | | | | • | | ╱ |
@@ -243,7 +247,7 @@ symbol legend,
 | listen multiple ports | █ | | | | | | | | | | | █ | |
 | virtual file system | █ | █ | █ | | | | █ | | | | | █ | |
 | reverse-proxy ok | █ | | █ | █ | █ | █ | █ | █ | • | • | • | █ | ╱ |
-| folder-rproxy ok | █ | | | | █ | █ | | • | • | • | • | | • |
+| folder-rproxy ok | █ | | █ | | █ | █ | | • | • | █ | • | | • |

 * `folder-rproxy` = reverse-proxying without dedicating an entire (sub)domain, using a subfolder instead
 * `l`/sftpgo:
@@ -266,9 +270,9 @@ symbol legend,
 | per-folder permissions | ╱ | | | █ | █ | | █ | | █ | █ | ╱ | █ | █ |
 | per-file permissions | | | | █ | █ | | █ | | █ | | | | █ |
 | per-file passwords | █ | | | █ | █ | | █ | | █ | | | | █ |
-| unmap subfolders | █ | | | | | | █ | | | █ | ╱ | • | |
+| unmap subfolders | █ | | █ | | | | █ | | | █ | ╱ | • | |
 | index.html blocks list | ╱ | | | | | | █ | | | • | | | |
-| write-only folders | █ | | | | | | | | | | █ | █ | |
+| write-only folders | █ | | █ | | | | | | | | █ | █ | |
 | files stored as-is | █ | █ | █ | █ | | █ | █ | | | █ | █ | █ | █ |
 | file versioning | | | | █ | █ | | | | | | | | |
 | file encryption | | | | █ | █ | █ | | | | | | █ | |
@@ -298,6 +302,7 @@ symbol legend,
 * `file action event hooks` = run script before/after upload, move, rename, ...
 * `one-way folder sync` = like rsync, optionally deleting unexpected files at target
 * `full sync` = stateful, dropbox-like sync
+* `speed throttle` = rate limiting (per ip, per user, per connection, anything like that)
 * `curl-friendly ls` = returns a [sortable plaintext folder listing](https://user-images.githubusercontent.com/241032/215322619-ea5fd606-3654-40ad-94ee-2bc058647bb2.png) when curled
 * `curl-friendly upload` = uploading with curl is just `curl -T some.bin http://.../`
 * `a`/copyparty remarks:
@@ -323,14 +328,14 @@ symbol legend,
 | feature / software | a | b | c | d | e | f | g | h | i | j | k | l | m |
 | ---------------------- | - | - | - | - | - | - | - | - | - | - | - | - | - |
 | single-page app | █ | | █ | █ | █ | | | █ | █ | █ | █ | | █ |
-| themes | █ | █ | | █ | | | | | █ | | | | |
+| themes | █ | █ | █ | █ | | | | | █ | | | | |
 | directory tree nav | █ | ╱ | | | █ | | | | █ | | ╱ | | |
 | multi-column sorting | █ | | | | | | | | | | | | |
 | thumbnails | █ | | | ╱ | ╱ | | | █ | █ | ╱ | | | █ |
 | ┗ image thumbnails | █ | | | █ | █ | | | █ | █ | █ | | | █ |
 | ┗ video thumbnails | █ | | | █ | █ | | | | █ | | | | █ |
 | ┗ audio spectrograms | █ | | | | | | | | | | | | |
-| audio player | █ | | | █ | █ | | | | █ | ╱ | | | █ |
+| audio player | █ | | ╱ | █ | █ | | | | █ | ╱ | | | █ |
 | ┗ gapless playback | █ | | | | | | | | • | | | | |
 | ┗ audio equalizer | █ | | | | | | | | | | | | |
 | ┗ waveform seekbar | █ | | | | | | | | | | | | |
@@ -348,16 +353,16 @@ symbol legend,
 | search by custom parser | █ | | | | | | | | | | | | |
 | find local file | █ | | | | | | | | | | | | |
 | undo recent uploads | █ | | | | | | | | | | | | |
-| create directories | █ | | | █ | █ | ╱ | █ | █ | █ | █ | █ | █ | █ |
+| create directories | █ | | █ | █ | █ | ╱ | █ | █ | █ | █ | █ | █ | █ |
-| image viewer | █ | | | █ | █ | | | | █ | █ | █ | | █ |
+| image viewer | █ | | █ | █ | █ | | | | █ | █ | █ | | █ |
 | markdown viewer | █ | | | | █ | | | | █ | ╱ | ╱ | | █ |
 | markdown editor | █ | | | | █ | | | | █ | ╱ | ╱ | | █ |
 | readme.md in listing | █ | | | █ | | | | | | | | | |
 | rename files | █ | █ | █ | █ | █ | ╱ | █ | | █ | █ | █ | █ | █ |
 | batch rename | █ | | | | | | | | █ | | | | |
 | cut / paste files | █ | █ | | █ | █ | | | | █ | | | | █ |
-| move files | █ | █ | | █ | █ | | █ | | █ | █ | █ | | █ |
+| move files | █ | █ | █ | █ | █ | | █ | | █ | █ | █ | | █ |
-| delete files | █ | █ | | █ | █ | ╱ | █ | █ | █ | █ | █ | █ | █ |
+| delete files | █ | █ | █ | █ | █ | ╱ | █ | █ | █ | █ | █ | █ | █ |
 | copy files | | | | | █ | | | | █ | █ | █ | | █ |

 * `single-page app` = multitasking; possible to continue navigating while uploading
@@ -367,8 +372,12 @@ symbol legend,
 * `undo recent uploads` = accounts without delete permissions have a time window where they can undo their own uploads
 * `a`/copyparty has teeny-tiny skips playing gapless albums depending on audio codec (opus best)
 * `b`/hfs2 has a very basic directory tree view, not showing sibling folders
+* `c`/hfs3 remarks:
+  * audio playback does not continue into next song
 * `f`/rclone can do some file management (mkdir, rename, delete) when hosting througn webdav
-* `j`/filebrowser has a plaintext viewer/editor
+* `j`/filebrowser remarks:
+  * audio playback does not continue into next song
+  * plaintext viewer/editor
 * `k`/filegator directory tree is a modal window
@@ -424,6 +433,7 @@ symbol legend,
 * 💾 are what copyparty offers as an alternative
 * 🔵 are similarities
 * ⚠️ are disadvantages (something copyparty does "better")
+* 🔥 are hazards

 ## [copyparty](https://github.com/9001/copyparty)
 * resumable uploads which are verified server-side
@@ -431,8 +441,9 @@ symbol legend,
 * both of the above are surprisingly uncommon features
 * very cross-platform (python, no dependencies)

-## [hfs2](https://rejetto.com/hfs/)
-* the OG, the legend
+## [hfs2](https://github.com/rejetto/hfs2/)
+* the OG, the legend (now replaced by [hfs3](#hfs3))
+* 🔥 hfs2 is dead and dangerous! unfixed RCE: [info](https://github.com/rejetto/hfs2/issues/44), [info](https://github.com/drapid/hfs/issues/3), [info](https://asec.ahnlab.com/en/67650/)
 * ⚠️ uploads not resumable / accelerated / integrity-checked
 * ⚠️ on cloudflare: max upload size 100 MiB
 * ⚠️ windows-only
@@ -440,10 +451,19 @@ symbol legend,
 * vfs with gui config, per-volume permissions
 * starting to show its age, hence the rewrite:

-## [hfs3](https://github.com/rejetto/hfs)
+## [hfs3](https://rejetto.com/hfs/)
 * nodejs; cross-platform
 * vfs with gui config, per-volume permissions
-* still early development, let's revisit later
+* 🔵 uploads are resumable
+  * ⚠️ uploads are not segmented; max upload size 100 MiB on cloudflare
+* ⚠️ uploads are not accelerated (copyparty is 3x faster across the atlantic)
+* ⚠️ uploads are not integrity-checked
+* ⚠️ copies the file after upload; need twice filesize free disk space
+* ⚠️ doesn't support crazy filenames
+* ✅ config GUI
+* ✅ download counter
+* ✅ watch active connections
+* ✅ plugins

 ## [nextcloud](https://github.com/nextcloud/server)
 * php, mariadb
@@ -497,6 +517,7 @@ symbol legend,
 * rust; cross-platform (windows, linux, macos)
 * ⚠️ uploads not resumable / accelerated / integrity-checked
 * ⚠️ on cloudflare: max upload size 100 MiB
+* ⚠️ across the atlantic, copyparty is 3x faster
 * ⚠️ doesn't support crazy filenames
 * ✅ per-url access control (copyparty is per-volume)
 * 🔵 basic but really snappy ui
@@ -539,8 +560,10 @@ symbol legend,

 ## [filebrowser](https://github.com/filebrowser/filebrowser)
 * go; cross-platform (windows, linux, mac)
-* ⚠️ uploads not resumable / accelerated / integrity-checked
-* ⚠️ on cloudflare: max upload size 100 MiB
+* 🔵 uploads are resumable and segmented
+* 🔵 multiple files are uploaded in parallel, but...
+  * ⚠️ big files are not accelerated (copyparty is 5x faster across the atlantic)
+* ⚠️ uploads are not integrity-checked
 * ⚠️ http only; no webdav / ftp / zeroconf
 * ⚠️ doesn't support crazy filenames
 * ⚠️ no directory tree nav
@@ -550,12 +573,14 @@ symbol legend,
 * ⚠️ but no directory tree for navigation
 * ✅ user signup
 * ✅ command runner / remote shell
-* 🔵 supposed to have write-only folders but couldn't get it to work
+* ✅ more efficient; can handle around twice as much simultaneous traffic

 ## [filegator](https://github.com/filegator/filegator)
-* go; cross-platform (windows, linux, mac)
+* php; cross-platform (windows, linux, mac)
 * 🔵 *it has upload segmenting and acceleration*
 * ⚠️ but uploads are still not integrity-checked
+* ⚠️ on copyparty, uploads are 40x faster
+  * compared to the official filegator docker example which might be bad
 * ⚠️ http only; no webdav / ftp / zeroconf
 * ⚠️ does not support symlinks
 * ⚠️ expensive download-as-zip feature
@@ -566,6 +591,7 @@ symbol legend,
 * go; cross-platform (windows, linux, mac)
 * ⚠️ http uploads not resumable / accelerated / integrity-checked
 * ⚠️ on cloudflare: max upload size 100 MiB
+* ⚠️ across the atlantic, copyparty is 2.5x faster
 * 🔵 sftp uploads are resumable
 * ⚠️ web UI is very minimal + a bit slow
 * ⚠️ no thumbnails / image viewer / audio player
@@ -573,6 +599,7 @@ symbol legend,
 * ⚠️ no filesystem indexing / search
 * ⚠️ doesn't run on phones, tablets
 * ⚠️ no zeroconf (mdns/ssdp)
+* ⚠️ impractical directory URLs
 * ⚠️ AGPL licensed
 * 🔵 ftp, ftps, webdav
 * ✅ sftp server
@@ -589,11 +616,13 @@ symbol legend,
 ## [arozos](https://github.com/tobychui/arozos)
 * big suite of applications similar to [kodbox](#kodbox), copyparty is better at downloading/uploading/music/indexing but arozos has other advantages
 * go; primarily linux (limited support for windows)
+* ⚠️ needs root
 * ⚠️ uploads not resumable / integrity-checked
 * ⚠️ uploading small files to copyparty is 2.7x faster
 * ⚠️ uploading large files to copyparty is at least 10% faster
   * arozos is websocket-based, 512 KiB chunks; writes each chunk to separate files and then merges
   * copyparty splices directly into the final file; faster and better for the HDD and filesystem
+* ⚠️ across the atlantic, uploading to copyparty is 6x faster
 * ⚠️ no directory tree navpane; not as easy to navigate
 * ⚠️ download-as-zip is not streaming; creates a temp.file on the server
 * ⚠️ not self-contained (pulls from jsdelivr)

@@ -49,7 +49,7 @@ thumbnails2 = ["pyvips"]
 audiotags = ["mutagen"]
 ftpd = ["pyftpdlib"]
 ftps = ["pyftpdlib", "pyopenssl"]
-tftpd = ["partftpy>=0.3.1"]
+tftpd = ["partftpy>=0.4.0"]
 pwhash = ["argon2-cffi"]

 [project.scripts]

@@ -3,7 +3,7 @@ WORKDIR /z
 ENV ver_asmcrypto=c72492f4a66e17a0e5dd8ad7874de354f3ccdaa5 \
     ver_hashwasm=4.10.0 \
     ver_marked=4.3.0 \
-    ver_dompf=3.1.5 \
+    ver_dompf=3.1.6 \
     ver_mde=2.18.0 \
     ver_codemirror=5.65.16 \
     ver_fontawesome=5.13.0 \

@@ -17,3 +17,19 @@ but I don't really know what i'm doing here 💩
 `podman login docker.io`
 `podman login ghcr.io -u 9001`
 [about gchq](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry) (takes a classic token as password)
+
+
+## building on alpine
+
+```bash
+apk add podman{,-docker}
+rc-update add cgroups
+service cgroups start
+vim /etc/containers/storage.conf # driver = "btrfs"
+modprobe tun
+echo ed:100000:65536 >/etc/subuid
+echo ed:100000:65536 >/etc/subgid
+apk add qemu-openrc qemu-tools qemu-{arm,armeb,aarch64,s390x,ppc64le}
+rc-update add qemu-binfmt
+service qemu-binfmt start
+```
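(not part of the diff:) once qemu-binfmt is running, cross-arch emulation can be sanity-checked with something like this; the alpine image is just an example:

```bash
# should print aarch64 if binfmt emulation is working
podman run --rm --arch arm64 docker.io/library/alpine uname -m
```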
81 scripts/help2html.py Executable file
@@ -0,0 +1,81 @@
+#!/usr/bin/env python3
+
+import re
+import subprocess as sp
+
+
+# to convert the copyparty --help to html, run this in xfce4-terminal @ 140x43:
+_ = r""""
+echo; for a in '' -accounts -flags -handlers -hooks -urlform -exp -ls -dbd -pwhash -zm; do
+./copyparty-sfx.py --help$a 2>/dev/null; printf '\n\n\n%0139d\n\n\n'; done # xfce4-terminal @ 140x43
+"""
+# click [edit] => [select all]
+# click [edit] => [copy as html]
+# and then run this script
+
+
+def readclip():
+    cmds = [
+        "xsel -ob",
+        "xclip -selection CLIPBOARD -o",
+        "pbpaste",
+    ]
+    for cmd in cmds:
+        try:
+            return sp.check_output(cmd.split()).decode("utf-8")
+        except:
+            pass
+
+
+def cnv(src):
+    yield '<html style="background:#222;color:#fff"><body>'
+    skip_sfx = False
+    in_sfx = 0
+    in_salt = 0
+
+    while True:
+        ln = next(src)
+        if "<font" in ln:
+            if not ln.startswith("<pre>"):
+                ln = "<pre>" + ln
+            yield ln
+            break
+
+    for ln in src:
+        ln = ln.rstrip()
+        if re.search(r"^<font[^>]+>copyparty v[0-9]", ln):
+            in_sfx = 3
+        if in_sfx:
+            in_sfx -= 1
+            if not skip_sfx:
+                yield ln
+            continue
+        if '">uuid:' in ln:
+            ln = re.sub(r">uuid:[0-9a-f-]{36}<", ">autogenerated<", ln)
+        if "-salt SALT" in ln:
+            in_salt = 3
+        if in_salt:
+            in_salt -= 1
+            t = ln
+            ln = re.sub(r">[0-9a-zA-Z/+]{24}<", ">24-character-autogenerated<", ln)
+            ln = re.sub(r">[0-9a-zA-Z/+]{40}<", ">40-character-autogenerated<", ln)
+            if t != ln:
+                in_salt = 0
+        ln = ln.replace(">/home/ed/", ">~/")
+        if ln.startswith("0" * 20):
+            skip_sfx = True
+        yield ln
+
+    yield "</pre>eof</body></html>"
+
+
+def main():
+    src = readclip()
+    src = re.split("0{100,200}", src[::-1], 1)[1][::-1]
+    with open("helptext.html", "wb") as fo:
+        for ln in cnv(iter(src.split("\n")[:-3])):
+            fo.write(ln.encode("utf-8") + b"\r\n")
+
+
+if __name__ == "__main__":
+    main()
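(usage note, confirmed by the script itself:) after copying the terminal output as html per the comments above, the script runs with no arguments and writes helptext.html to the current directory:

```bash
# needs xsel, xclip or pbpaste on PATH to read the clipboard
./scripts/help2html.py && ls -l helptext.html
```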
26 scripts/help2txt.sh Executable file
@@ -0,0 +1,26 @@
+#!/bin/bash
+set -e
+
+( xsel -ob | sed -r '
+  s`/home/ed/`~/`;
+  s/uuid:[0-9a-f-]{36}/autogenerated/;
+  s/(-salt SALT.*default: )[0-9a-zA-Z/+]{24}\)/\124-character-autogenerated)/;
+  s/(-salt SALT.*default: )[0-9a-zA-Z/+]{40}\)/\140-character-autogenerated)/;
+' | awk '
+  /^copyparty/{a=1} !a{next}
+  /^0{20}/{b=1} b&&/^copyparty v[0-9]+\./{s=3}
+  s{s-=1;next} 1' |
+head -n-6; echo eof ) >helptext.txt
+exit 0
+
+
+# =====================================================================
+# end of script; below is the explanation how to use this:
+
+
+# first open an infinitely wide console (this is why you own an ultrawide) and copypaste this into it:
+for a in '' -accounts -flags -handlers -hooks -urlform -exp -ls -dbd -pwhash -zm; do
+./copyparty-sfx.py --help$a 2>/dev/null; printf '\n\n\n%0255d\n\n\n'; done
+
+# then copypaste all of the output by pressing ctrl-shift-a, ctrl-shift-c
+# and finally actually run this script which should produce helptext.txt
@@ -219,9 +219,9 @@ necho() {
 	mv pyftpdlib ftp/

 	necho collecting partftpy
-	f="../build/partftpy-0.3.1.tar.gz"
+	f="../build/partftpy-0.4.0.tar.gz"
 	[ -e "$f" ] ||
-		(url=https://files.pythonhosted.org/packages/37/79/1a1de1d3fdf27ddc9c2d55fec6552e7b8ed115258fedac6120679898b83d/partftpy-0.3.1.tar.gz;
+		(url=https://files.pythonhosted.org/packages/8c/96/642bb3ddcb07a2c6764eb29aa562d1cf56877ad6c330c3c8921a5f05606d/partftpy-0.4.0.tar.gz;
 		wget -O$f "$url" || curl -L "$url" >$f)

 	tar -zxf $f
@@ -490,6 +490,11 @@ while IFS= read -r f; do
 	tmv "$f"
 done

+grep -rlE '^class [^(]+:' |
+while IFS= read -r f; do
+	ised 's/(^class [^(:]+):/\1(object):/' "$f"
+done
+
 # up2k goes from 28k to 22k laff
 awk 'BEGIN{gensub(//,"",1)}' </dev/null 2>/dev/null &&
 echo entabbening &&
@@ -16,7 +16,7 @@ uname -s | grep WOW64 && m=64 || m=32
 uname -s | grep NT-10 && w10=1 || w7=1
 [ $w7 ] && [ -e up2k.sh ] && [ ! "$1" ] && ./up2k.sh

-[ $w7 ] && pyv=37 || pyv=311
+[ $w7 ] && pyv=37 || pyv=312
 esuf=
 [ $w7 ] && [ $m = 32 ] && esuf=32
 [ $w7 ] && [ $m = 64 ] && esuf=-winpe64
@@ -127,3 +127,6 @@ grep -q $csum uplod.log && echo upload OK || {
 	echo UPLOAD FAILED
 	exit 1
 }
+
+echo; read -u1 -n1 -p 'shutdown? y/n: '
+[ "$REPLY" = y ] && shutdown -s -t 1
@@ -1,33 +1,39 @@
 f117016b1e6a7d7e745db30d3e67f1acf7957c443a0dd301b6c5e10b8368f2aa4db6be9782d2d3f84beadd139bfeef4982e40f21ca5d9065cb794eeb0e473e82 altgraph-0.17.4-py2.py3-none-any.whl
-e0d2e6183437af321a36944f04a501e85181243e5fa2da3254254305dd8119161f62048bc56bff8849b49f546ff175b02b4c999401f1c404f6b88e6f46a9c96e Git-2.44.0-32-bit.exe
+6a624018f30da375581d5751eca0080edbbe37f102f643f856279fcfded3a4379fd1b6fb0661cdb2e72bbbbc726ca714a1f5990cc348df365db62bc53e4c4503 Git-2.45.2-32-bit.exe
-9d2c31701a4d3fef553928c00528a48f9e1854ab5333528b50e358a214eba90029d687f039bcda5760b6fdf9f2de3bcf3784ae21a6374cf2a97a845d33b636c6 packaging-24.0-py3-none-any.whl
 17ce52ba50692a9d964f57a23ac163fb74c77fdeb2ca988a6d439ae1fe91955ff43730c073af97a7b3223093ffea3479a996b9b50ee7fba0869247a56f74baa6 pefile-2023.2.7-py3-none-any.whl
-126ca016c00256f4ff13c88707ead21b3b98f3c665ae57a5bcbb80c8be3004bff36d9c7f9a1cc9d20551019708f2b195154f302d80a1e5a2026d6d0fe9f3d5f4 pyinstaller_hooks_contrib-2024.3-py2.py3-none-any.whl
 749a473646c6d4c7939989649733d4c7699fd1c359c27046bf5bc9c070d1a4b8b986bbc65f60d7da725baf16dbfdd75a4c2f5bb8335f2cb5685073f5fee5c2d1 pywin32_ctypes-0.2.2-py3-none-any.whl
-6e0d854040baff861e1647d2bece7d090bc793b2bd9819c56105b94090df54881a6a9b43ebd82578cd7c76d47181571b671e60672afd9def389d03c9dae84fcf setuptools-68.2.2-py3-none-any.whl
+085d39ef4426aa5f097fbc484595becc16e61ca23fc7da4d2a8bba540a3b82e789e390b176c7151bdc67d01735cce22b1562cdb2e31273225a2d3e275851a4ad setuptools-70.3.0-py3-none-any.whl
+360a141928f4a7ec18a994602cbb28bbf8b5cc7c077a06ac76b54b12fa769ed95ca0333a5cf728923a8e0baeb5cc4d5e73e5b3de2666beb05eb477d8ae719093 upx-4.2.4-win32.zip
 # u2c (win7)
 7a3bd4849f95e1715fe2e99613df70a0fedd944a9bfde71a0fadb837fe62c3431c30da4f0b75c74de6f1a459f1fdf7cb62eaf404fdbe45e2d121e0b1021f1580 certifi-2024.2.2-py3-none-any.whl
 9cc8acc5e269e6421bc32bb89261101da29d6ca337d39d60b9106de9ed7904e188716e4a48d78a2c4329026443fcab7acab013d2fe43778e30d6c4e4506a1b91 charset_normalizer-3.3.2-cp37-cp37m-win32.whl
 0ec1ae5c928b4a0001a254c8598b746049406e1eed720bfafa94d4474078eff76bf6e032124e2d4df4619052836523af36162443c6d746487b387d2e3476e691 idna-3.6-py3-none-any.whl
-b795abb26ba2f04f1afcfb196f21f638014b26c8186f8f488f1c2d91e8e0220962fbd259dbc9c3875222eb47fc95c73fc0606aaa6602b9ebc524809c9ba3501f requests-2.31.0-py3-none-any.whl
+cc08d0d87d184401872a2f82266d589253979b4cd02f23b51290fbb2a20082848fc72acbed8aacb74ac4af068d575ef96e66196c5068bc38fb0bcafdc7626869 requests-2.29.0-py3-none-any.whl
-61ed4500b6361632030f05229705c5c5a52cb47e31c0e6b55151c8f3beed631cd752ca6c3d6393d56a2acf6a453cfcf801e877116123c550922249c3a976e0f4 urllib3-1.26.18-py2.py3-none-any.whl
+fe5fee6cb8a2c68800b32353a0015e5d2e1ad1cb6e0c9e6acf86e48e5cdb5606ad465dc4485ea5fbc8701d8716a8a7f7148c57724ef9da26b0c0a76f6dbbd698 urllib3-1.26.19-py2.py3-none-any.whl
 # win7
+3253e86471e6f9fa85bfdb7684cd2f964ed6e35c6a4db87f81cca157c049bef43e66dfcae1e037b2fb904567b1e028aaeefe8983ba3255105df787406d2aa71e en_windows_7_professional_with_sp1_x86_dvd_u_677056.iso
+ab0db0283f61a5bbe44797d74546786bf41685175764a448d2e3bd629f292f1e7d829757b26be346b5044d78c9c1891736d93237cee4b1b6f5996a902c86d15f en_windows_7_professional_with_sp1_x64_dvd_u_676939.iso
 d130bfa136bd171b9972b5c281c578545f2a84a909fdf18a6d2d71dd12fb3d512a7a1fa5cf7300433adece1d306eb2f22d7278f4c90e744e04dc67ba627a82c0 future-1.0.0-py3-none-any.whl
 0b4d07434bf8d314f42893d90bce005545b44a509e7353a73cad26dc9360b44e2824218a1a74f8174d02eba87fba91baffa82c8901279a32ebc6b8386b1b4275 importlib_metadata-6.7.0-py3-none-any.whl
+9d2c31701a4d3fef553928c00528a48f9e1854ab5333528b50e358a214eba90029d687f039bcda5760b6fdf9f2de3bcf3784ae21a6374cf2a97a845d33b636c6 packaging-24.0-py3-none-any.whl
 5d7462a584105bccaa9cf376f5a8c5827ead099c813c8af7392d478a4398f373d9e8cac7bbad2db51b335411ab966b21e119b1b1234c9a7ab70c6ddfc9306da6 pip-24.0-py3-none-any.whl
 f298e34356b5590dde7477d7b3a88ad39c622a2bcf3fcd7c53870ce8384dd510f690af81b8f42e121a22d3968a767d2e07595036b2ed7049c8ef4d112bcf3a61 pyinstaller-5.13.2-py3-none-win32.whl
+ea73aa54cc6d5db20dfb127e54562dabf890e4cd6171a91b10a51af2bcfc76e1d64cbdce4546df2dcfe42b624724c85b1cd05934be2413425b1f880222727b4f pyinstaller-5.13.2-py3-none-win_amd64.whl
+2f4e3927a38cf7757bc9a1c06370d79209669a285a80f1b09cf9917137825c7022a50a56b351807e6e687e2c3a7bd7b2c5cc6daeb4d90e11920284c1a04a1cc3 pyinstaller_hooks_contrib-2023.8-py2.py3-none-any.whl
 6bb73cc2db795c59c92f2115727f5c173cacc9465af7710db9ff2f2aec2d73130d0992d0f16dcb3fac222dc15c0916562d0813b2337401022020673a4461df3d python-3.7.9-amd64.exe
 500747651c87f59f2436c5ab91207b5b657856e43d10083f3ce27efb196a2580fadd199a4209519b409920c562aaaa7dcbdfb83ed2072a43eaccae6e2d056f31 python-3.7.9.exe
 03e50aecc85914567c114e38a1777e32628ee098756f37177bc23220eab33ac7d3ff591fd162db3b4d4e34d55cee93ef0dc67af68a69c38bb1435e0768dee57e typing_extensions-4.7.1-py3-none-any.whl
-2e04acff170ca3bbceeeb18489c687126c951ec0bfd53cccfb389ba8d29a4576c1a9e8f2e5ea26c84dd21bfa2912f4e71fa72c1e2653b71e34afc0e65f1722d4 upx-4.2.2-win32.zip
 68e1b618d988be56aaae4e2eb92bc0093627a00441c1074ebe680c41aa98a6161e52733ad0c59888c643a33fe56884e4f935178b2557fbbdd105e92e0d993df6 windows6.1-kb2533623-x64.msu
 479a63e14586ab2f2228208116fc149ed8ee7b1e4ff360754f5bda4bf765c61af2e04b5ef123976623d04df4976b7886e0445647269da81436bd0a7b5671d361 windows6.1-kb2533623-x86.msu
 ac96786e5d35882e0c5b724794329c9125c2b86ae7847f17acfc49f0d294312c6afc1c3f248655de3f0ccb4ca426d7957d02ba702f4a15e9fcd7e2c314e72c19 zipp-3.15.0-py3-none-any.whl
 # win10
+0a2cd4cadf0395f0374974cd2bc2407e5cc65c111275acdffb6ecc5a2026eee9e1bb3da528b35c7f0ff4b64563a74857d5c2149051e281cc09ebd0d1968be9aa en-us_windows_10_enterprise_ltsc_2021_x64_dvd_d289cf96.iso
+16cc0c58b5df6c7040893089f3eb29c074aed61d76dae6cd628d8a89a05f6223ac5d7f3f709a12417c147594a87a94cc808d1e04a6f1e407cc41f7c9f47790d1 virtio-win-0.1.248.iso
 d1420c8417fad7888766dd26b9706a87c63e8f33dceeb8e26d0056d5127b0b3ed9272e44b4b761132d4b3320327252eab1696520488ca64c25958896b41f547b jinja2-3.1.4-py3-none-any.whl
-e21495f1d473d855103fb4a243095b498ec90eb68776b0f9b48e994990534f7286c0292448e129c507e5d70409f8a05cca58b98d59ce2a815993d0a873dfc480 MarkupSafe-2.1.5-cp311-cp311-win_amd64.whl
+8e6847bcde75a2736be0214731f834bc1b5854238d703351e68bc4e74d38404b212b8568565ae22c844189e466d3fbe6024836351cb69ffb1824131387644fef MarkupSafe-2.1.5-cp312-cp312-win_amd64.whl
 8a6e2b13a2ec4ef914a5d62aad3db6464d45e525a82e07f6051ed10474eae959069e165dba011aefb8207cdfd55391d73d6f06362c7eb247b08763106709526e mutagen-1.47.0-py3-none-any.whl
-1dfe6f66bef5c9d62c9028a964196b902772ec9e19db215f3f41acb8d2d563586988d81b455fa6b895b434e9e1e9d57e4d271d1b1214483bdb3eadffcbba6a33 pillow-10.3.0-cp311-cp311-win_amd64.whl
+0203ec2551c4836696cfab0b2c9fff603352f03fa36e7476e2e1ca7ec57a3a0c24bd791fcd92f342bf817f0887854d9f072e0271c643de4b313d8c9569ba8813 packaging-24.1-py3-none-any.whl
-8760eab271e79256ae3bfb4af8ccc59010cb5d2eccdd74b325d1a533ae25eb127d51c2ec28ff90d449afed32dd7d6af62934fe9caaf1ae1f4d4831e948e912da pyinstaller-6.5.0-py3-none-win_amd64.whl
+2be320b4191f208cdd6af183c77ba2cf460ea52164ee45ac3ff17d6dfa57acd9deff016636c2dd42a21f4f6af977d5f72df7dacf599bebcf41757272354d14c1 pillow-10.4.0-cp312-cp312-win_amd64.whl
-897a14d5ee5cbc6781a0f48beffc27807a4f789d58c4329d899233f615d168a5dcceddf7f8f2d5bb52212ddcf3eba4664590d9f1fdb25bb5201f44899e03b2f7 python-3.11.9-amd64.exe
+776378f5414efd26ec8a1cb3228a7b5fdf6afca3fa335a0e9b071266d55d9d9e66ee157c25a468a05bfa70ccd33c48b101998523fc6ff6bcf5e82a1d81ed0af8 pyinstaller-6.9.0-py3-none-win_amd64.whl
-729dc52f1a02bc6274d012ce33f534102975a828cba11f6029600ea40e2d23aefeb07bf4ae19f9621d0565dd03eb2635bbb97d45fb692c1f756315e8c86c5255 upx-4.2.2-win64.zip
+c0af77d2a57cb063ab038dc986ed3582bc5acc8c8bd91d726101935d6388f50854ddbca26bc846ed5d1022cdee4d96242938c66f0ddc4565c36b60d691064db8 pyinstaller_hooks_contrib-2024.7-py2.py3-none-any.whl
+2f9a11ffae6d9f1ed76bf816f28812fcba71f87080b0c92e52bfccb46243118c5803a7e25dd78003ca7d66501bfcdce8ff7c691c63c0038b0d409ca3842dcc89 python-3.12.4-amd64.exe
@@ -34,6 +34,7 @@ https://support.microsoft.com/en-us/topic/microsoft-security-advisory-insecure-l
 see web.archive.org links below

 # direct links to version-frozen deps
+https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.248-1/virtio-win-0.1.248.iso
 https://www.python.org/ftp/python/3.7.9/python-3.7.9-amd64.exe
 https://www.python.org/ftp/python/3.7.9/python-3.7.9.exe
 https://web.archive.org/web/20200412130846if_/https://download.microsoft.com/download/2/D/7/2D78D0DD-2802-41F5-88D6-DC1D559F206D/Windows6.1-KB2533623-x86.msu
@@ -1,14 +1,26 @@
+after performing all the initial setup in this file,
 run ./build.sh in git-bash to build + upload the exe


-## ============================================================
-## first-time setup on a stock win7x32sp1 and/or win10x64 vm:
-##
-
 to obtain the files referenced below, see ./deps.txt

-download + install git (32bit OK on 64):
-http://192.168.123.1:3923/ro/pyi/Git-2.44.0-32-bit.exe
+and if you don't yet have a windows vm to build in, then the
+first step will be "creating the windows VM templates" below
+
+commands to start the VMs after initial setup:
+qemu-system-x86_64 -m 4096 -enable-kvm --machine q35 -cpu host -smp 4 -usb -device usb-tablet -net bridge,br=virhost0 -net nic,model=e1000e -drive file=win7-x32.qcow2,discard=unmap,detect-zeroes=unmap
+qemu-system-x86_64 -m 4096 -enable-kvm --machine q35 -cpu host -smp 4 -usb -device usb-tablet -net bridge,br=virhost0 -net nic,model=virtio -drive file=win10-e2021.qcow2,if=virtio,discard=unmap
+
+
+
+## ============================================================
+## first-time setup in a stock win7x32sp1 and/or win10x64 vm:
+##
+
+grab & install from ftp2host: Git-2.45.2-32-bit.exe
+
+...and do this on the host so you can grab these notes too:
+unix2dos <~/dev/copyparty/scripts/pyinstaller/notes.txt >~/dev/pyi/notes.txt


 ===[ copy-paste into git-bash ]================================
 uname -s | grep NT-10 && w10=1 || {
@@ -16,39 +28,40 @@ uname -s | grep NT-10 && w10=1 || {
 }
 fns=(
 altgraph-0.17.4-py2.py3-none-any.whl
-packaging-24.0-py3-none-any.whl
 pefile-2023.2.7-py3-none-any.whl
-pyinstaller_hooks_contrib-2024.3-py2.py3-none-any.whl
 pywin32_ctypes-0.2.2-py3-none-any.whl
-setuptools-68.2.2-py3-none-any.whl
+setuptools-70.3.0-py3-none-any.whl
+upx-4.2.4-win32.zip
 )
 [ $w10 ] && fns+=(
-pyinstaller-6.5.0-py3-none-win_amd64.whl
+jinja2-3.1.4-py3-none-any.whl
-Jinja2-3.1.4-py3-none-any.whl
+MarkupSafe-2.1.5-cp312-cp312-win_amd64.whl
-MarkupSafe-2.1.5-cp311-cp311-win_amd64.whl
 mutagen-1.47.0-py3-none-any.whl
-pillow-10.3.0-cp311-cp311-win_amd64.whl
+packaging-24.1-py3-none-any.whl
-python-3.11.9-amd64.exe
+pillow-10.4.0-cp312-cp312-win_amd64.whl
-upx-4.2.2-win64.zip
+pyinstaller-6.9.0-py3-none-win_amd64.whl
+pyinstaller_hooks_contrib-2024.7-py2.py3-none-any.whl
+python-3.12.4-amd64.exe
 )
-[ $w7 ] && fns+=(
+[ $w7 ] && fns+=( # u2c stuff
-pyinstaller-5.13.2-py3-none-win32.whl
 certifi-2024.2.2-py3-none-any.whl
 charset_normalizer-3.3.2-cp37-cp37m-win32.whl
 idna-3.6-py3-none-any.whl
-requests-2.31.0-py3-none-any.whl
+requests-2.29.0-py3-none-any.whl
-urllib3-1.26.18-py2.py3-none-any.whl
+urllib3-1.26.19-py2.py3-none-any.whl
-upx-4.2.2-win32.zip
 )
 [ $w7 ] && fns+=(
 future-1.0.0-py3-none-any.whl
 importlib_metadata-6.7.0-py3-none-any.whl
+packaging-24.0-py3-none-any.whl
 pip-24.0-py3-none-any.whl
+pyinstaller_hooks_contrib-2023.8-py2.py3-none-any.whl
 typing_extensions-4.7.1-py3-none-any.whl
 zipp-3.15.0-py3-none-any.whl
 )
 [ $w7x64 ] && fns+=(
 windows6.1-kb2533623-x64.msu
+pyinstaller-5.13.2-py3-none-win64.whl
 python-3.7.9-amd64.exe
 )
 [ $w7x32 ] && fns+=(
@@ -57,20 +70,24 @@ fns=(
 python-3.7.9.exe
 )
 dl() { curl -fkLOC- "$1" && return 0; echo "$1"; return 1; }
-cd ~/Downloads &&
+cd ~/Downloads && rm -f Git-*.exe &&
 for fn in "${fns[@]}"; do
 dl "https://192.168.123.1:3923/ro/pyi/$fn" || {
 echo ERROR; ok=; break
 }
 done

-manually install:
-windows6.1-kb2533623 + reboot
+WIN7-ONLY: manually install windows6.1-kb2533623 and reboot
-python-3.7.9
+manually install python-3.99.99.exe and then delete it
+
+close and reopen git-bash so python is in PATH


 ===[ copy-paste into git-bash ]================================
 uname -s | grep NT-10 && w10=1 || w7=1
-[ $w7 ] && pyv=37 || pyv=311
+[ $w7 ] && pyv=37 || pyv=312
 appd=$(cygpath.exe "$APPDATA")
 cd ~/Downloads &&
 yes | unzip upx-*-win32.zip &&
@@ -78,7 +95,7 @@ mv upx-*/upx.exe . &&
 python -m ensurepip &&
 { [ $w10 ] || python -m pip install --user -U pip-*.whl; } &&
 python -m pip install --user -U packaging-*.whl &&
-{ [ $w7 ] || python -m pip install --user -U {setuptools,mutagen,Pillow,Jinja2,MarkupSafe}-*.whl; } &&
+{ [ $w7 ] || python -m pip install --user -U {setuptools,mutagen,pillow,jinja2,MarkupSafe}-*.whl; } &&
 { [ $w10 ] || python -m pip install --user -U {requests,urllib3,charset_normalizer,certifi,idna}-*.whl; } &&
 { [ $w10 ] || python -m pip install --user -U future-*.whl importlib_metadata-*.whl typing_extensions-*.whl zipp-*.whl; } &&
 python -m pip install --user -U pyinstaller-*.whl pefile-*.whl pywin32_ctypes-*.whl pyinstaller_hooks_contrib-*.whl altgraph-*.whl &&
@@ -90,9 +107,65 @@ rm -f build.sh &&
 curl -fkLO https://192.168.123.1:3923/cpp/scripts/pyinstaller/build.sh &&
 { [ $w10 ] || curl -fkLO https://192.168.123.1:3923/cpp/scripts/pyinstaller/up2k.sh; } &&
 echo ok
-# python -m pip install --user -U Pillow-9.2.0-cp37-cp37m-win32.whl
-# sed -ri 's/, bestopt, /]+bestopt+[/' $APPDATA/Python/Python37/site-packages/pyinstaller/building/utils.py
-# sed -ri 's/(^\s+bestopt = ).*/\1["--best","--lzma","--ultra-brute"]/' $APPDATA/Python/Python37/site-packages/pyinstaller/building/utils.py
+
+now is an excellent time to take another snapshot, but:
+* on win7: first do the 4g.nul thing again
+* on win10: first do a reboot so fstrim kicks in
+then shutdown and: vmsnap the.qcow2 snap2
+
+
+
+
+
+## ============================================================
+## creating the windows VM templates
+##
+
+bash ~/dev/asm/doc/setup-virhost.sh # github:9001/asm
+truncate -s 4G ~/dev/pyi/4g.nul # win7 "trim"
+
+note: if you keep accidentally killing the vm with alt-f4 then remove "-device usb-tablet" in the qemu commands below
+
+# win7: don't bother with virtio stuff since win7 doesn't fstrim properly anyways (4g.nul takes care of that)
+rm -f win7-x32.qcow2
+qemu-img create -f qcow2 win7-x32.qcow2 64g
+qemu-system-x86_64 -m 4096 -enable-kvm --machine q35 -cpu host -smp 4 -usb -device usb-tablet -net bridge,br=virhost0 -net nic,model=e1000e -drive file=win7-x32.qcow2,discard=unmap,detect-zeroes=unmap \
+  -cdrom ~/iso/win7-X17-59183-english-32bit-professional.iso
+
+# win10: use virtio hdd and net (viostor+netkvm), but do NOT use qxl graphics (kills mouse cursor)
+rm -f win10-e2021.qcow2
+qemu-img create -f qcow2 win10-e2021.qcow2 64g
+qemu-system-x86_64 -m 4096 -enable-kvm --machine q35 -cpu host -smp 4 -usb -device usb-tablet -net bridge,br=virhost0 -net nic,model=virtio -drive file=win10-e2021.qcow2,if=virtio,discard=unmap \
+  -drive file=$HOME/iso/virtio-win-0.1.248.iso,media=cdrom -cdrom $HOME/iso/en-us_windows_10_enterprise_ltsc_2021_x64_dvd_d289cf96.iso
+
+tweak stuff to your preference, but also do these steps in order:
+* press ctrl-alt-g so you don't accidentally alt-f4 the vm
+* startmenu, type "sysdm.cpl" and hit Enter,
+  * system protection -> configure -> disable
+  * advanced > performance > advanced > virtual memory > no paging file
+* startmenu, type "cmd" and hit Ctrl-Shift-Enter, run command: powercfg /h off
+* reboot
+* make screen resolution something comfy (1440x900 is always a winner)
+* cmd.exe window-width 176 (assuming 1440x900) and buffer-height 8191
+* fix explorer settings (show hidden files and file extensions)
+* WIN10-ONLY: startmenu, device manager, install netkvm driver for ethernet
+* create ftp2host.bat on desktop with following contents:
+  start explorer ftp://wark:k@192.168.123.1:3921/ro/pyi/
+* WIN7-ONLY: connect to ftp, download 4g.nul to desktop, then delete it (poor man's fstrim...)
+
+and finally take snapshots of the VMs by copypasting this stuff into your shell:
+vmsnap() { zstd --long=31 -vT0 -19 <$1 >$1.$2; };
+vmsnap win7-x32.qcow2 snap1
+vmsnap win10-e2021.qcow2 snap1
+
+note: vmsnap could have defragged the qcow2 as well, but
+that makes it hard to do xdelta3 memes so it's not worth it --
+but you can add this before "zstd" if you still want to:
+qemu-img convert -f qcow2 -O qcow2 $1 a.qcow2 && mv a.qcow2 $1 &&
+
+
+
+
+
 ## ============================================================
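(not from the notes, but for reference:) restoring one of those snapshots is just the reverse operation, and zstd needs a matching --long flag to accept the large window it was compressed with:

```bash
# hypothetical restore of a snapshot created by vmsnap above
zstd -d --long=31 <win7-x32.qcow2.snap1 >win7-x32.qcow2
```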
@@ -25,7 +25,7 @@ grep -E '^from .ssl_ import' $APPDATA/python/python37/site-packages/urllib3/util/__init__.py
 echo golfing
 echo > $APPDATA/python/python37/site-packages/requests/certs.py
 sed -ri 's/^(DEFAULT_CA_BUNDLE_PATH = ).*/\1""/' $APPDATA/python/python37/site-packages/requests/utils.py
 sed -ri '/^import zipfile$/d' $APPDATA/python/python37/site-packages/requests/utils.py
 sed -ri 's/"idna"//' $APPDATA/python/python37/site-packages/requests/packages.py
 sed -ri 's/import charset_normalizer.*/pass/' $APPDATA/python/python37/site-packages/requests/compat.py
 sed -ri 's/raise.*charset_normalizer.*/pass/' $APPDATA/python/python37/site-packages/requests/__init__.py
@@ -47,6 +47,14 @@ def uh(top):


 def uh1(fp):
+    try:
+        uh2(fp)
+    except:
+        print("failed to process", fp)
+        raise
+
+
+def uh2(fp):
     pr(".")
     cs = strip_file_to_string(fp, no_ast=True, to_empty=True)

2 setup.py
@@ -141,7 +141,7 @@ args = {
         "audiotags": ["mutagen"],
         "ftpd": ["pyftpdlib"],
         "ftps": ["pyftpdlib", "pyopenssl"],
-        "tftpd": ["partftpy>=0.3.1"],
+        "tftpd": ["partftpy>=0.4.0"],
         "pwhash": ["argon2-cffi"],
     },
     "entry_points": {"console_scripts": ["copyparty = copyparty.__main__:main"]},
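these extras map straight to pip's optional-dependency syntax, so after this bump, installing the tftp flavor pulls the new partftpy (example invocation, not from the diff):

```bash
# installs copyparty plus partftpy>=0.4.0
python3 -m pip install "copyparty[tftpd]"
```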
@@ -23,7 +23,7 @@ def hdr(query, uname):
     return (h % (query, uname)).encode("utf-8")


-class TestHttpCli(unittest.TestCase):
+class TestDots(unittest.TestCase):
     def setUp(self):
         self.td = tu.get_ramdisk()

@@ -31,7 +31,7 @@ class TestHttpCli(unittest.TestCase):
         os.chdir(tempfile.gettempdir())
         shutil.rmtree(self.td)

-    def test(self):
+    def test_dots(self):
         td = os.path.join(self.td, "vfs")
         os.mkdir(td)
         os.chdir(td)
@@ -118,6 +118,214 @@ class TestHttpCli(unittest.TestCase):
         url = "v?k=" + zj["dk"]
         self.assertEqual(self.tarsel(url, "u2", ["f1.txt", "a", ".b"]), "f1.txt")
+
+        shutil.rmtree("v")
+
+    def test_dk_fk(self):
+        # python3 -m unittest tests.test_dots.TestDots.test_dk_fk
+
+        td = os.path.join(self.td, "vfs")
+        os.mkdir(td)
+        os.chdir(td)
+
+        vcfg = []
+        for k in "dk dks dky fk fka dk,fk dks,fk".split():
+            vcfg += ["{0}:{0}:r.,u1:g,u2:c,{0}".format(k)]
+            zs = "%s/s1/s2" % (k,)
+            os.makedirs(zs)
+
+            with open("%s/f.t1" % (k,), "wb") as f:
+                f.write(b"f1")
+
+            with open("%s/s1/f.t2" % (k,), "wb") as f:
+                f.write(b"f2")
+
+            with open("%s/s1/s2/f.t3" % (k,), "wb") as f:
+                f.write(b"f3")
+
+        self.args = Cfg(v=vcfg, a=["u1:u1", "u2:u2"])
+        self.asrv = AuthSrv(self.args, self.log)
+
+        dk = {}
+        for d in "dk dks dk,fk dks,fk".split():
+            zj = json.loads(self.curl("%s?ls" % (d,), "u1")[1])
+            dk[d] = zj["dk"]
+
+        ##
+        ## dk
+
+        # should not be able to access dk with wrong dirkey,
+        zs = self.curl("dk?ls&k=%s" % (dk["dks"]), "u2")[1]
+        self.assertEqual(zs, "\nJ2EOT")
+        # so use the right key
+        zs = self.curl("dk?ls&k=%s" % (dk["dk"]), "u2")[1]
+        zj = json.loads(zs)
+        self.assertEqual(len(zj["dirs"]), 0)
+        self.assertEqual(len(zj["files"]), 1)
+        self.assertEqual(zj["files"][0]["href"], "f.t1")
+
+        ##
+        ## dk thumbs
+
+        self.assertIn('">folder</text>', self.curl("dk?th=x", "u1")[1])
+        self.assertIn('">e403</text>', self.curl("dk?th=x", "u2")[1])
+
+        zs = "dk?th=x&k=%s" % (dk["dks"])
+        self.assertIn('">e403</text>', self.curl(zs, "u2")[1])
+
+        zs = "dk?th=x&k=%s" % (dk["dk"])
+        self.assertIn('">folder</text>', self.curl(zs, "u2")[1])
+
+        # fk not enabled, so this should work
+        self.assertIn('">t1</text>', self.curl("dk/f.t1?th=x", "u2")[1])
+        self.assertIn('">t2</text>', self.curl("dk/s1/f.t2?th=x", "u2")[1])
+
+        ##
+        ## dks
+
+        # should not be able to access dks with wrong dirkey,
+        zs = self.curl("dks?ls&k=%s" % (dk["dk"]), "u2")[1]
+        self.assertEqual(zs, "\nJ2EOT")
+        # so use the right key
+        zs = self.curl("dks?ls&k=%s" % (dk["dks"]), "u2")[1]
+        zj = json.loads(zs)
+        self.assertEqual(len(zj["dirs"]), 1)
+        self.assertEqual(len(zj["files"]), 1)
+        self.assertEqual(zj["files"][0]["href"], "f.t1")
+        # dks should return correct dirkey of subfolders;
+        s1 = zj["dirs"][0]["href"]
+        self.assertEqual(s1.split("/")[0], "s1")
+        zs = self.curl("dks/%s&ls" % (s1), "u2")[1]
+        self.assertIn('"s2/?k=', zs)
+
+        ##
+        ## dks thumbs
+
+        self.assertIn('">folder</text>', self.curl("dks?th=x", "u1")[1])
+        self.assertIn('">e403</text>', self.curl("dks?th=x", "u2")[1])
+
+        zs = "dks?th=x&k=%s" % (dk["dk"])
+        self.assertIn('">e403</text>', self.curl(zs, "u2")[1])
+
+        zs = "dks?th=x&k=%s" % (dk["dks"])
+        self.assertIn('">folder</text>', self.curl(zs, "u2")[1])
+
+        # fk not enabled, so this should work
+        self.assertIn('">t1</text>', self.curl("dks/f.t1?th=x", "u2")[1])
+        self.assertIn('">t2</text>', self.curl("dks/s1/f.t2?th=x", "u2")[1])
+
+        ##
+        ## dky
+
+        # doesn't care about keys
+        zs = self.curl("dky?ls&k=ok", "u2")[1]
+        self.assertEqual(zs, self.curl("dky?ls", "u2")[1])
+        zj = json.loads(zs)
+        self.assertEqual(len(zj["dirs"]), 0)
+        self.assertEqual(len(zj["files"]), 1)
+        self.assertEqual(zj["files"][0]["href"], "f.t1")
+
+        ##
+        ## dky thumbs
+
+        self.assertIn('">folder</text>', self.curl("dky?th=x", "u1")[1])
+        self.assertIn('">folder</text>', self.curl("dky?th=x", "u2")[1])
+
+        zs = "dky?th=x&k=%s" % (dk["dk"])
+        self.assertIn('">folder</text>', self.curl(zs, "u2")[1])
+
+        # fk not enabled, so this should work
+        self.assertIn('">t1</text>', self.curl("dky/f.t1?th=x", "u2")[1])
+        self.assertIn('">t2</text>', self.curl("dky/s1/f.t2?th=x", "u2")[1])
+
+        ##
+        ## dk+fk
+
+        # should not be able to access dk with wrong dirkey,
+        zs = self.curl("dk,fk?ls&k=%s" % (dk["dk"]), "u2")[1]
+        self.assertEqual(zs, "\nJ2EOT")
+        # so use the right key
+        zs = self.curl("dk,fk?ls&k=%s" % (dk["dk,fk"]), "u2")[1]
+        zj = json.loads(zs)
+        self.assertEqual(len(zj["dirs"]), 0)
+        self.assertEqual(len(zj["files"]), 1)
+        self.assertEqual(zj["files"][0]["href"][:7], "f.t1?k=")
+
+        ##
+        ## dk+fk thumbs
+
+        self.assertIn('">folder</text>', self.curl("dk,fk?th=x", "u1")[1])
+        self.assertIn('">e403</text>', self.curl("dk,fk?th=x", "u2")[1])
+
+        zs = "dk,fk?th=x&k=%s" % (dk["dk"])
+        self.assertIn('">e403</text>', self.curl(zs, "u2")[1])
+
+        zs = "dk,fk?th=x&k=%s" % (dk["dk,fk"])
+        self.assertIn('">folder</text>', self.curl(zs, "u2")[1])
+
+        # fk enabled, so this should fail
+        self.assertIn('">e404</text>', self.curl("dk,fk/f.t1?th=x", "u2")[1])
+        self.assertIn('">e404</text>', self.curl("dk,fk/s1/f.t2?th=x", "u2")[1])
+
+        # but dk should return correct filekeys, so try that
+        zs = "dk,fk/%s&th=x" % (zj["files"][0]["href"])
+        self.assertIn('">t1</text>', self.curl(zs, "u2")[1])
+
+        ##
+        ## dks+fk
+
+        # should not be able to access dk with wrong dirkey,
+        zs = self.curl("dks,fk?ls&k=%s" % (dk["dk"]), "u2")[1]
+        self.assertEqual(zs, "\nJ2EOT")
+        # so use the right key
+        zs = self.curl("dks,fk?ls&k=%s" % (dk["dks,fk"]), "u2")[1]
+        zj = json.loads(zs)
+        self.assertEqual(len(zj["dirs"]), 1)
+        self.assertEqual(len(zj["files"]), 1)
+        self.assertEqual(zj["dirs"][0]["href"][:6], "s1/?k=")
+        self.assertEqual(zj["files"][0]["href"][:7], "f.t1?k=")
+
+        ##
+        ## dks+fk thumbs
+
+        self.assertIn('">folder</text>', self.curl("dks,fk?th=x", "u1")[1])
+        self.assertIn('">e403</text>', self.curl("dks,fk?th=x", "u2")[1])
+
+        zs = "dks,fk?th=x&k=%s" % (dk["dk"])
+        self.assertIn('">e403</text>', self.curl(zs, "u2")[1])
+
+        zs = "dks,fk?th=x&k=%s" % (dk["dks,fk"])
+        self.assertIn('">folder</text>', self.curl(zs, "u2")[1])
+
+        # subdir s1 without key
+        zs = "dks,fk/s1/?th=x"
+        self.assertIn('">e403</text>', self.curl(zs, "u2")[1])
+
+        # subdir s1 with bad key
+        zs = "dks,fk/s1/?th=x&k=no"
+        self.assertIn('">e403</text>', self.curl(zs, "u2")[1])
+
+        # subdir s1 with correct key
+        zs = "dks,fk/%s&th=x" % (zj["dirs"][0]["href"])
+        self.assertIn('">folder</text>', self.curl(zs, "u2")[1])
+
+        # fk enabled, so this should fail
+        self.assertIn('">e404</text>', self.curl("dks,fk/f.t1?th=x", "u2")[1])
+        self.assertIn('">e404</text>', self.curl("dks,fk/s1/f.t2?th=x", "u2")[1])
+
+        # but dk should return correct filekeys, so try that
+        zs = "dks,fk/%s&th=x" % (zj["files"][0]["href"])
+        self.assertIn('">t1</text>', self.curl(zs, "u2")[1])
+
+        # subdir
+        self.assertIn('">e403</text>', self.curl("dks,fk/s1/?th=x", "u2")[1])
+        self.assertEqual("\nJ2EOT", self.curl("dks,fk/s1/?ls", "u2")[1])
+        zs = "dks,fk/s1%s&th=x" % (zj["files"][0]["href"])
+        zs = self.curl("dks,fk?ls&k=%s" % (dk["dks,fk"]), "u2")[1]
+        zj = json.loads(zs)
+        url = "dks,fk/%s" % zj["dirs"][0]["href"]
+        self.assertIn('"files"', self.curl(url + "&ls", "u2")[1])
+        self.assertEqual("\nJ2EOT", self.curl(url + "x&ls", "u2")[1])

     def tardir(self, url, uname):
         top = url.split("?")[0]
         top = ("top" if not top else top.lstrip(".").split("/")[0]) + "/"
@@ -235,7 +235,7 @@ class TestVFS(unittest.TestCase):
         """
         u a:123
         u asd:fgh:jkl

         ./src
         /dst
         r a
@@ -43,6 +43,7 @@ if MACOS:

 from copyparty.__init__ import E
 from copyparty.__main__ import init_E
+from copyparty.ico import Ico
 from copyparty.u2idx import U2idx
 from copyparty.util import FHC, CachedDict, Garda, Unrecv

@@ -110,7 +111,7 @@ class Cfg(Namespace):
     def __init__(self, a=None, v=None, c=None, **ka0):
         ka = {}

-        ex = "daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp early_ban ed emp exp force_js getmod grid hardlink ih ihead magic never_symlink nid nih no_acode no_athumb no_dav no_dedup no_del no_dupe no_lifetime no_logues no_mv no_pipe no_poll no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw og og_no_head og_s_title q rand smb srch_dbg stats uqe vague_403 vc ver xdev xlink xvol"
+        ex = "daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp early_ban ed emp exp force_js getmod grid gsel hardlink ih ihead magic never_symlink nid nih no_acode no_athumb no_dav no_dedup no_del no_dupe no_lifetime no_logues no_mv no_pipe no_poll no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw og og_no_head og_s_title q rand smb srch_dbg stats uqe vague_403 vc ver xdev xlink xvol"
         ka.update(**{k: False for k in ex.split()})

         ex = "dotpart dotsrch no_dhash no_fastboot no_rescan no_sendfile no_snap no_voldump re_dhash plain_ip"

@@ -119,7 +120,7 @@ class Cfg(Namespace):
         ex = "ah_cli ah_gen css_browser hist js_browser mime mimes no_forget no_hash no_idx nonsus_urls og_tpl og_ua"
         ka.update(**{k: None for k in ex.split()})

-        ex = "hash_mt srch_time u2abort u2j"
+        ex = "hash_mt srch_time u2abort u2j u2sz"
         ka.update(**{k: 1 for k in ex.split()})

         ex = "au_vol mtab_age reg_cap s_thead s_tbody th_convt"

@@ -134,7 +135,7 @@ class Cfg(Namespace):
         ex = "grp on403 on404 xad xar xau xban xbd xbr xbu xiu xm"
         ka.update(**{k: [] for k in ex.split()})

-        ex = "exp_lg exp_md th_coversd"
+        ex = "exp_lg exp_md"
         ka.update(**{k: {} for k in ex.split()})

         ka.update(ka0)

@@ -161,6 +162,10 @@ class Cfg(Namespace):
             s_wr_sz=256 * 1024,
             sort="href",
             srch_hits=99999,
+            th_covers=["folder.png"],
+            th_coversd=["folder.png"],
+            th_covers_set=set(["folder.png"]),
+            th_coversd_set=set(["folder.png"]),
             th_crop="y",
             th_size="320x256",
             th_x3="n",

@@ -245,7 +250,7 @@ class VHttpConn(object):
         self.bans = {}
         self.freshen_pwd = 0.0
         self.hsrv = VHttpSrv(args, asrv, log)
-        self.ico = None
+        self.ico = Ico(args)
         self.ipa_nm = None
         self.lf_url = None
         self.log_func = log