Compare commits

...

34 Commits

Author  SHA1        Message  Date

ed  42d00050c1  v1.13.0  2024-04-20 22:32:50 +00:00
ed  4bb0e6e75a  pipe: windows: make it safe with aggressive flushing  2024-04-20 22:15:08 +00:00
ed  2f7f9de3f5  pipe: optimize (1 GiB/s @ ryzen5-4500U)  2024-04-20 20:13:31 +00:00
ed  f31ac90932  less confusing help-text for --re-dhash  2024-04-20 16:42:56 +00:00
ed  439cb7f85b  u2c: add --ow (previously part of --dr)  2024-04-20 16:36:10 +00:00
ed  af193ee834  keep up2k state integrity on abort  2024-04-20 16:13:32 +00:00
ed  c06126cc9d  pipe: add volflag to disable  2024-04-19 23:54:23 +00:00
ed  897ffbbbd0  pipe: add to docs  2024-04-19 00:02:28 +00:00
ed  8244d3b4fc  pipe: add tapering to keep tcp alive  2024-04-18 23:10:37 +00:00
ed  74266af6d1  pipe: warn when trying to download a .PARTIAL  2024-04-18 23:10:11 +00:00
      and fix file sorting indicators on firefox
ed  8c552f1ad1  windows: fix upload-abort  2024-04-18 23:08:05 +00:00
ed  bf5850785f  add opt-out from storing uploader IPs  2024-04-18 17:16:00 +00:00
ed  feecb3e0b8  up2k: fix put-hasher dying + a harmless race  2024-04-18 16:43:38 +00:00
      * hasher thread could die if a client would rapidly
        upload and delete files (so very unlikely)
      * two unprotected calls to register_vpath which was
        almost-definitely safe because the volumes
        already existed in the registry
ed  08d8c82167  PoC: ongoing uploads can be downloaded in lockstep  2024-04-18 00:10:54 +00:00
ed  5239e7ac0c  separate registry mutex for faster access  2024-04-18 00:07:56 +00:00
      also fix a harmless toctou in handle_json where clients
      could get stuck hanging for a bit longer than necessary
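The commit above splits lock granularity: volume-registry lookups get their own mutex so they no longer wait behind heavyweight up2k work. A minimal sketch of the idea — class and method names are invented for illustration, this is not copyparty's actual code:

```python
import threading

class Up2kState:
    """Illustrative only: a coarse mutex guards slow operations,
    while a dedicated registry mutex keeps lookups cheap."""

    def __init__(self):
        self.mutex = threading.Lock()      # guards heavyweight state
        self.reg_mutex = threading.Lock()  # guards only the volume registry
        self.registry = {}

    def register_vpath(self, vpath, flags):
        # cheap critical section: only the registry lock is taken,
        # so this never blocks behind a long rescan
        with self.reg_mutex:
            self.registry.setdefault(vpath, {"flags": flags, "files": {}})
            return self.registry[vpath]

    def slow_rescan(self, vpath):
        # heavyweight work holds the big lock, but only grabs the
        # registry lock for the brief moment it snapshots the entry
        with self.mutex:
            with self.reg_mutex:
                snapshot = dict(self.registry.get(vpath, {}))
            return snapshot

st = Up2kState()
st.register_vpath("/pub", {"nodupe": True})
```

The payoff is that a burst of `register_vpath` calls stays responsive even while a rescan holds the main mutex.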
ed  9937c2e755  add ArozOS to comparison  2024-04-16 21:00:47 +00:00
ed  f1e947f37d  rehost deps from a flaky server  2024-04-12 21:49:01 +00:00
ed  a70a49b9c9  update pkgs to 1.12.2  2024-04-12 21:25:21 +00:00
ed  fe700dcf1a  v1.12.2  2024-04-12 21:10:02 +00:00
ed  c8e3ed3aae  retry failed renames on windows  2024-04-12 20:38:30 +00:00
      theoretical issue which nobody has run into yet,
      probably because nobody uses this on windows
ed  b8733653a3  fix audio transcoding with filekeys  2024-04-11 21:54:15 +00:00
ed  b772a4f8bb  fix wordwrap of buttons on ios  2024-04-11 21:31:40 +00:00
ed  9e5253ef87  ie11: restore load-bearing thing  2024-04-11 20:53:15 +00:00
ed  7b94e4edf3  configurable basic-auth preference  2024-04-11 20:15:49 +00:00
      adds option `--bauth-last` to lower the preference for
      the basic-auth password in case of conflict,
      and `--no-bauth` to disable basic-authentication entirely

      if a client provides multiple passwords, for example when
      "logged in" with one password (the `cppwd` cookie) and switching
      to another account by also sending a PW header/url-param, then
      the default evaluation order to determine which password to use is:

      url-param `pw`, header `pw`, basic-auth header, cookie (cppwd/cppws)

      so if a client supplies a basic-auth header, the cookie is ignored
      and the basic-auth password is used instead, which usually makes sense

      but this can become a problem if you have other webservers running
      on the same domain which also support basic-authentication

      --bauth-last is a good choice for cooperating with such services, as
      --no-bauth currently breaks support for the android app
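The evaluation order described in this commit message can be expressed as a small resolver. This is a hedged sketch with invented names — copyparty's real logic lives inside its request handler — but the precedence rules mirror the message above:

```python
def pick_password(url_pw, hdr_pw, bauth_pw, cookie_pw,
                  no_bauth=False, bauth_last=False):
    """Hypothetical helper: return the password the server should act on.
    Default order: url-param pw, header pw, basic-auth header, cookie."""
    if no_bauth:
        bauth_pw = None  # --no-bauth: ignore the Authorization header entirely
    elif bauth_last and cookie_pw:
        bauth_pw = None  # --bauth-last: if a cookie is present, the cookie wins
    for pw in (url_pw, hdr_pw, bauth_pw, cookie_pw):
        if pw:
            return pw
    return None
```

For example, with both a basic-auth header and a cookie present, the default picks basic-auth, `--bauth-last` picks the cookie, and `--no-bauth` never considers basic-auth at all.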
ed  da26ec36ca  add password placeholder on login page  2024-04-11 19:31:02 +00:00
      it was easy to assume you were supposed to put a username there
ed  443acf2f8b  update nuitka notes  2024-04-10 22:04:43 +00:00
ed  6c90e3893d  update pkgs to 1.12.1  2024-04-09 23:53:43 +00:00
ed  ea002ee71d  v1.12.1  2024-04-09 23:34:31 +00:00
ed  ab18893cd2  update deps  2024-04-09 23:25:54 +00:00
ed  844d16b9e5  bbox: scrollwheel for prev/next pic  2024-04-09 20:39:07 +00:00
      inspired by d385305f5e
ed  989cc613ef  fix tree-rendering when history-popping into bbox  2024-04-09 19:54:15 +00:00
      plus misc similar technically-incorrect addq usages;
      most of these don't matter in practice since they'll
      never get a url with a hash, but it makes the intent clear

      also make sure hashes never get passed around
      like they're part of a dirkey, harmless as it is
ed  4f0cad5468  fix bbox destructor, closes #81 for real  2024-04-09 19:10:55 +00:00
ed  f89de6b35d  preloading too aggressive, chill a bit  2024-04-09 18:44:23 +00:00
ed  e0bcb88ee7  update pkgs to 1.12.0  2024-04-06 20:56:52 +00:00
34 changed files with 995 additions and 429 deletions

View File

@@ -10,13 +10,16 @@ turn almost any device into a file server with resumable uploads/downloads using
 📷 **screenshots:** [browser](#the-browser) // [upload](#uploading) // [unpost](#unpost) // [thumbnails](#thumbnails) // [search](#searching) // [fsearch](#file-search) // [zip-DL](#zip-downloads) // [md-viewer](#markdown-viewer)
+🎬 **videos:** [upload](https://a.ocv.me/pub/demo/pics-vids/up2k.webm) // [cli-upload](https://a.ocv.me/pub/demo/pics-vids/u2cli.webm) // [race-the-beam](https://a.ocv.me/pub/g/nerd-stuff/cpp/2024-0418-race-the-beam.webm)
 
 ## readme toc
 
 * top
 * [quickstart](#quickstart) - just run **[copyparty-sfx.py](https://github.com/9001/copyparty/releases/latest/download/copyparty-sfx.py)** -- that's it! 🎉
+  * [at home](#at-home) - make it accessible over the internet
   * [on servers](#on-servers) - you may also want these, especially on servers
-* [features](#features)
+* [features](#features) - also see [comparison to similar software](./docs/versus.md)
 * [testimonials](#testimonials) - small collection of user feedback
 * [motivations](#motivations) - project goals / philosophy
 * [notes](#notes) - general notes
@@ -37,6 +40,7 @@ turn almost any device into a file server with resumable uploads/downloads using
   * [file-search](#file-search) - dropping files into the browser also lets you see if they exist on the server
   * [unpost](#unpost) - undo/delete accidental uploads
   * [self-destruct](#self-destruct) - uploads can be given a lifetime
+  * [race the beam](#race-the-beam) - download files while they're still uploading ([demo video](http://a.ocv.me/pub/g/nerd-stuff/cpp/2024-0418-race-the-beam.webm))
 * [file manager](#file-manager) - cut/paste, rename, and delete files/folders (if you have permission)
 * [batch rename](#batch-rename) - select some files and press `F2` to bring up the rename UI
 * [media player](#media-player) - plays almost every audio format there is
@@ -126,7 +130,7 @@ enable thumbnails (images/audio/video), media indexing, and audio transcoding by
 * **Alpine:** `apk add py3-pillow ffmpeg`
 * **Debian:** `apt install --no-install-recommends python3-pil ffmpeg`
-* **Fedora:** rpmfusion + `dnf install python3-pillow ffmpeg`
+* **Fedora:** rpmfusion + `dnf install python3-pillow ffmpeg --allowerasing`
 * **FreeBSD:** `pkg install py39-sqlite3 py39-pillow ffmpeg`
 * **MacOS:** `port install py-Pillow ffmpeg`
 * **MacOS** (alternative): `brew install pillow ffmpeg`
@@ -147,6 +151,17 @@ some recommended options:
 * see [accounts and volumes](#accounts-and-volumes) (or `--help-accounts`) for the syntax and other permissions
+
+### at home
+
+make it accessible over the internet by starting a [cloudflare quicktunnel](https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/do-more-with-tunnels/trycloudflare/) like so:
+
+first download [cloudflared](https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/downloads/) and then start the tunnel with `cloudflared tunnel --url http://127.0.0.1:3923`
+
+as the tunnel starts, it will show a URL which you can share to let anyone browse your stash or upload files to you
+
+since people will be connecting through cloudflare, run copyparty with `--xff-hdr cf-connecting-ip` to detect client IPs correctly
+
 ### on servers
 
 you may also want these, especially on servers:
@@ -170,6 +185,8 @@ firewall-cmd --reload
 ## features
 
+also see [comparison to similar software](./docs/versus.md)
+
 * backend stuff
   * ☑ IPv6
   * ☑ [multiprocessing](#performance) (actual multithreading)
@@ -192,6 +209,7 @@ firewall-cmd --reload
   * ☑ write-only folders
   * ☑ [unpost](#unpost): undo/delete accidental uploads
   * ☑ [self-destruct](#self-destruct) (specified server-side or client-side)
+  * ☑ [race the beam](#race-the-beam) (almost like peer-to-peer)
   * ☑ symlink/discard duplicates (content-matching)
 * download
   * ☑ single files in browser
@@ -617,7 +635,7 @@ up2k has several advantages:
 > it is perfectly safe to restart / upgrade copyparty while someone is uploading to it!
 > all known up2k clients will resume just fine 💪
 
-see [up2k](#up2k) for details on how it works, or watch a [demo video](https://a.ocv.me/pub/demo/pics-vids/#gf-0f6f5c0d)
+see [up2k](./docs/devnotes.md#up2k) for details on how it works, or watch a [demo video](https://a.ocv.me/pub/demo/pics-vids/#gf-0f6f5c0d)
 
 ![copyparty-upload-fs8](https://user-images.githubusercontent.com/241032/129635371-48fc54ca-fa91-48e3-9b1d-ba413e4b68cb.png)
@@ -683,6 +701,13 @@ clients can specify a shorter expiration time using the [up2k ui](#uploading) --
 specifying a custom expiration time client-side will affect the timespan in which unposts are permitted, so keep an eye on the estimates in the up2k ui
+
+### race the beam
+
+download files while they're still uploading ([demo video](http://a.ocv.me/pub/g/nerd-stuff/cpp/2024-0418-race-the-beam.webm)) -- it's almost like peer-to-peer
+
+requires the file to be uploaded using up2k (which is the default drag-and-drop uploader), alternatively the command-line program
 
 ## file manager
 
 cut/paste, rename, and delete files/folders (if you have permission)
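The race-the-beam hunk above adds lockstep downloads of in-progress uploads. A toy sketch of the core idea — only serve bytes the uploader has already committed, polling until more arrive. This is illustrative only, not copyparty's pipe implementation:

```python
import os
import tempfile
import time

def pipe_read(path, have_bytes, ofs=0, bufsz=65536):
    """Read up to have_bytes() committed bytes from a still-growing file.
    have_bytes is a callable reporting how much the uploader has flushed."""
    out = b""
    with open(path, "rb") as f:
        f.seek(ofs)
        while ofs < have_bytes():
            n = min(bufsz, have_bytes() - ofs)
            buf = f.read(n)
            if not buf:
                time.sleep(0.01)  # writer hasn't flushed yet; back off briefly
                continue
            out += buf
            ofs += len(buf)
    return out

# simulate an upload in progress: bytes become "committed" chunk by chunk
tf = tempfile.NamedTemporaryFile(delete=False)
committed = [0]
for chunk in (b"hello ", b"beam"):
    tf.write(chunk)
    tf.flush()
    committed[0] += len(chunk)
tf.close()

data = pipe_read(tf.name, lambda: committed[0])
os.unlink(tf.name)
```

A real implementation also has to handle the upload stalling or aborting (hence the `.PARTIAL` warning and the keepalive tapering in the commit list), which this sketch ignores.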
@@ -1042,6 +1067,8 @@ tweaking the ui
 * to sort in music order (album, track, artist, title) with filename as fallback, you could `--sort tags/Cirle,tags/.tn,tags/Artist,tags/Title,href`
 * to sort by upload date, first enable showing the upload date in the listing with `-e2d -mte +.up_at` and then `--sort tags/.up_at`
+
+see [./docs/rice](./docs/rice) for more
## file indexing ## file indexing

View File

@@ -231,7 +231,7 @@ install_vamp() {
 	cd "$td"
 	echo '#include <vamp-sdk/Plugin.h>' | g++ -x c++ -c -o /dev/null - || [ -e ~/pe/vamp-sdk ] || {
 		printf '\033[33mcould not find the vamp-sdk, building from source\033[0m\n'
-		(dl_files yolo https://code.soundsoftware.ac.uk/attachments/download/2691/vamp-plugin-sdk-2.10.0.tar.gz)
+		(dl_files yolo https://ocv.me/mirror/vamp-plugin-sdk-2.10.0.tar.gz)
 		sha512sum -c <(
 			echo "153b7f2fa01b77c65ad393ca0689742d66421017fd5931d216caa0fcf6909355fff74706fabbc062a3a04588a619c9b515a1dae00f21a57afd97902a355c48ed -"
 		) <vamp-plugin-sdk-2.10.0.tar.gz
@@ -247,7 +247,7 @@ install_vamp() {
 	cd "$td"
 	have_beatroot || {
 		printf '\033[33mcould not find the vamp beatroot plugin, building from source\033[0m\n'
-		(dl_files yolo https://code.soundsoftware.ac.uk/attachments/download/885/beatroot-vamp-v1.0.tar.gz)
+		(dl_files yolo https://ocv.me/mirror/beatroot-vamp-v1.0.tar.gz)
 		sha512sum -c <(
 			echo "1f444d1d58ccf565c0adfe99f1a1aa62789e19f5071e46857e2adfbc9d453037bc1c4dcb039b02c16240e9b97f444aaff3afb625c86aa2470233e711f55b6874 -"
 		) <beatroot-vamp-v1.0.tar.gz

View File

@@ -1,8 +1,8 @@
 #!/usr/bin/env python3
 from __future__ import print_function, unicode_literals
 
-S_VERSION = "1.15"
-S_BUILD_DT = "2024-02-18"
+S_VERSION = "1.16"
+S_BUILD_DT = "2024-04-20"
 
 """
 u2c.py: upload to copyparty
@@ -563,7 +563,7 @@ def handshake(ar, file, search):
     else:
         if ar.touch:
             req["umod"] = True
-        if ar.dr:
+        if ar.ow:
             req["replace"] = True
 
     headers = {"Content-Type": "text/plain"}  # <=1.5.1 compat
@@ -1140,6 +1140,7 @@ source file/folder selection uses rsync syntax, meaning that:
     ap.add_argument("-x", type=unicode, metavar="REGEX", default="", help="skip file if filesystem-abspath matches REGEX, example: '.*/\\.hist/.*'")
     ap.add_argument("--ok", action="store_true", help="continue even if some local files are inaccessible")
     ap.add_argument("--touch", action="store_true", help="if last-modified timestamps differ, push local to server (need write+delete perms)")
+    ap.add_argument("--ow", action="store_true", help="overwrite existing files instead of autorenaming")
     ap.add_argument("--version", action="store_true", help="show version and exit")
 
     ap = app.add_argument_group("compatibility")
@@ -1148,7 +1149,7 @@ source file/folder selection uses rsync syntax, meaning that:
     ap = app.add_argument_group("folder sync")
     ap.add_argument("--dl", action="store_true", help="delete local files after uploading")
-    ap.add_argument("--dr", action="store_true", help="delete remote files which don't exist locally")
+    ap.add_argument("--dr", action="store_true", help="delete remote files which don't exist locally (implies --ow)")
     ap.add_argument("--drd", action="store_true", help="delete remote files during upload instead of afterwards; reduces peak disk space usage, but will reupload instead of detecting renames")
 
     ap = app.add_argument_group("performance tweaks")
@@ -1178,6 +1179,9 @@ source file/folder selection uses rsync syntax, meaning that:
     if ar.drd:
         ar.dr = True
 
+    if ar.dr:
+        ar.ow = True
+
     for k in "dl dr drd".split():
         errs = []
         if ar.safe and getattr(ar, k):
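The flag implications in this diff form a chain: `--drd` implies `--dr`, and `--dr` now implies the new `--ow`. A sketch with a stand-in parser — the helper names are invented and the help strings abbreviated, but the implication logic mirrors the hunk above:

```python
import argparse

def build_parser():
    # minimal stand-in for u2c's argument parser
    ap = argparse.ArgumentParser()
    ap.add_argument("--ow", action="store_true",
                    help="overwrite existing files instead of autorenaming")
    ap.add_argument("--dr", action="store_true",
                    help="delete remote files which don't exist locally (implies --ow)")
    ap.add_argument("--drd", action="store_true",
                    help="delete remote files during upload instead of afterwards")
    return ap

def apply_implications(ar):
    # mirrors the diff: --drd implies --dr, and --dr implies --ow
    if ar.drd:
        ar.dr = True
    if ar.dr:
        ar.ow = True
    return ar

ar = apply_implications(build_parser().parse_args(["--drd"]))
```

This keeps the pre-1.16 behavior for `--dr` users (deletion syncs still overwrite) while letting `--ow` be requested on its own.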

View File

@@ -1,6 +1,6 @@
 # Maintainer: icxes <dev.null@need.moe>
 pkgname=copyparty
-pkgver="1.11.2"
+pkgver="1.12.2"
 pkgrel=1
 pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
 arch=("any")
@@ -21,7 +21,7 @@ optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tag
 )
 source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
 backup=("etc/${pkgname}.d/init" )
-sha256sums=("0b37641746d698681691ea9e7070096404afc64a42d3d4e96cc4e036074fded9")
+sha256sums=("e4fd6733e5361f5ceb2ae950f71f65f2609c2b69d45f47e8b2a2f128fb67de0a")
 
 build() {
 	cd "${srcdir}/${pkgname}-${pkgver}"

View File

@@ -1,5 +1,5 @@
 {
-  "url": "https://github.com/9001/copyparty/releases/download/v1.11.2/copyparty-sfx.py",
-  "version": "1.11.2",
-  "hash": "sha256-3nIHLM4xJ9RQH3ExSGvBckHuS40IdzyREAtMfpJmfug="
+  "url": "https://github.com/9001/copyparty/releases/download/v1.12.2/copyparty-sfx.py",
+  "version": "1.12.2",
+  "hash": "sha256-GJts5N0leK/WHqpqb+eB1JjBvf6TRpzCc9R7AIHkujo="
 }

View File

@@ -856,8 +856,9 @@ def add_qr(ap, tty):
 def add_fs(ap):
     ap2 = ap.add_argument_group("filesystem options")
-    rm_re_def = "5/0.1" if ANYWIN else "0/0"
+    rm_re_def = "15/0.1" if ANYWIN else "0/0"
     ap2.add_argument("--rm-retry", metavar="T/R", type=u, default=rm_re_def, help="if a file cannot be deleted because it is busy, continue trying for \033[33mT\033[0m seconds, retry every \033[33mR\033[0m seconds; disable with 0/0 (volflag=rm_retry)")
+    ap2.add_argument("--mv-retry", metavar="T/R", type=u, default=rm_re_def, help="if a file cannot be renamed because it is busy, continue trying for \033[33mT\033[0m seconds, retry every \033[33mR\033[0m seconds; disable with 0/0 (volflag=mv_retry)")
     ap2.add_argument("--iobuf", metavar="BYTES", type=int, default=256*1024, help="file I/O buffer-size; if your volumes are on a network drive, try increasing to \033[32m524288\033[0m or even \033[32m4194304\033[0m (and let me know if that improves your performance)")
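Both `--rm-retry` and `--mv-retry` share the `T/R` format: keep retrying for T seconds, once every R seconds, with `0/0` disabling retries. A hedged sketch of a parser and retry loop in that spirit — function names invented, not copyparty's exact implementation:

```python
import time

def parse_tr(spec):
    # parse the "T/R" volflag format: total budget T seconds, retry every R seconds
    t, r = spec.split("/")
    return float(t), float(r)

def retry_op(op, spec):
    """Retry op() on OSError (e.g. a busy file on windows) until it
    succeeds or the T-second budget runs out; re-raise on the last failure."""
    t, r = parse_tr(spec)
    deadline = time.time() + t
    while True:
        try:
            return op()
        except OSError:
            if time.time() >= deadline:
                raise
            time.sleep(r)

# demo: an operation that is "busy" twice before succeeding
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise OSError("file busy")
    return "ok"

result = retry_op(flaky, "15/0.01")
```

With `0/0` the deadline is already past on the first failure, so the error propagates immediately — matching the "disable with 0/0" behavior in the help text.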
@@ -949,6 +950,8 @@ def add_auth(ap):
     ap2.add_argument("--idp-h-grp", metavar="HN", type=u, default="", help="assume the request-header \033[33mHN\033[0m contains the groupname of the requesting user; can be referenced in config files for group-based access control")
     ap2.add_argument("--idp-h-key", metavar="HN", type=u, default="", help="optional but recommended safeguard; your reverse-proxy will insert a secret header named \033[33mHN\033[0m into all requests, and the other IdP headers will be ignored if this header is not present")
     ap2.add_argument("--idp-gsep", metavar="RE", type=u, default="|:;+,", help="if there are multiple groups in \033[33m--idp-h-grp\033[0m, they are separated by one of the characters in \033[33mRE\033[0m")
+    ap2.add_argument("--no-bauth", action="store_true", help="disable basic-authentication support; do not accept passwords from the 'Authenticate' header at all. NOTE: This breaks support for the android app")
+    ap2.add_argument("--bauth-last", action="store_true", help="keeps basic-authentication enabled, but only as a last-resort; if a cookie is also provided then the cookie wins")
 
 def add_zeroconf(ap):
@@ -1088,6 +1091,8 @@ def add_optouts(ap):
     ap2.add_argument("--no-zip", action="store_true", help="disable download as zip/tar")
     ap2.add_argument("--no-tarcmp", action="store_true", help="disable download as compressed tar (?tar=gz, ?tar=bz2, ?tar=xz, ?tar=gz:9, ...)")
     ap2.add_argument("--no-lifetime", action="store_true", help="do not allow clients (or server config) to schedule an upload to be deleted after a given time")
+    ap2.add_argument("--no-pipe", action="store_true", help="disable race-the-beam (lockstep download of files which are currently being uploaded) (volflag=nopipe)")
+    ap2.add_argument("--no-db-ip", action="store_true", help="do not write uploader IPs into the database")
 
 def add_safety(ap):
@@ -1213,7 +1218,7 @@ def add_db_general(ap, hcores):
     ap2.add_argument("--no-hash", metavar="PTN", type=u, help="regex: disable hashing of matching absolute-filesystem-paths during e2ds folder scans (volflag=nohash)")
     ap2.add_argument("--no-idx", metavar="PTN", type=u, default=noidx, help="regex: disable indexing of matching absolute-filesystem-paths during e2ds folder scans (volflag=noidx)")
     ap2.add_argument("--no-dhash", action="store_true", help="disable rescan acceleration; do full database integrity check -- makes the db ~5%% smaller and bootup/rescans 3~10x slower")
-    ap2.add_argument("--re-dhash", action="store_true", help="rebuild the cache if it gets out of sync (for example crash on startup during metadata scanning)")
+    ap2.add_argument("--re-dhash", action="store_true", help="force a cache rebuild on startup; enable this once if it gets out of sync (should never be necessary)")
     ap2.add_argument("--no-forget", action="store_true", help="never forget indexed files, even when deleted from disk -- makes it impossible to ever upload the same file twice -- only useful for offloading uploads to a cloud service or something (volflag=noforget)")
     ap2.add_argument("--dbd", metavar="PROFILE", default="wal", help="database durability profile; sets the tradeoff between robustness and speed, see \033[33m--help-dbd\033[0m (volflag=dbd)")
     ap2.add_argument("--xlink", action="store_true", help="on upload: check all volumes for dupes, not just the target volume (volflag=xlink)")

View File

@@ -1,8 +1,8 @@
 # coding: utf-8
 
-VERSION = (1, 12, 0)
-CODENAME = "locksmith"
-BUILD_DT = (2024, 4, 6)
+VERSION = (1, 13, 0)
+CODENAME = "race the beam"
+BUILD_DT = (2024, 4, 20)
 
 S_VERSION = ".".join(map(str, VERSION))
 S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)

View File

@@ -1764,13 +1764,14 @@ class AuthSrv(object):
                 if k in vol.flags:
                     vol.flags[k] = float(vol.flags[k])
 
-            try:
-                zs1, zs2 = vol.flags["rm_retry"].split("/")
-                vol.flags["rm_re_t"] = float(zs1)
-                vol.flags["rm_re_r"] = float(zs2)
-            except:
-                t = 'volume "/%s" has invalid rm_retry [%s]'
-                raise Exception(t % (vol.vpath, vol.flags.get("rm_retry")))
+            for k in ("mv_re", "rm_re"):
+                try:
+                    zs1, zs2 = vol.flags[k + "try"].split("/")
+                    vol.flags[k + "_t"] = float(zs1)
+                    vol.flags[k + "_r"] = float(zs2)
+                except:
+                    t = 'volume "/%s" has invalid %stry [%s]'
+                    raise Exception(t % (vol.vpath, k, vol.flags.get(k + "try")))
 
             for k1, k2 in IMPLICATIONS:
                 if k1 in vol.flags:
View File

@@ -6,7 +6,8 @@ import os
 import shutil
 import time
 
-from .util import Netdev, runcmd
+from .__init__ import ANYWIN
+from .util import Netdev, runcmd, wrename, wunlink
 
 HAVE_CFSSL = True
@@ -14,6 +15,12 @@ if True:  # pylint: disable=using-constant-test
     from .util import RootLogger
 
+if ANYWIN:
+    VF = {"mv_re_t": 5, "rm_re_t": 5, "mv_re_r": 0.1, "rm_re_r": 0.1}
+else:
+    VF = {"mv_re_t": 0, "rm_re_t": 0}
+
 
 def ensure_cert(log: "RootLogger", args) -> None:
     """
     the default cert (and the entire TLS support) is only here to enable the
@@ -105,8 +112,12 @@ def _gen_ca(log: "RootLogger", args):
         raise Exception("failed to translate ca-cert: {}, {}".format(rc, se), 3)
 
     bname = os.path.join(args.crt_dir, "ca")
-    os.rename(bname + "-key.pem", bname + ".key")
-    os.unlink(bname + ".csr")
+    try:
+        wunlink(log, bname + ".key", VF)
+    except:
+        pass
+    wrename(log, bname + "-key.pem", bname + ".key", VF)
+    wunlink(log, bname + ".csr", VF)
 
     log("cert", "new ca OK", 2)
@@ -185,11 +196,11 @@ def _gen_srv(log: "RootLogger", args, netdevs: dict[str, Netdev]):
     bname = os.path.join(args.crt_dir, "srv")
     try:
-        os.unlink(bname + ".key")
+        wunlink(log, bname + ".key", VF)
     except:
         pass
-    os.rename(bname + "-key.pem", bname + ".key")
-    os.unlink(bname + ".csr")
+    wrename(log, bname + "-key.pem", bname + ".key", VF)
+    wunlink(log, bname + ".csr", VF)
 
     with open(os.path.join(args.crt_dir, "ca.pem"), "rb") as f:
         ca = f.read()

View File

@@ -16,6 +16,7 @@ def vf_bmap() -> dict[str, str]:
         "no_dedup": "copydupes",
         "no_dupe": "nodupe",
         "no_forget": "noforget",
+        "no_pipe": "nopipe",
         "no_robots": "norobots",
         "no_thumb": "dthumb",
         "no_vthumb": "dvthumb",
@@ -63,6 +64,7 @@ def vf_vmap() -> dict[str, str]:
         "lg_sbf",
         "md_sbf",
         "nrand",
+        "mv_retry",
         "rm_retry",
         "sort",
         "unlist",
@@ -214,6 +216,7 @@ flagcats = {
         "dots": "allow all users with read-access to\nenable the option to show dotfiles in listings",
         "fk=8": 'generates per-file accesskeys,\nwhich are then required at the "g" permission;\nkeys are invalidated if filesize or inode changes',
         "fka=8": 'generates slightly weaker per-file accesskeys,\nwhich are then required at the "g" permission;\nnot affected by filesize or inode numbers',
+        "mv_retry": "ms-windows: timeout for renaming busy files",
         "rm_retry": "ms-windows: timeout for deleting busy files",
         "davauth": "ask webdav clients to login for all folders",
         "davrt": "show lastmod time of symlink destination, not the link itself\n(note: this option is always enabled for recursive listings)",

View File

@@ -36,6 +36,7 @@ from .bos import bos
from .star import StreamTar from .star import StreamTar
from .sutil import StreamArc, gfilter from .sutil import StreamArc, gfilter
from .szip import StreamZip from .szip import StreamZip
from .up2k import up2k_chunksize
from .util import unquote # type: ignore from .util import unquote # type: ignore
from .util import ( from .util import (
APPLESAN_RE, APPLESAN_RE,
@@ -89,6 +90,7 @@ from .util import (
vjoin, vjoin,
vol_san, vol_san,
vsplit, vsplit,
wrename,
wunlink, wunlink,
yieldfile, yieldfile,
) )
@@ -126,6 +128,7 @@ class HttpCli(object):
self.ico = conn.ico # mypy404 self.ico = conn.ico # mypy404
self.thumbcli = conn.thumbcli # mypy404 self.thumbcli = conn.thumbcli # mypy404
self.u2fh = conn.u2fh # mypy404 self.u2fh = conn.u2fh # mypy404
self.pipes = conn.pipes # mypy404
self.log_func = conn.log_func # mypy404 self.log_func = conn.log_func # mypy404
self.log_src = conn.log_src # mypy404 self.log_src = conn.log_src # mypy404
self.gen_fk = self._gen_fk if self.args.log_fk else gen_filekey self.gen_fk = self._gen_fk if self.args.log_fk else gen_filekey
@@ -443,7 +446,11 @@ class HttpCli(object):
        zso = self.headers.get("authorization")
        bauth = ""
-        if zso:
+        if (
+            zso
+            and not self.args.no_bauth
+            and (not cookie_pw or not self.args.bauth_last)
+        ):
            try:
                zb = zso.split(" ")[1].encode("ascii")
                zs = base64.b64decode(zb).decode("utf-8")
@@ -1800,7 +1807,7 @@ class HttpCli(object):
f, fn = zfw["orz"] f, fn = zfw["orz"]
path2 = os.path.join(fdir, fn2) path2 = os.path.join(fdir, fn2)
atomic_move(path, path2) atomic_move(self.log, path, path2, vfs.flags)
fn = fn2 fn = fn2
path = path2 path = path2
@@ -1881,7 +1888,9 @@ class HttpCli(object):
        self.reply(t.encode("utf-8"), 201, headers=h)
        return True

-    def bakflip(self, f: typing.BinaryIO, ofs: int, sz: int, sha: str) -> None:
+    def bakflip(
+        self, f: typing.BinaryIO, ofs: int, sz: int, sha: str, flags: dict[str, Any]
+    ) -> None:
        if not self.args.bak_flips or self.args.nw:
            return
@@ -1909,7 +1918,7 @@ class HttpCli(object):
        if nrem:
            self.log("bakflip truncated; {} remains".format(nrem), 1)
-            atomic_move(fp, fp + ".trunc")
+            atomic_move(self.log, fp, fp + ".trunc", flags)
        else:
            self.log("bakflip ok", 2)
@@ -2175,7 +2184,7 @@ class HttpCli(object):
                if sha_b64 != chash:
                    try:
-                        self.bakflip(f, cstart[0], post_sz, sha_b64)
+                        self.bakflip(f, cstart[0], post_sz, sha_b64, vfs.flags)
                    except:
                        self.log("bakflip failed: " + min_ex())
@@ -2527,7 +2536,7 @@ class HttpCli(object):
                raise

            if not nullwrite:
-                atomic_move(tabspath, abspath)
+                atomic_move(self.log, tabspath, abspath, vfs.flags)
                tabspath = ""
@@ -2767,7 +2776,7 @@ class HttpCli(object):
                    hidedir(dp)
                except:
                    pass

-            bos.rename(fp, os.path.join(mdir, ".hist", mfile2))
+            wrename(self.log, fp, os.path.join(mdir, ".hist", mfile2), vfs.flags)

        assert self.parser.gen
        p_field, _, p_data = next(self.parser.gen)
@@ -2922,17 +2931,42 @@ class HttpCli(object):
        return txt

-    def tx_file(self, req_path: str) -> bool:
+    def tx_file(self, req_path: str, ptop: Optional[str] = None) -> bool:
        status = 200
        logmsg = "{:4} {} ".format("", self.req)
        logtail = ""

+        if ptop is not None:
+            try:
+                dp, fn = os.path.split(req_path)
+                tnam = fn + ".PARTIAL"
+                if self.args.dotpart:
+                    tnam = "." + tnam
+                ap_data = os.path.join(dp, tnam)
+                st_data = bos.stat(ap_data)
+                if not st_data.st_size:
+                    raise Exception("partial is empty")
+                x = self.conn.hsrv.broker.ask("up2k.find_job_by_ap", ptop, req_path)
+                job = json.loads(x.get())
+                if not job:
+                    raise Exception("not found in registry")
+                self.pipes.set(req_path, job)
+            except Exception as ex:
+                self.log("will not pipe [%s]; %s" % (ap_data, ex), 6)
+                ptop = None

        #
        # if request is for foo.js, check if we have foo.js.gz

        file_ts = 0.0
        editions: dict[str, tuple[str, int]] = {}
        for ext in ("", ".gz"):
+            if ptop is not None:
+                sz = job["size"]
+                file_ts = job["lmod"]
+                editions["plain"] = (ap_data, sz)
+                break
+
            try:
                fs_path = req_path + ext
                st = bos.stat(fs_path)
@@ -3089,6 +3123,11 @@ class HttpCli(object):
            self.send_headers(length=upper - lower, status=status, mime=mime)
            return True

+        if ptop is not None:
+            return self.tx_pipe(
+                ptop, req_path, ap_data, job, lower, upper, status, mime, logmsg
+            )
+
        ret = True
        with open_func(*open_args) as f:
            self.send_headers(length=upper - lower, status=status, mime=mime)
@@ -3108,6 +3147,143 @@ class HttpCli(object):
        return ret

+    def tx_pipe(
+        self,
+        ptop: str,
+        req_path: str,
+        ap_data: str,
+        job: dict[str, Any],
+        lower: int,
+        upper: int,
+        status: int,
+        mime: str,
+        logmsg: str,
+    ) -> bool:
+        M = 1048576
+        self.send_headers(length=upper - lower, status=status, mime=mime)
+        wr_slp = self.args.s_wr_slp
+        wr_sz = self.args.s_wr_sz
+        file_size = job["size"]
+        chunk_size = up2k_chunksize(file_size)
+        num_need = -1
+        data_end = 0
+        remains = upper - lower
+        broken = False
+        spins = 0
+        tier = 0
+        tiers = ["uncapped", "reduced speed", "one byte per sec"]
+
+        while lower < upper and not broken:
+            with self.u2mutex:
+                job = self.pipes.get(req_path)
+                if not job:
+                    x = self.conn.hsrv.broker.ask("up2k.find_job_by_ap", ptop, req_path)
+                    job = json.loads(x.get())
+                    if job:
+                        self.pipes.set(req_path, job)
+
+            if not job:
+                t = "pipe: OK, upload has finished; yeeting remainder"
+                self.log(t, 2)
+                data_end = file_size
+                break
+
+            if num_need != len(job["need"]):
+                num_need = len(job["need"])
+                data_end = 0
+                for cid in job["hash"]:
+                    if cid in job["need"]:
+                        break
+                    data_end += chunk_size
+                t = "pipe: can stream %.2f MiB; requested range is %.2f to %.2f"
+                self.log(t % (data_end / M, lower / M, upper / M), 6)
+
+                with self.u2mutex:
+                    if data_end > self.u2fh.aps.get(ap_data, data_end):
+                        try:
+                            fhs = self.u2fh.cache[ap_data].all_fhs
+                            for fh in fhs:
+                                fh.flush()
+                            self.u2fh.aps[ap_data] = data_end
+                            self.log("pipe: flushed %d up2k-FDs" % (len(fhs),))
+                        except Exception as ex:
+                            self.log("pipe: u2fh flush failed: %r" % (ex,))
+
+            if lower >= data_end:
+                if data_end:
+                    t = "pipe: uploader is too slow; aborting download at %.2f MiB"
+                    self.log(t % (data_end / M))
+                    raise Pebkac(416, "uploader is too slow")
+
+                raise Pebkac(416, "no data available yet; please retry in a bit")
+
+            slack = data_end - lower
+            if slack >= 8 * M:
+                ntier = 0
+                winsz = M
+                bufsz = wr_sz
+                slp = wr_slp
+            else:
+                winsz = max(40, int(M * (slack / (12 * M))))
+                base_rate = M if not wr_slp else wr_sz / wr_slp
+                if winsz > base_rate:
+                    ntier = 0
+                    bufsz = wr_sz
+                    slp = wr_slp
+                elif winsz > 300:
+                    ntier = 1
+                    bufsz = winsz // 5
+                    slp = 0.2
+                else:
+                    ntier = 2
+                    bufsz = winsz = slp = 1
+
+            if tier != ntier:
+                tier = ntier
+                self.log("moved to tier %d (%s)" % (tier, tiers[tier]))
+
+            try:
+                with open(ap_data, "rb", self.args.iobuf) as f:
+                    f.seek(lower)
+                    page = f.read(min(winsz, data_end - lower, upper - lower))
+                    if not page:
+                        raise Exception("got 0 bytes (EOF?)")
+            except Exception as ex:
+                self.log("pipe: read failed at %.2f MiB: %s" % (lower / M, ex), 3)
+                with self.u2mutex:
+                    self.pipes.c.pop(req_path, None)
+                spins += 1
+                if spins > 3:
+                    raise Pebkac(500, "file became unreadable")
+                time.sleep(2)
+                continue
+
+            spins = 0
+            pofs = 0
+            while pofs < len(page):
+                if slp:
+                    time.sleep(slp)
+
+                try:
+                    buf = page[pofs : pofs + bufsz]
+                    self.s.sendall(buf)
+                    zi = len(buf)
+                    remains -= zi
+                    lower += zi
+                    pofs += zi
+                except:
+                    broken = True
+                    break
+
+        if lower < upper and not broken:
+            with open(req_path, "rb") as f:
+                remains = sendfile_py(self.log, lower, upper, f, self.s, wr_sz, wr_slp)
+
+        spd = self._spd((upper - lower) - remains)
+        if self.do_log:
+            self.log("{}, {}".format(logmsg, spd))
+
+        return not broken
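The tapering logic in the `tx_pipe` hunk above boils down to two pure calculations, sketched here outside the class for clarity. The constants and thresholds are copied from the diff; the function names are made up for the sketch:

```python
M = 1048576  # one MiB, as in the diff

def streamable_prefix(hashes, need, chunk_size):
    # bytes at the start of the file that are fully uploaded:
    # walk the ordered chunk list and stop at the first chunk
    # that the uploader still needs
    data_end = 0
    for cid in hashes:
        if cid in need:
            break
        data_end += chunk_size
    return data_end

def pick_tier(slack, wr_sz, wr_slp):
    # returns (tier, winsz, bufsz, slp); with plenty of slack,
    # stream uncapped; as the reader catches up to the uploader,
    # shrink the read window, and finally trickle 1 B/s just to
    # keep the TCP connection alive instead of hitting a timeout
    if slack >= 8 * M:
        return 0, M, wr_sz, wr_slp
    winsz = max(40, int(M * (slack / (12 * M))))
    base_rate = M if not wr_slp else wr_sz / wr_slp
    if winsz > base_rate:
        return 0, winsz, wr_sz, wr_slp
    if winsz > 300:
        return 1, winsz, winsz // 5, 0.2
    return 2, 1, 1, 1
```

So a download that stays well behind the upload runs at full speed, and one that catches up degrades gracefully instead of failing.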
    def tx_zip(
        self,
        fmt: str,
@@ -3745,7 +3921,7 @@ class HttpCli(object):
        if not allvols:
            ret = [{"kinshi": 1}]

-        jtxt = '{"u":%s,"c":%s}' % (uret, json.dumps(ret, indent=0))
+        jtxt = '{"u":%s,"c":%s}' % (uret, json.dumps(ret, separators=(",\n", ": ")))
        zi = len(uret.split('\n"pd":')) - 1
        self.log("%s #%d+%d %.2fsec" % (lm, zi, len(ret), time.time() - t0))
        self.reply(jtxt.encode("utf-8", "replace"), mime="application/json")
@@ -4024,7 +4200,9 @@ class HttpCli(object):
            ):
                return self.tx_md(vn, abspath)

-            return self.tx_file(abspath)
+            return self.tx_file(
+                abspath, None if st.st_size or "nopipe" in vn.flags else vn.realpath
+            )

        elif is_dir and not self.can_read:
            if self._use_dirkey(abspath):


@@ -55,6 +55,7 @@ class HttpConn(object):
        self.E: EnvParams = self.args.E
        self.asrv: AuthSrv = hsrv.asrv  # mypy404
        self.u2fh: Util.FHC = hsrv.u2fh  # mypy404
+        self.pipes: Util.CachedDict = hsrv.pipes  # mypy404
        self.ipa_nm: Optional[NetMap] = hsrv.ipa_nm
        self.xff_nm: Optional[NetMap] = hsrv.xff_nm
        self.xff_lan: NetMap = hsrv.xff_lan  # type: ignore


@@ -61,6 +61,7 @@ from .u2idx import U2idx
from .util import (
    E_SCK,
    FHC,
+    CachedDict,
    Daemon,
    Garda,
    Magician,
@@ -130,6 +131,7 @@ class HttpSrv(object):
        self.t_periodic: Optional[threading.Thread] = None
        self.u2fh = FHC()
+        self.pipes = CachedDict(0.2)
        self.metrics = Metrics(self)
        self.nreq = 0
        self.nsus = 0
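`CachedDict` is copyparty's own util class and its implementation is not part of this diff; a minimal sketch of the idea (a dict whose entries expire a fixed number of seconds after being set, matching the `CachedDict(0.2)` usage above; `TTLDict` is a hypothetical name) could look like:

```python
import time

class TTLDict:
    # hypothetical stand-in for copyparty's CachedDict:
    # entries expire `ttl` seconds after being set
    def __init__(self, ttl):
        self.ttl = ttl
        self.c = {}  # key -> (set-time, value)

    def set(self, k, v):
        self.c[k] = (time.time(), v)

    def get(self, k):
        hit = self.c.get(k)
        if not hit:
            return None
        ts, v = hit
        if time.time() - ts > self.ttl:
            del self.c[k]  # stale; drop and report a miss
            return None
        return v
```

A short TTL like 0.2 s keeps the registry lookups cheap while forcing the piping code to re-check the uploader's progress frequently.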


@@ -206,6 +206,7 @@ class MCast(object):
            except:
                t = "announce failed on {} [{}]:\n{}"
                self.log(t.format(netdev, ip, min_ex()), 3)
+                sck.close()

        if self.args.zm_msub:
            for s1 in self.srv.values():


@@ -424,6 +424,12 @@ class SvcHub(object):
t = "WARNING: found config files in [%s]: %s\n config files are not expected here, and will NOT be loaded (unless your setup is intentionally hella funky)" t = "WARNING: found config files in [%s]: %s\n config files are not expected here, and will NOT be loaded (unless your setup is intentionally hella funky)"
self.log("root", t % (E.cfg, ", ".join(hits)), 3) self.log("root", t % (E.cfg, ", ".join(hits)), 3)
if self.args.no_bauth:
t = "WARNING: --no-bauth disables support for the Android app; you may want to use --bauth-last instead"
self.log("root", t, 3)
if self.args.bauth_last:
self.log("root", "WARNING: ignoring --bauth-last due to --no-bauth", 3)
def _process_config(self) -> bool: def _process_config(self) -> bool:
al = self.args al = self.args
@@ -544,6 +550,13 @@ class SvcHub(object):
except: except:
raise Exception("invalid --rm-retry [%s]" % (self.args.rm_retry,)) raise Exception("invalid --rm-retry [%s]" % (self.args.rm_retry,))
try:
zf1, zf2 = self.args.mv_retry.split("/")
self.args.mv_re_t = float(zf1)
self.args.mv_re_r = float(zf2)
except:
raise Exception("invalid --mv-retry [%s]" % (self.args.mv_retry,))
return True return True
def _ipa2re(self, txt) -> Optional[re.Pattern]: def _ipa2re(self, txt) -> Optional[re.Pattern]:
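The new `--mv-retry` knob is parsed as `timeout/interval`, mirroring the existing `--rm-retry`. The retry loop it configures (here as a standalone sketch using `os.replace`, not copyparty's actual `wrename` helper) is roughly:

```python
import os
import time

def retry_rename(src, dst, timeout=5.0, interval=0.1):
    # sketch of the windows-oriented retry loop that --mv-retry
    # configures: keep retrying a rename that fails because the
    # destination is busy (antivirus, indexers, open handles),
    # giving up once `timeout` seconds have passed
    t0 = time.time()
    while True:
        try:
            os.replace(src, dst)
            return True
        except OSError:
            if time.time() - t0 >= timeout:
                raise
            time.sleep(interval)
```

On POSIX systems the first attempt essentially always succeeds, which is why the help-text labels these flags "ms-windows".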


@@ -28,6 +28,7 @@ from .util import (
    runcmd,
    statdir,
    vsplit,
+    wrename,
    wunlink,
)
@@ -346,7 +347,7 @@ class ThumbSrv(object):
                pass

        try:
-            bos.rename(ttpath, tpath)
+            wrename(self.log, ttpath, tpath, vn.flags)
        except:
            pass


@@ -91,6 +91,9 @@ CV_EXTS = set(zsg.split(","))
HINT_HISTPATH = "you could try moving the database to another location (preferably an SSD or NVME drive) using either the --hist argument (global option for all volumes), or the hist volflag (just for this volume)"

+VF_CAREFUL = {"mv_re_t": 5, "rm_re_t": 5, "mv_re_r": 0.1, "rm_re_r": 0.1}
+
class Dbw(object):
    def __init__(self, c: "sqlite3.Cursor", n: int, t: float) -> None:
        self.c = c
@@ -136,6 +139,7 @@ class Up2k(object):
        self.need_rescan: set[str] = set()
        self.db_act = 0.0

+        self.reg_mutex = threading.Lock()
        self.registry: dict[str, dict[str, dict[str, Any]]] = {}
        self.flags: dict[str, dict[str, Any]] = {}
        self.droppable: dict[str, list[str]] = {}
@@ -143,7 +147,7 @@ class Up2k(object):
        self.volsize: dict["sqlite3.Cursor", int] = {}
        self.volstate: dict[str, str] = {}
        self.vol_act: dict[str, float] = {}
-        self.busy_aps: set[str] = set()
+        self.busy_aps: dict[str, int] = {}
        self.dupesched: dict[str, list[tuple[str, str, float]]] = {}
        self.snap_prev: dict[str, Optional[tuple[int, float]]] = {}
@@ -200,6 +204,15 @@ class Up2k(object):
        Daemon(self.deferred_init, "up2k-deferred-init")

    def reload(self, rescan_all_vols: bool) -> None:
-        """mutex me"""
+        """mutex(main) me"""
        self.log("reload #{} scheduled".format(self.gid + 1))

        all_vols = self.asrv.vfs.all_vols
-        scan_vols = [k for k, v in all_vols.items() if v.realpath not in self.registry]
+        with self.reg_mutex:
+            scan_vols = [
+                k for k, v in all_vols.items() if v.realpath not in self.registry
+            ]

        if rescan_all_vols:
            scan_vols = list(all_vols.keys())
@@ -217,7 +225,7 @@ class Up2k(object):
        if self.stop:
            # up-mt consistency not guaranteed if init is interrupted;
            # drop caches for a full scan on next boot
-            with self.mutex:
+            with self.mutex, self.reg_mutex:
                self._drop_caches()

        if self.pp:
@@ -283,10 +291,27 @@ class Up2k(object):
                min(1000 * 24 * 60 * 60 - 1, time.time() - self.db_act)
            ),
        }
-        return json.dumps(ret, indent=4)
+        return json.dumps(ret, separators=(",\n", ": "))
+
+    def find_job_by_ap(self, ptop: str, ap: str) -> str:
+        try:
+            if ANYWIN:
+                ap = ap.replace("\\", "/")
+
+            vp = ap[len(ptop) :].strip("/")
+            dn, fn = vsplit(vp)
+            with self.reg_mutex:
+                tab2 = self.registry[ptop]
+                for job in tab2.values():
+                    if job["prel"] == dn and job["name"] == fn:
+                        return json.dumps(job, separators=(",\n", ": "))
+        except:
+            pass
+
+        return "{}"

    def get_unfinished_by_user(self, uname, ip) -> str:
-        if PY2 or not self.mutex.acquire(timeout=2):
+        if PY2 or not self.reg_mutex.acquire(timeout=2):
            return '[{"timeout":1}]'

        ret: list[tuple[int, str, int, int, int]] = []
@@ -315,17 +340,25 @@ class Up2k(object):
            )
            ret.append(zt5)
        finally:
-            self.mutex.release()
+            self.reg_mutex.release()
+
+        if ANYWIN:
+            ret = [(x[0], x[1].replace("\\", "/"), x[2], x[3], x[4]) for x in ret]

        ret.sort(reverse=True)
        ret2 = [
-            {"at": at, "vp": "/" + vp, "pd": 100 - ((nn * 100) // (nh or 1)), "sz": sz}
+            {
+                "at": at,
+                "vp": "/" + quotep(vp),
+                "pd": 100 - ((nn * 100) // (nh or 1)),
+                "sz": sz,
+            }
            for (at, vp, sz, nn, nh) in ret
        ]
-        return json.dumps(ret2, indent=0)
+        return json.dumps(ret2, separators=(",\n", ": "))

    def get_unfinished(self) -> str:
-        if PY2 or not self.mutex.acquire(timeout=0.5):
+        if PY2 or not self.reg_mutex.acquire(timeout=0.5):
            return ""

        ret: dict[str, tuple[int, int]] = {}
@@ -347,17 +380,17 @@ class Up2k(object):
            ret[ptop] = (nbytes, nfiles)
        finally:
-            self.mutex.release()
+            self.reg_mutex.release()

-        return json.dumps(ret, indent=4)
+        return json.dumps(ret, separators=(",\n", ": "))
    def get_volsize(self, ptop: str) -> tuple[int, int]:
-        with self.mutex:
+        with self.reg_mutex:
            return self._get_volsize(ptop)

    def get_volsizes(self, ptops: list[str]) -> list[tuple[int, int]]:
        ret = []
-        with self.mutex:
+        with self.reg_mutex:
            for ptop in ptops:
                ret.append(self._get_volsize(ptop))
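Several `json.dumps(..., indent=...)` calls in this file are switched to `separators=(",\n", ": ")`. This keeps one item per line (so registry snapshots stay greppable and diff-friendly) while dropping the indentation bytes that `indent=4` would emit. For example:

```python
import json

d = {"a": 1, "b": 2}

# item separator ",\n" puts each entry on its own line;
# no indent= means no leading whitespace per line
compact = json.dumps(d, separators=(",\n", ": "))
```

Over a registry with thousands of entries, shaving the indentation noticeably shrinks the gzipped snapshot files.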
@@ -385,7 +418,7 @@ class Up2k(object):
    def _rescan(
        self, all_vols: dict[str, VFS], scan_vols: list[str], wait: bool, fscan: bool
    ) -> str:
-        """mutex me"""
+        """mutex(main) me"""
        if not wait and self.pp:
            return "cannot initiate; scan is already in progress"
@@ -667,7 +700,7 @@ class Up2k(object):
            self.log(msg, c=3)

        live_vols = []
-        with self.mutex:
+        with self.mutex, self.reg_mutex:
            # only need to protect register_vpath but all in one go feels right
            for vol in vols:
                try:
@@ -709,7 +742,7 @@ class Up2k(object):
        if self.args.re_dhash or [zv for zv in vols if "e2tsr" in zv.flags]:
            self.args.re_dhash = False
-            with self.mutex:
+            with self.mutex, self.reg_mutex:
                self._drop_caches()

        for vol in vols:
@@ -786,7 +819,9 @@ class Up2k(object):
                self.volstate[vol.vpath] = "online (mtp soon)"

        for vol in need_vac:
-            reg = self.register_vpath(vol.realpath, vol.flags)
+            with self.mutex, self.reg_mutex:
+                reg = self.register_vpath(vol.realpath, vol.flags)
            assert reg
            cur, _ = reg
            with self.mutex:
@@ -800,7 +835,9 @@ class Up2k(object):
if vol.flags["dbd"] == "acid": if vol.flags["dbd"] == "acid":
continue continue
reg = self.register_vpath(vol.realpath, vol.flags) with self.mutex, self.reg_mutex:
reg = self.register_vpath(vol.realpath, vol.flags)
try: try:
assert reg assert reg
cur, db_path = reg cur, db_path = reg
@@ -847,6 +884,7 @@ class Up2k(object):
    def register_vpath(
        self, ptop: str, flags: dict[str, Any]
    ) -> Optional[tuple["sqlite3.Cursor", str]]:
+        """mutex(main,reg) me"""
        histpath = self.asrv.vfs.histtab.get(ptop)
        if not histpath:
            self.log("no histpath for [{}]".format(ptop))
@@ -869,7 +907,7 @@ class Up2k(object):
            ft = "\033[0;32m{}{:.0}"
            ff = "\033[0;35m{}{:.0}"
            fv = "\033[0;36m{}:\033[90m{}"
-            fx = set(("html_head", "rm_re_t", "rm_re_r"))
+            fx = set(("html_head", "rm_re_t", "rm_re_r", "mv_re_t", "mv_re_r"))
            fd = vf_bmap()
            fd.update(vf_cmap())
            fd.update(vf_vmap())
@@ -1030,7 +1068,9 @@ class Up2k(object):
        dev = cst.st_dev if vol.flags.get("xdev") else 0

        with self.mutex:
-            reg = self.register_vpath(top, vol.flags)
+            with self.reg_mutex:
+                reg = self.register_vpath(top, vol.flags)

            assert reg and self.pp
            cur, db_path = reg
@@ -1627,7 +1667,7 @@ class Up2k(object):
    def _build_tags_index(self, vol: VFS) -> tuple[int, int, bool]:
        ptop = vol.realpath
-        with self.mutex:
+        with self.mutex, self.reg_mutex:
            reg = self.register_vpath(ptop, vol.flags)

        assert reg and self.pp
@@ -1648,6 +1688,7 @@ class Up2k(object):
        return ret

    def _drop_caches(self) -> None:
+        """mutex(main,reg) me"""
        self.log("dropping caches for a full filesystem scan")
        for vol in self.asrv.vfs.all_vols.values():
            reg = self.register_vpath(vol.realpath, vol.flags)
@@ -1823,7 +1864,7 @@ class Up2k(object):
        params: tuple[Any, ...],
        flt: int,
    ) -> tuple[tempfile.SpooledTemporaryFile[bytes], int]:
-        """mutex me"""
+        """mutex(main) me"""
        n = 0
        c2 = cur.connection.cursor()
        tf = tempfile.SpooledTemporaryFile(1024 * 1024 * 8, "w+b", prefix="cpp-tq-")
@@ -2157,7 +2198,7 @@ class Up2k(object):
        ip: str,
        at: float,
    ) -> int:
-        """will mutex"""
+        """will mutex(main)"""
        assert self.mtag

        try:
@@ -2189,7 +2230,7 @@ class Up2k(object):
        abspath: str,
        tags: dict[str, Union[str, float]],
    ) -> int:
-        """mutex me"""
+        """mutex(main) me"""
        assert self.mtag

        if not bos.path.isfile(abspath):
@@ -2474,28 +2515,36 @@ class Up2k(object):
            cur.connection.commit()

-    def _job_volchk(self, cj: dict[str, Any]) -> None:
-        if not self.register_vpath(cj["ptop"], cj["vcfg"]):
-            if cj["ptop"] not in self.registry:
-                raise Pebkac(410, "location unavailable")
-
-    def handle_json(self, cj: dict[str, Any], busy_aps: set[str]) -> dict[str, Any]:
+    def handle_json(
+        self, cj: dict[str, Any], busy_aps: dict[str, int]
+    ) -> dict[str, Any]:
+        # busy_aps is u2fh (always undefined if -j0) so this is safe
        self.busy_aps = busy_aps
+        got_lock = False
        try:
            # bit expensive; 3.9=10x 3.11=2x
            if self.mutex.acquire(timeout=10):
-                self._job_volchk(cj)
-                self.mutex.release()
+                got_lock = True
+                with self.reg_mutex:
+                    return self._handle_json(cj)
            else:
                t = "cannot receive uploads right now;\nserver busy with {}.\nPlease wait; the client will retry..."
                raise Pebkac(503, t.format(self.blocked or "[unknown]"))
        except TypeError:
            if not PY2:
                raise
-            with self.mutex:
-                self._job_volchk(cj)
+            with self.mutex, self.reg_mutex:
+                return self._handle_json(cj)
+        finally:
+            if got_lock:
+                self.mutex.release()

+    def _handle_json(self, cj: dict[str, Any]) -> dict[str, Any]:
        ptop = cj["ptop"]
+        if not self.register_vpath(ptop, cj["vcfg"]):
+            if ptop not in self.registry:
+                raise Pebkac(410, "location unavailable")
+
        cj["name"] = sanitize_fn(cj["name"], "", [".prologue.html", ".epilogue.html"])
        cj["poke"] = now = self.db_act = self.vol_act[ptop] = time.time()
        wark = self._get_wark(cj)
@@ -2510,7 +2559,7 @@ class Up2k(object):
        # refuse out-of-order / multithreaded uploading if sprs False
        sprs = self.fstab.get(pdir) != "ng"

-        with self.mutex:
+        if True:
            jcur = self.cur.get(ptop)
            reg = self.registry[ptop]
            vfs = self.asrv.vfs.all_vols[cj["vtop"]]
@@ -2948,7 +2997,7 @@ class Up2k(object):
    def handle_chunk(
        self, ptop: str, wark: str, chash: str
    ) -> tuple[int, list[int], str, float, bool]:
-        with self.mutex:
+        with self.mutex, self.reg_mutex:
            self.db_act = self.vol_act[ptop] = time.time()
            job = self.registry[ptop].get(wark)
            if not job:
@@ -2991,7 +3040,7 @@ class Up2k(object):
        return chunksize, ofs, path, job["lmod"], job["sprs"]
    def release_chunk(self, ptop: str, wark: str, chash: str) -> bool:
-        with self.mutex:
+        with self.reg_mutex:
            job = self.registry[ptop].get(wark)
            if job:
                job["busy"].pop(chash, None)
@@ -2999,7 +3048,7 @@ class Up2k(object):
        return True

    def confirm_chunk(self, ptop: str, wark: str, chash: str) -> tuple[int, str]:
-        with self.mutex:
+        with self.mutex, self.reg_mutex:
            self.db_act = self.vol_act[ptop] = time.time()
            try:
                job = self.registry[ptop][wark]
@@ -3022,16 +3071,16 @@ class Up2k(object):
        if self.args.nw:
            self.regdrop(ptop, wark)
-            return ret, dst

        return ret, dst

    def finish_upload(self, ptop: str, wark: str, busy_aps: set[str]) -> None:
        self.busy_aps = busy_aps
-        with self.mutex:
+        with self.mutex, self.reg_mutex:
            self._finish_upload(ptop, wark)

    def _finish_upload(self, ptop: str, wark: str) -> None:
+        """mutex(main,reg) me"""
        try:
            job = self.registry[ptop][wark]
            pdir = djoin(job["ptop"], job["prel"])
@@ -3044,12 +3093,11 @@ class Up2k(object):
t = "finish_upload {} with remaining chunks {}" t = "finish_upload {} with remaining chunks {}"
raise Pebkac(500, t.format(wark, job["need"])) raise Pebkac(500, t.format(wark, job["need"]))
# self.log("--- " + wark + " " + dst + " finish_upload atomic " + dst, 4)
atomic_move(src, dst)
upt = job.get("at") or time.time() upt = job.get("at") or time.time()
vflags = self.flags[ptop] vflags = self.flags[ptop]
atomic_move(self.log, src, dst, vflags)
times = (int(time.time()), int(job["lmod"])) times = (int(time.time()), int(job["lmod"]))
self.log( self.log(
"no more chunks, setting times {} ({}) on {}".format( "no more chunks, setting times {} ({}) on {}".format(
@@ -3105,6 +3153,7 @@ class Up2k(object):
            cur.connection.commit()

    def regdrop(self, ptop: str, wark: str) -> None:
+        """mutex(main,reg) me"""
        olds = self.droppable[ptop]
        if wark:
            olds.append(wark)
@@ -3199,16 +3248,23 @@ class Up2k(object):
        at: float,
        skip_xau: bool = False,
    ) -> None:
+        """mutex(main) me"""
        self.db_rm(db, rd, fn, sz)

+        if not ip:
+            db_ip = ""
+        else:
+            # plugins may expect this to look like an actual IP
+            db_ip = "1.1.1.1" if self.args.no_db_ip else ip
+
        sql = "insert into up values (?,?,?,?,?,?,?)"
-        v = (wark, int(ts), sz, rd, fn, ip or "", int(at or 0))
+        v = (wark, int(ts), sz, rd, fn, db_ip, int(at or 0))
        try:
            db.execute(sql, v)
        except:
            assert self.mem_cur
            rd, fn = s3enc(self.mem_cur, rd, fn)
-            v = (wark, int(ts), sz, rd, fn, ip or "", int(at or 0))
+            v = (wark, int(ts), sz, rd, fn, db_ip, int(at or 0))
            db.execute(sql, v)

        self.volsize[db] += sz
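The uploader-IP opt-out above stores an IP-shaped placeholder rather than an empty string, since plugins may parse the column as an address. Extracted as a tiny standalone function (hypothetical name, for illustration only):

```python
def ip_for_db(ip, no_db_ip):
    # mirrors the --no-db-ip logic in the diff: keep "" for
    # anonymous rows, otherwise either the real client IP or
    # a fixed placeholder that still parses as an IPv4 address
    if not ip:
        return ""
    return "1.1.1.1" if no_db_ip else ip
```

This way the schema and downstream consumers are unchanged; only the stored value loses its identifying information.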
@@ -3312,7 +3368,7 @@ class Up2k(object):
        vn, rem = self.asrv.vfs.get(vpath, uname, *permsets[0])
        vn, rem = vn.get_dbv(rem)
        ptop = vn.realpath
-        with self.mutex:
+        with self.mutex, self.reg_mutex:
            abrt_cfg = self.flags.get(ptop, {}).get("u2abort", 1)
            addr = (ip or "\n") if abrt_cfg in (1, 2) else ""
            user = (uname or "\n") if abrt_cfg in (1, 3) else ""
@@ -3320,7 +3376,10 @@ class Up2k(object):
            for wark, job in reg.items():
                if (user and user != job["user"]) or (addr and addr != job["addr"]):
                    continue
-                if djoin(job["prel"], job["name"]) == rem:
+                jrem = djoin(job["prel"], job["name"])
+                if ANYWIN:
+                    jrem = jrem.replace("\\", "/")
+                if jrem == rem:
                    if job["ptop"] != ptop:
                        t = "job.ptop [%s] != vol.ptop [%s] ??"
                        raise Exception(t % (job["ptop"] != ptop))
@@ -3416,7 +3475,7 @@ class Up2k(object):
                continue

            n_files += 1
-            with self.mutex:
+            with self.mutex, self.reg_mutex:
                cur = None
                try:
                    ptop = dbv.realpath
@@ -3534,6 +3593,7 @@ class Up2k(object):
    def _mv_file(
        self, uname: str, svp: str, dvp: str, curs: set["sqlite3.Cursor"]
    ) -> str:
+        """mutex(main) me; will mutex(reg)"""
        svn, srem = self.asrv.vfs.get(svp, uname, True, False, True)
        svn, srem = svn.get_dbv(srem)
@@ -3614,7 +3674,9 @@ class Up2k(object):
            if c2 and c2 != c1:
                self._copy_tags(c1, c2, w)

-            has_dupes = self._forget_file(svn.realpath, srem, c1, w, is_xvol, fsize)
+            with self.reg_mutex:
+                has_dupes = self._forget_file(svn.realpath, srem, c1, w, is_xvol, fsize)

            if not is_xvol:
                has_dupes = self._relink(w, svn.realpath, srem, dabs)
@@ -3653,7 +3715,7 @@ class Up2k(object):
                self._symlink(dlink, dabs, dvn.flags, lmod=ftime)
                wunlink(self.log, sabs, svn.flags)
            else:
-                atomic_move(sabs, dabs)
+                atomic_move(self.log, sabs, dabs, svn.flags)

        except OSError as ex:
            if ex.errno != errno.EXDEV:
@@ -3744,7 +3806,10 @@ class Up2k(object):
        drop_tags: bool,
        sz: int,
    ) -> bool:
-        """forgets file in db, fixes symlinks, does not delete"""
+        """
+        mutex(main,reg) me
+        forgets file in db, fixes symlinks, does not delete
+        """
        srd, sfn = vsplit(vrem)
        has_dupes = False
        self.log("forgetting {}".format(vrem))
@@ -3830,8 +3895,7 @@ class Up2k(object):
self.log("linkswap [{}] and [{}]".format(sabs, slabs)) self.log("linkswap [{}] and [{}]".format(sabs, slabs))
mt = bos.path.getmtime(slabs, False) mt = bos.path.getmtime(slabs, False)
flags = self.flags.get(ptop) or {} flags = self.flags.get(ptop) or {}
wunlink(self.log, slabs, flags) atomic_move(self.log, sabs, slabs, flags)
bos.rename(sabs, slabs)
bos.utime(slabs, (int(time.time()), int(mt)), False) bos.utime(slabs, (int(time.time()), int(mt)), False)
self._symlink(slabs, sabs, flags, False) self._symlink(slabs, sabs, flags, False)
full[slabs] = (ptop, rem) full[slabs] = (ptop, rem)
@@ -4070,7 +4134,7 @@ class Up2k(object):
            self.do_snapshot()

    def do_snapshot(self) -> None:
-        with self.mutex:
+        with self.mutex, self.reg_mutex:
            for k, reg in self.registry.items():
                self._snap_reg(k, reg)
@@ -4138,11 +4202,11 @@ class Up2k(object):
path2 = "{}.{}".format(path, os.getpid()) path2 = "{}.{}".format(path, os.getpid())
body = {"droppable": self.droppable[ptop], "registry": reg} body = {"droppable": self.droppable[ptop], "registry": reg}
j = json.dumps(body, indent=2, sort_keys=True).encode("utf-8") j = json.dumps(body, sort_keys=True, separators=(",\n", ": ")).encode("utf-8")
with gzip.GzipFile(path2, "wb") as f: with gzip.GzipFile(path2, "wb") as f:
f.write(j) f.write(j)
atomic_move(path2, path) atomic_move(self.log, path2, path, VF_CAREFUL)
self.log("snap: {} |{}|".format(path, len(reg.keys()))) self.log("snap: {} |{}|".format(path, len(reg.keys())))
self.snap_prev[ptop] = etag self.snap_prev[ptop] = etag
@@ -4211,7 +4275,7 @@ class Up2k(object):
raise Exception("invalid hash task") raise Exception("invalid hash task")
try: try:
if not self._hash_t(task): if not self._hash_t(task) and self.stop:
return return
except Exception as ex: except Exception as ex:
self.log("failed to hash %s: %s" % (task, ex), 1) self.log("failed to hash %s: %s" % (task, ex), 1)
@@ -4221,7 +4285,7 @@ class Up2k(object):
) -> bool: ) -> bool:
ptop, vtop, flags, rd, fn, ip, at, usr, skip_xau = task ptop, vtop, flags, rd, fn, ip, at, usr, skip_xau = task
# self.log("hashq {} pop {}/{}/{}".format(self.n_hashq, ptop, rd, fn)) # self.log("hashq {} pop {}/{}/{}".format(self.n_hashq, ptop, rd, fn))
with self.mutex: with self.mutex, self.reg_mutex:
if not self.register_vpath(ptop, flags): if not self.register_vpath(ptop, flags):
return True return True
@@ -4239,7 +4303,7 @@ class Up2k(object):
wark = up2k_wark_from_hashlist(self.salt, inf.st_size, hashes) wark = up2k_wark_from_hashlist(self.salt, inf.st_size, hashes)
with self.mutex: with self.mutex, self.reg_mutex:
self.idx_wark( self.idx_wark(
self.flags[ptop], self.flags[ptop],
rd, rd,

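The `reg_mutex` appearing throughout this diff splits registry access out of the coarse main lock, so hot paths (snapshots, hashing) only contend on the registry. A minimal sketch of the pattern under stated assumptions — the class and method names here are illustrative, not copyparty's:

```python
import threading


class Registry:
    """Sketch of a two-lock design: a coarse main mutex for slow global
    work, plus a dedicated reg_mutex guarding only the registry dict.
    Lock order is always mutex -> reg_mutex to avoid deadlock."""

    def __init__(self) -> None:
        self.mutex = threading.Lock()      # guards expensive global state
        self.reg_mutex = threading.Lock()  # guards only self.registry
        self.registry = {}

    def fast_read(self, key):
        # hot path: only the registry lock is needed
        with self.reg_mutex:
            return self.registry.get(key)

    def slow_update(self, key, val) -> None:
        # cold path: take both locks, always in the same order
        with self.mutex, self.reg_mutex:
            self.registry[key] = val
```

The fixed acquisition order (`mutex` before `reg_mutex`) is what makes it safe to hold both in `do_snapshot` while other threads take only `reg_mutex`.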

@@ -759,15 +759,46 @@ class CachedSet(object):
         self.oldest = now
 
+class CachedDict(object):
+    def __init__(self, maxage: float) -> None:
+        self.c: dict[str, tuple[float, Any]] = {}
+        self.maxage = maxage
+        self.oldest = 0.0
+
+    def set(self, k: str, v: Any) -> None:
+        now = time.time()
+        self.c[k] = (now, v)
+        if now - self.oldest < self.maxage:
+            return
+
+        c = self.c = {k: v for k, v in self.c.items() if now - v[0] < self.maxage}
+        try:
+            self.oldest = min([x[0] for x in c.values()])
+        except:
+            self.oldest = now
+
+    def get(self, k: str) -> Optional[tuple[str, Any]]:
+        try:
+            ts, ret = self.c[k]
+            now = time.time()
+            if now - ts > self.maxage:
+                del self.c[k]
+                return None
+
+            return ret
+        except:
+            return None
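The new `CachedDict` above is a lazily-expiring TTL map: entries carry their insertion time and are dropped on read once older than `maxage`. A trimmed-down re-implementation of just the get/set semantics, for illustration (the real class also prunes expired entries during `set`):

```python
import time


class TTLCache:
    """Minimal sketch of CachedDict's read-side behavior;
    maxage is the time-to-live in seconds."""

    def __init__(self, maxage: float) -> None:
        self.c = {}
        self.maxage = maxage

    def set(self, k, v):
        self.c[k] = (time.time(), v)

    def get(self, k):
        try:
            ts, v = self.c[k]
        except KeyError:
            return None
        if time.time() - ts > self.maxage:
            del self.c[k]  # lazily expire on read
            return None
        return v
```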
 class FHC(object):
     class CE(object):
         def __init__(self, fh: typing.BinaryIO) -> None:
             self.ts: float = 0
             self.fhs = [fh]
+            self.all_fhs = set([fh])
 
     def __init__(self) -> None:
         self.cache: dict[str, FHC.CE] = {}
-        self.aps: set[str] = set()
+        self.aps: dict[str, int] = {}
 
     def close(self, path: str) -> None:
         try:
@@ -779,7 +810,7 @@ class FHC(object):
             fh.close()
 
         del self.cache[path]
-        self.aps.remove(path)
+        del self.aps[path]
 
     def clean(self) -> None:
         if not self.cache:
@@ -800,9 +831,12 @@ class FHC(object):
         return self.cache[path].fhs.pop()
 
     def put(self, path: str, fh: typing.BinaryIO) -> None:
-        self.aps.add(path)
+        if path not in self.aps:
+            self.aps[path] = 0
+
         try:
             ce = self.cache[path]
+            ce.all_fhs.add(fh)
             ce.fhs.append(fh)
         except:
             ce = self.CE(fh)
@@ -2125,26 +2159,29 @@ def lsof(log: "NamedLogger", abspath: str) -> None:
         log("lsof failed; " + min_ex(), 3)
 
-def atomic_move(usrc: str, udst: str) -> None:
-    src = fsenc(usrc)
-    dst = fsenc(udst)
-    if not PY2:
-        os.replace(src, dst)
-    else:
-        if os.path.exists(dst):
-            os.unlink(dst)
-
-        os.rename(src, dst)
-
-def wunlink(log: "NamedLogger", abspath: str, flags: dict[str, Any]) -> bool:
-    maxtime = flags.get("rm_re_t", 0.0)
-    bpath = fsenc(abspath)
-    if not maxtime:
-        os.unlink(bpath)
-        return True
-
-    chill = flags.get("rm_re_r", 0.0)
+def _fs_mvrm(
+    log: "NamedLogger", src: str, dst: str, atomic: bool, flags: dict[str, Any]
+) -> bool:
+    bsrc = fsenc(src)
+    bdst = fsenc(dst)
+    if atomic:
+        k = "mv_re_"
+        act = "atomic-rename"
+        osfun = os.replace
+        args = [bsrc, bdst]
+    elif dst:
+        k = "mv_re_"
+        act = "rename"
+        osfun = os.rename
+        args = [bsrc, bdst]
+    else:
+        k = "rm_re_"
+        act = "delete"
+        osfun = os.unlink
+        args = [bsrc]
+
+    maxtime = flags.get(k + "t", 0.0)
+    chill = flags.get(k + "r", 0.0)
     if chill < 0.001:
         chill = 0.1
@@ -2152,14 +2189,19 @@ def wunlink(log: "NamedLogger", abspath: str, flags: dict[str, Any]) -> bool:
     t0 = now = time.time()
     for attempt in range(90210):
         try:
-            if ino and os.stat(bpath).st_ino != ino:
-                log("inode changed; aborting delete")
+            if ino and os.stat(bsrc).st_ino != ino:
+                t = "src inode changed; aborting %s %s"
+                log(t % (act, src), 1)
                 return False
-            os.unlink(bpath)
+            if (dst and not atomic) and os.path.exists(bdst):
+                t = "something appeared at dst; aborting rename [%s] ==> [%s]"
+                log(t % (src, dst), 1)
+                return False
+            osfun(*args)
             if attempt:
                 now = time.time()
-                t = "deleted in %.2f sec, attempt %d"
-                log(t % (now - t0, attempt + 1))
+                t = "%sd in %.2f sec, attempt %d: %s"
+                log(t % (act, now - t0, attempt + 1, src))
             return True
         except OSError as ex:
             now = time.time()
@@ -2169,15 +2211,45 @@ def wunlink(log: "NamedLogger", abspath: str, flags: dict[str, Any]) -> bool:
                 raise
             if not attempt:
                 if not PY2:
-                    ino = os.stat(bpath).st_ino
-                t = "delete failed (err.%d); retrying for %d sec: %s"
-                log(t % (ex.errno, maxtime + 0.99, abspath))
+                    ino = os.stat(bsrc).st_ino
+                t = "%s failed (err.%d); retrying for %d sec: [%s]"
+                log(t % (act, ex.errno, maxtime + 0.99, src))
 
             time.sleep(chill)
 
     return False  # makes pylance happy
 
+def atomic_move(log: "NamedLogger", src: str, dst: str, flags: dict[str, Any]) -> None:
+    bsrc = fsenc(src)
+    bdst = fsenc(dst)
+    if PY2:
+        if os.path.exists(bdst):
+            _fs_mvrm(log, dst, "", False, flags)  # unlink
+
+        _fs_mvrm(log, src, dst, False, flags)  # rename
+    elif flags.get("mv_re_t"):
+        _fs_mvrm(log, src, dst, True, flags)
+    else:
+        os.replace(bsrc, bdst)
+
+def wrename(log: "NamedLogger", src: str, dst: str, flags: dict[str, Any]) -> bool:
+    if not flags.get("mv_re_t"):
+        os.rename(fsenc(src), fsenc(dst))
+        return True
+
+    return _fs_mvrm(log, src, dst, False, flags)
+
+def wunlink(log: "NamedLogger", abspath: str, flags: dict[str, Any]) -> bool:
+    if not flags.get("rm_re_t"):
+        os.unlink(fsenc(abspath))
+        return True
+
+    return _fs_mvrm(log, abspath, "", False, flags)
 
 def get_df(abspath: str) -> tuple[Optional[int], Optional[int]]:
     try:
         # some fuses misbehave


@@ -29,6 +29,7 @@ window.baguetteBox = (function () {
 		isOverlayVisible = false,
 		touch = {}, // start-pos
 		touchFlag = false, // busy
+		scrollTimer = 0,
 		re_i = /^[^?]+\.(a?png|avif|bmp|gif|heif|jpe?g|jfif|svg|webp)(\?|$)/i,
 		re_v = /^[^?]+\.(webm|mkv|mp4)(\?|$)/i,
 		anims = ['slideIn', 'fadeIn', 'none'],
@@ -91,6 +92,30 @@ window.baguetteBox = (function () {
 		touchendHandler();
 	};
 
+	var overlayWheelHandler = function (e) {
+		if (!options.noScrollbars || anymod(e))
+			return;
+
+		ev(e);
+
+		var x = e.deltaX,
+			y = e.deltaY,
+			d = Math.abs(x) > Math.abs(y) ? x : y;
+
+		if (e.deltaMode)
+			d *= 10;
+
+		if (Date.now() - scrollTimer < (Math.abs(d) > 20 ? 100 : 300))
+			return;
+
+		scrollTimer = Date.now();
+
+		if (d > 0)
+			showNextImage();
+		else
+			showPreviousImage();
+	};
+
 	var trapFocusInsideOverlay = function (e) {
 		if (overlay.style.display === 'block' && (overlay.contains && !overlay.contains(e.target))) {
 			e.stopPropagation();
@@ -451,6 +476,7 @@ window.baguetteBox = (function () {
 		bind(document, 'keyup', keyUpHandler);
 		bind(document, 'fullscreenchange', onFSC);
 		bind(overlay, 'click', overlayClickHandler);
+		bind(overlay, 'wheel', overlayWheelHandler);
 		bind(btnPrev, 'click', showPreviousImage);
 		bind(btnNext, 'click', showNextImage);
 		bind(btnClose, 'click', hideOverlay);
@@ -473,6 +499,7 @@ window.baguetteBox = (function () {
 		unbind(document, 'keyup', keyUpHandler);
 		unbind(document, 'fullscreenchange', onFSC);
 		unbind(overlay, 'click', overlayClickHandler);
+		unbind(overlay, 'wheel', overlayWheelHandler);
 		unbind(btnPrev, 'click', showPreviousImage);
 		unbind(btnNext, 'click', showNextImage);
 		unbind(btnClose, 'click', hideOverlay);
@@ -583,7 +610,7 @@ window.baguetteBox = (function () {
 		isOverlayVisible = true;
 	}
 
-	function hideOverlay(e) {
+	function hideOverlay(e, dtor) {
 		ev(e);
 		playvid(false);
 		removeFromCache('#files');
@@ -592,19 +619,21 @@ window.baguetteBox = (function () {
 			document.body.style.overflowY = 'auto';
 		}
 
+		try {
+			if (document.fullscreenElement)
+				document.exitFullscreen();
+		}
+		catch (ex) { }
+		isFullscreen = false;
+
+		if (dtor || overlay.style.display === 'none')
+			return;
+
 		if (options.duringHide)
 			options.duringHide();
 
-		if (overlay.style.display === 'none')
-			return;
-
 		sethash('');
 		unbindEvents();
 
-		try {
-			document.exitFullscreen();
-			isFullscreen = false;
-		}
-		catch (ex) { }
-
 		// Fade out and hide the overlay
 		overlay.className = '';
@@ -1065,6 +1094,7 @@ window.baguetteBox = (function () {
 	}
 
 	function destroyPlugin() {
+		hideOverlay(undefined, true);
 		unbindEvents();
 		clearCachedData();
 		document.getElementsByTagName('body')[0].removeChild(ebi('bbox-overlay'));

@@ -699,12 +699,12 @@ a:hover {
 .s0:after,
 .s1:after {
 	content: '⌄';
-	margin-left: -.1em;
+	margin-left: -.15em;
 }
 .s0r:after,
 .s1r:after {
 	content: '⌃';
-	margin-left: -.1em;
+	margin-left: -.15em;
 }
 .s0:after,
 .s0r:after {
@@ -715,7 +715,7 @@ a:hover {
 	color: var(--sort-2);
 }
 #files thead th:after {
-	margin-right: -.7em;
+	margin-right: -.5em;
 }
 #files tbody tr:hover td,
 #files tbody tr:hover td+td {
@@ -744,6 +744,15 @@ html #files.hhpick thead th {
 	word-wrap: break-word;
 	overflow: hidden;
 }
+#files tr.fade a {
+	color: #999;
+	color: rgba(255, 255, 255, 0.4);
+	font-style: italic;
+}
+html.y #files tr.fade a {
+	color: #999;
+	color: rgba(0, 0, 0, 0.4);
+}
 #files tr:nth-child(2n) td {
 	background: var(--row-alt);
 }
@@ -1600,6 +1609,7 @@ html {
 	padding: .2em .4em;
 	font-size: 1.2em;
 	margin: .2em;
+	display: inline-block;
 	white-space: pre;
 	position: relative;
 	top: -.12em;
@@ -1736,6 +1746,7 @@ html.y #tree.nowrap .ntree a+a:hover {
 }
 #files th span {
 	position: relative;
+	white-space: nowrap;
 }
 #files>thead>tr>th.min,
 #files td.min {
@@ -1773,9 +1784,6 @@ html.y #tree.nowrap .ntree a+a:hover {
 	margin: .7em 0 .7em .5em;
 	padding-left: .5em;
 }
-.opwide>div>div>a {
-	line-height: 2em;
-}
 .opwide>div>h3 {
 	color: var(--fg-weak);
 	margin: 0 .4em;


@@ -290,6 +290,8 @@ var Ls = {
 		"f_dls": 'the file links in the current folder have\nbeen changed into download links',
 
+		"f_partial": "To safely download a file which is currently being uploaded, please click the file which has the same filename, but without the <code>.PARTIAL</code> file extension. Please press CANCEL or Escape to do this.\n\nPressing OK / Enter will ignore this warning and continue downloading the <code>.PARTIAL</code> scratchfile instead, which will almost definitely give you corrupted data.",
+
 		"ft_paste": "paste {0} items$NHotkey: ctrl-V",
 		"fr_eperm": 'cannot rename:\nyou do not have “move” permission in this folder',
 		"fd_eperm": 'cannot delete:\nyou do not have “delete” permission in this folder',
@@ -420,6 +422,7 @@ var Ls = {
 		"un_fclr": "clear filter",
 		"un_derr": 'unpost-delete failed:\n',
 		"un_f5": 'something broke, please try a refresh or hit F5',
+		"un_uf5": "sorry but you have to refresh the page (for example by pressing F5 or CTRL-R) before this upload can be aborted",
 		"un_nou": '<b>warning:</b> server too busy to show unfinished uploads; click the "refresh" link in a bit',
 		"un_noc": '<b>warning:</b> unpost of fully uploaded files is not enabled/permitted in server config',
 		"un_max": "showing first 2000 files (use the filter)",
@@ -792,6 +795,8 @@ var Ls = {
 		"f_dls": 'linkene i denne mappen er nå\nomgjort til nedlastningsknapper',
 
+		"f_partial": "For å laste ned en fil som enda ikke er ferdig opplastet, klikk på filen som har samme filnavn som denne, men uten <code>.PARTIAL</code> på slutten. Da vil serveren passe på at nedlastning går bra. Derfor anbefales det sterkt å trykke ABRYT eller Escape-tasten.\n\nHvis du virkelig ønsker å laste ned denne <code>.PARTIAL</code>-filen på en ukontrollert måte, trykk OK / Enter for å ignorere denne advarselen. Slik vil du høyst sannsynlig motta korrupt data.",
+
 		"ft_paste": "Lim inn {0} filer$NSnarvei: ctrl-V",
 		"fr_eperm": 'kan ikke endre navn:\ndu har ikke “move”-rettigheten i denne mappen',
 		"fd_eperm": 'kan ikke slette:\ndu har ikke “delete”-rettigheten i denne mappen',
@@ -922,6 +927,7 @@ var Ls = {
 		"un_fclr": "nullstill filter",
 		"un_derr": 'unpost-sletting feilet:\n',
 		"un_f5": 'noe gikk galt, prøv å oppdatere listen eller trykk F5',
+		"un_uf5": "beklager, men du må laste siden på nytt (f.eks. ved å trykke F5 eller CTRL-R) før denne opplastningen kan avbrytes",
 		"un_nou": '<b>advarsel:</b> kan ikke vise ufullstendige opplastninger akkurat nå; klikk på oppdater-linken om litt',
 		"un_noc": '<b>advarsel:</b> angring av fullførte opplastninger er deaktivert i serverkonfigurasjonen',
 		"un_max": "viser de første 2000 filene (bruk filteret for å innsnevre)",
@@ -1539,22 +1545,24 @@ var mpl = (function () {
 		set_tint();
 
 	r.acode = function (url) {
-		var c = true;
+		var c = true,
+			cs = url.split('?')[0];
+
 		if (!have_acode)
 			c = false;
-		else if (/\.(wav|flac)$/i.exec(url))
+		else if (/\.(wav|flac)$/i.exec(cs))
 			c = r.ac_flac;
-		else if (/\.(aac|m4a)$/i.exec(url))
+		else if (/\.(aac|m4a)$/i.exec(cs))
 			c = r.ac_aac;
-		else if (/\.(ogg|opus)$/i.exec(url) && !can_ogg)
+		else if (/\.(ogg|opus)$/i.exec(cs) && !can_ogg)
 			c = true;
-		else if (re_au_native.exec(url))
+		else if (re_au_native.exec(cs))
 			c = false;
 
 		if (!c)
 			return url;
 
-		return addq(url, 'th=') + (can_ogg ? 'opus' : (IPHONE || MACOS) ? 'caf' : 'mp3');
+		return addq(url, 'th=' + (can_ogg ? 'opus' : (IPHONE || MACOS) ? 'caf' : 'mp3'));
 	};
 	r.pp = function () {
@@ -1804,7 +1812,7 @@ function MPlayer() {
 	r.preload = function (url, full) {
 		var t0 = Date.now(),
-			fname = uricom_dec(url.split('/').pop());
+			fname = uricom_dec(url.split('/').pop().split('?')[0]);
 
 		url = addq(mpl.acode(url), 'cache=987&_=' + ACB);
 		mpl.preload_url = full ? url : null;
@@ -2534,7 +2542,7 @@ var mpui = (function () {
 			rem = pos > 1 ? len - pos : 999,
 			full = null;
 
-		if (rem < 7 || (!mpl.fullpre && (rem < 40 || pos > 10))) {
+		if (rem < 7 || (!mpl.fullpre && (rem < 40 || (rem < 90 && pos > 10)))) {
 			preloaded = fpreloaded = mp.au.rsrc;
 			full = false;
 		}
@@ -4286,7 +4294,7 @@ var showfile = (function () {
 	r.show = function (url, no_push) {
 		var xhr = new XHR(),
-			m = /[?&](k=[^&]+)/.exec(url);
+			m = /[?&](k=[^&#]+)/.exec(url);
 
 		url = url.split('?')[0] + (m ? '?' + m[1] : '');
 		xhr.url = url;
@@ -4826,7 +4834,7 @@ var thegrid = (function () {
 				ihref = ohref;
 
 			if (r.thumbs) {
-				ihref = addq(ihref, 'th=') + (have_webp ? 'w' : 'j');
+				ihref = addq(ihref, 'th=' + (have_webp ? 'w' : 'j'));
 				if (!r.crop)
 					ihref += 'f';
 				if (r.x3)
@@ -5975,14 +5983,14 @@ var treectl = (function () {
 	function get_tree(top, dst, rst) {
 		var xhr = new XHR(),
-			m = /[?&](k=[^&]+)/.exec(dst),
+			m = /[?&](k=[^&#]+)/.exec(dst),
 			k = m ? '&' + m[1] : dk ? '&k=' + dk : '';
 
 		xhr.top = top;
 		xhr.dst = dst;
 		xhr.rst = rst;
 		xhr.ts = Date.now();
-		xhr.open('GET', addq(dst, 'tree=') + top + (r.dots ? '&dots' : '') + k, true);
+		xhr.open('GET', addq(dst, 'tree=' + top + (r.dots ? '&dots' : '') + k), true);
 		xhr.onload = xhr.onerror = recvtree;
 		xhr.send();
 		enspin('#tree');
@@ -6071,7 +6079,7 @@ var treectl = (function () {
 				cl = '';
 
 			if (dk && ehref == cevp && !/[?&]k=/.exec(qhref))
-				links[a].setAttribute('href', addq(qhref, 'k=') + dk);
+				links[a].setAttribute('href', addq(qhref, 'k=' + dk));
 
 			if (href == cdir) {
 				act = links[a];
@@ -6172,7 +6180,7 @@ var treectl = (function () {
 			return window.location = url;
 
 		var xhr = new XHR(),
-			m = /[?&](k=[^&]+)/.exec(url),
+			m = /[?&](k=[^&#]+)/.exec(url),
 			k = m ? '&' + m[1] : dk ? '&k=' + dk : '';
 
 		xhr.top = url.split('?')[0];
@@ -6258,7 +6266,7 @@ var treectl = (function () {
 		for (var a = 0; a < res.dirs.length; a++) {
 			var dh = res.dirs[a].href,
 				dn = dh.split('/')[0].split('?')[0],
-				m = /[?&](k=[^&]+)/.exec(dh);
+				m = /[?&](k=[^&#]+)/.exec(dh);
 
 			if (m)
 				dn += '?' + m[1];
@@ -6379,8 +6387,9 @@ var treectl = (function () {
 				'" class="doc' + (lang ? ' bri' : '') +
 				'" hl="' + id + '" name="' + hname + '">-txt-</a>';
 
-			var ln = ['<tr><td>' + tn.lead + '</td><td><a href="' +
-				top + tn.href + '" id="' + id + '">' + hname + '</a>', tn.sz];
+			var cl = /\.PARTIAL$/.exec(fname) ? ' class="fade"' : '',
+				ln = ['<tr' + cl + '><td>' + tn.lead + '</td><td><a href="' +
+					top + tn.href + '" id="' + id + '">' + hname + '</a>', tn.sz];
 
 			for (var b = 0; b < res.taglist.length; b++) {
 				var k = res.taglist[b],
@@ -6630,7 +6639,7 @@ var treectl = (function () {
 		hbase = req,
 		cbase = location.pathname,
 		mdoc = /[?&]doc=/.exec('' + url),
-		mdk = /[?&](k=[^&]+)/.exec('' + url);
+		mdk = /[?&](k=[^&#]+)/.exec('' + url);
 
 	if (mdoc && hbase == cbase)
 		return showfile.show(hbase + showfile.sname(url.search), true);
@@ -7267,7 +7276,7 @@ var arcfmt = (function () {
 			if (!/^(zip|tar|pax|tgz|txz)$/.exec(txt))
 				continue;
 
-			var m = /(.*[?&])(tar|zip)([^&]*)(.*)$/.exec(href);
+			var m = /(.*[?&])(tar|zip)([^&#]*)(.*)$/.exec(href);
 			if (!m)
 				throw new Error('missing arg in url');
@@ -7343,7 +7352,7 @@ var msel = (function () {
 		item.vp = href.indexOf('/') !== -1 ? href : vbase + href;
 
 		if (dk) {
-			var m = /[?&](k=[^&]+)/.exec(qhref);
+			var m = /[?&](k=[^&#]+)/.exec(qhref);
 			item.q = m ? '?' + m[1] : '';
 		}
 		else item.q = '';
@@ -8090,7 +8099,17 @@ var unpost = (function () {
 			if (!links.length)
 				continue;
 
-			req.push(uricom_dec(r.files[a].vp.split('?')[0]));
+			var f = r.files[a];
+			if (f.k == 'u') {
+				var vp = vsplit(f.vp.split('?')[0]),
+					dfn = uricom_dec(vp[1]);
+				for (var iu = 0; iu < up2k.st.files.length; iu++) {
+					var uf = up2k.st.files[iu];
+					if (uf.name == dfn && uf.purl == vp[0])
+						return modal.alert(L.un_uf5);
+				}
+			}
+			req.push(uricom_dec(f.vp.split('?')[0]));
 			for (var b = 0; b < links.length; b++) {
 				links[b].removeAttribute('href');
 				links[b].innerHTML = '[busy]';
@@ -8213,6 +8232,13 @@ ebi('files').onclick = ebi('docul').onclick = function (e) {
 		treectl.reqls(tgt.getAttribute('href'), true);
 		return ev(e);
 	}
+	if (tgt && /\.PARTIAL(\?|$)/.exec('' + tgt.getAttribute('href')) && !window.partdlok) {
+		ev(e);
+		modal.confirm(L.f_partial, function () {
+			window.partdlok = 1;
+			tgt.click();
+		}, null);
+	}
 
 	tgt = e.target.closest('a[hl]');
 	if (tgt) {
@@ -8281,7 +8307,7 @@ function reload_browser() {
 	for (var a = 0; a < parts.length - 1; a++) {
 		link += parts[a] + '/';
-		var link2 = dks[link] ? addq(link, 'k=') + dks[link] : link;
+		var link2 = dks[link] ? addq(link, 'k=' + dks[link]) : link;
 		o = mknod('a');
 		o.setAttribute('href', link2);


@@ -190,11 +190,21 @@ input {
 	padding: .5em .7em;
 	margin: 0 .5em 0 0;
 }
+input::placeholder {
+	font-size: 1.2em;
+	font-style: italic;
+	letter-spacing: .04em;
+	opacity: 0.64;
+	color: #930;
+}
 html.z input {
 	color: #fff;
 	background: #626;
 	border-color: #c2c;
 }
+html.z input::placeholder {
+	color: #fff;
+}
 html.z .num {
 	border-color: #777;
 }


@@ -94,7 +94,7 @@
 		<div>
 			<form method="post" enctype="multipart/form-data" action="{{ r }}/{{ qvpath }}">
 				<input type="hidden" name="act" value="login" />
-				<input type="password" name="cppwd" />
+				<input type="password" name="cppwd" placeholder=" password" />
 				<input type="submit" value="Login" />
 				{% if ahttps %}
 					<a id="w" href="{{ ahttps }}">switch to https</a>


@@ -148,6 +148,8 @@ try {
 	hook('error');
 }
 catch (ex) {
+	if (console.stdlog)
+		console.log = console.stdlog;
 	console.log('console capture failed', ex);
 }
 var crashed = false, ignexd = {}, evalex_fatal = false;
@@ -736,7 +738,11 @@ function vjoin(p1, p2) {
 
 function addq(url, q) {
-	return url + (url.indexOf('?') < 0 ? '?' : '&') + (q === undefined ? '' : q);
+	var uh = url.split('#', 1),
+		u = uh[0],
+		h = uh.length == 1 ? '' : '#' + uh[1];
+
+	return u + (u.indexOf('?') < 0 ? '?' : '&') + (q === undefined ? '' : q) + h;
 }

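The `addq` fix above makes query-parameter appending fragment-aware: the `#fragment` must stay at the very end of the URL, after any appended `?`/`&` parameter. The same logic expressed in Python, for illustration (copyparty's version is the JavaScript shown in the diff):

```python
def addq(url: str, q: str = "") -> str:
    """Append query parameter q to url, keeping any #fragment last."""
    # split off the fragment first so the parameter lands before the '#'
    u, _, frag = url.partition("#")
    sep = "&" if "?" in u else "?"
    out = u + sep + q
    return out + "#" + frag if frag else out
```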

@@ -1,3 +1,72 @@
+▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
+# 2024-0412-2110 `v1.12.2` ie11 fix
+
+## new features
+* new option `--bauth-last` for when you're hosting other [basic-auth](https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication) services on the same domain 7b94e4ed
+  * makes it possible to log into copyparty as intended, but it still sees the passwords from the other service until you do
+  * alternatively, the other new option `--no-bauth` entirely disables basic-auth support, but that also kills [the android app](https://github.com/9001/party-up)
+
+## bugfixes
+* internet explorer isn't working?! FIX IT!!! 9e5253ef
+* audio transcoding was buggy with filekeys enabled b8733653
+* on windows, theoretical chance that antivirus could interrupt renaming files, so preemptively guard against that c8e3ed3a
+
+## other changes
+* add a "password" placeholder on the login page since you might think it's asking for a username da26ec36
+* config buttons were jank on iOS b772a4f8
+* readme: [making your homeserver accessible from the internet](https://github.com/9001/copyparty#at-home)
+
+
+▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
+# 2024-0409-2334 `v1.12.1` scrolling stuff
+
+## new features
+* while viewing pictures/videos, the scrollwheel can be used to view the prev/next file 844d16b9
+
+## bugfixes
+* #81 (scrolling suddenly getting disabled) properly fixed after @icxes found another way to reproduce it (thx) 4f0cad54
+* and fixed at least one javascript glitch introduced in v1.12.0 while adding dirkeys 989cc613
+* directory tree sidebar could fail to render when popping browser history into the lightbox
+
+## other changes
+* music preloader is slightly less hyper f89de6b3
+* u2c.exe: updated TLS-certs and deps ab18893c
+
+
+▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
+# 2024-0406-2011 `v1.12.0` locksmith
+
+## new features
+* #64 dirkeys; option to auto-generate passwords for folders, so you can give someone a link to a specific folder inside a volume without sharing the rest of the volume 10bc2d92 32c912bb ef52e2c0 0ae12868
+  * enabled by volflag `dk` (exact folder only) and/or volflag `dks` (also subfolders); see [readme](https://github.com/9001/copyparty#dirkeys)
+* audio transcoding to mp3 if browser doesn't support opus a080759a
+  * recursively transcode and download a folder using `?tar&mp3`
+  * accidentally adds support for playing just about any audio format in ie11
+* audio equalizer also applies to videos 7744226b
+
+## bugfixes
+* #81 scrolling could break after viewing an image in the lightbox 9c42cbec
+* on phones, audio playback could stop if network is slow/unreliable 59f815ff b88cc7b5 59a53ba9
+  * fixes the issue on android, but ios/safari appears to be [impossible](https://github.com/9001/copyparty/blob/hovudstraum/docs/devnotes.md#music-playback-halting-on-phones) d94b5b3f
+
+## other changes
+* updated dompurify to 3.0.11
+* copyparty.exe: updated to python 3.11.9
+* support for building with pyoxidizer was removed 5ab54763
+
+
 ▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
 # 2024-0323-1724 `v1.11.2` public idp volumes


@@ -22,7 +22,6 @@
 * [complete release](#complete-release)
 * [debugging](#debugging)
 * [music playback halting on phones](#music-playback-halting-on-phones) - mostly fine on android
-* [todo](#todo) - roughly sorted by priority
 * [discarded ideas](#discarded-ideas)
@@ -312,17 +311,11 @@ mostly fine on android, but still haven't find a way to massage iphones into be
 * conditionally starting/stopping mp.fau according to mp.au.readyState <3 or <4 doesn't help
 * loop=true doesn't work, and manually looping mp.fau from an onended also doesn't work (it does nothing)
 * assigning fau.currentTime in a timer doesn't work, as safari merely pretends to assign it
+* on ios 16.7.7, mp.fau can sometimes make everything visibly work correctly, but no audio is actually hitting the speakers
 
 can be reproduced with `--no-sendfile --s-wr-sz 8192 --s-wr-slp 0.3 --rsp-slp 6` and then play a collection of small audio files with the screen off, `ffmpeg -i track01.cdda.flac -c:a libopus -b:a 128k -segment_time 12 -f segment smol-%02d.opus`
 
-# todo
-
-roughly sorted by priority
-
-* nothing! currently
-
 ## discarded ideas
 
 * optimization attempts which didn't improve performance


@@ -1,82 +1,71 @@
# recipe for building an exe with nuitka (extreme jank edition)

NOTE: copyparty runs SLOWER when compiled with nuitka;
just use copyparty-sfx.py and/or pyinstaller instead

( the sfx and the pyinstaller EXEs are equally fast if you
  have the latest jinja2 installed, but the older jinja that
  comes bundled with the sfx is slightly faster yet )

roughly, copyparty-sfx.py is 6% faster than copyparty.exe
(win10-pyinstaller), and copyparty.exe is 10% faster than
nuitka, making copyparty-sfx.py 17% faster than nuitka

NOTE: every time a nuitka-compiled copyparty.exe is launched,
it will show the windows firewall prompt since nuitka will
pick a new unique location in %TEMP% to unpack an exe into,
unlike pyinstaller which doesn't fork itself on startup...
might be fixable by configuring nuitka differently, idk

NOTE: nuitka EXEs are larger than pyinstaller ones;
a minimal nuitka build of just the sfx (with its bundled
dependencies) was already the same size as the pyinstaller
copyparty.exe which also includes Mutagen and Pillow

NOTE: nuitka takes a lot longer to build than pyinstaller
(due to actual compilation of course, but still)

NOTE: binaries built with nuitka cannot run on windows7,
even when compiled with python 3.6 on windows 7 itself

NOTE: `--python-flags=-m` is the magic sauce to
correctly compile `from .util import Daemon`
(which otherwise only explodes at runtime)

NOTE: `--deployment` doesn't seem to affect performance

########################################################################

# copypaste the rest of this file into cmd

python -m pip install --user -U nuitka

cd %homedrive%
cd %homepath%\downloads

rd /s /q copypuitka
mkdir copypuitka
cd copypuitka

rd /s /q %temp%\pe-copyparty
python ..\copyparty-sfx.py --version

move %temp%\pe-copyparty\copyparty .\
move %temp%\pe-copyparty\partftpy .\
move %temp%\pe-copyparty\ftp\pyftpdlib .\
move %temp%\pe-copyparty\j2\jinja2 .\
move %temp%\pe-copyparty\j2\markupsafe .\

rd /s /q %temp%\pe-copyparty

python -m nuitka ^
    --onefile --deployment --python-flag=-m ^
    --include-package=markupsafe ^
    --include-package=jinja2 ^
    --include-package=partftpy ^
    --include-package=pyftpdlib ^
    --include-data-dir=copyparty\web=copyparty\web ^
    --include-data-dir=copyparty\res=copyparty\res ^
    --run copyparty


@@ -48,6 +48,7 @@ currently up to date with [awesome-selfhosted](https://github.com/awesome-selfho
* [filebrowser](#filebrowser)
* [filegator](#filegator)
* [sftpgo](#sftpgo)
* [arozos](#arozos)
* [updog](#updog)
* [goshs](#goshs)
* [gimme-that](#gimme-that)
@@ -93,6 +94,7 @@ the softwares,
* `j` = [filebrowser](https://github.com/filebrowser/filebrowser)
* `k` = [filegator](https://github.com/filegator/filegator)
* `l` = [sftpgo](https://github.com/drakkan/sftpgo)
* `m` = [arozos](https://github.com/tobychui/arozos)
some softwares not in the matrixes,
* [updog](#updog)
@@ -113,22 +115,22 @@ symbol legend,
## general
| feature / software | a | b | c | d | e | f | g | h | i | j | k | l | m |
| ----------------------- | - | - | - | - | - | - | - | - | - | - | - | - | - |
| intuitive UX | | | █ | █ | █ | | █ | █ | █ | █ | █ | █ | █ |
| config GUI | | █ | █ | █ | █ | | | █ | █ | █ | | █ | █ |
| good documentation | | | | █ | █ | █ | █ | | | █ | █ | | |
| runs on iOS | | | | | | | | | | | | | |
| runs on Android | █ | | | | | █ | | | | | | | |
| runs on WinXP | █ | █ | | | | █ | | | | | | | |
| runs on Windows | █ | █ | █ | █ | █ | █ | █ | | █ | █ | █ | █ | |
| runs on Linux | █ | | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ |
| runs on Macos | █ | | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ | |
| runs on FreeBSD | █ | | | • | █ | █ | █ | • | █ | █ | | █ | |
| portable binary | █ | █ | █ | | | █ | █ | | | █ | | █ | █ |
| zero setup, just go | █ | █ | █ | | | | █ | | | █ | | | █ |
| android app | | | | █ | █ | | | | | | | | |
| iOS app | | | | █ | █ | | | | | | | | |
* `zero setup` = you can get a mostly working setup by just launching the app, without having to install any software or configure whatever
* `a`/copyparty remarks:
@@ -140,37 +142,39 @@ symbol legend,
* `f`/rclone must be started with the command `rclone serve webdav .` or similar
* `h`/chibisafe has undocumented windows support
* `i`/sftpgo must be launched with a command
* `m`/arozos has partial windows support
## file transfer

*the thing that copyparty is actually kinda good at*
| feature / software | a | b | c | d | e | f | g | h | i | j | k | l | m |
| ----------------------- | - | - | - | - | - | - | - | - | - | - | - | - | - |
| download folder as zip | █ | █ | █ | █ | | | █ | | █ | █ | | █ | |
| download folder as tar | █ | | | | | | | | | █ | | | |
| upload | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ |
| parallel uploads | █ | | | █ | █ | | • | | █ | | █ | | █ |
| resumable uploads | █ | | | | | | | | █ | | █ | | |
| upload segmenting | █ | | | | | | | █ | █ | | █ | | █ |
| upload acceleration | █ | | | | | | | | █ | | █ | | |
| upload verification | █ | | | █ | █ | | | | █ | | | | |
| upload deduplication | █ | | | | █ | | | | █ | | | | |
| upload a 999 TiB file | █ | | | | █ | █ | • | | █ | | █ | | |
| race the beam ("p2p") | █ | | | | | | | | | • | | | |
| keep last-modified time | █ | | | | | | | | | | | | |
| upload rules | | | | | | | | | | | | | |
| ┗ max disk usage | █ | █ | | | | | | | █ | | | █ | █ |
| ┗ max filesize | █ | | | | | | | | | | █ | █ | |
| ┗ max items in folder | █ | | | | | | | | | | | | |
| ┗ max file age | █ | | | | | | | | █ | | | | |
| ┗ max uploads over time | █ | | | | | | | | | | | | |
| ┗ compress before write | █ | | | | | | | | | | | | |
| ┗ randomize filename | | | | | | | | █ | █ | | | | |
| ┗ mimetype reject-list | | | | | | | | | • | | | | • |
| ┗ extension reject-list | | | | | | | | | | | | | • |
| checksums provided | | | | █ | █ | | | | | | | | |
| cloud storage backend | | | | █ | █ | █ | | | | | █ | █ | |
* `upload segmenting` = files are sliced into chunks, making it possible to upload files larger than 100 MiB on cloudflare for example
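the slicing step can be sketched roughly like this; the chunk size and hash format here are illustrative assumptions, not copyparty's actual up2k wire protocol (which negotiates these with the server):

```python
import hashlib

CHUNK = 8 * 1024 * 1024  # assumed 8 MiB; small enough to fit cloudflare's limit

def split_into_chunks(path, chunk_size=CHUNK):
    """Yield (offset, digest, data) for each slice of the file,
    so each slice can be uploaded (and retried) independently."""
    with open(path, "rb") as f:
        offset = 0
        while True:
            buf = f.read(chunk_size)
            if not buf:
                break
            # a truncated sha512 identifies the chunk for dedup/verification
            digest = hashlib.sha512(buf).hexdigest()[:44]
            yield offset, digest, buf
            offset += len(buf)
```

each `(offset, digest)` pair lets the server place the chunk correctly and detect corrupted or duplicate slices.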
@@ -178,6 +182,8 @@ symbol legend,
* `upload verification` = uploads are checksummed or otherwise confirmed to have been transferred correctly
* `race the beam` = files can be downloaded while they're still uploading; downloaders are slowed down such that the uploader is always ahead
* `checksums provided` = when downloading a file from the server, the file's checksum is provided for verification client-side
* `cloud storage backend` = able to serve files from (and write to) s3 or similar cloud services; `` means the software can do this with some help from `rclone mount` as a bridge
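the "race the beam" idea can be sketched as a reader that never passes the writer's high-water mark, sleeping briefly whenever it catches up; a minimal illustration, not copyparty's actual implementation:

```python
import os
import time

def stream_growing_file(path, final_size, poll=0.05):
    """Yield chunks of a file that may still be uploading,
    stopping once final_size bytes have been forwarded."""
    sent = 0
    with open(path, "rb") as f:
        while sent < final_size:
            # how far ahead of us has the uploader written?
            avail = os.fstat(f.fileno()).st_size - sent
            if avail <= 0:
                time.sleep(poll)  # caught up; wait for more bytes
                continue
            buf = f.read(min(avail, 64 * 1024))
            if not buf:
                time.sleep(poll)
                continue
            sent += len(buf)
            yield buf
```

the downloader's throughput is naturally capped at the uploader's, which is exactly the "slowed down such that the uploader is always ahead" behavior.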
@@ -192,26 +198,27 @@ symbol legend,
* resumable/segmented uploads only over SFTP, not over HTTP
* upload rules are totals only, not over time
* can probably do extension/mimetype rejection similar to copyparty
* `m`/arozos download-as-zip is not streaming; it creates the full zipfile before download can start, and fails on big folders
## protocols and client support
| feature / software | a | b | c | d | e | f | g | h | i | j | k | l | m |
| ----------------------- | - | - | - | - | - | - | - | - | - | - | - | - | - |
| serve https | █ | | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ |
| serve webdav | █ | | | █ | █ | █ | █ | | █ | | | █ | █ |
| serve ftp (tcp) | █ | | | | | █ | | | | | | █ | █ |
| serve ftps (tls) | █ | | | | | █ | | | | | | █ | |
| serve tftp (udp) | █ | | | | | | | | | | | | |
| serve sftp (ssh) | | | | | | █ | | | | | | █ | █ |
| serve smb/cifs | | | | | | █ | | | | | | | |
| serve dlna | | | | | | █ | | | | | | | |
| listen on unix-socket | | | | █ | █ | | █ | █ | █ | | █ | █ | |
| zeroconf | █ | | | | | | | | | | | | █ |
| supports netscape 4 | | | | | | █ | | | | | • | | |
| ...internet explorer 6 | | █ | | █ | | █ | | | | | • | | |
| mojibake filenames | █ | | | • | • | █ | █ | • | | • | | | |
| undecodable filenames | █ | | | • | • | █ | | • | | | | | |
* `webdav` = protocol convenient for mounting a remote server as a local filesystem; see zeroconf:
* `zeroconf` = the server announces itself on the LAN, [automatically appearing](https://user-images.githubusercontent.com/241032/215344737-0eae8d98-9496-4256-9aa8-cd2f6971810d.png) on other zeroconf-capable devices
@@ -222,61 +229,66 @@ symbol legend,
* extremely minimal samba/cifs server
* netscape 4 / ie6 support is mostly listed as a joke altho some people have actually found it useful ([ie4 tho](https://user-images.githubusercontent.com/241032/118192791-fb31fe00-b446-11eb-9647-898ea8efc1f7.png))
* `l`/sftpgo translates mojibake filenames into valid utf-8 (information loss)
* `m`/arozos has readonly-support for older browsers; no uploading
## server configuration
| feature / software | a | b | c | d | e | f | g | h | i | j | k | l | m |
| ----------------------- | - | - | - | - | - | - | - | - | - | - | - | - | - |
| config from cmd args | █ | | | | | █ | █ | | | █ | | | |
| config files | █ | █ | █ | | | █ | | █ | | █ | • | | |
| runtime config reload | █ | █ | █ | | | | | █ | █ | █ | █ | | █ |
| same-port http / https | █ | | | | | | | | | | | | |
| listen multiple ports | █ | | | | | | | | | | | █ | |
| virtual file system | █ | █ | █ | | | | █ | | | | | █ | |
| reverse-proxy ok | █ | | █ | █ | █ | █ | █ | █ | • | • | • | █ | |
| folder-rproxy ok | █ | | | | █ | █ | | • | • | • | • | | • |
* `folder-rproxy` = reverse-proxying without dedicating an entire (sub)domain, using a subfolder instead
* `l`/sftpgo:
  * config: users must be added through gui / api calls
* `m`/arozos:
* configuration is primarily through GUI
* reverse-proxy is not guaranteed to see the correct client IP
## server capabilities
| feature / software | a | b | c | d | e | f | g | h | i | j | k | l | m |
| ----------------------- | - | - | - | - | - | - | - | - | - | - | - | - | - |
| accounts | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ |
| per-account chroot | | | | | | | | | | | | █ | |
| single-sign-on | | | | █ | █ | | | | • | | | | |
| token auth | | | | █ | █ | | | █ | | | | | █ |
| 2fa | | | | █ | █ | | | | | | | █ | |
| per-volume permissions | █ | █ | █ | █ | █ | █ | █ | | █ | █ | | █ | █ |
| per-folder permissions | | | | █ | █ | | █ | | █ | █ | | █ | █ |
| per-file permissions | | | | █ | █ | | █ | | █ | | | | █ |
| per-file passwords | █ | | | █ | █ | | █ | | █ | | | | █ |
| unmap subfolders | █ | | | | | | █ | | | █ | | • | |
| index.html blocks list | | | | | | | █ | | | • | | | |
| write-only folders | █ | | | | | | | | | | █ | █ | |
| files stored as-is | █ | █ | █ | █ | | █ | █ | | | █ | █ | █ | █ |
| file versioning | | | | █ | █ | | | | | | | | |
| file encryption | | | | █ | █ | █ | | | | | | █ | |
| file indexing | █ | | █ | █ | █ | | | █ | █ | █ | | | |
| ┗ per-volume db | █ | | • | • | • | | | • | • | | | | |
| ┗ db stored in folder | █ | | | | | | | • | • | █ | | | |
| ┗ db stored out-of-tree | █ | | █ | █ | █ | | | • | • | █ | | | |
| ┗ existing file tree | █ | | █ | | | | | | | █ | | | |
| file action event hooks | █ | | | | | | | | | █ | | █ | • |
| one-way folder sync | █ | | | █ | █ | █ | | | | | | | |
| full sync | | | | █ | █ | | | | | | | | |
| speed throttle | | █ | █ | | | █ | | | █ | | | █ | |
| anti-bruteforce | █ | █ | █ | █ | █ | | | | • | | | █ | • |
| dyndns updater | | █ | | | | | | | | | | | |
| self-updater | | | █ | | | | | | | | | | █ |
| log rotation | █ | | █ | █ | █ | | | • | █ | | | █ | • |
| upload tracking / log | █ | █ | • | █ | █ | | | █ | █ | | | | █ |
| prometheus metrics | █ | | | █ | | | | | | | | | |
| curl-friendly ls | █ | | | | | | | | | | | | |
| curl-friendly upload | █ | | | | | █ | █ | • | | | | | |
* `unmap subfolders` = "shadowing"; mounting a local folder in the middle of an existing filesystem tree in order to disable access below that path
* `files stored as-is` = uploaded files are trivially readable from the server HDD, not sliced into chunks or in weird folder structures or anything like that
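shadowing amounts to a longest-prefix lookup over mounted volumes, where a mount mapped to nothing denies everything below it; a hypothetical sketch (the mount table and paths are made up, and this is not copyparty's actual VFS):

```python
# virtual path prefix -> real filesystem root; None = shadowed (unmapped)
MOUNTS = {
    "/": "/srv/files",
    "/pub/secrets": None,  # disable access below this path
}

def resolve(vpath):
    """Return the real path for a virtual path, or None if shadowed."""
    # longest matching mount prefix wins
    best = max(
        (p for p in MOUNTS if (vpath + "/").startswith(p.rstrip("/") + "/")),
        key=len,
    )
    root = MOUNTS[best]
    if root is None:
        return None
    rel = vpath[len(best):].lstrip("/")
    return root.rstrip("/") + ("/" + rel if rel else "")
```

because the shadow entry is longer than the `/` mount, anything under it resolves to nothing even though the parent volume would otherwise serve it.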
@@ -302,49 +314,51 @@ symbol legend,
* `l`/sftpgo:
  * `file action event hooks` also include on-download triggers
  * `upload tracking / log` in main logfile
* `m`/arozos:
* `2fa` maybe possible through LDAP/Oauth
## client features
| feature / software | a | b | c | d | e | f | g | h | i | j | k | l | m |
| ---------------------- | - | - | - | - | - | - | - | - | - | - | - | - | - |
| single-page app | █ | | █ | █ | █ | | | █ | █ | █ | █ | | █ |
| themes | █ | █ | | █ | | | | | █ | | | | |
| directory tree nav | █ | | | | █ | | | | █ | | | | |
| multi-column sorting | █ | | | | | | | | | | | | |
| thumbnails | █ | | | | | | | █ | █ | | | | █ |
| ┗ image thumbnails | █ | | | █ | █ | | | █ | █ | █ | | | █ |
| ┗ video thumbnails | █ | | | █ | █ | | | | █ | | | | █ |
| ┗ audio spectrograms | █ | | | | | | | | | | | | |
| audio player | █ | | | █ | █ | | | | █ | | | | █ |
| ┗ gapless playback | █ | | | | | | | | • | | | | |
| ┗ audio equalizer | █ | | | | | | | | | | | | |
| ┗ waveform seekbar | █ | | | | | | | | | | | | |
| ┗ OS integration | █ | | | | | | | | | | | | |
| ┗ transcode to lossy | █ | | | | | | | | | | | | |
| video player | █ | | | █ | █ | | | | █ | █ | | | █ |
| ┗ video transcoding | | | | | | | | | █ | | | | |
| audio BPM detector | █ | | | | | | | | | | | | |
| audio key detector | █ | | | | | | | | | | | | |
| search by path / name | █ | █ | █ | █ | █ | | █ | | █ | █ | | | |
| search by date / size | █ | | | | █ | | | █ | █ | | | | |
| search by bpm / key | █ | | | | | | | | | | | | |
| search by custom tags | | | | | | | | █ | █ | | | | |
| search in file contents | | | | █ | █ | | | | █ | | | | |
| search by custom parser | █ | | | | | | | | | | | | |
| find local file | █ | | | | | | | | | | | | |
| undo recent uploads | █ | | | | | | | | | | | | |
| create directories | █ | | | █ | █ | | █ | █ | █ | █ | █ | █ | █ |
| image viewer | █ | | | █ | █ | | | | █ | █ | █ | | █ |
| markdown viewer | █ | | | | █ | | | | █ | | | | █ |
| markdown editor | █ | | | | █ | | | | █ | | | | █ |
| readme.md in listing | █ | | | █ | | | | | | | | | |
| rename files | █ | █ | █ | █ | █ | | █ | | █ | █ | █ | █ | █ |
| batch rename | █ | | | | | | | | █ | | | | |
| cut / paste files | █ | █ | | █ | █ | | | | █ | | | | █ |
| move files | █ | █ | | █ | █ | | █ | | █ | █ | █ | | █ |
| delete files | █ | █ | | █ | █ | | █ | █ | █ | █ | █ | █ | █ |
| copy files | | | | | █ | | | | █ | █ | █ | | █ |
* `single-page app` = multitasking; possible to continue navigating while uploading
* `audio player » os-integration` = use the [lockscreen](https://user-images.githubusercontent.com/241032/142711926-0700be6c-3e31-47b3-9928-53722221f722.png) or [media hotkeys](https://user-images.githubusercontent.com/241032/215347492-b4250797-6c90-4e09-9a4c-721edf2fb15c.png) to play/pause, prev/next song
@@ -360,14 +374,14 @@ symbol legend,
## integration
| feature / software | a | b | c | d | e | f | g | h | i | j | k | l | m |
| ----------------------- | - | - | - | - | - | - | - | - | - | - | - | - | - |
| OS alert on upload | █ | | | | | | | | | | | | |
| discord | █ | | | | | | | | | | | | |
| ┗ announce uploads | █ | | | | | | | | | | | | |
| ┗ custom embeds | | | | | | | | | | | | | |
| sharex | █ | | | █ | | █ | | █ | | | | | |
| flameshot | | | | | | █ | | | | | | | |
* sharex `` = yes, but does not provide example sharex config
* `a`/copyparty remarks:
@@ -393,6 +407,7 @@ symbol legend,
| filebrowser | go | █ apl2 | 20 MB |
| filegator | php | █ mit | • |
| sftpgo | go | ‼ agpl | 44 MB |
| arozos | go | ░ gpl3 | 531 MB |
| updog | python | █ mit | 17 MB |
| goshs | go | █ mit | 11 MB |
| gimme-that | python | █ mit | 4.8 MB |
@@ -504,12 +519,14 @@ symbol legend,
* ✅ token auth (api keys)

## [kodbox](https://github.com/kalcaddle/kodbox)
* this thing is insane (but is getting competition from [arozos](#arozos))
* php; [docker](https://hub.docker.com/r/kodcloud/kodbox)
* 🔵 *upload segmenting, acceleration, and integrity checking!*
* ⚠️ but uploads are not resumable(?)
* ⚠️ not portable
* ⚠️ isolated on-disk file hierarchy, incompatible with other software
* ⚠️ uploading small files to copyparty is 16x faster
* ⚠️ uploading large files to copyparty is 3x faster
* ⚠️ http/webdav only; no ftp or zeroconf
* ⚠️ some parts of the GUI are in chinese
* ✅ fantastic ui/ux
@@ -569,6 +586,24 @@ symbol legend,
* ✅ on-download event hook (otherwise same as copyparty)
* ✅ more extensive permissions control
## [arozos](https://github.com/tobychui/arozos)
* big suite of applications similar to [kodbox](#kodbox); copyparty is better at downloading/uploading/music/indexing, but arozos has other advantages
* go; primarily linux (limited support for windows)
* ⚠️ uploads not resumable / integrity-checked
* ⚠️ uploading small files to copyparty is 2.7x faster
* ⚠️ uploading large files to copyparty is at least 10% faster
* arozos is websocket-based, 512 KiB chunks; writes each chunk to separate files and then merges
* copyparty splices directly into the final file; faster and better for the HDD and filesystem
* ⚠️ no directory tree navpane; not as easy to navigate
* ⚠️ download-as-zip is not streaming; creates a temporary file on the server
* ⚠️ not self-contained (pulls from jsdelivr)
* ⚠️ has an audio player, but supports fewer filetypes
* ⚠️ limited support for configuring real-ip detection
* ✅ sftp server
* ✅ settings gui
* ✅ good-looking gui
* ✅ an IDE, msoffice viewer, rich host integration, much more
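The chunk-handling difference described above (write-then-merge vs splicing directly into the final file) can be sketched roughly like this; this is a minimal illustration with hypothetical names, not copyparty's actual up2k code:

```python
import os

def splice_chunk(path, total_size, offset, chunk):
    # copyparty-style: pre-size the destination once, then write each
    # chunk straight at its final offset -- no merge pass afterwards
    mode = "r+b" if os.path.exists(path) else "w+b"
    with open(path, mode) as f:
        f.truncate(total_size)  # sparse-extends with zeros if needed
        f.seek(offset)
        f.write(chunk)

def merge_chunks(path, chunk_paths):
    # arozos-style: each chunk was saved to its own temp file,
    # so a second pass concatenates them into the final file
    with open(path, "wb") as out:
        for cp in chunk_paths:
            with open(cp, "rb") as f:
                out.write(f.read())
            os.remove(cp)
```

The splice approach avoids writing every byte twice, which is kinder to HDDs and the filesystem, and it is what makes downloading an upload-in-progress feasible since the final file exists from the first chunk.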
## [updog](https://github.com/sc0tfree/updog)
* python; cross-platform
* basic directory listing with upload feature


@@ -118,4 +118,12 @@ base64 | head -c12 >> dist/copyparty.exe
dist/copyparty.exe --version
csum=$(sha512sum <dist/copyparty.exe | cut -c-56)
curl -fkT dist/copyparty.exe -b cppwd=wark https://192.168.123.1:3923/copyparty$esuf.exe >uplod.log
cat uplod.log
grep -q $csum uplod.log && echo upload OK || {
echo UPLOAD FAILED
exit 1
}


@@ -1,32 +1,32 @@
f117016b1e6a7d7e745db30d3e67f1acf7957c443a0dd301b6c5e10b8368f2aa4db6be9782d2d3f84beadd139bfeef4982e40f21ca5d9065cb794eeb0e473e82 altgraph-0.17.4-py2.py3-none-any.whl
e0d2e6183437af321a36944f04a501e85181243e5fa2da3254254305dd8119161f62048bc56bff8849b49f546ff175b02b4c999401f1c404f6b88e6f46a9c96e Git-2.44.0-32-bit.exe
9d2c31701a4d3fef553928c00528a48f9e1854ab5333528b50e358a214eba90029d687f039bcda5760b6fdf9f2de3bcf3784ae21a6374cf2a97a845d33b636c6 packaging-24.0-py3-none-any.whl
17ce52ba50692a9d964f57a23ac163fb74c77fdeb2ca988a6d439ae1fe91955ff43730c073af97a7b3223093ffea3479a996b9b50ee7fba0869247a56f74baa6 pefile-2023.2.7-py3-none-any.whl
126ca016c00256f4ff13c88707ead21b3b98f3c665ae57a5bcbb80c8be3004bff36d9c7f9a1cc9d20551019708f2b195154f302d80a1e5a2026d6d0fe9f3d5f4 pyinstaller_hooks_contrib-2024.3-py2.py3-none-any.whl
749a473646c6d4c7939989649733d4c7699fd1c359c27046bf5bc9c070d1a4b8b986bbc65f60d7da725baf16dbfdd75a4c2f5bb8335f2cb5685073f5fee5c2d1 pywin32_ctypes-0.2.2-py3-none-any.whl
6e0d854040baff861e1647d2bece7d090bc793b2bd9819c56105b94090df54881a6a9b43ebd82578cd7c76d47181571b671e60672afd9def389d03c9dae84fcf setuptools-68.2.2-py3-none-any.whl
# u2c (win7)
7a3bd4849f95e1715fe2e99613df70a0fedd944a9bfde71a0fadb837fe62c3431c30da4f0b75c74de6f1a459f1fdf7cb62eaf404fdbe45e2d121e0b1021f1580 certifi-2024.2.2-py3-none-any.whl
9cc8acc5e269e6421bc32bb89261101da29d6ca337d39d60b9106de9ed7904e188716e4a48d78a2c4329026443fcab7acab013d2fe43778e30d6c4e4506a1b91 charset_normalizer-3.3.2-cp37-cp37m-win32.whl
0ec1ae5c928b4a0001a254c8598b746049406e1eed720bfafa94d4474078eff76bf6e032124e2d4df4619052836523af36162443c6d746487b387d2e3476e691 idna-3.6-py3-none-any.whl
b795abb26ba2f04f1afcfb196f21f638014b26c8186f8f488f1c2d91e8e0220962fbd259dbc9c3875222eb47fc95c73fc0606aaa6602b9ebc524809c9ba3501f requests-2.31.0-py3-none-any.whl
61ed4500b6361632030f05229705c5c5a52cb47e31c0e6b55151c8f3beed631cd752ca6c3d6393d56a2acf6a453cfcf801e877116123c550922249c3a976e0f4 urllib3-1.26.18-py2.py3-none-any.whl
# win7
d130bfa136bd171b9972b5c281c578545f2a84a909fdf18a6d2d71dd12fb3d512a7a1fa5cf7300433adece1d306eb2f22d7278f4c90e744e04dc67ba627a82c0 future-1.0.0-py3-none-any.whl
0b4d07434bf8d314f42893d90bce005545b44a509e7353a73cad26dc9360b44e2824218a1a74f8174d02eba87fba91baffa82c8901279a32ebc6b8386b1b4275 importlib_metadata-6.7.0-py3-none-any.whl
5d7462a584105bccaa9cf376f5a8c5827ead099c813c8af7392d478a4398f373d9e8cac7bbad2db51b335411ab966b21e119b1b1234c9a7ab70c6ddfc9306da6 pip-24.0-py3-none-any.whl
f298e34356b5590dde7477d7b3a88ad39c622a2bcf3fcd7c53870ce8384dd510f690af81b8f42e121a22d3968a767d2e07595036b2ed7049c8ef4d112bcf3a61 pyinstaller-5.13.2-py3-none-win32.whl
6bb73cc2db795c59c92f2115727f5c173cacc9465af7710db9ff2f2aec2d73130d0992d0f16dcb3fac222dc15c0916562d0813b2337401022020673a4461df3d python-3.7.9-amd64.exe
500747651c87f59f2436c5ab91207b5b657856e43d10083f3ce27efb196a2580fadd199a4209519b409920c562aaaa7dcbdfb83ed2072a43eaccae6e2d056f31 python-3.7.9.exe
03e50aecc85914567c114e38a1777e32628ee098756f37177bc23220eab33ac7d3ff591fd162db3b4d4e34d55cee93ef0dc67af68a69c38bb1435e0768dee57e typing_extensions-4.7.1-py3-none-any.whl
2e04acff170ca3bbceeeb18489c687126c951ec0bfd53cccfb389ba8d29a4576c1a9e8f2e5ea26c84dd21bfa2912f4e71fa72c1e2653b71e34afc0e65f1722d4 upx-4.2.2-win32.zip
68e1b618d988be56aaae4e2eb92bc0093627a00441c1074ebe680c41aa98a6161e52733ad0c59888c643a33fe56884e4f935178b2557fbbdd105e92e0d993df6 windows6.1-kb2533623-x64.msu
479a63e14586ab2f2228208116fc149ed8ee7b1e4ff360754f5bda4bf765c61af2e04b5ef123976623d04df4976b7886e0445647269da81436bd0a7b5671d361 windows6.1-kb2533623-x86.msu
ac96786e5d35882e0c5b724794329c9125c2b86ae7847f17acfc49f0d294312c6afc1c3f248655de3f0ccb4ca426d7957d02ba702f4a15e9fcd7e2c314e72c19 zipp-3.15.0-py3-none-any.whl
# win10
e3e2e6bd511dec484dd0292f4c46c55c88a885eabf15413d53edea2dd4a4dbae1571735b9424f78c0cd7f1082476a8259f31fd3f63990f726175470f636df2b3 Jinja2-3.1.3-py3-none-any.whl
e21495f1d473d855103fb4a243095b498ec90eb68776b0f9b48e994990534f7286c0292448e129c507e5d70409f8a05cca58b98d59ce2a815993d0a873dfc480 MarkupSafe-2.1.5-cp311-cp311-win_amd64.whl
8a6e2b13a2ec4ef914a5d62aad3db6464d45e525a82e07f6051ed10474eae959069e165dba011aefb8207cdfd55391d73d6f06362c7eb247b08763106709526e mutagen-1.47.0-py3-none-any.whl
1dfe6f66bef5c9d62c9028a964196b902772ec9e19db215f3f41acb8d2d563586988d81b455fa6b895b434e9e1e9d57e4d271d1b1214483bdb3eadffcbba6a33 pillow-10.3.0-cp311-cp311-win_amd64.whl
8760eab271e79256ae3bfb4af8ccc59010cb5d2eccdd74b325d1a533ae25eb127d51c2ec28ff90d449afed32dd7d6af62934fe9caaf1ae1f4d4831e948e912da pyinstaller-6.5.0-py3-none-win_amd64.whl
897a14d5ee5cbc6781a0f48beffc27807a4f789d58c4329d899233f615d168a5dcceddf7f8f2d5bb52212ddcf3eba4664590d9f1fdb25bb5201f44899e03b2f7 python-3.11.9-amd64.exe


@@ -8,7 +8,7 @@ run ./build.sh in git-bash to build + upload the exe
to obtain the files referenced below, see ./deps.txt
download + install git (32bit OK on 64):
http://192.168.123.1:3923/ro/pyi/Git-2.44.0-32-bit.exe
===[ copy-paste into git-bash ]================================
uname -s | grep NT-10 && w10=1 || {
@@ -16,6 +16,7 @@ uname -s | grep NT-10 && w10=1 || {
}
fns=(
altgraph-0.17.4-py2.py3-none-any.whl
packaging-24.0-py3-none-any.whl
pefile-2023.2.7-py3-none-any.whl
pyinstaller_hooks_contrib-2024.3-py2.py3-none-any.whl
pywin32_ctypes-0.2.2-py3-none-any.whl
@@ -26,26 +27,25 @@ fns=(
Jinja2-3.1.3-py3-none-any.whl
MarkupSafe-2.1.5-cp311-cp311-win_amd64.whl
mutagen-1.47.0-py3-none-any.whl
pillow-10.3.0-cp311-cp311-win_amd64.whl
python-3.11.9-amd64.exe
upx-4.2.2-win64.zip
)
[ $w7 ] && fns+=(
pyinstaller-5.13.2-py3-none-win32.whl
certifi-2024.2.2-py3-none-any.whl
charset_normalizer-3.3.2-cp37-cp37m-win32.whl
idna-3.6-py3-none-any.whl
requests-2.31.0-py3-none-any.whl
urllib3-1.26.18-py2.py3-none-any.whl
upx-4.2.2-win32.zip
)
[ $w7 ] && fns+=(
future-1.0.0-py3-none-any.whl
importlib_metadata-6.7.0-py3-none-any.whl
pip-24.0-py3-none-any.whl
typing_extensions-4.7.1-py3-none-any.whl
zipp-3.15.0-py3-none-any.whl
)
[ $w7x64 ] && fns+=(
windows6.1-kb2533623-x64.msu
@@ -77,9 +77,10 @@ yes | unzip upx-*-win32.zip &&
mv upx-*/upx.exe . &&
python -m ensurepip &&
{ [ $w10 ] || python -m pip install --user -U pip-*.whl; } &&
python -m pip install --user -U packaging-*.whl &&
{ [ $w7 ] || python -m pip install --user -U {setuptools,mutagen,Pillow,Jinja2,MarkupSafe}-*.whl; } &&
{ [ $w10 ] || python -m pip install --user -U {requests,urllib3,charset_normalizer,certifi,idna}-*.whl; } &&
{ [ $w10 ] || python -m pip install --user -U future-*.whl importlib_metadata-*.whl typing_extensions-*.whl zipp-*.whl; } &&
python -m pip install --user -U pyinstaller-*.whl pefile-*.whl pywin32_ctypes-*.whl pyinstaller_hooks_contrib-*.whl altgraph-*.whl &&
sed -ri 's/--lzma/--best/' $appd/Python/Python$pyv/site-packages/pyinstaller/building/utils.py &&
curl -fkLO https://192.168.123.1:3923/cpp/scripts/uncomment.py &&


@@ -45,4 +45,12 @@ $APPDATA/python/python37/scripts/pyinstaller -y --clean --upx-dir=. up2k.spec
./dist/u2c.exe --version
csum=$(sha512sum <dist/u2c.exe | cut -c-56)
curl -fkT dist/u2c.exe -HPW:wark https://192.168.123.1:3923/ >uplod.log
cat uplod.log
grep -q $csum uplod.log && echo upload OK || {
echo UPLOAD FAILED
exit 1
}


@@ -3,10 +3,8 @@
from __future__ import print_function, unicode_literals
import io
import json
import os
import shutil
import tarfile
import tempfile


@@ -44,7 +44,7 @@ if MACOS:
from copyparty.__init__ import E
from copyparty.__main__ import init_E
from copyparty.u2idx import U2idx
from copyparty.util import FHC, CachedDict, Garda, Unrecv
init_E(E)
@@ -110,7 +110,7 @@ class Cfg(Namespace):
def __init__(self, a=None, v=None, c=None, **ka0):
ka = {}
ex = "daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp early_ban ed emp exp force_js getmod grid hardlink ih ihead magic never_symlink nid nih no_acode no_athumb no_dav no_dedup no_del no_dupe no_lifetime no_logues no_mv no_pipe no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw q rand smb srch_dbg stats vague_403 vc ver xdev xlink xvol"
ka.update(**{k: False for k in ex.split()})
ex = "dotpart dotsrch no_dhash no_fastboot no_rescan no_sendfile no_voldump re_dhash plain_ip"
@@ -155,6 +155,7 @@ class Cfg(Namespace):
mte={"a": True},
mth={},
mtp=[],
mv_retry="0/0",
rm_retry="0/0",
s_rd_sz=256 * 1024,
s_wr_sz=256 * 1024,
@@ -250,6 +251,7 @@ class VHttpConn(object):
self.log_func = log
self.log_src = "a"
self.mutex = threading.Lock()
self.pipes = CachedDict(1)
self.u2mutex = threading.Lock()
self.nbyte = 0
self.nid = None