Compare commits


47 Commits

Author SHA1 Message Date
ed
cc0cc8cdf0 v1.16.8 2025-01-11 16:11:15 +00:00
ed
fb13969798 connect-page: add flameshot too 2025-01-11 16:08:12 +00:00
ed
278258ee9f connect-page:
* add sharex, ishare

* change placeholder password from `pw` to `hunter2`

* add a button to use a real password instead of a placeholder
2025-01-11 15:23:47 +00:00
ed
9e542cf86b these can also trigger reloads; dd6e9ea7 2025-01-11 12:52:11 +00:00
ed
244e952f79 copyparty.exe: update pillow 2025-01-11 12:49:07 +00:00
ed
aa2a8fa223 up2k-snap: remove deprecated properties
v1.15.7 is the oldest version which still
has any chance of reading the up2k.snap
2025-01-11 12:16:45 +00:00
ed
467acb47bf up2k-snap-load: assert .PARTIAL for unfinished
when loading up2k snaps, entries are forgotten if
the relevant file has been deleted since last run

when the entry is an unfinished upload, the file that should
be asserted is the .PARTIAL, and not the placeholder / final
filename (which, unintentionally, was the case until now)

if .PARTIAL is missing but the placeholder still exists,
the only safe alternative is to forget/disown the file,
since its state is obviously wrong and unknown
2025-01-11 11:49:53 +00:00
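
the gist of the fix, as a minimal python sketch (the entry fields and helper name are illustrative, not the actual up2k code):

    import os

    def keep_snap_entry(path, is_unfinished):
        if is_unfinished:
            # unfinished uploads are backed by a .PARTIAL file,
            # so that is the file whose existence must be asserted
            if os.path.exists(path + ".PARTIAL"):
                return True
            # .PARTIAL is gone; even if the placeholder remains,
            # the state is wrong/unknown, so forget/disown the upload
            return False
        return os.path.exists(path)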
ed
0c0d6b2bfc add ishare config example (macos screenshot uploader)
also includes a slight tweak to the json upload info:

when exactly one file is uploaded, the json-response has a
new top-level property, `fileurl` -- this is just a copy of
`files[0].url` as a workaround for castdrian/ishare#107
("only toplevel json properties can be referenced")
2025-01-10 21:13:20 +00:00
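
the json tweak boils down to something like this (a sketch; the surrounding response-building code is simplified):

    # ret is the dict that becomes the json-response
    if len(ret["files"]) == 1:
        # ishare can only reference top-level json properties,
        # so mirror the url of the single uploaded file
        ret["fileurl"] = ret["files"][0]["url"]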
ed
ce0e5be406 bup: alias ?j to request-header Accept: json
and teach PUT to answer in json too
2025-01-10 20:32:12 +00:00
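
so, assuming a writable folder at `/up/`, something like `curl -T file.bin 'http://127.0.0.1:3923/up/?j'` should now behave the same as sending the `Accept: json` request-header, and reply with the upload-info as json (url is just an example)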
ed
65ce4c90fa link the idp-webdav docs from the main readme too 2025-01-10 18:54:45 +00:00
ed
9897a08d09 hotlink from the connect-page to the idp client-auth docs added in #129 2025-01-10 18:47:12 +00:00
ed
f5753ba720 add chunksize cheat-sheet 2025-01-10 18:24:40 +00:00
Wuast94
fcf32a935b add idp client section to docs 2025-01-10 18:17:57 +01:00
ed
ec50788987 up2k.js: 10x faster hashing on android-chrome
when hashing files on android-chrome, read a contiguous range of
several chunks at a time, ensuring each read is at least 48 MiB
and then slice that cache into the correct chunksizes for hashing

especially on GrapheneOS Vanadium (where webworkers are forbidden),
improves worst-case speed (filesize <= 256 MiB) from 13 to 139 MiB/s

48M was chosen wrt RAM usage (48*4 MiB); a target read-size of
16M would have given 76 MiB/s, 32M = 117 MiB/s, and 64M = 154 MiB/s

additionally, on all platforms (not just android and/or chrome),
allow async hashing of <= 3 chunks in parallel on main-thread
when chunksize <= 48 MiB, and <= 2 at <= 96 MiB; this gives
decent speeds approaching that of webworkers (around 50%)

this is a new take on c06d928bb5
which was removed in 184af0c603
when a chrome-beta temporarily fixed the poor file-read performance
(afaict the fix was reverted and never made it to chrome stable)

as for why any of this is necessary,

the security features in android have the unfortunate side-effect
of making file-reads from web-browsers extremely expensive;
this is especially noticeable in android-chrome, where
file-hashing is painfully slow, around 9 MiB/s worst-case

this is due to a fixed-time overhead for each read operation;
reading 1 MiB takes 60 msec, while reading 16 MiB takes 112 msec
2025-01-10 05:29:55 +00:00
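
the read-coalescing strategy, as a rough python-flavored sketch of what up2k.js does (names invented for illustration):

    READ_SZ = 48 * 1024 * 1024  # target bytes per physical read

    def hash_chunks(f, chunksz):
        # number of up2k chunks to grab per read, rounded up
        # so each read is at least 48 MiB (or one whole chunk)
        n_ch = max(1, -(-READ_SZ // chunksz))
        while True:
            buf = f.read(n_ch * chunksz)  # one big contiguous read
            if not buf:
                break
            for n in range(0, len(buf), chunksz):
                yield buf[n : n + chunksz]  # hash chunk-sized slices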
ed
ac0a2da3b5 add/improve reverse-proxy examples
* add haproxy, lighttpd, traefik, caddy

* adjust nginx buffer sizes for way faster downloads

* move unix-socket to /dev/shm/ because
   fedora sets PrivateTmp=true for nginx (orz)
2025-01-07 05:49:40 +00:00
ed
9f84dc42fe recommend kamelåså instead of very-bad-idea; closes #75 2025-01-01 20:26:09 +00:00
ed
21f9304235 add synology howto 2024-12-27 02:16:20 +00:00
ed
5cedd22bbd update pkgs to 1.16.7 2024-12-23 18:24:35 +00:00
ed
c0dacbc4dd v1.16.7 2024-12-23 00:05:49 +00:00
ed
dd6e9ea70c when idp is enabled, always daemon(up2k-rescan)
fixes a bug reported on discord;

1. run with `--idp-h-usr=iu -v=srv::A`
2. upload a file with up2k; this succeeds
3. announce an idp user: `curl -Hiu:a 127.1:3923`
4. upload another file; fails with "fs-reload"

the idp announce would `up2k.reload` which raises the
`reload_flag` and `rescan_cond`, but there is nothing
listening on `rescan_cond` because `have_e2d` was false

must assume e2d if idp is enabled, because `have_e2d` will
only be true if there are non-idp volumes with e2d enabled
2024-12-23 17:16:56 +00:00
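
reduced to its essence, the fix treats IdP as implying e2d when deciding whether to start the rescan listener (sketch; function name illustrative):

    if have_e2d or args.idp_h_usr:
        # something must be listening on rescan_cond,
        # otherwise idp-triggered reloads hang on "fs-reload"
        start_rescan_daemon()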
ed
87598dcd7f recent-uploads: move rendering to js
* loads 50% faster, reducing server-load by 30%

* inhibits search engines from indexing it

* eyecandy (filter applies automatically on edit)
2024-12-20 23:52:03 +00:00
ed
3bb7b677f8 jinja optimizations 2024-12-20 16:34:17 +00:00
ed
988a7223f4 remove some footguns
in case someone writes a plugin which
expects certain params to be sanitized

note that because mojibake filenames are supported,
URLs and filepaths can still be absolutely bonkers

this fixes one known issue:
invalid rss-feed xml if ?pw contains special chars

...and somehow things now run 2% faster, idgi
2024-12-20 14:03:40 +00:00
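
for the rss case, the fix amounts to xml-escaping untrusted values before templating them into the feed; conceptually (not the actual copyparty code):

    from xml.sax.saxutils import escape

    def xml_text(v):
        # ?pw and friends may contain <, >, &, quotes, ...
        return escape(v, {'"': "&quot;", "'": "&apos;"})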
ed
7f044372fa 18:17:14 +Mai | ed: volume bar is bad design
18:17:26  &ed | what's wrong with it
18:17:38 +Mai | that you don't know it's the volume bar before you try it
18:17:46  &ed | oh
18:17:48  &ed | yeah i guess
18:17:54 +Mai | especially when it's at 100
18:18:00  &ed | how do i fix it tho
18:19:50 +Mai | you could add an icon that's also a mute button (to not make it a useless icon)
18:22:38  &ed | i'll make the volume text always visible and include a speaker icon before it
18:23:53 +Mai | that is better at least
2024-12-19 18:49:51 +00:00
ed
552897abbc fix log colors on loss of ext.ip 2024-12-19 18:48:03 +00:00
ed
946a8c5baa u2c: fix windowtitle 2024-12-19 18:02:29 +00:00
ed
888b31aa92 update pkgs to 1.16.6 2024-12-19 01:08:34 +00:00
ed
e2dec2510f v1.16.6 2024-12-19 00:37:24 +00:00
ed
da5ad2ab9f warn on ambiguous comments in config files 2024-12-19 00:25:10 +00:00
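
the ambiguity in question, as a hypothetical config snippet -- with only one space before the `#`, the "comment" becomes part of the value:

    [/media]
      /srv/media # this whole string becomes part of the path!
      /srv/media  # two-plus spaces = a real comment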
ed
eaa4b04a22 list recent uploads
also makes the unpost lister 5x faster
2024-12-18 22:17:30 +01:00
ed
3051b13108 try to avoid printing mojibake in logs
unprintable and side-effect-inducing paths and names are hex-escaped,
preserving greppability and making log-parsing slightly more okay
2024-12-18 01:45:54 +01:00
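
conceptually (a sketch, not the actual implementation):

    def printable(s):
        # hex-escape control chars and other unprintables so the
        # log stays greppable and terminal-safe
        return "".join(
            c if c.isprintable() else "\\x%02x" % (ord(c),) for c in s
        )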
ed
4c4e48bab7 improve dotfile handling; closes #126
when deleting a folder, any dotfiles/folders within would only
be deleted if the user had the dot-permission to see dotfiles;
this gave the confusing behavior of not removing the "empty"
folders after deleting them

fix this to only require the delete-permission, and always
delete the entire folder, including any dotfiles within

similar behavior would also apply to moves, renames, and copies;

fix moves and renames to only require the move-permission in
the source volume; dotfiles will now always be included,
regardless of whether the user does (or does not) have the
dot-permission in either the source and/or destination volumes

copying folders now also behaves more intuitively: if the user has
the dot-permission in the target volume, then dotfiles will only be
included from source folders where the user also has the dot-perm,
to prevent the user from seeing intentionally hidden files/folders
2024-12-17 22:47:34 +01:00
ed
01a3eb29cb ui: improve some eta/idle fields
cpanel db-idle-time indicator would glitch on 0.0s

upload windowtitle was %.2f seconds, but the value is int
2024-12-17 22:01:36 +01:00
ed
73f7249c5f decode and log request URLs; closes #125
as processing of an HTTP request begins (GET, HEAD, PUT, POST, ...),
the original query line is printed in its encoded form. This makes
debugging easier, since there is no ambiguity in how the client
phrased its request.

however, this results in very opaque logs for non-ascii languages;
basically a wall of percent-encoded characters. Avoid this issue
by printing an additional log-message if the URL contains `%`,
immediately below the original url-encoded entry.

also fix tests on macos, and an unrelated bad logmsg in up2k
2024-12-16 00:53:22 +01:00
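
in pseudocode (the log function and message format are illustrative):

    from urllib.parse import unquote

    log("GET %s" % (req,))  # always: the exact url-encoded request
    if "%" in req:
        log("`-decoded: %s" % (unquote(req),))  # extra human-readable line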
ed
18c6559199 update pkgs to 1.16.5 2024-12-11 22:59:44 +00:00
ed
e66ece993f v1.16.5 2024-12-11 22:36:19 +00:00
ed
0686860624 connectpage nitpick + update dompurify 2024-12-11 22:24:31 +00:00
ed
24ce46b380 avoid chrome webworker OOM bug; closes #124
chrome (and chromium-based browsers) can OOM when:

* the OS is Windows, MacOS, or Android (but not Linux?)
* the website is hosted on a remote IP (not localhost)
* webworkers are used to read files

unfortunately this also applies to Android, which heavily relies
on webworkers to make read-speeds anywhere close to acceptable

as for android, there are diminishing returns with more than 4
webworkers (1=1x, 2=2.3x, 3=3.8x, 4=4.2x, 6=4.5x, 8=5.3x), and
limiting the number of workers to ensure at least one idle core
appears to sufficiently reduce the OOM probability

on desktop, webworkers are only necessary for hashwasm, so
limit the number of workers to 2 if crypto.subtle is available
and otherwise use the nproc-1 rule for hashwasm in workers

bug report: https://issues.chromium.org/issues/383568268
2024-12-11 22:11:54 +00:00
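
the worker-count heuristic, approximately (python-flavored sketch of the js logic):

    def num_hash_workers(nproc, is_android, have_subtle):
        if is_android:
            # diminishing returns past 4 workers, and leaving one
            # core idle reduces the chance of hitting the OOM
            return min(4, max(1, nproc - 1))
        if have_subtle:
            return 2  # workers only needed for hashwasm
        return max(1, nproc - 1)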
ed
a49bf81ff2 mdns: improve nic-ip changelog
if a NIC is brought up with several IPs,
it would only mention one of the new IPs in the logs

or if a PCIe bus crashes and all NICs drop dead,
it would only mention one of the IPs that disappeared

as both scenarios are oddly common, be more verbose
2024-12-10 00:36:58 +00:00
ed
64501fd7f1 hybrid IdP (check regular users too); closes #122
previously, when IdP was enabled, the password-based login would be
entirely disabled. This was a semi-conscious decision, based on the
assumption that you would always want to use IdP after enabling it.

it makes more sense to keep password-based login working as usual,
conditionally disengaging it for requests which contain a valid
IdP username header. This makes it possible to define fallback
users, or API-only users, and all similar escape hatches.
2024-12-08 17:18:20 +00:00
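
in pseudocode, the new auth flow (names illustrative):

    idp_usr = headers.get(args.idp_h_usr, "") if args.idp_h_usr else ""
    if idp_usr and idp_usr in known_users:
        uname = idp_usr  # valid (trusted) IdP header wins
    else:
        uname = check_password(headers)  # regular login still works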
ed
db3c0b0907 nice 2024-12-07 22:24:13 +00:00
ed
edda117a7a update pkgs to 1.16.4 2024-12-07 01:10:50 +00:00
ed
cdface0dd5 v1.16.4 2024-12-07 00:24:37 +00:00
ed
be6afe2d3a improve ux for relocating partial uploads
if someone accidentally starts uploading a file in the wrong folder,
it was not obvious that you can forget that upload in the unpost tab

this '(explain)' button in the upload-error hopefully explains that,
and upload immediately commences when the initial attempt is aborted

on the backend, cleanup the dupesched when an upload is
aborted, and save some cpu by adding unique entries only
2024-12-06 23:34:47 +00:00
ed
9163780000 u2c: misc windows fixes
* support globbing/wildcards on windows

* add `osc 9;4` to show upload progress in the taskbar
   (currently windows-only; linux is picking it up)

* workaround msys2-terminal not normalizing
   absolute paths which contain whitespace

* show a helpful "now hashing..." while the
   first file is being hashed, since it kinda
   looks like a deadlock on windows otherwise
2024-12-06 18:44:05 +00:00
ed
d7aa7dfe64 translations: new strings 2024-12-04 09:46:04 +00:00
ed
f1decb531d update pkgs to 1.16.3 2024-12-04 00:41:34 +00:00
62 changed files with 1712 additions and 380 deletions

View File

@@ -48,6 +48,7 @@ turn almost any device into a file server with resumable uploads/downloads using
 * [shares](#shares) - share a file or folder by creating a temporary link
 * [batch rename](#batch-rename) - select some files and press `F2` to bring up the rename UI
 * [rss feeds](#rss-feeds) - monitor a folder with your RSS reader
+* [recent uploads](#recent-uploads) - list all recent uploads
 * [media player](#media-player) - plays almost every audio format there is
 * [audio equalizer](#audio-equalizer) - and [dynamic range compressor](https://en.wikipedia.org/wiki/Dynamic_range_compression)
 * [fix unreliable playback on android](#fix-unreliable-playback-on-android) - due to phone / app settings
@@ -91,6 +92,7 @@ turn almost any device into a file server with resumable uploads/downloads using
 * [listen on port 80 and 443](#listen-on-port-80-and-443) - become a *real* webserver
 * [reverse-proxy](#reverse-proxy) - running copyparty next to other websites
 * [real-ip](#real-ip) - teaching copyparty how to see client IPs
+* [reverse-proxy performance](#reverse-proxy-performance)
 * [prometheus](#prometheus) - metrics/stats can be enabled
 * [other extremely specific features](#other-extremely-specific-features) - you'll never find a use for these
 * [custom mimetypes](#custom-mimetypes) - change the association of a file extension
@@ -139,6 +141,7 @@ just run **[copyparty-sfx.py](https://github.com/9001/copyparty/releases/latest/
 * or if you cannot install python, you can use [copyparty.exe](#copypartyexe) instead
 * or install [on arch](#arch-package) [on NixOS](#nixos-module) [through nix](#nix-package)
 * or if you are on android, [install copyparty in termux](#install-on-android)
+* or maybe you have a [synology nas / dsm](./docs/synology-dsm.md)
 * or if your computer is messed up and nothing else works, [try the pyz](#zipapp)
 * or if you prefer to [use docker](./scripts/docker/) 🐋 you can do that too
   * docker has all deps built-in, so skip this step:
@@ -339,6 +342,9 @@ same order here too
 * [Chrome issue 1352210](https://bugs.chromium.org/p/chromium/issues/detail?id=1352210) -- plaintext http may be faster at filehashing than https (but also extremely CPU-intensive)
+* [Chrome issue 383568268](https://issues.chromium.org/issues/383568268) -- filereaders in webworkers can OOM / crash the browser-tab
+  * copyparty has a workaround which seems to work well enough
 * [Firefox issue 1790500](https://bugzilla.mozilla.org/show_bug.cgi?id=1790500) -- entire browser can crash after uploading ~4000 small files
 * Android: music playback randomly stops due to [battery usage settings](#fix-unreliable-playback-on-android)
@@ -642,7 +648,7 @@ dragdrop is the recommended way, but you may also:
 * select some files (not folders) in your file explorer and press CTRL-V inside the browser window
 * use the [command-line uploader](https://github.com/9001/copyparty/tree/hovudstraum/bin#u2cpy)
-* upload using [curl or sharex](#client-examples)
+* upload using [curl, sharex, ishare, ...](#client-examples)

 when uploading files through dragdrop or CTRL-V, this initiates an upload using `up2k`; there are two browser-based uploaders available:
 * `[🎈] bup`, the basic uploader, supports almost every browser since netscape 4.0
@@ -714,7 +720,7 @@ files go into `[ok]` if they exist (and you get a link to where it is), otherwis
 ### unpost

-undo/delete accidental uploads
+undo/delete accidental uploads using the `[🧯]` tab in the UI

 ![copyparty-unpost-fs8](https://user-images.githubusercontent.com/241032/129635368-3afa6634-c20f-418c-90dc-ec411f3b3897.png)
@@ -873,6 +879,17 @@ url parameters:
 * uppercase = reverse-sort; `M` = oldest file first

+## recent uploads
+
+list all recent uploads by clicking "show recent uploads" in the controlpanel
+
+will show uploader IP and upload-time if the visitor has the admin permission
+
+* global-option `--ups-when` makes upload-time visible to all users, and not just admins
+
+note that the [🧯 unpost](#unpost) feature is better suited for viewing *your own* recent uploads, as it includes the option to undo/delete them
+
 ## media player

 plays almost every audio format there is (if the server has FFmpeg installed for on-demand transcoding)
@@ -1089,6 +1106,8 @@ on macos, connect from finder:
 in order to grant full write-access to webdav clients, the volflag `daw` must be set and the account must also have delete-access (otherwise the client won't be allowed to replace the contents of existing files, which is how webdav works)

+> note: if you have enabled [IdP authentication](#identity-providers) then that may cause issues for some/most webdav clients; see [the webdav section in the IdP docs](https://github.com/9001/copyparty/blob/hovudstraum/docs/idp.md#connecting-webdav-clients)

 ### connecting to webdav from windows
@@ -1488,7 +1507,9 @@ replace copyparty passwords with oauth and such
 you can disable the built-in password-based login system, and instead replace it with a separate piece of software (an identity provider) which will then handle authenticating / authorizing of users; this makes it possible to login with passkeys / fido2 / webauthn / yubikey / ldap / active directory / oauth / many other single-sign-on contraptions
+* the regular config-defined users will be used as a fallback for requests which don't include a valid (trusted) IdP username header

-a popular choice is [Authelia](https://www.authelia.com/) (config-file based), another one is [authentik](https://goauthentik.io/) (GUI-based, more complex)
+some popular identity providers are [Authelia](https://www.authelia.com/) (config-file based) and [authentik](https://goauthentik.io/) (GUI-based, more complex)

 there is a [docker-compose example](./docs/examples/docker/idp-authelia-traefik) which is hopefully a good starting point (alternatively see [./docs/idp.md](./docs/idp.md) if you're the DIY type)
@@ -1652,10 +1673,16 @@ some reverse proxies (such as [Caddy](https://caddyserver.com/)) can automatical
 for improved security (and a 10% performance boost) consider listening on a unix-socket with `-i unix:770:www:/tmp/party.sock` (permission `770` means only members of group `www` can access it)

-example webserver configs:
+example webserver / reverse-proxy configs:

-* [nginx config](contrib/nginx/copyparty.conf) -- entire domain/subdomain
-* [apache2 config](contrib/apache/copyparty.conf) -- location-based
+* [apache config](contrib/apache/copyparty.conf)
+* caddy uds: `caddy reverse-proxy --from :8080 --to unix///dev/shm/party.sock`
+* caddy tcp: `caddy reverse-proxy --from :8081 --to http://127.0.0.1:3923`
+* [haproxy config](contrib/haproxy/copyparty.conf)
+* [lighttpd subdomain](contrib/lighttpd/subdomain.conf) -- entire domain/subdomain
+* [lighttpd subpath](contrib/lighttpd/subpath.conf) -- location-based (not optimal, but in case you need it)
+* [nginx config](contrib/nginx/copyparty.conf) -- recommended
+* [traefik config](contrib/traefik/copyparty.yaml)
@@ -1667,6 +1694,38 @@ if you (and maybe everybody else) keep getting a message that says `thank you fo
 for most common setups, there should be a helpful message in the server-log explaining what to do, but see [docs/xff.md](docs/xff.md) if you want to learn more, including a quick hack to **just make it work** (which is **not** recommended, but hey...)

+### reverse-proxy performance
+
+most reverse-proxies support connecting to copyparty either using uds/unix-sockets (`/dev/shm/party.sock`, faster/recommended) or using tcp (`127.0.0.1`)
+
+with copyparty listening on a uds / unix-socket / unix-domain-socket and the reverse-proxy connecting to that:
+
+| index.html   | upload      | download    | software |
+| ------------ | ----------- | ----------- | -------- |
+| 28'900 req/s | 6'900 MiB/s | 7'400 MiB/s | no-proxy |
+| 18'750 req/s | 3'500 MiB/s | 2'370 MiB/s | haproxy  |
+|  9'900 req/s | 3'750 MiB/s | 2'200 MiB/s | caddy    |
+| 18'700 req/s | 2'200 MiB/s | 1'570 MiB/s | nginx    |
+|  9'700 req/s | 1'750 MiB/s | 1'830 MiB/s | apache   |
+|  9'900 req/s | 1'300 MiB/s | 1'470 MiB/s | lighttpd |
+
+when connecting the reverse-proxy to `127.0.0.1` instead (the basic and/or old-fashioned way), speeds are a bit worse:
+
+| index.html   | upload      | download    | software |
+| ------------ | ----------- | ----------- | -------- |
+| 21'200 req/s | 5'700 MiB/s | 6'700 MiB/s | no-proxy |
+| 14'500 req/s | 1'700 MiB/s | 2'170 MiB/s | haproxy  |
+| 11'100 req/s | 2'750 MiB/s | 2'000 MiB/s | traefik  |
+|  8'400 req/s | 2'300 MiB/s | 1'950 MiB/s | caddy    |
+| 13'400 req/s | 1'100 MiB/s | 1'480 MiB/s | nginx    |
+|  8'400 req/s | 1'000 MiB/s | 1'000 MiB/s | apache   |
+|  6'500 req/s | 1'270 MiB/s | 1'500 MiB/s | lighttpd |
+
+in summary, `haproxy > caddy > traefik > nginx > apache > lighttpd`, and use uds when possible (traefik does not support it yet)
+
+* if these results are bullshit because my config examples are bad, please submit corrections!

 ## prometheus

 metrics/stats can be enabled at URL `/.cpr/metrics` for grafana / prometheus / etc (openmetrics 1.0.0)
@@ -1980,7 +2039,8 @@ interact with copyparty using non-browser clients
   * can be downloaded from copyparty: controlpanel -> connect -> [partyfuse.py](http://127.0.0.1:3923/.cpr/a/partyfuse.py)
 * [rclone](https://rclone.org/) as client can give ~5x performance, see [./docs/rclone.md](docs/rclone.md)

-* sharex (screenshot utility): see [./contrib/sharex.sxcu](contrib/#sharexsxcu)
+* sharex (screenshot utility): see [./contrib/sharex.sxcu](./contrib/#sharexsxcu)
+* and for screenshots on macos, see [./contrib/ishare.iscu](./contrib/#ishareiscu)
 * and for screenshots on linux, see [./contrib/flameshot.sh](./contrib/flameshot.sh)
 * contextlet (web browser integration); see [contrib contextlet](contrib/#send-to-cppcontextletjson)

View File

@@ -31,6 +31,9 @@ plugins in this section should only be used with appropriate precautions:
 * [very-bad-idea.py](./very-bad-idea.py) combined with [meadup.js](https://github.com/9001/copyparty/blob/hovudstraum/contrib/plugins/meadup.js) converts copyparty into a janky yet extremely flexible chromecast clone
   * also adds a virtual keyboard by @steinuil to the basic-upload tab for comfy couch crowd control
   * anything uploaded through the [android app](https://github.com/9001/party-up) (files or links) are executed on the server, meaning anyone can infect your PC with malware... so protect this with a password and keep it on a LAN!
+  * [kamelåså](https://github.com/steinuil/kameloso) is a much better (and MUCH safer) alternative to this plugin
+    * powered by [chicken-curry-banana-pineapple-peanut pizza](https://a.ocv.me/pub/g/i/2025/01/298437ce-8351-4c8c-861c-fa131d217999.jpg?cache) so you know it's good
+    * and, unlike this plugin, kamelåså even has windows support (nice)

 # dependencies

View File

@@ -6,6 +6,11 @@ WARNING -- DANGEROUS PLUGIN --
 running this plugin, they can execute malware on your machine
 so please keep this on a LAN and protect it with a password

+here is a MUCH BETTER ALTERNATIVE (which also works on Windows):
+  https://github.com/steinuil/kameloso
+
+----------------------------------------------------------------------

 use copyparty as a chromecast replacement:
 * post a URL and it will open in the default browser
 * upload a file and it will open in the default application

View File

@@ -1,8 +1,8 @@
 #!/usr/bin/env python3
 from __future__ import print_function, unicode_literals

-S_VERSION = "2.6"
-S_BUILD_DT = "2024-11-10"
+S_VERSION = "2.7"
+S_BUILD_DT = "2024-12-06"

 """
 u2c.py: upload to copyparty
@@ -1033,8 +1033,8 @@ class Ctl(object):
             handshake(self.ar, file, False)

     def _fancy(self):
-        if VT100 and not self.ar.ns:
-            atexit.register(self.cleanup_vt100)
+        atexit.register(self.cleanup_vt100)
+        if VT100 and not self.ar.ns:
             ss.scroll_region(3)

         Daemon(self.hasher)
@@ -1042,6 +1042,7 @@
         Daemon(self.handshaker)
         Daemon(self.uploader)

+        last_sp = -1
         while True:
             with self.exit_cond:
                 self.exit_cond.wait(0.07)
@@ -1080,6 +1081,12 @@
             else:
                 txt = " "

+            if not VT100:  # OSC9;4 (taskbar-progress)
+                sp = int(self.up_b * 100 / self.nbytes) or 1
+                if last_sp != sp:
+                    last_sp = sp
+                    txt += "\033]9;4;1;%d\033\\" % (sp,)
+
             if not self.up_br:
                 spd = self.hash_b / ((time.time() - self.t0) or 1)
                 eta = (self.nbytes - self.hash_b) / (spd or 1)
@@ -1096,7 +1103,9 @@
             nleft = self.nfiles - self.up_f
             tail = "\033[K\033[u" if VT100 and not self.ar.ns else "\r"

-            t = "%s eta @ %s/s, %s, %d# left\033[K" % (self.eta, spd, sleft, nleft)
+            t = "%s eta @ %s/s, %s, %d# left" % (self.eta, spd, sleft, nleft)
+            if not self.hash_b:
+                t = " now hashing..."
             eprint(txt + "\033]0;{0}\033\\\r{0}{1}".format(t, tail))

         if self.ar.wlist:
if self.ar.wlist: if self.ar.wlist:
@@ -1117,7 +1126,10 @@ class Ctl(object):
handshake(self.ar, file, False) handshake(self.ar, file, False)
def cleanup_vt100(self): def cleanup_vt100(self):
if VT100:
ss.scroll_region(None) ss.scroll_region(None)
else:
eprint("\033]9;4;0\033\\")
eprint("\033[J\033]0;\033\\") eprint("\033[J\033]0;\033\\")
def cb_hasher(self, file, ofs): def cb_hasher(self, file, ofs):
@@ -1538,6 +1550,38 @@ source file/folder selection uses rsync syntax, meaning that:
         except:
             pass

+    # msys2 doesn't uncygpath absolute paths with whitespace
+    if not VT100:
+        zsl = []
+        for fn in ar.files:
+            if re.search("^/[a-z]/", fn):
+                fn = r"%s:\%s" % (fn[1:2], fn[3:])
+            zsl.append(fn.replace("/", "\\"))
+        ar.files = zsl
+
+    fok = []
+    fng = []
+    for fn in ar.files:
+        if os.path.exists(fn):
+            fok.append(fn)
+        elif VT100:
+            fng.append(fn)
+        else:
+            # windows leaves glob-expansion to the invoked process... okayyy let's get to work
+            from glob import glob
+
+            fns = glob(fn)
+            if fns:
+                fok.extend(fns)
+            else:
+                fng.append(fn)
+
+    if fng:
+        t = "some files/folders were not found:\n  %s"
+        raise Exception(t % ("\n  ".join(fng),))
+
+    ar.files = fok

     if ar.drd:
         ar.dr = True

View File

@@ -12,14 +12,19 @@
 * assumes the webserver and copyparty is running on the same server/IP
 * modify `10.13.1.1` as necessary if you wish to support browsers without javascript

-### [`sharex.sxcu`](sharex.sxcu)
-* sharex config file to upload screenshots and grab the URL
+### [`sharex.sxcu`](sharex.sxcu) - Windows screenshot uploader
+* [sharex](https://getsharex.com/) config file to upload screenshots and grab the URL
   * `RequestURL`: full URL to the target folder
   * `pw`: password (remove the `pw` line if anon-write)
   * the `act:bput` thing is optional since copyparty v1.9.29
 * using an older sharex version, maybe sharex v12.1.1 for example? dw fam i got your back 👉😎👉 [`sharex12.sxcu`](sharex12.sxcu)

-### [`flameshot.sh`](flameshot.sh)
+### [`ishare.iscu`](ishare.iscu) - MacOS screenshot uploader
+* [ishare](https://isharemac.app/) config file to upload screenshots and grab the URL
+  * `RequestURL`: full URL to the target folder
+  * `pw`: password (remove the `pw` line if anon-write)
+
+### [`flameshot.sh`](flameshot.sh) - Linux screenshot uploader
 * takes a screenshot with [flameshot](https://flameshot.org/) on Linux, uploads it, and writes the URL to clipboard

 ### [`send-to-cpp.contextlet.json`](send-to-cpp.contextlet.json)
@@ -53,5 +58,10 @@ init-scripts to start copyparty as a service
 * [`openrc/copyparty`](openrc/copyparty)

 # Reverse-proxy
-copyparty has basic support for running behind another webserver
-* [`nginx/copyparty.conf`](nginx/copyparty.conf)
+copyparty supports running behind another webserver
+* [`apache/copyparty.conf`](apache/copyparty.conf)
+* [`haproxy/copyparty.conf`](haproxy/copyparty.conf)
+* [`lighttpd/subdomain.conf`](lighttpd/subdomain.conf)
+* [`lighttpd/subpath.conf`](lighttpd/subpath.conf)
+* [`nginx/copyparty.conf`](nginx/copyparty.conf) -- recommended
+* [`traefik/copyparty.yaml`](traefik/copyparty.yaml)

View File

@@ -1,14 +1,29 @@
-# when running copyparty behind a reverse proxy,
-# the following arguments are recommended:
+# if you would like to use unix-sockets (recommended),
+# you must run copyparty with one of the following:
 #
-# -i 127.0.0.1    only accept connections from nginx
+# -i unix:777:/dev/shm/party.sock
+# -i unix:777:/dev/shm/party.sock,127.0.0.1
 #
 # if you are doing location-based proxying (such as `/stuff` below)
 # you must run copyparty with --rp-loc=stuff
 #
 # on fedora/rhel, remember to setsebool -P httpd_can_network_connect 1

 LoadModule proxy_module modules/mod_proxy.so

-ProxyPass "/stuff" "http://127.0.0.1:3923/stuff"
-# do not specify ProxyPassReverse
-
 RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
+# NOTE: do not specify ProxyPassReverse
+
+##
+## then, enable one of the below:
+
+# use subdomain proxying to unix-socket (best)
+ProxyPass "/" "unix:///dev/shm/party.sock|http://whatever/"
+
+# use subdomain proxying to 127.0.0.1 (slower)
+#ProxyPass "/" "http://127.0.0.1:3923/"
+
+# use subpath proxying to 127.0.0.1 (slow and maybe buggy)
+#ProxyPass "/stuff" "http://127.0.0.1:3923/stuff"

View File

@@ -0,0 +1,24 @@
# this config is essentially two separate examples;
#
# foo1 connects to copyparty using tcp, and
# foo2 uses unix-sockets for 27% higher performance
#
# to use foo2 you must run copyparty with one of the following:
#
# -i unix:777:/dev/shm/party.sock
# -i unix:777:/dev/shm/party.sock,127.0.0.1

defaults
    mode http
    option forwardfor
    timeout connect 1s
    timeout client 610s
    timeout server 610s

listen foo1
    bind *:8081
    server srv1 127.0.0.1:3923 maxconn 512

listen foo2
    bind *:8082
    server srv1 /dev/shm/party.sock maxconn 512

contrib/ishare.iscu (new file)
View File

@@ -0,0 +1,10 @@
{
    "Name": "copyparty",
    "RequestURL": "http://127.0.0.1:3923/screenshots/",
    "Headers": {
        "pw": "PUT_YOUR_PASSWORD_HERE_MY_DUDE",
        "accept": "json"
    },
    "FileFormName": "f",
    "ResponseURL": "{{fileurl}}"
}
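
for testing without ishare, the same upload can be approximated with `curl -H 'pw: PUT_YOUR_PASSWORD_HERE_MY_DUDE' -H 'accept: json' -F f=@screenshot.png http://127.0.0.1:3923/screenshots/` (same URL/password as the config above; `act:bput` is optional since copyparty v1.9.29) -- the json-response should contain the `fileurl` that ishare reads back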

View File

@@ -0,0 +1,24 @@
# example usage for benchmarking:
#
# taskset -c 1 lighttpd -Df ~/dev/copyparty/contrib/lighttpd/subdomain.conf
#
# lighttpd can connect to copyparty using either tcp (127.0.0.1)
# or a unix-socket, but unix-sockets are 37% faster because
# lighttpd doesn't reuse tcp connections, so we're doing unix-sockets
#
# this means we must run copyparty with one of the following:
#
# -i unix:777:/dev/shm/party.sock
# -i unix:777:/dev/shm/party.sock,127.0.0.1
#
# on fedora/rhel, remember to setsebool -P httpd_can_network_connect 1
server.port = 80
server.document-root = "/var/empty"
server.upload-dirs = ( "/dev/shm", "/tmp" )
server.modules = ( "mod_proxy" )
proxy.forwarded = ( "for" => 1, "proto" => 1 )
proxy.server = ( "" => ( ( "host" => "/dev/shm/party.sock" ) ) )
# if you really need to use tcp instead of unix-sockets, do this instead:
#proxy.server = ( "" => ( ( "host" => "127.0.0.1", "port" => "3923" ) ) )

View File

@@ -0,0 +1,31 @@
# example usage for benchmarking:
#
# taskset -c 1 lighttpd -Df ~/dev/copyparty/contrib/lighttpd/subpath.conf
#
# lighttpd can connect to copyparty using either tcp (127.0.0.1)
# or a unix-socket, but unix-sockets are 37% faster because
# lighttpd doesn't reuse tcp connections, so we're doing unix-sockets
#
# this means we must run copyparty with one of the following:
#
# -i unix:777:/dev/shm/party.sock
# -i unix:777:/dev/shm/party.sock,127.0.0.1
#
# also since this example proxies a subpath instead of the
# recommended subdomain-proxying, we must also specify this:
#
# --rp-loc files
#
# on fedora/rhel, remember to setsebool -P httpd_can_network_connect 1
server.port = 80
server.document-root = "/var/empty"
server.upload-dirs = ( "/dev/shm", "/tmp" )
server.modules = ( "mod_proxy" )
$HTTP["url"] =~ "^/files" {
    proxy.forwarded = ( "for" => 1, "proto" => 1 )
    proxy.server = ( "" => ( ( "host" => "/dev/shm/party.sock" ) ) )

    # if you really need to use tcp instead of unix-sockets, do this instead:
    #proxy.server = ( "" => ( ( "host" => "127.0.0.1", "port" => "3923" ) ) )
}

View File

@@ -36,9 +36,9 @@ upstream cpp_uds {
     # but there must be at least one unix-group which both
     # nginx and copyparty is a member of; if that group is
     # "www" then run copyparty with the following args:
-    # -i unix:770:www:/tmp/party.sock
+    # -i unix:770:www:/dev/shm/party.sock

-    server unix:/tmp/party.sock fail_timeout=1s;
+    server unix:/dev/shm/party.sock fail_timeout=1s;
     keepalive 1;
 }
@@ -61,6 +61,10 @@ server {
     client_max_body_size 0;
     proxy_buffering off;
     proxy_request_buffering off;
+    # improve download speed from 600 to 1500 MiB/s
+    proxy_buffers 32 8k;
+    proxy_buffer_size 16k;
+    proxy_busy_buffers_size 24k;

     proxy_set_header Host $host;
     proxy_set_header X-Real-IP $remote_addr;

View File

@@ -1,6 +1,6 @@
 # Maintainer: icxes <dev.null@need.moe>
 pkgname=copyparty
-pkgver="1.16.2"
+pkgver="1.16.7"
 pkgrel=1
 pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
 arch=("any")
@@ -21,7 +21,7 @@ optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tag
 )
 source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
 backup=("etc/${pkgname}.d/init" )
-sha256sums=("5ecc1626e3f3a7bb7de5e6697742cd5c5990e20afec867d8de648afb04fbc04b")
+sha256sums=("22178c98513072a8ef1e0fdb85d1044becf345ee392a9f5a336cc340ae16e4e9")

 build() {
     cd "${srcdir}/${pkgname}-${pkgver}"

View File

@@ -1,5 +1,5 @@
 {
-  "url": "https://github.com/9001/copyparty/releases/download/v1.16.2/copyparty-sfx.py",
-  "version": "1.16.2",
-  "hash": "sha256-HdgVtsoSX3qgDkNOD9e0PRZlY/wL6T02Q6zECfX/TdM="
+  "url": "https://github.com/9001/copyparty/releases/download/v1.16.7/copyparty-sfx.py",
+  "version": "1.16.7",
+  "hash": "sha256-mAoZre3hArsdXorZwv0mYESn/mtyMXfcUzcOMwnk8Do="
 }

View File

@@ -0,0 +1,12 @@
# ./traefik --experimental.fastproxy=true --entrypoints.web.address=:8080 --providers.file.filename=copyparty.yaml

http:
  services:
    service-cpp:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:3923/"
  routers:
    my-router:
      rule: "PathPrefix(`/`)"
      service: service-cpp

View File

@@ -91,6 +91,9 @@ web/mde.html
 web/mde.js
 web/msg.css
 web/msg.html
+web/rups.css
+web/rups.html
+web/rups.js
 web/shares.css
 web/shares.html
 web/shares.js

View File

@@ -1083,7 +1083,7 @@ def add_cert(ap, cert_path):
 def add_auth(ap):
     ses_db = os.path.join(E.cfg, "sessions.db")
     ap2 = ap.add_argument_group('IdP / identity provider / user authentication options')
-    ap2.add_argument("--idp-h-usr", metavar="HN", type=u, default="", help="bypass the copyparty authentication checks and assume the request-header \033[33mHN\033[0m contains the username of the requesting user (for use with authentik/oauth/...)\n\033[1;31mWARNING:\033[0m if you enable this, make sure clients are unable to specify this header themselves; must be washed away and replaced by a reverse-proxy")
+    ap2.add_argument("--idp-h-usr", metavar="HN", type=u, default="", help="bypass the copyparty authentication checks if the request-header \033[33mHN\033[0m contains a username to associate the request with (for use with authentik/oauth/...)\n\033[1;31mWARNING:\033[0m if you enable this, make sure clients are unable to specify this header themselves; must be washed away and replaced by a reverse-proxy")
     ap2.add_argument("--idp-h-grp", metavar="HN", type=u, default="", help="assume the request-header \033[33mHN\033[0m contains the groupname of the requesting user; can be referenced in config files for group-based access control")
     ap2.add_argument("--idp-h-key", metavar="HN", type=u, default="", help="optional but recommended safeguard; your reverse-proxy will insert a secret header named \033[33mHN\033[0m into all requests, and the other IdP headers will be ignored if this header is not present")
     ap2.add_argument("--idp-gsep", metavar="RE", type=u, default="|:;+,", help="if there are multiple groups in \033[33m--idp-h-grp\033[0m, they are separated by one of the characters in \033[33mRE\033[0m")
@@ -1250,7 +1250,6 @@ def add_optouts(ap):
     ap2.add_argument("--no-zip", action="store_true", help="disable download as zip/tar")
     ap2.add_argument("--no-tarcmp", action="store_true", help="disable download as compressed tar (?tar=gz, ?tar=bz2, ?tar=xz, ?tar=gz:9, ...)")
     ap2.add_argument("--no-lifetime", action="store_true", help="do not allow clients (or server config) to schedule an upload to be deleted after a given time")
-    ap2.add_argument("--no-up-list", action="store_true", help="don't show list of incoming files in controlpanel")
     ap2.add_argument("--no-pipe", action="store_true", help="disable race-the-beam (lockstep download of files which are currently being uploaded) (volflag=nopipe)")
     ap2.add_argument("--no-db-ip", action="store_true", help="do not write uploader IPs into the database")
@@ -1326,7 +1325,10 @@ def add_admin(ap):
     ap2.add_argument("--no-reload", action="store_true", help="disable ?reload=cfg (reload users/volumes/volflags from config file)")
     ap2.add_argument("--no-rescan", action="store_true", help="disable ?scan (volume reindexing)")
     ap2.add_argument("--no-stack", action="store_true", help="disable ?stack (list all stacks)")
+    ap2.add_argument("--no-ups-page", action="store_true", help="disable ?ru (list of recent uploads)")
+    ap2.add_argument("--no-up-list", action="store_true", help="don't show list of incoming files in controlpanel")
     ap2.add_argument("--dl-list", metavar="LVL", type=int, default=2, help="who can see active downloads in the controlpanel? [\033[32m0\033[0m]=nobody, [\033[32m1\033[0m]=admins, [\033[32m2\033[0m]=everyone")
+    ap2.add_argument("--ups-when", action="store_true", help="let everyone see upload timestamps on the ?ru page, not just admins")

 def add_thumbnail(ap):
@@ -1505,6 +1507,7 @@ def add_debug(ap):
     ap2.add_argument("--bf-nc", metavar="NUM", type=int, default=200, help="bak-flips: stop if there's more than \033[33mNUM\033[0m files at \033[33m--kf-dir\033[0m already; default: 6.3 GiB max (200*32M)")
     ap2.add_argument("--bf-dir", metavar="PATH", type=u, default="bf", help="bak-flips: store corrupted chunks at \033[33mPATH\033[0m; default: folder named 'bf' wherever copyparty was started")
     ap2.add_argument("--bf-log", metavar="PATH", type=u, default="", help="bak-flips: log corruption info to a textfile at \033[33mPATH\033[0m")
+    ap2.add_argument("--no-cfg-cmt-warn", action="store_true", help=argparse.SUPPRESS)

 # fmt: on
@@ -1737,7 +1740,7 @@ def main(argv: Optional[list[str]] = None) -> None:
         except:
             lprint("\nfailed to disable quick-edit-mode:\n" + min_ex() + "\n")

-    if al.ansi:
+    if not al.ansi:
         al.wintitle = ""

     # propagate implications

View File

@@ -1,8 +1,8 @@
 # coding: utf-8

-VERSION = (1, 16, 3)
+VERSION = (1, 16, 8)
 CODENAME = "COPYparty"
-BUILD_DT = (2024, 12, 4)
+BUILD_DT = (2025, 1, 11)

 S_VERSION = ".".join(map(str, VERSION))
 S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)

View File

@@ -212,7 +212,7 @@ class Lim(object):
         df, du, err = get_df(abspath, True)
         if err:
-            t = "failed to read disk space usage for [%s]: %s"
+            t = "failed to read disk space usage for %r: %s"
             self.log(t % (abspath, err), 3)
             self.dfv = 0xAAAAAAAAA  # 42.6 GiB
         else:
@@ -526,7 +526,7 @@
         """returns [vfsnode,fs_remainder] if user has the requested permissions"""
         if relchk(vpath):
             if self.log:
-                self.log("vfs", "invalid relpath [{}]".format(vpath))
+                self.log("vfs", "invalid relpath %r @%s" % (vpath, uname))
             raise Pebkac(422)

         cvpath = undot(vpath)
@@ -543,11 +543,11 @@
         if req and uname not in d and uname != LEELOO_DALLAS:
             if vpath != cvpath and vpath != "." and self.log:
                 ap = vn.canonical(rem)
-                t = "{} has no {} in [{}] => [{}] => [{}]"
-                self.log("vfs", t.format(uname, msg, vpath, cvpath, ap), 6)
+                t = "%s has no %s in %r => %r => %r"
+                self.log("vfs", t % (uname, msg, vpath, cvpath, ap), 6)

-            t = 'you don\'t have %s-access in "/%s" or below "/%s"'
-            raise Pebkac(err, t % (msg, cvpath, vn.vpath))
+            t = "you don't have %s-access in %r or below %r"
+            raise Pebkac(err, t % (msg, "/" + cvpath, "/" + vn.vpath))

         return vn, rem
@@ -658,7 +658,7 @@
         seen: list[str],
         uname: str,
         permsets: list[list[bool]],
-        wantdots: bool,
+        wantdots: int,
         scandir: bool,
         lstat: bool,
         subvols: bool = True,
@@ -693,8 +693,8 @@
             and fsroot in seen
         ):
             if self.log:
-                t = "bailing from symlink loop,\n  prev: {}\n  curr: {}\n  from: {}/{}"
-                self.log("vfs.walk", t.format(seen[-1], fsroot, self.vpath, rem), 3)
+                t = "bailing from symlink loop,\n  prev: %r\n  curr: %r\n  from: %r / %r"
+                self.log("vfs.walk", t % (seen[-1], fsroot, self.vpath, rem), 3)
             return

         if "xdev" in self.flags or "xvol" in self.flags:
@@ -706,7 +706,7 @@
                 rm1.append(le)
             _ = [vfs_ls.remove(x) for x in rm1]  # type: ignore

-        dots_ok = wantdots and uname in dbv.axs.udot
+        dots_ok = wantdots and (wantdots == 2 or uname in dbv.axs.udot)
         if not dots_ok:
             vfs_ls = [x for x in vfs_ls if "/." not in "/" + x[0]]
@@ -760,7 +760,7 @@
         # if single folder: the folder itself is the top-level item
         folder = "" if flt or not wrap else (vpath.split("/")[-1].lstrip(".") or "top")

-        g = self.walk(folder, vrem, [], uname, [[True, False]], True, scandir, False)
+        g = self.walk(folder, vrem, [], uname, [[True, False]], 1, scandir, False)
         for _, _, vpath, apath, files, rd, vd in g:
             if flt:
                 files = [x for x in files if x[0] in flt]
@@ -818,8 +818,8 @@
             if vdev != st.st_dev:
                 if self.log:
-                    t = "xdev: {}[{}] => {}[{}]"
-                    self.log("vfs", t.format(vdev, self.realpath, st.st_dev, ap), 3)
+                    t = "xdev: %s[%r] => %s[%r]"
+                    self.log("vfs", t % (vdev, self.realpath, st.st_dev, ap), 3)

                 return None
@@ -829,7 +829,7 @@
                 return vn

         if self.log:
-            self.log("vfs", "xvol: [{}]".format(ap), 3)
+            self.log("vfs", "xvol: %r" % (ap,), 3)

         return None
@@ -914,7 +914,7 @@ class AuthSrv(object):
             self.idp_accs[uname] = gnames

-            t = "reinitializing due to new user from IdP: [%s:%s]"
+            t = "reinitializing due to new user from IdP: [%r:%r]"
             self.log(t % (uname, gnames), 3)

             if not broker:
@@ -1568,7 +1568,7 @@
                 continue

             if self.args.shr_v:
-                t = "loading %s share [%s] by [%s] => [%s]"
+                t = "loading %s share %r by %r => %r"
                 self.log(t % (s_pr, s_k, s_un, s_vp))

             if s_pw:
@@ -1765,7 +1765,7 @@
                     use = True
                     try:
                         _ = float(zs)
-                        zs = "%sg" % (zs)
+                        zs = "%sg" % (zs,)
                     except:
                         pass
                     lim.dfl = unhumanize(zs)
@@ -2181,11 +2181,11 @@ class AuthSrv(object):
             if not self.args.no_voldump:
                 self.log(t)

-        if have_e2d:
+        if have_e2d or self.args.idp_h_usr:
             t = self.chk_sqlite_threadsafe()
             if t:
                 self.log("\n\033[{}\033[0m\n".format(t))

+        if have_e2d:
             if not have_e2t:
                 t = "hint: enable multimedia indexing (artist/title/...) with argument -e2ts"
                 self.log(t, 6)
@@ -2538,7 +2538,7 @@
             return
         elif self.args.chpw_v == 2:
-            t = "chpw: %d changed" % (len(uok))
+            t = "chpw: %d changed" % (len(uok),)
             if urst:
                 t += ", \033[0munchanged:\033[35m %s" % (", ".join(list(urst)))
@@ -2696,7 +2696,7 @@
                 [],
                 u,
                 [[True, False]],
-                True,
+                1,
                 not self.args.no_scandir,
                 False,
                 False,
@@ -3017,6 +3017,19 @@ def expand_config_file(
         ret.append("#\033[36m closed{}\033[0m".format(ipath))

+    zsl = []
+    for ln in ret:
+        zs = ln.split("  #")[0]
+        if " #" in zs and zs.split("#")[0].strip():
+            zsl.append(ln)
+    if zsl and "no-cfg-cmt-warn" not in "\n".join(ret):
+        t = "\033[33mWARNING: there is less than two spaces before the # in the following config lines, so instead of assuming that this is a comment, the whole line will become part of the config value:\n\n>>> %s\n\nif you are familiar with this and would like to mute this warning, specify the global-option no-cfg-cmt-warn\n\033[0m"
+        t = t % ("\n>>> ".join(zsl),)
+        if log:
+            log(t)
+        else:
+            print(t, file=sys.stderr)

 def upgrade_cfg_fmt(
     log: Optional["NamedLogger"], args: argparse.Namespace, orig: list[str], cfg_fp: str

View File

@@ -42,14 +42,14 @@ class Fstab(object):
         self.cache = {}

         fs = "ext4"
-        msg = "failed to determine filesystem at [{}]; assuming {}\n{}"
+        msg = "failed to determine filesystem at %r; assuming %s\n%s"

         if ANYWIN:
             fs = "vfat"
             try:
                 path = self._winpath(path)
             except:
-                self.log(msg.format(path, fs, min_ex()), 3)
+                self.log(msg % (path, fs, min_ex()), 3)
                 return fs

         path = undot(path)
@@ -61,11 +61,11 @@
         try:
             fs = self.get_w32(path) if ANYWIN else self.get_unix(path)
         except:
-            self.log(msg.format(path, fs, min_ex()), 3)
+            self.log(msg % (path, fs, min_ex()), 3)

         fs = fs.lower()
         self.cache[path] = fs
-        self.log("found {} at {}".format(fs, path))
+        self.log("found %s at %r" % (fs, path))

         return fs

     def _winpath(self, path: str) -> str:

View File

@@ -145,6 +145,14 @@ A_FILE = os.stat_result(
     (0o644, -1, -1, 1, 1000, 1000, 8, 0x39230101, 0x39230101, 0x39230101)
 )

+RE_CC = re.compile(r"[\x00-\x1f]")  # search always faster
+RE_HSAFE = re.compile(r"[\x00-\x1f<>\"'&]")  # search always much faster
+RE_HOST = re.compile(r"[^][0-9a-zA-Z.:_-]")  # search faster <=17ch
+RE_MHOST = re.compile(r"^[][0-9a-zA-Z.:_-]+$")  # match faster >=18ch
+RE_K = re.compile(r"[^0-9a-zA-Z_-]")  # search faster <=17ch
+
+UPARAM_CC_OK = set("doc move tree".split())

 class HttpCli(object):
     """
@@ -389,6 +397,15 @@ class HttpCli(object):
self.host = self.headers.get("x-forwarded-host") or self.host self.host = self.headers.get("x-forwarded-host") or self.host
trusted_xff = True trusted_xff = True
m = RE_HOST.search(self.host)
if m and self.host != self.args.name:
zs = self.host
t = "malicious user; illegal Host header; req(%r) host(%r) => %r"
self.log(t % (self.req, zs, zs[m.span()[0] :]), 1)
self.cbonk(self.conn.hsrv.gmal, zs, "bad_host", "illegal Host header")
self.terse_reply(b"illegal Host header", 400)
return False
if self.is_banned(): if self.is_banned():
return False return False
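To see what the new Host-header guard actually logs, here is an isolated sketch; the header value is made up, and the `self.args.name` exemption is omitted:

    import re
    RE_HOST = re.compile(r"[^][0-9a-zA-Z.:_-]")

    host = "example.com/evil"  # hypothetical malicious Host header
    m = RE_HOST.search(host)
    if m:
        # same shape as the log line: full value, then the tail
        # starting at the first illegal character
        print("host(%r) => %r" % (host, host[m.span()[0]:]))
        # host('example.com/evil') => '/evil'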
@@ -434,6 +451,16 @@ class HttpCli(object):
self.loud_reply(t, status=400) self.loud_reply(t, status=400)
return False return False
ptn_cc = RE_CC
m = ptn_cc.search(self.req)
if m:
zs = self.req
t = "malicious user; Cc in req0 %r => %r"
self.log(t % (zs, zs[m.span()[0] :]), 1)
self.cbonk(self.conn.hsrv.gmal, zs, "cc_r0", "Cc in req0")
self.terse_reply(b"", 500)
return False
# split req into vpath + uparam # split req into vpath + uparam
uparam = {} uparam = {}
if "?" not in self.req: if "?" not in self.req:
@@ -446,8 +473,8 @@ class HttpCli(object):
self.trailing_slash = vpath.endswith("/") self.trailing_slash = vpath.endswith("/")
vpath = undot(vpath) vpath = undot(vpath)
ptn = self.conn.hsrv.ptn_cc re_k = RE_K
k_safe = self.conn.hsrv.uparam_cc_ok k_safe = UPARAM_CC_OK
for k in arglist.split("&"): for k in arglist.split("&"):
if "=" in k: if "=" in k:
k, zs = k.split("=", 1) k, zs = k.split("=", 1)
@@ -457,6 +484,14 @@ class HttpCli(object):
else: else:
sv = "" sv = ""
m = re_k.search(k)
if m:
t = "malicious user; bad char in query key; req(%r) qk(%r) => %r"
self.log(t % (self.req, k, k[m.span()[0] :]), 1)
self.cbonk(self.conn.hsrv.gmal, self.req, "bc_q", "illegal qkey")
self.terse_reply(b"", 500)
return False
k = k.lower() k = k.lower()
uparam[k] = sv uparam[k] = sv
@@ -464,23 +499,32 @@ class HttpCli(object):
continue continue
zs = "%s=%s" % (k, sv) zs = "%s=%s" % (k, sv)
m = ptn.search(zs) m = ptn_cc.search(zs)
if not m: if not m:
continue continue
hit = zs[m.span()[0] :] t = "malicious user; Cc in query; req(%r) qp(%r) => %r"
t = "malicious user; Cc in query [{}] => [{!r}]" self.log(t % (self.req, zs, zs[m.span()[0] :]), 1)
self.log(t.format(self.req, hit), 1)
self.cbonk(self.conn.hsrv.gmal, self.req, "cc_q", "Cc in query") self.cbonk(self.conn.hsrv.gmal, self.req, "cc_q", "Cc in query")
self.terse_reply(b"", 500) self.terse_reply(b"", 500)
return False return False
if "k" in uparam:
m = RE_K.search(uparam["k"])
if m:
zs = uparam["k"]
t = "malicious user; illegal filekey; req(%r) k(%r) => %r"
self.log(t % (self.req, zs, zs[m.span()[0] :]), 1)
self.cbonk(self.conn.hsrv.gmal, zs, "bad_k", "illegal filekey")
self.terse_reply(b"illegal filekey", 400)
return False
if self.is_vproxied: if self.is_vproxied:
if vpath.startswith(self.args.R): if vpath.startswith(self.args.R):
vpath = vpath[len(self.args.R) + 1 :] vpath = vpath[len(self.args.R) + 1 :]
else: else:
t = "incorrect --rp-loc or webserver config; expected vpath starting with [{}] but got [{}]" t = "incorrect --rp-loc or webserver config; expected vpath starting with %r but got %r"
self.log(t.format(self.args.R, vpath), 1) self.log(t % (self.args.R, vpath), 1)
self.ouparam = uparam.copy() self.ouparam = uparam.copy()
@@ -518,7 +562,7 @@ class HttpCli(object):
return self.tx_qr() return self.tx_qr()
if relchk(self.vpath) and (self.vpath != "*" or self.mode != "OPTIONS"): if relchk(self.vpath) and (self.vpath != "*" or self.mode != "OPTIONS"):
self.log("invalid relpath [{}]".format(self.vpath)) self.log("illegal relpath; req(%r) => %r" % (self.req, "/" + self.vpath))
self.cbonk(self.conn.hsrv.gmal, self.req, "bad_vp", "invalid relpaths") self.cbonk(self.conn.hsrv.gmal, self.req, "bad_vp", "invalid relpaths")
return self.tx_404() and self.keepalive return self.tx_404() and self.keepalive
@@ -542,8 +586,14 @@ class HttpCli(object):
except: except:
pass pass
self.pw = uparam.get("pw") or self.headers.get("pw") or bauth or cookie_pw
self.uname = (
self.asrv.sesa.get(self.pw)
or self.asrv.iacct.get(self.asrv.ah.hash(self.pw))
or "*"
)
if self.args.idp_h_usr: if self.args.idp_h_usr:
self.pw = ""
idp_usr = self.headers.get(self.args.idp_h_usr) or "" idp_usr = self.headers.get(self.args.idp_h_usr) or ""
if idp_usr: if idp_usr:
idp_grp = ( idp_grp = (
@@ -588,20 +638,11 @@ class HttpCli(object):
idp_grp = "" idp_grp = ""
if idp_usr in self.asrv.vfs.aread: if idp_usr in self.asrv.vfs.aread:
self.pw = ""
self.uname = idp_usr self.uname = idp_usr
self.html_head += "<script>var is_idp=1</script>\n" self.html_head += "<script>var is_idp=1</script>\n"
else: else:
self.log("unknown username: [%s]" % (idp_usr), 1) self.log("unknown username: %r" % (idp_usr,), 1)
self.uname = "*"
else:
self.uname = "*"
else:
self.pw = uparam.get("pw") or self.headers.get("pw") or bauth or cookie_pw
self.uname = (
self.asrv.sesa.get(self.pw)
or self.asrv.iacct.get(self.asrv.ah.hash(self.pw))
or "*"
)
if self.args.ipu and self.uname == "*": if self.args.ipu and self.uname == "*":
self.uname = self.conn.ipu_iu[self.conn.ipu_nm.map(self.ip)] self.uname = self.conn.ipu_iu[self.conn.ipu_nm.map(self.ip)]
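This hunk hoists password resolution out of the else-branch so it always runs first, and the IdP branch then merely overrides the result. A rough sketch of the resulting order (not the actual method):

    # rough sketch; sesa = session tokens, iacct = hashed-password accounts
    def resolve(pw, idp_usr, idp_enabled, sesa, iacct, hash_pw, known_idp):
        uname = sesa.get(pw) or iacct.get(hash_pw(pw)) or "*"
        if idp_enabled:        # server runs with --idp-h-usr
            pw = ""            # passwords are discarded in IdP mode
            if idp_usr and idp_usr in known_idp:
                uname = idp_usr
        return uname, pw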
@@ -666,7 +707,7 @@ class HttpCli(object):
origin = self.headers.get("origin", "<?>") origin = self.headers.get("origin", "<?>")
proto = "https://" if self.is_https else "http://" proto = "https://" if self.is_https else "http://"
guess = "modifying" if (origin and host) else "stripping" guess = "modifying" if (origin and host) else "stripping"
t = "cors-reject %s because request-header Origin='%s' does not match request-protocol '%s' and host '%s' based on request-header Host='%s' (note: if this request is not malicious, check if your reverse-proxy is accidentally %s request headers, in particular 'Origin', for example by running copyparty with --ihead='*' to show all request headers)" t = "cors-reject %s because request-header Origin=%r does not match request-protocol %r and host %r based on request-header Host=%r (note: if this request is not malicious, check if your reverse-proxy is accidentally %s request headers, in particular 'Origin', for example by running copyparty with --ihead='*' to show all request headers)"
self.log(t % (self.mode, origin, proto, self.host, host, guess), 3) self.log(t % (self.mode, origin, proto, self.host, host, guess), 3)
raise Pebkac(403, "rejected by cors-check") raise Pebkac(403, "rejected by cors-check")
@@ -712,7 +753,7 @@ class HttpCli(object):
if pex.code != 404 or self.do_log: if pex.code != 404 or self.do_log:
self.log( self.log(
"http%d: %s\033[0m, %s" % (pex.code, msg, self.vpath), "http%d: %s\033[0m, %r" % (pex.code, msg, "/" + self.vpath),
6 if em.startswith("client d/c ") else 3, 6 if em.startswith("client d/c ") else 3,
) )
@@ -874,12 +915,12 @@ class HttpCli(object):
for k, zs in list(self.out_headers.items()) + self.out_headerlist: for k, zs in list(self.out_headers.items()) + self.out_headerlist:
response.append("%s: %s" % (k, zs)) response.append("%s: %s" % (k, zs))
ptn_cc = RE_CC
for zs in response: for zs in response:
m = self.conn.hsrv.ptn_cc.search(zs) m = ptn_cc.search(zs)
if m: if m:
hit = zs[m.span()[0] :] t = "malicious user; Cc in out-hdr; req(%r) hdr(%r) => %r"
t = "malicious user; Cc in out-hdr {!r} => [{!r}]" self.log(t % (self.req, zs, zs[m.span()[0] :]), 1)
self.log(t.format(zs, hit), 1)
self.cbonk(self.conn.hsrv.gmal, zs, "cc_hdr", "Cc in out-hdr") self.cbonk(self.conn.hsrv.gmal, zs, "cc_hdr", "Cc in out-hdr")
raise Pebkac(999) raise Pebkac(999)
@@ -1005,7 +1046,7 @@ class HttpCli(object):
if not kv: if not kv:
return "" return ""
r = ["%s=%s" % (k, quotep(zs)) if zs else k for k, zs in kv.items()] r = ["%s=%s" % (quotep(k), quotep(zs)) if zs else k for k, zs in kv.items()]
return "?" + "&amp;".join(r) return "?" + "&amp;".join(r)
def ourlq(self) -> str: def ourlq(self) -> str:
@@ -1121,6 +1162,8 @@ class HttpCli(object):
logmsg += " [\033[36m" + rval + "\033[0m]" logmsg += " [\033[36m" + rval + "\033[0m]"
self.log(logmsg) self.log(logmsg)
if "%" in self.req:
self.log(" `-- %r" % (self.vpath,))
# "embedded" resources # "embedded" resources
if self.vpath.startswith(".cpr"): if self.vpath.startswith(".cpr"):
@@ -1155,8 +1198,8 @@ class HttpCli(object):
return self.tx_res(res_path) return self.tx_res(res_path)
if res_path != undot(res_path): if res_path != undot(res_path):
t = "malicious user; attempted path traversal [{}] => [{}]" t = "malicious user; attempted path traversal; req(%r) vp(%r) => %r"
self.log(t.format(self.vpath, res_path), 1) self.log(t % (self.req, "/" + self.vpath, res_path), 1)
self.cbonk(self.conn.hsrv.gmal, self.req, "trav", "path traversal") self.cbonk(self.conn.hsrv.gmal, self.req, "trav", "path traversal")
self.tx_404() self.tx_404()
@@ -1167,11 +1210,11 @@ class HttpCli(object):
return True return True
if not self.can_read and not self.can_write and not self.can_get: if not self.can_read and not self.can_write and not self.can_get:
t = "@{} has no access to [{}]" t = "@%s has no access to %r"
if "on403" in self.vn.flags: if "on403" in self.vn.flags:
t += " (on403)" t += " (on403)"
self.log(t.format(self.uname, self.vpath)) self.log(t % (self.uname, "/" + self.vpath))
ret = self.on40x(self.vn.flags["on403"], self.vn, self.rem) ret = self.on40x(self.vn.flags["on403"], self.vn, self.rem)
if ret == "true": if ret == "true":
return True return True
@@ -1190,7 +1233,7 @@ class HttpCli(object):
if self.vpath: if self.vpath:
ptn = self.args.nonsus_urls ptn = self.args.nonsus_urls
if not ptn or not ptn.search(self.vpath): if not ptn or not ptn.search(self.vpath):
self.log(t.format(self.uname, self.vpath)) self.log(t % (self.uname, "/" + self.vpath))
return self.tx_404(True) return self.tx_404(True)
@@ -1234,6 +1277,9 @@ class HttpCli(object):
if "dls" in self.uparam: if "dls" in self.uparam:
return self.tx_dls() return self.tx_dls()
if "ru" in self.uparam:
return self.tx_rups()
if "h" in self.uparam: if "h" in self.uparam:
return self.tx_mounts() return self.tx_mounts()
@@ -1296,8 +1342,8 @@ class HttpCli(object):
pw = self.ouparam.get("pw") pw = self.ouparam.get("pw")
if pw: if pw:
q_pw = "?pw=%s" % (pw,) q_pw = "?pw=%s" % (html_escape(pw, True, True),)
a_pw = "&pw=%s" % (pw,) a_pw = "&pw=%s" % (html_escape(pw, True, True),)
for i in hits: for i in hits:
i["rp"] += a_pw if "?" in i["rp"] else q_pw i["rp"] += a_pw if "?" in i["rp"] else q_pw
else: else:
@@ -1384,6 +1430,8 @@ class HttpCli(object):
def handle_propfind(self) -> bool: def handle_propfind(self) -> bool:
if self.do_log: if self.do_log:
self.log("PFIND %s @%s" % (self.req, self.uname)) self.log("PFIND %s @%s" % (self.req, self.uname))
if "%" in self.req:
self.log(" `-- %r" % (self.vpath,))
if self.args.no_dav: if self.args.no_dav:
raise Pebkac(405, "WebDAV is disabled in server config") raise Pebkac(405, "WebDAV is disabled in server config")
@@ -1434,14 +1482,14 @@ class HttpCli(object):
if depth == "infinity": if depth == "infinity":
# allow depth:0 from unmapped root, but require read-axs otherwise # allow depth:0 from unmapped root, but require read-axs otherwise
if not self.can_read and (self.vpath or self.asrv.vfs.realpath): if not self.can_read and (self.vpath or self.asrv.vfs.realpath):
t = "depth:infinity requires read-access in /%s" t = "depth:infinity requires read-access in %r"
t = t % (self.vpath,) t = t % ("/" + self.vpath,)
self.log(t, 3) self.log(t, 3)
raise Pebkac(401, t) raise Pebkac(401, t)
if not stat.S_ISDIR(topdir["st"].st_mode): if not stat.S_ISDIR(topdir["st"].st_mode):
t = "depth:infinity can only be used on folders; /%s is 0o%o" t = "depth:infinity can only be used on folders; %r is 0o%o"
t = t % (self.vpath, topdir["st"]) t = t % ("/" + self.vpath, topdir["st"])
self.log(t, 3) self.log(t, 3)
raise Pebkac(400, t) raise Pebkac(400, t)
@@ -1467,7 +1515,7 @@ class HttpCli(object):
elif depth == "0" or not stat.S_ISDIR(st.st_mode): elif depth == "0" or not stat.S_ISDIR(st.st_mode):
# propfind on a file; return as topdir # propfind on a file; return as topdir
if not self.can_read and not self.can_get: if not self.can_read and not self.can_get:
self.log("inaccessible: [%s]" % (self.vpath,)) self.log("inaccessible: %r" % ("/" + self.vpath,))
raise Pebkac(401, "authenticate") raise Pebkac(401, "authenticate")
elif depth == "1": elif depth == "1":
@@ -1494,7 +1542,7 @@ class HttpCli(object):
raise Pebkac(412, t.format(depth, t2)) raise Pebkac(412, t.format(depth, t2))
if not self.can_read and not self.can_write and not fgen: if not self.can_read and not self.can_write and not fgen:
self.log("inaccessible: [%s]" % (self.vpath,)) self.log("inaccessible: %r" % ("/" + self.vpath,))
raise Pebkac(401, "authenticate") raise Pebkac(401, "authenticate")
fgen = itertools.chain([topdir], fgen) fgen = itertools.chain([topdir], fgen)
@@ -1565,12 +1613,14 @@ class HttpCli(object):
def handle_proppatch(self) -> bool: def handle_proppatch(self) -> bool:
if self.do_log: if self.do_log:
self.log("PPATCH %s @%s" % (self.req, self.uname)) self.log("PPATCH %s @%s" % (self.req, self.uname))
if "%" in self.req:
self.log(" `-- %r" % (self.vpath,))
if self.args.no_dav: if self.args.no_dav:
raise Pebkac(405, "WebDAV is disabled in server config") raise Pebkac(405, "WebDAV is disabled in server config")
if not self.can_write: if not self.can_write:
self.log("{} tried to proppatch [{}]".format(self.uname, self.vpath)) self.log("%s tried to proppatch %r" % (self.uname, "/" + self.vpath))
raise Pebkac(401, "authenticate") raise Pebkac(401, "authenticate")
from xml.etree import ElementTree as ET from xml.etree import ElementTree as ET
@@ -1620,13 +1670,15 @@ class HttpCli(object):
def handle_lock(self) -> bool: def handle_lock(self) -> bool:
if self.do_log: if self.do_log:
self.log("LOCK %s @%s" % (self.req, self.uname)) self.log("LOCK %s @%s" % (self.req, self.uname))
if "%" in self.req:
self.log(" `-- %r" % (self.vpath,))
if self.args.no_dav: if self.args.no_dav:
raise Pebkac(405, "WebDAV is disabled in server config") raise Pebkac(405, "WebDAV is disabled in server config")
# win7+ deadlocks if we say no; just smile and nod # win7+ deadlocks if we say no; just smile and nod
if not self.can_write and "Microsoft-WebDAV" not in self.ua: if not self.can_write and "Microsoft-WebDAV" not in self.ua:
self.log("{} tried to lock [{}]".format(self.uname, self.vpath)) self.log("%s tried to lock %r" % (self.uname, "/" + self.vpath))
raise Pebkac(401, "authenticate") raise Pebkac(401, "authenticate")
from xml.etree import ElementTree as ET from xml.etree import ElementTree as ET
@@ -1655,7 +1707,7 @@ class HttpCli(object):
token = str(uuid.uuid4()) token = str(uuid.uuid4())
if not lk.find(r"./{DAV:}depth"): if lk.find(r"./{DAV:}depth") is None:
depth = self.headers.get("depth", "infinity") depth = self.headers.get("depth", "infinity")
lk.append(mktnod("D:depth", depth)) lk.append(mktnod("D:depth", depth))
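The `is None` change in handle_lock fixes a classic ElementTree pitfall: an Element with no children is falsy, so `if not lk.find(...)` also fires when the client did send a (childless) depth node, appending a duplicate. Quick demonstration:

    from xml.etree import ElementTree as ET

    root = ET.fromstring("<lock><depth/></lock>")
    el = root.find("depth")
    print(el is None)  # False: the element exists
    print(bool(el))    # False (with a DeprecationWarning): no children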
@@ -1685,12 +1737,14 @@ class HttpCli(object):
def handle_unlock(self) -> bool: def handle_unlock(self) -> bool:
if self.do_log: if self.do_log:
self.log("UNLOCK %s @%s" % (self.req, self.uname)) self.log("UNLOCK %s @%s" % (self.req, self.uname))
if "%" in self.req:
self.log(" `-- %r" % (self.vpath,))
if self.args.no_dav: if self.args.no_dav:
raise Pebkac(405, "WebDAV is disabled in server config") raise Pebkac(405, "WebDAV is disabled in server config")
if not self.can_write and "Microsoft-WebDAV" not in self.ua: if not self.can_write and "Microsoft-WebDAV" not in self.ua:
self.log("{} tried to lock [{}]".format(self.uname, self.vpath)) self.log("%s tried to lock %r" % (self.uname, "/" + self.vpath))
raise Pebkac(401, "authenticate") raise Pebkac(401, "authenticate")
self.send_headers(None, 204) self.send_headers(None, 204)
@@ -1702,6 +1756,8 @@ class HttpCli(object):
if self.do_log: if self.do_log:
self.log("MKCOL %s @%s" % (self.req, self.uname)) self.log("MKCOL %s @%s" % (self.req, self.uname))
if "%" in self.req:
self.log(" `-- %r" % (self.vpath,))
try: try:
return self._mkdir(self.vpath, True) return self._mkdir(self.vpath, True)
@@ -1753,6 +1809,8 @@ class HttpCli(object):
def handle_options(self) -> bool: def handle_options(self) -> bool:
if self.do_log: if self.do_log:
self.log("OPTIONS %s @%s" % (self.req, self.uname)) self.log("OPTIONS %s @%s" % (self.req, self.uname))
if "%" in self.req:
self.log(" `-- %r" % (self.vpath,))
oh = self.out_headers oh = self.out_headers
oh["Allow"] = ", ".join(self.conn.hsrv.mallow) oh["Allow"] = ", ".join(self.conn.hsrv.mallow)
@@ -1768,10 +1826,14 @@ class HttpCli(object):
def handle_delete(self) -> bool: def handle_delete(self) -> bool:
self.log("DELETE %s @%s" % (self.req, self.uname)) self.log("DELETE %s @%s" % (self.req, self.uname))
if "%" in self.req:
self.log(" `-- %r" % (self.vpath,))
return self.handle_rm([]) return self.handle_rm([])
def handle_put(self) -> bool: def handle_put(self) -> bool:
self.log("PUT %s @%s" % (self.req, self.uname)) self.log("PUT %s @%s" % (self.req, self.uname))
if "%" in self.req:
self.log(" `-- %r" % (self.vpath,))
if not self.can_write: if not self.can_write:
t = "user %s does not have write-access under /%s" t = "user %s does not have write-access under /%s"
@@ -1790,6 +1852,8 @@ class HttpCli(object):
def handle_post(self) -> bool: def handle_post(self) -> bool:
self.log("POST %s @%s" % (self.req, self.uname)) self.log("POST %s @%s" % (self.req, self.uname))
if "%" in self.req:
self.log(" `-- %r" % (self.vpath,))
if self.headers.get("expect", "").lower() == "100-continue": if self.headers.get("expect", "").lower() == "100-continue":
try: try:
@@ -1833,8 +1897,8 @@ class HttpCli(object):
return self.handle_stash(False) return self.handle_stash(False)
if "save" in opt: if "save" in opt:
post_sz, _, _, _, path, _ = self.dump_to_file(False) post_sz, _, _, _, _, path, _ = self.dump_to_file(False)
self.log("urlform: {} bytes, {}".format(post_sz, path)) self.log("urlform: %d bytes, %r" % (post_sz, path))
elif "print" in opt: elif "print" in opt:
reader, _ = self.get_body_reader() reader, _ = self.get_body_reader()
buf = b"" buf = b""
@@ -1845,8 +1909,8 @@ class HttpCli(object):
if buf: if buf:
orig = buf.decode("utf-8", "replace") orig = buf.decode("utf-8", "replace")
t = "urlform_raw {} @ {}\n {}\n" t = "urlform_raw %d @ %r\n %r\n"
self.log(t.format(len(orig), self.vpath, orig)) self.log(t % (len(orig), "/" + self.vpath, orig))
try: try:
zb = unquote(buf.replace(b"+", b" ")) zb = unquote(buf.replace(b"+", b" "))
plain = zb.decode("utf-8", "replace") plain = zb.decode("utf-8", "replace")
@@ -1872,8 +1936,8 @@ class HttpCli(object):
plain, plain,
) )
t = "urlform_dec {} @ {}\n {}\n" t = "urlform_dec %d @ %r\n %r\n"
self.log(t.format(len(plain), self.vpath, plain)) self.log(t % (len(plain), "/" + self.vpath, plain))
except Exception as ex: except Exception as ex:
self.log(repr(ex)) self.log(repr(ex))
@@ -1914,11 +1978,11 @@ class HttpCli(object):
else: else:
return read_socket(self.sr, bufsz, remains), remains return read_socket(self.sr, bufsz, remains), remains
def dump_to_file(self, is_put: bool) -> tuple[int, str, str, int, str, str]: def dump_to_file(self, is_put: bool) -> tuple[int, str, str, str, int, str, str]:
# post_sz, sha_hex, sha_b64, remains, path, url # post_sz, halg, sha_hex, sha_b64, remains, path, url
reader, remains = self.get_body_reader() reader, remains = self.get_body_reader()
vfs, rem = self.asrv.vfs.get(self.vpath, self.uname, False, True) vfs, rem = self.asrv.vfs.get(self.vpath, self.uname, False, True)
rnd, _, lifetime, xbu, xau = self.upload_flags(vfs) rnd, lifetime, xbu, xau = self.upload_flags(vfs)
lim = vfs.get_dbv(rem)[0].lim lim = vfs.get_dbv(rem)[0].lim
fdir = vfs.canonical(rem) fdir = vfs.canonical(rem)
if lim: if lim:
@@ -2068,12 +2132,14 @@ class HttpCli(object):
# small toctou, but better than clobbering a hardlink # small toctou, but better than clobbering a hardlink
wunlink(self.log, path, vfs.flags) wunlink(self.log, path, vfs.flags)
halg = "sha512"
hasher = None hasher = None
copier = hashcopy copier = hashcopy
if "ck" in self.ouparam or "ck" in self.headers: if "ck" in self.ouparam or "ck" in self.headers:
zs = self.ouparam.get("ck") or self.headers.get("ck") or "" halg = zs = self.ouparam.get("ck") or self.headers.get("ck") or ""
if not zs or zs == "no": if not zs or zs == "no":
copier = justcopy copier = justcopy
halg = ""
elif zs == "md5": elif zs == "md5":
hasher = hashlib.md5(**USED4SEC) hasher = hashlib.md5(**USED4SEC)
elif zs == "sha1": elif zs == "sha1":
@@ -2107,7 +2173,7 @@ class HttpCli(object):
raise raise
if self.args.nw: if self.args.nw:
return post_sz, sha_hex, sha_b64, remains, path, "" return post_sz, halg, sha_hex, sha_b64, remains, path, ""
at = mt = time.time() - lifetime at = mt = time.time() - lifetime
cli_mt = self.headers.get("x-oc-mtime") cli_mt = self.headers.get("x-oc-mtime")
@@ -2123,7 +2189,7 @@ class HttpCli(object):
try: try:
ext = self.conn.hsrv.magician.ext(path) ext = self.conn.hsrv.magician.ext(path)
except Exception as ex: except Exception as ex:
self.log("filetype detection failed for [{}]: {}".format(path, ex), 6) self.log("filetype detection failed for %r: %s" % (path, ex), 6)
ext = None ext = None
if ext: if ext:
@@ -2218,19 +2284,30 @@ class HttpCli(object):
self.args.RS + vpath + vsuf, self.args.RS + vpath + vsuf,
) )
return post_sz, sha_hex, sha_b64, remains, path, url return post_sz, halg, sha_hex, sha_b64, remains, path, url
def handle_stash(self, is_put: bool) -> bool: def handle_stash(self, is_put: bool) -> bool:
post_sz, sha_hex, sha_b64, remains, path, url = self.dump_to_file(is_put) post_sz, halg, sha_hex, sha_b64, remains, path, url = self.dump_to_file(is_put)
spd = self._spd(post_sz) spd = self._spd(post_sz)
t = "{} wrote {}/{} bytes to {} # {}" t = "%s wrote %d/%d bytes to %r # %s"
self.log(t.format(spd, post_sz, remains, path, sha_b64[:28])) # 21 self.log(t % (spd, post_sz, remains, path, sha_b64[:28])) # 21
ac = self.uparam.get( mime = "text/plain; charset=utf-8"
"want", self.headers.get("accept", "").lower().split(";")[-1] ac = self.uparam.get("want") or self.headers.get("accept") or ""
) if ac:
ac = ac.split(";", 1)[0].lower()
if ac == "application/json":
ac = "json"
if ac == "url": if ac == "url":
t = url t = url
elif ac == "json" or "j" in self.uparam:
jmsg = {"fileurl": url, "filesz": post_sz}
if halg:
jmsg[halg] = sha_hex[:56]
jmsg["sha_b64"] = sha_b64
mime = "application/json"
t = json.dumps(jmsg, indent=2, sort_keys=True)
else: else:
t = "{}\n{}\n{}\n{}\n".format(post_sz, sha_b64, sha_hex[:56], url) t = "{}\n{}\n{}\n{}\n".format(post_sz, sha_b64, sha_hex[:56], url)
@@ -2240,7 +2317,7 @@ class HttpCli(object):
h["X-OC-MTime"] = "accepted" h["X-OC-MTime"] = "accepted"
t = "" # some webdav clients expect/prefer this t = "" # some webdav clients expect/prefer this
self.reply(t.encode("utf-8"), 201, headers=h) self.reply(t.encode("utf-8", "replace"), 201, mime=mime, headers=h)
return True return True
def bakflip( def bakflip(
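Taken together with the `?ck` hunk above, a raw PUT can now request a JSON receipt via `?j`, `?want=json`, or a plain `Accept: application/json` header, with the checksum field named after the chosen algorithm. A client-side sketch; the server address and filename are assumptions, the response keys come from the hunk:

    import json, urllib.request

    req = urllib.request.Request(
        "http://127.0.0.1:3923/inc/hello.txt?ck=md5",  # ?ck picks the checksum
        data=b"hello world\n",
        method="PUT",
        headers={"Accept": "application/json"},  # or append ?j / ?want=json
    )
    with urllib.request.urlopen(req) as r:
        rsp = json.load(r)

    # expected keys: fileurl, filesz, sha_b64, plus one named after
    # the requested algorithm ("md5" here)
    print(rsp["fileurl"], rsp["filesz"], rsp.get("md5"))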
@@ -2254,7 +2331,7 @@ class HttpCli(object):
flags: dict[str, Any], flags: dict[str, Any],
) -> None: ) -> None:
now = time.time() now = time.time()
t = "bad-chunk: %.3f %s %s %d %s %s %s" t = "bad-chunk: %.3f %s %s %d %s %s %r"
t = t % (now, bad_sha, good_sha, ofs, self.ip, self.uname, ap) t = t % (now, bad_sha, good_sha, ofs, self.ip, self.uname, ap)
self.log(t, 5) self.log(t, 5)
@@ -2394,7 +2471,7 @@ class HttpCli(object):
body = json.loads(json_buf.decode(enc, "replace")) body = json.loads(json_buf.decode(enc, "replace"))
try: try:
zds = {k: v for k, v in body.items()} zds = {k: v for k, v in body.items()}
zds["hash"] = "%d chunks" % (len(body["hash"])) zds["hash"] = "%d chunks" % (len(body["hash"]),)
except: except:
zds = body zds = body
t = "POST len=%d type=%s ip=%s user=%s req=%r json=%s" t = "POST len=%d type=%s ip=%s user=%s req=%r json=%s"
@@ -2438,7 +2515,7 @@ class HttpCli(object):
if not bos.path.isdir(dst): if not bos.path.isdir(dst):
bos.makedirs(dst) bos.makedirs(dst)
except OSError as ex: except OSError as ex:
self.log("makedirs failed [{}]".format(dst)) self.log("makedirs failed %r" % (dst,))
if not bos.path.isdir(dst): if not bos.path.isdir(dst):
if ex.errno == errno.EACCES: if ex.errno == errno.EACCES:
raise Pebkac(500, "the server OS denied write-access") raise Pebkac(500, "the server OS denied write-access")
@@ -2463,7 +2540,7 @@ class HttpCli(object):
# strip common suffix (uploader's folder structure) # strip common suffix (uploader's folder structure)
vp_req, vp_vfs = vroots(self.vpath, vjoin(dbv.vpath, vrem)) vp_req, vp_vfs = vroots(self.vpath, vjoin(dbv.vpath, vrem))
if not ret["purl"].startswith(vp_vfs): if not ret["purl"].startswith(vp_vfs):
t = "share-mapping failed; req=[%s] dbv=[%s] vrem=[%s] n1=[%s] n2=[%s] purl=[%s]" t = "share-mapping failed; req=%r dbv=%r vrem=%r n1=%r n2=%r purl=%r"
zt = (self.vpath, dbv.vpath, vrem, vp_req, vp_vfs, ret["purl"]) zt = (self.vpath, dbv.vpath, vrem, vp_req, vp_vfs, ret["purl"])
raise Pebkac(500, t % zt) raise Pebkac(500, t % zt)
ret["purl"] = vp_req + ret["purl"][len(vp_vfs) :] ret["purl"] = vp_req + ret["purl"][len(vp_vfs) :]
@@ -2512,13 +2589,13 @@ class HttpCli(object):
# search by query params # search by query params
q = body["q"] q = body["q"]
n = body.get("n", self.args.srch_hits) n = body.get("n", self.args.srch_hits)
self.log("qj: {} |{}|".format(q, n)) self.log("qj: %r |%d|" % (q, n))
hits, taglist, trunc = idx.search(self.uname, vols, q, n) hits, taglist, trunc = idx.search(self.uname, vols, q, n)
msg = len(hits) msg = len(hits)
idx.p_end = time.time() idx.p_end = time.time()
idx.p_dur = idx.p_end - t0 idx.p_dur = idx.p_end - t0
self.log("q#: {} ({:.2f}s)".format(msg, idx.p_dur)) self.log("q#: %r (%.2fs)" % (msg, idx.p_dur))
order = [] order = []
for t in self.args.mte: for t in self.args.mte:
@@ -2629,7 +2706,7 @@ class HttpCli(object):
t = "your client is sending %d bytes which is too much (server expected %d bytes at most)" t = "your client is sending %d bytes which is too much (server expected %d bytes at most)"
raise Pebkac(400, t % (remains, maxsize)) raise Pebkac(400, t % (remains, maxsize))
t = "writing %s %s+%d #%d+%d %s" t = "writing %r %s+%d #%d+%d %s"
chunkno = cstart0[0] // chunksize chunkno = cstart0[0] // chunksize
zs = " ".join([chashes[0][:15]] + [x[:9] for x in chashes[1:]]) zs = " ".join([chashes[0][:15]] + [x[:9] for x in chashes[1:]])
self.log(t % (path, cstart0, remains, chunkno, len(chashes), zs)) self.log(t % (path, cstart0, remains, chunkno, len(chashes), zs))
@@ -2741,7 +2818,7 @@ class HttpCli(object):
cinf = self.headers.get("x-up2k-stat", "") cinf = self.headers.get("x-up2k-stat", "")
spd = self._spd(postsize) spd = self._spd(postsize)
self.log("{:70} thank {}".format(spd, cinf)) self.log("%70s thank %r" % (spd, cinf))
self.reply(b"thank") self.reply(b"thank")
return True return True
@@ -2823,7 +2900,7 @@ class HttpCli(object):
logpwd = "%" + ub64enc(zb[:12]).decode("ascii") logpwd = "%" + ub64enc(zb[:12]).decode("ascii")
if pwd != "x": if pwd != "x":
self.log("invalid password: {}".format(logpwd), 3) self.log("invalid password: %r" % (logpwd,), 3)
self.cbonk(self.conn.hsrv.gpwd, pwd, "pw", "invalid passwords") self.cbonk(self.conn.hsrv.gpwd, pwd, "pw", "invalid passwords")
msg = "naw dude" msg = "naw dude"
@@ -2859,7 +2936,7 @@ class HttpCli(object):
rem = sanitize_vpath(rem, "/") rem = sanitize_vpath(rem, "/")
fn = vfs.canonical(rem) fn = vfs.canonical(rem)
if not fn.startswith(vfs.realpath): if not fn.startswith(vfs.realpath):
self.log("invalid mkdir [%s] [%s]" % (self.gctx, vpath), 1) self.log("invalid mkdir %r %r" % (self.gctx, vpath), 1)
raise Pebkac(422) raise Pebkac(422)
if not nullwrite: if not nullwrite:
@@ -2919,7 +2996,7 @@ class HttpCli(object):
self.redirect(vpath, "?edit") self.redirect(vpath, "?edit")
return True return True
def upload_flags(self, vfs: VFS) -> tuple[int, bool, int, list[str], list[str]]: def upload_flags(self, vfs: VFS) -> tuple[int, int, list[str], list[str]]:
if self.args.nw: if self.args.nw:
rnd = 0 rnd = 0
else: else:
@@ -2927,10 +3004,6 @@ class HttpCli(object):
if vfs.flags.get("rand"): # force-enable if vfs.flags.get("rand"): # force-enable
rnd = max(rnd, vfs.flags["nrand"]) rnd = max(rnd, vfs.flags["nrand"])
ac = self.uparam.get(
"want", self.headers.get("accept", "").lower().split(";")[-1]
)
want_url = ac == "url"
zs = self.uparam.get("life", self.headers.get("life", "")) zs = self.uparam.get("life", self.headers.get("life", ""))
if zs: if zs:
vlife = vfs.flags.get("lifetime") or 0 vlife = vfs.flags.get("lifetime") or 0
@@ -2940,7 +3013,6 @@ class HttpCli(object):
return ( return (
rnd, rnd,
want_url,
lifetime, lifetime,
vfs.flags.get("xbu") or [], vfs.flags.get("xbu") or [],
vfs.flags.get("xau") or [], vfs.flags.get("xau") or [],
@@ -2993,7 +3065,14 @@ class HttpCli(object):
if not nullwrite: if not nullwrite:
bos.makedirs(fdir_base) bos.makedirs(fdir_base)
rnd, want_url, lifetime, xbu, xau = self.upload_flags(vfs) rnd, lifetime, xbu, xau = self.upload_flags(vfs)
zs = self.uparam.get("want") or self.headers.get("accept") or ""
if zs:
zs = zs.split(";", 1)[0].lower()
if zs == "application/json":
zs = "json"
want_url = zs == "url"
want_json = zs == "json" or "j" in self.uparam
files: list[tuple[int, str, str, str, str, str]] = [] files: list[tuple[int, str, str, str, str, str]] = []
# sz, sha_hex, sha_b64, p_file, fname, abspath # sz, sha_hex, sha_b64, p_file, fname, abspath
@@ -3025,9 +3104,9 @@ class HttpCli(object):
elif bos.path.exists(abspath): elif bos.path.exists(abspath):
try: try:
wunlink(self.log, abspath, vfs.flags) wunlink(self.log, abspath, vfs.flags)
t = "overwriting file with new upload: %s" t = "overwriting file with new upload: %r"
except: except:
t = "toctou while deleting for ?replace: %s" t = "toctou while deleting for ?replace: %r"
self.log(t % (abspath,)) self.log(t % (abspath,))
else: else:
open_args = {} open_args = {}
@@ -3110,7 +3189,7 @@ class HttpCli(object):
f, tnam = ren_open(tnam, "wb", self.args.iobuf, **open_args) f, tnam = ren_open(tnam, "wb", self.args.iobuf, **open_args)
try: try:
tabspath = os.path.join(fdir, tnam) tabspath = os.path.join(fdir, tnam)
self.log("writing to {}".format(tabspath)) self.log("writing to %r" % (tabspath,))
sz, sha_hex, sha_b64 = copier( sz, sha_hex, sha_b64 = copier(
p_data, f, hasher, max_sz, self.args.s_wr_slp p_data, f, hasher, max_sz, self.args.s_wr_slp
) )
@@ -3295,7 +3374,7 @@ class HttpCli(object):
jmsg["files"].append(jpart) jmsg["files"].append(jpart)
vspd = self._spd(sz_total, False) vspd = self._spd(sz_total, False)
self.log("{} {}".format(vspd, msg)) self.log("%s %r" % (vspd, msg))
suf = "" suf = ""
if not nullwrite and self.args.write_uplog: if not nullwrite and self.args.write_uplog:
@@ -3315,7 +3394,9 @@ class HttpCli(object):
msg += "\n" + errmsg msg += "\n" + errmsg
self.reply(msg.encode("utf-8", "replace"), status=sc) self.reply(msg.encode("utf-8", "replace"), status=sc)
elif "j" in self.uparam: elif want_json:
if len(jmsg["files"]) == 1:
jmsg["fileurl"] = jmsg["files"][0]["url"]
jtxt = json.dumps(jmsg, indent=2, sort_keys=True).encode("utf-8", "replace") jtxt = json.dumps(jmsg, indent=2, sort_keys=True).encode("utf-8", "replace")
self.reply(jtxt, mime="application/json", status=sc) self.reply(jtxt, mime="application/json", status=sc)
else: else:
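For multipart uploads, the JSON reply now mirrors `files[0].url` into a top-level `fileurl` whenever exactly one file was posted, so clients restricted to top-level properties need no array indexing. A consumer-side sketch (response shape from the hunk, everything else assumed):

    # given a parsed upload response, prefer the new shortcut but
    # stay compatible with multi-file uploads
    def first_url(jmsg: dict) -> str:
        return jmsg.get("fileurl") or jmsg["files"][0]["url"]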
@@ -3558,7 +3639,7 @@ class HttpCli(object):
if req == zs: if req == zs:
return True return True
t = "wrong dirkey, want %s, got %s\n vp: %s\n ap: %s" t = "wrong dirkey, want %s, got %s\n vp: %r\n ap: %r"
self.log(t % (zs, req, self.req, ap), 6) self.log(t % (zs, req, self.req, ap), 6)
return False return False
@@ -3586,7 +3667,7 @@ class HttpCli(object):
if req == zs: if req == zs:
return True return True
t = "wrong filekey, want %s, got %s\n vp: %s\n ap: %s" t = "wrong filekey, want %s, got %s\n vp: %r\n ap: %r"
self.log(t % (zs, req, self.req, ap), 6) self.log(t % (zs, req, self.req, ap), 6)
return False return False
@@ -3634,6 +3715,7 @@ class HttpCli(object):
return logues, readmes return logues, readmes
def _expand(self, txt: str, phs: list[str]) -> str: def _expand(self, txt: str, phs: list[str]) -> str:
ptn_hsafe = RE_HSAFE
for ph in phs: for ph in phs:
if ph.startswith("hdr."): if ph.startswith("hdr."):
sv = str(self.headers.get(ph[4:], "")) sv = str(self.headers.get(ph[4:], ""))
@@ -3648,10 +3730,10 @@ class HttpCli(object):
elif ph == "srv.htime": elif ph == "srv.htime":
sv = datetime.now(UTC).strftime("%Y-%m-%d, %H:%M:%S") sv = datetime.now(UTC).strftime("%Y-%m-%d, %H:%M:%S")
else: else:
self.log("unknown placeholder in server config: [%s]" % (ph), 3) self.log("unknown placeholder in server config: [%s]" % (ph,), 3)
continue continue
sv = self.conn.hsrv.ptn_hsafe.sub("_", sv) sv = ptn_hsafe.sub("_", sv)
txt = txt.replace("{{%s}}" % (ph,), sv) txt = txt.replace("{{%s}}" % (ph,), sv)
return txt return txt
@@ -3805,7 +3887,7 @@ class HttpCli(object):
self.pipes.set(req_path, job) self.pipes.set(req_path, job)
except Exception as ex: except Exception as ex:
if getattr(ex, "errno", 0) != errno.ENOENT: if getattr(ex, "errno", 0) != errno.ENOENT:
self.log("will not pipe [%s]; %s" % (ap_data, ex), 6) self.log("will not pipe %r; %s" % (ap_data, ex), 6)
ptop = None ptop = None
# #
@@ -4095,7 +4177,7 @@ class HttpCli(object):
if lower >= data_end: if lower >= data_end:
if data_end: if data_end:
t = "pipe: uploader is too slow; aborting download at %.2f MiB" t = "pipe: uploader is too slow; aborting download at %.2f MiB"
self.log(t % (data_end / M)) self.log(t % (data_end / M,))
raise Pebkac(416, "uploader is too slow") raise Pebkac(416, "uploader is too slow")
raise Pebkac(416, "no data available yet; please retry in a bit") raise Pebkac(416, "no data available yet; please retry in a bit")
@@ -4239,7 +4321,7 @@ class HttpCli(object):
cdis = "attachment; filename=\"{}.{}\"; filename*=UTF-8''{}.{}" cdis = "attachment; filename=\"{}.{}\"; filename*=UTF-8''{}.{}"
cdis = cdis.format(afn, ext, ufn, ext) cdis = cdis.format(afn, ext, ufn, ext)
self.log(cdis) self.log(repr(cdis))
self.send_headers(None, mime=mime, headers={"Content-Disposition": cdis}) self.send_headers(None, mime=mime, headers={"Content-Disposition": cdis})
fgen = vn.zipgen( fgen = vn.zipgen(
@@ -4486,12 +4568,12 @@ class HttpCli(object):
else self.conn.hsrv.nm.map(self.ip) or host else self.conn.hsrv.nm.map(self.ip) or host
) )
# safer than html_escape/quotep since this avoids both XSS and shell-stuff # safer than html_escape/quotep since this avoids both XSS and shell-stuff
pw = re.sub(r"[<>&$?`\"']", "_", self.pw or "pw") pw = re.sub(r"[<>&$?`\"']", "_", self.pw or "hunter2")
vp = re.sub(r"[<>&$?`\"']", "_", self.uparam["hc"] or "").lstrip("/") vp = re.sub(r"[<>&$?`\"']", "_", self.uparam["hc"] or "").lstrip("/")
pw = pw.replace(" ", "%20") pw = pw.replace(" ", "%20")
vp = vp.replace(" ", "%20") vp = vp.replace(" ", "%20")
if pw in self.asrv.sesa: if pw in self.asrv.sesa:
pw = "pwd" pw = "hunter2"
html = self.j2s( html = self.j2s(
"svcs", "svcs",
@@ -4902,9 +4984,9 @@ class HttpCli(object):
raise Pebkac(500, "sqlite3 not found on server; unpost is disabled") raise Pebkac(500, "sqlite3 not found on server; unpost is disabled")
raise Pebkac(500, "server busy, cannot unpost; please retry in a bit") raise Pebkac(500, "server busy, cannot unpost; please retry in a bit")
filt = self.uparam.get("filter") or "" zs = self.uparam.get("filter") or ""
lm = "ups [{}]".format(filt) filt = re.compile(zs, re.I) if zs else None
self.log(lm) lm = "ups %r" % (zs,)
if self.args.shr and self.vpath.startswith(self.args.shr1): if self.args.shr and self.vpath.startswith(self.args.shr1):
shr_dbv, shr_vrem = self.vn.get_dbv(self.rem) shr_dbv, shr_vrem = self.vn.get_dbv(self.rem)
@@ -4945,13 +5027,18 @@ class HttpCli(object):
nfk, fk_alg = fk_vols.get(vol) or (0, 0) nfk, fk_alg = fk_vols.get(vol) or (0, 0)
q = "select sz, rd, fn, at from up where ip=? and at>?" n = 2000
q = "select sz, rd, fn, at from up where ip=? and at>? order by at desc"
for sz, rd, fn, at in cur.execute(q, (self.ip, lim)): for sz, rd, fn, at in cur.execute(q, (self.ip, lim)):
vp = "/" + "/".join(x for x in [vol.vpath, rd, fn] if x) vp = "/" + "/".join(x for x in [vol.vpath, rd, fn] if x)
if filt and filt not in vp: if filt and not filt.search(vp):
continue continue
rv = {"vp": quotep(vp), "sz": sz, "at": at, "nfk": nfk} n -= 1
if not n:
break
rv = {"vp": vp, "sz": sz, "at": at, "nfk": nfk}
if nfk: if nfk:
rv["ap"] = vol.canonical(vjoin(rd, fn)) rv["ap"] = vol.canonical(vjoin(rd, fn))
rv["fk_alg"] = fk_alg rv["fk_alg"] = fk_alg
@@ -4962,8 +5049,12 @@ class HttpCli(object):
ret = ret[:2000] ret = ret[:2000]
ret.sort(key=lambda x: x["at"], reverse=True) # type: ignore ret.sort(key=lambda x: x["at"], reverse=True) # type: ignore
n = 0
for rv in ret[:11000]: if len(ret) > 2000:
ret = ret[:2000]
for rv in ret:
rv["vp"] = quotep(rv["vp"])
nfk = rv.pop("nfk") nfk = rv.pop("nfk")
if not nfk: if not nfk:
continue continue
@@ -4980,12 +5071,6 @@ class HttpCli(object):
) )
rv["vp"] += "?k=" + fk[:nfk] rv["vp"] += "?k=" + fk[:nfk]
n += 1
if n > 2000:
break
ret = ret[:2000]
if shr_dbv: if shr_dbv:
# translate vpaths from share-target to share-url # translate vpaths from share-target to share-url
# to satisfy access checks # to satisfy access checks
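The unpost listing now treats `?filter=` as a case-insensitive regex matched against each vpath (previously a plain substring test) and trims to 2000 rows while scanning newest-first rather than after collecting everything. The matching rule, with made-up paths:

    import re

    filt = re.compile(r"\.iso$", re.I)  # hypothetical ?filter= value
    vps = ["/inc/debian-12.ISO", "/inc/notes.txt"]
    hits = [vp for vp in vps if filt.search(vp)]  # ['/inc/debian-12.ISO']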
@@ -5009,6 +5094,126 @@ class HttpCli(object):
self.reply(jtxt.encode("utf-8", "replace"), mime="application/json") self.reply(jtxt.encode("utf-8", "replace"), mime="application/json")
return True return True
def tx_rups(self) -> bool:
if self.args.no_ups_page:
raise Pebkac(500, "listing of recent uploads is disabled in server config")
idx = self.conn.get_u2idx()
if not idx or not hasattr(idx, "p_end"):
if not HAVE_SQLITE3:
raise Pebkac(500, "sqlite3 not found on server; recent-uploads n/a")
raise Pebkac(500, "server busy, cannot list recent uploads; please retry")
sfilt = self.uparam.get("filter") or ""
filt = re.compile(sfilt, re.I) if sfilt else None
lm = "ru %r" % (sfilt,)
self.log(lm)
ret: list[dict[str, Any]] = []
t0 = time.time()
allvols = [
x
for x in self.asrv.vfs.all_vols.values()
if "e2d" in x.flags and ("*" in x.axs.uread or self.uname in x.axs.uread)
]
fk_vols = {
vol: (vol.flags["fk"], 2 if "fka" in vol.flags else 1)
for vol in allvols
if "fk" in vol.flags and "*" not in vol.axs.uread
}
for vol in allvols:
cur = idx.get_cur(vol)
if not cur:
continue
nfk, fk_alg = fk_vols.get(vol) or (0, 0)
adm = "*" in vol.axs.uadmin or self.uname in vol.axs.uadmin
dots = "*" in vol.axs.udot or self.uname in vol.axs.udot
n = 1000
q = "select sz, rd, fn, ip, at from up where at>0 order by at desc"
for sz, rd, fn, ip, at in cur.execute(q):
vp = "/" + "/".join(x for x in [vol.vpath, rd, fn] if x)
if filt and not filt.search(vp):
continue
if not dots and "/." in vp:
continue
rv = {
"vp": vp,
"sz": sz,
"ip": ip,
"at": at,
"nfk": nfk,
"adm": adm,
}
if nfk:
rv["ap"] = vol.canonical(vjoin(rd, fn))
rv["fk_alg"] = fk_alg
ret.append(rv)
if len(ret) > 2000:
ret.sort(key=lambda x: x["at"], reverse=True) # type: ignore
ret = ret[:1000]
n -= 1
if not n:
break
ret.sort(key=lambda x: x["at"], reverse=True) # type: ignore
if len(ret) > 1000:
ret = ret[:1000]
for rv in ret:
rv["vp"] = quotep(rv["vp"])
nfk = rv.pop("nfk")
if not nfk:
continue
alg = rv.pop("fk_alg")
ap = rv.pop("ap")
try:
st = bos.stat(ap)
except:
continue
fk = self.gen_fk(
alg, self.args.fk_salt, ap, st.st_size, 0 if ANYWIN else st.st_ino
)
rv["vp"] += "?k=" + fk[:nfk]
if self.args.ups_when:
for rv in ret:
adm = rv.pop("adm")
if not adm:
rv["ip"] = "(You)" if rv["ip"] == self.ip else "(?)"
else:
for rv in ret:
adm = rv.pop("adm")
if not adm:
rv["ip"] = "(You)" if rv["ip"] == self.ip else "(?)"
rv["at"] = 0
if self.is_vproxied:
for v in ret:
v["vp"] = self.args.SR + v["vp"]
now = time.time()
self.log("%s #%d %.2fsec" % (lm, len(ret), now - t0))
ret2 = {"now": int(now), "filter": sfilt, "ups": ret}
jtxt = json.dumps(ret2, separators=(",\n", ": "))
if "j" in self.ouparam:
self.reply(jtxt.encode("utf-8", "replace"), mime="application/json")
return True
html = self.j2s("rups", this=self, v=jtxt)
self.reply(html.encode("utf-8"), status=200)
return True
def tx_shares(self) -> bool: def tx_shares(self) -> bool:
if self.uname == "*": if self.uname == "*":
self.loud_reply("you're not logged in") self.loud_reply("you're not logged in")
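tx_rups backs the new recent-uploads page wired up at `?ru` earlier in this diff: it walks all e2d volumes readable by the requesting user, applies the same case-insensitive `?filter=` regex as unpost, caps the result at 1000 rows, hides dotfile paths from users without dot-access, and masks uploader IPs (and timestamps, unless --ups-when is set) for non-admins. With `?j` it returns JSON instead of HTML; a fetch sketch, server address assumed:

    import json, urllib.request

    # filter=\.flac$ (urlencoded); j selects the json response
    url = "http://127.0.0.1:3923/?ru&j&filter=%5C.flac%24"
    with urllib.request.urlopen(url) as r:
        rsp = json.load(r)

    for up in rsp["ups"]:  # envelope: {"now": ..., "filter": ..., "ups": [...]}
        print(up["at"], up["ip"], up["sz"], up["vp"])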
@@ -5714,7 +5919,7 @@ class HttpCli(object):
linf = stats.get(fn) or bos.lstat(fspath) linf = stats.get(fn) or bos.lstat(fspath)
inf = bos.stat(fspath) if stat.S_ISLNK(linf.st_mode) else linf inf = bos.stat(fspath) if stat.S_ISLNK(linf.st_mode) else linf
except: except:
self.log("broken symlink: {}".format(repr(fspath))) self.log("broken symlink: %r" % (fspath,))
continue continue
is_dir = stat.S_ISDIR(inf.st_mode) is_dir = stat.S_ISDIR(inf.st_mode)
@@ -5829,8 +6034,7 @@ class HttpCli(object):
erd_efn = s3enc(idx.mem_cur, rd, fn) erd_efn = s3enc(idx.mem_cur, rd, fn)
r = icur.execute(q, erd_efn) r = icur.execute(q, erd_efn)
except: except:
t = "tag read error, {}/{}\n{}" self.log("tag read error, %r / %r\n%s" % (rd, fn, min_ex()))
self.log(t.format(rd, fn, min_ex()))
break break
tags = {k: v for k, v in r} tags = {k: v for k, v in r}
@@ -5950,10 +6154,10 @@ class HttpCli(object):
if doc.lower().endswith(".md") and "exp" in vn.flags: if doc.lower().endswith(".md") and "exp" in vn.flags:
doctxt = self._expand(doctxt, vn.flags.get("exp_md") or []) doctxt = self._expand(doctxt, vn.flags.get("exp_md") or [])
else: else:
self.log("doc 2big: [{}]".format(doc), c=6) self.log("doc 2big: %r" % (doc,), 6)
doctxt = "( size of textfile exceeds serverside limit )" doctxt = "( size of textfile exceeds serverside limit )"
else: else:
self.log("doc 404: [{}]".format(doc), c=6) self.log("doc 404: %r" % (doc,), 6)
doctxt = "( textfile not found )" doctxt = "( textfile not found )"
if doctxt is not None: if doctxt is not None:


@@ -172,15 +172,16 @@ class HttpSrv(object):
env = jinja2.Environment() env = jinja2.Environment()
env.loader = jinja2.FunctionLoader(lambda f: load_jinja2_resource(self.E, f)) env.loader = jinja2.FunctionLoader(lambda f: load_jinja2_resource(self.E, f))
jn = [ jn = [
"splash",
"shares",
"svcs",
"browser", "browser",
"browser2", "browser2",
"msg", "cf",
"md", "md",
"mde", "mde",
"cf", "msg",
"rups",
"shares",
"splash",
"svcs",
] ]
self.j2 = {x: env.get_template(x + ".html") for x in jn} self.j2 = {x: env.get_template(x + ".html") for x in jn}
self.prism = has_resource(self.E, "web/deps/prism.js.gz") self.prism = has_resource(self.E, "web/deps/prism.js.gz")
@@ -194,10 +195,6 @@ class HttpSrv(object):
self.xff_nm = build_netmap(self.args.xff_src) self.xff_nm = build_netmap(self.args.xff_src)
self.xff_lan = build_netmap("lan") self.xff_lan = build_netmap("lan")
self.ptn_cc = re.compile(r"[\x00-\x1f]")
self.ptn_hsafe = re.compile(r"[\x00-\x1f<>\"'&]")
self.uparam_cc_ok = set("doc move tree".split())
self.mallow = "GET HEAD POST PUT DELETE OPTIONS".split() self.mallow = "GET HEAD POST PUT DELETE OPTIONS".split()
if not self.args.no_dav: if not self.args.no_dav:
zs = "PROPFIND PROPPATCH LOCK UNLOCK MKCOL COPY MOVE" zs = "PROPFIND PROPPATCH LOCK UNLOCK MKCOL COPY MOVE"


@@ -194,7 +194,7 @@ def au_unpk(
except Exception as ex: except Exception as ex:
if ret: if ret:
t = "failed to decompress audio file [%s]: %r" t = "failed to decompress audio file %r: %r"
log(t % (abspath, ex)) log(t % (abspath, ex))
wunlink(log, ret, vn.flags if vn else VF_CAREFUL) wunlink(log, ret, vn.flags if vn else VF_CAREFUL)
@@ -582,7 +582,7 @@ class MTag(object):
raise Exception() raise Exception()
except Exception as ex: except Exception as ex:
if self.args.mtag_v: if self.args.mtag_v:
self.log("mutagen-err [{}] @ [{}]".format(ex, abspath), "90") self.log("mutagen-err [%s] @ %r" % (ex, abspath), "90")
return self.get_ffprobe(abspath) if self.can_ffprobe else {} return self.get_ffprobe(abspath) if self.can_ffprobe else {}
@@ -699,8 +699,8 @@ class MTag(object):
ret[tag] = zj[tag] ret[tag] = zj[tag]
except: except:
if self.args.mtag_v: if self.args.mtag_v:
t = "mtag error: tagname {}, parser {}, file {} => {}" t = "mtag error: tagname %r, parser %r, file %r => %r"
self.log(t.format(tagname, parser.bin, abspath, min_ex())) self.log(t % (tagname, parser.bin, abspath, min_ex()), 6)
if ap != abspath: if ap != abspath:
wunlink(self.log, ap, VF_CAREFUL) wunlink(self.log, ap, VF_CAREFUL)


@@ -263,7 +263,7 @@ class SMB(object):
time.time(), time.time(),
"", "",
): ):
yeet("blocked by xbu server config: " + vpath) yeet("blocked by xbu server config: %r" % (vpath,))
ret = bos.open(ap, flags, *a, mode=chmod, **ka) ret = bos.open(ap, flags, *a, mode=chmod, **ka)
if wr: if wr:


@@ -110,7 +110,7 @@ def errdesc(
report = ["copyparty failed to add the following files to the archive:", ""] report = ["copyparty failed to add the following files to the archive:", ""]
for fn, err in errors: for fn, err in errors:
report.extend([" file: {}".format(fn), "error: {}".format(err), ""]) report.extend([" file: %r" % (fn,), "error: %s" % (err,), ""])
btxt = "\r\n".join(report).encode("utf-8", "replace") btxt = "\r\n".join(report).encode("utf-8", "replace")
btxt = vol_san(list(vfs.all_vols.values()), btxt) btxt = vol_san(list(vfs.all_vols.values()), btxt)


@@ -402,17 +402,17 @@ class TcpSrv(object):
if not netdevs: if not netdevs:
continue continue
added = "nothing" add = []
removed = "nothing" rem = []
for k, v in netdevs.items(): for k, v in netdevs.items():
if k not in self.netdevs: if k not in self.netdevs:
added = "{} = {}".format(k, v) add.append("\n\033[32m added %s = %s" % (k, v))
for k, v in self.netdevs.items(): for k, v in self.netdevs.items():
if k not in netdevs: if k not in netdevs:
removed = "{} = {}".format(k, v) rem.append("\n\033[33mremoved %s = %s" % (k, v))
t = "network change detected:\n added {}\033[0;33m\nremoved {}" t = "network change detected:%s%s"
self.log("tcpsrv", t.format(added, removed), 3) self.log("tcpsrv", t % ("".join(add), "".join(rem)), 3)
self.netdevs = netdevs self.netdevs = netdevs
self._distribute_netdevs() self._distribute_netdevs()


@@ -357,7 +357,7 @@ class Tftpd(object):
time.time(), time.time(),
"", "",
): ):
yeet("blocked by xbu server config: " + vpath) yeet("blocked by xbu server config: %r" % (vpath,))
if not self.args.tftp_nols and bos.path.isdir(ap): if not self.args.tftp_nols and bos.path.isdir(ap):
return self._ls(vpath, "", 0, True) return self._ls(vpath, "", 0, True)


@@ -109,13 +109,13 @@ class ThumbCli(object):
fmt = sfmt fmt = sfmt
elif fmt[:1] == "p" and not is_au and not is_vid: elif fmt[:1] == "p" and not is_au and not is_vid:
t = "cannot thumbnail [%s]: png only allowed for waveforms" t = "cannot thumbnail %r: png only allowed for waveforms"
self.log(t % (rem), 6) self.log(t % (rem,), 6)
return None return None
histpath = self.asrv.vfs.histtab.get(ptop) histpath = self.asrv.vfs.histtab.get(ptop)
if not histpath: if not histpath:
self.log("no histpath for [{}]".format(ptop)) self.log("no histpath for %r" % (ptop,))
return None return None
tpath = thumb_path(histpath, rem, mtime, fmt, self.fmt_ffa) tpath = thumb_path(histpath, rem, mtime, fmt, self.fmt_ffa)


@@ -239,7 +239,7 @@ class ThumbSrv(object):
def get(self, ptop: str, rem: str, mtime: float, fmt: str) -> Optional[str]: def get(self, ptop: str, rem: str, mtime: float, fmt: str) -> Optional[str]:
histpath = self.asrv.vfs.histtab.get(ptop) histpath = self.asrv.vfs.histtab.get(ptop)
if not histpath: if not histpath:
self.log("no histpath for [{}]".format(ptop)) self.log("no histpath for %r" % (ptop,))
return None return None
tpath = thumb_path(histpath, rem, mtime, fmt, self.fmt_ffa) tpath = thumb_path(histpath, rem, mtime, fmt, self.fmt_ffa)
@@ -249,7 +249,7 @@ class ThumbSrv(object):
with self.mutex: with self.mutex:
try: try:
self.busy[tpath].append(cond) self.busy[tpath].append(cond)
self.log("joined waiting room for %s" % (tpath,)) self.log("joined waiting room for %r" % (tpath,))
except: except:
thdir = os.path.dirname(tpath) thdir = os.path.dirname(tpath)
bos.makedirs(os.path.join(thdir, "w")) bos.makedirs(os.path.join(thdir, "w"))
@@ -266,11 +266,11 @@ class ThumbSrv(object):
allvols = list(self.asrv.vfs.all_vols.values()) allvols = list(self.asrv.vfs.all_vols.values())
vn = next((x for x in allvols if x.realpath == ptop), None) vn = next((x for x in allvols if x.realpath == ptop), None)
if not vn: if not vn:
self.log("ptop [{}] not in {}".format(ptop, allvols), 3) self.log("ptop %r not in %s" % (ptop, allvols), 3)
vn = self.asrv.vfs.all_aps[0][1] vn = self.asrv.vfs.all_aps[0][1]
self.q.put((abspath, tpath, fmt, vn)) self.q.put((abspath, tpath, fmt, vn))
self.log("conv {} :{} \033[0m{}".format(tpath, fmt, abspath), c=6) self.log("conv %r :%s \033[0m%r" % (tpath, fmt, abspath), 6)
while not self.stopping: while not self.stopping:
with self.mutex: with self.mutex:
@@ -375,8 +375,8 @@ class ThumbSrv(object):
fun(ap_unpk, ttpath, fmt, vn) fun(ap_unpk, ttpath, fmt, vn)
break break
except Exception as ex: except Exception as ex:
msg = "{} could not create thumbnail of {}\n{}" msg = "%s could not create thumbnail of %r\n%s"
msg = msg.format(fun.__name__, abspath, min_ex()) msg = msg % (fun.__name__, abspath, min_ex())
c: Union[str, int] = 1 if "<Signals.SIG" in msg else "90" c: Union[str, int] = 1 if "<Signals.SIG" in msg else "90"
self.log(msg, c) self.log(msg, c)
if getattr(ex, "returncode", 0) != 321: if getattr(ex, "returncode", 0) != 321:


@@ -136,7 +136,7 @@ class U2idx(object):
ptop = vn.realpath ptop = vn.realpath
histpath = self.asrv.vfs.histtab.get(ptop) histpath = self.asrv.vfs.histtab.get(ptop)
if not histpath: if not histpath:
self.log("no histpath for [{}]".format(ptop)) self.log("no histpath for %r" % (ptop,))
return None return None
db_path = os.path.join(histpath, "up2k.db") db_path = os.path.join(histpath, "up2k.db")
@@ -151,7 +151,7 @@ class U2idx(object):
db = sqlite3.connect(uri, timeout=2, uri=True, check_same_thread=False) db = sqlite3.connect(uri, timeout=2, uri=True, check_same_thread=False)
cur = db.cursor() cur = db.cursor()
cur.execute('pragma table_info("up")').fetchone() cur.execute('pragma table_info("up")').fetchone()
self.log("ro: {}".format(db_path)) self.log("ro: %r" % (db_path,))
except: except:
self.log("could not open read-only: {}\n{}".format(uri, min_ex())) self.log("could not open read-only: {}\n{}".format(uri, min_ex()))
# may not fail until the pragma so unset it # may not fail until the pragma so unset it
@@ -161,7 +161,7 @@ class U2idx(object):
# on windows, this steals the write-lock from up2k.deferred_init -- # on windows, this steals the write-lock from up2k.deferred_init --
# seen on win 10.0.17763.2686, py 3.10.4, sqlite 3.37.2 # seen on win 10.0.17763.2686, py 3.10.4, sqlite 3.37.2
cur = sqlite3.connect(db_path, timeout=2, check_same_thread=False).cursor() cur = sqlite3.connect(db_path, timeout=2, check_same_thread=False).cursor()
self.log("opened {}".format(db_path)) self.log("opened %r" % (db_path,))
self.cur[ptop] = cur self.cur[ptop] = cur
return cur return cur


@@ -794,7 +794,7 @@ class Up2k(object):
if ccd != cd: if ccd != cd:
continue continue
self.log("xiu: {}# {}".format(len(wrfs), cmd)) self.log("xiu: %d# %r" % (len(wrfs), cmd))
runihook(self.log, cmd, vol, ups) runihook(self.log, cmd, vol, ups)
def _vis_job_progress(self, job: dict[str, Any]) -> str: def _vis_job_progress(self, job: dict[str, Any]) -> str:
@@ -856,9 +856,9 @@ class Up2k(object):
self.iacct = self.asrv.iacct self.iacct = self.asrv.iacct
self.grps = self.asrv.grps self.grps = self.asrv.grps
have_e2d = self.args.idp_h_usr or self.args.chpw or self.args.shr
vols = list(all_vols.values()) vols = list(all_vols.values())
t0 = time.time() t0 = time.time()
have_e2d = False
if self.no_expr_idx: if self.no_expr_idx:
modified = False modified = False
@@ -1060,7 +1060,7 @@ class Up2k(object):
"""mutex(main,reg) me""" """mutex(main,reg) me"""
histpath = self.vfs.histtab.get(ptop) histpath = self.vfs.histtab.get(ptop)
if not histpath: if not histpath:
self.log("no histpath for [{}]".format(ptop)) self.log("no histpath for %r" % (ptop,))
return None return None
db_path = os.path.join(histpath, "up2k.db") db_path = os.path.join(histpath, "up2k.db")
@@ -1119,6 +1119,7 @@ class Up2k(object):
reg = {} reg = {}
drp = None drp = None
emptylist = [] emptylist = []
dotpart = "." if self.args.dotpart else ""
snap = os.path.join(histpath, "up2k.snap") snap = os.path.join(histpath, "up2k.snap")
if bos.path.exists(snap): if bos.path.exists(snap):
with gzip.GzipFile(snap, "rb") as f: with gzip.GzipFile(snap, "rb") as f:
@@ -1131,6 +1132,8 @@ class Up2k(object):
except: except:
pass pass
reg = reg2 # diff-golf
if reg2 and "dwrk" not in reg2[next(iter(reg2))]: if reg2 and "dwrk" not in reg2[next(iter(reg2))]:
for job in reg2.values(): for job in reg2.values():
job["dwrk"] = job["wark"] job["dwrk"] = job["wark"]
@@ -1138,7 +1141,8 @@ class Up2k(object):
rm = [] rm = []
for k, job in reg2.items(): for k, job in reg2.items():
job["ptop"] = ptop job["ptop"] = ptop
if "done" in job: is_done = "done" in job
if is_done:
job["need"] = job["hash"] = emptylist job["need"] = job["hash"] = emptylist
else: else:
if "need" not in job: if "need" not in job:
@@ -1146,22 +1150,32 @@ class Up2k(object):
if "hash" not in job: if "hash" not in job:
job["hash"] = [] job["hash"] = []
if is_done:
fp = djoin(ptop, job["prel"], job["name"]) fp = djoin(ptop, job["prel"], job["name"])
else:
fp = djoin(ptop, job["prel"], dotpart + job["name"] + ".PARTIAL")
if bos.path.exists(fp): if bos.path.exists(fp):
reg[k] = job if is_done:
if "done" in job:
continue continue
job["poke"] = time.time() job["poke"] = time.time()
job["busy"] = {} job["busy"] = {}
else: else:
self.log("ign deleted file in snap: [{}]".format(fp)) self.log("ign deleted file in snap: %r" % (fp,))
if not n4g: if not n4g:
rm.append(k) rm.append(k)
continue
for x in rm: for x in rm:
del reg2[x] del reg2[x]
# optimize pre-1.15.4 entries
if next((x for x in reg.values() if "done" in x and "poke" in x), None):
zsl = "host tnam busy sprs poke t0c".split()
for job in reg.values():
if "done" in job:
for k in zsl:
job.pop(k, None)
if drp is None: if drp is None:
drp = [k for k, v in reg.items() if not v["need"]] drp = [k for k, v in reg.items() if not v["need"]]
else: else:
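The snapshot loader now verifies a finished entry against its final filename but an unfinished one against its `.PARTIAL` file (honoring the optional dot-prefix from --dotpart); if the expected file is gone, the entry is forgotten. A small sketch of the path being asserted, mirroring the hunk (`djoin` approximated with os.path.join):

    import os

    def expected_path(ptop: str, job: dict, dotpart: bool) -> str:
        if "done" in job:
            return os.path.join(ptop, job["prel"], job["name"])
        dot = "." if dotpart else ""
        return os.path.join(ptop, job["prel"], dot + job["name"] + ".PARTIAL")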
@@ -1386,12 +1400,12 @@ class Up2k(object):
xvol: bool, xvol: bool,
) -> tuple[int, int, int]: ) -> tuple[int, int, int]:
if xvol and not rcdir.startswith(top): if xvol and not rcdir.startswith(top):
self.log("skip xvol: [{}] -> [{}]".format(cdir, rcdir), 6) self.log("skip xvol: %r -> %r" % (cdir, rcdir), 6)
return 0, 0, 0 return 0, 0, 0
if rcdir in seen: if rcdir in seen:
t = "bailing from symlink loop,\n prev: {}\n curr: {}\n from: {}" t = "bailing from symlink loop,\n prev: %r\n curr: %r\n from: %r"
self.log(t.format(seen[-1], rcdir, cdir), 3) self.log(t % (seen[-1], rcdir, cdir), 3)
return 0, 0, 0 return 0, 0, 0
# total-files-added, total-num-files, recursive-size # total-files-added, total-num-files, recursive-size
@@ -1447,7 +1461,7 @@ class Up2k(object):
and inf.st_dev != dev and inf.st_dev != dev
and not (ANYWIN and bos.stat(rap).st_dev == dev) and not (ANYWIN and bos.stat(rap).st_dev == dev)
): ):
self.log("skip xdev {}->{}: {}".format(dev, inf.st_dev, abspath), 6) self.log("skip xdev %s->%s: %r" % (dev, inf.st_dev, abspath), 6)
continue continue
if abspath in excl or rap in excl: if abspath in excl or rap in excl:
unreg.append(rp) unreg.append(rp)
@@ -1476,10 +1490,10 @@ class Up2k(object):
tnf += i2 tnf += i2
rsz += i3 rsz += i3
except: except:
t = "failed to index subdir [{}]:\n{}" t = "failed to index subdir %r:\n%s"
self.log(t.format(abspath, min_ex()), c=1) self.log(t % (abspath, min_ex()), 1)
elif not stat.S_ISREG(inf.st_mode): elif not stat.S_ISREG(inf.st_mode):
self.log("skip type-0%o file [%s]" % (inf.st_mode, abspath)) self.log("skip type-0%o file %r" % (inf.st_mode, abspath))
else: else:
# self.log("file: {}".format(abspath)) # self.log("file: {}".format(abspath))
if rp.endswith(".PARTIAL") and time.time() - lmod < 60: if rp.endswith(".PARTIAL") and time.time() - lmod < 60:
@@ -1562,7 +1576,7 @@ class Up2k(object):
db.c.execute("insert into cv values (?,?,?)", (crd, cdn, cv)) db.c.execute("insert into cv values (?,?,?)", (crd, cdn, cv))
db.n += 1 db.n += 1
except Exception as ex: except Exception as ex:
self.log("cover {}/{} failed: {}".format(rd, cv, ex), 6) self.log("cover %r/%r failed: %s" % (rd, cv, ex), 6)
seen_files = set([x[2] for x in files]) # for dropcheck seen_files = set([x[2] for x in files]) # for dropcheck
for sz, lmod, fn in files: for sz, lmod, fn in files:
@@ -1584,9 +1598,9 @@ class Up2k(object):
self.pp.n -= 1 self.pp.n -= 1
dw, dts, dsz, ip, at = in_db[0] dw, dts, dsz, ip, at = in_db[0]
if len(in_db) > 1: if len(in_db) > 1:
t = "WARN: multiple entries: [{}] => [{}] |{}|\n{}" t = "WARN: multiple entries: %r => %r |%d|\n%r"
rep_db = "\n".join([repr(x) for x in in_db]) rep_db = "\n".join([repr(x) for x in in_db])
self.log(t.format(top, rp, len(in_db), rep_db)) self.log(t % (top, rp, len(in_db), rep_db))
dts = -1 dts = -1
if fat32 and abs(dts - lmod) == 1: if fat32 and abs(dts - lmod) == 1:
@@ -1595,10 +1609,8 @@ class Up2k(object):
if dts == lmod and dsz == sz and (nohash or dw[0] != "#" or not sz): if dts == lmod and dsz == sz and (nohash or dw[0] != "#" or not sz):
continue continue
t = "reindex [{}] => [{}] mtime({}/{}) size({}/{})".format( t = "reindex %r => %r mtime(%s/%s) size(%s/%s)"
top, rp, dts, lmod, dsz, sz self.log(t % (top, rp, dts, lmod, dsz, sz))
)
self.log(t)
self.db_rm(db.c, rd, fn, 0) self.db_rm(db.c, rd, fn, 0)
tfa += 1 tfa += 1
db.n += 1 db.n += 1
@@ -1614,14 +1626,14 @@ class Up2k(object):
wark = up2k_wark_from_metadata(self.salt, sz, lmod, rd, fn) wark = up2k_wark_from_metadata(self.salt, sz, lmod, rd, fn)
else: else:
if sz > 1024 * 1024: if sz > 1024 * 1024:
self.log("file: {}".format(abspath)) self.log("file: %r" % (abspath,))
try: try:
hashes, _ = self._hashlist_from_file( hashes, _ = self._hashlist_from_file(
abspath, "a{}, ".format(self.pp.n) abspath, "a{}, ".format(self.pp.n)
) )
except Exception as ex: except Exception as ex:
self.log("hash: {} @ [{}]".format(repr(ex), abspath)) self.log("hash: %r @ %r" % (ex, abspath))
continue continue
if not hashes: if not hashes:
@@ -1667,8 +1679,8 @@ class Up2k(object):
assert erd_erd # type: ignore # !rm assert erd_erd # type: ignore # !rm
if n: if n:
t = "forgetting {} shadowed autoindexed files in [{}] > [{}]" t = "forgetting %d shadowed autoindexed files in %r > %r"
self.log(t.format(n, top, sh_rd)) self.log(t % (n, top, sh_rd))
q = "delete from dh where (d = ? or d like ?||'/%')" q = "delete from dh where (d = ? or d like ?||'/%')"
db.c.execute(q, erd_erd) db.c.execute(q, erd_erd)
@@ -1865,7 +1877,7 @@ class Up2k(object):
stl = bos.lstat(abspath) stl = bos.lstat(abspath)
st = bos.stat(abspath) if stat.S_ISLNK(stl.st_mode) else stl st = bos.stat(abspath) if stat.S_ISLNK(stl.st_mode) else stl
except Exception as ex: except Exception as ex:
self.log("missing file: %s" % (abspath,), 3) self.log("missing file: %r" % (abspath,), 3)
f404.append((drd, dfn, w)) f404.append((drd, dfn, w))
continue continue
@@ -1876,12 +1888,12 @@ class Up2k(object):
w2 = up2k_wark_from_metadata(self.salt, sz2, mt2, rd, fn) w2 = up2k_wark_from_metadata(self.salt, sz2, mt2, rd, fn)
else: else:
if sz2 > 1024 * 1024 * 32: if sz2 > 1024 * 1024 * 32:
self.log("file: {}".format(abspath)) self.log("file: %r" % (abspath,))
try: try:
hashes, _ = self._hashlist_from_file(abspath, pf) hashes, _ = self._hashlist_from_file(abspath, pf)
except Exception as ex: except Exception as ex:
self.log("hash: {} @ [{}]".format(repr(ex), abspath)) self.log("hash: %r @ %r" % (ex, abspath))
continue continue
if not hashes: if not hashes:
@@ -1901,9 +1913,8 @@ class Up2k(object):
rewark.append((drd, dfn, w2, sz2, mt2)) rewark.append((drd, dfn, w2, sz2, mt2))
t = "hash mismatch: {}\n db: {} ({} byte, {})\n fs: {} ({} byte, {})" t = "hash mismatch: %r\n db: %s (%d byte, %d)\n fs: %s (%d byte, %d)"
t = t.format(abspath, w, sz, mt, w2, sz2, mt2) self.log(t % (abspath, w, sz, mt, w2, sz2, mt2), 1)
self.log(t, 1)
if e2vp and (rewark or f404): if e2vp and (rewark or f404):
self.hub.retcode = 1 self.hub.retcode = 1
@@ -2451,7 +2462,7 @@ class Up2k(object):
q.task_done() q.task_done()
def _log_tag_err(self, parser: Any, abspath: str, ex: Any) -> None: def _log_tag_err(self, parser: Any, abspath: str, ex: Any) -> None:
msg = "{} failed to read tags from {}:\n{}".format(parser, abspath, ex) msg = "%s failed to read tags from %r:\n%s" % (parser, abspath, ex)
self.log(msg.lstrip(), c=1 if "<Signals.SIG" in msg else 3) self.log(msg.lstrip(), c=1 if "<Signals.SIG" in msg else 3)
def _tagscan_file( def _tagscan_file(
@@ -2991,11 +3002,11 @@ class Up2k(object):
job = rj job = rj
break break
else: else:
self.log("asserting contents of %s" % (orig_ap,)) self.log("asserting contents of %r" % (orig_ap,))
hashes2, st = self._hashlist_from_file(orig_ap) hashes2, st = self._hashlist_from_file(orig_ap)
wark2 = up2k_wark_from_hashlist(self.salt, st.st_size, hashes2) wark2 = up2k_wark_from_hashlist(self.salt, st.st_size, hashes2)
if dwark != wark2: if dwark != wark2:
t = "will not dedup (fs index desync): fs=%s, db=%s, file: %s\n%s" t = "will not dedup (fs index desync): fs=%s, db=%s, file: %r\n%s"
self.log(t % (wark2, dwark, orig_ap, rj)) self.log(t % (wark2, dwark, orig_ap, rj))
lost.append(dupe[3:]) lost.append(dupe[3:])
continue continue
@@ -3007,14 +3018,14 @@ class Up2k(object):
if wark in reg: if wark in reg:
del reg[wark] del reg[wark]
job["hash"] = job["need"] = [] job["hash"] = job["need"] = []
job["done"] = True job["done"] = 1
job["busy"] = {} job["busy"] = {}
if lost: if lost:
c2 = None c2 = None
for cur, dp_dir, dp_fn in lost: for cur, dp_dir, dp_fn in lost:
t = "forgetting desynced db entry: /{}" t = "forgetting desynced db entry: %r"
self.log(t.format(vjoin(vjoin(vfs.vpath, dp_dir), dp_fn))) self.log(t % ("/" + vjoin(vjoin(vfs.vpath, dp_dir), dp_fn)))
self.db_rm(cur, dp_dir, dp_fn, cj["size"]) self.db_rm(cur, dp_dir, dp_fn, cj["size"])
if c2 and c2 != cur: if c2 and c2 != cur:
c2.connection.commit() c2.connection.commit()
@@ -3043,8 +3054,8 @@ class Up2k(object):
except: except:
# missing; restart # missing; restart
if not self.args.nw and not n4g: if not self.args.nw and not n4g:
t = "forgetting deleted partial upload at {}" t = "forgetting deleted partial upload at %r"
self.log(t.format(path)) self.log(t % (path,))
del reg[wark] del reg[wark]
break break
@@ -3055,19 +3066,25 @@ class Up2k(object):
pass pass
elif st.st_size != rj["size"]: elif st.st_size != rj["size"]:
t = "will not dedup (fs index desync): {}, size fs={} db={}, mtime fs={} db={}, file: {}\n{}" t = "will not dedup (fs index desync): %s, size fs=%d db=%d, mtime fs=%d db=%d, file: %r\n%s"
t = t.format( t = t % (
wark, st.st_size, rj["size"], st.st_mtime, rj["lmod"], path, rj wark,
st.st_size,
rj["size"],
st.st_mtime,
rj["lmod"],
path,
rj,
) )
self.log(t) self.log(t)
del reg[wark] del reg[wark]
elif inc_ap != orig_ap and not data_ok and "done" in reg[wark]: elif inc_ap != orig_ap and not data_ok and "done" in reg[wark]:
self.log("asserting contents of %s" % (orig_ap,)) self.log("asserting contents of %r" % (orig_ap,))
hashes2, _ = self._hashlist_from_file(orig_ap) hashes2, _ = self._hashlist_from_file(orig_ap)
wark2 = up2k_wark_from_hashlist(self.salt, st.st_size, hashes2) wark2 = up2k_wark_from_hashlist(self.salt, st.st_size, hashes2)
if wark != wark2: if wark != wark2:
t = "will not dedup (fs index desync): fs=%s, idx=%s, file: %s\n%s" t = "will not dedup (fs index desync): fs=%s, idx=%s, file: %r\n%s"
self.log(t % (wark2, wark, orig_ap, rj)) self.log(t % (wark2, wark, orig_ap, rj))
del reg[wark] del reg[wark]
@@ -3084,7 +3101,7 @@ class Up2k(object):
vsrc = djoin(job["vtop"], job["prel"], job["name"]) vsrc = djoin(job["vtop"], job["prel"], job["name"])
vsrc = vsrc.replace("\\", "/") # just for prints anyways vsrc = vsrc.replace("\\", "/") # just for prints anyways
if "done" not in job: if "done" not in job:
self.log("unfinished:\n {0}\n {1}".format(src, dst)) self.log("unfinished:\n %r\n %r" % (src, dst))
err = "partial upload exists at a different location; please resume uploading here instead:\n" err = "partial upload exists at a different location; please resume uploading here instead:\n"
err += "/" + quotep(vsrc) + " " err += "/" + quotep(vsrc) + " "
@@ -3093,6 +3110,7 @@ class Up2k(object):
if cur: if cur:
dupe = (cj["prel"], cj["name"], cj["lmod"]) dupe = (cj["prel"], cj["name"], cj["lmod"])
try: try:
if dupe not in self.dupesched[src]:
self.dupesched[src].append(dupe) self.dupesched[src].append(dupe)
except: except:
self.dupesched[src] = [dupe] self.dupesched[src] = [dupe]
@@ -3100,7 +3118,7 @@ class Up2k(object):
raise Pebkac(422, err) raise Pebkac(422, err)
elif "nodupe" in vfs.flags: elif "nodupe" in vfs.flags:
self.log("dupe-reject:\n {0}\n {1}".format(src, dst)) self.log("dupe-reject:\n %r\n %r" % (src, dst))
err = "upload rejected, file already exists:\n" err = "upload rejected, file already exists:\n"
err += "/" + quotep(vsrc) + " " err += "/" + quotep(vsrc) + " "
raise Pebkac(409, err) raise Pebkac(409, err)
@@ -3162,7 +3180,7 @@ class Up2k(object):
"", "",
) )
if not hr: if not hr:
t = "upload blocked by xbu server config: %s" % (dst,) t = "upload blocked by xbu server config: %r" % (dst,)
self.log(t, 1) self.log(t, 1)
raise Pebkac(403, t) raise Pebkac(403, t)
if hr.get("reloc"): if hr.get("reloc"):
@@ -3303,7 +3321,7 @@ class Up2k(object):
times = (int(time.time()), int(cj["lmod"])) times = (int(time.time()), int(cj["lmod"]))
bos.utime(ap, times, False) bos.utime(ap, times, False)
self.log("touched %s from %d to %d" % (ap, job["lmod"], cj["lmod"])) self.log("touched %r from %d to %d" % (ap, job["lmod"], cj["lmod"]))
except Exception as ex: except Exception as ex:
self.log("umod failed, %r" % (ex,), 3) self.log("umod failed, %r" % (ex,), 3)
@@ -3318,7 +3336,7 @@ class Up2k(object):
fp = djoin(fdir, fname) fp = djoin(fdir, fname)
if job.get("replace") and bos.path.exists(fp): if job.get("replace") and bos.path.exists(fp):
self.log("replacing existing file at {}".format(fp)) self.log("replacing existing file at %r" % (fp,))
cur = None cur = None
ptop = job["ptop"] ptop = job["ptop"]
vf = self.flags.get(ptop) or {} vf = self.flags.get(ptop) or {}
@@ -3361,9 +3379,9 @@ class Up2k(object):
raise Exception(t % (src, fsrc, dst)) raise Exception(t % (src, fsrc, dst))
if verbose: if verbose:
t = "linking dupe:\n point-to: {0}\n link-loc: {1}" t = "linking dupe:\n point-to: {0!r}\n link-loc: {1!r}"
if fsrc: if fsrc:
t += "\n data-src: {2}" t += "\n data-src: {2!r}"
self.log(t.format(src, dst, fsrc)) self.log(t.format(src, dst, fsrc))
if self.args.nw: if self.args.nw:
@@ -3433,7 +3451,7 @@ class Up2k(object):
elif fsrc and bos.path.isfile(fsrc): elif fsrc and bos.path.isfile(fsrc):
csrc = fsrc csrc = fsrc
else: else:
t = "BUG: no valid sources to link from! orig(%s) fsrc(%s) link(%s)" t = "BUG: no valid sources to link from! orig(%r) fsrc(%r) link(%r)"
self.log(t, 1) self.log(t, 1)
raise Exception(t % (src, fsrc, dst)) raise Exception(t % (src, fsrc, dst))
shutil.copy2(fsenc(csrc), fsenc(dst)) shutil.copy2(fsenc(csrc), fsenc(dst))
@@ -3611,15 +3629,12 @@ class Up2k(object):
atomic_move(self.log, src, dst, vflags) atomic_move(self.log, src, dst, vflags)
times = (int(time.time()), int(job["lmod"])) times = (int(time.time()), int(job["lmod"]))
-        self.log(
-            "no more chunks, setting times {} ({}) on {}".format(
-                times, bos.path.getsize(dst), dst
-            )
-        )
+        t = "no more chunks, setting times %s (%d) on %r"
+        self.log(t % (times, bos.path.getsize(dst), dst))
try: try:
bos.utime(dst, times) bos.utime(dst, times)
except: except:
self.log("failed to utime ({}, {})".format(dst, times)) self.log("failed to utime (%r, %s)" % (dst, times))
zs = "prel name lmod size ptop vtop wark dwrk host user addr" zs = "prel name lmod size ptop vtop wark dwrk host user addr"
z2 = [job[x] for x in zs.split()] z2 = [job[x] for x in zs.split()]
@@ -3936,7 +3951,7 @@ class Up2k(object):
if jrem == rem: if jrem == rem:
if job["ptop"] != ptop: if job["ptop"] != ptop:
t = "job.ptop [%s] != vol.ptop [%s] ??" t = "job.ptop [%s] != vol.ptop [%s] ??"
raise Exception(t % (job["ptop"] != ptop)) raise Exception(t % (job["ptop"], ptop))
partial = vn.canonical(vjoin(job["prel"], job["tnam"])) partial = vn.canonical(vjoin(job["prel"], job["tnam"]))
break break
if partial: if partial:
@@ -3976,7 +3991,7 @@ class Up2k(object):
if is_dir: if is_dir:
# note: deletion inside shares would require a rewrite here; # note: deletion inside shares would require a rewrite here;
# shares necessitate get_dbv which is incompatible with walk # shares necessitate get_dbv which is incompatible with walk
g = vn0.walk("", rem0, [], uname, permsets, True, scandir, True) g = vn0.walk("", rem0, [], uname, permsets, 2, scandir, True)
if unpost: if unpost:
raise Pebkac(400, "cannot unpost folders") raise Pebkac(400, "cannot unpost folders")
elif stat.S_ISLNK(st.st_mode) or stat.S_ISREG(st.st_mode): elif stat.S_ISLNK(st.st_mode) or stat.S_ISREG(st.st_mode):
@@ -3984,7 +3999,7 @@ class Up2k(object):
vpath_dir = vsplit(vpath)[0] vpath_dir = vsplit(vpath)[0]
g = [(vn, voldir, vpath_dir, adir, [(fn, 0)], [], {})] # type: ignore g = [(vn, voldir, vpath_dir, adir, [(fn, 0)], [], {})] # type: ignore
else: else:
self.log("rm: skip type-0%o file [%s]" % (st.st_mode, atop)) self.log("rm: skip type-0%o file %r" % (st.st_mode, atop))
return 0, [], [] return 0, [], []
xbd = vn.flags.get("xbd") xbd = vn.flags.get("xbd")
@@ -4008,7 +4023,7 @@ class Up2k(object):
volpath = ("%s/%s" % (vrem, fn)).strip("/") volpath = ("%s/%s" % (vrem, fn)).strip("/")
vpath = ("%s/%s" % (dbv.vpath, volpath)).strip("/") vpath = ("%s/%s" % (dbv.vpath, volpath)).strip("/")
self.log("rm %s\n %s" % (vpath, abspath)) self.log("rm %r\n %r" % (vpath, abspath))
if not unpost: if not unpost:
# recursion-only sanchk # recursion-only sanchk
_ = dbv.get(volpath, uname, *permsets[0]) _ = dbv.get(volpath, uname, *permsets[0])
@@ -4031,8 +4046,8 @@ class Up2k(object):
time.time(), time.time(),
"", "",
): ):
t = "delete blocked by xbd server config: {}" t = "delete blocked by xbd server config: %r"
self.log(t.format(abspath), 1) self.log(t % (abspath,), 1)
continue continue
n_files += 1 n_files += 1
@@ -4089,6 +4104,7 @@ class Up2k(object):
raise Pebkac(400, "cp: cannot copy parent into subfolder") raise Pebkac(400, "cp: cannot copy parent into subfolder")
svn, srem = self.vfs.get(svp, uname, True, False) svn, srem = self.vfs.get(svp, uname, True, False)
dvn, drem = self.vfs.get(dvp, uname, False, True)
svn_dbv, _ = svn.get_dbv(srem) svn_dbv, _ = svn.get_dbv(srem)
sabs = svn.canonical(srem, False) sabs = svn.canonical(srem, False)
curs: set["sqlite3.Cursor"] = set() curs: set["sqlite3.Cursor"] = set()
@@ -4111,8 +4127,12 @@ class Up2k(object):
permsets = [[True, False]] permsets = [[True, False]]
scandir = not self.args.no_scandir scandir = not self.args.no_scandir
# if user can see dotfiles in target volume, only include
# dots from source vols where user also has the dot perm
dots = 1 if uname in dvn.axs.udot else 2
# don't use svn_dbv; would skip subvols due to _ls `if not rem:` # don't use svn_dbv; would skip subvols due to _ls `if not rem:`
g = svn.walk("", srem, [], uname, permsets, True, scandir, True) g = svn.walk("", srem, [], uname, permsets, dots, scandir, True)
with self.mutex: with self.mutex:
try: try:
for dbv, vrem, _, atop, files, rd, vd in g: for dbv, vrem, _, atop, files, rd, vd in g:
@@ -4121,7 +4141,7 @@ class Up2k(object):
svpf = "/".join(x for x in [dbv.vpath, vrem, fn[0]] if x) svpf = "/".join(x for x in [dbv.vpath, vrem, fn[0]] if x)
if not svpf.startswith(svp + "/"): # assert if not svpf.startswith(svp + "/"): # assert
self.log(min_ex(), 1) self.log(min_ex(), 1)
t = "cp: bug at %s, top %s%s" t = "cp: bug at %r, top %r%s"
raise Pebkac(500, t % (svpf, svp, SEESLOG)) raise Pebkac(500, t % (svpf, svp, SEESLOG))
dvpf = dvp + svpf[len(svp) :] dvpf = dvp + svpf[len(svp) :]
@@ -4161,7 +4181,7 @@ class Up2k(object):
except: except:
pass # broken symlink; keep as-is pass # broken symlink; keep as-is
elif not stat.S_ISREG(st.st_mode): elif not stat.S_ISREG(st.st_mode):
self.log("skipping type-0%o file [%s]" % (st.st_mode, sabs)) self.log("skipping type-0%o file %r" % (st.st_mode, sabs))
return "" return ""
else: else:
is_link = False is_link = False
@@ -4189,7 +4209,7 @@ class Up2k(object):
time.time(), time.time(),
"", "",
): ):
t = "copy blocked by xbr server config: {}".format(svp) t = "copy blocked by xbr server config: %r" % (svp,)
self.log(t, 1) self.log(t, 1)
raise Pebkac(405, t) raise Pebkac(405, t)
@@ -4226,7 +4246,7 @@ class Up2k(object):
) )
curs.add(c2) curs.add(c2)
else: else:
self.log("not found in src db: [{}]".format(svp)) self.log("not found in src db: %r" % (svp,))
try: try:
if is_link and st != stl: if is_link and st != stl:
@@ -4243,7 +4263,7 @@ class Up2k(object):
if ex.errno != errno.EXDEV: if ex.errno != errno.EXDEV:
raise raise
self.log("using plain copy (%s):\n %s\n %s" % (ex.strerror, sabs, dabs)) self.log("using plain copy (%s):\n %r\n %r" % (ex.strerror, sabs, dabs))
b1, b2 = fsenc(sabs), fsenc(dabs) b1, b2 = fsenc(sabs), fsenc(dabs)
is_link = os.path.islink(b1) # due to _relink is_link = os.path.islink(b1) # due to _relink
try: try:
@@ -4324,13 +4344,13 @@ class Up2k(object):
scandir = not self.args.no_scandir scandir = not self.args.no_scandir
# following symlinks is too scary # following symlinks is too scary
g = svn.walk("", srem, [], uname, permsets, True, scandir, True) g = svn.walk("", srem, [], uname, permsets, 2, scandir, True)
for dbv, vrem, _, atop, files, rd, vd in g: for dbv, vrem, _, atop, files, rd, vd in g:
if dbv != jail: if dbv != jail:
# fail early (prevent partial moves) # fail early (prevent partial moves)
raise Pebkac(400, "mv: source folder contains other volumes") raise Pebkac(400, "mv: source folder contains other volumes")
g = svn.walk("", srem, [], uname, permsets, True, scandir, True) g = svn.walk("", srem, [], uname, permsets, 2, scandir, True)
with self.mutex: with self.mutex:
try: try:
for dbv, vrem, _, atop, files, rd, vd in g: for dbv, vrem, _, atop, files, rd, vd in g:
@@ -4343,7 +4363,7 @@ class Up2k(object):
svpf = "/".join(x for x in [dbv.vpath, vrem, fn[0]] if x) svpf = "/".join(x for x in [dbv.vpath, vrem, fn[0]] if x)
if not svpf.startswith(svp + "/"): # assert if not svpf.startswith(svp + "/"): # assert
self.log(min_ex(), 1) self.log(min_ex(), 1)
t = "mv: bug at %s, top %s%s" t = "mv: bug at %r, top %r%s"
raise Pebkac(500, t % (svpf, svp, SEESLOG)) raise Pebkac(500, t % (svpf, svp, SEESLOG))
dvpf = dvp + svpf[len(svp) :] dvpf = dvp + svpf[len(svp) :]
@@ -4362,7 +4382,7 @@ class Up2k(object):
for ap in reversed(zsl): for ap in reversed(zsl):
if not ap.startswith(sabs): if not ap.startswith(sabs):
self.log(min_ex(), 1) self.log(min_ex(), 1)
t = "mv_d: bug at %s, top %s%s" t = "mv_d: bug at %r, top %r%s"
raise Pebkac(500, t % (ap, sabs, SEESLOG)) raise Pebkac(500, t % (ap, sabs, SEESLOG))
rem = ap[len(sabs) :].replace(os.sep, "/").lstrip("/") rem = ap[len(sabs) :].replace(os.sep, "/").lstrip("/")
@@ -4433,7 +4453,7 @@ class Up2k(object):
time.time(), time.time(),
"", "",
): ):
t = "move blocked by xbr server config: {}".format(svp) t = "move blocked by xbr server config: %r" % (svp,)
self.log(t, 1) self.log(t, 1)
raise Pebkac(405, t) raise Pebkac(405, t)
@@ -4443,8 +4463,8 @@ class Up2k(object):
if is_dirlink: if is_dirlink:
dlabs = absreal(sabs) dlabs = absreal(sabs)
t = "moving symlink from [{}] to [{}], target [{}]" t = "moving symlink from %r to %r, target %r"
self.log(t.format(sabs, dabs, dlabs)) self.log(t % (sabs, dabs, dlabs))
mt = bos.path.getmtime(sabs, False) mt = bos.path.getmtime(sabs, False)
wunlink(self.log, sabs, svn.flags) wunlink(self.log, sabs, svn.flags)
self._symlink(dlabs, dabs, dvn.flags, False, lmod=mt) self._symlink(dlabs, dabs, dvn.flags, False, lmod=mt)
@@ -4516,7 +4536,7 @@ class Up2k(object):
) )
curs.add(c2) curs.add(c2)
else: else:
self.log("not found in src db: [{}]".format(svp)) self.log("not found in src db: %r" % (svp,))
try: try:
if is_xvol and has_dupes: if is_xvol and has_dupes:
@@ -4537,7 +4557,7 @@ class Up2k(object):
if ex.errno != errno.EXDEV: if ex.errno != errno.EXDEV:
raise raise
self.log("using copy+delete (%s):\n %s\n %s" % (ex.strerror, sabs, dabs)) self.log("using copy+delete (%s):\n %r\n %r" % (ex.strerror, sabs, dabs))
b1, b2 = fsenc(sabs), fsenc(dabs) b1, b2 = fsenc(sabs), fsenc(dabs)
is_link = os.path.islink(b1) # due to _relink is_link = os.path.islink(b1) # due to _relink
try: try:
@@ -4645,7 +4665,7 @@ class Up2k(object):
""" """
srd, sfn = vsplit(vrem) srd, sfn = vsplit(vrem)
has_dupes = False has_dupes = False
self.log("forgetting {}".format(vrem)) self.log("forgetting %r" % (vrem,))
if wark and cur: if wark and cur:
self.log("found {} in db".format(wark)) self.log("found {} in db".format(wark))
if drop_tags: if drop_tags:
@@ -4676,6 +4696,13 @@ class Up2k(object):
t = "forgetting partial upload {} ({})" t = "forgetting partial upload {} ({})"
p = self._vis_job_progress(job) p = self._vis_job_progress(job)
self.log(t.format(wark, p)) self.log(t.format(wark, p))
src = djoin(ptop, vrem)
zi = len(self.dupesched.pop(src, []))
if zi:
t = "...and forgetting %d links in dupesched"
self.log(t % (zi,))
assert wark assert wark
del reg[wark] del reg[wark]
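context for the dupesched changes in this diff: dupesched appears to map an upload's absolute source path to the duplicate uploads queued behind it, so forgetting a partial upload now also drops the queued links pointing at it, and the handshake path no longer queues the same duplicate twice.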
@@ -4714,7 +4741,7 @@ class Up2k(object):
dvrem = vjoin(rd, fn).strip("/") dvrem = vjoin(rd, fn).strip("/")
if ptop != sptop or srem != dvrem: if ptop != sptop or srem != dvrem:
dupes.append([ptop, dvrem]) dupes.append([ptop, dvrem])
self.log("found {} dupe: [{}] {}".format(wark, ptop, dvrem)) self.log("found %s dupe: %r %r" % (wark, ptop, dvrem))
if not dupes: if not dupes:
return 0 return 0
@@ -4727,7 +4754,7 @@ class Up2k(object):
d = links if bos.path.islink(ap) else full d = links if bos.path.islink(ap) else full
d[ap] = (ptop, vp) d[ap] = (ptop, vp)
except: except:
self.log("relink: not found: [{}]".format(ap)) self.log("relink: not found: %r" % (ap,))
# self.log("full:\n" + "\n".join(" {:90}: {}".format(*x) for x in full.items())) # self.log("full:\n" + "\n".join(" {:90}: {}".format(*x) for x in full.items()))
# self.log("links:\n" + "\n".join(" {:90}: {}".format(*x) for x in links.items())) # self.log("links:\n" + "\n".join(" {:90}: {}".format(*x) for x in links.items()))
@@ -4735,7 +4762,7 @@ class Up2k(object):
# deleting final remaining full copy; swap it with a symlink # deleting final remaining full copy; swap it with a symlink
slabs = list(sorted(links.keys()))[0] slabs = list(sorted(links.keys()))[0]
ptop, rem = links.pop(slabs) ptop, rem = links.pop(slabs)
self.log("linkswap [{}] and [{}]".format(sabs, slabs)) self.log("linkswap %r and %r" % (sabs, slabs))
mt = bos.path.getmtime(slabs, False) mt = bos.path.getmtime(slabs, False)
flags = self.flags.get(ptop) or {} flags = self.flags.get(ptop) or {}
atomic_move(self.log, sabs, slabs, flags) atomic_move(self.log, sabs, slabs, flags)
@@ -4770,7 +4797,7 @@ class Up2k(object):
zs = absreal(alink) zs = absreal(alink)
if ldst != zs: if ldst != zs:
t = "relink because computed != actual destination:\n %s\n %s" t = "relink because computed != actual destination:\n %r\n %r"
self.log(t % (ldst, zs), 3) self.log(t % (ldst, zs), 3)
ldst = zs ldst = zs
faulty = True faulty = True
@@ -4785,7 +4812,7 @@ class Up2k(object):
t = "relink because symlink verification failed: %s; %r" t = "relink because symlink verification failed: %s; %r"
self.log(t % (ex, ex), 3) self.log(t % (ex, ex), 3)
self.log("relinking [%s] to [%s]" % (alink, dabs)) self.log("relinking %r to %r" % (alink, dabs))
flags = self.flags.get(parts[0]) or {} flags = self.flags.get(parts[0]) or {}
try: try:
lmod = bos.path.getmtime(alink, False) lmod = bos.path.getmtime(alink, False)
@@ -4900,7 +4927,7 @@ class Up2k(object):
"", "",
) )
if not hr: if not hr:
t = "upload blocked by xbu server config: {}".format(vp_chk) t = "upload blocked by xbu server config: %r" % (vp_chk,)
self.log(t, 1) self.log(t, 1)
raise Pebkac(403, t) raise Pebkac(403, t)
if hr.get("reloc"): if hr.get("reloc"):
@@ -4950,7 +4977,7 @@ class Up2k(object):
try: try:
sp.check_call(["fsutil", "sparse", "setflag", abspath]) sp.check_call(["fsutil", "sparse", "setflag", abspath])
except: except:
self.log("could not sparse [{}]".format(abspath), 3) self.log("could not sparse %r" % (abspath,), 3)
relabel = True relabel = True
sprs = False sprs = False
@@ -5137,7 +5164,7 @@ class Up2k(object):
self._tag_file(cur, entags, wark, abspath, tags) self._tag_file(cur, entags, wark, abspath, tags)
cur.connection.commit() cur.connection.commit()
self.log("tagged {} ({}+{})".format(abspath, ntags1, len(tags) - ntags1)) self.log("tagged %r (%d+%d)" % (abspath, ntags1, len(tags) - ntags1))
def _hasher(self) -> None: def _hasher(self) -> None:
with self.hashq_mutex: with self.hashq_mutex:
@@ -5156,7 +5183,7 @@ class Up2k(object):
if not self._hash_t(task) and self.stop: if not self._hash_t(task) and self.stop:
return return
except Exception as ex: except Exception as ex:
self.log("failed to hash %s: %s" % (task, ex), 1) self.log("failed to hash %r: %s" % (task, ex), 1)
def _hash_t( def _hash_t(
self, task: tuple[str, str, dict[str, Any], str, str, str, float, str, bool] self, task: tuple[str, str, dict[str, Any], str, str, str, float, str, bool]
@@ -5168,7 +5195,7 @@ class Up2k(object):
return True return True
abspath = djoin(ptop, rd, fn) abspath = djoin(ptop, rd, fn)
self.log("hashing " + abspath) self.log("hashing %r" % (abspath,))
inf = bos.stat(abspath) inf = bos.stat(abspath)
if not inf.st_size: if not inf.st_size:
wark = up2k_wark_from_metadata( wark = up2k_wark_from_metadata(
@@ -5261,7 +5288,7 @@ class Up2k(object):
x = pathmod(self.vfs, "", req_vp, {"vp": fvp, "fn": fn}) x = pathmod(self.vfs, "", req_vp, {"vp": fvp, "fn": fn})
if not x: if not x:
t = "hook_fx(%s): failed to resolve %s based on %s" t = "hook_fx(%s): failed to resolve %r based on %r"
self.log(t % (act, fvp, req_vp)) self.log(t % (act, fvp, req_vp))
continue continue
@@ -1024,7 +1024,7 @@ class ProgressPrinter(threading.Thread):
now = time.time() now = time.time()
if msg and now - tp > 10: if msg and now - tp > 10:
tp = now tp = now
self.log("progress: %s" % (msg,), 6) self.log("progress: %r" % (msg,), 6)
if no_stdout: if no_stdout:
continue continue
@@ -1626,7 +1626,7 @@ class MultipartParser(object):
(only the fallback non-js uploader relies on these filenames) (only the fallback non-js uploader relies on these filenames)
""" """
for ln in read_header(self.sr, 2, 2592000): for ln in read_header(self.sr, 2, 2592000):
self.log(ln) self.log(repr(ln))
m = self.re_ctype.match(ln) m = self.re_ctype.match(ln)
if m: if m:
@@ -1917,11 +1917,11 @@ def gen_filekey_dbg(
if p2 != fspath: if p2 != fspath:
raise Exception() raise Exception()
except: except:
t = "maybe wrong abspath for filekey;\norig: {}\nreal: {}" t = "maybe wrong abspath for filekey;\norig: %r\nreal: %r"
log(t.format(fspath, p2), 1) log(t % (fspath, p2), 1)
t = "fk({}) salt({}) size({}) inode({}) fspath({}) at({})" t = "fk(%s) salt(%s) size(%d) inode(%d) fspath(%r) at(%s)"
log(t.format(ret[:8], salt, fsize, inode, fspath, ctx), 5) log(t % (ret[:8], salt, fsize, inode, fspath, ctx), 5)
return ret return ret
@@ -2277,7 +2277,7 @@ def log_reloc(
rem: str, rem: str,
) -> None: ) -> None:
nap, nvp, nfn, (nvn, nrem) = pm nap, nvp, nfn, (nvn, nrem) = pm
t = "reloc %s:\nold ap [%s]\nnew ap [%s\033[36m/%s\033[0m]\nold vp [%s]\nnew vp [%s\033[36m/%s\033[0m]\nold fn [%s]\nnew fn [%s]\nold vfs [%s]\nnew vfs [%s]\nold rem [%s]\nnew rem [%s]" t = "reloc %s:\nold ap %r\nnew ap %r\033[36m/%r\033[0m\nold vp %r\nnew vp %r\033[36m/%r\033[0m\nold fn %r\nnew fn %r\nold vfs %r\nnew vfs %r\nold rem %r\nnew rem %r"
log(t % (re, ap, nap, nfn, vp, nvp, nfn, fn, nfn, vn.vpath, nvn.vpath, rem, nrem)) log(t % (re, ap, nap, nfn, vp, nvp, nfn, fn, nfn, vn.vpath, nvn.vpath, rem, nrem))
@@ -2448,7 +2448,7 @@ def lsof(log: "NamedLogger", abspath: str) -> None:
try: try:
rc, so, se = runcmd([b"lsof", b"-R", fsenc(abspath)], timeout=45) rc, so, se = runcmd([b"lsof", b"-R", fsenc(abspath)], timeout=45)
zs = (so.strip() + "\n" + se.strip()).strip() zs = (so.strip() + "\n" + se.strip()).strip()
log("lsof {} = {}\n{}".format(abspath, rc, zs), 3) log("lsof %r = %s\n%s" % (abspath, rc, zs), 3)
except: except:
log("lsof failed; " + min_ex(), 3) log("lsof failed; " + min_ex(), 3)
@@ -2484,17 +2484,17 @@ def _fs_mvrm(
for attempt in range(90210): for attempt in range(90210):
try: try:
if ino and os.stat(bsrc).st_ino != ino: if ino and os.stat(bsrc).st_ino != ino:
t = "src inode changed; aborting %s %s" t = "src inode changed; aborting %s %r"
log(t % (act, src), 1) log(t % (act, src), 1)
return False return False
if (dst and not atomic) and os.path.exists(bdst): if (dst and not atomic) and os.path.exists(bdst):
t = "something appeared at dst; aborting rename [%s] ==> [%s]" t = "something appeared at dst; aborting rename %r ==> %r"
log(t % (src, dst), 1) log(t % (src, dst), 1)
return False return False
osfun(*args) osfun(*args)
if attempt: if attempt:
now = time.time() now = time.time()
t = "%sd in %.2f sec, attempt %d: %s" t = "%sd in %.2f sec, attempt %d: %r"
log(t % (act, now - t0, attempt + 1, src)) log(t % (act, now - t0, attempt + 1, src))
return True return True
except OSError as ex: except OSError as ex:
@@ -2506,7 +2506,7 @@ def _fs_mvrm(
if not attempt: if not attempt:
if not PY2: if not PY2:
ino = os.stat(bsrc).st_ino ino = os.stat(bsrc).st_ino
t = "%s failed (err.%d); retrying for %d sec: [%s]" t = "%s failed (err.%d); retrying for %d sec: %r"
log(t % (act, ex.errno, maxtime + 0.99, src)) log(t % (act, ex.errno, maxtime + 0.99, src))
time.sleep(chill) time.sleep(chill)
@@ -3535,7 +3535,7 @@ def runhook(
log, src, cmd, ap, vp, host, uname, perms, mt, sz, ip, at, txt log, src, cmd, ap, vp, host, uname, perms, mt, sz, ip, at, txt
) )
if log and args.hook_v: if log and args.hook_v:
log("hook(%s) [%s] => \033[32m%s" % (src, cmd, hr), 6) log("hook(%s) %r => \033[32m%s" % (src, cmd, hr), 6)
if not hr: if not hr:
return {} return {}
for k, v in hr.items(): for k, v in hr.items():
@@ -2785,6 +2785,7 @@ html.b #u2conf a.b:hover {
padding-left: .2em; padding-left: .2em;
} }
.fsearch_explain { .fsearch_explain {
color: var(--a-dark);
padding-left: .7em; padding-left: .7em;
font-size: 1.1em; font-size: 1.1em;
line-height: 0; line-height: 0;
@@ -131,7 +131,7 @@
<div id="widget"></div> <div id="widget"></div>
<script> <script>
var SR = {{ r|tojson }}, var SR = "{{ r }}",
CGV1 = {{ cgv1 }}, CGV1 = {{ cgv1 }},
CGV = {{ cgv|tojson }}, CGV = {{ cgv|tojson }},
TS = "{{ ts }}", TS = "{{ ts }}",
@@ -542,6 +542,7 @@ var Ls = {
"u_hashdone": 'hashing done', "u_hashdone": 'hashing done',
"u_hashing": 'hash', "u_hashing": 'hash',
"u_hs": 'handshaking...', "u_hs": 'handshaking...',
"u_started": "the files are now being uploaded; see [🚀]",
"u_dupdefer": "duplicate; will be processed after all other files", "u_dupdefer": "duplicate; will be processed after all other files",
"u_actx": "click this text to prevent loss of<br />performance when switching to other windows/tabs", "u_actx": "click this text to prevent loss of<br />performance when switching to other windows/tabs",
"u_fixed": "OK!&nbsp; Fixed it 👍", "u_fixed": "OK!&nbsp; Fixed it 👍",
@@ -577,6 +578,7 @@ var Ls = {
"ue_la": 'you are currently logged in as "{0}"', "ue_la": 'you are currently logged in as "{0}"',
"ue_sr": 'you are currently in file-search mode\n\nswitch to upload-mode by clicking the magnifying glass 🔎 (next to the big SEARCH button), and try uploading again\n\nsorry', "ue_sr": 'you are currently in file-search mode\n\nswitch to upload-mode by clicking the magnifying glass 🔎 (next to the big SEARCH button), and try uploading again\n\nsorry',
"ue_ta": 'try uploading again, it should work now', "ue_ta": 'try uploading again, it should work now',
"ue_ab": "this file is already being uploaded into another folder, and that upload must be completed before the file can be uploaded elsewhere.\n\nYou can abort and forget the initial upload using the top-left 🧯",
"ur_1uo": "OK: File uploaded successfully", "ur_1uo": "OK: File uploaded successfully",
"ur_auo": "OK: All {0} files uploaded successfully", "ur_auo": "OK: All {0} files uploaded successfully",
"ur_1so": "OK: File found on server", "ur_1so": "OK: File found on server",
@@ -1129,6 +1131,7 @@ var Ls = {
"u_hashdone": 'befaring ferdig', "u_hashdone": 'befaring ferdig',
"u_hashing": 'les', "u_hashing": 'les',
"u_hs": 'serveren tenker...', "u_hs": 'serveren tenker...',
"u_started": "filene blir nå lastet opp 🚀",
"u_dupdefer": "duplikat; vil bli håndtert til slutt", "u_dupdefer": "duplikat; vil bli håndtert til slutt",
"u_actx": "klikk her for å forhindre tap av<br />ytelse ved bytte til andre vinduer/faner", "u_actx": "klikk her for å forhindre tap av<br />ytelse ved bytte til andre vinduer/faner",
"u_fixed": "OK!&nbsp; Løste seg 👍", "u_fixed": "OK!&nbsp; Løste seg 👍",
@@ -1164,6 +1167,7 @@ var Ls = {
"ue_la": 'du er logget inn som "{0}"', "ue_la": 'du er logget inn som "{0}"',
"ue_sr": 'du er i filsøk-modus\n\nbytt til opplastning ved å klikke på forstørrelsesglasset 🔎 (ved siden av den store FILSØK-knappen) og prøv igjen\n\nsorry', "ue_sr": 'du er i filsøk-modus\n\nbytt til opplastning ved å klikke på forstørrelsesglasset 🔎 (ved siden av den store FILSØK-knappen) og prøv igjen\n\nsorry',
"ue_ta": 'prøv å laste opp igjen, det burde funke nå', "ue_ta": 'prøv å laste opp igjen, det burde funke nå',
"ue_ab": "den samme filen er allerede under opplastning til en annen mappe, og den må fullføres der før filen kan lastes opp andre steder.\n\nDu kan avbryte og glemme den påbegynte opplastningen ved hjelp av 🧯 oppe til venstre",
"ur_1uo": "OK: Filen ble lastet opp", "ur_1uo": "OK: Filen ble lastet opp",
"ur_auo": "OK: Alle {0} filene ble lastet opp", "ur_auo": "OK: Alle {0} filene ble lastet opp",
"ur_1so": "OK: Filen ble funnet på serveren", "ur_1so": "OK: Filen ble funnet på serveren",
@@ -1716,6 +1720,7 @@ var Ls = {
"u_hashdone": '哈希完成', "u_hashdone": '哈希完成',
"u_hashing": '哈希', "u_hashing": '哈希',
"u_hs": '正在等待服务器...', "u_hs": '正在等待服务器...',
"u_started": "文件现在正在上传 🚀", //m
"u_dupdefer": "这是一个重复文件。它将在所有其他文件上传后进行处理", "u_dupdefer": "这是一个重复文件。它将在所有其他文件上传后进行处理",
"u_actx": "单击此文本以防止切换到其他窗口/选项卡时性能下降", "u_actx": "单击此文本以防止切换到其他窗口/选项卡时性能下降",
"u_fixed": "好!&nbsp;已修复 👍", "u_fixed": "好!&nbsp;已修复 👍",
@@ -1751,6 +1756,7 @@ var Ls = {
"ue_la": '你当前以 "{0}" 登录', "ue_la": '你当前以 "{0}" 登录',
"ue_sr": '你当前处于文件搜索模式\n\n通过点击大搜索按钮旁边的放大镜 🔎 切换到上传模式,然后重试上传\n\n抱歉', "ue_sr": '你当前处于文件搜索模式\n\n通过点击大搜索按钮旁边的放大镜 🔎 切换到上传模式,然后重试上传\n\n抱歉',
"ue_ta": '尝试再次上传,现在应该能正常工作', "ue_ta": '尝试再次上传,现在应该能正常工作',
"ue_ab": "这份文件正在上传到另一个文件夹,必须完成该上传后,才能将文件上传到其他位置。\n\n您可以通过左上角的🧯中止并忘记该上传。", //m
"ur_1uo": "成功:文件上传成功", "ur_1uo": "成功:文件上传成功",
"ur_auo": "成功:所有 {0} 个文件上传成功", "ur_auo": "成功:所有 {0} 个文件上传成功",
"ur_1so": "成功:文件在服务器上找到", "ur_1so": "成功:文件在服务器上找到",
@@ -3058,6 +3064,7 @@ var vbar = (function () {
can = r.can.can; can = r.can.can;
ctx = r.can.ctx; ctx = r.can.ctx;
ctx.font = '.7em sans-serif'; ctx.font = '.7em sans-serif';
ctx.fontVariantCaps = 'small-caps';
w = r.can.w; w = r.can.w;
h = r.can.h; h = r.can.h;
r.draw(); r.draw();
@@ -3083,18 +3090,18 @@ var vbar = (function () {
ctx.fillStyle = grad2; ctx.fillRect(0, 0, w, h); ctx.fillStyle = grad2; ctx.fillRect(0, 0, w, h);
ctx.fillStyle = grad1; ctx.fillRect(0, 0, w * mp.vol, h); ctx.fillStyle = grad1; ctx.fillRect(0, 0, w * mp.vol, h);
-            if (Date.now() - lastv > 1000)
-                return;
-
-            var vt = Math.floor(mp.vol * 100),
-                tw = ctx.measureText(vt).width;
-            var li = dy;
-            if (mp.vol < 0.05)
+            var vt = 'volume ' + Math.floor(mp.vol * 100),
+                tw = ctx.measureText(vt).width,
+                x = w * mp.vol - tw - 8,
+                li = dy;
+
+            if (mp.vol < 0.5) {
+                x += tw + 16;
                li = !li;
+            }
            ctx.fillStyle = li ? '#fff' : '#210';
-            ctx.fillText(vt, Math.max(4, w * mp.vol - tw - 8), h / 3 * 2);
+            ctx.fillText(vt, x, h / 3 * 2);

            clearTimeout(untext);
            untext = setTimeout(r.draw, 1000);
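net effect of the vbar change: the early `lastv` bail is gone so the label is drawn on every repaint, it now reads `volume NN` (in small-caps, per the fontVariantCaps line above), and when the bar is less than half full the label is drawn just past the right edge of the fill (`x += tw + 16`) with flipped contrast, instead of being clamped inside it.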
@@ -9465,7 +9472,23 @@ var unpost = (function () {
toast.ok(5, this.responseText); toast.ok(5, this.responseText);
if (!QS('#op_unpost a[me]')) if (!QS('#op_unpost a[me]'))
-            ebi(goto_unpost());
+            goto_unpost();
+
+        var fi = window.up2k && up2k.st.files;
+        if (fi && fi.length < 9) {
+            for (var a = 0; a < fi.length; a++) {
+                var f = fi[a];
+                if (!f.done && (f.rechecks || f.want_recheck) &&
+                    !has(up2k.st.todo.handshake, f) &&
+                    !has(up2k.st.busy.handshake, f)
+                ) {
+                    up2k.st.todo.handshake.push(f);
+                    up2k.ui.seth(f.n, 2, L.u_hashdone);
+                    up2k.ui.seth(f.n, 1, '📦 wait');
+                    up2k.ui.move(f.n, 'bz');
+                }
+            }
+        }
    }
ct.onclick = function (e) { ct.onclick = function (e) {
@@ -128,9 +128,9 @@ write markdown (most html is 🙆 too)
<script> <script>
var SR = {{ r|tojson }}, var SR = "{{ r }}",
last_modified = {{ lastmod }}, last_modified = {{ lastmod }},
have_emp = {{ have_emp|tojson }}, have_emp = {{ "true" if have_emp else "false" }},
dfavico = "{{ favico }}"; dfavico = "{{ favico }}";
var md_opt = { var md_opt = {
@@ -26,9 +26,9 @@
<a href="#" id="repl">π</a> <a href="#" id="repl">π</a>
<script> <script>
var SR = {{ r|tojson }}, var SR = "{{ r }}",
last_modified = {{ lastmod }}, last_modified = {{ lastmod }},
have_emp = {{ have_emp|tojson }}, have_emp = {{ "true" if have_emp else "false" }},
dfavico = "{{ favico }}"; dfavico = "{{ favico }}";
var md_opt = { var md_opt = {
copyparty/web/rups.css (new file, 107 lines)
@@ -0,0 +1,107 @@
html {
color: #333;
background: #f7f7f7;
font-family: sans-serif;
font-family: var(--font-main), sans-serif;
touch-action: manipulation;
}
#wrap {
margin: 2em auto;
padding: 0 1em 3em 1em;
line-height: 2.3em;
}
a {
color: #047;
background: #fff;
text-decoration: none;
border-bottom: 1px solid #8ab;
border-radius: .2em;
padding: .2em .6em;
margin: 0 .3em;
}
#wrap td a {
margin: 0;
line-height: 1em;
display: inline-block;
white-space: initial;
font-family: var(--font-main), sans-serif;
}
#repl {
border: none;
background: none;
color: inherit;
padding: 0;
position: fixed;
bottom: .25em;
left: .2em;
}
#wrap table {
border-collapse: collapse;
position: relative;
margin-top: 2em;
}
#wrap th {
top: -1px;
position: sticky;
background: #f7f7f7;
}
#wrap td {
font-family: var(--font-mono), monospace, monospace;
white-space: pre; /*date*/
overflow: hidden; /*ipv6*/
}
#wrap th:first-child,
#wrap td:first-child {
text-align: right;
}
#wrap td,
#wrap th {
text-align: left;
padding: .3em .6em;
max-width: 30vw;
}
#wrap tr:hover td {
background: #ddd;
box-shadow: 0 -1px 0 rgba(128, 128, 128, 0.5) inset;
}
#wrap th:first-child,
#wrap td:first-child {
border-radius: .5em 0 0 .5em;
}
#wrap th:last-child,
#wrap td:last-child {
border-radius: 0 .5em .5em 0;
}
html.z {
background: #222;
color: #ccc;
}
html.bz {
background: #11121d;
color: #bbd;
}
html.z a {
color: #fff;
background: #057;
border-color: #37a;
}
html.z input[type=text] {
color: #ddd;
background: #223;
border: none;
border-bottom: 1px solid #fc5;
border-radius: .2em;
padding: .2em .3em;
}
html.z #wrap th {
background: #222;
}
html.bz #wrap th {
background: #223;
}
html.z #wrap tr:hover td {
background: #000;
}
copyparty/web/rups.html (new file, 50 lines)
@@ -0,0 +1,50 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>{{ s_doctitle }}</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=0.8">
<meta name="robots" content="noindex, nofollow">
<meta name="theme-color" content="#{{ tcolor }}">
<link rel="stylesheet" media="screen" href="{{ r }}/.cpr/rups.css?_={{ ts }}">
<link rel="stylesheet" media="screen" href="{{ r }}/.cpr/ui.css?_={{ ts }}">
{{ html_head }}
</head>
<body>
<div id="wrap">
<a href="#" id="re">refresh</a>
<a href="{{ r }}/?h">control-panel</a>
&nbsp; Filter: <input type="text" id="filter" size="20" placeholder="documents/passwords" />
&nbsp; <span id="hits"></span>
<table id="tab"><thead><tr>
<th>size</th>
<th>who</th>
<th>when</th>
<th>age</th>
<th>dir</th>
<th>file</th>
</tr></thead><tbody id="tb"></tbody></table>
</div>
<a href="#" id="repl">π</a>
<script>
var SR="{{ r }}",
lang="{{ lang }}",
dfavico="{{ favico }}";
var STG = window.localStorage;
document.documentElement.className = (STG && STG.cpp_thm) || "{{ this.args.theme }}";
</script>
<script src="{{ r }}/.cpr/util.js?_={{ ts }}"></script>
<script>var V={{ v }};</script>
<script src="{{ r }}/.cpr/rups.js?_={{ ts }}"></script>
{%- if js %}
<script src="{{ js }}_={{ ts }}"></script>
{%- endif %}
</body>
</html>
copyparty/web/rups.js (new file, 66 lines)
@@ -0,0 +1,66 @@
function render() {
var ups = V.ups, now = V.now, html = [];
ebi('filter').value = V.filter;
ebi('hits').innerHTML = 'showing ' + ups.length + ' files';
for (var a = 0; a < ups.length; a++) {
var f = ups[a],
vsp = vsplit(f.vp.split('?')[0]),
dn = esc(uricom_dec(vsp[0])),
fn = esc(uricom_dec(vsp[1])),
at = f.at,
td = now - f.at,
ts = !at ? '(?)' : unix2iso(at),
sa = !at ? '(?)' : td > 60 ? shumantime(td) : (td + 's'),
sz = ('' + f.sz).replace(/\B(?=(\d{3})+(?!\d))/g, " ");
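            // (the regex above inserts a thousands-separator space, e.g.
            //  ('' + 1234567).replace(/\B(?=(\d{3})+(?!\d))/g, " ") => "1 234 567")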
html.push('<tr><td>' + sz +
'</td><td>' + f.ip +
'</td><td>' + ts +
'</td><td>' + sa +
'</td><td><a href="' + vsp[0] + '">' + dn +
'</a></td><td><a href="' + f.vp + '">' + fn +
'</a></td></tr>');
}
if (!ups.length) {
var t = V.filter ? ' matching the filter' : '';
html = ['<tr><td colspan="6">there are no uploads' + t + '</td></tr>'];
}
ebi('tb').innerHTML = html.join('');
}
render();
var ti;
function ask(e) {
ev(e);
clearTimeout(ti);
ebi('hits').innerHTML = 'Loading...';
var xhr = new XHR(),
filter = unsmart(ebi('filter').value);
hist_replace(get_evpath().split('?')[0] + '?ru&filter=' + uricom_enc(filter));
xhr.onload = xhr.onerror = function () {
try {
V = JSON.parse(this.responseText)
}
catch (ex) {
ebi('tb').innerHTML = '<tr><td colspan="6">failed to decode server response as json: <pre>' + esc(this.responseText) + '</pre></td></tr>';
return;
}
render();
};
xhr.open('GET', SR + '/?ru&j&filter=' + uricom_enc(filter), true);
xhr.send();
}
ebi('re').onclick = ask;
ebi('filter').oninput = function () {
clearTimeout(ti);
ti = setTimeout(ask, 500);
ebi('hits').innerHTML = '...';
};
ebi('filter').onkeydown = function (e) {
if (('' + e.key).endsWith('Enter'))
ask();
};
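the filter box above is a plain debounce: every keystroke resets a 500 ms timer, so only the final value triggers a request. the same idea in isolation (a generic sketch; `debounce` is not a copyparty helper):

    function debounce(fn, ms) {
        var ti = null;
        return function () {
            clearTimeout(ti);         // drop the pending call, if any
            ti = setTimeout(fn, ms);  // and re-arm the timer
        };
    }
    // usage: ebi('filter').oninput = debounce(ask, 500);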
@@ -44,9 +44,10 @@ a {
bottom: .25em; bottom: .25em;
left: .2em; left: .2em;
} }
-table {
+#wrap table {
    border-collapse: collapse;
    position: relative;
+    margin-top: 2em;
}
th { th {
top: -1px; top: -1px;
@@ -62,6 +63,14 @@ th {
#wrap td+td+td+td+td+td+td+td { #wrap td+td+td+td+td+td+td+td {
font-family: var(--font-mono), monospace, monospace; font-family: var(--font-mono), monospace, monospace;
} }
#wrap th:first-child,
#wrap td:first-child {
border-radius: .5em 0 0 .5em;
}
#wrap th:last-child,
#wrap td:last-child {
border-radius: 0 .5em .5em 0;
}
@@ -81,3 +90,6 @@ html.bz {
color: #bbd; color: #bbd;
background: #11121d; background: #11121d;
} }
html.bz th {
background: #223;
}
@@ -6,6 +6,7 @@
<title>{{ s_doctitle }}</title> <title>{{ s_doctitle }}</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=0.8"> <meta name="viewport" content="width=device-width, initial-scale=0.8">
<meta name="robots" content="noindex, nofollow">
<meta name="theme-color" content="#{{ tcolor }}"> <meta name="theme-color" content="#{{ tcolor }}">
<link rel="stylesheet" media="screen" href="{{ r }}/.cpr/shares.css?_={{ ts }}"> <link rel="stylesheet" media="screen" href="{{ r }}/.cpr/shares.css?_={{ ts }}">
<link rel="stylesheet" media="screen" href="{{ r }}/.cpr/ui.css?_={{ ts }}"> <link rel="stylesheet" media="screen" href="{{ r }}/.cpr/ui.css?_={{ ts }}">
@@ -14,8 +15,8 @@
<body> <body>
<div id="wrap"> <div id="wrap">
<a id="a" href="{{ r }}/?shares" class="af">refresh</a> <a href="{{ r }}/?shares">refresh</a>
<a id="a" href="{{ r }}/?h" class="af">control-panel</a> <a href="{{ r }}/?h">control-panel</a>
<span>axs = perms (read,write,move,delet)</span> <span>axs = perms (read,write,move,delet)</span>
<span>nf = numFiles (0=dir)</span> <span>nf = numFiles (0=dir)</span>
@@ -58,9 +59,11 @@
{% if not rows %} {% if not rows %}
(you don't have any active shares btw) (you don't have any active shares btw)
{% endif %} {% endif %}
</div>
<a href="#" id="repl">π</a>
<script> <script>
var SR = {{ r|tojson }}, var SR="{{ r }}",
shr="{{ shr }}", shr="{{ shr }}",
lang="{{ lang }}", lang="{{ lang }}",
dfavico="{{ favico }}"; dfavico="{{ favico }}";
@@ -157,6 +157,7 @@
<blockquote id="ad">enabling no304 will disable all caching; try this if k304 wasn't enough. This will waste a huge amount of network traffic!</blockquote></li> <blockquote id="ad">enabling no304 will disable all caching; try this if k304 wasn't enough. This will waste a huge amount of network traffic!</blockquote></li>
{% endif %} {% endif %}
<li><a id="af" href="{{ r }}/?ru">show recent uploads</a></li>
<li><a id="k" href="{{ r }}/?reset" class="r" onclick="localStorage.clear();return true">reset client settings</a></li> <li><a id="k" href="{{ r }}/?reset" class="r" onclick="localStorage.clear();return true">reset client settings</a></li>
</ul> </ul>
@@ -167,7 +168,7 @@
{%- endif %} {%- endif %}
<script> <script>
var SR = {{ r|tojson }}, var SR="{{ r }}",
lang="{{ lang }}", lang="{{ lang }}",
dfavico="{{ favico }}"; dfavico="{{ favico }}";
@@ -38,6 +38,7 @@ var Ls = {
"ac1": "skru på no304", "ac1": "skru på no304",
"ad1": "no304 stopper all bruk av cache. Hvis ikke k304 var nok, prøv denne. Vil mangedoble dataforbruk!", "ad1": "no304 stopper all bruk av cache. Hvis ikke k304 var nok, prøv denne. Vil mangedoble dataforbruk!",
"ae1": "utgående:", "ae1": "utgående:",
"af1": "vis nylig opplastede filer",
}, },
"eng": { "eng": {
"d2": "shows the state of all active threads", "d2": "shows the state of all active threads",
@@ -88,6 +89,7 @@ var Ls = {
"ac1": "开启 k304", "ac1": "开启 k304",
"ad1": "启用 no304 将禁用所有缓存;如果 k304 不够,可以尝试此选项。这将消耗大量的网络流量!", //m "ad1": "启用 no304 将禁用所有缓存;如果 k304 不够,可以尝试此选项。这将消耗大量的网络流量!", //m
"ae1": "正在下载:", //m "ae1": "正在下载:", //m
"af1": "显示最近上传的文件", //m
} }
}; };
@@ -9,7 +9,7 @@
<meta name="theme-color" content="#{{ tcolor }}"> <meta name="theme-color" content="#{{ tcolor }}">
<link rel="stylesheet" media="screen" href="{{ r }}/.cpr/splash.css?_={{ ts }}"> <link rel="stylesheet" media="screen" href="{{ r }}/.cpr/splash.css?_={{ ts }}">
<link rel="stylesheet" media="screen" href="{{ r }}/.cpr/ui.css?_={{ ts }}"> <link rel="stylesheet" media="screen" href="{{ r }}/.cpr/ui.css?_={{ ts }}">
<style>ul{padding-left:1.3em}li{margin:.4em 0}</style> <style>ul{padding-left:1.3em}li{margin:.4em 0}.txa{float:right;margin:0 0 0 1em}</style>
{{ html_head }} {{ html_head }}
</head> </head>
@@ -31,15 +31,22 @@
<br /> <br />
<span class="os win lin mac">placeholders:</span> <span class="os win lin mac">placeholders:</span>
<span class="os win"> <span class="os win">
{% if accs %}<code><b>{{ pw }}</b></code>=password, {% endif %}<code><b>W:</b></code>=mountpoint {% if accs %}<code><b id="pw0">{{ pw }}</b></code>=password, {% endif %}<code><b>W:</b></code>=mountpoint
</span> </span>
<span class="os lin mac"> <span class="os lin mac">
{% if accs %}<code><b>{{ pw }}</b></code>=password, {% endif %}<code><b>mp</b></code>=mountpoint {% if accs %}<code><b id="pw0">{{ pw }}</b></code>=password, {% endif %}<code><b>mp</b></code>=mountpoint
</span> </span>
<a href="#" id="setpw">use real password</a>
</p> </p>
{% if args.idp_h_usr %}
<p style="line-height:2em"><b>WARNING:</b> this server is using IdP-based authentication, so this stuff may not work as advertised. Depending on server config, these commands can probably only be used to access areas which don't require authentication, unless you auth using any non-IdP accounts defined in the copyparty config. Please see <a href="https://github.com/9001/copyparty/blob/hovudstraum/docs/idp.md#connecting-webdav-clients">the IdP docs</a></p>
{% endif %}
{% if not args.no_dav %} {% if not args.no_dav %}
<h1>WebDAV</h1> <h1>WebDAV</h1>
@@ -53,7 +60,6 @@
{% if s %} {% if s %}
<li>running <code>rclone mount</code> on LAN (or just dont have valid certificates)? add <code>--no-check-certificate</code></li> <li>running <code>rclone mount</code> on LAN (or just dont have valid certificates)? add <code>--no-check-certificate</code></li>
{% endif %} {% endif %}
<li>running <code>rclone mount</code> as root? add <code>--allow-other</code></li>
<li>old version of rclone? replace all <code>=</code> with <code>&nbsp;</code> (space)</li> <li>old version of rclone? replace all <code>=</code> with <code>&nbsp;</code> (space)</li>
</ul> </ul>
@@ -137,7 +143,6 @@
{% if args.ftps %} {% if args.ftps %}
<li>running on LAN (or just dont have valid certificates)? add <code>no_check_certificate=true</code> to the config command</li> <li>running on LAN (or just dont have valid certificates)? add <code>no_check_certificate=true</code> to the config command</li>
{% endif %} {% endif %}
<li>running <code>rclone mount</code> as root? add <code>--allow-other</code></li>
<li>old version of rclone? replace all <code>=</code> with <code>&nbsp;</code> (space)</li> <li>old version of rclone? replace all <code>=</code> with <code>&nbsp;</code> (space)</li>
</ul> </ul>
<p>if you want to use the native FTP client in windows instead (please dont), press <code>win+R</code> and run this command:</p> <p>if you want to use the native FTP client in windows instead (please dont), press <code>win+R</code> and run this command:</p>
@@ -231,11 +236,65 @@
<div class="os win">
<h1>ShareX</h1>
<p>to upload screenshots using ShareX <a href="https://github.com/ShareX/ShareX/releases/tag/v12.4.1">v12</a> or <a href="https://getsharex.com/">v15+</a>, save this as <code>copyparty.sxcu</code> and run it:</p>
<pre class="dl" name="copyparty.sxcu">
{ "Name": "copyparty",
"RequestURL": "http{{ s }}://{{ ep }}/{{ rvp }}",
"Headers": {
{% if accs %}"pw": "<b>{{ pw }}</b>",{% endif %}
"accept": "url"
},
"DestinationType": "ImageUploader, TextUploader, FileUploader",
"FileFormName": "f" }
</pre>
</div>
<div class="os mac">
<h1>ishare</h1>
<p>to upload screenshots using <a href="https://isharemac.app/">ishare</a>, save this as <code>copyparty.iscu</code> and run it:</p>
<pre class="dl" name="copyparty.iscu">
{ "Name": "copyparty",
"RequestURL": "http{{ s }}://{{ ep }}/{{ rvp }}",
"Headers": {
{% if accs %}"pw": "<b>{{ pw }}</b>",{% endif %}
"accept": "json"
},
"ResponseURL": "{{ '{{fileurl}}' }}",
"FileFormName": "f" }
</pre>
</div>
<div class="os lin">
<h1>flameshot</h1>
<p>to upload screenshots using <a href="https://flameshot.org/">flameshot</a>, save this as <code>flameshot.sh</code> and run it:</p>
<pre class="dl" name="flameshot.sh">
#!/bin/bash
pw="<b>{{ pw }}</b>"
url="http{{ s }}://{{ ep }}/{{ rvp }}"
filename="$(date +%Y-%m%d-%H%M%S).png"
flameshot gui -s -r | curl -sT- "$url$filename?want=url&pw=$pw" | xsel -ib
</pre>
</div>
</div> </div>
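note the nested braces in the ishare config above: `{{ '{{fileurl}}' }}` makes Jinja emit the literal text `{{fileurl}}` instead of trying to resolve it server-side; that literal placeholder is apparently what ishare substitutes from the upload response.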
<a href="#" id="repl">π</a> <a href="#" id="repl">π</a>
<script> <script>
var SR = {{ r|tojson }}, var SR="{{ r }}",
lang="{{ lang }}", lang="{{ lang }}",
dfavico="{{ favico }}"; dfavico="{{ favico }}";
@@ -15,6 +15,21 @@ for (var a = 0; a < oa.length; a++) {
oa[a].innerHTML = html.replace(rd, '$1').replace(/[ \r\n]+$/, '').replace(/\r?\n/g, '<br />'); oa[a].innerHTML = html.replace(rd, '$1').replace(/[ \r\n]+$/, '').replace(/\r?\n/g, '<br />');
} }
function add_dls() {
oa = QSA('pre.dl');
for (var a = 0; a < oa.length; a++) {
var an = 'ta' + a,
o = ebi(an) || mknod('a', an, 'download');
oa[a].setAttribute('id', 'tx' + a);
oa[a].parentNode.insertBefore(o, oa[a]);
o.setAttribute('download', oa[a].getAttribute('name'));
o.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURIComponent(oa[a].innerText));
clmod(o, 'txa', 1);
}
}
add_dls();
oa = QSA('.ossel a'); oa = QSA('.ossel a');
for (var a = 0; a < oa.length; a++) for (var a = 0; a < oa.length; a++)
@@ -40,3 +55,21 @@ function setos(os) {
} }
setos(WINDOWS ? 'win' : LINUX ? 'lin' : MACOS ? 'mac' : 'idk'); setos(WINDOWS ? 'win' : LINUX ? 'lin' : MACOS ? 'mac' : 'idk');
ebi('setpw').onclick = function (e) {
ev(e);
modal.prompt('password:', '', function (v) {
if (!v)
return;
var pw0 = ebi('pw0').innerHTML,
oa = QSA('b');
for (var a = 0; a < oa.length; a++)
if (oa[a].innerHTML == pw0)
oa[a].textContent = v;
add_dls();
});
}
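the `add_dls` function above turns each `<pre class="dl">` into a working download link by pointing an `<a download>` at a data: URI, so the generated configs can be saved without a server round-trip. the trick in isolation (a standalone sketch, not copyparty code):

    function textToDownloadLink(text, filename) {
        var a = document.createElement('a');
        a.textContent = 'download';
        a.setAttribute('download', filename);  // suggested name for the save-dialog
        a.href = 'data:text/plain;charset=utf-8,' + encodeURIComponent(text);
        return a;
    }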
@@ -695,8 +695,9 @@ function Donut(uc, st) {
} }
if (++r.tc >= 10) { if (++r.tc >= 10) {
+            var s = r.eta === null ? 'paused' : r.eta > 60 ? shumantime(r.eta) : (r.eta + 's');
+
            wintitle("{0}%, {1}, #{2}, ".format(
-                f2f(v * 100 / t, 1), shumantime(r.eta), st.files.length - st.nfile.upload), true);
+                f2f(v * 100 / t, 1), s, st.files.length - st.nfile.upload), true);
            r.tc = 0;
        }
@@ -880,7 +881,7 @@ function up2k_init(subtle) {
bcfg_bind(uc, 'turbo', 'u2turbo', turbolvl > 1, draw_turbo); bcfg_bind(uc, 'turbo', 'u2turbo', turbolvl > 1, draw_turbo);
bcfg_bind(uc, 'datechk', 'u2tdate', turbolvl < 3, null); bcfg_bind(uc, 'datechk', 'u2tdate', turbolvl < 3, null);
bcfg_bind(uc, 'az', 'u2sort', u2sort.indexOf('n') + 1, set_u2sort); bcfg_bind(uc, 'az', 'u2sort', u2sort.indexOf('n') + 1, set_u2sort);
bcfg_bind(uc, 'hashw', 'hashw', !!WebAssembly && (!subtle || !CHROME || MOBILE || VCHROME >= 107), set_hashw); bcfg_bind(uc, 'hashw', 'hashw', !!WebAssembly && !(CHROME && MOBILE) && (!subtle || !CHROME), set_hashw);
bcfg_bind(uc, 'upnag', 'upnag', false, set_upnag); bcfg_bind(uc, 'upnag', 'upnag', false, set_upnag);
bcfg_bind(uc, 'upsfx', 'upsfx', false, set_upsfx); bcfg_bind(uc, 'upsfx', 'upsfx', false, set_upsfx);
@@ -1359,7 +1360,15 @@ function up2k_init(subtle) {
draw_each = good_files.length < 50; draw_each = good_files.length < 50;
        if (WebAssembly && !hws.length) {
-            for (var a = 0; a < Math.min(navigator.hardwareConcurrency || 4, 16); a++)
+            var nw = Math.min(navigator.hardwareConcurrency || 4, 16);
+            if (CHROME) {
+                // chrome-bug 383568268 // #124
+                nw = Math.max(1, (nw > 4 ? 4 : (nw - 1)));
+                nw = (subtle && !MOBILE && nw > 2) ? 2 : nw;
+            }
+
+            for (var a = 0; a < nw; a++)
                hws.push(new Worker(SR + '/.cpr/w.hash.js?_=' + TS));

            if (!subtle)
@@ -1963,32 +1972,84 @@ function up2k_init(subtle) {
            nchunk = 0,
            chunksize = get_chunksize(t.size),
            nchunks = Math.ceil(t.size / chunksize),
+            csz_mib = chunksize / 1048576,
+            tread = t.t_hashing,
+            cache_buf = null,
+            cache_car = 0,
+            cache_cdr = 0,
+            hashers = 0,
            hashtab = {};

+        // resolving subtle.digest w/o worker takes 1sec on blur if the actx hack breaks
+        var use_workers = hws.length && !hws_ng && uc.hashw && (nchunks > 1 || document.visibilityState == 'hidden'),
+            hash_par = (!subtle && !use_workers) ? 0 : csz_mib < 48 ? 2 : csz_mib < 96 ? 1 : 0;
+
        pvis.setab(t.n, nchunks);
        pvis.move(t.n, 'bz');

-        if (hws.length && !hws_ng && uc.hashw && (nchunks > 1 || document.visibilityState == 'hidden'))
-            // resolving subtle.digest w/o worker takes 1sec on blur if the actx hack breaks
+        if (use_workers)
            return wexec_hash(t, chunksize, nchunks);
        var segm_next = function () {
            if (nchunk >= nchunks || bpend)
                return false;

-            var reader = new FileReader(),
-                nch = nchunk++,
+            var nch = nchunk++,
                car = nch * chunksize,
                cdr = Math.min(chunksize + car, t.size);

            st.bytes.hashed += cdr - car;
            st.etac.h++;

-            var orz = function (e) {
-                bpend--;
-                segm_next();
-                hash_calc(nch, e.target.result);
-            }
+            if (MOBILE && CHROME && st.slow_io === null && nch == 1 && cdr - car >= 1024 * 512) {
+                var spd = Math.floor((cdr - car) / (Date.now() + 1 - tread));
+                st.slow_io = spd < 40 * 1024;
+                console.log('spd {0}, slow: {1}'.format(spd, st.slow_io));
+            }
+
+            if (cdr <= cache_cdr && car >= cache_car) {
+                try {
+                    var ofs = car - cache_car,
+                        ofs2 = ofs + (cdr - car),
+                        buf = cache_buf.subarray(ofs, ofs2);
+
+                    hash_calc(nch, buf);
+                }
+                catch (ex) {
+                    vis_exh(ex + '', 'up2k.js', '', '', ex);
+                }
+                return;
+            }
+
+            var reader = new FileReader(),
+                fr_cdr = cdr;
+
+            if (st.slow_io) {
+                var step = cdr - car,
+                    tgt = 48 * 1048576;
+
+                while (step && fr_cdr - car < tgt)
+                    fr_cdr += step;
+                if (fr_cdr - car > tgt && fr_cdr > cdr)
+                    fr_cdr -= step;
+                if (fr_cdr > t.size)
+                    fr_cdr = t.size;
+            }
+
+            var orz = function (e) {
+                bpend = 0;
+                var buf = e.target.result;
+                if (fr_cdr > cdr) {
+                    cache_buf = new Uint8Array(buf);
+                    cache_car = car;
+                    cache_cdr = fr_cdr;
+                    buf = cache_buf.subarray(0, cdr - car);
+                }
+                if (hashers < hash_par)
+                    segm_next();
+
+                hash_calc(nch, buf);
+            };

            reader.onload = function (e) {
                try { orz(e); } catch (ex) { vis_exh(ex + '', 'up2k.js', '', '', ex); }
            };
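in short, the new segm_next reads ahead: the MOBILE/CHROME probe times the first sizable read and flags the device as slow_io below roughly 40 MB/s; on such devices every FileReader call pays a large fixed latency, so one big contiguous read (targeting 48 MiB) is cached and chunk-sized subarray views are handed to the hasher from that cache. a standalone sketch of the idea (simplified; not the real up2k state machine):

    var TGT = 48 * 1048576;  // target read size; amortizes the per-read latency

    function makeChunkReader(file, chunksize) {
        var buf = null, car = 0, cdr = 0;  // cached byte range [car, cdr)
        return async function (nch) {
            var a = nch * chunksize,
                b = Math.min(a + chunksize, file.size);
            if (!buf || a < car || b > cdr) {
                car = a;
                cdr = Math.min(file.size, Math.max(b, a + TGT));
                buf = new Uint8Array(await file.slice(car, cdr).arrayBuffer());
            }
            return buf.subarray(a - car, b - car);  // a view into the cache; no copy
        };
    }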
@@ -2015,17 +2076,20 @@ function up2k_init(subtle) {
toast.err(0, 'y o u b r o k e i t\nfile: ' + esc(t.name + '') + '\nerror: ' + err); toast.err(0, 'y o u b r o k e i t\nfile: ' + esc(t.name + '') + '\nerror: ' + err);
}; };
-            bpend++;
-            reader.readAsArrayBuffer(t.fobj.slice(car, cdr));
+            bpend = 1;
+            tread = Date.now();
+            reader.readAsArrayBuffer(t.fobj.slice(car, fr_cdr));

            return true;
        };
        var hash_calc = function (nch, buf) {
+            hashers++;
            var orz = function (hashbuf) {
                var hslice = new Uint8Array(hashbuf).subarray(0, 33),
                    b64str = buf2b64(hslice);

+                hashers--;
                hashtab[nch] = b64str;
                t.hash.push(nch);
                pvis.hashed(t);
@@ -2408,6 +2472,9 @@ function up2k_init(subtle) {
                    msg = 'done';

                if (t.postlist.length) {
                    if (t.rechecks && QS('#opa_del.act'))
                        toast.inf(30, L.u_started, L.u_unpt);

                    var arr = st.todo.upload,
                        sort = arr.length && arr[arr.length - 1].nfile > t.n;
@@ -2518,10 +2585,15 @@ function up2k_init(subtle) {
            if (!t.rechecks && (err_pend || err_srcb)) {
                t.rechecks = 0;
                t.want_recheck = true;
                if (st.busy.upload.length || st.busy.handshake.length || st.bytes.uploaded) {
                    err = L.u_dupdefer;
                    cls = 'defer';
                }
            }
            if (err_pend) {
                err += ' <a href="#" onclick="toast.inf(60, L.ue_ab);" class="fsearch_explain">(' + L.u_expl + ')</a>';
            }
        }
        if (err != "") {
            if (!t.t_uploading)
@@ -3027,7 +3099,7 @@ function up2k_init(subtle) {
                new_state = false;
                fixed = true;
            }
            if (new_state === undefined && preferred === undefined)
                new_state = can_write ? false : have_up2k_idx ? true : undefined;
        }


@@ -29,7 +29,7 @@ var wah = '',
    HTTPS = ('' + location).indexOf('https:') === 0,
    TOUCH = 'ontouchstart' in window,
    MOBILE = TOUCH,
    CHROME = !!window.chrome, // safari=false
    VCHROME = CHROME ? 1 : 0,
    IE = /Trident\//.test(navigator.userAgent),
    FIREFOX = ('netscape' in window) && / rv:/.test(navigator.userAgent),
@@ -886,8 +886,11 @@ if (window.Number && Number.isFinite)
function f2f(val, nd) {
    // 10.toFixed(1) returns 10.00 for certain values of 10
    if (!isNum(val)) {
        val = parseFloat(val);
        if (!isNum(val))
            val = 999;
    }
    val = (val * Math.pow(10, nd)).toFixed(0).split('.')[0];
    return nd ? (val.slice(0, -nd) || '0') + '.' + val.slice(-nd) : val;
}


@@ -25,6 +25,9 @@
## [`changelog.md`](changelog.md)
* occasionally grabbed from github release notes

## [`synology-dsm.md`](synology-dsm.md)
* running copyparty on a synology nas

## [`devnotes.md`](devnotes.md)
* technical stuff


@@ -1,3 +1,152 @@
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1223-0005 `v1.16.7` an idp fix for xmas
# ☃️🎄 **there is still time** 🎅🎁
❄️❄️❄️ please [enjoy some appropriate music](https://a.ocv.me/pub/demo/music/.bonus/#af-55d4554d) -- you'll probably like this more than the idp thing honestly ❄️❄️❄️
## 🧪 new features
* more improvements to the recent-uploads feature 87598dcd
* move html rendering to clientside
* any changes to the filter-text applies in real-time
* loads 50% faster, reduces server-load by 30%
* inhibits search engines from indexing it
## 🩹 bugfixes
* using idp without e2d could mess with uploads dd6e9ea7
* u2c (commandline uploader): fix window title 946a8c5b
* mDNS/SSDP: fix incorrect log colors when multiple primary IPs are lost 552897ab
## 🔧 other changes
* ui: make it more obvious that the volume-control is a volume-control 7f044372
* copyparty.exe: update deps (jinja2, markupsafe, pyinstaller) c0dacbc4
* improve safety of custom plugins 988a7223
* if you've made your own plugins which expect certain values (host-header, filekeys) to be html-safe, then you'll want to upgrade
* also fixes rss-feed xml if password contains special characters
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1219-0037 `v1.16.6` merry \x58mas
# ☃️🎄 **it is time** 🎅🎁
❄️❄️❄️ please [enjoy some appropriate music](https://a.ocv.me/pub/demo/music/.bonus/#af-55d4554d) (trust me on this one, you won't regret it) ❄️❄️❄️
## 🧪 new features
* [list of recent uploads](https://a.ocv.me/?ru) eaa4b04a
* new button in the controlpanel; can be disabled with `--no-ups-page`
* only users with the dot-permission can see dotfiles
* only admins can see uploader-ip and upload-times
* enable `--ups-when` to let all users see upload-times
* #125 log decoded request-URLs 73f7249c
* non-ascii filenames would make the accesslog a wall of `%E5%B9%BB%E6%83%B3%E9%83%B7` so print [the decoded URL](https://github.com/user-attachments/assets/9d411183-30f3-4cb2-a880-84cf18011183) in addition to the original one, which is left as-is for debugging purposes
## 🩹 bugfixes
* #126 improve dotfile handling 4c4e48ba
* was impossible to delete a folder which contained hidden files if the user did not have the permission to see hidden files
* would also affect moving, renaming, copying folders, in which case the dotfiles would not be carried over to the new location
* now, dotfiles are always deleted, and always moved/copied into a new destination, on the condition that this is safe -- if the user has the dotfile permission in the target location but not in the source location, the dotfiles will be left behind to avoid accidentally making them browsable
* ux: cosmetic eta/idle-timer fixes 01a3eb29
## 🔧 other changes
* warn on ambiguous comments in config files da5ad2ab
* avoid writing mojibake to the log 3051b131
* use `\x`-encoding for unprintable text
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1211-2236 `v1.16.5` 4chrome
## 🧪 new features
* #124 add workaround for a chrome bug (crash during upload) 24ce46b3
* chrome and chromium-based browsers could OOM
* https://issues.chromium.org/issues/383568268
* #122 "hybrid IdP", regular users can still auth while [IdP](https://github.com/9001/copyparty#identity-providers) is enabled 64501fd7
* previously, enabling IdP would entirely disable password-based login
* now, password-auth is attempted for requests without a valid IdP header
## 🩹 bugfixes
* the terminal window title would only change if `--no-ansi` was specified, which is exactly the opposite of what it should be (and now is) doing db3c0b09
## 🔧 other changes
* mDNS: better log messages when several IPs are added/removed a49bf81f
* webdeps: update dompurify 06868606
----
this release includes a build of [copyparty-winpe64.exe](https://github.com/9001/copyparty/releases/download/v1.16.5/copyparty-winpe64.exe) since the last one was [almost a year ago](https://github.com/9001/copyparty/releases/tag/v1.10.1)
* winpe64.exe is only for *very* specific usecases, you almost definitely *do not* want to download it, please just grab the regular [copyparty.exe](https://github.com/9001/copyparty/releases/latest/download/copyparty.exe) instead (works on all 64bit machines running win8 or newer)
* the only difference between winpe64.exe and [copyparty32.exe](https://github.com/9001/copyparty/releases/latest/download/copyparty32.exe) is that winpe64.exe works in the win7x64 PE (rescue-env), which makes it *almost* entirely useless, and every bit as dangerous to use as copyparty32.exe
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1207-0024 `v1.16.4` ux is hard
## 🧪 new features
* improve the upload ui so it explains how to abort an unfinished upload when someone uploads to the wrong folder by accident be6afe2d
* also reduces serverload slightly when cloning an incoming file to multiple destinations
* u2c (commandline uploader): windows improvements 91637800
* now supports globbing (filename wildcards) on windows
* progressbar in the windows taskbar (requires conemu or the "new windows terminal")
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1204-0003 `v1.16.3` 120%
## 🧪 new features
* #120 add option `--srch-excl` and volflag `srch_excl` for excluding certain paths from search results 697a4fa8
* mDNS: add workaround for https://github.com/avahi/avahi/issues/379 6c1cf68b 94d1924f
* Avahi mDNS Reflection, sometimes used in intricate LAN setups, doesn't understand NSEC records and corrupts them
* the workaround makes copyparty able to read the corrupted packets, but clients without a similar workaround will require either `--zm4` or `--zm6` so copyparty doesn't include the usual NSEC records
* this is mentioned in a very loud warning in the logs when necessary
* mDNS: option to silently ignore buggy devices instead of spamming the log with parser errors 395af051
* webdav: support listing unmapped root with infinite recursion (Depth:0) 21a3f369
* embed current sort config into media URLs (gallery/music) 0f257c93 4cfdc4c5 01670827
* ensures that anyone clicking your link will see the files in the same order as you
  * can be configured serverside (`--hsortn`, volflag `hsortn`) and clientside (`#sort` in settings)
* URL and UI options to disable checksum calculation of PUT, bup, basic uploads c5a000d2
* also allows [choosing either md5, sha1, sha256, or blake2](https://github.com/9001/copyparty/blob/hovudstraum/docs/devnotes.md#write) instead of the default sha512
* can give uploads a nice speed boost when copyparty is running on a potato
## 🩹 bugfixes
* webdav: more correct login challenge 2ce82339
* the previous behavior could make some clients reluctant to send the password
* #120 forget metadata of all files (including uploads) when shadowed d168b2ac
* thanks to @Gremious for all the debugging to narrow this down!
* #120 drop volume caches if relevant config is changed (mainly indexing filters) 2f83c6c7
* #121 couldn't access arbitrary toplevel files from accounts with `h` permission 1f5f42f2
## 🔧 other changes
* exclude thumbnails from accesslog by default 9082c470
* filesearch: show a final summary of time-elapsed and average hashing speed 8a631f04
* improve phrasing of debug messages during indexing at startup 127f414e
* `--license` no longer depends on opensource.org at build time 33c4ccff
* update deps 6cedcfbf
* copyparty.exe: python 3.12.7 => 3.12.8
* webdeps: hashwasm, dompurify
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1123-2336 `v1.16.2` webdav upload fix

docs/chunksizes.py (new executable file, 48 lines)

@@ -0,0 +1,48 @@
#!/usr/bin/env python3
# there's far better ways to do this but its 4am and i dont wanna think
# just pypy it my dude
import math


def humansize(sz, terse=False):
    for unit in ["B", "KiB", "MiB", "GiB", "TiB"]:
        if sz < 1024:
            break
        sz /= 1024.0

    ret = " ".join([str(sz)[:4].rstrip("."), unit])

    if not terse:
        return ret

    return ret.replace("iB", "").replace(" ", "")


def up2k_chunksize(filesize):
    chunksize = 1024 * 1024
    stepsize = 512 * 1024
    while True:
        for mul in [1, 2]:
            nchunks = math.ceil(filesize * 1.0 / chunksize)
            if nchunks <= 256 or (chunksize >= 32 * 1024 * 1024 and nchunks <= 4096):
                return chunksize

            chunksize += stepsize
            stepsize *= mul


def main():
    prev = 1048576
    n = n0 = 524288
    while True:
        csz = up2k_chunksize(n)
        if csz > prev:
            print(f"| {n-n0:>18_} | {humansize(n-n0):>8} | {prev:>13_} | {humansize(prev):>8} |".replace("_", " "))
            prev = csz

        n += n0


main()
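running the script prints one row each time the enforced chunksize steps up; its output is the chunk-size table in devnotes.md further down. Note that `main()` never terminates on its own; stop it with ctrl-c once it has passed the filesizes you care about.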


@@ -6,6 +6,7 @@
* [up2k](#up2k) - quick outline of the up2k protocol
* [why not tus](#why-not-tus) - I didn't know about [tus](https://tus.io/)
* [why chunk-hashes](#why-chunk-hashes) - a single sha512 would be better, right?
* [list of chunk-sizes](#list-of-chunk-sizes) - specific chunksizes are enforced
* [hashed passwords](#hashed-passwords) - regarding the curious decisions
* [http api](#http-api)
    * [read](#read)
@@ -95,6 +96,44 @@ hashwasm would solve the streaming issue but reduces hashing speed for sha512 (x
* blake2 might be a better choice since xxh is non-cryptographic, but that gets ~15 MiB/s on slower androids
### list of chunk-sizes
specific chunksizes are enforced depending on total filesize
each row lists the largest filesize which will use its listed chunksize; a 512 MiB file will use a chunksize of 2 MiB, but if the file is one byte larger than 512 MiB, the chunksize becomes 3 MiB
for the purpose of performance (or dodging arbitrary proxy limitations), it is possible to upload combined and/or partial chunks using stitching and/or subchunks respectively
| filesize (bytes) | filesize | chunksize (bytes) | chunksize |
| -----------------: | -------: | ------------: | ------: |
| 268 435 456 | 256 MiB | 1 048 576 | 1.0 MiB |
| 402 653 184 | 384 MiB | 1 572 864 | 1.5 MiB |
| 536 870 912 | 512 MiB | 2 097 152 | 2.0 MiB |
| 805 306 368 | 768 MiB | 3 145 728 | 3.0 MiB |
| 1 073 741 824 | 1.0 GiB | 4 194 304 | 4.0 MiB |
| 1 610 612 736 | 1.5 GiB | 6 291 456 | 6.0 MiB |
| 2 147 483 648 | 2.0 GiB | 8 388 608 | 8.0 MiB |
| 3 221 225 472 | 3.0 GiB | 12 582 912 | 12 MiB |
| 4 294 967 296 | 4.0 GiB | 16 777 216 | 16 MiB |
| 6 442 450 944 | 6.0 GiB | 25 165 824 | 24 MiB |
| 137 438 953 472 | 128 GiB | 33 554 432 | 32 MiB |
| 206 158 430 208 | 192 GiB | 50 331 648 | 48 MiB |
| 274 877 906 944 | 256 GiB | 67 108 864 | 64 MiB |
| 412 316 860 416 | 384 GiB | 100 663 296 | 96 MiB |
| 549 755 813 888 | 512 GiB | 134 217 728 | 128 MiB |
| 824 633 720 832 | 768 GiB | 201 326 592 | 192 MiB |
| 1 099 511 627 776 | 1.0 TiB | 268 435 456 | 256 MiB |
| 1 649 267 441 664 | 1.5 TiB | 402 653 184 | 384 MiB |
| 2 199 023 255 552 | 2.0 TiB | 536 870 912 | 512 MiB |
| 3 298 534 883 328 | 3.0 TiB | 805 306 368 | 768 MiB |
| 4 398 046 511 104 | 4.0 TiB | 1 073 741 824 | 1.0 GiB |
| 6 597 069 766 656 | 6.0 TiB | 1 610 612 736 | 1.5 GiB |
| 8 796 093 022 208 | 8.0 TiB | 2 147 483 648 | 2.0 GiB |
| 13 194 139 533 312 | 12.0 TiB | 3 221 225 472 | 3.0 GiB |
| 17 592 186 044 416 | 16.0 TiB | 4 294 967 296 | 4.0 GiB |
| 26 388 279 066 624 | 24.0 TiB | 6 442 450 944 | 6.0 GiB |
| 35 184 372 088 832 | 32.0 TiB | 8 589 934 592 | 8.0 GiB |
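
the boundaries can be sanity-checked with `up2k_chunksize()` from `docs/chunksizes.py` (shown earlier in this changeset); for example, probing the 512 MiB boundary with that function pasted into a python repl (note that importing the script directly would also run its endless `main()` loop):

```python
MiB = 1024 * 1024
print(up2k_chunksize(512 * MiB))      # 2097152 = 2.0 MiB; last size in the 2 MiB bracket
print(up2k_chunksize(512 * MiB + 1))  # 3145728 = 3.0 MiB; one byte heavier
```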
# hashed passwords
@@ -143,6 +182,9 @@ authenticate using header `Cookie: cppwd=foo` or url param `&pw=foo`
| GET | `?dls` | show active downloads (do this as admin) |
| GET | `?ups` | show recent uploads from your IP |
| GET | `?ups&filter=f` | ...where URL contains `f` |
| GET | `?ru` | show all recent uploads |
| GET | `?ru&filter=f` | ...where URL contains `f` |
| GET | `?ru&j` | ...as json |
| GET | `?mime=foo` | specify return mimetype `foo` |
| GET | `?v` | render markdown file at URL |
| GET | `?v` | open image/video/audio in mediaplayer |
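
for example, fetching the recent-uploads list as json from a script, using stdlib python; the server address and password are placeholders:

```python
import json
from urllib.request import urlopen

# auth via the &pw= url-param as described above; hunter2 is a placeholder
url = "http://127.0.0.1:3923/?ru&j&pw=hunter2"
print(json.load(urlopen(url)))
```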
@@ -170,6 +212,7 @@ authenticate using header `Cookie: cppwd=foo` or url param `&pw=foo`
| method | params | body | result |
|--|--|--|--|
| PUT | | (binary data) | upload into file at URL |
| PUT | `?j` | (binary data) | ...and reply with json |
| PUT | `?ck` | (binary data) | upload without checksum gen (faster) |
| PUT | `?ck=md5` | (binary data) | return md5 instead of sha512 |
| PUT | `?gz` | (binary data) | compress with gzip and write into file at URL |
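
a minimal json-reporting PUT upload with stdlib python; the volume path and password are placeholders:

```python
from urllib.request import Request, urlopen

# ?j asks for a json reply instead of the plaintext default
url = "http://127.0.0.1:3923/inc/hello.txt?j&pw=hunter2"
req = Request(url, data=b"hello world\n", method="PUT")
print(urlopen(req).read().decode())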
@@ -194,6 +237,7 @@ upload modifiers:
| http-header | url-param | effect |
|--|--|--|
| `Accept: url` | `want=url` | return just the file URL |
| `Accept: json` | `want=json` | return upload info as json; same as `?j` |
| `Rand: 4` | `rand=4` | generate random filename with 4 characters |
| `Life: 30` | `life=30` | delete file after 30 seconds |
| `CK: no` | `ck` | disable serverside checksum (maybe faster) |
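
the same upload, steered by request-headers instead of url-params (again with a placeholder url/password, and assuming the modifiers combine as listed):

```python
from urllib.request import Request, urlopen

req = Request(
    "http://127.0.0.1:3923/inc/note.txt?pw=hunter2",
    data=b"temporary note",
    method="PUT",
    headers={"Accept": "json", "Life": "30"},  # json reply; delete after 30s
)
print(urlopen(req).read().decode())
```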


@@ -20,3 +20,25 @@ this means that, if an IdP volume is located inside a folder that is readable by
and likewise -- if the IdP volume is inside a folder that is only accessible by certain users, but the IdP volume is configured to allow access from unauthenticated users, then the contents of the volume will NOT be accessible until it is revived

until this limitation is fixed (if ever), it is recommended to place IdP volumes inside an appropriate parent volume, so they can inherit acceptable permissions until their revival; see the "strategic volumes" at the bottom of [./examples/docker/idp/copyparty.conf](./examples/docker/idp/copyparty.conf)
## Connecting webdav clients
If you only use IdP and want to connect via rclone, you have to adapt a few things.

The following steps are for Authelia, but should be easily adaptable to other IdPs and clients. There may be better/smarter ways to do this, but this is a known solution.
1. Add a rule for your domain and set it to one factor
```
rules:
  - domain: 'sub.domain.tld'
    policy: one_factor
```
2. After you have created your rclone config, find its location with `rclone config file` and add the `headers` option to it; the string is `username:password` base64-encoded (a snippet for generating it follows below). Make sure to set the right URL, otherwise you will get a 401 from copyparty.
```
[servername-dav]
type = webdav
url = https://sub.domain.tld/u/user/priv/
vendor = owncloud
pacer_min_sleep = 0.01ms
headers = Proxy-Authorization,basic base64encodedstring==
```
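
the base64 string can be generated like so, with `username:password` being your actual IdP credentials:

```python
import base64

cred = base64.b64encode(b"username:password").decode()
print("headers = Proxy-Authorization,basic " + cred)
```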


@@ -259,6 +259,12 @@ for d in /usr /var; do find $d -type f -size +30M 2>/dev/null; done | while IFS=
for f in {0..255}; do echo $f; truncate -s 256M $f; b1=$(printf '%02x' $f); for o in {0..255}; do b2=$(printf '%02x' $o); printf "\x$b1\x$b2" | dd of=$f bs=2 seek=$((o*1024*1024)) conv=notrunc 2>/dev/null; done; done

# create 6.06G file with 16 bytes of unique data at start+end of each 32M chunk
sz=6509559808; truncate -s $sz f; csz=33554432; sz=$((sz/16)); step=$((csz/16)); ofs=0; while [ $ofs -lt $sz ]; do dd if=/dev/urandom of=f bs=16 count=2 seek=$ofs conv=notrunc iflag=fullblock; [ $ofs = 0 ] && ofs=$((ofs+step-1)) || ofs=$((ofs+step)); done
# same but for chunksizes 16M (3.1G), 24M (4.1G), 48M (128.1G)
sz=3321225472; csz=16777216;
sz=4394967296; csz=25165824;
sz=6509559808; csz=33554432;
sz=138438953472; csz=50331648;
f=csz-$csz; truncate -s $sz $f; sz=$((sz/16)); step=$((csz/16)); ofs=0; while [ $ofs -lt $sz ]; do dd if=/dev/urandom of=$f bs=16 count=2 seek=$ofs conv=notrunc iflag=fullblock; [ $ofs = 0 ] && ofs=$((ofs+step-1)) || ofs=$((ofs+step)); done
# py2 on osx
brew install python@2

docs/synology-dsm.md (new file, 140 lines)

@@ -0,0 +1,140 @@
# running copyparty on synology dsm nas
![synology-dsm-container-status.png](https://ocv.me/copyparty/doc/pics/dsm.png)
this has been tested on a `Synology ds218+` NAS with 1 SHR storage-pool and 1 volume, but the same steps should work in more advanced setups too
verified on DSM 7.1 and 7.2, but not on 6.x since my flea-market ds218+ refuses to install it for some reason
# ok let's go
go to controlpanel -> shared-folders, and create the following shared-folders if you don't already have appropriate ones:
* a shared-folder for configuration files, preferably on SSD if you have one
* one or more shared-folders for your actual data/media to share
(btw, when you create the shared-folders, it asks whether you want to enable data checksum and file compression, i would recommend both)
the rest of this doc assumes that these two shared-folders are named `configs` and `media1`, and that you made an empty folder inside the `configs` shared-folder named `cpp`
* your copyparty config file (see below) should be named `something.conf` directly inside that cpp folder, for example `/configs/cpp/copyparty.conf`
* during first start, copyparty will create a folder there named `copyparty`, in other words `/configs/cpp/copyparty` which you should leave alone; that's where copyparty stores its indexes and other runtime config
## recommended copyparty config
open the Package Center and install `Text Editor` (by Synology Inc.) to create and edit your copyparty config:
![synology-text-editor-copyparty-conf.png](https://ocv.me/copyparty/doc/pics/dsm-cfg.png)
* note the `copyparty` and `hist` folders in that screenshot which are autogenerated by copyparty and to be left alone
```yaml
[global]
  e2d, e2t         # remember uploads & read media tags
  rss, daw, ver    # some other nice-to-have features
  #dedup           # you may want this, or maybe not
  hist: /cfg/hist  # don't pollute the shared-folder
  name: synology   # shows in the browser, can be anything

[accounts]
  ed: wark  # username ed, password wark

[/]    # share the following at the webroot:
  /w   # the "/w" docker-volume (the shared-folder)
  accs:
    A: ed  # give Admin to username ed

# hide the synology system files by creating a hidden volume
[/@eaDir]
  /w/@eaDir
```
if you ever change the copyparty config file, then [restart the container](https://ocv.me/copyparty/doc/pics/dsm71-02.png) to make the changes take effect
okay now continue with one of these:
* [DSM v7.2 or newer](#dsm-v72-or-newer)
* [all older DSM versions](#dsm-v6x-dsm-v71x-or-older)
# DSM v7.2 or newer
`Docker` was replaced by `Container Manager` in DSM v7.2, but they're almost the same thing:
* open the `Package Center` and install the [Container Manager package](https://ocv.me/copyparty/doc/pics/dsm72-01.png) by `Docker Inc.`
* open the `Container Manager` app
* go to the `Registry` tab and search for `copyparty`
* [doubleclick copyparty/ac](https://ocv.me/copyparty/doc/pics/dsm72-02.png) and keep the [default `latest`](https://ocv.me/copyparty/doc/pics/dsm72-03.png) when it asks you which tag to use
* switch to the `Container` tab and click `Create`
* [choose `copyparty/ac:latest`](https://ocv.me/copyparty/doc/pics/dsm72-04.png) and click `Next`
finally, in the [Advanced Settings](https://ocv.me/copyparty/doc/pics/dsm72-05.png) window,
* under `Port Settings`, type `3923` into the `Local Port` textbox
* click `Add Folder` and select `/configs/cpp` on your nas (the `cpp` folder in the `configs` shared-folder), and change `Mount path` to `/cfg`
* click `Add Folder` and select `/media1` on your nas (the shared-folder that copyparty can share in its web-UI) and change `Mount path` to `/w`
* if you are adding multiple shared-folders for media, then the `Mount path` of the 2nd folder should be something like `/w/share2` or `/w/music`
copyparty will launch and become available at http://192.168.1.9:3923/ (assuming `192.168.1.9` is your nas ip)
# DSM v6.x, DSM v7.1.x or older
if you're using DSM 7.1 or older, then you don't have [Container Manager](https://www.synology.com/en-global/dsm/packages/ContainerManager) yet and you'll have to use [Docker](https://www.synology.com/en-global/dsm/packages/Docker?os_ver=6.2&search=docker) instead. Here's how:
* open the `Package Center` and install the [Docker package](https://ocv.me/copyparty/doc/pics/dsm71-01.png) by `Docker Inc.`
* open the `Docker` app
* go to the `Registry` tab and search for `copyparty`
* [doubleclick copyparty/ac](https://ocv.me/copyparty/doc/pics/dsm71-02.png) and keep the [default `latest`](https://ocv.me/copyparty/doc/pics/dsm71-03.png) when it asks you which tag to use
* switch to the `Container` tab and click `Create`
* [choose `copyparty/ac:latest`](https://ocv.me/copyparty/doc/pics/dsm71-04.png) and `Next`
* in the [Network](https://ocv.me/copyparty/doc/pics/dsm71-05.png) window, keep the default `Use the selected networks: [x] bridge`
* in the [General Settings](https://ocv.me/copyparty/doc/pics/dsm71-06.png) window, just keep everything default (in other words, everything disabled)
* in the [Port Settings](https://ocv.me/copyparty/doc/pics/dsm71-07.png) window, change `Local Port` to `3923` (or choose something else, but it cannot be the default `Auto`)
finally, in the [Volume Settings](https://ocv.me/copyparty/doc/pics/dsm71-08.png) window, add a docker volume for copyparty config, and at least one volume for media-files which copyparty can share in its web-UI
* click `Add Folder` and select `/configs/cpp` on your nas (the `cpp` folder in the `configs` shared-folder), and change `Mount path` to `/cfg`
* click `Add Folder` and select `/media1` on your nas (the shared-folder that copyparty can share in its web-UI) and change `Mount path` to `/w`
* if you are adding multiple shared-folders for media, then the `Mount path` of the 2nd folder should be something like `/w/share2` or `/w/music`
copyparty will launch and become available at http://192.168.1.9:3923/ (assuming `192.168.1.9` is your nas ip)
# misc notes
note that if you only want to share some folders inside your data volume, and not all of it, then you can either give copyparty the whole shared-folder anyways and control/restrict access in the copyparty config file (recommended), or you can add each folder as a new docker volume (not as flexible)
## regarding ram usage
the ram usage indicator in both `Docker` and `Container Manager` is misleading because it also counts the kernel disk cache which makes the number insanely high -- the synology resource monitor shows the correct values, usually less than 100 MiB
to see the actual memory usage by copyparty, see `Resource Monitor` -> `Task Manager` -> `Processes` and look at the `Private Memory` of `python3` which is probably copyparty
## regarding performance
when uploading files to the synology nas with the respective web-UIs,
* `File Station` does about 16 MiB/s,
* `Synology Drive Server` does about 50 MiB/s; deceptively fast at first, but once the file is fully uploaded there is a lengthy "processing" step, reducing the average speed to about 50% of the initial
* copyparty maxes the HDD write-speeds, 99 MiB/s
when uploading to the synology nas over webdav,
* `WebDAV Server` by `Synology Inc.` in the Package Center does 86 MiB/s
* copyparty does 79 MiB/s; the NAS CPU is a bottleneck because copyparty verifies the upload checksum while `WebDAV Server` doesn't


@@ -3,7 +3,7 @@ WORKDIR /z
ENV ver_asmcrypto=c72492f4a66e17a0e5dd8ad7874de354f3ccdaa5 \
    ver_hashwasm=4.12.0 \
    ver_marked=4.3.0 \
    ver_dompf=3.2.3 \
    ver_mde=2.18.0 \
    ver_codemirror=5.65.18 \
    ver_fontawesome=5.13.0 \


@@ -1,7 +1,7 @@
f117016b1e6a7d7e745db30d3e67f1acf7957c443a0dd301b6c5e10b8368f2aa4db6be9782d2d3f84beadd139bfeef4982e40f21ca5d9065cb794eeb0e473e82 altgraph-0.17.4-py2.py3-none-any.whl
6a624018f30da375581d5751eca0080edbbe37f102f643f856279fcfded3a4379fd1b6fb0661cdb2e72bbbbc726ca714a1f5990cc348df365db62bc53e4c4503 Git-2.45.2-32-bit.exe
17ce52ba50692a9d964f57a23ac163fb74c77fdeb2ca988a6d439ae1fe91955ff43730c073af97a7b3223093ffea3479a996b9b50ee7fba0869247a56f74baa6 pefile-2023.2.7-py3-none-any.whl
b297ff66ec50cf5a1abcf07d6ac949644c5150ba094ffac974c5d27c81574c3e97ed814a47547f4b03a4c83ea0fb8f026433fca06a3f08e32742dc5c024f3d07 pywin32_ctypes-0.2.3-py3-none-any.whl
085d39ef4426aa5f097fbc484595becc16e61ca23fc7da4d2a8bba540a3b82e789e390b176c7151bdc67d01735cce22b1562cdb2e31273225a2d3e275851a4ad setuptools-70.3.0-py3-none-any.whl
360a141928f4a7ec18a994602cbb28bbf8b5cc7c077a06ac76b54b12fa769ed95ca0333a5cf728923a8e0baeb5cc4d5e73e5b3de2666beb05eb477d8ae719093 upx-4.2.4-win32.zip
# win7
@@ -23,11 +23,11 @@ ac96786e5d35882e0c5b724794329c9125c2b86ae7847f17acfc49f0d294312c6afc1c3f248655de
# win10
0a2cd4cadf0395f0374974cd2bc2407e5cc65c111275acdffb6ecc5a2026eee9e1bb3da528b35c7f0ff4b64563a74857d5c2149051e281cc09ebd0d1968be9aa en-us_windows_10_enterprise_ltsc_2021_x64_dvd_d289cf96.iso
16cc0c58b5df6c7040893089f3eb29c074aed61d76dae6cd628d8a89a05f6223ac5d7f3f709a12417c147594a87a94cc808d1e04a6f1e407cc41f7c9f47790d1 virtio-win-0.1.248.iso
18b9e8cfa682da51da1b682612652030bd7f10e4a1d5ea5220ab32bde734b0e6fe1c7dbd903ac37928c0171fd45d5ca602952054de40a4e55e9ed596279516b5 jinja2-3.1.5-py3-none-any.whl
6df21f0da408a89f6504417c7cdf9aaafe4ed88cfa13e9b8fa8414f604c0401f885a04bbad0484dc51a29284af5d1548e33c6cc6bfb9896d9992c1b1074f332d MarkupSafe-3.0.2-cp312-cp312-win_amd64.whl
8a6e2b13a2ec4ef914a5d62aad3db6464d45e525a82e07f6051ed10474eae959069e165dba011aefb8207cdfd55391d73d6f06362c7eb247b08763106709526e mutagen-1.47.0-py3-none-any.whl
0203ec2551c4836696cfab0b2c9fff603352f03fa36e7476e2e1ca7ec57a3a0c24bd791fcd92f342bf817f0887854d9f072e0271c643de4b313d8c9569ba8813 packaging-24.1-py3-none-any.whl
12d7921dc7dfd8a4b0ea0fa2bae8f1354fcdd59ece3d7f4e075aed631f9ba791dc142c70b1ccd1e6287c43139df1db26bd57a7a217c8da3a77326036495cdb57 pillow-11.1.0-cp312-cp312-win_amd64.whl
f0463895e9aee97f31a2003323de235fed1b26289766dc0837261e3f4a594a31162b69e9adbb0e9a31e2e2d4b5f25c762ed1669553df7dc89a8ba4f85d297873 pyinstaller-6.11.1-py3-none-win_amd64.whl
d550a0a14428386945533de2220c4c2e37c0c890fc51a600f626c6ca90a32d39572c121ec04c157ba3a8d6601cb021f8433d871b5c562a3d342c804fffec90c1 pyinstaller_hooks_contrib-2024.11-py3-none-any.whl
0f623c9ab52d050283e97a986ba626d86b04cd02fa7ffdf352740576940b142b264709abadb5d875c90f625b28103d7210b900e0d77f12c1c140108bd2a159aa python-3.12.8-amd64.exe


@@ -38,7 +38,7 @@ fns=(
    MarkupSafe-2.1.5-cp312-cp312-win_amd64.whl
    mutagen-1.47.0-py3-none-any.whl
    packaging-24.1-py3-none-any.whl
    pillow-11.1.0-cp312-cp312-win_amd64.whl
    pyinstaller-6.10.0-py3-none-win_amd64.whl
    pyinstaller_hooks_contrib-2024.8-py3-none-any.whl
    python-3.12.7-amd64.exe


@@ -105,6 +105,9 @@ copyparty/web/mde.html,
copyparty/web/mde.js,
copyparty/web/msg.css,
copyparty/web/msg.html,
copyparty/web/rups.css,
copyparty/web/rups.html,
copyparty/web/rups.js,
copyparty/web/shares.css,
copyparty/web/shares.html,
copyparty/web/shares.js,


@@ -80,6 +80,7 @@ var tl_cpanel = {
"ac1": "enable no304", "ac1": "enable no304",
"ad1": "enabling no304 will disable all caching; try this if k304 wasn't enough. This will waste a huge amount of network traffic!", "ad1": "enabling no304 will disable all caching; try this if k304 wasn't enough. This will waste a huge amount of network traffic!",
"ae1": "active downloads:", "ae1": "active downloads:",
"af1": "show recent uploads",
}, },
}; };
@@ -291,6 +292,7 @@ var tl_browser = {
"cl_uopts": "up2k switches", "cl_uopts": "up2k switches",
"cl_favico": "favicon", "cl_favico": "favicon",
"cl_bigdir": "big dirs", "cl_bigdir": "big dirs",
"cl_hsort": "#sort",
"cl_keytype": "key notation", "cl_keytype": "key notation",
"cl_hiddenc": "hidden columns", "cl_hiddenc": "hidden columns",
"cl_hidec": "hide", "cl_hidec": "hide",
@@ -333,6 +335,7 @@ var tl_browser = {
"cdt_lim": "max number of files to show in a folder", "cdt_lim": "max number of files to show in a folder",
"cdt_ask": "when scrolling to the bottom,$Ninstead of loading more files,$Nask what to do", "cdt_ask": "when scrolling to the bottom,$Ninstead of loading more files,$Nask what to do",
"cdt_hsort": "how many sorting rules (&lt;code&gt;,sorthref&lt;/code&gt;) to include in media-URLs. Setting this to 0 will also ignore sorting-rules included in media links when clicking them",
"tt_entree": "show navpane (directory tree sidebar)$NHotkey: B", "tt_entree": "show navpane (directory tree sidebar)$NHotkey: B",
"tt_detree": "show breadcrumbs$NHotkey: B", "tt_detree": "show breadcrumbs$NHotkey: B",
@@ -625,6 +628,7 @@ var tl_browser = {
"u_hashdone": 'hashing done', "u_hashdone": 'hashing done',
"u_hashing": 'hash', "u_hashing": 'hash',
"u_hs": 'handshaking...', "u_hs": 'handshaking...',
"u_started": "the files are now being uploaded; see [🚀]",
"u_dupdefer": "duplicate; will be processed after all other files", "u_dupdefer": "duplicate; will be processed after all other files",
"u_actx": "click this text to prevent loss of<br />performance when switching to other windows/tabs", "u_actx": "click this text to prevent loss of<br />performance when switching to other windows/tabs",
"u_fixed": "OK!&nbsp; Fixed it 👍", "u_fixed": "OK!&nbsp; Fixed it 👍",
@@ -660,6 +664,7 @@ var tl_browser = {
"ue_la": 'you are currently logged in as "{0}"', "ue_la": 'you are currently logged in as "{0}"',
"ue_sr": 'you are currently in file-search mode\n\nswitch to upload-mode by clicking the magnifying glass 🔎 (next to the big SEARCH button), and try uploading again\n\nsorry', "ue_sr": 'you are currently in file-search mode\n\nswitch to upload-mode by clicking the magnifying glass 🔎 (next to the big SEARCH button), and try uploading again\n\nsorry',
"ue_ta": 'try uploading again, it should work now', "ue_ta": 'try uploading again, it should work now',
"ue_ab": "this file is already being uploaded into another folder, and that upload must be completed before the file can be uploaded elsewhere.\n\nYou can abort and forget the initial upload using the top-left 🧯",
"ur_1uo": "OK: File uploaded successfully", "ur_1uo": "OK: File uploaded successfully",
"ur_auo": "OK: All {0} files uploaded successfully", "ur_auo": "OK: All {0} files uploaded successfully",
"ur_1so": "OK: File found on server", "ur_1so": "OK: File found on server",


@@ -115,6 +115,7 @@ var tl_cpanel = {{
"ac1": "enable no304", "ac1": "enable no304",
"ad1": "enabling no304 will disable all caching; try this if k304 wasn't enough. This will waste a huge amount of network traffic!", "ad1": "enabling no304 will disable all caching; try this if k304 wasn't enough. This will waste a huge amount of network traffic!",
"ae1": "active downloads:", "ae1": "active downloads:",
"af1": "show recent uploads",
}}, }},
}}; }};


@@ -20,7 +20,7 @@ cat $f | awk '
o{next}
/^#/{s=1;rs=0;pr()}
/^#* *(nix package)/{rs=1}
/^#* *(themes|install on android|dev env setup|just the sfx|complete release|optional gpl stuff|nixos module|reverse-proxy perf)|```/{s=rs}
/^#/{
    lv=length($1);
    sub(/[^ ]+ /,"");


@@ -33,14 +33,6 @@ def eprint(*a, **ka):
    sys.stderr.flush()


if MACOS:
    import posixpath

    posixpath.islink = nah
    os.path.islink = nah
    # 25% faster; until any tests do symlink stuff


from copyparty.__main__ import init_E
from copyparty.broker_thr import BrokerThr
from copyparty.ico import Ico
@@ -283,8 +275,6 @@ class VHttpSrv(object):
        self.gurl = Garda("")
        self.u2idx = None
        self.ptn_cc = re.compile(r"[\x00-\x1f]")
        self.uparam_cc_ok = set("doc move tree".split())

    def cachebuster(self):
        return "a"