Compare commits

...

132 Commits

Author SHA1 Message Date
ed
a0da0122b9 v1.10.0 2024-02-15 00:00:41 +00:00
ed
879e83e24f ignore easymde errors
it randomly throws when clicking inside the preview pane
2024-02-14 23:26:06 +00:00
ed
64ad585318 ie11: file selection hotkeys 2024-02-14 23:08:32 +00:00
ed
f262aee800 change folders to preload music when necessary:
on phones especially, hitting the end of a folder while playing music
could permanently stop audio playback, because the browser will
revoke playback privileges unless we have a song ready to go...
there's no time to navigate through folders looking for the next file

the preloader will now start jumping through folders ahead of time
2024-02-14 22:44:33 +00:00
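The folder-jumping preload described above could be sketched roughly like this — a hypothetical `next_track` helper over an in-memory folder map (the real preloader walks the live directory listing in the browser, not a dict):

```python
# sketch of finding the next playable file, jumping ahead through
# folders so a song is always queued before playback runs out
AUDIO_EXTS = {".mp3", ".flac", ".ogg", ".opus", ".m4a"}

def next_track(folders, cur_folder, cur_idx):
    """folders: {folder_name: [filenames]}; returns (folder, file) or None."""
    names = sorted(folders)
    fi = names.index(cur_folder)
    # remaining files in the current folder first
    for f in folders[cur_folder][cur_idx + 1:]:
        if f[f.rfind("."):].lower() in AUDIO_EXTS:
            return cur_folder, f
    # then scan ahead through the following folders
    for name in names[fi + 1:]:
        for f in folders[name]:
            if f[f.rfind("."):].lower() in AUDIO_EXTS:
                return name, f
    return None
```

The point of scanning ahead eagerly is that the lookup must finish before the current song ends, or the browser revokes playback.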
ed
d4da386172 add watchdog for sqlite deadlock on db init:
some cifs servers cause sqlite to fail in interesting ways; any attempt
to create a table can instantly throw an exception, which results in a
zerobyte database being created. During the next startup, the db would
be determined to be corrupted, and up2k would invoke _backup_db before
deleting and recreating it -- except that sqlite's connection.backup()
will hang indefinitely and deadlock up2k

add a watchdog which fires if it takes longer than 1 minute to open the
database, printing a big warning that the filesystem probably does not
support locking or is otherwise sqlite-incompatible, then writing a
stacktrace of all threads to a textfile in the config directory
(in case this deadlock is due to something completely different),
before finally crashing spectacularly

additionally, delete the database if the creation fails, which should
prevent the deadlock on the next startup, combined with a
message hinting at the filesystem incompatibility

the 1-minute limit may sound excessively gracious, but considering what
some of the copyparty instances out there are running on, it really isn't

this was reported when connecting to a cifs server running alpine

thx to abex on discord for the detailed bug report!
2024-02-14 20:18:36 +00:00
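The watchdog described above could be sketched like so — a hypothetical `open_db_with_watchdog` helper (the real implementation also writes the stacktraces to a textfile in the config directory and then crashes on purpose):

```python
import sys
import threading
import traceback

def open_db_with_watchdog(open_fn, timeout=60, warn=print):
    """Run open_fn(); if it hangs past `timeout` seconds (as sqlite's
    backup() can on broken cifs mounts), dump every thread's stack so
    the deadlock can be diagnosed."""
    done = threading.Event()

    def bark():
        if done.wait(timeout):
            return  # db opened in time, nothing to do
        warn("database open timed out; filesystem may not support locking")
        for tid, frame in sys._current_frames().items():
            warn("thread %d:\n%s" % (tid, "".join(traceback.format_stack(frame))))

    threading.Thread(target=bark, daemon=True).start()
    try:
        return open_fn()
    finally:
        done.set()
```

A plain `Event.wait` keeps the watchdog cheap: it blocks in the kernel and wakes exactly once, either on success or at the deadline.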
ed
5d92f4df49 mention why -j0 can be a bad idea to enable,
and that `--hist` can also help for loading thumbnails faster
2024-02-13 19:47:42 +00:00
ed
6f8a588c4d up2k: fix a mostly-harmless race
as each chunk is written to the file, httpcli calls
up2k.confirm_chunk to register the chunk as completed, and the reply
indicates whether that was the final outstanding chunk, in which case
httpcli closes the file descriptors since there's nothing more to write

the issue is that the final chunk is registered as completed before the
file descriptors are closed, meaning there could be writes that haven't
finished flushing to disk yet

if the client decides to issue another handshake during this window,
up2k sees that all chunks are complete and calls up2k.finish_upload
even as some threads might still be flushing the final writes to disk

so the conditions to hit this bug were as follows (all must be true):
* multiprocessing is disabled
* there is a reverse-proxy
* a client has several idle connections and reuses one of those
* the server's filesystem is EXTREMELY slow, to the point where
   closing a file takes over 30 seconds

the fix is to stop handshakes from being processed while a file is
being closed, which is unfortunately a small bottleneck in that it
prohibits initiating another upload while one is being finalized, but
the required complexity to handle this better is probably not worth it
(a separate mutex for each upload session or something like that)

this issue is mostly harmless, partially because it is super tricky to
hit (only aware of it happening synthetically), and because there are
usually no harmful consequences; the worst-case is if this were to
happen exactly as the server OS decides to crash, which would make the
file appear to be fully uploaded even though it's missing some data
(all extremely unlikely, but not impossible)

there is no performance impact; if anything it should now accept
new tcp connections slightly faster thanks to more granular locking
2024-02-13 19:24:06 +00:00
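The fix can be illustrated with a minimal sketch (a hypothetical `UploadSession`, not copyparty's actual up2k structures): one mutex covers both closing the file descriptors and processing a handshake, so a handshake can never observe "all chunks done" while writes are still flushing.

```python
import threading

class UploadSession:
    def __init__(self, total_chunks):
        self.lock = threading.Lock()
        self.done = set()
        self.total = total_chunks
        self.finalized = False

    def confirm_chunk(self, n, fd_close):
        """Register chunk n; close the file while still holding the lock."""
        with self.lock:
            self.done.add(n)
            if len(self.done) == self.total:
                fd_close()  # flush + close before anyone can handshake
                self.finalized = True

    def handshake(self):
        """Blocked until any in-progress close has completed."""
        with self.lock:
            return self.finalized
```

The cost is that a new handshake stalls while a file is being finalized; the benefit is that "complete" always means "flushed and closed".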
ed
7c8e368721 lol markdown 2024-02-12 06:01:09 +01:00
ed
f7a43a8e46 fix grid layout on first toggle from listview 2024-02-12 05:40:18 +01:00
ed
02879713a2 tftp: update readme + small py2 fix 2024-02-12 05:39:54 +01:00
ed
acbb8267e1 tftp: add directory listing 2024-02-10 23:50:17 +00:00
ed
8796c09f56 add --tftp-pr to specify portrange instead of ephemerals 2024-02-10 21:45:57 +00:00
ed
d636316a19 add tftp server 2024-02-10 18:37:21 +00:00
ed
ed524d84bb /np: exclude uploader ip and trim dot-prefix 2024-02-07 23:02:47 +00:00
ed
f0cdd9f25d upgrade copyparty.exe to python 3.11.8 2024-02-07 20:39:51 +00:00
ed
4e797a7156 docker: mention debian issue from discord 2024-02-05 20:11:04 +00:00
ed
136c0fdc2b detect reverse-proxies stripping URL params:
if a reverseproxy decides to strip away URL parameters, show an
appropriate error-toast instead of silently entering a bad state

someone on discord ended up in an infinite page-reload loop
since the js would try to recover by fully navigating to the
requested dir if `?ls` failed, which wouldn't do any good anyways
if the dir in question is the initial dir to display
2024-02-05 19:17:36 +00:00
ed
cab999978e update pkgs to 1.9.31 2024-02-03 16:02:59 +00:00
ed
fabeebd96b v1.9.31 2024-02-03 15:33:11 +00:00
ed
b1cf588452 add lore 2024-02-03 15:05:27 +00:00
ed
c354a38b4c up2k: warn about browser cap on num connections 2024-02-02 23:46:00 +00:00
ed
a17c267d87 bbox: unload pics/vids from DOM; closes #71
videos unloaded correctly when switching between files, but not when
closing the lightbox while playing a video and then clicking another

now, only media within the preload window (+/- 2 from current file)
is kept loaded into DOM, everything else gets ejected, both on
navigation and when closing the lightbox
2024-02-02 23:16:50 +00:00
ed
c1180d6f9c up2k: include inflight bytes in eta calculation;
much more accurate total-ETA when uploading with many connections
and/or uploading huge files to really slow servers

the titlebar % still only does actually confirmed bytes,
partially because that makes sense, partially because
that's what happened by accident
2024-02-02 22:46:24 +00:00
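The inflight-aware ETA amounts to something like this hypothetical helper — remaining work excludes both confirmed bytes and bytes already on the wire, while a displayed percentage would still count only confirmed bytes:

```python
def upload_eta(total, confirmed, inflight, speed):
    """Seconds remaining; speed is bytes/sec over some recent window."""
    remaining = total - confirmed - inflight
    return max(remaining, 0) / speed if speed else float("inf")
```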
ed
d3db6d296f disable mkdir and new-doc buttons if no name is provided
also fixes toast.hide() unintentionally stopping events from bubbling
2024-02-01 21:41:48 +00:00
ed
eefa0518db change FFmpeg from BtbN to gyan/codex;
deps are more up-to-date and slightly better codec selection
2024-01-28 22:04:01 +00:00
ed
945170e271 fix umod/touching zerobyte files 2024-01-27 20:26:27 +00:00
ed
6c2c6090dc notes: hardlink/symlink conversion + phone cam sync 2024-01-27 18:52:08 +00:00
ed
b2e233403d u2c: apply exclude-filter to deletion too
if a file gets synced and you later add an exclude-filter for it,
delete the file from the server as if it doesn't exist locally
2024-01-27 18:49:25 +00:00
ed
e397ec2e48 update pkgs to 1.9.30 2024-01-25 23:18:21 +00:00
ed
fade751a3e v1.9.30 2024-01-25 22:52:42 +00:00
ed
0f386c4b08 also sanitize histpaths in client error messages;
previously it only did volume abspaths
2024-01-25 21:40:41 +00:00
ed
14bccbe45f backports from IdP branch:
* allow mounting `/` (the entire filesystem) as a volume
  * not that you should (really, you shouldn't)
* improve `-v` helptext
* change IdP group symbol to @ because % is used for file inclusion
  * not technically necessary but is less confusing in docs
2024-01-25 21:39:30 +00:00
ed
55eb692134 up2k: add option to touch existing files to match local 2024-01-24 20:36:41 +00:00
ed
b32d65207b fix js-error on older chromes in incognito mode;
window.localStorage was null, so trying to read would fail

seen on falkon 23.08.4 with qtwebengine 5.15.12 (fedora39)

might as well be paranoid about the other failure modes too
(sudden exceptions on reads and/or writes)
2024-01-24 02:24:27 +00:00
ed
64cac003d8 add missing historic changelog entries 2024-01-24 01:28:29 +00:00
ed
6dbfcddcda don't print indexing progress to stdout if -q 2024-01-20 17:26:52 +00:00
ed
b4e0a34193 ensure windows-safe filenames during batch rename
also handle ctrl-click in the navpane float
2024-01-19 21:41:56 +00:00
ed
01c82b54a7 audio player: add shuffle 2024-01-18 22:59:47 +00:00
ed
4ef3106009 more old-browser support:
* polyfill Set() for gridview (ie9, ie10)
* navpane: do full-page nav if history api is ng (ie9)
* show markdown as plaintext if rendering fails (ie*)
* text-editor: hide preview pane if it doesn't work (ie*)
* explicitly hide toasts on close (ie9, ff10)
2024-01-18 22:56:39 +00:00
ed
aa3a971961 windows: safeguard against parallel deletes
st_ino is valid for NTFS on python3, good enough
2024-01-17 23:32:37 +00:00
ed
b9d0c8536b avoid sendfile bugs on 32bit machines:
https://github.com/python/cpython/issues/114077
2024-01-17 20:56:44 +00:00
ed
3313503ea5 retry deleting busy files on windows:
some clients (clonezilla-webdav) rapidly create and delete files;
this fails if copyparty is still hashing the file (usually the case)

and the same thing can probably happen due to antivirus etc

add global-option --rm-retry (volflag rm_retry) specifying
for how long (and how quickly) to keep retrying the deletion

default: retry for 5sec on windows, 0sec (disabled) on everything else
because this is only a problem on windows
2024-01-17 20:27:53 +00:00
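A minimal sketch of the retry loop behind `--rm-retry` (hypothetical `rm_with_retry` helper; the real option also controls how quickly retries happen):

```python
import os
import time

def rm_with_retry(path, retry_sec=5.0, sleep=0.1):
    """Keep retrying the delete for up to retry_sec seconds; on Windows
    an open handle (hasher, antivirus) makes unlink raise PermissionError.
    retry_sec=0 disables retrying, matching the non-Windows default."""
    deadline = time.time() + retry_sec
    while True:
        try:
            os.unlink(path)
            return True
        except PermissionError:
            if time.time() >= deadline:
                raise
            time.sleep(sleep)
```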
ed
d999d3a921 update pkgs to 1.9.29 2024-01-14 07:03:47 +00:00
ed
e7d00bae39 v1.9.29 2024-01-14 06:29:31 +00:00
ed
650e41c717 update deps:
* web: hashwasm 4.9 -> 4.10
* web: dompurify 3.0.5 -> 3.0.8
* web: codemirror 5.65.12 -> 5.65.16
* win10exe: pillow 10.1 -> 10.2
2024-01-14 05:57:28 +00:00
ed
140f6e0389 add contextlet + igloo irc config + upd changelog 2024-01-14 04:58:24 +00:00
ed
5e111ba5ee only show the unpost hint if unpost is available (-e2d) 2024-01-14 04:24:32 +00:00
ed
95a599961e add RAM usage tracking to thumbnailer;
prevents server OOM from high RAM usage by FFmpeg when generating
spectrograms and waveforms: https://trac.ffmpeg.org/ticket/10797
2024-01-14 04:15:09 +00:00
ed
a55e0d6eb8 add button to bust music player cache,
useful on phones when the server was OOM'ing and
butchering the responses (foreshadowing...)
2024-01-13 04:08:40 +00:00
ed
2fd2c6b948 ie11 fixes (2024? haha no way dude it's like 2004 right)
* fix crash on keyboard input in modals
* text editor works again (but without markdown preview)
* keyboard hotkeys for the few features that actually work
2024-01-13 02:31:50 +00:00
ed
7a936ea01e js: be careful with allocations in crash handler 2024-01-13 01:22:20 +00:00
ed
226c7c3045 fix confusing behavior when reindexing files:
when a file was reindexed (due to a change in size or last-modified
timestamp) the uploader-IP would get removed, but the upload timestamp
was ported over. This was intentional so there was probably a reason...

new behavior is to keep both uploader-IP and upload timestamp if the
file contents are unchanged (determined by comparing warks), and to
discard both uploader-IP and upload timestamp if that is not the case
2024-01-13 00:18:46 +00:00
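The new rule could be sketched as follows. The wark derivation here is a stand-in (copyparty's real warks are computed from the file's chunk hashes); the point is that a content identifier decides whether uploader-IP and upload-time survive a reindex:

```python
import hashlib

def reindex(old, fsize, mtime, content_hash):
    """old: previous db row or None; returns the new row."""
    wark = hashlib.sha512(("%s %d" % (content_hash, fsize)).encode()).hexdigest()[:44]
    if old and old["wark"] == wark:
        ip, at = old["ip"], old["at"]  # contents unchanged: keep both
    else:
        ip, at = "", mtime             # changed: discard uploader info
    return {"wark": wark, "ip": ip, "at": at}
```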
ed
a4239a466b immediately perform search if a checkbox is toggled 2024-01-12 00:20:38 +01:00
ed
d0eb014c38 improve applefilters + add missing newline in curl 404
* webdav: extend applesan regex with more stuff to exclude
* on macos, set applesan as default `--no-idx` to avoid indexing them
   (they didn't show up in search since they're dotfiles, but still)
2024-01-12 00:13:35 +01:00
ed
e01ba8552a warn if a user doesn't have privileges anywhere
(since the account system isn't super-intuitive and at least
 one dude figured that -a would default to giving admin rights)
2024-01-11 00:24:34 +00:00
ed
024303592a improved logging when a client dies mid-POST;
igloo irc has an absolute time limit of 2 minutes before it just
disconnects mid-upload and that kinda looked like it had a buggy
multipart generator instead of just being funny

anticipating similar events in the future, also log the
client-selected boundary value to eyeball its yoloness
2024-01-10 23:59:43 +00:00
ed
86419b8f47 suboptimizations and some future safeguards 2024-01-10 23:20:42 +01:00
ed
f1358dbaba use scandir for volume smoketests during up2k init;
gives much faster startup on filesystems that are extremely slow
(TLNote: android sdcardfs)
2024-01-09 21:47:02 +01:00
ed
e8a653ca0c don't block non-up2k uploads during indexing;
due to all upload APIs invoking up2k.hash_file to index uploads,
the uploads could block during a rescan for a crazy long time
(past most gateway timeouts); now this is mostly fire-and-forget

"mostly" because this also adds a conditional slowdown to
help the hasher churn through if the queue gets too big

worst case, if the server is restarted before it catches up, this
would rely on filesystem reindexing to eventually index the files
after a restart or on a schedule, meaning uploader info would be
lost on shutdown, but this is usually fine anyways (and this was
also the case until now)
2024-01-08 22:10:16 +00:00
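The conditional slowdown could look roughly like this hypothetical `enqueue_delay`, with made-up thresholds: uploads enqueue for hashing instantly while the queue is short, and get throttled proportionally as it grows so the hasher can catch up:

```python
def enqueue_delay(qsize, soft=100, hard=1000, max_sleep=1.0):
    """Seconds to sleep before accepting the next upload into the
    hashing queue; 0 below the soft limit, ramping to max_sleep."""
    if qsize <= soft:
        return 0.0
    frac = min(qsize - soft, hard - soft) / (hard - soft)
    return frac * max_sleep
```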
ed
9bc09ce949 accept file POSTs without specifying the act field;
primarily to support uploading from Igloo IRC but also generally useful
(not actually tested with Igloo IRC yet because it's a paid feature
so just gonna wait for spiky to wake up and tell me it didn't work)
2024-01-08 19:09:53 +00:00
ed
dc8e621d7c increase OOM kill-score for FFmpeg and mtp's;
discourage Linux from killing innocent processes
when FFmpeg decides to allocate 1 TiB of RAM
2024-01-07 17:52:10 +00:00
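On Linux this is done by raising a child process's `oom_score_adj` so the OOM killer targets it first; a minimal sketch (the `proc` parameter exists only to make it testable — in real use it stays `/proc`):

```python
def raise_oom_score(pid, adj=999, proc="/proc"):
    """Bias Linux's OOM killer toward pid (e.g. an FFmpeg child) so it
    dies first instead of an innocent process. Range is -1000..1000;
    raising your own score never requires privileges."""
    with open("%s/%d/oom_score_adj" % (proc, pid), "w") as f:
        f.write(str(adj))
```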
ed
dee0950f74 misc;
* scripts: add log repacker
* bench/filehash: msys support + add more stats
2024-01-06 01:15:43 +00:00
ed
143f72fe36 bench/filehash: fix locale + add more stats 2024-01-03 02:41:18 +01:00
ed
a7889fb6a2 update pkgs to 1.9.28 2023-12-31 19:44:24 +00:00
ed
987caec15d v1.9.28 2023-12-31 18:49:42 +00:00
ed
ab40ff5051 add permission "A" (alias of "rwmda."); closes #70 2023-12-31 18:20:24 +00:00
ed
bed133d3dd pad log source when logging to file too 2023-12-31 17:21:02 +00:00
ed
829c8fca96 curl/CLI-friendly 403/404 2023-12-31 17:20:45 +00:00
ed
5b26ab0096 add option to specify default num parallel uploads 2023-12-28 01:41:17 +01:00
ed
39554b4bc3 guard against unintended access if user-db is corrupted 2023-12-24 16:12:18 +01:00
ed
97d9c149f1 IdP config draft (#62) 2023-12-24 13:46:26 +01:00
ed
59688bc8d7 * rename hdr-au-usr to idp-h-usr
* ensure lowercase idp-h-*, xff-hdr
* more macos support in tooling
2023-12-24 13:46:12 +01:00
ed
a18f63895f fix resource leak on macos 2023-12-21 00:48:51 +01:00
ed
27433d6214 remove fedora/pypi-copr mention because copr has died;
https://github.com/fedora-copr/copr/issues/3056
2023-12-20 22:35:52 +00:00
ed
374c535cfa fix cors-checker so it behaves like the readme says;
any custom header (`pw` in our case) is sufficient validation
2023-12-20 20:03:08 +00:00
ed
ac7815a0ae ensure file can be opened before replying 200 and...
* make gen_tree 0.1% faster
* improve filekey warning message
* fix oversight in 0c50ea1757
* support `--xdev` on windows (the python docs mention that os.scandir
   doesn't assign st_ino, st_dev and st_nlink on win but i can't read)
2023-12-20 01:07:45 +00:00
ed
0c50ea1757 list dotfiles only for specific volumes or users (#66):
* permission `.` grants dotfile visibility if user has `r` too
* `-ed` will grant dotfiles to all `r` accounts (same as before)
* volflag `dots` likewise

also drops compatibility for pre-0.12.0 `-v` syntax
(`-v .::red` will no longer translate to `-v .::r,ed`)
2023-12-16 15:38:48 +00:00
ed
c057c5e8e8 extend --th-covers with dotfiles; closes #67 2023-12-14 10:53:15 +00:00
ed
46d667716e support python 3.15 2023-12-14 10:49:10 +00:00
ed
cba2e10d29 cleanup 2023-12-14 10:47:52 +00:00
ed
b1693f95cb alternative fedora packages for when copr breaks 2023-12-09 02:05:06 +00:00
ed
3f00073256 update pkgs to 1.9.27 2023-12-08 21:58:59 +00:00
ed
d15000062d v1.9.27 2023-12-08 21:33:12 +00:00
ed
6cb3b35a54 fix #65 (symlinks die when moved) 2023-12-08 21:28:20 +00:00
ed
b4031e8d43 forgot to bump this... oh well, at least the exe is correct 2023-12-08 02:16:40 +00:00
ed
a3ca0638cb update pkgs to 1.9.26 2023-12-08 02:10:06 +00:00
ed
a360ac29da v1.9.26 2023-12-08 01:36:01 +00:00
ed
9672b8c9b3 ensure nested symlinks are not broken during deletes;
when moving/deleting a file, all symlinked dupes are verified to ensure
this action does not break any symlinks, however it did this by checking
the realpath of each link. This was not good enough, since the deleted
file may be a part of a series of nested symlinks

this situation occurs because the deduper tries to keep relative
symlinks as close as possible, only traversing into parent/sibling
folders as required, which can lead to several levels of nested links
2023-12-08 01:11:03 +00:00
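The difference between `os.path.realpath` and walking each hop can be sketched with a hypothetical `link_chain` helper: realpath only yields the final target, but a delete needs to know whether *any* intermediate hop points at the file being removed:

```python
import os

def link_chain(path):
    """Return [path, hop1, hop2, ..., final_target], resolving
    one symlink level at a time (relative links resolved against
    the directory containing the link)."""
    hops = [path]
    while os.path.islink(hops[-1]):
        tgt = os.readlink(hops[-1])
        if not os.path.isabs(tgt):
            tgt = os.path.normpath(os.path.join(os.path.dirname(hops[-1]), tgt))
        hops.append(tgt)
    return hops
```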
ed
e70ecd98ef don't freak out when deleting a broken symlink,
also invoke the hooks with the correct lastmod time
2023-12-08 01:01:10 +00:00
ed
5f7ce78d7f avoid duplicate database entries when replacing files,
either from --daw, or by using u2c with --dr
2023-12-08 01:00:01 +00:00
ed
2077dca66f u2c: when deleting from server, heed request size limit 2023-12-08 00:54:57 +00:00
ed
91f010290c improve --help descriptions 2023-12-03 02:35:38 +00:00
ed
395e3386b7 mention --help for features not documented in readme
plus some small fixes to the packaging section
2023-12-02 23:32:31 +00:00
ed
a1dce0f24e update pkgs to 1.9.25 2023-12-01 23:51:35 +00:00
ed
c7770904e6 v1.9.25 2023-12-01 23:26:16 +00:00
ed
1690889ed8 remember scroll position when leaving the textfile viewer 2023-12-01 23:15:48 +00:00
ed
842817d9e3 improve handling of malicious clients;
* start banning malicious clients according to --ban-422
* reply with a blank 500 to stop firefox from retrying like 20 times
* allow Cc's in a few specific URL params (filenames, dirnames)
2023-12-01 23:08:16 +00:00
ed
5fc04152bd also handle NumpadEnter 2023-12-01 21:10:51 +00:00
ed
1be85bdb26 fix modal focus even more (now works on phones too) 2023-12-01 21:02:05 +00:00
ed
2eafaa88a2 update pkgs to 1.9.24 2023-12-01 02:16:24 +00:00
ed
900cc463c3 v1.9.24 2023-12-01 02:10:20 +00:00
ed
97b999c463 update pkgs to 1.9.23 2023-12-01 01:54:23 +00:00
ed
a7cef91b8b v1.9.23 2023-12-01 00:39:49 +00:00
ed
a4a112c0ee update pkgs to 1.9.22 2023-12-01 01:14:18 +00:00
ed
e6bcee28d6 v1.9.22 2023-12-01 00:31:02 +00:00
ed
626b5770a5 add --ftp-ipa 2023-11-30 23:36:46 +00:00
ed
c2f92cacc1 mention the new auth feature 2023-11-30 23:01:05 +00:00
ed
4f8a1f5f6a allow free text selection in modals by deferring focus 2023-11-30 22:41:16 +00:00
ed
4a98b73915 fix a bug previously concealed by window.event;
hitting enter would clear out an entire chain of modals,
because the event didn't get consumed like it should,
so let's make double sure that will be the case
2023-11-30 22:40:30 +00:00
ed
00812cb1da new option --ipa; client IP allowlist:
connections from outside the specified list of IP prefixes are rejected
(docker-friendly alternative to -i 127.0.0.1)

also mkdir any missing folders when logging to file
2023-11-30 20:45:43 +00:00
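An `--ipa`-style prefix check amounts to something like this hypothetical helper using the stdlib ipaddress module — reject the connection unless the client IP falls inside one of the given CIDR prefixes:

```python
import ipaddress

def ipa_allowed(ip, prefixes):
    """True if ip is inside any of the allowlisted CIDR prefixes."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(p, strict=False) for p in prefixes)
```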
ed
16766e702e add basic-docker-compose (#59) 2023-11-30 20:14:38 +00:00
ed
5e932a9504 highlight metavars in help text 2023-11-30 18:19:34 +00:00
ed
ccab44daf2 initial support for identity providers (#62):
add argument --hdr-au-usr which specifies a HTTP header to read
usernames from; entirely bypasses copyparty's password checks
for http/https clients (ftp/smb are unaffected)

users must exist in the copyparty config, passwords can be whatever

just the first step but already a bit useful on its own,
more to come in a few months
2023-11-30 18:18:47 +00:00
ed
8c52b88767 make linters happier 2023-11-30 17:33:07 +00:00
ed
c9fd26255b support environment variables mostly everywhere,
useful for docker/systemd stuff

also makes logfiles flush to disk per line by default;
can be disabled for a small performance gain with --no-logflush
2023-11-30 10:22:52 +00:00
ed
0b9b8dbe72 systemd: get rid of nftables portforwarding;
suggest letting copyparty bind 80/443 itself because nft hard
2023-11-30 10:13:14 +00:00
ed
b7723ac245 rely on filekeys for album-art over bluetooth;
will probably fail when some devices (sup iphone) stream to car stereos
but at least passwords won't end up somewhere unexpected this way
(plus, the js no longer uses the jank url to request waveforms)
2023-11-29 23:20:59 +00:00
ed
35b75c3db1 avoid palemoon bug on dragging a text selection;
"permission denied to access property preventDefault"
2023-11-26 20:22:59 +00:00
ed
f902779050 avoid potential dom confusion (ie8 is already no-js) 2023-11-26 20:08:52 +00:00
ed
fdddd36a5d update pkgs to 1.9.21 2023-11-25 14:48:41 +00:00
ed
c4ba123779 v1.9.21 2023-11-25 14:17:58 +00:00
ed
72e355eb2c prisonparty: prevent overlapping setup/teardown 2023-11-25 14:03:41 +00:00
ed
43d409a5d9 prisonparty accepts user/group names 2023-11-25 13:40:21 +00:00
ed
b1fffc2246 open textfiles inline in grid-view, closes #63;
also fix the Y hotkey (which converts all links in the list-view into
download links), making that apply to the grid-view as well
2023-11-25 13:09:12 +00:00
ed
edd3e53ab3 prisonparty: support zfs-ubuntu
* when bind-mounting, resolve any symlinks ($v/) and read target inode;
   for example merged /bin and /usr/bin
* add failsafe in case this test should break in new exciting ways;
   inspect `mount` for any instances of the jailed path
   (not /proc/mounts since that has funny space encoding)
* unmount in a while-loop because xargs freaks out if one of them fails
   * and systemd doesn't give us a /dev/stderr to write to anyways
2023-11-25 02:16:48 +00:00
ed
aa0b119031 update pkgs to 1.9.20 2023-11-21 23:44:56 +00:00
ed
eddce00765 v1.9.20 2023-11-21 23:25:41 +00:00
ed
6f4bde2111 fix infinite backspin on "previous track";
when playing the first track in a folder and hitting the previous track
button, it would keep switching through the previous folders infinitely
2023-11-21 23:23:51 +00:00
ed
f3035e8869 clear load-more buttons upon navigation (thx icxes) 2023-11-21 22:53:46 +00:00
ed
a9730499c0 don't suggest loading more search results beyond server cap 2023-11-21 22:38:35 +00:00
ed
b66843efe2 reduce cpu priority of ffmpeg, hooks, parsers 2023-11-21 22:21:33 +00:00
ed
cc1aaea300 update pkgs to 1.9.19 2023-11-19 12:45:32 +00:00
84 changed files with 4257 additions and 1131 deletions

3
.vscode/launch.json vendored

@@ -19,8 +19,7 @@
 "-emp",
 "-e2dsa",
 "-e2ts",
-"-mtp",
-".bpm=f,bin/mtag/audio-bpm.py",
+"-mtp=.bpm=f,bin/mtag/audio-bpm.py",
 "-aed:wark",
 "-vsrv::r:rw,ed:c,dupe",
 "-vdist:dist:r"


@@ -1,6 +1,6 @@
 MIT License

-Copyright (c) 2019 ed
+Copyright (c) 2019 ed <oss@ocv.me>

 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal

139
README.md

@@ -3,7 +3,7 @@
turn almost any device into a file server with resumable uploads/downloads using [*any*](#browser-support) web browser turn almost any device into a file server with resumable uploads/downloads using [*any*](#browser-support) web browser
* server only needs Python (2 or 3), all dependencies optional * server only needs Python (2 or 3), all dependencies optional
* 🔌 protocols: [http](#the-browser) // [ftp](#ftp-server) // [webdav](#webdav-server) // [smb/cifs](#smb-server) * 🔌 protocols: [http](#the-browser) // [webdav](#webdav-server) // [ftp](#ftp-server) // [tftp](#tftp-server) // [smb/cifs](#smb-server)
* 📱 [android app](#android-app) // [iPhone shortcuts](#ios-shortcuts) * 📱 [android app](#android-app) // [iPhone shortcuts](#ios-shortcuts)
👉 **[Get started](#quickstart)!** or visit the **[read-only demo server](https://a.ocv.me/pub/demo/)** 👀 running from a basement in finland 👉 **[Get started](#quickstart)!** or visit the **[read-only demo server](https://a.ocv.me/pub/demo/)** 👀 running from a basement in finland
@@ -26,6 +26,7 @@ turn almost any device into a file server with resumable uploads/downloads using
* [FAQ](#FAQ) - "frequently" asked questions * [FAQ](#FAQ) - "frequently" asked questions
* [accounts and volumes](#accounts-and-volumes) - per-folder, per-user permissions * [accounts and volumes](#accounts-and-volumes) - per-folder, per-user permissions
* [shadowing](#shadowing) - hiding specific subfolders * [shadowing](#shadowing) - hiding specific subfolders
* [dotfiles](#dotfiles) - unix-style hidden files/folders
* [the browser](#the-browser) - accessing a copyparty server using a web-browser * [the browser](#the-browser) - accessing a copyparty server using a web-browser
* [tabs](#tabs) - the main tabs in the ui * [tabs](#tabs) - the main tabs in the ui
* [hotkeys](#hotkeys) - the browser has the following hotkeys * [hotkeys](#hotkeys) - the browser has the following hotkeys
@@ -52,6 +53,7 @@ turn almost any device into a file server with resumable uploads/downloads using
* [ftp server](#ftp-server) - an FTP server can be started using `--ftp 3921` * [ftp server](#ftp-server) - an FTP server can be started using `--ftp 3921`
* [webdav server](#webdav-server) - with read-write support * [webdav server](#webdav-server) - with read-write support
* [connecting to webdav from windows](#connecting-to-webdav-from-windows) - using the GUI * [connecting to webdav from windows](#connecting-to-webdav-from-windows) - using the GUI
* [tftp server](#tftp-server) - a TFTP server (read/write) can be started using `--tftp 3969`
* [smb server](#smb-server) - unsafe, slow, not recommended for wan * [smb server](#smb-server) - unsafe, slow, not recommended for wan
* [browser ux](#browser-ux) - tweaking the ui * [browser ux](#browser-ux) - tweaking the ui
* [file indexing](#file-indexing) - enables dedup and music search ++ * [file indexing](#file-indexing) - enables dedup and music search ++
@@ -67,6 +69,7 @@ turn almost any device into a file server with resumable uploads/downloads using
* [event hooks](#event-hooks) - trigger a program on uploads, renames etc ([examples](./bin/hooks/)) * [event hooks](#event-hooks) - trigger a program on uploads, renames etc ([examples](./bin/hooks/))
* [upload events](#upload-events) - the older, more powerful approach ([examples](./bin/mtag/)) * [upload events](#upload-events) - the older, more powerful approach ([examples](./bin/mtag/))
* [handlers](#handlers) - redefine behavior with plugins ([examples](./bin/handlers/)) * [handlers](#handlers) - redefine behavior with plugins ([examples](./bin/handlers/))
* [identity providers](#identity-providers) - replace copyparty passwords with oauth and such
* [hiding from google](#hiding-from-google) - tell search engines you dont wanna be indexed * [hiding from google](#hiding-from-google) - tell search engines you dont wanna be indexed
* [themes](#themes) * [themes](#themes)
* [complete examples](#complete-examples) * [complete examples](#complete-examples)
@@ -74,7 +77,7 @@ turn almost any device into a file server with resumable uploads/downloads using
* [prometheus](#prometheus) - metrics/stats can be enabled * [prometheus](#prometheus) - metrics/stats can be enabled
* [packages](#packages) - the party might be closer than you think * [packages](#packages) - the party might be closer than you think
* [arch package](#arch-package) - now [available on aur](https://aur.archlinux.org/packages/copyparty) maintained by [@icxes](https://github.com/icxes) * [arch package](#arch-package) - now [available on aur](https://aur.archlinux.org/packages/copyparty) maintained by [@icxes](https://github.com/icxes)
* [fedora package](#fedora-package) - now [available on copr-pypi](https://copr.fedorainfracloud.org/coprs/g/copr/PyPI/) * [fedora package](#fedora-package) - currently **NOT** available on [copr-pypi](https://copr.fedorainfracloud.org/coprs/g/copr/PyPI/)
* [nix package](#nix-package) - `nix profile install github:9001/copyparty` * [nix package](#nix-package) - `nix profile install github:9001/copyparty`
* [nixos module](#nixos-module) * [nixos module](#nixos-module)
* [browser support](#browser-support) - TLDR: yes * [browser support](#browser-support) - TLDR: yes
@@ -111,7 +114,7 @@ just run **[copyparty-sfx.py](https://github.com/9001/copyparty/releases/latest/
* or install through pypi: `python3 -m pip install --user -U copyparty` * or install through pypi: `python3 -m pip install --user -U copyparty`
* or if you cannot install python, you can use [copyparty.exe](#copypartyexe) instead * or if you cannot install python, you can use [copyparty.exe](#copypartyexe) instead
* or install [on arch](#arch-package) [on fedora](#fedora-package) [on NixOS](#nixos-module) [through nix](#nix-package) * or install [on arch](#arch-package) [on NixOS](#nixos-module) [through nix](#nix-package)
* or if you are on android, [install copyparty in termux](#install-on-android) * or if you are on android, [install copyparty in termux](#install-on-android)
* or if you prefer to [use docker](./scripts/docker/) 🐋 you can do that too * or if you prefer to [use docker](./scripts/docker/) 🐋 you can do that too
* docker has all deps built-in, so skip this step: * docker has all deps built-in, so skip this step:
@@ -119,8 +122,8 @@ just run **[copyparty-sfx.py](https://github.com/9001/copyparty/releases/latest/
enable thumbnails (images/audio/video), media indexing, and audio transcoding by installing some recommended deps: enable thumbnails (images/audio/video), media indexing, and audio transcoding by installing some recommended deps:
* **Alpine:** `apk add py3-pillow ffmpeg` * **Alpine:** `apk add py3-pillow ffmpeg`
* **Debian:** `apt install python3-pil ffmpeg` * **Debian:** `apt install --no-install-recommends python3-pil ffmpeg`
* **Fedora:** `dnf install python3-pillow ffmpeg` * **Fedora:** rpmfusion + `dnf install python3-pillow ffmpeg`
* **FreeBSD:** `pkg install py39-sqlite3 py39-pillow ffmpeg`
* **MacOS:** `port install py-Pillow ffmpeg`
* **MacOS** (alternative): `brew install pillow ffmpeg`

@@ -147,18 +150,19 @@ you may also want these, especially on servers:

* [contrib/systemd/copyparty.service](contrib/systemd/copyparty.service) to run copyparty as a systemd service (see guide inside)
* [contrib/systemd/prisonparty.service](contrib/systemd/prisonparty.service) to run it in a chroot (for extra security)
* [contrib/openrc/copyparty](contrib/openrc/copyparty) to run copyparty on Alpine / Gentoo
* [contrib/rc/copyparty](contrib/rc/copyparty) to run copyparty on FreeBSD
* [nixos module](#nixos-module) to run copyparty on NixOS hosts
* [contrib/nginx/copyparty.conf](contrib/nginx/copyparty.conf) to [reverse-proxy](#reverse-proxy) behind nginx (for better https)

and remember to open the ports you want; here's a complete example including every feature copyparty has to offer:

```
firewall-cmd --permanent --add-port={80,443,3921,3923,3945,3990}/tcp  # --zone=libvirt
firewall-cmd --permanent --add-port=12000-12099/tcp  # --zone=libvirt
firewall-cmd --permanent --add-port={69,1900,3969,5353}/udp  # --zone=libvirt
firewall-cmd --reload
```

(69:tftp, 1900:ssdp, 3921:ftp, 3923:http/https, 3945:smb, 3969:tftp, 3990:ftps, 5353:mdns, 12000:passive-ftp)
## features

@@ -169,6 +173,7 @@ firewall-cmd --reload

* ☑ volumes (mountpoints)
* ☑ [accounts](#accounts-and-volumes)
* ☑ [ftp server](#ftp-server)
* ☑ [tftp server](#tftp-server)
* ☑ [webdav server](#webdav-server)
* ☑ [smb/cifs server](#smb-server)
* ☑ [qr-code](#qr-code) for quick access
@@ -340,7 +345,7 @@ upgrade notes

* yes, using [hooks](https://github.com/9001/copyparty/blob/hovudstraum/bin/hooks/wget.py)
* i want to learn python and/or programming and am considering looking at the copyparty source code on that occasion
* ```bash
  _| _ __ _ _|_
  (_| (_) | | (_) |_
  ```
@@ -366,10 +371,12 @@ permissions:

* `w` (write): upload files, move files *into* this folder
* `m` (move): move files/folders *from* this folder
* `d` (delete): delete files/folders
* `.` (dots): user can ask to show dotfiles in directory listings
* `g` (get): only download files, cannot see folder contents or zip/tar
* `G` (upget): same as `g` except uploaders get to see their own [filekeys](#filekeys) (see `fk` in examples below)
* `h` (html): same as `g` except folders return their index.html, and filekeys are not necessary for index.html
* `a` (admin): can see upload time, uploader IPs, config-reload
* `A` ("all"): same as `rwmda.` (read/write/move/delete/admin/dotfiles)

examples:
* add accounts named u1, u2, u3 with passwords p1, p2, p3: `-a u1:p1 -a u2:p2 -a u3:p3`
@@ -397,6 +404,17 @@ hiding specific subfolders by mounting another volume on top of them

for example `-v /mnt::r -v /var/empty:web/certs:r` mounts the server folder `/mnt` as the webroot, but another volume is mounted at `/web/certs` -- so visitors can only see the contents of `/mnt` and `/mnt/web` (at URLs `/` and `/web`), but not `/mnt/web/certs` because URL `/web/certs` is mapped to `/var/empty`

## dotfiles

unix-style hidden files/folders; any name starting with a dot

anyone can access these if they know the name, but they normally don't appear in directory listings

a client can request to see dotfiles in directory listings if the global option `-ed` is specified, the volume has the `dots` volflag, or the user has the `.` permission

dotfiles do not appear in search results unless one of the above is true, **and** the global option / volflag `dotsrch` is set
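the gist of the convention can be sketched in a few lines of Python; this is just an illustration of the filtering rule described above, not copyparty's actual code, and the function name is made up:

```python
def visible(names, show_dots=False):
    # hide unix-style dotfiles unless the client asked for them
    # (in copyparty that request is gated by -ed / the dots volflag / the "." permission)
    return [n for n in names if show_dots or not n.startswith(".")]

print(visible([".hist", "song.opus", ".folder.jpg", "cover.jpg"]))
# → ['song.opus', 'cover.jpg']
```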
# the browser

accessing a copyparty server using a web-browser

@@ -509,7 +527,7 @@ it does static images with Pillow / pyvips / FFmpeg, and uses FFmpeg for video f

audio files are converted into spectrograms using FFmpeg unless you `--no-athumb` (and some FFmpeg builds may need `--th-ff-swr`)

images with the following names (see `--th-covers`) become the thumbnail of the folder they're in: `folder.png`, `folder.jpg`, `cover.png`, `cover.jpg`
* and, if you enable [file indexing](#file-indexing), it will also try those names as dotfiles (`.folder.jpg` and so on), and then fall back on the first picture in the folder (if it has any pictures at all)

in the grid/thumbnail view, if the audio player panel is open, songs will start playing when clicked
* indicated by the audio files having the ▶ icon instead of 💾
@@ -537,7 +555,7 @@ select which type of archive you want in the `[⚙️] config` tab:

* gzip default level is `3` (0=fast, 9=best), change with `?tar=gz:9`
* xz default level is `1` (0=fast, 9=best), change with `?tar=xz:9`
* bz2 default level is `2` (1=fast, 9=best), change with `?tar=bz2:9`
* hidden files ([dotfiles](#dotfiles)) are excluded unless the account is allowed to list them
* `up2k.db` and `dir.txt` are always excluded
* bsdtar supports streaming unzipping: `curl foo?zip=utf8 | bsdtar -xv`
  * good, because copyparty's zip is faster than tar on small files
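the `?tar=FMT:LEVEL` query syntax from the list above composes mechanically; here's a tiny hypothetical helper to build such URLs (the syntax is from the list above, the helper itself is not part of copyparty):

```python
def archive_url(folder_url, fmt=None, level=None):
    # "?tar" for plain tar, "?tar=gz" for gzip, "?tar=gz:9" for gzip level 9
    q = "?tar" if not fmt else f"?tar={fmt}"
    if fmt and level is not None:
        q += f":{level}"
    return folder_url + q

print(archive_url("http://127.0.0.1:3923/music", "gz", 9))
# → http://127.0.0.1:3923/music?tar=gz:9
```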
@@ -723,7 +741,8 @@ some hilights:

click the `play` link next to an audio file, or copy the link target to [share it](https://a.ocv.me/pub/demo/music/Ubiktune%20-%20SOUNDSHOCK%202%20-%20FM%20FUNK%20TERRROR!!/#af-1fbfba61&t=18) (optionally with a timestamp to start playing from, like that example does)

open the `[🎺]` media-player-settings tab to configure it,
* "switches":
  * `[🔀]` shuffles the files inside each folder
  * `[preload]` starts loading the next track when it's about to end, reduces the silence between songs
  * `[full]` does a full preload by downloading the entire next file; good for unreliable connections, bad for slow connections
  * `[~s]` toggles the seekbar waveform display
@@ -733,10 +752,12 @@ open the `[🎺]` media-player-settings tab to configure it,
  * `[art]` shows album art on the lockscreen
  * `[🎯]` keeps the playing song scrolled into view (good when using the player as a taskbar dock)
  * `[⟎]` shrinks the playback controls
* "buttons":
  * `[uncache]` may fix songs that won't play correctly due to bad files in browser cache
* "at end of folder":
  * `[loop]` keeps looping the folder
  * `[next]` plays into the next folder
* "transcode":
  * `[flac]` converts `flac` and `wav` files into opus
  * `[aac]` converts `aac` and `m4a` files into opus
  * `[oth]` converts all other known formats into opus
@@ -822,6 +843,9 @@ using arguments or config files, or a mix of both:

* or click the `[reload cfg]` button in the control-panel if the user has `a`/admin in any volume
* changes to the `[global]` config section require a restart to take effect

**NB:** as humongous as this readme is, there are also a lot of undocumented features. Run copyparty with `--help` to see all available global options; all of those can be used in the `[global]` section of config files, and everything listed in `--help-flags` can be used in volumes as volflags.
* if running in docker/podman, try this: `docker run --rm -it copyparty/ac --help`
## zeroconf

@@ -921,6 +945,28 @@ known client bugs:

* latin-1 is fine, hiragana is not (not even as shift-jis on japanese xp)
## tftp server

a TFTP server (read/write) can be started using `--tftp 3969` (you probably want [ftp](#ftp-server) instead unless you are *actually* communicating with hardware from the 90s (in which case we should definitely hang some time))

> that makes this the first RTX DECT Base that has been updated using copyparty 🎉

* based on [partftpy](https://github.com/9001/partftpy)
* no accounts; read from world-readable folders, write to world-writable, overwrite in world-deletable
* needs a dedicated port (cannot share with the HTTP/HTTPS API)
  * run as root to use the spec-recommended port `69` (nice)
* can reply from a predefined portrange (good for firewalls)
* only supports the binary/octet/image transfer mode (no netascii)
* [RFC 7440](https://datatracker.ietf.org/doc/html/rfc7440) is **not** supported, so transfers will be extremely slow over WAN
  * expect 1100 KiB/s over 1000BASE-T, 400-500 KiB/s over wifi, 200 on bad wifi

some recommended TFTP clients:
* windows: `tftp.exe` (you probably already have it)
* linux: `tftp-hpa`, `atftp`
* `tftp 127.0.0.1 3969 -v -m binary -c put firmware.bin`
* `curl tftp://127.0.0.1:3969/firmware.bin` (read-only)
## smb server

unsafe, slow, not recommended for wan, enable with `--smb` for read-only or `--smbw` for read-write

@@ -1009,6 +1055,8 @@ to save some time, you can provide a regex pattern for filepaths to only index

similarly, you can fully ignore files/folders using `--no-idx [...]` and `:c,noidx=\.iso$`
* when running on macos, all the usual apple metadata files are excluded by default

if you set `--no-hash [...]` globally, you can enable hashing for specific volumes using flag `:c,nohash=`
### filesystem guards

@@ -1193,6 +1241,17 @@ redefine behavior with plugins ([examples](./bin/handlers/))

replace 404 and 403 errors with something completely different (that's it for now)

## identity providers

replace copyparty passwords with oauth and such

work is [ongoing](https://github.com/9001/copyparty/issues/62) to support authenticating / authorizing users based on a separate authentication proxy, which makes it possible to support oauth, single-sign-on, etc.

it is currently possible to specify `--idp-h-usr x-username`; copyparty will then skip password validation and blindly trust the username specified in the `X-Username` request header

the remaining stuff (accepting user groups through another header, creating volumes on the fly) is still to-do; configuration will probably [look like this](./docs/examples/docker/idp/copyparty.conf)
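to illustrate the mechanism, here's a hypothetical snippet showing the kind of request a trusted auth proxy would forward to copyparty; the header name matches the `--idp-h-usr x-username` example above, but the URL and username are made up:

```python
from urllib.request import Request

# hypothetical: a reverse-proxy authenticates the user, then forwards the
# request to copyparty with the trusted header configured via --idp-h-usr
req = Request("http://127.0.0.1:3923/", method="GET")
req.add_header("X-Username", "alice")  # copyparty trusts this blindly

# note: urllib capitalizes header names internally ("X-username")
assert req.get_header("X-username") == "alice"
```

the takeaway is that the proxy, not copyparty, is responsible for stripping any `X-Username` header the client itself tries to send.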
## hiding from google

tell search engines you don't wanna be indexed, either using the good old [robots.txt](https://www.robotstxt.org/robotstxt.html) or through copyparty settings:

@@ -1252,8 +1311,8 @@ see the top of [./copyparty/web/browser.css](./copyparty/web/browser.css) where

* anyone can upload, and receive "secret" links for each upload they do:
  `python copyparty-sfx.py -e2dsa -v .::wG:c,fk=8`
* anyone can browse (`r`), only `kevin` (password `okgo`) can upload/move/delete (`A`) files:
  `python copyparty-sfx.py -e2dsa -a kevin:okgo -v .::r:A,kevin`
* read-only music server:
  `python copyparty-sfx.py -v /mnt/nas/music:/music:r -e2dsa -e2ts --no-robots --force-js --theme 2`
@@ -1353,23 +1412,29 @@ note: the following metrics are counted incorrectly if multiprocessing is enable

the party might be closer than you think

if your distro/OS is not mentioned below, there might be some hints in the [«on servers»](#on-servers) section

## arch package

now [available on aur](https://aur.archlinux.org/packages/copyparty) maintained by [@icxes](https://github.com/icxes)

it comes with a [systemd service](./contrib/package/arch/copyparty.service) and expects to find one or more [config files](./docs/example.conf) in `/etc/copyparty.d/`
## fedora package

currently **NOT** available on [copr-pypi](https://copr.fedorainfracloud.org/coprs/g/copr/PyPI/); fedora is having issues with their build servers, and it won't be fixed for several months

if you previously installed copyparty from copr, you may run one of the following commands to upgrade to a more recent version:

```bash
dnf install https://ocv.me/copyparty/fedora/37/python3-copyparty.fc37.noarch.rpm
dnf install https://ocv.me/copyparty/fedora/38/python3-copyparty.fc38.noarch.rpm
dnf install https://ocv.me/copyparty/fedora/39/python3-copyparty.fc39.noarch.rpm
```

to run copyparty as a service, use the [systemd service scripts](https://github.com/9001/copyparty/tree/hovudstraum/contrib/systemd), just replace `/usr/bin/python3 /usr/local/bin/copyparty-sfx.py` with `/usr/bin/copyparty`
## nix package

@@ -1499,15 +1564,16 @@ TLDR: yes

| navpane | - | yep | yep | yep | yep | yep | yep | yep |
| image viewer | - | yep | yep | yep | yep | yep | yep | yep |
| video player | - | yep | yep | yep | yep | yep | yep | yep |
| markdown editor | - | - | `*2` | `*2` | yep | yep | yep | yep |
| markdown viewer | - | `*2` | `*2` | `*2` | yep | yep | yep | yep |
| play mp3/m4a | - | yep | yep | yep | yep | yep | yep | yep |
| play ogg/opus | - | - | - | - | yep | yep | `*3` | yep |
| **= feature =** | ie6 | ie9 | ie10 | ie11 | ff 52 | c 49 | iOS | Andr |

* internet explorer 6 through 8 behave the same
* firefox 52 and chrome 49 are the final winxp versions
* `*1` yes, but extremely slow (ie10: `1 MiB/s`, ie11: `270 KiB/s`)
* `*2` only able to do plaintext documents (no markdown rendering)
* `*3` iOS 11 and newer, opus only, and requires FFmpeg on the server

quick summary of more eccentric web-browsers trying to view a directory index:
@@ -1533,10 +1599,12 @@ interact with copyparty using non-browser clients

* `var xhr = new XMLHttpRequest(); xhr.open('POST', '//127.0.0.1:3923/msgs?raw'); xhr.send('foo');`

* curl/wget: upload some files (post=file, chunk=stdin)
  * `post(){ curl -F f=@"$1" http://127.0.0.1:3923/?pw=wark;}`
    `post movie.mkv` (gives HTML in return)
  * `post(){ curl -F f=@"$1" 'http://127.0.0.1:3923/?want=url&pw=wark';}`
    `post movie.mkv` (gives hotlink in return)
  * `post(){ curl -H pw:wark -H rand:8 -T "$1" http://127.0.0.1:3923/;}`
    `post movie.mkv` (randomized filename)
  * `post(){ wget --header='pw: wark' --post-file="$1" -O- http://127.0.0.1:3923/?raw;}`
    `post movie.mkv`
  * `chunk(){ curl -H pw:wark -T- http://127.0.0.1:3923/;}`

@@ -1558,6 +1626,10 @@ interact with copyparty using non-browser clients

* sharex (screenshot utility): see [./contrib/sharex.sxcu](contrib/#sharexsxcu)
* contextlet (web browser integration); see [contrib contextlet](contrib/#send-to-cppcontextletjson)
* [igloo irc](https://iglooirc.com/): Method: `post` Host: `https://you.com/up/?want=url&pw=hunter2` Multipart: `yes` File parameter: `f`

copyparty returns a truncated sha512sum of your PUT/POST as base64; you can generate the same checksum locally to verify uploads:

    b512(){ printf "$((sha512sum||shasum -a512)|sed -E 's/ .*//;s/(..)/\\x\1/g')"|base64|tr '+/' '-_'|head -c44;}
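the same truncated-sha512 scheme (urlsafe base64 of the raw digest, first 44 characters) can be mirrored in Python; this is a local sketch matching the shell one-liner, not code from copyparty itself:

```python
import base64
import hashlib

def b512(data: bytes) -> str:
    # sha512 the payload, urlsafe-base64 the raw digest, keep the first 44 chars
    digest = hashlib.sha512(data).digest()
    return base64.urlsafe_b64encode(digest)[:44].decode("ascii")

# compare this against the checksum copyparty returns for the upload
print(b512(b"hello world"))
```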
@@ -1576,7 +1648,7 @@ the commandline uploader [u2c.py](https://github.com/9001/copyparty/tree/hovudst

alternatively there is [rclone](./docs/rclone.md) which allows for bidirectional sync and is *way* more flexible (stream files straight from sftp/s3/gcs to copyparty, ...), although there is no integrity check and it won't work with files over 100 MiB if copyparty is behind cloudflare
* starting from rclone v1.63, rclone is faster than u2c.py on low-latency connections

## mount as drive

@@ -1585,7 +1657,7 @@ a remote copyparty server as a local filesystem; go to the control-panel and cl

alternatively, some alternatives roughly sorted by speed (unreproducible benchmark), best first:

* [rclone-webdav](./docs/rclone.md) (25s), read/WRITE (rclone v1.63 or later)
* [rclone-http](./docs/rclone.md) (26s), read-only
* [partyfuse.py](./bin/#partyfusepy) (35s), read-only
* [rclone-ftp](./docs/rclone.md) (47s), read/WRITE
@@ -1625,6 +1697,7 @@ below are some tweaks roughly ordered by usefulness:

* `-q` disables logging and can help a bunch, even when combined with `-lo` to redirect logs to file
* `--hist` pointing to a fast location (ssd) will make directory listings and searches faster when `-e2d` or `-e2t` is set
  * and also makes thumbnails load faster, regardless of e2d/e2t
* `--no-hash .` when indexing a network-disk if you don't care about the actual filehashes and only want the names/tags searchable
* `--no-htp --hash-mt=0 --mtag-mt=1 --th-mt=1` minimizes the number of threads; can help in some eccentric environments (like the vscode debugger)
* `-j0` enables multiprocessing (actual multithreading), can reduce latency to `20+80/numCores` percent and generally improve performance in cpu-intensive workloads, for example:
@@ -1632,7 +1705,7 @@ below are some tweaks roughly ordered by usefulness:
  * simultaneous downloads and uploads saturating a 20gbps connection
  * if `-e2d` is enabled, `-j2` gives 4x performance for directory listings; `-j4` gives 16x
  * ...however it also increases the server/filesystem/HDD load during uploads, and adds an overhead to internal communication, so it is usually a better idea not to
* using [pypy](https://www.pypy.org/) instead of [cpython](https://www.python.org/) *can* be 70% faster for some workloads, but slower for many others
  * and pypy can sometimes crash on startup with `-j0` (TODO make issue)
* and pypy can sometimes crash on startup with `-j0` (TODO make issue) * and pypy can sometimes crash on startup with `-j0` (TODO make issue)
@@ -1641,7 +1714,7 @@ below are some tweaks roughly ordered by usefulness:
when uploading files, when uploading files,
* chrome is recommended, at least compared to firefox: * chrome is recommended (unfortunately), at least compared to firefox:
* up to 90% faster when hashing, especially on SSDs * up to 90% faster when hashing, especially on SSDs
* up to 40% faster when uploading over extremely fast internets * up to 40% faster when uploading over extremely fast internets
* but [u2c.py](https://github.com/9001/copyparty/blob/hovudstraum/bin/u2c.py) can be 40% faster than chrome again * but [u2c.py](https://github.com/9001/copyparty/blob/hovudstraum/bin/u2c.py) can be 40% faster than chrome again
@@ -1843,7 +1916,7 @@ can be convenient on machines where installing python is problematic, however is
meanwhile [copyparty-sfx.py](https://github.com/9001/copyparty/releases/latest/download/copyparty-sfx.py) instead relies on your system python which gives better performance and will stay safe as long as you keep your python install up-to-date meanwhile [copyparty-sfx.py](https://github.com/9001/copyparty/releases/latest/download/copyparty-sfx.py) instead relies on your system python which gives better performance and will stay safe as long as you keep your python install up-to-date
then again, if you are already into downloading shady binaries from the internet, you may also want my [minimal builds](./scripts/pyinstaller#ffmpeg) of [ffmpeg](https://ocv.me/stuff/bin/ffmpeg.exe) and [ffprobe](https://ocv.me/stuff/bin/ffprobe.exe) which enables copyparty to extract multimedia-info, do audio-transcoding, and thumbnails/spectrograms/waveforms, however it's much better to instead grab a [recent official build](https://github.com/BtbN/FFmpeg-Builds/releases/download/latest/ffmpeg-master-latest-win64-gpl.zip) every once ina while if you can afford the size then again, if you are already into downloading shady binaries from the internet, you may also want my [minimal builds](./scripts/pyinstaller#ffmpeg) of [ffmpeg](https://ocv.me/stuff/bin/ffmpeg.exe) and [ffprobe](https://ocv.me/stuff/bin/ffprobe.exe) which enables copyparty to extract multimedia-info, do audio-transcoding, and thumbnails/spectrograms/waveforms, however it's much better to instead grab a [recent official build](https://www.gyan.dev/ffmpeg/builds/ffmpeg-git-full.7z) every once ina while if you can afford the size
# install on android
@@ -207,7 +207,7 @@ def examples():

def main():
    global NC, BY_PATH  # pylint: disable=global-statement

    os.system("")
    print()

@@ -282,7 +282,8 @@ def main():

    if ver == "corrupt":
        die("{} database appears to be corrupt, sorry")

    iver = int(ver)
    if iver < DB_VER1 or iver > DB_VER2:
        m = f"{n} db is version {ver}, this tool only supports versions between {DB_VER1} and {DB_VER2}, please upgrade it with copyparty first"
        die(m)
@@ -53,7 +53,13 @@ from urllib.parse import unquote_to_bytes as unquote

WINDOWS = sys.platform == "win32"
MACOS = platform.system() == "Darwin"
UTC = timezone.utc


def print(*args, **kwargs):
    try:
        builtins.print(*list(args), **kwargs)
    except:
        builtins.print(termsafe(" ".join(str(x) for x in args)), **kwargs)


print(
@@ -65,6 +71,13 @@ print(
)


def null_log(msg):
    pass


info = log = dbg = null_log


try:
    from fuse import FUSE, FuseOSError, Operations
except:
@@ -84,13 +97,6 @@ except:
    raise


def termsafe(txt):
    try:
        return txt.encode(sys.stdout.encoding, "backslashreplace").decode(
@@ -119,10 +125,6 @@ def fancy_log(msg):
    print("{:10.6f} {} {}\n".format(time.time() % 900, rice_tid(), msg), end="")


def hexler(binary):
    return binary.replace("\r", "\\r").replace("\n", "\\n")
    return " ".join(["{}\033[36m{:02x}\033[0m".format(b, ord(b)) for b in binary])
@@ -12,13 +12,13 @@ done
help() { cat <<'EOF' help() { cat <<'EOF'
usage: usage:
./prisonparty.sh <ROOTDIR> <UID> <GID> [VOLDIR [VOLDIR...]] -- python3 copyparty-sfx.py [...] ./prisonparty.sh <ROOTDIR> <USER|UID> <GROUP|GID> [VOLDIR [VOLDIR...]] -- python3 copyparty-sfx.py [...]
example: example:
./prisonparty.sh /var/lib/copyparty-jail 1000 1000 /mnt/nas/music -- python3 copyparty-sfx.py -v /mnt/nas/music::rwmd ./prisonparty.sh /var/lib/copyparty-jail cpp cpp /mnt/nas/music -- python3 copyparty-sfx.py -v /mnt/nas/music::rwmd
example for running straight from source (instead of using an sfx): example for running straight from source (instead of using an sfx):
PYTHONPATH=$PWD ./prisonparty.sh /var/lib/copyparty-jail 1000 1000 /mnt/nas/music -- python3 -um copyparty -v /mnt/nas/music::rwmd PYTHONPATH=$PWD ./prisonparty.sh /var/lib/copyparty-jail cpp cpp /mnt/nas/music -- python3 -um copyparty -v /mnt/nas/music::rwmd
note that if you have python modules installed as --user (such as bpm/key detectors), note that if you have python modules installed as --user (such as bpm/key detectors),
you should add /home/foo/.local as a VOLDIR you should add /home/foo/.local as a VOLDIR
@@ -28,6 +28,16 @@ exit 1
} }
errs=
for c in awk chroot dirname getent lsof mknod mount realpath sed sort stat uniq; do
command -v $c >/dev/null || {
echo ERROR: command not found: $c
errs=1
}
done
[ $errs ] && exit 1
# read arguments # read arguments
trap help EXIT trap help EXIT
jail="$(realpath "$1")"; shift jail="$(realpath "$1")"; shift
@@ -58,11 +68,18 @@ cpp="$1"; shift
} }
trap - EXIT trap - EXIT
usr="$(getent passwd $uid | cut -d: -f1)"
[ "$usr" ] || { echo "ERROR invalid username/uid $uid"; exit 1; }
uid="$(getent passwd $uid | cut -d: -f3)"
grp="$(getent group $gid | cut -d: -f1)"
[ "$grp" ] || { echo "ERROR invalid groupname/gid $gid"; exit 1; }
gid="$(getent group $gid | cut -d: -f3)"
# debug/vis # debug/vis
echo echo
echo "chroot-dir = $jail" echo "chroot-dir = $jail"
echo "user:group = $uid:$gid" echo "user:group = $uid:$gid ($usr:$grp)"
echo " copyparty = $cpp" echo " copyparty = $cpp"
echo echo
printf '\033[33m%s\033[0m\n' "copyparty can access these folders and all their subdirectories:" printf '\033[33m%s\033[0m\n' "copyparty can access these folders and all their subdirectories:"
@@ -80,34 +97,39 @@ jail="${jail%/}"
# bind-mount system directories and volumes
+for a in {1..30}; do mkdir "$jail/.prisonlock" && break; sleep 0.1; done
printf '%s\n' "${sysdirs[@]}" "${vols[@]}" | sed -r 's`/$``' | LC_ALL=C sort | uniq |
while IFS= read -r v; do
	[ -e "$v" ] || {
		printf '\033[1;31mfolder does not exist:\033[0m %s\n' "$v"
		continue
	}
-	i1=$(stat -c%D.%i "$v" 2>/dev/null || echo a)
-	i2=$(stat -c%D.%i "$jail$v" 2>/dev/null || echo b)
+	i1=$(stat -c%D.%i "$v/" 2>/dev/null || echo a)
+	i2=$(stat -c%D.%i "$jail$v/" 2>/dev/null || echo b)
+	# echo "v [$v] i1 [$i1] i2 [$i2]"
	[ $i1 = $i2 ] && continue
+	mount | grep -qF " $jail$v " && echo wtf $i1 $i2 $v && continue
	mkdir -p "$jail$v"
	mount --bind "$v" "$jail$v"
done
+rmdir "$jail/.prisonlock" || true
cln() {
-	rv=$?
-	wait -f -p rv $p || true
+	trap - EXIT
+	wait -f -n $p && rv=0 || rv=$?
	cd /
	echo "stopping chroot..."
-	lsof "$jail" | grep -F "$jail" &&
+	for a in {1..30}; do mkdir "$jail/.prisonlock" && break; sleep 0.1; done
+	lsof "$jail" 2>/dev/null | grep -F "$jail" &&
	echo "chroot is in use; will not unmount" ||
	{
		mount | grep -F " on $jail" |
		awk '{sub(/ type .*/,"");sub(/.* on /,"");print}' |
-		LC_ALL=C sort -r | tee /dev/stderr | tr '\n' '\0' | xargs -r0 umount
+		LC_ALL=C sort -r | while IFS= read -r v; do
+			umount "$v" && echo "umount OK: $v"
+		done
	}
+	rmdir "$jail/.prisonlock" || true
	exit $rv
}
trap cln EXIT
@@ -128,8 +150,8 @@ chmod 777 "$jail/tmp"
# run copyparty
-export HOME=$(getent passwd $uid | cut -d: -f6)
-export USER=$(getent passwd $uid | cut -d: -f1)
+export HOME="$(getent passwd $uid | cut -d: -f6)"
+export USER="$usr"
export LOGNAME="$USER"
#echo "pybin [$pybin]"
#echo "pyarg [$pyarg]"
@@ -137,5 +159,5 @@ export LOGNAME="$USER"
chroot --userspec=$uid:$gid "$jail" "$pybin" $pyarg "$cpp" "$@" &
p=$!
trap 'kill -USR1 $p' USR1
-trap 'kill $p' INT TERM
+trap 'trap - INT TERM; kill $p' INT TERM
wait

View File

@@ -1,8 +1,8 @@
#!/usr/bin/env python3
from __future__ import print_function, unicode_literals
-S_VERSION = "1.11"
-S_BUILD_DT = "2023-11-11"
+S_VERSION = "1.14"
+S_BUILD_DT = "2024-01-27"
"""
u2c.py: upload to copyparty
@@ -105,8 +105,8 @@ class File(object):
# set by handshake
self.recheck = False  # duplicate; redo handshake after all files done
self.ucids = []  # type: list[str]  # chunks which need to be uploaded
-self.wark = None  # type: str
-self.url = None  # type: str
+self.wark = ""  # type: str
+self.url = ""  # type: str
self.nhs = 0
# set by upload
@@ -223,6 +223,7 @@ class MTHash(object):
def hash_at(self, nch):
	f = self.f
+	assert f
	ofs = ofs0 = nch * self.csz
	hashobj = hashlib.sha512()
	chunk_sz = chunk_rem = min(self.csz, self.sz - ofs)
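For context, `MTHash.hash_at` hashes one fixed-size chunk of the open file with sha512. A self-contained sketch of that per-chunk loop (plain hexdigest is used here for brevity, whereas u2c itself base64-encodes a truncated digest):

```python
import hashlib
import io

def hash_chunk(f, ofs, chunk_sz):
    # sha512 over one chunk of an open binary file, reading it
    # in 1 MiB pieces roughly like MTHash.hash_at does
    f.seek(ofs)
    hashobj = hashlib.sha512()
    rem = chunk_sz
    while rem > 0:
        buf = f.read(min(rem, 1024 * 1024))
        if not buf:
            break  # short read: end of file
        hashobj.update(buf)
        rem -= len(buf)
    return hashobj.hexdigest()

data = b"x" * 3_000_000
print(hash_chunk(io.BytesIO(data), 0, 1048576) ==
      hashlib.sha512(data[:1048576]).hexdigest())  # True
```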
@@ -463,7 +464,7 @@ def quotep(btxt):
if not PY2:
	quot1 = quot1.encode("ascii")
-return quot1.replace(b" ", b"+")
+return quot1.replace(b" ", b"+")  # type: ignore
# from copyparty/util.py # from copyparty/util.py
@@ -500,7 +501,7 @@ def up2k_chunksize(filesize):
# mostly from copyparty/up2k.py
def get_hashlist(file, pcb, mth):
-    # type: (File, any, any) -> None
+    # type: (File, Any, Any) -> None
    """generates the up2k hashlist from file contents, inserts it into `file`"""
    chunk_sz = up2k_chunksize(file.size)
@@ -559,8 +560,11 @@ def handshake(ar, file, search):
}
if search:
	req["srch"] = 1
-elif ar.dr:
-	req["replace"] = True
+else:
+	if ar.touch:
+		req["umod"] = True
+	if ar.dr:
+		req["replace"] = True
headers = {"Content-Type": "text/plain"}  # <=1.5.1 compat
if pw:
@@ -873,6 +877,8 @@ class Ctl(object):
self.st_hash = [file, ofs]
def hasher(self):
+	ptn = re.compile(self.ar.x.encode("utf-8"), re.I) if self.ar.x else None
+	sep = "{0}".format(os.sep).encode("ascii")
	prd = None
	ls = {}
	for top, rel, inf in self.filegen:
@@ -905,13 +911,29 @@ class Ctl(object):
if self.ar.drd:
	dp = os.path.join(top, rd)
	lnodes = set(os.listdir(dp))
-	bnames = [x for x in ls if x not in lnodes]
-	if bnames:
-		vpath = self.ar.url.split("://")[-1].split("/", 1)[-1]
-		names = [x.decode("utf-8", "replace") for x in bnames]
-		locs = [vpath + srd + "/" + x for x in names]
-		print("DELETING ~{0}/#{1}".format(srd, len(names)))
-		req_ses.post(self.ar.url + "?delete", json=locs)
+	if ptn:
+		zs = dp.replace(sep, b"/").rstrip(b"/") + b"/"
+		zls = [zs + x for x in lnodes]
+		zls = [x for x in zls if not ptn.match(x)]
+		lnodes = [x.split(b"/")[-1] for x in zls]
+	bnames = [x for x in ls if x not in lnodes and x != b".hist"]
+	vpath = self.ar.url.split("://")[-1].split("/", 1)[-1]
+	names = [x.decode("utf-8", "replace") for x in bnames]
+	locs = [vpath + srd + "/" + x for x in names]
+	while locs:
+		req = locs
+		while req:
+			print("DELETING ~%s/#%s" % (srd, len(req)))
+			r = req_ses.post(self.ar.url + "?delete", json=req)
+			if r.status_code == 413 and "json 2big" in r.text:
+				print(" (delete request too big; slicing...)")
+				req = req[: len(req) // 2]
+				continue
+			elif not r:
+				t = "delete request failed: %r %s"
+				raise Exception(t % (r, r.text))
+			break
+		locs = locs[len(req) :]
if isdir:
	continue
@@ -1045,14 +1067,13 @@ class Ctl(object):
self.uploader_busy += 1
self.t0_up = self.t0_up or time.time()
-zs = "{0}/{1}/{2}/{3} {4}/{5} {6}"
-stats = zs.format(
+stats = "%d/%d/%d/%d %d/%d %s" % (
	self.up_f,
	len(self.recheck),
	self.uploader_busy,
	self.nfiles - self.up_f,
-	int(self.nbytes / (1024 * 1024)),
-	int((self.nbytes - self.up_b) / (1024 * 1024)),
+	self.nbytes // (1024 * 1024),
+	(self.nbytes - self.up_b) // (1024 * 1024),
	self.eta,
)
@@ -1116,8 +1137,9 @@ source file/folder selection uses rsync syntax, meaning that:
ap.add_argument("-v", action="store_true", help="verbose")
ap.add_argument("-a", metavar="PASSWORD", help="password or $filepath")
ap.add_argument("-s", action="store_true", help="file-search (disables upload)")
-ap.add_argument("-x", type=unicode, metavar="REGEX", default="", help="skip file if filesystem-abspath matches REGEX, example: '.*/\.hist/.*'")
+ap.add_argument("-x", type=unicode, metavar="REGEX", default="", help="skip file if filesystem-abspath matches REGEX, example: '.*/\\.hist/.*'")
ap.add_argument("--ok", action="store_true", help="continue even if some local files are inaccessible")
+ap.add_argument("--touch", action="store_true", help="if last-modified timestamps differ, push local to server (need write+delete perms)")
ap.add_argument("--version", action="store_true", help="show version and exit")
ap = app.add_argument_group("compatibility")
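The retry loop added to the dropped-files deletion above halves any batch the server rejects as oversized. That logic can be isolated like this; `post` is a stub standing in for the `req_ses.post(url + "?delete", json=...)` call and returns a `(status_code, text)` pair:

```python
def delete_batches(locs, post):
    """send delete batches, halving any batch the server rejects
    with HTTP 413 "json 2big", like u2c.py 1.14 does; assumes a
    single path always fits in one request"""
    deleted = []
    while locs:
        req = locs
        while req:
            status, text = post(req)
            if status == 413 and "json 2big" in text:
                req = req[: len(req) // 2]  # too big; retry with half
                continue
            if status >= 400:
                raise Exception("delete request failed: %s %s" % (status, text))
            deleted.extend(req)
            break
        locs = locs[len(req):]  # continue with whatever is left
    return deleted
```

With a stub that rejects batches larger than two paths, all five paths still get deleted, just in smaller requests.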

View File

@@ -66,7 +66,7 @@ def main():
ofs = ln.find("{")
j = json.loads(ln[ofs:])
except:
-	pass
+	continue
w = j["wark"]
if db.execute("select w from up where w = ?", (w,)).fetchone():

View File

@@ -22,6 +22,11 @@ however if your copyparty is behind a reverse-proxy, you may want to use [`share
* `URL`: full URL to the root folder (with trailing slash) followed by `$regex:1|1$`
* `pw`: password (remove `Parameters` if anon-write)
+### [`send-to-cpp.contextlet.json`](send-to-cpp.contextlet.json)
+* browser integration, kind of? custom rightclick actions and stuff
+* rightclick a pic and send it to copyparty straight from your browser
+* for the [contextlet](https://addons.mozilla.org/en-US/firefox/addon/contextlets/) firefox extension
### [`media-osd-bgone.ps1`](media-osd-bgone.ps1)
* disables the [windows OSD popup](https://user-images.githubusercontent.com/241032/122821375-0e08df80-d2dd-11eb-9fd9-184e8aacf1d0.png) (the thing on the left) which appears every time you hit media hotkeys to adjust volume or change song while playing music with the copyparty web-ui, or most other audio players really

View File

@@ -1,8 +1,8 @@
# Maintainer: icxes <dev.null@need.moe>
pkgname=copyparty
-pkgver="1.9.18"
+pkgver="1.9.31"
pkgrel=1
-pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, zeroconf, media indexer, thumbnails++"
+pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
arch=("any")
url="https://github.com/9001/${pkgname}"
license=('MIT')
@@ -21,7 +21,7 @@ optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tag
)
source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
backup=("etc/${pkgname}.d/init" )
-sha256sums=("2f89ace0c5bc6a6990e85b6aa635c96dac16a6b399e4c9d040743695f35edb52")
+sha256sums=("a8ec1faf8cb224515355226882fdb2d1ab1de42d96ff78e148b930318867a71e")
build() {
	cd "${srcdir}/${pkgname}-${pkgver}"

View File

@@ -1,11 +1,11 @@
# this will start `/usr/bin/copyparty-sfx.py`
-# in a chroot, preventing accidental access elsewhere
-# and read config from `/etc/copyparty.d/*.conf`
+# in a chroot, preventing accidental access elsewhere,
+# and read copyparty config from `/etc/copyparty.d/*.conf`
#
# expose additional filesystem locations to copyparty
-# by listing them between the last `1000` and `--`
+# by listing them between the last `cpp` and `--`
#
-# `1000 1000` = what user to run copyparty as
+# `cpp cpp` = user/group to run copyparty as; can be IDs (1000 1000)
#
# unless you add -q to disable logging, you may want to remove the
# following line to allow buffering (slightly better performance):
@@ -24,7 +24,9 @@ ExecReload=/bin/kill -s USR1 $MAINPID
ExecStartPre=+/bin/bash -c 'mkdir -p /run/tmpfiles.d/ && echo "x /tmp/pe-copyparty*" > /run/tmpfiles.d/copyparty.conf'
# run copyparty
-ExecStart=/bin/bash /usr/bin/prisonparty /var/lib/copyparty-jail 1000 1000 /etc/copyparty.d -- \
+ExecStart=/bin/bash /usr/bin/prisonparty /var/lib/copyparty-jail cpp cpp \
+	/etc/copyparty.d \
+	-- \
	/usr/bin/python3 /usr/bin/copyparty -c /etc/copyparty.d/init
[Install]

View File

@@ -1,5 +1,5 @@
{
-	"url": "https://github.com/9001/copyparty/releases/download/v1.9.18/copyparty-sfx.py",
-	"version": "1.9.18",
-	"hash": "sha256-Qun8qzlThNjO+DUH33p7pkPXAJq20B6tEVbR+I4d0Uc="
+	"url": "https://github.com/9001/copyparty/releases/download/v1.9.31/copyparty-sfx.py",
+	"version": "1.9.31",
+	"hash": "sha256-yp7qoiW5yzm2M7qVmYY7R+SyhZXlqL+JxsXV22aS+MM="
}

View File

@@ -0,0 +1,11 @@
{
"code": "// https://addons.mozilla.org/en-US/firefox/addon/contextlets/\n// https://github.com/davidmhammond/contextlets\n\nvar url = 'http://partybox.local:3923/';\nvar pw = 'wark';\n\nvar xhr = new XMLHttpRequest();\nxhr.msg = this.info.linkUrl || this.info.srcUrl;\nxhr.open('POST', url, true);\nxhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded;charset=UTF-8');\nxhr.setRequestHeader('PW', pw);\nxhr.send('msg=' + xhr.msg);\n",
"contexts": [
"link"
],
"icons": null,
"patterns": "",
"scope": "background",
"title": "send to cpp",
"type": "normal"
}

View File

@@ -0,0 +1,42 @@
# not actually YAML but lets pretend:
# -*- mode: yaml -*-
# vim: ft=yaml:
# put this file in /etc/
[global]
e2dsa # enable file indexing and filesystem scanning
e2ts # and enable multimedia indexing
ansi # and colors in log messages
# disable logging to stdout/journalctl and log to a file instead;
# $LOGS_DIRECTORY is usually /var/log/copyparty (comes from systemd)
# and copyparty replaces %Y-%m%d with Year-MonthDay, so the
# full path will be something like /var/log/copyparty/2023-1130.txt
# (note: enable compression by adding .xz at the end)
q, lo: $LOGS_DIRECTORY/%Y-%m%d.log
# p: 80,443,3923 # listen on 80/443 as well (requires CAP_NET_BIND_SERVICE)
# i: 127.0.0.1 # only allow connections from localhost (reverse-proxies)
# ftp: 3921 # enable ftp server on port 3921
# p: 3939 # listen on another port
# df: 16 # stop accepting uploads if less than 16 GB free disk space
# ver # show copyparty version in the controlpanel
# grid # show thumbnails/grid-view by default
# theme: 2 # monokai
# name: datasaver # change the server-name that's displayed in the browser
# stats, nos-dup # enable the prometheus endpoint, but disable the dupes counter (too slow)
# no-robots, force-js # make it harder for search engines to read your server
[accounts]
ed: wark # username: password
[/] # create a volume at "/" (the webroot), which will
/mnt # share the contents of the "/mnt" folder
accs:
rw: * # everyone gets read-write access, but
rwmda: ed # the user "ed" gets read-write-move-delete-admin
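Since the `%Y-%m%d` placeholders in the `lo:` path above follow strftime conventions (as the 2023-1130 example in the comment shows), the resulting logfile name can be previewed with plain Python:

```python
from datetime import datetime

# expand the strftime-style placeholders the comment describes:
# %Y -> year, %m -> month, %d -> day (note: no dash between %m%d)
print(datetime(2023, 11, 30).strftime("/var/log/copyparty/%Y-%m%d.log"))
# /var/log/copyparty/2023-1130.log
```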

View File

@@ -1,28 +1,27 @@
-# this will start `/usr/local/bin/copyparty-sfx.py`
-# and share '/mnt' with anonymous read+write
+# this will start `/usr/local/bin/copyparty-sfx.py` and
+# read copyparty config from `/etc/copyparty.conf`, for example:
+# https://github.com/9001/copyparty/blob/hovudstraum/contrib/systemd/copyparty.conf
#
# installation:
# wget https://github.com/9001/copyparty/releases/latest/download/copyparty-sfx.py -O /usr/local/bin/copyparty-sfx.py
-# cp -pv copyparty.service /etc/systemd/system/
-# restorecon -vr /etc/systemd/system/copyparty.service # on fedora/rhel
-# firewall-cmd --permanent --add-port={80,443,3923}/tcp # --zone=libvirt
+# useradd -r -s /sbin/nologin -d /var/lib/copyparty copyparty
+# firewall-cmd --permanent --add-port=3923/tcp # --zone=libvirt
# firewall-cmd --reload
+# cp -pv copyparty.service /etc/systemd/system/
+# cp -pv copyparty.conf /etc/
+# restorecon -vr /etc/systemd/system/copyparty.service # on fedora/rhel
# systemctl daemon-reload && systemctl enable --now copyparty
#
# if it fails to start, first check this: systemctl status copyparty
-# then try starting it while viewing logs: journalctl -fan 100
+# then try starting it while viewing logs:
+# journalctl -fan 100
+# tail -Fn 100 /var/log/copyparty/$(date +%Y-%m%d.log)
#
# you may want to:
-# change "User=cpp" and "/home/cpp/" to another user
-# remove the nft lines to only listen on port 3923
+# - change "User=copyparty" and "/var/lib/copyparty/" to another user
+# - edit /etc/copyparty.conf to configure copyparty
# and in the ExecStart= line:
-# change '/usr/bin/python3' to another interpreter
-# change '/mnt::rw' to another location or permission-set
-# add '-q' to disable logging on busy servers
-# add '-i 127.0.0.1' to only allow local connections
-# add '-e2dsa' to enable filesystem scanning + indexing
-# add '-e2ts' to enable metadata indexing
-# remove '--ansi' to disable colored logs
+# - change '/usr/bin/python3' to another interpreter
#
# with `Type=notify`, copyparty will signal systemd when it is ready to
# accept connections; correctly delaying units depending on copyparty.
@@ -30,11 +29,9 @@
# python disabling line-buffering, so messages are out-of-order:
# https://user-images.githubusercontent.com/241032/126040249-cb535cc7-c599-4931-a796-a5d9af691bad.png
#
-# unless you add -q to disable logging, you may want to remove the
-# following line to allow buffering (slightly better performance):
-# Environment=PYTHONUNBUFFERED=x
-#
-# keep ExecStartPre before ExecStart, at least on rhel8
+########################################################################
+########################################################################
[Unit]
Description=copyparty file server
@@ -44,23 +41,52 @@ Type=notify
SyslogIdentifier=copyparty
Environment=PYTHONUNBUFFERED=x
ExecReload=/bin/kill -s USR1 $MAINPID
-PermissionsStartOnly=true
-# user to run as + where the TLS certificate is (if any)
-User=cpp
-Environment=XDG_CONFIG_HOME=/home/cpp/.config
+## user to run as + where the TLS certificate is (if any)
+##
+User=copyparty
+Group=copyparty
+WorkingDirectory=/var/lib/copyparty
+Environment=XDG_CONFIG_HOME=/var/lib/copyparty/.config
-# OPTIONAL: setup forwarding from ports 80 and 443 to port 3923
-ExecStartPre=+/bin/bash -c 'nft -n -a list table nat | awk "/ to :3923 /{print\$NF}" | xargs -rL1 nft delete rule nat prerouting handle; true'
-ExecStartPre=+nft add table ip nat
-ExecStartPre=+nft -- add chain ip nat prerouting { type nat hook prerouting priority -100 \; }
-ExecStartPre=+nft add rule ip nat prerouting tcp dport 80 redirect to :3923
-ExecStartPre=+nft add rule ip nat prerouting tcp dport 443 redirect to :3923
+## OPTIONAL: allow copyparty to listen on low ports (like 80/443);
+## you need to uncomment the "p: 80,443,3923" in the config too
+## ------------------------------------------------------------
+## a slightly safer alternative is to enable partyalone.service
+## which does portforwarding with nftables instead, but an even
+## better option is to use a reverse-proxy (nginx/caddy/...)
+##
+AmbientCapabilities=CAP_NET_BIND_SERVICE
-# stop systemd-tmpfiles-clean.timer from deleting copyparty while it's running
-ExecStartPre=+/bin/bash -c 'mkdir -p /run/tmpfiles.d/ && echo "x /tmp/pe-copyparty*" > /run/tmpfiles.d/copyparty.conf'
+## some quick hardening; TODO port more from the nixos package
+##
+MemoryMax=50%
+MemorySwapMax=50%
+ProtectClock=true
+ProtectControlGroups=true
+ProtectHostname=true
+ProtectKernelLogs=true
+ProtectKernelModules=true
+ProtectKernelTunables=true
+ProtectProc=invisible
+RemoveIPC=true
+RestrictNamespaces=true
+RestrictRealtime=true
+RestrictSUIDSGID=true
-# copyparty settings
-ExecStart=/usr/bin/python3 /usr/local/bin/copyparty-sfx.py --ansi -e2d -v /mnt::rw
+## create a directory for logfiles;
+## this defines $LOGS_DIRECTORY which is used in copyparty.conf
+##
+LogsDirectory=copyparty
+## finally, start copyparty and give it the config file:
+##
+ExecStart=/usr/bin/python3 /usr/local/bin/copyparty-sfx.py -c /etc/copyparty.conf
+# NOTE: if you installed copyparty from an OS package repo (nice)
+# then you probably want something like this instead:
+#ExecStart=/usr/bin/copyparty -c /etc/copyparty.conf
[Install]
WantedBy=multi-user.target

View File

@@ -1,5 +1,5 @@
# this will start `/usr/local/bin/copyparty-sfx.py`
-# in a chroot, preventing accidental access elsewhere
+# in a chroot, preventing accidental access elsewhere,
# and share '/mnt' with anonymous read+write
#
# installation:
@@ -7,9 +7,9 @@
# 2) cp -pv prisonparty.service /etc/systemd/system && systemctl enable --now prisonparty
#
# expose additional filesystem locations to copyparty
-# by listing them between the last `1000` and `--`
+# by listing them between the last `cpp` and `--`
#
-# `1000 1000` = what user to run copyparty as
+# `cpp cpp` = user/group to run copyparty as; can be IDs (1000 1000)
#
# you may want to:
# change '/mnt::rw' to another location or permission-set
@@ -32,7 +32,9 @@ ExecReload=/bin/kill -s USR1 $MAINPID
ExecStartPre=+/bin/bash -c 'mkdir -p /run/tmpfiles.d/ && echo "x /tmp/pe-copyparty*" > /run/tmpfiles.d/copyparty.conf'
# run copyparty
-ExecStart=/bin/bash /usr/local/bin/prisonparty.sh /var/lib/copyparty-jail 1000 1000 /mnt -- \
+ExecStart=/bin/bash /usr/local/bin/prisonparty.sh /var/lib/copyparty-jail cpp cpp \
+	/mnt \
+	-- \
	/usr/bin/python3 /usr/local/bin/copyparty-sfx.py -q -v /mnt::rw
[Install]

View File

@@ -23,7 +23,7 @@ if not PY2:
	unicode: Callable[[Any], str] = str
else:
	sys.dont_write_bytecode = True
-	unicode = unicode  # noqa: F821  # pylint: disable=undefined-variable,self-assigning-variable
+	unicode = unicode  # type: ignore
WINDOWS: Any = (
	[int(x) for x in platform.version().split(".")]

View File

@@ -19,26 +19,39 @@ import threading
import time
import traceback
import uuid
-from textwrap import dedent
-from .__init__ import ANYWIN, CORES, EXE, PY2, VT100, WINDOWS, E, EnvParams, unicode
+from .__init__ import (
+    ANYWIN,
+    CORES,
+    EXE,
+    MACOS,
+    PY2,
+    VT100,
+    WINDOWS,
+    E,
+    EnvParams,
+    unicode,
+)
from .__version__ import CODENAME, S_BUILD_DT, S_VERSION
-from .authsrv import expand_config_file, re_vol, split_cfg_ln, upgrade_cfg_fmt
+from .authsrv import expand_config_file, split_cfg_ln, upgrade_cfg_fmt
from .cfg import flagcats, onedash
from .svchub import SvcHub
from .util import (
+    APPLESAN_TXT,
    DEF_EXP,
    DEF_MTE,
    DEF_MTH,
    IMPLICATIONS,
    JINJA_VER,
+    PARTFTPY_VER,
+    PY_DESC,
    PYFTPD_VER,
    SQLITE_VER,
    UNPLICATIONS,
    align_tab,
    ansi_re,
+    dedent,
    min_ex,
-    py_desc,
    pybin,
    termsize,
    wrap,
@@ -143,9 +156,11 @@ def warn(msg: str) -> None:
	lprint("\033[1mwarning:\033[0;33m {}\033[0m\n".format(msg))
-def init_E(E: EnvParams) -> None:
+def init_E(EE: EnvParams) -> None:
	# __init__ runs 18 times when oxidized; do expensive stuff here
+	E = EE  # pylint: disable=redefined-outer-name
	def get_unixdir() -> str:
		paths: list[tuple[Callable[..., Any], str]] = [
			(os.environ.get, "XDG_CONFIG_HOME"),
@@ -246,7 +261,7 @@ def get_srvname() -> str:
	return ret
-def get_fk_salt(cert_path) -> str:
+def get_fk_salt() -> str:
	fp = os.path.join(E.cfg, "fk-salt.txt")
	try:
		with open(fp, "rb") as f:
@@ -320,6 +335,7 @@ def configure_ssl_ver(al: argparse.Namespace) -> None:
# oh man i love openssl
# check this out
# hold my beer
+assert ssl
ptn = re.compile(r"^OP_NO_(TLS|SSL)v")
sslver = terse_sslver(al.ssl_ver).split(",")
flags = [k for k in ssl.__dict__ if ptn.match(k)]
@@ -353,6 +369,7 @@ def configure_ssl_ver(al: argparse.Namespace) -> None:
def configure_ssl_ciphers(al: argparse.Namespace) -> None:
+	assert ssl
	ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
	if al.ssl_ver:
		ctx.options &= ~al.ssl_flags_en
@@ -432,9 +449,9 @@ def disable_quickedit() -> None:
if PY2:
	wintypes.LPDWORD = ctypes.POINTER(wintypes.DWORD)
-k32.GetStdHandle.errcheck = ecb
-k32.GetConsoleMode.errcheck = ecb
-k32.SetConsoleMode.errcheck = ecb
+k32.GetStdHandle.errcheck = ecb  # type: ignore
+k32.GetConsoleMode.errcheck = ecb  # type: ignore
+k32.SetConsoleMode.errcheck = ecb  # type: ignore
k32.GetConsoleMode.argtypes = (wintypes.HANDLE, wintypes.LPDWORD)
k32.SetConsoleMode.argtypes = (wintypes.HANDLE, wintypes.DWORD)
@@ -494,7 +511,9 @@ def get_sects():
"g" (get): download files, but cannot see folder contents
"G" (upget): "get", but can see filekeys of their own uploads
"h" (html): "get", but folders return their index.html
+"." (dots): user can ask to show dotfiles in listings
"a" (admin): can see uploader IPs, config-reload
+"A" ("all"): same as "rwmda." (read/write/move/delete/admin/dotfiles)
too many volflags to list here, see --help-flags
@@ -701,6 +720,7 @@ def get_sects():
\033[36mln\033[0m only prints symlinks leaving the volume mountpoint
\033[36mp\033[0m exits 1 if any such symlinks are found
\033[36mr\033[0m resumes startup after the listing
examples:
--ls '**' # list all files which are possible to read
--ls '**,*,ln' # check for dangerous symlinks
@@ -734,9 +754,12 @@ def get_sects():
"""
when \033[36m--ah-alg\033[0m is not the default [\033[32mnone\033[0m], all account passwords must be hashed
-passwords can be hashed on the commandline with \033[36m--ah-gen\033[0m, but copyparty will also hash and print any passwords that are non-hashed (password which do not start with '+') and then terminate afterwards
+passwords can be hashed on the commandline with \033[36m--ah-gen\033[0m, but
+copyparty will also hash and print any passwords that are non-hashed
+(password which do not start with '+') and then terminate afterwards
-\033[36m--ah-alg\033[0m specifies the hashing algorithm and a list of optional comma-separated arguments:
+\033[36m--ah-alg\033[0m specifies the hashing algorithm and a
+list of optional comma-separated arguments:
\033[36m--ah-alg argon2\033[0m # which is the same as:
\033[36m--ah-alg argon2,3,256,4,19\033[0m
@@ -816,10 +839,10 @@ def add_general(ap, nc, srvname):
ap2.add_argument("-nc", metavar="NUM", type=int, default=nc, help="max num clients")
ap2.add_argument("-j", metavar="CORES", type=int, default=1, help="max num cpu cores, 0=all")
ap2.add_argument("-a", metavar="ACCT", type=u, action="append", help="add account, \033[33mUSER\033[0m:\033[33mPASS\033[0m; example [\033[32med:wark\033[0m]")
-ap2.add_argument("-v", metavar="VOL", type=u, action="append", help="add volume, \033[33mSRC\033[0m:\033[33mDST\033[0m:\033[33mFLAG\033[0m; examples [\033[32m.::r\033[0m], [\033[32m/mnt/nas/music:/music:r:aed\033[0m]")
+ap2.add_argument("-v", metavar="VOL", type=u, action="append", help="add volume, \033[33mSRC\033[0m:\033[33mDST\033[0m:\033[33mFLAG\033[0m; examples [\033[32m.::r\033[0m], [\033[32m/mnt/nas/music:/music:r:aed\033[0m], see --help-accounts")
-ap2.add_argument("-ed", action="store_true", help="enable the ?dots url parameter / client option which allows clients to see dotfiles / hidden files")
+ap2.add_argument("-ed", action="store_true", help="enable the ?dots url parameter / client option which allows clients to see dotfiles / hidden files (volflag=dots)")
-ap2.add_argument("--urlform", metavar="MODE", type=u, default="print,get", help="how to handle url-form POSTs; see --help-urlform")
+ap2.add_argument("--urlform", metavar="MODE", type=u, default="print,get", help="how to handle url-form POSTs; see \033[33m--help-urlform\033[0m")
-ap2.add_argument("--wintitle", metavar="TXT", type=u, default="cpp @ $pub", help="window title, for example [\033[32m$ip-10.1.2.\033[0m] or [\033[32m$ip-]")
+ap2.add_argument("--wintitle", metavar="TXT", type=u, default="cpp @ $pub", help="server terminal title, for example [\033[32m$ip-10.1.2.\033[0m] or [\033[32m$ip-]")
ap2.add_argument("--name", metavar="TXT", type=u, default=srvname, help="server name (displayed topleft in browser and in mDNS)")
ap2.add_argument("--license", action="store_true", help="show licenses and exit")
ap2.add_argument("--version", action="store_true", help="show versions and exit")
@@ -830,36 +853,43 @@ def add_qr(ap, tty):
ap2.add_argument("--qr", action="store_true", help="show http:// QR-code on startup")
ap2.add_argument("--qrs", action="store_true", help="show https:// QR-code on startup")
ap2.add_argument("--qrl", metavar="PATH", type=u, default="", help="location to include in the url, for example [\033[32mpriv/?pw=hunter2\033[0m]")
-ap2.add_argument("--qri", metavar="PREFIX", type=u, default="", help="select IP which starts with PREFIX; [\033[32m.\033[0m] to force default IP when mDNS URL would have been used instead")
+ap2.add_argument("--qri", metavar="PREFIX", type=u, default="", help="select IP which starts with \033[33mPREFIX\033[0m; [\033[32m.\033[0m] to force default IP when mDNS URL would have been used instead")
ap2.add_argument("--qr-fg", metavar="COLOR", type=int, default=0 if tty else 16, help="foreground; try [\033[32m0\033[0m] if the qr-code is unreadable")
ap2.add_argument("--qr-bg", metavar="COLOR", type=int, default=229, help="background (white=255)")
ap2.add_argument("--qrp", metavar="CELLS", type=int, default=4, help="padding (spec says 4 or more, but 1 is usually fine)")
ap2.add_argument("--qrz", metavar="N", type=int, default=0, help="[\033[32m1\033[0m]=1x, [\033[32m2\033[0m]=2x, [\033[32m0\033[0m]=auto (try [\033[32m2\033[0m] on broken fonts)")
def add_fs(ap):
ap2 = ap.add_argument_group("filesystem options")
rm_re_def = "5/0.1" if ANYWIN else "0/0"
ap2.add_argument("--rm-retry", metavar="T/R", type=u, default=rm_re_def, help="if a file cannot be deleted because it is busy, continue trying for \033[33mT\033[0m seconds, retry every \033[33mR\033[0m seconds; disable with 0/0 (volflag=rm_retry)")
def add_upload(ap):
ap2 = ap.add_argument_group('upload options')
ap2.add_argument("--dotpart", action="store_true", help="dotfile incomplete uploads, hiding them from clients unless \033[33m-ed\033[0m")
ap2.add_argument("--plain-ip", action="store_true", help="when avoiding filename collisions by appending the uploader's ip to the filename: append the plaintext ip instead of salting and hashing the ip")
ap2.add_argument("--unpost", metavar="SEC", type=int, default=3600*12, help="grace period where uploads can be deleted by the uploader, even without delete permissions; 0=disabled, default=12h")
ap2.add_argument("--blank-wt", metavar="SEC", type=int, default=300, help="file write grace period (any client can write to a blank file last-modified more recently than \033[33mSEC\033[0m seconds ago)")
ap2.add_argument("--reg-cap", metavar="N", type=int, default=38400, help="max number of uploads to keep in memory when running without \033[33m-e2d\033[0m; roughly 1 MiB RAM per 600")
ap2.add_argument("--no-fpool", action="store_true", help="disable file-handle pooling -- instead, repeatedly close and reopen files during upload (bad idea to enable this on windows and/or cow filesystems)")
ap2.add_argument("--use-fpool", action="store_true", help="force file-handle pooling, even when it might be dangerous (multiprocessing, filesystems lacking sparse-files support, ...)")
ap2.add_argument("--hardlink", action="store_true", help="prefer hardlinks instead of symlinks when possible (within same filesystem) (volflag=hardlink)")
ap2.add_argument("--never-symlink", action="store_true", help="do not fallback to symlinks when a hardlink cannot be made (volflag=neversymlink)")
ap2.add_argument("--no-dedup", action="store_true", help="disable symlink/hardlink creation; copy file contents instead (volflag=copydupes)")
ap2.add_argument("--no-dupe", action="store_true", help="reject duplicate files during upload; only matches within the same volume (volflag=nodupe)")
ap2.add_argument("--no-snap", action="store_true", help="disable snapshots -- forget unfinished uploads on shutdown; don't create .hist/up2k.snap files -- abandoned/interrupted uploads must be cleaned up manually")
ap2.add_argument("--snap-wri", metavar="SEC", type=int, default=300, help="write upload state to ./hist/up2k.snap every \033[33mSEC\033[0m seconds; allows resuming incomplete uploads after a server crash")
ap2.add_argument("--snap-drop", metavar="MIN", type=float, default=1440, help="forget unfinished uploads after \033[33mMIN\033[0m minutes; impossible to resume them after that (360=6h, 1440=24h)")
ap2.add_argument("--u2ts", metavar="TXT", type=u, default="c", help="how to timestamp uploaded files; [\033[32mc\033[0m]=client-last-modified, [\033[32mu\033[0m]=upload-time, [\033[32mfc\033[0m]=force-c, [\033[32mfu\033[0m]=force-u (volflag=u2ts)")
ap2.add_argument("--rand", action="store_true", help="force randomized filenames, \033[33m--nrand\033[0m chars long (volflag=rand)")
ap2.add_argument("--nrand", metavar="NUM", type=int, default=9, help="randomized filenames length (volflag=nrand)")
ap2.add_argument("--magic", action="store_true", help="enable filetype detection on nameless uploads (volflag=magic)")
ap2.add_argument("--df", metavar="GiB", type=float, default=0, help="ensure \033[33mGiB\033[0m free disk space by rejecting upload requests")
ap2.add_argument("--sparse", metavar="MiB", type=int, default=4, help="windows-only: minimum size of incoming uploads through up2k before they are made into sparse files")
ap2.add_argument("--turbo", metavar="LVL", type=int, default=0, help="configure turbo-mode in up2k client; [\033[32m-1\033[0m] = forbidden/always-off, [\033[32m0\033[0m] = default-off and warn if enabled, [\033[32m1\033[0m] = default-off, [\033[32m2\033[0m] = on, [\033[32m3\033[0m] = on and disable datecheck")
ap2.add_argument("--u2j", metavar="JOBS", type=int, default=2, help="web-client: number of file chunks to upload in parallel; 1 or 2 is good for low-latency (same-country) connections, 4-8 for android clients, 16 for cross-atlantic (max=64)")
ap2.add_argument("--u2sort", metavar="TXT", type=u, default="s", help="upload order; [\033[32ms\033[0m]=smallest-first, [\033[32mn\033[0m]=alphabetical, [\033[32mfs\033[0m]=force-s, [\033[32mfn\033[0m]=force-n -- alphabetical is a bit slower on fiber/LAN but makes it easier to eyeball if everything went fine")
ap2.add_argument("--write-uplog", action="store_true", help="write POST reports to textfiles in working-directory")
@@ -868,11 +898,12 @@ def add_network(ap):
ap2 = ap.add_argument_group('network options')
ap2.add_argument("-i", metavar="IP", type=u, default="::", help="ip to bind (comma-sep.), default: all IPv4 and IPv6")
ap2.add_argument("-p", metavar="PORT", type=u, default="3923", help="ports to bind (comma/range)")
ap2.add_argument("--ll", action="store_true", help="include link-local IPv4/IPv6 in mDNS replies, even if the NIC has routable IPs (breaks some mDNS clients)")
ap2.add_argument("--rproxy", metavar="DEPTH", type=int, default=1, help="which ip to associate clients with; [\033[32m0\033[0m]=tcp, [\033[32m1\033[0m]=origin (first x-fwd, unsafe), [\033[32m2\033[0m]=outermost-proxy, [\033[32m3\033[0m]=second-proxy, [\033[32m-1\033[0m]=closest-proxy")
ap2.add_argument("--xff-hdr", metavar="NAME", type=u, default="x-forwarded-for", help="if reverse-proxied, which http header to read the client's real ip from")
ap2.add_argument("--xff-src", metavar="IP", type=u, default="127., ::1", help="comma-separated list of trusted reverse-proxy IPs; only accept the real-ip header (\033[33m--xff-hdr\033[0m) if the incoming connection is from an IP starting with either of these. Can be disabled with [\033[32many\033[0m] if you are behind cloudflare (or similar) and are using \033[32m--xff-hdr=cf-connecting-ip\033[0m (or similar)")
ap2.add_argument("--ipa", metavar="PREFIX", type=u, default="", help="only accept connections from IP-addresses starting with \033[33mPREFIX\033[0m; example: [\033[32m127., 10.89., 192.168.\033[0m]")
ap2.add_argument("--rp-loc", metavar="PATH", type=u, default="", help="if reverse-proxying on a location instead of a dedicated domain/subdomain, provide the base location here; example: [\033[32m/foo/bar\033[0m]")
if ANYWIN:
ap2.add_argument("--reuseaddr", action="store_true", help="set reuseaddr on listening sockets on windows; allows rapid restart of copyparty at the expense of being able to accidentally start multiple instances")
else:
@@ -882,7 +913,7 @@ def add_network(ap):
ap2.add_argument("--s-wr-sz", metavar="B", type=int, default=256*1024, help="socket write size in bytes")
ap2.add_argument("--s-wr-slp", metavar="SEC", type=float, default=0, help="debug: socket write delay in seconds")
ap2.add_argument("--rsp-slp", metavar="SEC", type=float, default=0, help="debug: response delay in seconds")
ap2.add_argument("--rsp-jtr", metavar="SEC", type=float, default=0, help="debug: response delay, random duration 0..\033[33mSEC\033[0m")
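The `--rproxy` depth values can be sketched as follows; this is an illustrative model, not copyparty's implementation, and it omits the `--xff-src` trust check that should gate any use of the forwarded header:

```python
# illustrative model of --rproxy DEPTH: pick which ip to associate a
# client with, given the raw tcp peer and the x-forwarded-for header
def pick_client_ip(tcp_ip: str, xff: str, depth: int) -> str:
    chain = [s.strip() for s in xff.split(",") if s.strip()]
    if depth == 0 or not chain:
        return tcp_ip        # the raw tcp peer itself
    if depth < 0:
        return chain[-1]     # entry appended by the proxy closest to the server
    return chain[depth - 1]  # 1=origin (client-claimed, unsafe), 2=outermost proxy, ...

# client 1.2.3.4 -> proxy 10.0.0.5 -> proxy 10.0.0.9 -> copyparty
print(pick_client_ip("10.0.0.9", "1.2.3.4, 10.0.0.5", 1))
```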
def add_tls(ap, cert_path):
@@ -901,7 +932,7 @@ def add_cert(ap, cert_path):
ap2 = ap.add_argument_group('TLS certificate generator options')
ap2.add_argument("--no-crt", action="store_true", help="disable automatic certificate creation")
ap2.add_argument("--crt-ns", metavar="N,N", type=u, default="", help="comma-separated list of FQDNs (domains) to add into the certificate")
ap2.add_argument("--crt-exact", action="store_true", help="do not add wildcard entries for each \033[33m--crt-ns\033[0m")
ap2.add_argument("--crt-noip", action="store_true", help="do not add autodetected IP addresses into cert")
ap2.add_argument("--crt-nolo", action="store_true", help="do not add 127.0.0.1 / localhost into cert")
ap2.add_argument("--crt-nohn", action="store_true", help="do not add mDNS names / hostname into cert")
@@ -912,7 +943,14 @@ def add_cert(ap, cert_path):
ap2.add_argument("--crt-cnc", metavar="TXT", type=u, default="--crt-cn", help="override CA name")
ap2.add_argument("--crt-cns", metavar="TXT", type=u, default="--crt-cn cpp", help="override server-cert name")
ap2.add_argument("--crt-back", metavar="HRS", type=float, default=72, help="backdate in hours")
ap2.add_argument("--crt-alg", metavar="S-N", type=u, default="ecdsa-256", help="algorithm and keysize; one of these: \033[32mecdsa-256 rsa-4096 rsa-2048\033[0m")
def add_auth(ap):
ap2 = ap.add_argument_group('IdP / identity provider / user authentication options')
ap2.add_argument("--idp-h-usr", metavar="HN", type=u, default="", help="bypass the copyparty authentication checks and assume the request-header \033[33mHN\033[0m contains the username of the requesting user (for use with authentik/oauth/...)\n\033[1;31mWARNING:\033[0m if you enable this, make sure clients are unable to specify this header themselves; must be washed away and replaced by a reverse-proxy")
ap2.add_argument("--idp-h-grp", metavar="HN", type=u, default="", help="assume the request-header \033[33mHN\033[0m contains the groupname of the requesting user; can be referenced in config files for group-based access control")
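To illustrate the warning on \033[33m--idp-h-usr\033[0m, here is a sketch (hypothetical header name and trust list, not copyparty code) of why the identity header must only be honored when the connection actually comes from the reverse-proxy:

```python
# hypothetical sketch: trust the idp username header only when the tcp
# peer is a known reverse-proxy; otherwise any client could claim any name
def idp_user(peer_ip, headers, hdr="x-authentik-username", trusted=("127.", "::1")):
    if not any(peer_ip.startswith(p) for p in trusted):
        return None  # header may be client-spoofed; ignore it
    return headers.get(hdr)
```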
def add_zeroconf(ap):
@@ -920,21 +958,21 @@ def add_zeroconf(ap):
ap2.add_argument("-z", action="store_true", help="enable all zeroconf backends (mdns, ssdp)")
ap2.add_argument("--z-on", metavar="NETS", type=u, default="", help="enable zeroconf ONLY on the comma-separated list of subnets and/or interface names/indexes\n └─example: \033[32meth0, wlo1, virhost0, 192.168.123.0/24, fd00:fda::/96\033[0m")
ap2.add_argument("--z-off", metavar="NETS", type=u, default="", help="disable zeroconf on the comma-separated list of subnets and/or interface names/indexes")
ap2.add_argument("--z-chk", metavar="SEC", type=int, default=10, help="check for network changes every \033[33mSEC\033[0m seconds (0=disable)")
ap2.add_argument("-zv", action="store_true", help="verbose all zeroconf backends")
ap2.add_argument("--mc-hop", metavar="SEC", type=int, default=0, help="rejoin multicast groups every \033[33mSEC\033[0m seconds (workaround for some switches/routers which cause mDNS to suddenly stop working after some time); try [\033[32m300\033[0m] or [\033[32m180\033[0m]\n └─note: can be due to firewalls; make sure UDP port 5353 is open in both directions (on clients too)")
def add_zc_mdns(ap):
ap2 = ap.add_argument_group("Zeroconf-mDNS options; also see --help-zm")
ap2.add_argument("--zm", action="store_true", help="announce the enabled protocols over mDNS (multicast DNS-SD) -- compatible with KDE, gnome, macOS, ...")
ap2.add_argument("--zm-on", metavar="NETS", type=u, default="", help="enable mDNS ONLY on the comma-separated list of subnets and/or interface names/indexes")
ap2.add_argument("--zm-off", metavar="NETS", type=u, default="", help="disable mDNS on the comma-separated list of subnets and/or interface names/indexes")
ap2.add_argument("--zm4", action="store_true", help="IPv4 only -- try this if some clients can't connect")
ap2.add_argument("--zm6", action="store_true", help="IPv6 only")
ap2.add_argument("--zmv", action="store_true", help="verbose mdns")
ap2.add_argument("--zmvv", action="store_true", help="verboser mdns")
ap2.add_argument("--zms", metavar="dhf", type=u, default="", help="list of services to announce -- d=webdav h=http f=ftp s=smb -- lowercase=plaintext uppercase=TLS -- default: all enabled services except http/https (\033[32mDdfs\033[0m if \033[33m--ftp\033[0m and \033[33m--smb\033[0m is set, \033[32mDd\033[0m otherwise)")
ap2.add_argument("--zm-ld", metavar="PATH", type=u, default="", help="link a specific folder for webdav shares")
ap2.add_argument("--zm-lh", metavar="PATH", type=u, default="", help="link a specific folder for http shares")
ap2.add_argument("--zm-lf", metavar="PATH", type=u, default="", help="link a specific folder for ftp shares")
@@ -942,26 +980,27 @@ def add_zc_mdns(ap):
ap2.add_argument("--zm-mnic", action="store_true", help="merge NICs which share subnets; assume that same subnet means same network")
ap2.add_argument("--zm-msub", action="store_true", help="merge subnets on each NIC -- always enabled for ipv6 -- reduces network load, but gnome-gvfs clients may stop working, and clients cannot be in subnets that the server is not")
ap2.add_argument("--zm-noneg", action="store_true", help="disable NSEC replies -- try this if some clients don't see copyparty")
ap2.add_argument("--zm-spam", metavar="SEC", type=float, default=0, help="send unsolicited announce every \033[33mSEC\033[0m; useful if clients have IPs in a subnet which doesn't overlap with the server, or to avoid some firewall issues")
def add_zc_ssdp(ap):
ap2 = ap.add_argument_group("Zeroconf-SSDP options")
ap2.add_argument("--zs", action="store_true", help="announce the enabled protocols over SSDP -- compatible with Windows")
ap2.add_argument("--zs-on", metavar="NETS", type=u, default="", help="enable SSDP ONLY on the comma-separated list of subnets and/or interface names/indexes")
ap2.add_argument("--zs-off", metavar="NETS", type=u, default="", help="disable SSDP on the comma-separated list of subnets and/or interface names/indexes")
ap2.add_argument("--zsv", action="store_true", help="verbose SSDP")
ap2.add_argument("--zsl", metavar="PATH", type=u, default="/?hc", help="location to include in the url (or a complete external URL), for example [\033[32mpriv/?pw=hunter2\033[0m] (goes directly to /priv/ with password hunter2) or [\033[32m?hc=priv&pw=hunter2\033[0m] (shows mounting options for /priv/ with password)")
ap2.add_argument("--zsid", metavar="UUID", type=u, default=zsid, help="USN (device identifier) to announce")
def add_ftp(ap):
ap2 = ap.add_argument_group('FTP options (TCP only)')
ap2.add_argument("--ftp", metavar="PORT", type=int, help="enable FTP server on \033[33mPORT\033[0m, for example \033[32m3921")
ap2.add_argument("--ftps", metavar="PORT", type=int, help="enable FTPS server on \033[33mPORT\033[0m, for example \033[32m3990")
ap2.add_argument("--ftpv", action="store_true", help="verbose")
ap2.add_argument("--ftp4", action="store_true", help="only listen on IPv4")
ap2.add_argument("--ftp-ipa", metavar="PFX", type=u, default="", help="only accept connections from IP-addresses starting with \033[33mPFX\033[0m; specify [\033[32many\033[0m] to disable inheriting \033[33m--ipa\033[0m. Example: [\033[32m127., 10.89., 192.168.\033[0m]")
ap2.add_argument("--ftp-wt", metavar="SEC", type=int, default=7, help="grace period for resuming interrupted uploads (any client can write to any file last-modified more recently than \033[33mSEC\033[0m seconds ago)")
ap2.add_argument("--ftp-nat", metavar="ADDR", type=u, help="the NAT address to use for passive connections")
ap2.add_argument("--ftp-pr", metavar="P-P", type=u, help="the range of TCP ports to use for passive connections, for example \033[32m12000-13000")
@@ -975,9 +1014,20 @@ def add_webdav(ap):
ap2.add_argument("--dav-auth", action="store_true", help="force auth for all folders (required by davfs2 when only some folders are world-readable) (volflag=davauth)")
def add_tftp(ap):
ap2 = ap.add_argument_group('TFTP options (UDP only)')
ap2.add_argument("--tftp", metavar="PORT", type=int, help="enable TFTP server on \033[33mPORT\033[0m, for example \033[32m69 \033[0mor \033[32m3969")
ap2.add_argument("--tftpv", action="store_true", help="verbose")
ap2.add_argument("--tftpvv", action="store_true", help="verboser")
ap2.add_argument("--tftp-lsf", metavar="PTN", type=u, default="\\.?(dir|ls)(\\.txt)?", help="return a directory listing if a file with this name is requested and it does not exist; default matches .ls, dir, .dir.txt, ls.txt, ...")
ap2.add_argument("--tftp-nols", action="store_true", help="if someone tries to download a directory, return an error instead of showing its directory listing")
ap2.add_argument("--tftp-ipa", metavar="PFX", type=u, default="", help="only accept connections from IP-addresses starting with \033[33mPFX\033[0m; specify [\033[32many\033[0m] to disable inheriting \033[33m--ipa\033[0m. Example: [\033[32m127., 10.89., 192.168.\033[0m]")
ap2.add_argument("--tftp-pr", metavar="P-P", type=u, help="the range of UDP ports to use for data transfer, for example \033[32m12000-13000")
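The \033[33m--tftp-lsf\033[0m default pattern can be exercised directly; a small sketch of matching it against requested filenames (full-match and case-insensitivity are assumptions here, not verified against copyparty's TFTP code):

```python
import re

# the --tftp-lsf default pattern, matched against a requested filename
PTN = re.compile(r"\.?(dir|ls)(\.txt)?", re.IGNORECASE)

for name in (".ls", "dir", ".dir.txt", "ls.txt", "song.mp3"):
    print(name, bool(PTN.fullmatch(name)))
```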
def add_smb(ap):
ap2 = ap.add_argument_group('SMB/CIFS options')
ap2.add_argument("--smb", action="store_true", help="enable smb (read-only) -- this requires running copyparty as root on linux and macos unless \033[33m--smb-port\033[0m is set above 1024 and your OS does port-forwarding from 445 to that.\n\033[1;31mWARNING:\033[0m this protocol is DANGEROUS and buggy! Never expose to the internet!")
ap2.add_argument("--smbw", action="store_true", help="enable write support (please dont)")
ap2.add_argument("--smb1", action="store_true", help="disable SMBv2, only enable SMBv1 (CIFS)")
ap2.add_argument("--smb-port", metavar="PORT", type=int, default=445, help="port to listen on -- if you change this value, you must NAT from TCP:445 to this port using iptables or similar")
@@ -991,22 +1041,22 @@ def add_smb(ap):
def add_handlers(ap):
ap2 = ap.add_argument_group('handlers (see --help-handlers)')
ap2.add_argument("--on404", metavar="PY", type=u, action="append", help="handle 404s by executing \033[33mPY\033[0m file")
ap2.add_argument("--on403", metavar="PY", type=u, action="append", help="handle 403s by executing \033[33mPY\033[0m file")
ap2.add_argument("--hot-handlers", action="store_true", help="recompile handlers on each request -- expensive but convenient when hacking on stuff")
def add_hooks(ap):
ap2 = ap.add_argument_group('event hooks (see --help-hooks)')
ap2.add_argument("--xbu", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m before a file upload starts")
ap2.add_argument("--xau", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m after a file upload finishes")
ap2.add_argument("--xiu", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m after all uploads finish and volume is idle")
ap2.add_argument("--xbr", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m before a file move/rename")
ap2.add_argument("--xar", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m after a file move/rename")
ap2.add_argument("--xbd", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m before a file delete")
ap2.add_argument("--xad", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m after a file delete")
ap2.add_argument("--xm", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m on message")
ap2.add_argument("--xban", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m if someone gets banned (pw/404/403/url)")
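Each of these event hooks runs an external command when the named event fires. As an illustration only (the real argument contract is documented under --help-hooks; the assumption that the affected filesystem path arrives as argv[1] is mine), a minimal --xau-style hook script could look like this:

```python
#!/usr/bin/env python3
# sketch of an event-hook script for something like --xau;
# ASSUMPTION: copyparty passes the affected filesystem path as argv[1]
# (check --help-hooks for the real contract before relying on this)
import sys
import time


def log_line(path: str, now: float) -> str:
    """format one logfile entry: unix-time plus the uploaded path"""
    return "%.3f %s\n" % (now, path)


def main() -> None:
    path = sys.argv[1] if len(sys.argv) > 1 else "?"
    with open("upload-log.txt", "a") as f:
        f.write(log_line(path, time.time()))


if __name__ == "__main__":
    main()
```

It would then be registered with something like `--xau bin/hooks/log-upload.py` (hypothetical path).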
def add_stats(ap):
@@ -1028,17 +1078,17 @@ def add_yolo(ap):
def add_optouts(ap):
ap2 = ap.add_argument_group('opt-outs')
ap2.add_argument("-nw", action="store_true", help="never write anything to disk (debug/benchmark)")
ap2.add_argument("--keep-qem", action="store_true", help="do not disable quick-edit-mode on windows (it is disabled to avoid accidental text selection in the terminal window, as this would pause execution)")
ap2.add_argument("--no-dav", action="store_true", help="disable webdav support")
ap2.add_argument("--no-del", action="store_true", help="disable delete operations")
ap2.add_argument("--no-mv", action="store_true", help="disable move/rename operations")
ap2.add_argument("-nth", action="store_true", help="no title hostname; don't show \033[33m--name\033[0m in <title>")
ap2.add_argument("-nih", action="store_true", help="no info hostname -- don't show in UI")
ap2.add_argument("-nid", action="store_true", help="no info disk-usage -- don't show in UI")
ap2.add_argument("-nb", action="store_true", help="no powered-by-copyparty branding in UI")
ap2.add_argument("--no-zip", action="store_true", help="disable download as zip/tar")
ap2.add_argument("--no-tarcmp", action="store_true", help="disable download as compressed tar (?tar=gz, ?tar=bz2, ?tar=xz, ?tar=gz:9, ...)")
ap2.add_argument("--no-lifetime", action="store_true", help="do not allow clients (or server config) to schedule an upload to be deleted after a given time")
def add_safety(ap):
@@ -1046,36 +1096,36 @@ def add_safety(ap):
ap2.add_argument("-s", action="count", default=0, help="increase safety: Disable thumbnails / potentially dangerous software (ffmpeg/pillow/vips), hide partial uploads, avoid crawlers.\n └─Alias of\033[32m --dotpart --no-thumb --no-mtag-ff --no-robots --force-js")
ap2.add_argument("-ss", action="store_true", help="further increase safety: Prevent js-injection, accidental move/delete, broken symlinks, webdav, 404 on 403, ban on excessive 404s.\n └─Alias of\033[32m -s --unpost=0 --no-del --no-mv --hardlink --vague-403 -nih")
ap2.add_argument("-sss", action="store_true", help="further increase safety: Enable logging to disk, scan for dangerous symlinks.\n └─Alias of\033[32m -ss --no-dav --no-logues --no-readme -lo=cpp-%%Y-%%m%%d-%%H%%M%%S.txt.xz --ls=**,*,ln,p,r")
ap2.add_argument("--ls", metavar="U[,V[,F]]", type=u, help="do a sanity/safety check of all volumes on startup; arguments \033[33mUSER\033[0m,\033[33mVOL\033[0m,\033[33mFLAGS\033[0m (see \033[33m--help-ls\033[0m); example [\033[32m**,*,ln,p,r\033[0m]")
ap2.add_argument("--xvol", action="store_true", help="never follow symlinks leaving the volume root, unless the link is into another volume where the user has similar access (volflag=xvol)")
ap2.add_argument("--xdev", action="store_true", help="stay within the filesystem of the volume root; do not descend into other devices (symlink or bind-mount to another HDD, ...) (volflag=xdev)")
ap2.add_argument("--no-dot-mv", action="store_true", help="disallow moving dotfiles; makes it impossible to move folders containing dotfiles")
ap2.add_argument("--no-dot-ren", action="store_true", help="disallow renaming dotfiles; makes it impossible to turn something into a dotfile")
ap2.add_argument("--no-logues", action="store_true", help="disable rendering .prologue/.epilogue.html into directory listings")
ap2.add_argument("--no-readme", action="store_true", help="disable rendering readme.md into directory listings")
ap2.add_argument("--vague-403", action="store_true", help="send 404 instead of 403 (security through ambiguity, very enterprise)")
ap2.add_argument("--force-js", action="store_true", help="don't send folder listings as HTML, force clients to use the embedded json instead -- slight protection against misbehaving search engines which ignore \033[33m--no-robots\033[0m")
ap2.add_argument("--no-robots", action="store_true", help="adds http and html headers asking search engines to not index anything (volflag=norobots)")
ap2.add_argument("--logout", metavar="H", type=float, default="8086", help="logout clients after \033[33mH\033[0m hours of inactivity; [\033[32m0.0028\033[0m]=10sec, [\033[32m0.1\033[0m]=6min, [\033[32m24\033[0m]=day, [\033[32m168\033[0m]=week, [\033[32m720\033[0m]=month, [\033[32m8760\033[0m]=year)")
ap2.add_argument("--ban-pw", metavar="N,W,B", type=u, default="9,60,1440", help="more than \033[33mN\033[0m wrong passwords in \033[33mW\033[0m minutes = ban for \033[33mB\033[0m minutes; disable with [\033[32mno\033[0m]")
ap2.add_argument("--ban-404", metavar="N,W,B", type=u, default="50,60,1440", help="hitting more than \033[33mN\033[0m 404's in \033[33mW\033[0m minutes = ban for \033[33mB\033[0m minutes; only affects users who cannot see directory listings because their access is either g/G/h")
ap2.add_argument("--ban-403", metavar="N,W,B", type=u, default="9,2,1440", help="hitting more than \033[33mN\033[0m 403's in \033[33mW\033[0m minutes = ban for \033[33mB\033[0m minutes; [\033[32m1440\033[0m]=day, [\033[32m10080\033[0m]=week, [\033[32m43200\033[0m]=month")
ap2.add_argument("--ban-422", metavar="N,W,B", type=u, default="9,2,1440", help="hitting more than \033[33mN\033[0m 422's in \033[33mW\033[0m minutes = ban for \033[33mB\033[0m minutes (invalid requests, attempted exploits ++)")
ap2.add_argument("--ban-url", metavar="N,W,B", type=u, default="9,2,1440", help="hitting more than \033[33mN\033[0m sus URL's in \033[33mW\033[0m minutes = ban for \033[33mB\033[0m minutes; applies only to permissions g/G/h (decent replacement for \033[33m--ban-404\033[0m if that can't be used)")
ap2.add_argument("--sus-urls", metavar="R", type=u, default=r"\.php$|(^|/)wp-(admin|content|includes)/", help="URLs which are considered sus / eligible for banning; disable with blank or [\033[32mno\033[0m]")
ap2.add_argument("--nonsus-urls", metavar="R", type=u, default=r"^(favicon\.ico|robots\.txt)$|^apple-touch-icon|^\.well-known", help="harmless URLs ignored from 404-bans; disable with blank or [\033[32mno\033[0m]")
ap2.add_argument("--aclose", metavar="MIN", type=int, default=10, help="if a client maxes out the server connection limit, downgrade it from connection:keep-alive to connection:close for \033[33mMIN\033[0m minutes (and also kill its active connections) -- disable with 0")
ap2.add_argument("--loris", metavar="B", type=int, default=60, help="if a client maxes out the server connection limit without sending headers, ban it for \033[33mB\033[0m minutes; disable with [\033[32m0\033[0m]")
ap2.add_argument("--acao", metavar="V[,V]", type=u, default="*", help="Access-Control-Allow-Origin; list of origins (domains/IPs without port) to accept requests from; [\033[32mhttps://1.2.3.4\033[0m]. Default [\033[32m*\033[0m] allows requests from all sites but removes cookies and http-auth; only ?pw=hunter2 survives")
ap2.add_argument("--acam", metavar="V[,V]", type=u, default="GET,HEAD", help="Access-Control-Allow-Methods; list of methods to accept from offsite ('*' behaves like \033[33m--acao\033[0m's description)")
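The --ban-pw, --ban-404, --ban-403, --ban-422 and --ban-url options all share the N,W,B format: more than N hits within a W-minute window earns a B-minute ban. A rough sketch of that semantic (illustration only, not copyparty's actual implementation; all names here are made up):

```python
import time
from collections import deque


def parse_nwb(spec: str):
    """parse an 'N,W,B' threshold such as the --ban-pw default '9,60,1440';
    returns (max_hits, window_min, ban_min), or None when disabled with 'no'"""
    if spec in ("", "no"):
        return None
    n, w, b = (int(x) for x in spec.split(","))
    return n, w, b


class BanCounter:
    """hypothetical sliding-window offense counter"""

    def __init__(self, spec: str) -> None:
        self.cfg = parse_nwb(spec)
        self.hits = deque()

    def hit(self, now=None) -> bool:
        """record one offense; True means the N-in-W threshold was exceeded"""
        if not self.cfg:
            return False
        n, w, _ban = self.cfg
        now = time.time() if now is None else now
        self.hits.append(now)
        horizon = now - w * 60  # the window W is given in minutes
        while self.hits and self.hits[0] < horizon:
            self.hits.popleft()
        return len(self.hits) > n
```

So with a spec of "2,1,1440", the third offense inside one minute would trip the (day-long) ban.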
def add_salt(ap, fk_salt, ah_salt):
ap2 = ap.add_argument_group('salting options')
ap2.add_argument("--ah-alg", metavar="ALG", type=u, default="none", help="account-pw hashing algorithm; one of these, best to worst: \033[32margon2 scrypt sha2 none\033[0m (each optionally followed by alg-specific comma-sep. config)")
ap2.add_argument("--ah-salt", metavar="SALT", type=u, default=ah_salt, help="account-pw salt; ignored if \033[33m--ah-alg\033[0m is none (default)")
ap2.add_argument("--ah-gen", metavar="PW", type=u, default="", help="generate hashed password for \033[33mPW\033[0m, or read passwords from STDIN if \033[33mPW\033[0m is [\033[32m-\033[0m]")
ap2.add_argument("--ah-cli", action="store_true", help="launch an interactive shell which hashes passwords without ever storing or displaying the original passwords")
ap2.add_argument("--fk-salt", metavar="SALT", type=u, default=fk_salt, help="per-file accesskey salt; used to generate unpredictable URLs for hidden files")
ap2.add_argument("--warksalt", metavar="SALT", type=u, default="hunter2", help="up2k file-hash salt; serves no purpose, no reason to change this (but delete all databases if you do)")
@@ -1084,22 +1134,23 @@ def add_shutdown(ap):
ap2 = ap.add_argument_group('shutdown options')
ap2.add_argument("--ign-ebind", action="store_true", help="continue running even if it's impossible to listen on some of the requested endpoints")
ap2.add_argument("--ign-ebind-all", action="store_true", help="continue running even if it's impossible to receive connections at all")
ap2.add_argument("--exit", metavar="WHEN", type=u, default="", help="shutdown after \033[33mWHEN\033[0m has finished; [\033[32mcfg\033[0m] config parsing, [\033[32midx\033[0m] volscan + multimedia indexing")
def add_logging(ap):
ap2 = ap.add_argument_group('logging options')
ap2.add_argument("-q", action="store_true", help="quiet; disable most STDOUT messages")
ap2.add_argument("-lo", metavar="PATH", type=u, help="logfile, example: \033[32mcpp-%%Y-%%m%%d-%%H%%M%%S.txt.xz\033[0m (NB: some errors may appear on STDOUT only)")
ap2.add_argument("--no-ansi", action="store_true", default=not VT100, help="disable colors; same as environment-variable NO_COLOR")
ap2.add_argument("--ansi", action="store_true", help="force colors; overrides environment-variable NO_COLOR")
ap2.add_argument("--no-logflush", action="store_true", help="don't flush the logfile after each write; tiny bit faster")
ap2.add_argument("--no-voldump", action="store_true", help="do not list volumes and permissions on startup")
ap2.add_argument("--log-tdec", metavar="N", type=int, default=3, help="timestamp resolution / number of timestamp decimals")
ap2.add_argument("--log-badpwd", metavar="N", type=int, default=1, help="log failed login attempt passwords: 0=terse, 1=plaintext, 2=hashed")
ap2.add_argument("--log-conn", action="store_true", help="debug: print tcp-server msgs")
ap2.add_argument("--log-htp", action="store_true", help="debug: print http-server threadpool scaling")
ap2.add_argument("--ihead", metavar="HEADER", type=u, action='append', help="print request \033[33mHEADER\033[0m; [\033[32m*\033[0m]=all")
ap2.add_argument("--lf-url", metavar="RE", type=u, default=r"^/\.cpr/|\?th=[wj]$|/\.(_|ql_|DS_Store$|localized$)", help="dont log URLs matching regex \033[33mRE\033[0m")
def add_admin(ap):
@@ -1117,16 +1168,17 @@ def add_thumbnail(ap):
ap2.add_argument("--th-size", metavar="WxH", default="320x256", help="thumbnail res (volflag=thsize)")
ap2.add_argument("--th-mt", metavar="CORES", type=int, default=CORES, help="num cpu cores to use for generating thumbnails")
ap2.add_argument("--th-convt", metavar="SEC", type=float, default=60, help="conversion timeout in seconds (volflag=convt)")
ap2.add_argument("--th-ram-max", metavar="GB", type=float, default=6, help="max memory usage (GiB) permitted by thumbnailer; not very accurate")
ap2.add_argument("--th-no-crop", action="store_true", help="dynamic height; show full image by default (client can override in UI) (volflag=nocrop)")
ap2.add_argument("--th-dec", metavar="LIBS", default="vips,pil,ff", help="image decoders, in order of preference")
ap2.add_argument("--th-no-jpg", action="store_true", help="disable jpg output")
ap2.add_argument("--th-no-webp", action="store_true", help="disable webp output")
ap2.add_argument("--th-ff-jpg", action="store_true", help="force jpg output for video thumbs (avoids issues on some FFmpeg builds)")
ap2.add_argument("--th-ff-swr", action="store_true", help="use swresample instead of soxr for audio thumbs (faster, lower accuracy, avoids issues on some FFmpeg builds)")
ap2.add_argument("--th-poke", metavar="SEC", type=int, default=300, help="activity labeling cooldown -- avoids doing keepalive pokes (updating the mtime) on thumbnail folders more often than \033[33mSEC\033[0m seconds")
ap2.add_argument("--th-clean", metavar="SEC", type=int, default=43200, help="cleanup interval; 0=disabled")
ap2.add_argument("--th-maxage", metavar="SEC", type=int, default=604800, help="max folder age -- folders which haven't been poked for longer than \033[33m--th-poke\033[0m seconds will get deleted every \033[33m--th-clean\033[0m seconds")
ap2.add_argument("--th-covers", metavar="N,N", type=u, default="folder.png,folder.jpg,cover.png,cover.jpg", help="folder thumbnails to stat/look for; enabling \033[33m-e2d\033[0m will make these case-insensitive, and try them as dotfiles (.folder.jpg), and also automatically select thumbnails for all folders that contain pics, even if none match this pattern")
# https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html
# https://github.com/libvips/libvips
# ffmpeg -hide_banner -demuxers | awk '/^ D /{print$2}' | while IFS= read -r x; do ffmpeg -hide_banner -h demuxer=$x; done | grep -E '^Demuxer |extensions:'
@@ -1141,29 +1193,30 @@ def add_transcoding(ap):
ap2 = ap.add_argument_group('transcoding options')
ap2.add_argument("--no-acode", action="store_true", help="disable audio transcoding")
ap2.add_argument("--no-bacode", action="store_true", help="disable batch audio transcoding by folder download (zip/tar)")
ap2.add_argument("--ac-maxage", metavar="SEC", type=int, default=86400, help="delete cached transcode output after \033[33mSEC\033[0m seconds")
def add_db_general(ap, hcores):
noidx = APPLESAN_TXT if MACOS else ""
ap2 = ap.add_argument_group('general db options')
ap2.add_argument("-e2d", action="store_true", help="enable up2k database, making files searchable + enables upload deduplication")
ap2.add_argument("-e2ds", action="store_true", help="scan writable folders for new files on startup; sets \033[33m-e2d\033[0m")
ap2.add_argument("-e2dsa", action="store_true", help="scans all folders on startup; sets \033[33m-e2ds\033[0m")
ap2.add_argument("-e2v", action="store_true", help="verify file integrity; rehash all files and compare with db")
ap2.add_argument("-e2vu", action="store_true", help="on hash mismatch: update the database with the new hash")
ap2.add_argument("-e2vp", action="store_true", help="on hash mismatch: panic and quit copyparty")
ap2.add_argument("--hist", metavar="PATH", type=u, help="where to store volume data (db, thumbs) (volflag=hist)")
ap2.add_argument("--no-hash", metavar="PTN", type=u, help="regex: disable hashing of matching absolute-filesystem-paths during e2ds folder scans (volflag=nohash)")
ap2.add_argument("--no-idx", metavar="PTN", type=u, default=noidx, help="regex: disable indexing of matching absolute-filesystem-paths during e2ds folder scans (volflag=noidx)")
ap2.add_argument("--no-dhash", action="store_true", help="disable rescan acceleration; do full database integrity check -- makes the db ~5%% smaller and bootup/rescans 3~10x slower")
ap2.add_argument("--re-dhash", action="store_true", help="rebuild the cache if it gets out of sync (for example crash on startup during metadata scanning)")
ap2.add_argument("--no-forget", action="store_true", help="never forget indexed files, even when deleted from disk -- makes it impossible to ever upload the same file twice -- only useful for offloading uploads to a cloud service or something (volflag=noforget)")
ap2.add_argument("--dbd", metavar="PROFILE", default="wal", help="database durability profile; sets the tradeoff between robustness and speed, see \033[33m--help-dbd\033[0m (volflag=dbd)")
ap2.add_argument("--xlink", action="store_true", help="on upload: check all volumes for dupes, not just the target volume (volflag=xlink)")
ap2.add_argument("--hash-mt", metavar="CORES", type=int, default=hcores, help="num cpu cores to use for file hashing; set 0 or 1 for single-core hashing")
ap2.add_argument("--re-maxage", metavar="SEC", type=int, default=0, help="rescan filesystem for changes every \033[33mSEC\033[0m seconds; 0=off (volflag=scan)")
ap2.add_argument("--db-act", metavar="SEC", type=float, default=10, help="defer any scheduled volume reindexing until SEC seconds after last db write (uploads, renames, ...)") ap2.add_argument("--db-act", metavar="SEC", type=float, default=10, help="defer any scheduled volume reindexing until \033[33mSEC\033[0m seconds after last db write (uploads, renames, ...)")
ap2.add_argument("--srch-time", metavar="SEC", type=int, default=45, help="search deadline -- terminate searches running for more than SEC seconds") ap2.add_argument("--srch-time", metavar="SEC", type=int, default=45, help="search deadline -- terminate searches running for more than \033[33mSEC\033[0m seconds")
ap2.add_argument("--srch-hits", metavar="N", type=int, default=7999, help="max search results to allow clients to fetch; 125 results will be shown initially") ap2.add_argument("--srch-hits", metavar="N", type=int, default=7999, help="max search results to allow clients to fetch; 125 results will be shown initially")
ap2.add_argument("--dotsrch", action="store_true", help="show dotfiles in search results (volflags: dotsrch | nodotsrch)") ap2.add_argument("--dotsrch", action="store_true", help="show dotfiles in search results (volflags: dotsrch | nodotsrch)")
@@ -1171,25 +1224,25 @@ def add_db_general(ap, hcores):
 def add_db_metadata(ap):
     ap2 = ap.add_argument_group('metadata db options')
     ap2.add_argument("-e2t", action="store_true", help="enable metadata indexing; makes it possible to search for artist/title/codec/resolution/...")
-    ap2.add_argument("-e2ts", action="store_true", help="scan existing files on startup; sets -e2t")
+    ap2.add_argument("-e2ts", action="store_true", help="scan newly discovered files for metadata on startup; sets \033[33m-e2t\033[0m")
-    ap2.add_argument("-e2tsr", action="store_true", help="delete all metadata from DB and do a full rescan; sets -e2ts")
+    ap2.add_argument("-e2tsr", action="store_true", help="delete all metadata from DB and do a full rescan; sets \033[33m-e2ts\033[0m")
-    ap2.add_argument("--no-mutagen", action="store_true", help="use FFprobe for tags instead; will catch more tags")
+    ap2.add_argument("--no-mutagen", action="store_true", help="use FFprobe for tags instead; will detect more tags")
     ap2.add_argument("--no-mtag-ff", action="store_true", help="never use FFprobe as tag reader; is probably safer")
-    ap2.add_argument("--mtag-to", metavar="SEC", type=int, default=60, help="timeout for ffprobe tag-scan")
+    ap2.add_argument("--mtag-to", metavar="SEC", type=int, default=60, help="timeout for FFprobe tag-scan")
     ap2.add_argument("--mtag-mt", metavar="CORES", type=int, default=CORES, help="num cpu cores to use for tag scanning")
     ap2.add_argument("--mtag-v", action="store_true", help="verbose tag scanning; print errors from mtp subprocesses and such")
-    ap2.add_argument("--mtag-vv", action="store_true", help="debug mtp settings and mutagen/ffprobe parsers")
+    ap2.add_argument("--mtag-vv", action="store_true", help="debug mtp settings and mutagen/FFprobe parsers")
     ap2.add_argument("-mtm", metavar="M=t,t,t", type=u, action="append", help="add/replace metadata mapping")
     ap2.add_argument("-mte", metavar="M,M,M", type=u, help="tags to index/display (comma-sep.); either an entire replacement list, or add/remove stuff on the default-list with +foo or /bar", default=DEF_MTE)
-    ap2.add_argument("-mth", metavar="M,M,M", type=u, help="tags to hide by default (comma-sep.); assign/add/remove same as -mte", default=DEF_MTH)
+    ap2.add_argument("-mth", metavar="M,M,M", type=u, help="tags to hide by default (comma-sep.); assign/add/remove same as \033[33m-mte\033[0m", default=DEF_MTH)
-    ap2.add_argument("-mtp", metavar="M=[f,]BIN", type=u, action="append", help="read tag M using program BIN to parse the file")
+    ap2.add_argument("-mtp", metavar="M=[f,]BIN", type=u, action="append", help="read tag \033[33mM\033[0m using program \033[33mBIN\033[0m to parse the file")
 def add_txt(ap):
     ap2 = ap.add_argument_group('textfile options')
-    ap2.add_argument("-mcr", metavar="SEC", type=int, default=60, help="textfile editor checks for serverside changes every SEC seconds")
+    ap2.add_argument("-mcr", metavar="SEC", type=int, default=60, help="the textfile editor will check for serverside changes every \033[33mSEC\033[0m seconds")
     ap2.add_argument("-emp", action="store_true", help="enable markdown plugins -- neat but dangerous, big XSS risk")
-    ap2.add_argument("--exp", action="store_true", help="enable textfile expansion -- replace {{self.ip}} and such; see --help-exp (volflag=exp)")
+    ap2.add_argument("--exp", action="store_true", help="enable textfile expansion -- replace {{self.ip}} and such; see \033[33m--help-exp\033[0m (volflag=exp)")
     ap2.add_argument("--exp-md", metavar="V,V,V", type=u, default=DEF_EXP, help="comma/space-separated list of placeholders to expand in markdown files; add/remove stuff on the default list with +hdr_foo or /vf.scan (volflag=exp_md)")
     ap2.add_argument("--exp-lg", metavar="V,V,V", type=u, default=DEF_EXP, help="comma/space-separated list of placeholders to expand in prologue/epilogue files (volflag=exp_lg)")
@@ -1197,11 +1250,11 @@ def add_txt(ap):
 def add_ui(ap, retry):
     ap2 = ap.add_argument_group('ui options')
     ap2.add_argument("--grid", action="store_true", help="show grid/thumbnails by default (volflag=grid)")
-    ap2.add_argument("--lang", metavar="LANG", type=u, default="eng", help="language; one of the following: eng nor")
+    ap2.add_argument("--lang", metavar="LANG", type=u, default="eng", help="language; one of the following: \033[32meng nor\033[0m")
-    ap2.add_argument("--theme", metavar="NUM", type=int, default=0, help="default theme to use")
+    ap2.add_argument("--theme", metavar="NUM", type=int, default=0, help="default theme to use (0..7)")
     ap2.add_argument("--themes", metavar="NUM", type=int, default=8, help="number of themes installed")
     ap2.add_argument("--sort", metavar="C,C,C", type=u, default="href", help="default sort order, comma-separated column IDs (see header tooltips), prefix with '-' for descending. Examples: \033[32mhref -href ext sz ts tags/Album tags/.tn\033[0m (volflag=sort)")
-    ap2.add_argument("--unlist", metavar="REGEX", type=u, default="", help="don't show files matching REGEX in file list. Purely cosmetic! Does not affect API calls, just the browser. Example: [\033[32m\\.(js|css)$\033[0m] (volflag=unlist)")
+    ap2.add_argument("--unlist", metavar="REGEX", type=u, default="", help="don't show files matching \033[33mREGEX\033[0m in file list. Purely cosmetic! Does not affect API calls, just the browser. Example: [\033[32m\\.(js|css)$\033[0m] (volflag=unlist)")
     ap2.add_argument("--favico", metavar="TXT", type=u, default="c 000 none" if retry else "🎉 000 none", help="\033[33mfavicon-text\033[0m [ \033[33mforeground\033[0m [ \033[33mbackground\033[0m ] ], set blank to disable")
     ap2.add_argument("--mpmc", metavar="URL", type=u, default="", help="change the mediaplayer-toggle mouse cursor; URL to a folder with {2..5}.png inside (or disable with [\033[32m.\033[0m])")
     ap2.add_argument("--js-browser", metavar="L", type=u, help="URL to additional JS to include")
@@ -1212,8 +1265,8 @@ def add_ui(ap, retry):
     ap2.add_argument("--txt-max", metavar="KiB", type=int, default=64, help="max size of embedded textfiles on ?doc= (anything bigger will be lazy-loaded by JS)")
     ap2.add_argument("--doctitle", metavar="TXT", type=u, default="copyparty @ --name", help="title / service-name to show in html documents")
     ap2.add_argument("--bname", metavar="TXT", type=u, default="--name", help="server name (displayed in filebrowser document title)")
-    ap2.add_argument("--pb-url", metavar="URL", type=u, default="https://github.com/9001/copyparty", help="powered-by link; disable with -np")
+    ap2.add_argument("--pb-url", metavar="URL", type=u, default="https://github.com/9001/copyparty", help="powered-by link; disable with \033[33m-np\033[0m")
-    ap2.add_argument("--ver", action="store_true", help="show version on the control panel (incompatible with -nb)")
+    ap2.add_argument("--ver", action="store_true", help="show version on the control panel (incompatible with \033[33m-nb\033[0m)")
     ap2.add_argument("--md-sbf", metavar="FLAGS", type=u, default="downloads forms popups scripts top-navigation-by-user-activation", help="list of capabilities to ALLOW for README.md docs (volflag=md_sbf); see https://developer.mozilla.org/en-US/docs/Web/HTML/Element/iframe#attr-sandbox")
     ap2.add_argument("--lg-sbf", metavar="FLAGS", type=u, default="downloads forms popups scripts top-navigation-by-user-activation", help="list of capabilities to ALLOW for prologue/epilogue docs (volflag=lg_sbf)")
     ap2.add_argument("--no-sb-md", action="store_true", help="don't sandbox README.md documents (volflags: no_sb_md | sb_md)")
@@ -1224,18 +1277,18 @@ def add_debug(ap):
     ap2 = ap.add_argument_group('debug options')
     ap2.add_argument("--vc", action="store_true", help="verbose config file parser (explain config)")
     ap2.add_argument("--cgen", action="store_true", help="generate config file from current config (best-effort; probably buggy)")
-    ap2.add_argument("--no-sendfile", action="store_true", help="disable sendfile; instead using a traditional file read loop")
+    ap2.add_argument("--no-sendfile", action="store_true", help="kernel-bug workaround: disable sendfile; do a safe and slow read-send-loop instead")
-    ap2.add_argument("--no-scandir", action="store_true", help="disable scandir; instead using listdir + stat on each file")
+    ap2.add_argument("--no-scandir", action="store_true", help="kernel-bug workaround: disable scandir; do a listdir + stat on each file instead")
-    ap2.add_argument("--no-fastboot", action="store_true", help="wait for up2k indexing before starting the httpd")
+    ap2.add_argument("--no-fastboot", action="store_true", help="wait for initial filesystem indexing before accepting client requests")
     ap2.add_argument("--no-htp", action="store_true", help="disable httpserver threadpool, create threads as-needed instead")
     ap2.add_argument("--srch-dbg", action="store_true", help="explain search processing, and do some extra expensive sanity checks")
     ap2.add_argument("--rclone-mdns", action="store_true", help="use mdns-domain instead of server-ip on /?hc")
-    ap2.add_argument("--stackmon", metavar="P,S", type=u, help="write stacktrace to Path every S second, for example --stackmon=\033[32m./st/%%Y-%%m/%%d/%%H%%M.xz,60")
+    ap2.add_argument("--stackmon", metavar="P,S", type=u, help="write stacktrace to \033[33mP\033[0math every \033[33mS\033[0m second, for example --stackmon=\033[32m./st/%%Y-%%m/%%d/%%H%%M.xz,60")
-    ap2.add_argument("--log-thrs", metavar="SEC", type=float, help="list active threads every SEC")
+    ap2.add_argument("--log-thrs", metavar="SEC", type=float, help="list active threads every \033[33mSEC\033[0m")
-    ap2.add_argument("--log-fk", metavar="REGEX", type=u, default="", help="log filekey params for files where path matches REGEX; [\033[32m.\033[0m] (a single dot) = all files")
+    ap2.add_argument("--log-fk", metavar="REGEX", type=u, default="", help="log filekey params for files where path matches \033[33mREGEX\033[0m; [\033[32m.\033[0m] (a single dot) = all files")
-    ap2.add_argument("--bak-flips", action="store_true", help="[up2k] if a client uploads a bitflipped/corrupted chunk, store a copy according to --bf-nc and --bf-dir")
+    ap2.add_argument("--bak-flips", action="store_true", help="[up2k] if a client uploads a bitflipped/corrupted chunk, store a copy according to \033[33m--bf-nc\033[0m and \033[33m--bf-dir\033[0m")
-    ap2.add_argument("--bf-nc", metavar="NUM", type=int, default=200, help="bak-flips: stop if there's more than NUM files at --kf-dir already; default: 6.3 GiB max (200*32M)")
+    ap2.add_argument("--bf-nc", metavar="NUM", type=int, default=200, help="bak-flips: stop if there's more than \033[33mNUM\033[0m files at \033[33m--kf-dir\033[0m already; default: 6.3 GiB max (200*32M)")
-    ap2.add_argument("--bf-dir", metavar="PATH", type=u, default="bf", help="bak-flips: store corrupted chunks at PATH; default: folder named 'bf' wherever copyparty was started")
+    ap2.add_argument("--bf-dir", metavar="PATH", type=u, default="bf", help="bak-flips: store corrupted chunks at \033[33mPATH\033[0m; default: folder named 'bf' wherever copyparty was started")
 # fmt: on
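Most of the help-string churn above is the same mechanical change: wrapping cross-referenced flag names in ANSI SGR color codes (`\033[33m` = yellow for referenced flags, `\033[32m` = green for example values, `\033[0m` = reset). A minimal sketch of the convention; the helper name `hl_flag` is illustrative, not from the codebase:

```python
# ANSI SGR escape sequences as used in the rewritten help strings:
# \033[33m = yellow (flag cross-references), \033[32m = green (examples),
# \033[0m = reset back to the default color
YELLOW, RESET = "\033[33m", "\033[0m"

def hl_flag(text: str, flag: str) -> str:
    """wrap a referenced flag name in yellow so it stands out in --help output"""
    return text.replace(flag, YELLOW + flag + RESET)

msg = hl_flag("scans all folders on startup; sets -e2ds", "-e2ds")
```

Terminals that honor SGR render the flag in color; with `--ansi` disabled, copyparty would have to strip these sequences before printing.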
@@ -1252,7 +1305,7 @@ def run_argparse(
     cert_path = os.path.join(E.cfg, "cert.pem")
-    fk_salt = get_fk_salt(cert_path)
+    fk_salt = get_fk_salt()
     ah_salt = get_ah_salt()
     # alpine peaks at 5 threads for some reason,
@@ -1268,10 +1321,12 @@ def run_argparse(
     add_network(ap)
     add_tls(ap, cert_path)
     add_cert(ap, cert_path)
+    add_auth(ap)
     add_qr(ap, tty)
     add_zeroconf(ap)
     add_zc_mdns(ap)
     add_zc_ssdp(ap)
+    add_fs(ap)
     add_upload(ap)
     add_db_general(ap, hcores)
     add_db_metadata(ap)
@@ -1279,6 +1334,7 @@ def run_argparse(
     add_transcoding(ap)
     add_ftp(ap)
     add_webdav(ap)
+    add_tftp(ap)
     add_smb(ap)
     add_safety(ap)
     add_salt(ap, fk_salt, ah_salt)
@@ -1332,15 +1388,16 @@ def main(argv: Optional[list[str]] = None) -> None:
     if argv is None:
         argv = sys.argv
-    f = '\033[36mcopyparty v{} "\033[35m{}\033[36m" ({})\n{}\033[0;36m\n sqlite v{} | jinja2 v{} | pyftpd v{}\n\033[0m'
+    f = '\033[36mcopyparty v{} "\033[35m{}\033[36m" ({})\n{}\033[0;36m\n sqlite {} | jinja {} | pyftpd {} | tftp {}\n\033[0m'
     f = f.format(
         S_VERSION,
         CODENAME,
         S_BUILD_DT,
-        py_desc().replace("[", "\033[90m["),
+        PY_DESC.replace("[", "\033[90m["),
         SQLITE_VER,
         JINJA_VER,
         PYFTPD_VER,
+        PARTFTPY_VER,
     )
     lprint(f)
@@ -1369,7 +1426,10 @@ def main(argv: Optional[list[str]] = None) -> None:
     supp = args_from_cfg(v)
     argv.extend(supp)
-    deprecated: list[tuple[str, str]] = [("--salt", "--warksalt")]
+    deprecated: list[tuple[str, str]] = [
+        ("--salt", "--warksalt"),
+        ("--hdr-au-usr", "--idp-h-usr"),
+    ]
     for dk, nk in deprecated:
         idx = -1
         ov = ""
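The `deprecated` table grows a second entry here (`--hdr-au-usr` renamed to `--idp-h-usr`); the loop below it rewrites old flag names in `argv` so existing launch scripts keep working. A simplified sketch of that rewrite, assuming the real loop also handles the `--flag=value` form (the function name `upgrade_argv` is illustrative):

```python
# old-flag -> new-flag pairs, matching the table in main()
deprecated = [("--salt", "--warksalt"), ("--hdr-au-usr", "--idp-h-usr")]

def upgrade_argv(argv: list) -> list:
    """rewrite deprecated CLI flags to their new names, keeping =value suffixes"""
    out = []
    for a in argv:
        for dk, nk in deprecated:
            if a == dk or a.startswith(dk + "="):
                a = nk + a[len(dk):]  # preserve any "=value" part
        out.append(a)
    return out
```

The real implementation also prints a deprecation warning so users know to update their configs.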
@@ -1407,7 +1467,7 @@ def main(argv: Optional[list[str]] = None) -> None:
         _, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
         if hard > 0:  # -1 == infinite
-            nc = min(nc, hard // 4)
+            nc = min(nc, int(hard / 4))
     except:
         nc = 512
@@ -1444,40 +1504,6 @@ def main(argv: Optional[list[str]] = None) -> None:
     if al.ansi:
         al.wintitle = ""
-    nstrs: list[str] = []
-    anymod = False
-    for ostr in al.v or []:
-        m = re_vol.match(ostr)
-        if not m:
-            # not our problem
-            nstrs.append(ostr)
-            continue
-        src, dst, perms = m.groups()
-        na = [src, dst]
-        mod = False
-        for opt in perms.split(":"):
-            if re.match("c[^,]", opt):
-                mod = True
-                na.append("c," + opt[1:])
-            elif re.sub("^[rwmdgGha]*", "", opt) and "," not in opt:
-                mod = True
-                perm = opt[0]
-                na.append(perm + "," + opt[1:])
-            else:
-                na.append(opt)
-        nstr = ":".join(na)
-        nstrs.append(nstr if mod else ostr)
-        if mod:
-            msg = "\033[1;31mWARNING:\033[0;1m\n -v {} \033[0;33mwas replaced with\033[0;1m\n -v {} \n\033[0m"
-            lprint(msg.format(ostr, nstr))
-            anymod = True
-    if anymod:
-        al.v = nstrs
-        time.sleep(2)
     # propagate implications
     for k1, k2 in IMPLICATIONS:
         if getattr(al, k1):
@@ -1533,6 +1559,9 @@ def main(argv: Optional[list[str]] = None) -> None:
     if sys.version_info < (3, 6):
         al.no_scandir = True
+    if not hasattr(os, "sendfile"):
+        al.no_sendfile = True
     # signal.signal(signal.SIGINT, sighandler)
     SvcHub(al, dal, argv, "".join(printed)).run()
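The new `hasattr(os, "sendfile")` guard exists because `os.sendfile` is only available on platforms whose kernel exposes the syscall (Linux, macOS, BSDs); on Windows it is absent, so the server must fall back to the plain read-send loop. A sketch of the same feature check (the function name `pick_sender` is illustrative):

```python
import os

def pick_sender() -> str:
    """mirror of the startup check: prefer zero-copy sendfile when the
    platform exposes it, otherwise fall back to a read-send loop"""
    return "sendfile" if hasattr(os, "sendfile") else "read-loop"
```

Doing this once at startup keeps the per-request hot path free of feature probes.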


@@ -1,8 +1,8 @@
 # coding: utf-8
-VERSION = (1, 9, 19)
-CODENAME = "prometheable"
-BUILD_DT = (2023, 11, 19)
+VERSION = (1, 10, 0)
+CODENAME = "tftp"
+BUILD_DT = (2024, 2, 15)
 S_VERSION = ".".join(map(str, VERSION))
 S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)


@@ -72,6 +72,7 @@ class AXS(object):
         upget: Optional[Union[list[str], set[str]]] = None,
         uhtml: Optional[Union[list[str], set[str]]] = None,
         uadmin: Optional[Union[list[str], set[str]]] = None,
+        udot: Optional[Union[list[str], set[str]]] = None,
     ) -> None:
         self.uread: set[str] = set(uread or [])
         self.uwrite: set[str] = set(uwrite or [])
@@ -81,9 +82,10 @@ class AXS(object):
         self.upget: set[str] = set(upget or [])
         self.uhtml: set[str] = set(uhtml or [])
         self.uadmin: set[str] = set(uadmin or [])
+        self.udot: set[str] = set(udot or [])
     def __repr__(self) -> str:
-        ks = "uread uwrite umove udel uget upget uhtml uadmin".split()
+        ks = "uread uwrite umove udel uget upget uhtml uadmin udot".split()
         return "AXS(%s)" % (", ".join("%s=%r" % (k, self.__dict__[k]) for k in ks),)
@@ -336,6 +338,8 @@ class VFS(object):
         self.apget: dict[str, list[str]] = {}
         self.ahtml: dict[str, list[str]] = {}
         self.aadmin: dict[str, list[str]] = {}
+        self.adot: dict[str, list[str]] = {}
+        self.all_vols: dict[str, VFS] = {}
         if realpath:
             rp = realpath + ("" if realpath.endswith(os.sep) else os.sep)
@@ -377,7 +381,7 @@ class VFS(object):
     def add(self, src: str, dst: str) -> "VFS":
         """get existing, or add new path to the vfs"""
-        assert not src.endswith("/")  # nosec
+        assert src == "/" or not src.endswith("/")  # nosec
         assert not dst.endswith("/")  # nosec
         if "/" in dst:
@@ -414,7 +418,7 @@ class VFS(object):
         hist = flags.get("hist")
         if hist and hist != "-":
             zs = "{}/{}".format(hist.rstrip("/"), name)
-            flags["hist"] = os.path.expanduser(zs) if zs.startswith("~") else zs
+            flags["hist"] = os.path.expandvars(os.path.expanduser(zs))
         return flags
@@ -445,8 +449,8 @@ class VFS(object):
     def can_access(
         self, vpath: str, uname: str
-    ) -> tuple[bool, bool, bool, bool, bool, bool, bool]:
+    ) -> tuple[bool, bool, bool, bool, bool, bool, bool, bool]:
-        """can Read,Write,Move,Delete,Get,Upget,Admin"""
+        """can Read,Write,Move,Delete,Get,Upget,Admin,Dot"""
         if vpath:
             vn, _ = self._find(undot(vpath))
         else:
@@ -454,13 +458,14 @@ class VFS(object):
         c = vn.axs
         return (
-            uname in c.uread or "*" in c.uread,
-            uname in c.uwrite or "*" in c.uwrite,
-            uname in c.umove or "*" in c.umove,
-            uname in c.udel or "*" in c.udel,
-            uname in c.uget or "*" in c.uget,
-            uname in c.upget or "*" in c.upget,
-            uname in c.uadmin or "*" in c.uadmin,
+            uname in c.uread,
+            uname in c.uwrite,
+            uname in c.umove,
+            uname in c.udel,
+            uname in c.uget,
+            uname in c.upget,
+            uname in c.uadmin,
+            uname in c.udot,
         )
         # skip uhtml because it's rarely needed
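Dropping the `or "*" in c.uread`-style wildcard checks from the hot path only stays correct if the `"*"` (everyone) entry is expanded into concrete usernames once, at config-parse time; that appears to be what the new alias-expansion code in `_read_vol_str` is for. A sketch of that assumed normalization (the function name `expand_star` is illustrative, not from the codebase):

```python
def expand_star(uset: set, all_users: set) -> set:
    """expand the '*' wildcard into the concrete user list once at parse
    time, so the per-request test becomes a plain `uname in uset`"""
    return (uset - {"*"}) | all_users if "*" in uset else uset
```

One set-union at startup trades for two membership tests saved on every permission check.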
@@ -492,7 +497,7 @@ class VFS(object):
             (will_del, c.udel, "delete"),
             (will_get, c.uget, "get"),
         ]:
-            if req and (uname not in d and "*" not in d) and uname != LEELOO_DALLAS:
+            if req and uname not in d and uname != LEELOO_DALLAS:
                 if vpath != cvpath and vpath != "." and self.log:
                     ap = vn.canonical(rem)
                     t = "{} has no {} in [{}] => [{}] => [{}]"
@@ -553,7 +558,7 @@ class VFS(object):
         for pset in permsets:
             ok = True
             for req, lst in zip(pset, axs):
-                if req and uname not in lst and "*" not in lst:
+                if req and uname not in lst:
                     ok = False
             if ok:
                 break
@@ -577,7 +582,7 @@ class VFS(object):
         seen: list[str],
         uname: str,
         permsets: list[list[bool]],
-        dots: bool,
+        wantdots: bool,
         scandir: bool,
         lstat: bool,
         subvols: bool = True,
@@ -621,6 +626,10 @@ class VFS(object):
                 rm1.append(le)
             _ = [vfs_ls.remove(x) for x in rm1]  # type: ignore
+        dots_ok = wantdots and uname in dbv.axs.udot
+        if not dots_ok:
+            vfs_ls = [x for x in vfs_ls if "/." not in "/" + x[0]]
         seen = seen[:] + [fsroot]
         rfiles = [x for x in vfs_ls if not stat.S_ISDIR(x[1].st_mode)]
         rdirs = [x for x in vfs_ls if stat.S_ISDIR(x[1].st_mode)]
@@ -633,13 +642,13 @@ class VFS(object):
         yield dbv, vrem, rel, fsroot, rfiles, rdirs, vfs_virt
         for rdir, _ in rdirs:
-            if not dots and rdir.startswith("."):
+            if not dots_ok and rdir.startswith("."):
                 continue
             wrel = (rel + "/" + rdir).lstrip("/")
             wrem = (rem + "/" + rdir).lstrip("/")
             for x in self.walk(
-                wrel, wrem, seen, uname, permsets, dots, scandir, lstat, subvols
+                wrel, wrem, seen, uname, permsets, wantdots, scandir, lstat, subvols
             ):
                 yield x
@@ -647,11 +656,13 @@ class VFS(object):
             return
         for n, vfs in sorted(vfs_virt.items()):
-            if not dots and n.startswith("."):
+            if not dots_ok and n.startswith("."):
                 continue
             wrel = (rel + "/" + n).lstrip("/")
-            for x in vfs.walk(wrel, "", seen, uname, permsets, dots, scandir, lstat):
+            for x in vfs.walk(
+                wrel, "", seen, uname, permsets, wantdots, scandir, lstat
+            ):
                 yield x
     def zipgen(
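The dotfile filtering that `walk` now applies centrally (and that `zipgen` used to do itself, in the block removed further down) relies on one substring trick: prepending `/` to a relative path makes `"/." in path` match a dotfile at any depth, including the first component. A self-contained sketch of the same filter (the function name `hide_dotfiles` is illustrative):

```python
def hide_dotfiles(names: list) -> list:
    """drop any relative path containing a dot-prefixed component;
    the '/' prepend makes the top-level component match the same
    '/.' substring test as nested ones"""
    return [n for n in names if "/." not in "/" + n]
```

This is O(length of path) per entry with no regex, which matters when listing large folders.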
@@ -660,7 +671,6 @@ class VFS(object):
         vrem: str,
         flt: set[str],
         uname: str,
-        dots: bool,
         dirs: bool,
         scandir: bool,
         wrap: bool = True,
@@ -670,7 +680,7 @@ class VFS(object):
         # if single folder: the folder itself is the top-level item
         folder = "" if flt or not wrap else (vpath.split("/")[-1].lstrip(".") or "top")
-        g = self.walk(folder, vrem, [], uname, [[True, False]], dots, scandir, False)
+        g = self.walk(folder, vrem, [], uname, [[True, False]], True, scandir, False)
         for _, _, vpath, apath, files, rd, vd in g:
             if flt:
                 files = [x for x in files if x[0] in flt]
@@ -689,18 +699,6 @@ class VFS(object):
             apaths = [os.path.join(apath, n) for n in fnames]
             ret = list(zip(vpaths, apaths, files))
-            if not dots:
-                # dotfile filtering based on vpath (intended visibility)
-                ret = [x for x in ret if "/." not in "/" + x[0]]
-                zel = [ze for ze in rd if ze[0].startswith(".")]
-                for ze in zel:
-                    rd.remove(ze)
-                zsl = [zs for zs in vd.keys() if zs.startswith(".")]
-                for zs in zsl:
-                    del vd[zs]
             for f in [{"vp": v, "ap": a, "st": n[1]} for v, a, n in ret]:
                 yield f
@@ -781,7 +779,6 @@ class AuthSrv(object):
         self.warn_anonwrite = warn_anonwrite
         self.line_ctr = 0
         self.indent = ""
-        self.desc = []
         self.mutex = threading.Lock()
         self.reload()
@@ -864,7 +861,6 @@ class AuthSrv(object):
         mflags: dict[str, dict[str, Any]],
         mount: dict[str, str],
     ) -> None:
-        self.desc = []
         self.line_ctr = 0
         expand_config_file(cfg_lines, fp, "")
@@ -947,9 +943,7 @@ class AuthSrv(object):
             if vp is not None and ap is None:
                 ap = ln
-                if ap.startswith("~"):
-                    ap = os.path.expanduser(ap)
+                ap = os.path.expandvars(os.path.expanduser(ap))
                 ap = absreal(ap)
                 self._l(ln, 2, "bound to filesystem-path [{}]".format(ap))
                 self._map_volume(ap, vp, mount, daxs, mflags)
@@ -960,16 +954,17 @@ class AuthSrv(object):
                 try:
                     self._l(ln, 5, "volume access config:")
                     sk, sv = ln.split(":")
-                    if re.sub("[rwmdgGha]", "", sk) or not sk:
+                    if re.sub("[rwmdgGhaA.]", "", sk) or not sk:
                         err = "invalid accs permissions list; "
                         raise Exception(err)
                     if " " in re.sub(", *", "", sv).strip():
                         err = "list of users is not comma-separated; "
                         raise Exception(err)
+                    assert vp is not None
                     self._read_vol_str(sk, sv.replace(" ", ""), daxs[vp], mflags[vp])
                     continue
                 except:
-                    err += "accs entries must be 'rwmdgGha: user1, user2, ...'"
+                    err += "accs entries must be 'rwmdgGhaA.: user1, user2, ...'"
                     raise Exception(err + SBADCFG)
             if cat == catf:
@@ -988,9 +983,11 @@ class AuthSrv(object):
fstr += "," + sk fstr += "," + sk
else: else:
fstr += ",{}={}".format(sk, sv) fstr += ",{}={}".format(sk, sv)
assert vp is not None
self._read_vol_str("c", fstr[1:], daxs[vp], mflags[vp]) self._read_vol_str("c", fstr[1:], daxs[vp], mflags[vp])
fstr = "" fstr = ""
if fstr: if fstr:
assert vp is not None
self._read_vol_str("c", fstr[1:], daxs[vp], mflags[vp]) self._read_vol_str("c", fstr[1:], daxs[vp], mflags[vp])
continue continue
except: except:
@@ -1005,10 +1002,12 @@ class AuthSrv(object):
def _read_vol_str( def _read_vol_str(
self, lvl: str, uname: str, axs: AXS, flags: dict[str, Any] self, lvl: str, uname: str, axs: AXS, flags: dict[str, Any]
) -> None: ) -> None:
if lvl.strip("crwmdgGha"): if lvl.strip("crwmdgGhaA."):
raise Exception("invalid volflag: {},{}".format(lvl, uname)) t = "%s,%s" % (lvl, uname) if uname else lvl
raise Exception("invalid config value (volume or volflag): %s" % (t,))
if lvl == "c": if lvl == "c":
# here, 'uname' is not a username; it is a volflag name... sorry
cval: Union[bool, str] = True cval: Union[bool, str] = True
try: try:
# volflag with arguments, possibly with a preceding list of bools # volflag with arguments, possibly with a preceding list of bools
@@ -1028,19 +1027,31 @@ class AuthSrv(object):
if uname == "": if uname == "":
uname = "*" uname = "*"
junkset = set()
for un in uname.replace(",", " ").strip().split(): for un in uname.replace(",", " ").strip().split():
for alias, mapping in [
("h", "gh"),
("G", "gG"),
("A", "rwmda.A"),
]:
expanded = ""
for ch in mapping:
if ch not in lvl:
expanded += ch
lvl = lvl.replace(alias, expanded + alias)
for ch, al in [ for ch, al in [
("r", axs.uread), ("r", axs.uread),
("w", axs.uwrite), ("w", axs.uwrite),
("m", axs.umove), ("m", axs.umove),
("d", axs.udel), ("d", axs.udel),
(".", axs.udot),
("a", axs.uadmin), ("a", axs.uadmin),
("h", axs.uhtml), ("A", junkset),
("h", axs.uget),
("g", axs.uget), ("g", axs.uget),
("G", axs.uget),
("G", axs.upget), ("G", axs.upget),
]: # b bb bbb ("h", axs.uhtml),
]:
if ch in lvl: if ch in lvl:
if un == "*": if un == "*":
t = "└─add permission [{0}] for [everyone] -- {2}" t = "└─add permission [{0}] for [everyone] -- {2}"
@@ -1112,7 +1123,7 @@ class AuthSrv(object):
if self.args.v: if self.args.v:
# list of src:dst:permset:permset:... # list of src:dst:permset:permset:...
# permset is <rwmdgGha>[,username][,username] or <c>,<flag>[=args] # permset is <rwmdgGhaA.>[,username][,username] or <c>,<flag>[=args]
for v_str in self.args.v: for v_str in self.args.v:
m = re_vol.match(v_str) m = re_vol.match(v_str)
if not m: if not m:
@@ -1186,12 +1197,13 @@ class AuthSrv(object):
vfs = VFS(self.log_func, mount[dst], dst, daxs[dst], mflags[dst]) vfs = VFS(self.log_func, mount[dst], dst, daxs[dst], mflags[dst])
continue continue
assert vfs # type: ignore
zv = vfs.add(mount[dst], dst) zv = vfs.add(mount[dst], dst)
zv.axs = daxs[dst] zv.axs = daxs[dst]
zv.flags = mflags[dst] zv.flags = mflags[dst]
zv.dbv = None zv.dbv = None
assert vfs assert vfs # type: ignore
vfs.all_vols = {} vfs.all_vols = {}
vfs.all_aps = [] vfs.all_aps = []
vfs.all_vps = [] vfs.all_vps = []
@@ -1201,20 +1213,28 @@ class AuthSrv(object):
vol.all_vps.sort(key=lambda x: len(x[0]), reverse=True) vol.all_vps.sort(key=lambda x: len(x[0]), reverse=True)
vol.root = vfs vol.root = vfs
for perm in "read write move del get pget html admin".split(): for perm in "read write move del get pget html admin dot".split():
axs_key = "u" + perm axs_key = "u" + perm
unames = ["*"] + list(acct.keys()) unames = ["*"] + list(acct.keys())
for vp, vol in vfs.all_vols.items():
zx = getattr(vol.axs, axs_key)
if "*" in zx:
for usr in unames:
zx.add(usr)
# aread,... = dict[uname, list[volnames] or []]
umap: dict[str, list[str]] = {x: [] for x in unames} umap: dict[str, list[str]] = {x: [] for x in unames}
for usr in unames: for usr in unames:
for vp, vol in vfs.all_vols.items(): for vp, vol in vfs.all_vols.items():
zx = getattr(vol.axs, axs_key) zx = getattr(vol.axs, axs_key)
if usr in zx or "*" in zx: if usr in zx:
umap[usr].append(vp) umap[usr].append(vp)
umap[usr].sort() umap[usr].sort()
setattr(vfs, "a" + perm, umap) setattr(vfs, "a" + perm, umap)
all_users = {} all_users = {}
missing_users = {} missing_users = {}
associated_users = {}
for axs in daxs.values(): for axs in daxs.values():
for d in [ for d in [
axs.uread, axs.uread,
@@ -1225,11 +1245,14 @@ class AuthSrv(object):
axs.upget, axs.upget,
axs.uhtml, axs.uhtml,
axs.uadmin, axs.uadmin,
axs.udot,
]: ]:
for usr in d: for usr in d:
all_users[usr] = 1 all_users[usr] = 1
if usr != "*" and usr not in acct: if usr != "*" and usr not in acct:
missing_users[usr] = 1 missing_users[usr] = 1
if "*" not in d:
associated_users[usr] = 1
if missing_users: if missing_users:
self.log( self.log(
@@ -1250,6 +1273,16 @@ class AuthSrv(object):
raise Exception(BAD_CFG) raise Exception(BAD_CFG)
seenpwds[pwd] = usr seenpwds[pwd] = usr
for usr in acct:
if usr not in associated_users:
if len(vfs.all_vols) > 1:
# user probably familiar enough that the verbose message is not necessary
t = "account [%s] is not mentioned in any volume definitions; see --help-accounts"
self.log(t % (usr,), 1)
else:
t = "WARNING: the account [%s] is not mentioned in any volume definitions and thus has the same access-level and privileges that guests have; please see --help-accounts for details. For example, if you intended to give that user full access to the current directory, you could do this: -v .::A,%s"
self.log(t % (usr, usr), 1)
promote = [] promote = []
demote = [] demote = []
for vol in vfs.all_vols.values(): for vol in vfs.all_vols.values():
@@ -1259,9 +1292,7 @@ class AuthSrv(object):
if vflag == "-": if vflag == "-":
pass pass
elif vflag: elif vflag:
if vflag.startswith("~"): vflag = os.path.expandvars(os.path.expanduser(vflag))
vflag = os.path.expanduser(vflag)
vol.histpath = uncyg(vflag) if WINDOWS else vflag vol.histpath = uncyg(vflag) if WINDOWS else vflag
elif self.args.hist: elif self.args.hist:
for nch in range(len(hid)): for nch in range(len(hid)):
@@ -1462,6 +1493,14 @@ class AuthSrv(object):
if k in vol.flags: if k in vol.flags:
vol.flags[k] = float(vol.flags[k]) vol.flags[k] = float(vol.flags[k])
try:
zs1, zs2 = vol.flags["rm_retry"].split("/")
vol.flags["rm_re_t"] = float(zs1)
vol.flags["rm_re_r"] = float(zs2)
except:
t = 'volume "/%s" has invalid rm_retry [%s]'
raise Exception(t % (vol.vpath, vol.flags.get("rm_retry")))
for k1, k2 in IMPLICATIONS: for k1, k2 in IMPLICATIONS:
if k1 in vol.flags: if k1 in vol.flags:
vol.flags[k2] = True vol.flags[k2] = True
@@ -1473,8 +1512,8 @@ class AuthSrv(object):
dbds = "acid|swal|wal|yolo" dbds = "acid|swal|wal|yolo"
vol.flags["dbd"] = dbd = vol.flags.get("dbd") or self.args.dbd vol.flags["dbd"] = dbd = vol.flags.get("dbd") or self.args.dbd
if dbd not in dbds.split("|"): if dbd not in dbds.split("|"):
t = "invalid dbd [{}]; must be one of [{}]" t = 'volume "/%s" has invalid dbd [%s]; must be one of [%s]'
raise Exception(t.format(dbd, dbds)) raise Exception(t % (vol.vpath, dbd, dbds))
# default tag cfgs if unset # default tag cfgs if unset
for k in ("mte", "mth", "exp_md", "exp_lg"): for k in ("mte", "mth", "exp_md", "exp_lg"):
@@ -1635,6 +1674,11 @@ class AuthSrv(object):
vol.flags.pop(k[1:], None) vol.flags.pop(k[1:], None)
vol.flags.pop(k) vol.flags.pop(k)
for vol in vfs.all_vols.values():
if vol.flags.get("dots"):
for name in vol.axs.uread:
vol.axs.udot.add(name)
if errors: if errors:
sys.exit(1) sys.exit(1)
@@ -1653,12 +1697,14 @@ class AuthSrv(object):
[" write", "uwrite"], [" write", "uwrite"],
[" move", "umove"], [" move", "umove"],
["delete", "udel"], ["delete", "udel"],
[" dots", "udot"],
[" get", "uget"], [" get", "uget"],
[" upget", "upget"], [" upGet", "upget"],
[" html", "uhtml"], [" html", "uhtml"],
["uadmin", "uadmin"], ["uadmin", "uadmin"],
]: ]:
u = list(sorted(getattr(zv.axs, attr))) u = list(sorted(getattr(zv.axs, attr)))
u = ["*"] if "*" in u else u
u = ", ".join("\033[35meverybody\033[0m" if x == "*" else x for x in u) u = ", ".join("\033[35meverybody\033[0m" if x == "*" else x for x in u)
u = u if u else "\033[36m--none--\033[0m" u = u if u else "\033[36m--none--\033[0m"
t += "\n| {}: {}".format(txt, u) t += "\n| {}: {}".format(txt, u)
@@ -1815,7 +1861,7 @@ class AuthSrv(object):
raise Exception("volume not found: " + zs) raise Exception("volume not found: " + zs)
self.log(str({"users": users, "vols": vols, "flags": flags})) self.log(str({"users": users, "vols": vols, "flags": flags}))
t = "/{}: read({}) write({}) move({}) del({}) get({}) upget({}) uadmin({})" t = "/{}: read({}) write({}) move({}) del({}) dots({}) get({}) upGet({}) uadmin({})"
for k, zv in self.vfs.all_vols.items(): for k, zv in self.vfs.all_vols.items():
vc = zv.axs vc = zv.axs
vs = [ vs = [
@@ -1824,6 +1870,7 @@ class AuthSrv(object):
vc.uwrite, vc.uwrite,
vc.umove, vc.umove,
vc.udel, vc.udel,
vc.udot,
vc.uget, vc.uget,
vc.upget, vc.upget,
vc.uhtml, vc.uhtml,
@@ -1966,6 +2013,7 @@ class AuthSrv(object):
"w": "uwrite", "w": "uwrite",
"m": "umove", "m": "umove",
"d": "udel", "d": "udel",
".": "udot",
"g": "uget", "g": "uget",
"G": "upget", "G": "upget",
"h": "uhtml", "h": "uhtml",
@@ -2172,7 +2220,7 @@ def upgrade_cfg_fmt(
else: else:
sn = sn.replace(",", ", ") sn = sn.replace(",", ", ")
ret.append(" " + sn) ret.append(" " + sn)
elif sn[:1] in "rwmdgGha": elif sn[:1] in "rwmdgGhaA.":
if cat != catx: if cat != catx:
cat = catx cat = catx
ret.append(cat) ret.append(cat)
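The new permission-alias expansion in `_read_vol_str` (`"h"` implies `"g"`, `"G"` implies `"g"`, and `"A"` expands to the full `rwmda.` set) can be sketched as a standalone function; `expand_perms` is an illustrative name, not copyparty's own, but the expansion logic mirrors the loop added in the diff:

```python
def expand_perms(lvl: str) -> str:
    """Expand shorthand permission flags into the permissions they imply."""
    for alias, mapping in [
        ("h", "gh"),       # html access implies get
        ("G", "gG"),       # upget implies get
        ("A", "rwmda.A"),  # "A" grants read/write/move/delete/admin/dotfiles
    ]:
        # collect the implied flags that are not already present...
        expanded = ""
        for ch in mapping:
            if ch not in lvl:
                expanded += ch
        # ...and splice them in where the alias appears (no-op if absent)
        lvl = lvl.replace(alias, expanded + alias)
    return lvl
```

For example, a volume permset of `A` ends up granting `rwmda.A`, so a single character in a `-v` argument covers every permission class.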

View File

@@ -43,6 +43,10 @@ def open(p: str, *a, **ka) -> int:
     return os.open(fsenc(p), *a, **ka)

+def readlink(p: str) -> str:
+    return fsdec(os.readlink(fsenc(p)))
+
 def rename(src: str, dst: str) -> None:
     return os.rename(fsenc(src), fsenc(dst))

View File

@@ -46,8 +46,8 @@ class BrokerMp(object):
         self.num_workers = self.args.j or CORES
         self.log("broker", "booting {} subprocesses".format(self.num_workers))
         for n in range(1, self.num_workers + 1):
-            q_pend: queue.Queue[tuple[int, str, list[Any]]] = mp.Queue(1)
-            q_yield: queue.Queue[tuple[int, str, list[Any]]] = mp.Queue(64)
+            q_pend: queue.Queue[tuple[int, str, list[Any]]] = mp.Queue(1)  # type: ignore
+            q_yield: queue.Queue[tuple[int, str, list[Any]]] = mp.Queue(64)  # type: ignore

             proc = MProcess(q_pend, q_yield, MpWorker, (q_pend, q_yield, self.args, n))
             Daemon(self.collector, "mp-sink-{}".format(n), (proc,))

View File

@@ -76,7 +76,7 @@ class MpWorker(BrokerCli):
         pass

     def logw(self, msg: str, c: Union[int, str] = 0) -> None:
-        self.log("mp{}".format(self.n), msg, c)
+        self.log("mp%d" % (self.n,), msg, c)

     def main(self) -> None:
         while True:

View File

@@ -9,6 +9,9 @@ onedash = set(zs.split())
 def vf_bmap() -> dict[str, str]:
     """argv-to-volflag: simple bools"""
     ret = {
+        "dav_auth": "davauth",
+        "dav_rt": "davrt",
+        "ed": "dots",
         "never_symlink": "neversymlink",
         "no_dedup": "copydupes",
         "no_dupe": "nodupe",

@@ -18,8 +21,6 @@ def vf_bmap() -> dict[str, str]:
         "no_vthumb": "dvthumb",
         "no_athumb": "dathumb",
         "th_no_crop": "nocrop",
-        "dav_auth": "davauth",
-        "dav_rt": "davrt",
     }
     for k in (
         "dotsrch",

@@ -61,6 +62,7 @@ def vf_vmap() -> dict[str, str]:
         "lg_sbf",
         "md_sbf",
         "nrand",
+        "rm_retry",
         "sort",
         "unlist",
         "u2ts",

@@ -98,10 +100,12 @@ permdescs = {
     "w": 'write; upload files; need "r" to see the uploads',
     "m": 'move; move files and folders; need "w" at destination',
     "d": "delete; permanently delete files and folders",
+    ".": "dots; user can ask to show dotfiles in listings",
     "g": "get; download files, but cannot see folder contents",
     "G": 'upget; same as "g" but can see filekeys of their own uploads',
     "h": 'html; same as "g" but folders return their index.html',
     "a": "admin; can see uploader IPs, config-reload",
+    "A": "all; same as 'rwmda.' (read/write/move/delete/dotfiles)",
 }

@@ -202,8 +206,10 @@ flagcats = {
         "nohtml": "return html and markdown as text/html",
     },
     "others": {
+        "dots": "allow all users with read-access to\nenable the option to show dotfiles in listings",
         "fk=8": 'generates per-file accesskeys,\nwhich are then required at the "g" permission;\nkeys are invalidated if filesize or inode changes',
         "fka=8": 'generates slightly weaker per-file accesskeys,\nwhich are then required at the "g" permission;\nnot affected by filesize or inode numbers',
+        "rm_retry": "ms-windows: timeout for deleting busy files",
         "davauth": "ask webdav clients to login for all folders",
         "davrt": "show lastmod time of symlink destination, not the link itself\n(note: this option is always enabled for recursive listings)",
     },

View File

@@ -15,7 +15,7 @@ from pyftpdlib.handlers import FTPHandler
 from pyftpdlib.ioloop import IOLoop
 from pyftpdlib.servers import FTPServer

-from .__init__ import ANYWIN, PY2, TYPE_CHECKING, E
+from .__init__ import PY2, TYPE_CHECKING
 from .authsrv import VFS
 from .bos import bos
 from .util import (

@@ -73,6 +73,7 @@ class FtpAuth(DummyAuthorizer):
         asrv = self.hub.asrv
         uname = "*"
         if username != "anonymous":
+            uname = ""
             for zs in (password, username):
                 zs = asrv.iacct.get(asrv.ah.hash(zs), "")
                 if zs:

@@ -88,8 +89,8 @@ class FtpAuth(DummyAuthorizer):
                 bans[ip] = bonk
                 try:
                     # only possible if multiprocessing disabled
-                    self.hub.broker.httpsrv.bans[ip] = bonk
-                    self.hub.broker.httpsrv.nban += 1
+                    self.hub.broker.httpsrv.bans[ip] = bonk  # type: ignore
+                    self.hub.broker.httpsrv.nban += 1  # type: ignore
                 except:
                     pass

@@ -132,7 +133,7 @@ class FtpFs(AbstractedFS):
         self.can_read = self.can_write = self.can_move = False
         self.can_delete = self.can_get = self.can_upget = False
-        self.can_admin = False
+        self.can_admin = self.can_dot = False

         self.listdirinfo = self.listdir
         self.chdir(".")

@@ -167,7 +168,7 @@ class FtpFs(AbstractedFS):
             if not avfs:
                 raise FSE(t.format(vpath), 1)

-            cr, cw, cm, cd, _, _, _ = avfs.can_access("", self.h.uname)
+            cr, cw, cm, cd, _, _, _, _ = avfs.can_access("", self.h.uname)
             if r and not cr or w and not cw or m and not cm or d and not cd:
                 raise FSE(t.format(vpath), 1)

@@ -243,6 +244,7 @@ class FtpFs(AbstractedFS):
             self.can_get,
             self.can_upget,
             self.can_admin,
+            self.can_dot,
         ) = avfs.can_access("", self.h.uname)

     def mkdir(self, path: str) -> None:

@@ -265,7 +267,7 @@ class FtpFs(AbstractedFS):
         vfs_ls = [x[0] for x in vfs_ls1]
         vfs_ls.extend(vfs_virt.keys())

-        if not self.args.ed:
+        if not self.can_dot:
             vfs_ls = exclude_dotfiles(vfs_ls)

         vfs_ls.sort()

@@ -404,7 +406,16 @@ class FtpHandler(FTPHandler):
             super(FtpHandler, self).__init__(conn, server, ioloop)

         cip = self.remote_ip
-        self.cli_ip = cip[7:] if cip.startswith("::ffff:") else cip
+        if cip.startswith("::ffff:"):
+            cip = cip[7:]
+
+        if self.args.ftp_ipa_re and not self.args.ftp_ipa_re.match(cip):
+            logging.warning("client rejected (--ftp-ipa): %s", cip)
+            self.connected = False
+            conn.close()
+            return
+
+        self.cli_ip = cip

         # abspath->vpath mapping to resolve log_transfer paths
         self.vfs_map: dict[str, str] = {}
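The FtpHandler change above first normalizes IPv4-mapped-IPv6 addresses (`::ffff:a.b.c.d`) to plain IPv4, then rejects the client if an `--ftp-ipa` allowlist regex is configured and does not match. A self-contained sketch of that check; the helper names and the example pattern are illustrative:

```python
import re

def client_ip(cip: str) -> str:
    # strip the IPv4-mapped-IPv6 prefix, as the handler does
    return cip[7:] if cip.startswith("::ffff:") else cip

def allowed(cip: str, ipa_re) -> bool:
    # no pattern configured means everyone is allowed;
    # otherwise the (normalized) address must match the allowlist
    return not ipa_re or bool(ipa_re.match(client_ip(cip)))
```

Normalizing before matching matters: without it, a LAN pattern such as `10\.` would fail to match the same client when it connects over a dual-stack socket as `::ffff:10.x.y.z`.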

View File

@@ -37,6 +37,8 @@ from .star import StreamTar
 from .sutil import StreamArc, gfilter
 from .szip import StreamZip
 from .util import (
+    APPLESAN_RE,
+    BITNESS,
     HTTPCODE,
     META_NOBOTS,
     UTC,

@@ -45,6 +47,7 @@ from .util import (
     ODict,
     Pebkac,
     UnrecvEOF,
+    WrongPostKey,
     absreal,
     alltrace,
     atomic_move,

@@ -81,11 +84,12 @@ from .util import (
     sendfile_py,
     undot,
     unescape_cookie,
-    unquote,
+    unquote,  # type: ignore
     unquotep,
     vjoin,
     vol_san,
     vsplit,
+    wunlink,
     yieldfile,
 )

@@ -111,7 +115,7 @@ class HttpCli(object):
         self.t0 = time.time()
         self.conn = conn
-        self.mutex = conn.mutex  # mypy404
+        self.u2mutex = conn.u2mutex  # mypy404
         self.s = conn.s
         self.sr = conn.sr
         self.ip = conn.addr[0]

@@ -135,7 +139,7 @@ class HttpCli(object):
         self.headers: dict[str, str] = {}
         self.mode = " "
         self.req = " "
-        self.http_ver = " "
+        self.http_ver = ""
         self.hint = ""
         self.host = " "
         self.ua = " "

@@ -154,10 +158,6 @@ class HttpCli(object):
         self.pw = " "
         self.rvol = [" "]
         self.wvol = [" "]
-        self.mvol = [" "]
-        self.dvol = [" "]
-        self.gvol = [" "]
-        self.upvol = [" "]
         self.avol = [" "]
         self.do_log = True
         self.can_read = False

@@ -167,6 +167,7 @@ class HttpCli(object):
         self.can_get = False
         self.can_upget = False
         self.can_admin = False
+        self.can_dot = False
         self.out_headerlist: list[tuple[str, str]] = []
         self.out_headers: dict[str, str] = {}
         self.html_head = " "

@@ -192,7 +193,7 @@ class HttpCli(object):
     def unpwd(self, m: Match[str]) -> str:
         a, b, c = m.groups()
-        return "{}\033[7m {} \033[27m{}".format(a, self.asrv.iacct[b], c)
+        return "%s\033[7m %s \033[27m%s" % (a, self.asrv.iacct[b], c)

     def _check_nonfatal(self, ex: Pebkac, post: bool) -> bool:
         if post:

@@ -236,6 +237,11 @@ class HttpCli(object):
         if self.is_banned():
             return False

+        if self.args.ipa_re and not self.args.ipa_re.match(self.conn.addr[0]):
+            self.log("client rejected (--ipa)", 3)
+            self.terse_reply(b"", 500)
+            return False
+
         try:
             self.s.settimeout(2)
             headerlines = read_header(self.sr, self.args.s_thead, self.args.s_thead)

@@ -368,22 +374,33 @@ class HttpCli(object):
         self.trailing_slash = vpath.endswith("/")
         vpath = undot(vpath)

-        zs = unquotep(arglist)
-        m = self.conn.hsrv.ptn_cc.search(zs)
-        if m:
-            hit = zs[m.span()[0] :]
-            t = "malicious user; Cc in query [{}] => [{!r}]"
-            self.log(t.format(self.req, hit), 1)
-            return False
-
+        ptn = self.conn.hsrv.ptn_cc
         for k in arglist.split("&"):
             if "=" in k:
                 k, zs = k.split("=", 1)
                 # x-www-form-urlencoded (url query part) uses
                 # either + or %20 for 0x20 so handle both
-                uparam[k.lower()] = unquotep(zs.strip().replace("+", " "))
+                sv = unquotep(zs.strip().replace("+", " "))
             else:
-                uparam[k.lower()] = ""
+                sv = ""

+            k = k.lower()
+            uparam[k] = sv
+            if k in ("doc", "move", "tree"):
+                continue
+
+            zs = "%s=%s" % (k, sv)
+            m = ptn.search(zs)
+            if not m:
+                continue
+
+            hit = zs[m.span()[0] :]
+            t = "malicious user; Cc in query [{}] => [{!r}]"
+            self.log(t.format(self.req, hit), 1)
+            self.cbonk(self.conn.hsrv.gmal, self.req, "cc_q", "Cc in query")
+            self.terse_reply(b"", 500)
+            return False

         if self.is_vproxied:
             if vpath.startswith(self.args.R):

@@ -422,7 +439,7 @@ class HttpCli(object):
         if relchk(self.vpath) and (self.vpath != "*" or self.mode != "OPTIONS"):
             self.log("invalid relpath [{}]".format(self.vpath))
-            self.cbonk(self.conn.hsrv.g422, self.vpath, "bad_vp", "invalid relpaths")
+            self.cbonk(self.conn.hsrv.gmal, self.req, "bad_vp", "invalid relpaths")
             return self.tx_404() and self.keepalive

         zso = self.headers.get("authorization")

@@ -439,14 +456,18 @@ class HttpCli(object):
             except:
                 pass

-        self.pw = uparam.get("pw") or self.headers.get("pw") or bauth or cookie_pw
-        self.uname = self.asrv.iacct.get(self.asrv.ah.hash(self.pw)) or "*"
+        if self.args.idp_h_usr:
+            self.pw = ""
+            self.uname = self.headers.get(self.args.idp_h_usr) or "*"
+            if self.uname not in self.asrv.vfs.aread:
+                self.log("unknown username: [%s]" % (self.uname), 1)
+                self.uname = "*"
+        else:
+            self.pw = uparam.get("pw") or self.headers.get("pw") or bauth or cookie_pw
+            self.uname = self.asrv.iacct.get(self.asrv.ah.hash(self.pw)) or "*"

         self.rvol = self.asrv.vfs.aread[self.uname]
         self.wvol = self.asrv.vfs.awrite[self.uname]
-        self.mvol = self.asrv.vfs.amove[self.uname]
-        self.dvol = self.asrv.vfs.adel[self.uname]
-        self.gvol = self.asrv.vfs.aget[self.uname]
-        self.upvol = self.asrv.vfs.apget[self.uname]
         self.avol = self.asrv.vfs.aadmin[self.uname]

         if self.pw and (

@@ -479,8 +500,9 @@ class HttpCli(object):
             self.can_get,
             self.can_upget,
             self.can_admin,
+            self.can_dot,
         ) = (
-            avn.can_access("", self.uname) if avn else [False] * 7
+            avn.can_access("", self.uname) if avn else [False] * 8
         )
         self.avn = avn
         self.vn = vn

@@ -530,6 +552,7 @@ class HttpCli(object):
             try:
                 if pex.code == 999:
+                    self.terse_reply(b"", 500)
                     return False

                 post = self.mode in ["POST", "PUT"] or "content-length" in self.headers

@@ -537,16 +560,16 @@ class HttpCli(object):
                     self.keepalive = False

                 em = str(ex)
-                msg = em if pex == ex else min_ex()
+                msg = em if pex is ex else min_ex()

                 if pex.code != 404 or self.do_log:
                     self.log(
-                        "{}\033[0m, {}".format(msg, self.vpath),
+                        "%s\033[0m, %s" % (msg, self.vpath),
                         6 if em.startswith("client d/c ") else 3,
                     )

-                msg = "{}\r\nURL: {}\r\n".format(em, self.vpath)
+                msg = "%s\r\nURL: %s\r\n" % (em, self.vpath)
                 if self.hint:
-                    msg += "hint: {}\r\n".format(self.hint)
+                    msg += "hint: %s\r\n" % (self.hint,)

                 if "database is locked" in em:
                     self.conn.hsrv.broker.say("log_stacks")

@@ -614,8 +637,7 @@ class HttpCli(object):
             return False

         self.log("banned for {:.0f} sec".format(rt), 6)
-        zb = b"HTTP/1.0 403 Forbidden\r\n\r\nthank you for playing"
-        self.s.sendall(zb)
+        self.terse_reply(b"thank you for playing", 403)
         return True

     def permit_caching(self) -> None:

@@ -669,6 +691,7 @@ class HttpCli(object):
                 hit = zs[m.span()[0] :]
                 t = "malicious user; Cc in out-hdr {!r} => [{!r}]"
                 self.log(t.format(zs, hit), 1)
+                self.cbonk(self.conn.hsrv.gmal, zs, "cc_hdr", "Cc in out-hdr")
                 raise Pebkac(999)

         try:

@@ -745,6 +768,19 @@ class HttpCli(object):
             self.log(body.rstrip())
         self.reply(body.encode("utf-8") + b"\r\n", *list(args), **kwargs)

+    def terse_reply(self, body: bytes, status: int = 200) -> None:
+        self.keepalive = False
+
+        lines = [
+            "%s %s %s" % (self.http_ver or "HTTP/1.1", status, HTTPCODE[status]),
+            "Connection: Close",
+        ]
+
+        if body:
+            lines.append("Content-Length: " + unicode(len(body)))
+
+        self.s.sendall("\r\n".join(lines).encode("utf-8") + b"\r\n\r\n" + body)
+
     def urlq(self, add: dict[str, str], rm: list[str]) -> str:
         """
         generates url query based on uparam (b, pw, all others)

@@ -776,7 +812,7 @@ class HttpCli(object):
             if k in skip:
                 continue

-            t = "{}={}".format(quotep(k), quotep(v))
+            t = "%s=%s" % (quotep(k), quotep(v))
             ret.append(t.replace(" ", "+").rstrip("="))

         if not ret:

@@ -824,21 +860,22 @@ class HttpCli(object):
         oh = self.out_headers
         origin = origin.lower()
         good_origins = self.args.acao + [
-            "{}://{}".format(
+            "%s://%s"
+            % (
                 "https" if self.is_https else "http",
                 self.host.lower().split(":")[0],
             )
         ]

-        if re.sub(r"(:[0-9]{1,5})?/?$", "", origin) in good_origins:
+        if "pw" in ih or re.sub(r"(:[0-9]{1,5})?/?$", "", origin) in good_origins:
             good_origin = True
             bad_hdrs = ("",)
         else:
             good_origin = False
             bad_hdrs = ("", "pw")

-        # '*' blocks all credentials (cookies, http-auth);
+        # '*' blocks auth through cookies / WWW-Authenticate;
         # exact-match for Origin is necessary to unlock those,
-        # however yolo-requests (?pw=) are always allowed
+        # but the ?pw= param and PW: header are always allowed

         acah = ih.get("access-control-request-headers", "")
         acao = (origin if good_origin else None) or (
             "*" if "*" in good_origins else None

@@ -888,7 +925,11 @@ class HttpCli(object):
             return self.tx_ico(self.vpath.split("/")[-1], exact=True)

         if self.vpath.startswith(".cpr/ssdp"):
-            return self.conn.hsrv.ssdp.reply(self)
+            if self.conn.hsrv.ssdp:
+                return self.conn.hsrv.ssdp.reply(self)
+            else:
+                self.reply(b"ssdp is disabled in server config", 404)
+                return False

         if self.vpath.startswith(".cpr/dd/") and self.args.mpmc:
             if self.args.mpmc == ".":

@@ -910,6 +951,7 @@ class HttpCli(object):
             if not static_path.startswith(path_base):
                 t = "malicious user; attempted path traversal [{}] => [{}]"
                 self.log(t.format(self.vpath, static_path), 1)
+                self.cbonk(self.conn.hsrv.gmal, self.req, "trav", "path traversal")
                 self.tx_404()
                 return False

@@ -1016,7 +1058,7 @@ class HttpCli(object):
                 self.can_read = self.can_write = self.can_get = False

         if not self.can_read and not self.can_write and not self.can_get:
-            self.log("inaccessible: [{}]".format(self.vpath))
+            self.log("inaccessible: [%s]" % (self.vpath,))
             raise Pebkac(401, "authenticate")

         from .dxml import parse_xml

@@ -1088,7 +1130,6 @@ class HttpCli(object):
                 rem,
                 set(),
                 self.uname,
-                self.args.ed,
                 True,
                 not self.args.no_scandir,
                 wrap=False,

@@ -1102,7 +1143,7 @@ class HttpCli(object):
                 [[True, False]],
                 lstat="davrt" not in vn.flags,
             )
-            if not self.args.ed:
+            if not self.can_dot:
                 names = set(exclude_dotfiles([x[0] for x in vfs_ls]))
                 vfs_ls = [x for x in vfs_ls if x[0] in names]

@@ -1346,8 +1387,7 @@ class HttpCli(object):
             return False

         vp = "/" + self.vpath
-        ptn = r"/\.(_|DS_Store|Spotlight-|fseventsd|Trashes|AppleDouble)|/__MACOS"
-        if re.search(ptn, vp):
+        if re.search(APPLESAN_RE, vp):
             zt = '<?xml version="1.0" encoding="utf-8"?>\n<D:error xmlns:D="DAV:"><D:lock-token-submitted><D:href>{}</D:href></D:lock-token-submitted></D:error>'
             zb = zt.format(vp).encode("utf-8", "replace")
             self.reply(zb, 423, "text/xml; charset=utf-8")

@@ -1367,7 +1407,7 @@ class HttpCli(object):
         if txt and len(txt) == orig_len:
             raise Pebkac(500, "chunk slicing failed")

-        buf = "{:x}\r\n".format(len(buf)).encode(enc) + buf
+        buf = ("%x\r\n" % (len(buf),)).encode(enc) + buf
         self.s.sendall(buf + b"\r\n")
         return txt

@@ -1653,7 +1693,7 @@ class HttpCli(object):
                 and bos.path.getmtime(path) >= time.time() - self.args.blank_wt
             ):
                 # small toctou, but better than clobbering a hardlink
-                bos.unlink(path)
+                wunlink(self.log, path, vfs.flags)

             with ren_open(fn, *open_a, **params) as zfw:
                 f, fn = zfw["orz"]

@@ -1667,7 +1707,7 @@ class HttpCli(object):
                 lim.chk_sz(post_sz)
                 lim.chk_vsz(self.conn.hsrv.broker, vfs.realpath, post_sz)
             except:
-                bos.unlink(path)
+                wunlink(self.log, path, vfs.flags)
                 raise

         if self.args.nw:

@@ -1720,7 +1760,7 @@ class HttpCli(object):
             ):
                 t = "upload blocked by xau server config"
                 self.log(t, 1)
-                os.unlink(path)
+                wunlink(self.log, path, vfs.flags)
                 raise Pebkac(403, t)

         vfs, rem = vfs.get_dbv(rem)

@@ -1826,7 +1866,16 @@ class HttpCli(object):
         self.parser = MultipartParser(self.log, self.sr, self.headers)
         self.parser.parse()

-        act = self.parser.require("act", 64)
+        file0: list[tuple[str, Optional[str], Generator[bytes, None, None]]] = []
+        try:
+            act = self.parser.require("act", 64)
+        except WrongPostKey as ex:
+            if ex.got == "f" and ex.fname:
+                self.log("missing 'act', but looks like an upload so assuming that")
+                file0 = [(ex.got, ex.fname, ex.datagen)]
+                act = "bput"
+            else:
+                raise

         if act == "login":
             return self.handle_login()

@@ -1839,7 +1888,7 @@ class HttpCli(object):
             return self.handle_new_md()

         if act == "bput":
-            return self.handle_plain_upload()
+            return self.handle_plain_upload(file0)

         if act == "tput":
             return self.handle_text_upload()

@@ -1867,7 +1916,7 @@ class HttpCli(object):
         items = [unquotep(x) for x in items if items]

         self.parser.drop()
-        return self.tx_zip(k, v, "", vn, rem, items, self.args.ed)
+        return self.tx_zip(k, v, "", vn, rem, items)

     def handle_post_json(self) -> bool:
try: try:
@@ -1939,8 +1988,11 @@ class HttpCli(object):
except: except:
raise Pebkac(500, min_ex()) raise Pebkac(500, min_ex())
x = self.conn.hsrv.broker.ask("up2k.handle_json", body, self.u2fh.aps) # not to protect u2fh, but to prevent handshakes while files are closing
ret = x.get() with self.u2mutex:
x = self.conn.hsrv.broker.ask("up2k.handle_json", body, self.u2fh.aps)
ret = x.get()
if self.is_vproxied: if self.is_vproxied:
if "purl" in ret: if "purl" in ret:
ret["purl"] = self.args.SR + ret["purl"] ret["purl"] = self.args.SR + ret["purl"]
@@ -1953,10 +2005,10 @@ class HttpCli(object):
def handle_search(self, body: dict[str, Any]) -> bool: def handle_search(self, body: dict[str, Any]) -> bool:
idx = self.conn.get_u2idx() idx = self.conn.get_u2idx()
if not idx or not hasattr(idx, "p_end"): if not idx or not hasattr(idx, "p_end"):
raise Pebkac(500, "sqlite3 is not available on the server; cannot search") raise Pebkac(500, "server busy, or sqlite3 not available; cannot search")
vols = [] vols: list[VFS] = []
seen = {} seen: dict[VFS, bool] = {}
for vtop in self.rvol: for vtop in self.rvol:
vfs, _ = self.asrv.vfs.get(vtop, self.uname, True, False) vfs, _ = self.asrv.vfs.get(vtop, self.uname, True, False)
vfs = vfs.dbv or vfs vfs = vfs.dbv or vfs
@@ -1964,7 +2016,7 @@ class HttpCli(object):
continue continue
seen[vfs] = True seen[vfs] = True
vols.append((vfs.vpath, vfs.realpath, vfs.flags)) vols.append(vfs)
t0 = time.time() t0 = time.time()
if idx.p_end: if idx.p_end:
@@ -1979,7 +2031,7 @@ class HttpCli(object):
vbody = copy.deepcopy(body) vbody = copy.deepcopy(body)
vbody["hash"] = len(vbody["hash"]) vbody["hash"] = len(vbody["hash"])
self.log("qj: " + repr(vbody)) self.log("qj: " + repr(vbody))
hits = idx.fsearch(vols, body) hits = idx.fsearch(self.uname, vols, body)
msg: Any = repr(hits) msg: Any = repr(hits)
taglist: list[str] = [] taglist: list[str] = []
trunc = False trunc = False
@@ -1988,7 +2040,7 @@ class HttpCli(object):
q = body["q"] q = body["q"]
n = body.get("n", self.args.srch_hits) n = body.get("n", self.args.srch_hits)
self.log("qj: {} |{}|".format(q, n)) self.log("qj: {} |{}|".format(q, n))
hits, taglist, trunc = idx.search(vols, q, n) hits, taglist, trunc = idx.search(self.uname, vols, q, n)
msg = len(hits) msg = len(hits)
idx.p_end = time.time() idx.p_end = time.time()
@@ -2045,7 +2097,7 @@ class HttpCli(object):
f = None f = None
fpool = not self.args.no_fpool and sprs fpool = not self.args.no_fpool and sprs
if fpool: if fpool:
with self.mutex: with self.u2mutex:
try: try:
f = self.u2fh.pop(path) f = self.u2fh.pop(path)
except: except:
@@ -2088,7 +2140,7 @@ class HttpCli(object):
if not fpool: if not fpool:
f.close() f.close()
else: else:
with self.mutex: with self.u2mutex:
self.u2fh.put(path, f) self.u2fh.put(path, f)
except: except:
# maybe busted handle (eg. disk went full) # maybe busted handle (eg. disk went full)
@@ -2107,7 +2159,7 @@ class HttpCli(object):
return False return False
if not num_left and fpool: if not num_left and fpool:
with self.mutex: with self.u2mutex:
self.u2fh.close(path) self.u2fh.close(path)
if not num_left and not self.args.nw: if not num_left and not self.args.nw:
@@ -2161,17 +2213,17 @@ class HttpCli(object):
msg = "naw dude" msg = "naw dude"
pwd = "x" # nosec pwd = "x" # nosec
dur = None dur = 0
if pwd == "x": if pwd == "x":
# reset both plaintext and tls # reset both plaintext and tls
# (only affects active tls cookies when tls) # (only affects active tls cookies when tls)
for k in ("cppwd", "cppws") if self.is_https else ("cppwd",): for k in ("cppwd", "cppws") if self.is_https else ("cppwd",):
ck = gencookie(k, pwd, self.args.R, False, dur) ck = gencookie(k, pwd, self.args.R, False)
self.out_headerlist.append(("Set-Cookie", ck)) self.out_headerlist.append(("Set-Cookie", ck))
else: else:
k = "cppws" if self.is_https else "cppwd" k = "cppws" if self.is_https else "cppwd"
ck = gencookie(k, pwd, self.args.R, self.is_https, dur) ck = gencookie(k, pwd, self.args.R, self.is_https, dur, "; HttpOnly")
self.out_headerlist.append(("Set-Cookie", ck)) self.out_headerlist.append(("Set-Cookie", ck))
return msg return msg
@@ -2278,7 +2330,9 @@ class HttpCli(object):
vfs.flags.get("xau") or [], vfs.flags.get("xau") or [],
) )
def handle_plain_upload(self) -> bool: def handle_plain_upload(
self, file0: list[tuple[str, Optional[str], Generator[bytes, None, None]]]
) -> bool:
assert self.parser assert self.parser
nullwrite = self.args.nw nullwrite = self.args.nw
vfs, rem = self.asrv.vfs.get(self.vpath, self.uname, False, True) vfs, rem = self.asrv.vfs.get(self.vpath, self.uname, False, True)
@@ -2300,11 +2354,13 @@ class HttpCli(object):
files: list[tuple[int, str, str, str, str, str]] = [] files: list[tuple[int, str, str, str, str, str]] = []
# sz, sha_hex, sha_b64, p_file, fname, abspath # sz, sha_hex, sha_b64, p_file, fname, abspath
errmsg = "" errmsg = ""
tabspath = ""
dip = self.dip() dip = self.dip()
t0 = time.time() t0 = time.time()
try: try:
assert self.parser.gen assert self.parser.gen
for nfile, (p_field, p_file, p_data) in enumerate(self.parser.gen): gens = itertools.chain(file0, self.parser.gen)
for nfile, (p_field, p_file, p_data) in enumerate(gens):
if not p_file: if not p_file:
self.log("discarding incoming file without filename") self.log("discarding incoming file without filename")
# fallthrough # fallthrough
@@ -2388,14 +2444,16 @@ class HttpCli(object):
lim.chk_nup(self.ip) lim.chk_nup(self.ip)
except: except:
if not nullwrite: if not nullwrite:
bos.unlink(tabspath) wunlink(self.log, tabspath, vfs.flags)
bos.unlink(abspath) wunlink(self.log, abspath, vfs.flags)
fname = os.devnull fname = os.devnull
raise raise
if not nullwrite: if not nullwrite:
atomic_move(tabspath, abspath) atomic_move(tabspath, abspath)
tabspath = ""
files.append( files.append(
(sz, sha_hex, sha_b64, p_file or "(discarded)", fname, abspath) (sz, sha_hex, sha_b64, p_file or "(discarded)", fname, abspath)
) )
@@ -2415,7 +2473,7 @@ class HttpCli(object):
): ):
t = "upload blocked by xau server config" t = "upload blocked by xau server config"
self.log(t, 1) self.log(t, 1)
os.unlink(abspath) wunlink(self.log, abspath, vfs.flags)
raise Pebkac(403, t) raise Pebkac(403, t)
dbv, vrem = vfs.get_dbv(rem) dbv, vrem = vfs.get_dbv(rem)
@@ -2441,6 +2499,12 @@ class HttpCli(object):
errmsg = vol_san( errmsg = vol_san(
list(self.asrv.vfs.all_vols.values()), unicode(ex).encode("utf-8") list(self.asrv.vfs.all_vols.values()), unicode(ex).encode("utf-8")
).decode("utf-8") ).decode("utf-8")
try:
got = bos.path.getsize(tabspath)
t = "connection lost after receiving %s of the file"
self.log(t % (humansize(got),), 3)
except:
pass
td = max(0.1, time.time() - t0) td = max(0.1, time.time() - t0)
sz_total = sum(x[0] for x in files) sz_total = sum(x[0] for x in files)
@@ -2653,7 +2717,7 @@ class HttpCli(object):
raise Pebkac(403, t) raise Pebkac(403, t)
if bos.path.exists(fp): if bos.path.exists(fp):
bos.unlink(fp) wunlink(self.log, fp, vfs.flags)
with open(fsenc(fp), "wb", 512 * 1024) as f: with open(fsenc(fp), "wb", 512 * 1024) as f:
sz, sha512, _ = hashcopy(p_data, f, self.args.s_wr_slp) sz, sha512, _ = hashcopy(p_data, f, self.args.s_wr_slp)
@@ -2665,7 +2729,7 @@ class HttpCli(object):
lim.chk_sz(sz) lim.chk_sz(sz)
lim.chk_vsz(self.conn.hsrv.broker, vfs.realpath, sz) lim.chk_vsz(self.conn.hsrv.broker, vfs.realpath, sz)
except: except:
bos.unlink(fp) wunlink(self.log, fp, vfs.flags)
raise raise
new_lastmod = bos.stat(fp).st_mtime new_lastmod = bos.stat(fp).st_mtime
@@ -2688,7 +2752,7 @@ class HttpCli(object):
): ):
t = "save blocked by xau server config" t = "save blocked by xau server config"
self.log(t, 1) self.log(t, 1)
os.unlink(fp) wunlink(self.log, fp, vfs.flags)
raise Pebkac(403, t) raise Pebkac(403, t)
vfs, rem = vfs.get_dbv(rem) vfs, rem = vfs.get_dbv(rem)
@@ -2900,9 +2964,11 @@ class HttpCli(object):
# 512 kB is optimal for huge files, use 64k # 512 kB is optimal for huge files, use 64k
open_args = [fsenc(fs_path), "rb", 64 * 1024] open_args = [fsenc(fs_path), "rb", 64 * 1024]
use_sendfile = ( use_sendfile = (
not self.tls # # fmt: off
not self.tls
and not self.args.no_sendfile and not self.args.no_sendfile
and hasattr(os, "sendfile") and (BITNESS > 32 or file_sz < 0x7fffFFFF)
# fmt: on
) )
# #
@@ -2924,18 +2990,19 @@ class HttpCli(object):
mime = "text/plain; charset=utf-8" mime = "text/plain; charset=utf-8"
self.out_headers["Accept-Ranges"] = "bytes" self.out_headers["Accept-Ranges"] = "bytes"
self.send_headers(length=upper - lower, status=status, mime=mime)
logmsg += unicode(status) + logtail logmsg += unicode(status) + logtail
if self.mode == "HEAD" or not do_send: if self.mode == "HEAD" or not do_send:
if self.do_log: if self.do_log:
self.log(logmsg) self.log(logmsg)
self.send_headers(length=upper - lower, status=status, mime=mime)
return True return True
ret = True ret = True
with open_func(*open_args) as f: with open_func(*open_args) as f:
self.send_headers(length=upper - lower, status=status, mime=mime)
sendfun = sendfile_kern if use_sendfile else sendfile_py sendfun = sendfile_kern if use_sendfile else sendfile_py
remains = sendfun( remains = sendfun(
self.log, lower, upper, f, self.s, self.args.s_wr_sz, self.args.s_wr_slp self.log, lower, upper, f, self.s, self.args.s_wr_sz, self.args.s_wr_slp
@@ -2943,7 +3010,7 @@ class HttpCli(object):
if remains > 0: if remains > 0:
logmsg += " \033[31m" + unicode(upper - remains) + "\033[0m" logmsg += " \033[31m" + unicode(upper - remains) + "\033[0m"
self.keepalive = False ret = False
spd = self._spd((upper - lower) - remains) spd = self._spd((upper - lower) - remains)
if self.do_log: if self.do_log:
@@ -2959,7 +3026,6 @@ class HttpCli(object):
vn: VFS, vn: VFS,
rem: str, rem: str,
items: list[str], items: list[str],
dots: bool,
) -> bool: ) -> bool:
if self.args.no_zip: if self.args.no_zip:
raise Pebkac(400, "not enabled") raise Pebkac(400, "not enabled")
@@ -3016,7 +3082,7 @@ class HttpCli(object):
self.send_headers(None, mime=mime, headers={"Content-Disposition": cdis}) self.send_headers(None, mime=mime, headers={"Content-Disposition": cdis})
fgen = vn.zipgen( fgen = vn.zipgen(
vpath, rem, set(items), self.uname, dots, False, not self.args.no_scandir vpath, rem, set(items), self.uname, False, not self.args.no_scandir
) )
# for f in fgen: print(repr({k: f[k] for k in ["vp", "ap"]})) # for f in fgen: print(repr({k: f[k] for k in ["vp", "ap"]}))
cfmt = "" cfmt = ""
@@ -3299,7 +3365,7 @@ class HttpCli(object):
if v == "y": if v == "y":
dur = 86400 * 299 dur = 86400 * 299
else: else:
dur = None dur = 0
v = "x" v = "x"
ck = gencookie("k304", v, self.args.R, False, dur) ck = gencookie("k304", v, self.args.R, False, dur)
@@ -3309,7 +3375,7 @@ class HttpCli(object):
def setck(self) -> bool: def setck(self) -> bool:
k, v = self.uparam["setck"].split("=", 1) k, v = self.uparam["setck"].split("=", 1)
t = None if v == "" else 86400 * 299 t = 0 if v == "" else 86400 * 299
ck = gencookie(k, v, self.args.R, False, t) ck = gencookie(k, v, self.args.R, False, t)
self.out_headerlist.append(("Set-Cookie", ck)) self.out_headerlist.append(("Set-Cookie", ck))
self.reply(b"o7\n") self.reply(b"o7\n")
@@ -3317,7 +3383,7 @@ class HttpCli(object):
def set_cfg_reset(self) -> bool: def set_cfg_reset(self) -> bool:
for k in ("k304", "js", "idxh", "cppwd", "cppws"): for k in ("k304", "js", "idxh", "cppwd", "cppws"):
cookie = gencookie(k, "x", self.args.R, False, None) cookie = gencookie(k, "x", self.args.R, False)
self.out_headerlist.append(("Set-Cookie", cookie)) self.out_headerlist.append(("Set-Cookie", cookie))
self.redirect("", "?h#cc") self.redirect("", "?h#cc")
@@ -3327,11 +3393,19 @@ class HttpCli(object):
rc = 404 rc = 404
if self.args.vague_403: if self.args.vague_403:
t = '<h1 id="n">404 not found &nbsp;┐( ´ -`)┌</h1><p id="o">or maybe you don\'t have access -- try logging in or <a href="{}/?h">go home</a></p>' t = '<h1 id="n">404 not found &nbsp;┐( ´ -`)┌</h1><p id="o">or maybe you don\'t have access -- try logging in or <a href="{}/?h">go home</a></p>'
pt = "404 not found ┐( ´ -`)┌ (or maybe you don't have access -- try logging in)"
elif is_403: elif is_403:
t = '<h1 id="p">403 forbiddena &nbsp;~┻━┻</h1><p id="q">you\'ll have to log in or <a href="{}/?h">go home</a></p>' t = '<h1 id="p">403 forbiddena &nbsp;~┻━┻</h1><p id="q">you\'ll have to log in or <a href="{}/?h">go home</a></p>'
pt = "403 forbiddena ~┻━┻ (you'll have to log in)"
rc = 403 rc = 403
else: else:
t = '<h1 id="n">404 not found &nbsp;┐( ´ -`)┌</h1><p><a id="r" href="{}/?h">go home</a></p>' t = '<h1 id="n">404 not found &nbsp;┐( ´ -`)┌</h1><p><a id="r" href="{}/?h">go home</a></p>'
pt = "404 not found ┐( ´ -`)┌"
if self.ua.startswith("curl/") or self.ua.startswith("fetch"):
pt = "# acct: %s\n%s\n" % (self.uname, pt)
self.reply(pt.encode("utf-8"), status=rc)
return True
t = t.format(self.args.SR) t = t.format(self.args.SR)
qv = quotep(self.vpaths) + self.ourlq() qv = quotep(self.vpaths) + self.ourlq()
@@ -3430,6 +3504,7 @@ class HttpCli(object):
ret["k" + quotep(excl)] = sub ret["k" + quotep(excl)] = sub
vfs = self.asrv.vfs vfs = self.asrv.vfs
dots = False
try: try:
vn, rem = vfs.get(top, self.uname, True, False) vn, rem = vfs.get(top, self.uname, True, False)
fsroot, vfs_ls, vfs_virt = vn.ls( fsroot, vfs_ls, vfs_virt = vn.ls(
@@ -3438,6 +3513,7 @@ class HttpCli(object):
not self.args.no_scandir, not self.args.no_scandir,
[[True, False], [False, True]], [[True, False], [False, True]],
) )
dots = self.uname in vn.axs.udot
except: except:
vfs_ls = [] vfs_ls = []
vfs_virt = {} vfs_virt = {}
@@ -3446,15 +3522,12 @@ class HttpCli(object):
if d1 == top: if d1 == top:
vfs_virt[d2] = vfs # typechk, value never read vfs_virt[d2] = vfs # typechk, value never read
dirs = [] dirs = [x[0] for x in vfs_ls if stat.S_ISDIR(x[1].st_mode)]
dirnames = [x[0] for x in vfs_ls if stat.S_ISDIR(x[1].st_mode)] if not dots or "dots" not in self.uparam:
dirs = exclude_dotfiles(dirs)
if not self.args.ed or "dots" not in self.uparam: dirs = [quotep(x) for x in dirs if x != excl]
dirnames = exclude_dotfiles(dirnames)
for fn in [x for x in dirnames if x != excl]:
dirs.append(quotep(fn))
for x in vfs_virt: for x in vfs_virt:
if x != excl: if x != excl:
@@ -3486,7 +3559,8 @@ class HttpCli(object):
fk_vols = { fk_vols = {
vol: (vol.flags["fk"], 2 if "fka" in vol.flags else 1) vol: (vol.flags["fk"], 2 if "fka" in vol.flags else 1)
for vp, vol in self.asrv.vfs.all_vols.items() for vp, vol in self.asrv.vfs.all_vols.items()
if "fk" in vol.flags and (vp in self.rvol or vp in self.upvol) if "fk" in vol.flags
and (self.uname in vol.axs.uread or self.uname in vol.axs.upget)
} }
for vol in self.asrv.vfs.all_vols.values(): for vol in self.asrv.vfs.all_vols.values():
cur = idx.get_cur(vol.realpath) cur = idx.get_cur(vol.realpath)
@@ -3757,7 +3831,7 @@ class HttpCli(object):
elif self.can_get and self.avn: elif self.can_get and self.avn:
axs = self.avn.axs axs = self.avn.axs
if self.uname not in axs.uhtml and "*" not in axs.uhtml: if self.uname not in axs.uhtml:
pass pass
elif is_dir: elif is_dir:
for fn in ("index.htm", "index.html"): for fn in ("index.htm", "index.html"):
@@ -3797,7 +3871,8 @@ class HttpCli(object):
)[: vn.flags["fk"]] )[: vn.flags["fk"]]
got = self.uparam.get("k") got = self.uparam.get("k")
if got != correct: if got != correct:
self.log("wrong filekey, want {}, got {}".format(correct, got)) t = "wrong filekey, want %s, got %s\n vp: %s\n ap: %s"
self.log(t % (correct, got, self.req, abspath), 6)
return self.tx_404() return self.tx_404()
if ( if (
@@ -3929,6 +4004,7 @@ class HttpCli(object):
"dsort": vf["sort"], "dsort": vf["sort"],
"themes": self.args.themes, "themes": self.args.themes,
"turbolvl": self.args.turbo, "turbolvl": self.args.turbo,
"u2j": self.args.u2j,
"idxh": int(self.args.ih), "idxh": int(self.args.ih),
"u2sort": self.args.u2sort, "u2sort": self.args.u2sort,
} }
@@ -3978,7 +4054,7 @@ class HttpCli(object):
for k in ["zip", "tar"]: for k in ["zip", "tar"]:
v = self.uparam.get(k) v = self.uparam.get(k)
if v is not None: if v is not None:
return self.tx_zip(k, v, self.vpath, vn, rem, [], self.args.ed) return self.tx_zip(k, v, self.vpath, vn, rem, [])
fsroot, vfs_ls, vfs_virt = vn.ls( fsroot, vfs_ls, vfs_virt = vn.ls(
rem, rem,
@@ -4009,13 +4085,13 @@ class HttpCli(object):
pass pass
# show dotfiles if permitted and requested # show dotfiles if permitted and requested
if not self.args.ed or ( if not self.can_dot or (
"dots" not in self.uparam and (is_ls or "dots" not in self.cookies) "dots" not in self.uparam and (is_ls or "dots" not in self.cookies)
): ):
ls_names = exclude_dotfiles(ls_names) ls_names = exclude_dotfiles(ls_names)
add_fk = vn.flags.get("fk") add_fk = vf.get("fk")
fk_alg = 2 if "fka" in vn.flags else 1 fk_alg = 2 if "fka" in vf else 1
dirs = [] dirs = []
files = [] files = []
@@ -4172,7 +4248,7 @@ class HttpCli(object):
if icur: if icur:
lmte = list(mte) lmte = list(mte)
if self.can_admin: if self.can_admin:
lmte += ["up_ip", ".up_at"] lmte.extend(("up_ip", ".up_at"))
taglist = [k for k in lmte if k in tagset] taglist = [k for k in lmte if k in tagset]
for fe in dirs: for fe in dirs:
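Throughout this file, bare `bos.unlink` / `os.unlink` calls are replaced by `wunlink(self.log, path, vfs.flags)`. As a rough sketch (the retry policy and signature details here are assumptions, not copyparty's actual implementation), a logging unlink wrapper that tolerates transient filesystem errors could look like:

```python
import os
import time

def wunlink_sketch(log, abspath, flags, tries=3, delay=0.1):
    # best-effort delete: log and retry on transient errors instead of
    # raising immediately (hypothetical stand-in for copyparty's wunlink;
    # flags is unused in this sketch)
    for n in range(tries):
        try:
            os.unlink(abspath)
            return True
        except OSError as ex:
            log("unlink failed (%d/%d): %r" % (n + 1, tries, ex))
            time.sleep(delay)
    return False
```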


@@ -50,7 +50,7 @@ class HttpConn(object):
self.addr = addr self.addr = addr
self.hsrv = hsrv self.hsrv = hsrv
self.mutex: threading.Lock = hsrv.mutex # mypy404 self.u2mutex: threading.Lock = hsrv.u2mutex # mypy404
self.args: argparse.Namespace = hsrv.args # mypy404 self.args: argparse.Namespace = hsrv.args # mypy404
self.E: EnvParams = self.args.E self.E: EnvParams = self.args.E
self.asrv: AuthSrv = hsrv.asrv # mypy404 self.asrv: AuthSrv = hsrv.asrv # mypy404
@@ -93,7 +93,7 @@ class HttpConn(object):
self.rproxy = ip self.rproxy = ip
self.ip = ip self.ip = ip
self.log_src = "{} \033[{}m{}".format(ip, color, self.addr[1]).ljust(26) self.log_src = ("%s \033[%dm%d" % (ip, color, self.addr[1])).ljust(26)
return self.log_src return self.log_src
def respath(self, res_name: str) -> str: def respath(self, res_name: str) -> str:
@@ -112,32 +112,30 @@ class HttpConn(object):
return self.u2idx return self.u2idx
def _detect_https(self) -> bool: def _detect_https(self) -> bool:
method = None try:
if True: method = self.s.recv(4, socket.MSG_PEEK)
try: except socket.timeout:
method = self.s.recv(4, socket.MSG_PEEK) return False
except socket.timeout: except AttributeError:
return False # jython does not support msg_peek; forget about https
except AttributeError: method = self.s.recv(4)
# jython does not support msg_peek; forget about https self.sr = Util.Unrecv(self.s, self.log)
method = self.s.recv(4) self.sr.buf = method
self.sr = Util.Unrecv(self.s, self.log)
self.sr.buf = method
# jython used to do this, they stopped since it's broken # jython used to do this, they stopped since it's broken
# but reimplementing sendall is out of scope for now # but reimplementing sendall is out of scope for now
if not getattr(self.s, "sendall", None): if not getattr(self.s, "sendall", None):
self.s.sendall = self.s.send # type: ignore self.s.sendall = self.s.send # type: ignore
if len(method) != 4: if len(method) != 4:
err = "need at least 4 bytes in the first packet; got {}".format( err = "need at least 4 bytes in the first packet; got {}".format(
len(method) len(method)
) )
if method: if method:
self.log(err) self.log(err)
self.s.send(b"HTTP/1.1 400 Bad Request\r\n\r\n" + err.encode("utf-8")) self.s.send(b"HTTP/1.1 400 Bad Request\r\n\r\n" + err.encode("utf-8"))
return False return False
return not method or not bool(PTN_HTTP.match(method)) return not method or not bool(PTN_HTTP.match(method))
@@ -178,7 +176,7 @@ class HttpConn(object):
self.s = ctx.wrap_socket(self.s, server_side=True) self.s = ctx.wrap_socket(self.s, server_side=True)
msg = [ msg = [
"\033[1;3{:d}m{}".format(c, s) "\033[1;3%dm%s" % (c, s)
for c, s in zip([0, 5, 0], self.s.cipher()) # type: ignore for c, s in zip([0, 5, 0], self.s.cipher()) # type: ignore
] ]
self.log(" ".join(msg) + "\033[0m") self.log(" ".join(msg) + "\033[0m")
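The refactored `_detect_https` peeks at the first four bytes of the socket and treats anything that does not look like an HTTP method as a TLS handshake. The core check can be sketched like this (the regex is an assumption; copyparty's actual `PTN_HTTP` may differ):

```python
import re

# hypothetical stand-in for copyparty's PTN_HTTP: the first four bytes of
# a plaintext request look like "GET ", "PUT ", "POST", "HEAD", ...
PTN_HTTP = re.compile(rb"[A-Z]{3}[A-Z ]")

def looks_like_tls(first4: bytes) -> bool:
    # a TLS ClientHello starts with 0x16 0x03, which never matches an
    # HTTP method prefix; empty reads are also treated as non-HTTP
    return not first4 or not bool(PTN_HTTP.match(first4))
```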


@@ -109,6 +109,7 @@ class HttpSrv(object):
self.g404 = Garda(self.args.ban_404) self.g404 = Garda(self.args.ban_404)
self.g403 = Garda(self.args.ban_403) self.g403 = Garda(self.args.ban_403)
self.g422 = Garda(self.args.ban_422, False) self.g422 = Garda(self.args.ban_422, False)
self.gmal = Garda(self.args.ban_422)
self.gurl = Garda(self.args.ban_url) self.gurl = Garda(self.args.ban_url)
self.bans: dict[str, int] = {} self.bans: dict[str, int] = {}
self.aclose: dict[str, int] = {} self.aclose: dict[str, int] = {}
@@ -116,6 +117,7 @@ class HttpSrv(object):
self.bound: set[tuple[str, int]] = set() self.bound: set[tuple[str, int]] = set()
self.name = "hsrv" + nsuf self.name = "hsrv" + nsuf
self.mutex = threading.Lock() self.mutex = threading.Lock()
self.u2mutex = threading.Lock()
self.stopping = False self.stopping = False
self.tp_nthr = 0 # actual self.tp_nthr = 0 # actual
@@ -219,7 +221,7 @@ class HttpSrv(object):
def periodic(self) -> None: def periodic(self) -> None:
while True: while True:
time.sleep(2 if self.tp_ncli or self.ncli else 10) time.sleep(2 if self.tp_ncli or self.ncli else 10)
with self.mutex: with self.u2mutex, self.mutex:
self.u2fh.clean() self.u2fh.clean()
if self.tp_q: if self.tp_q:
self.tp_ncli = max(self.ncli, self.tp_ncli - 2) self.tp_ncli = max(self.ncli, self.tp_ncli - 2)
@@ -365,7 +367,7 @@ class HttpSrv(object):
if not self.t_periodic: if not self.t_periodic:
name = "hsrv-pt" name = "hsrv-pt"
if self.nid: if self.nid:
name += "-{}".format(self.nid) name += "-%d" % (self.nid,)
self.t_periodic = Daemon(self.periodic, name) self.t_periodic = Daemon(self.periodic, name)
@@ -384,7 +386,7 @@ class HttpSrv(object):
Daemon( Daemon(
self.thr_client, self.thr_client,
"httpconn-{}-{}".format(addr[0].split(".", 2)[-1][-6:], addr[1]), "httpconn-%s-%d" % (addr[0].split(".", 2)[-1][-6:], addr[1]),
(sck, addr), (sck, addr),
) )
@@ -401,9 +403,7 @@ class HttpSrv(object):
try: try:
sck, addr = task sck, addr = task
me = threading.current_thread() me = threading.current_thread()
me.name = "httpconn-{}-{}".format( me.name = "httpconn-%s-%d" % (addr[0].split(".", 2)[-1][-6:], addr[1])
addr[0].split(".", 2)[-1][-6:], addr[1]
)
self.thr_client(sck, addr) self.thr_client(sck, addr)
me.name = self.name + "-poolw" me.name = self.name + "-poolw"
except Exception as ex: except Exception as ex:
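The `u2fh` pool now gets its own `u2mutex`, and the periodic cleaner takes both locks at once; what keeps that deadlock-free is acquiring them in a fixed order everywhere. A minimal sketch of the pattern:

```python
import threading

mutex = threading.Lock()    # general server state
u2mutex = threading.Lock()  # up2k filehandle pool

def periodic_clean(clean_fn):
    # always u2mutex before mutex, matching the order used elsewhere,
    # so two threads can never wait on each other's held lock
    with u2mutex, mutex:
        clean_fn()
```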


@@ -8,7 +8,7 @@ import re
from .__init__ import PY2 from .__init__ import PY2
from .th_srv import HAVE_PIL, HAVE_PILF from .th_srv import HAVE_PIL, HAVE_PILF
from .util import BytesIO from .util import BytesIO # type: ignore
class Ico(object): class Ico(object):
@@ -22,18 +22,18 @@ class Ico(object):
ext = bext.decode("utf-8") ext = bext.decode("utf-8")
zb = hashlib.sha1(bext).digest()[2:4] zb = hashlib.sha1(bext).digest()[2:4]
if PY2: if PY2:
zb = [ord(x) for x in zb] zb = [ord(x) for x in zb] # type: ignore
c1 = colorsys.hsv_to_rgb(zb[0] / 256.0, 1, 0.3) c1 = colorsys.hsv_to_rgb(zb[0] / 256.0, 1, 0.3)
c2 = colorsys.hsv_to_rgb(zb[0] / 256.0, 0.8 if HAVE_PILF else 1, 1) c2 = colorsys.hsv_to_rgb(zb[0] / 256.0, 0.8 if HAVE_PILF else 1, 1)
ci = [int(x * 255) for x in list(c1) + list(c2)] ci = [int(x * 255) for x in list(c1) + list(c2)]
c = "".join(["{:02x}".format(x) for x in ci]) c = "".join(["%02x" % (x,) for x in ci])
w = 100 w = 100
h = 30 h = 30
if not self.args.th_no_crop and as_thumb: if not self.args.th_no_crop and as_thumb:
sw, sh = self.args.th_size.split("x") sw, sh = self.args.th_size.split("x")
h = int(100 / (float(sw) / float(sh))) h = int(100.0 / (float(sw) / float(sh)))
w = 100 w = 100
if chrome: if chrome:
@@ -47,12 +47,12 @@ class Ico(object):
# [.lt] are hard to see lowercase / unspaced # [.lt] are hard to see lowercase / unspaced
ext2 = re.sub("(.)", "\\1 ", ext).upper() ext2 = re.sub("(.)", "\\1 ", ext).upper()
h = int(128 * h / w) h = int(128.0 * h / w)
w = 128 w = 128
img = Image.new("RGB", (w, h), "#" + c[:6]) img = Image.new("RGB", (w, h), "#" + c[:6])
pb = ImageDraw.Draw(img) pb = ImageDraw.Draw(img)
_, _, tw, th = pb.textbbox((0, 0), ext2, font_size=16) _, _, tw, th = pb.textbbox((0, 0), ext2, font_size=16)
xy = ((w - tw) // 2, (h - th) // 2) xy = (int((w - tw) / 2), int((h - th) / 2))
pb.text(xy, ext2, fill="#" + c[6:], font_size=16) pb.text(xy, ext2, fill="#" + c[6:], font_size=16)
img = img.resize((w * 2, h * 2), Image.NEAREST) img = img.resize((w * 2, h * 2), Image.NEAREST)
@@ -68,7 +68,7 @@ class Ico(object):
# svg: 3s, cache: 6s, this: 8s # svg: 3s, cache: 6s, this: 8s
from PIL import Image, ImageDraw from PIL import Image, ImageDraw
h = int(64 * h / w) h = int(64.0 * h / w)
w = 64 w = 64
img = Image.new("RGB", (w, h), "#" + c[:6]) img = Image.new("RGB", (w, h), "#" + c[:6])
pb = ImageDraw.Draw(img) pb = ImageDraw.Draw(img)
@@ -91,20 +91,6 @@ class Ico(object):
img.save(buf, format="PNG", compress_level=1) img.save(buf, format="PNG", compress_level=1)
return "image/png", buf.getvalue() return "image/png", buf.getvalue()
elif False:
# 48s, too slow
import pyvips
h = int(192 * h / w)
w = 192
img = pyvips.Image.text(
ext, width=w, height=h, dpi=192, align=pyvips.Align.CENTRE
)
img = img.ifthenelse(ci[3:], ci[:3], blend=True)
# i = i.resize(3, kernel=pyvips.Kernel.NEAREST)
buf = img.write_to_buffer(".png[compression=1]")
return "image/png", buf
svg = """\ svg = """\
<?xml version="1.0" encoding="UTF-8"?> <?xml version="1.0" encoding="UTF-8"?>
<svg version="1.1" viewBox="0 0 100 {}" xmlns="http://www.w3.org/2000/svg"><g> <svg version="1.1" viewBox="0 0 100 {}" xmlns="http://www.w3.org/2000/svg"><g>
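The icon generator derives its two colors by hashing the file extension and using one digest byte as the hue; the diff only switches the formatting to `%`-style. Pulled out as a standalone sketch (the function name is mine):

```python
import colorsys
import hashlib

def ext_colors(ext):
    # hash the extension and use one digest byte as the hue; returns
    # 12 hex chars: a dark background color + a bright text color
    zb = hashlib.sha1(ext.encode("utf-8")).digest()[2:4]
    c1 = colorsys.hsv_to_rgb(zb[0] / 256.0, 1, 0.3)
    c2 = colorsys.hsv_to_rgb(zb[0] / 256.0, 1, 1)
    ci = [int(x * 255) for x in list(c1) + list(c2)]
    return "".join("%02x" % (x,) for x in ci)
```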


@@ -118,7 +118,7 @@ def ffprobe(
b"--", b"--",
fsenc(abspath), fsenc(abspath),
] ]
rc, so, se = runcmd(cmd, timeout=timeout) rc, so, se = runcmd(cmd, timeout=timeout, nice=True, oom=200)
retchk(rc, cmd, se) retchk(rc, cmd, se)
return parse_ffprobe(so) return parse_ffprobe(so)
@@ -240,7 +240,7 @@ def parse_ffprobe(txt: str) -> tuple[dict[str, tuple[int, Any]], dict[str, list[
if "/" in fps: if "/" in fps:
fa, fb = fps.split("/") fa, fb = fps.split("/")
try: try:
fps = int(fa) * 1.0 / int(fb) fps = float(fa) / float(fb)
except: except:
fps = 9001 fps = 9001
@@ -261,7 +261,8 @@ def parse_ffprobe(txt: str) -> tuple[dict[str, tuple[int, Any]], dict[str, list[
if ".resw" in ret and ".resh" in ret: if ".resw" in ret and ".resh" in ret:
ret["res"] = "{}x{}".format(ret[".resw"], ret[".resh"]) ret["res"] = "{}x{}".format(ret[".resw"], ret[".resh"])
zd = {k: (0, v) for k, v in ret.items()} zero = int("0")
zd = {k: (zero, v) for k, v in ret.items()}
return zd, md return zd, md
@@ -562,6 +563,8 @@ class MTag(object):
args = { args = {
"env": env, "env": env,
"nice": True,
"oom": 300,
"timeout": parser.timeout, "timeout": parser.timeout,
"kill": parser.kill, "kill": parser.kill,
"capture": parser.capture, "capture": parser.capture,
@@ -572,11 +575,6 @@ class MTag(object):
zd.update(ret) zd.update(ret)
args["sin"] = json.dumps(zd).encode("utf-8", "replace") args["sin"] = json.dumps(zd).encode("utf-8", "replace")
if WINDOWS:
args["creationflags"] = 0x4000
else:
cmd = ["nice"] + cmd
bcmd = [sfsenc(x) for x in cmd[:-1]] + [fsenc(cmd[-1])] bcmd = [sfsenc(x) for x in cmd[:-1]] + [fsenc(cmd[-1])]
rc, v, err = runcmd(bcmd, **args) # type: ignore rc, v, err = runcmd(bcmd, **args) # type: ignore
retchk(rc, bcmd, err, self.log, 5, self.args.mtag_v) retchk(rc, bcmd, err, self.log, 5, self.args.mtag_v)


@@ -406,6 +406,7 @@ class SMB(object):
smbserver.os.path.abspath = self._hook smbserver.os.path.abspath = self._hook
smbserver.os.path.expanduser = self._hook smbserver.os.path.expanduser = self._hook
smbserver.os.path.expandvars = self._hook
smbserver.os.path.getatime = self._hook smbserver.os.path.getatime = self._hook
smbserver.os.path.getctime = self._hook smbserver.os.path.getctime = self._hook
smbserver.os.path.getmtime = self._hook smbserver.os.path.getmtime = self._hook


@@ -65,21 +65,21 @@ class StreamTar(StreamArc):
cmp = re.sub(r"[^a-z0-9]*pax[^a-z0-9]*", "", cmp) cmp = re.sub(r"[^a-z0-9]*pax[^a-z0-9]*", "", cmp)
try: try:
cmp, lv = cmp.replace(":", ",").split(",") cmp, zs = cmp.replace(":", ",").split(",")
lv = int(lv) lv = int(zs)
except: except:
lv = None lv = -1
arg = {"name": None, "fileobj": self.qfile, "mode": "w", "format": fmt} arg = {"name": None, "fileobj": self.qfile, "mode": "w", "format": fmt}
if cmp == "gz": if cmp == "gz":
fun = tarfile.TarFile.gzopen fun = tarfile.TarFile.gzopen
arg["compresslevel"] = lv if lv is not None else 3 arg["compresslevel"] = lv if lv >= 0 else 3
elif cmp == "bz2": elif cmp == "bz2":
fun = tarfile.TarFile.bz2open fun = tarfile.TarFile.bz2open
arg["compresslevel"] = lv if lv is not None else 2 arg["compresslevel"] = lv if lv >= 0 else 2
elif cmp == "xz": elif cmp == "xz":
fun = tarfile.TarFile.xzopen fun = tarfile.TarFile.xzopen
arg["preset"] = lv if lv is not None else 1 arg["preset"] = lv if lv >= 0 else 1
else: else:
fun = tarfile.open fun = tarfile.open
arg["mode"] = "w|" arg["mode"] = "w|"
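The tar compression parser now uses `-1` as a "use the default level" sentinel instead of `None`, keeping the variable an `int` for the type checker. Its core, extracted as a standalone function (the function name is mine):

```python
import re

def parse_tar_cmp(cmp):
    # "?tar=gz:9" or "?tar=gz,9" -> ("gz", 9); a missing or bogus
    # level becomes -1, meaning "use the per-format default"
    cmp = re.sub(r"[^a-z0-9]*pax[^a-z0-9]*", "", cmp)
    try:
        cmp, zs = cmp.replace(":", ",").split(",")
        lv = int(zs)
    except Exception:
        lv = -1
    return cmp, lv
```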


@@ -61,7 +61,7 @@ class Adapter(object):
) )
if True: if True: # pylint: disable=using-constant-test
# Type of an IPv4 address (a string in "xxx.xxx.xxx.xxx" format) # Type of an IPv4 address (a string in "xxx.xxx.xxx.xxx" format)
_IPv4Address = str _IPv4Address = str


@@ -133,12 +133,13 @@ class SvcHub(object):
        if not self._process_config():
            raise Exception(BAD_CFG)

-        # for non-http clients (ftp)
+        # for non-http clients (ftp, tftp)
        self.bans: dict[str, int] = {}
        self.gpwd = Garda(self.args.ban_pw)
        self.g404 = Garda(self.args.ban_404)
        self.g403 = Garda(self.args.ban_403)
-        self.g422 = Garda(self.args.ban_422)
+        self.g422 = Garda(self.args.ban_422, False)
+        self.gmal = Garda(self.args.ban_422)
        self.gurl = Garda(self.args.ban_url)

        self.log_div = 10 ** (6 - args.log_tdec)

@@ -267,6 +268,12 @@ class SvcHub(object):
            Daemon(self.start_ftpd, "start_ftpd")
            zms += "f" if args.ftp else "F"

+        if args.tftp:
+            from .tftpd import Tftpd
+
+            self.tftpd: Optional[Tftpd] = None
+            Daemon(self.start_ftpd, "start_tftpd")
+
        if args.smb:
            # impacket.dcerpc is noisy about listen timeouts
            sto = socket.getdefaulttimeout()

@@ -296,10 +303,12 @@ class SvcHub(object):
    def start_ftpd(self) -> None:
        time.sleep(30)

-        if self.ftpd:
-            return
-
-        self.restart_ftpd()
+        if hasattr(self, "ftpd") and not self.ftpd:
+            self.restart_ftpd()
+
+        if hasattr(self, "tftpd") and not self.tftpd:
+            self.restart_tftpd()
    def restart_ftpd(self) -> None:
        if not hasattr(self, "ftpd"):

@@ -316,6 +325,17 @@ class SvcHub(object):
        self.ftpd = Ftpd(self)
        self.log("root", "started FTPd")

+    def restart_tftpd(self) -> None:
+        if not hasattr(self, "tftpd"):
+            return
+
+        from .tftpd import Tftpd
+
+        if self.tftpd:
+            return  # todo
+
+        self.tftpd = Tftpd(self)
    def thr_httpsrv_up(self) -> None:
        time.sleep(1 if self.args.ign_ebind_all else 5)
        expected = self.broker.num_workers * self.tcpsrv.nsrv

@@ -404,20 +424,25 @@ class SvcHub(object):
        if al.rsp_jtr:
            al.rsp_slp = 0.000001

-        al.th_covers = set(al.th_covers.split(","))
+        zsl = al.th_covers.split(",")
+        zsl = [x.strip() for x in zsl]
+        zsl = [x for x in zsl if x]
+        al.th_covers = set(zsl)
+        al.th_coversd = set(zsl + ["." + x for x in zsl])

        for k in "c".split(" "):
            vl = getattr(al, k)
            if not vl:
                continue

-            vl = [os.path.expanduser(x) if x.startswith("~") else x for x in vl]
+            vl = [os.path.expandvars(os.path.expanduser(x)) for x in vl]
            setattr(al, k, vl)

        for k in "lo hist ssl_log".split(" "):
            vs = getattr(al, k)
-            if vs and vs.startswith("~"):
-                setattr(al, k, os.path.expanduser(vs))
+            if vs:
+                vs = os.path.expandvars(os.path.expanduser(vs))
+                setattr(al, k, vs)
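The change above runs path options through both `expanduser` and `expandvars`, so `~` and environment-variable references are honored regardless of where they appear (previously only a leading `~` was handled). A quick standalone illustration:

```python
import os

def resolve_path(p: str) -> str:
    # expand "~" first, then "$VAR" (and "%VAR%" on Windows) references
    return os.path.expandvars(os.path.expanduser(p))

os.environ["XDG_DATA"] = "/var/lib"
print(resolve_path("$XDG_DATA/copyparty"))  # /var/lib/copyparty
```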
        for k in "sus_urls nonsus_urls".split(" "):
            vs = getattr(al, k)

@@ -426,16 +451,26 @@ class SvcHub(object):
            else:
                setattr(al, k, re.compile(vs))

+        for k in "tftp_lsf".split(" "):
+            vs = getattr(al, k)
+            if not vs or vs == "no":
+                setattr(al, k, None)
+            else:
+                setattr(al, k, re.compile("^" + vs + "$"))
+
        if not al.sus_urls:
            al.ban_url = "no"
        elif al.ban_url == "no":
            al.sus_urls = None

-        if al.xff_src in ("any", "0", ""):
-            al.xff_re = None
-        else:
-            zs = al.xff_src.replace(" ", "").replace(".", "\\.").replace(",", "|")
-            al.xff_re = re.compile("^(?:" + zs + ")")
+        al.xff_hdr = al.xff_hdr.lower()
+        al.idp_h_usr = al.idp_h_usr.lower()
+        # al.idp_h_grp = al.idp_h_grp.lower()
+
+        al.xff_re = self._ipa2re(al.xff_src)
+        al.ipa_re = self._ipa2re(al.ipa)
+        al.ftp_ipa_re = self._ipa2re(al.ftp_ipa or al.ipa)
+        al.tftp_ipa_re = self._ipa2re(al.tftp_ipa or al.ipa)
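`--tftp-lsf` above is compiled with explicit `^...$` anchors, so the pattern must match the whole filename rather than any substring. A sketch of that compile step (the example pattern is illustrative, not necessarily the shipped default):

```python
import re

def compile_lsf(vs):
    # "no" or empty disables the virtual listing file; otherwise full-match only
    if not vs or vs == "no":
        return None
    return re.compile("^" + vs + "$")

ptn = compile_lsf(r"\.?(dir|ls)(\.txt)?")
print(bool(ptn.match("dir.txt")))    # True
print(bool(ptn.match("mydir.txt")))  # False; the ^ anchor rejects it
```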
        mte = ODict.fromkeys(DEF_MTE.split(","), True)
        al.mte = odfusion(mte, al.mte)

@@ -452,14 +487,28 @@ class SvcHub(object):
            if ptn:
                setattr(self.args, k, re.compile(ptn))

+        try:
+            zf1, zf2 = self.args.rm_retry.split("/")
+            self.args.rm_re_t = float(zf1)
+            self.args.rm_re_r = float(zf2)
+        except:
+            raise Exception("invalid --rm-retry [%s]" % (self.args.rm_retry,))
+
        return True
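`--rm-retry` packs two floats into one flag, `<total-seconds>/<interval>`. A standalone sketch of the same split-and-validate pattern (function name is hypothetical):

```python
def parse_rm_retry(spec: str):
    # "8.3/0.1" -> keep retrying deletes for 8.3s total, once every 0.1s
    try:
        zf1, zf2 = spec.split("/")
        return float(zf1), float(zf2)
    except Exception:
        raise ValueError("invalid --rm-retry [%s]" % (spec,))

print(parse_rm_retry("8.3/0.1"))  # (8.3, 0.1)
```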
+    def _ipa2re(self, txt) -> Optional[re.Pattern]:
+        if txt in ("any", "0", ""):
+            return None
+
+        zs = txt.replace(" ", "").replace(".", "\\.").replace(",", "|")
+        return re.compile("^(?:" + zs + ")")
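The new `_ipa2re` helper turns a comma-separated list of IP prefixes into one anchored alternation, shared by `--ipa`, `--ftp-ipa`, `--tftp-ipa` and `--xff-src`. A module-level version for illustration:

```python
import re
from typing import Optional

def ipa2re(txt: str) -> Optional[re.Pattern]:
    # "any" / "0" / "" means no filtering at all
    if txt in ("any", "0", ""):
        return None
    # escape dots, turn commas into alternation, anchor at start of address
    zs = txt.replace(" ", "").replace(".", "\\.").replace(",", "|")
    return re.compile("^(?:" + zs + ")")

ptn = ipa2re("10., 192.168.")
print(bool(ptn.match("10.1.2.3")))     # True
print(bool(ptn.match("172.16.0.1")))   # False
```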
    def _setlimits(self) -> None:
        try:
            import resource

            soft, hard = [
-                x if x > 0 else 1024 * 1024
+                int(x) if x > 0 else 1024 * 1024
                for x in list(resource.getrlimit(resource.RLIMIT_NOFILE))
            ]
        except:
@@ -516,12 +565,17 @@ class SvcHub(object):
            sel_fn = "{}.{}".format(fn, ctr)

        fn = sel_fn

+        try:
+            os.makedirs(os.path.dirname(fn))
+        except:
+            pass
+
        try:
            if do_xz:
                import lzma

                lh = lzma.open(fn, "wt", encoding="utf-8", errors="replace", preset=0)
+                self.args.no_logflush = True
            else:
                lh = open(fn, "wt", encoding="utf-8", errors="replace")
        except:
@@ -751,10 +805,27 @@ class SvcHub(object):
            (zd.hour * 100 + zd.minute) * 100 + zd.second,
            zd.microsecond // self.log_div,
        )

-        self.logf.write("@%s [%s\033[0m] %s\n" % (ts, src, msg))
+        if c and not self.args.no_ansi:
+            if isinstance(c, int):
+                msg = "\033[3%sm%s\033[0m" % (c, msg)
+            elif "\033" not in c:
+                msg = "\033[%sm%s\033[0m" % (c, msg)
+            else:
+                msg = "%s%s\033[0m" % (c, msg)
+
+        if "\033" in src:
+            src += "\033[0m"
+
+        if "\033" in msg:
+            msg += "\033[0m"
+
+        self.logf.write("@%s [%-21s] %s\n" % (ts, src, msg))
+        if not self.args.no_logflush:
+            self.logf.flush()

        now = time.time()
-        if now >= self.next_day:
+        if int(now) >= self.next_day:
            self._set_next_day()
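The colorizer in the hunk above accepts either a palette index (int) or a raw SGR string. A minimal standalone version of that dispatch (simplified: the `no_ansi` check is dropped, and the helper name is illustrative):

```python
def colorize(msg: str, c) -> str:
    # int -> foreground palette color 30..37; str -> SGR params or a raw escape
    if not c:
        return msg
    if isinstance(c, int):
        return "\033[3%sm%s\033[0m" % (c, msg)
    if "\033" not in c:
        return "\033[%sm%s\033[0m" % (c, msg)
    return "%s%s\033[0m" % (c, msg)

print(repr(colorize("hello", 1)))    # '\x1b[31mhello\x1b[0m'
print(repr(colorize("hi", "1;35")))  # '\x1b[1;35mhi\x1b[0m'
```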
    def _set_next_day(self) -> None:

@@ -782,7 +853,7 @@ class SvcHub(object):
        """handles logging from all components"""
        with self.log_mutex:
            now = time.time()
-            if now >= self.next_day:
+            if int(now) >= self.next_day:
                dt = datetime.fromtimestamp(now, UTC)
                zs = "{}\n" if self.no_ansi else "\033[36m{}\033[0m\n"
                zs = zs.format(dt.strftime("%Y-%m-%d"))
@@ -827,6 +898,8 @@ class SvcHub(object):
            if self.logf:
                self.logf.write(msg)
+                if not self.args.no_logflush:
+                    self.logf.flush()

    def pr(self, *a: Any, **ka: Any) -> None:
        try:
View File

@@ -241,6 +241,11 @@ class TcpSrv(object):
                    raise OSError(E_ADDR_IN_USE[0], "")
                self.srv.append(srv)
            except (OSError, socket.error) as ex:
+                try:
+                    srv.close()
+                except:
+                    pass
+
                if ex.errno in E_ADDR_IN_USE:
                    e = "\033[1;31mport {} is busy on interface {}\033[0m".format(port, ip)
                elif ex.errno in E_ADDR_NOT_AVAIL:

@@ -304,6 +309,7 @@ class TcpSrv(object):
            self.hub.start_zeroconf()

        gencert(self.log, self.args, self.netdevs)
        self.hub.restart_ftpd()
+        self.hub.restart_tftpd()

    def shutdown(self) -> None:
        self.stopping = True

copyparty/tftpd.py (new file, 309 lines)
View File

@@ -0,0 +1,309 @@
# coding: utf-8
from __future__ import print_function, unicode_literals
try:
from types import SimpleNamespace
except:
class SimpleNamespace(object):
def __init__(self, **attr):
self.__dict__.update(attr)
import inspect
import logging
import os
import stat
from datetime import datetime
from partftpy import TftpContexts, TftpServer, TftpStates
from partftpy.TftpShared import TftpException
from .__init__ import PY2, TYPE_CHECKING
from .authsrv import VFS
from .bos import bos
from .util import BytesIO, Daemon, exclude_dotfiles, runhook, undot
if True: # pylint: disable=using-constant-test
from typing import Any, Union
if TYPE_CHECKING:
from .svchub import SvcHub
lg = logging.getLogger("tftp")
debug, info, warning, error = (lg.debug, lg.info, lg.warning, lg.error)
def _serverInitial(self, pkt: Any, raddress: str, rport: int) -> bool:
info("connection from %s:%s", raddress, rport)
ret = _orig_serverInitial(self, pkt, raddress, rport)
ptn = _hub[0].args.tftp_ipa_re
if ptn and not ptn.match(raddress):
yeet("client rejected (--tftp-ipa): %s" % (raddress,))
return ret
# patch ipa-check into partftpd
_hub: list["SvcHub"] = []
_orig_serverInitial = TftpStates.TftpServerState.serverInitial
TftpStates.TftpServerState.serverInitial = _serverInitial
class Tftpd(object):
def __init__(self, hub: "SvcHub") -> None:
self.hub = hub
self.args = hub.args
self.asrv = hub.asrv
self.log = hub.log
_hub[:] = []
_hub.append(hub)
lg.setLevel(logging.DEBUG if self.args.tftpv else logging.INFO)
for x in ["partftpy", "partftpy.TftpStates", "partftpy.TftpServer"]:
lgr = logging.getLogger(x)
lgr.setLevel(logging.DEBUG if self.args.tftpv else logging.INFO)
# patch vfs into partftpy
TftpContexts.open = self._open
TftpStates.open = self._open
fos = SimpleNamespace()
for k in os.__dict__:
try:
setattr(fos, k, getattr(os, k))
except:
pass
fos.access = self._access
fos.mkdir = self._mkdir
fos.unlink = self._unlink
fos.sep = "/"
TftpContexts.os = fos
TftpServer.os = fos
TftpStates.os = fos
fop = SimpleNamespace()
for k in os.path.__dict__:
try:
setattr(fop, k, getattr(os.path, k))
except:
pass
fop.abspath = self._p_abspath
fop.exists = self._p_exists
fop.isdir = self._p_isdir
fop.normpath = self._p_normpath
fos.path = fop
self._disarm(fos)
ip = next((x for x in self.args.i if ":" not in x), None)
if not ip:
self.log("tftp", "IPv6 not supported for tftp; listening on 0.0.0.0", 3)
ip = "0.0.0.0"
self.ip = ip
self.port = int(self.args.tftp)
self.srv = TftpServer.TftpServer("/", self._ls)
self.stop = self.srv.stop
ports = []
if self.args.tftp_pr:
p1, p2 = [int(x) for x in self.args.tftp_pr.split("-")]
ports = list(range(p1, p2 + 1))
Daemon(self.srv.listen, "tftp", [self.ip, self.port], ka={"ports": ports})
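`--tftp-pr` above takes a dash-separated range for the server's ephemeral data ports. The parsing step, isolated (helper name is illustrative):

```python
def parse_port_range(spec: str):
    # "12000-12099" -> [12000, ..., 12099]; empty -> let the OS pick
    if not spec:
        return []
    p1, p2 = [int(x) for x in spec.split("-")]
    return list(range(p1, p2 + 1))

print(len(parse_port_range("12000-12099")))  # 100
```

Pinning the data ports to a known range matters for TFTP because each transfer moves to a fresh UDP port, which firewalls otherwise block.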
def nlog(self, msg: str, c: Union[int, str] = 0) -> None:
self.log("tftp", msg, c)
def _v2a(self, caller: str, vpath: str, perms: list, *a: Any) -> tuple[VFS, str]:
vpath = vpath.replace("\\", "/").lstrip("/")
if not perms:
perms = [True, True]
debug('%s("%s", %s) %s\033[K\033[0m', caller, vpath, str(a), perms)
vfs, rem = self.asrv.vfs.get(vpath, "*", *perms)
return vfs, vfs.canonical(rem)
def _ls(self, vpath: str, raddress: str, rport: int, force=False) -> Any:
# generate file listing if vpath is dir.txt and return as file object
if not force:
vpath, fn = os.path.split(vpath.replace("\\", "/"))
ptn = self.args.tftp_lsf
if not ptn or not ptn.match(fn.lower()):
return None
vn, rem = self.asrv.vfs.get(vpath, "*", True, False)
fsroot, vfs_ls, vfs_virt = vn.ls(
rem,
"*",
not self.args.no_scandir,
[[True, False]],
)
dnames = set([x[0] for x in vfs_ls if stat.S_ISDIR(x[1].st_mode)])
dirs1 = [(v.st_mtime, v.st_size, k + "/") for k, v in vfs_ls if k in dnames]
fils1 = [(v.st_mtime, v.st_size, k) for k, v in vfs_ls if k not in dnames]
real1 = dirs1 + fils1
realt = [(datetime.fromtimestamp(mt), sz, fn) for mt, sz, fn in real1]
reals = [
(
"%04d-%02d-%02d %02d:%02d:%02d"
% (
zd.year,
zd.month,
zd.day,
zd.hour,
zd.minute,
zd.second,
),
sz,
fn,
)
for zd, sz, fn in realt
]
virs = [("????-??-?? ??:??:??", 0, k + "/") for k in vfs_virt.keys()]
ls = virs + reals
if "*" not in vn.axs.udot:
names = set(exclude_dotfiles([x[2] for x in ls]))
ls = [x for x in ls if x[2] in names]
try:
biggest = max([x[1] for x in ls])
except:
biggest = 0
perms = []
if "*" in vn.axs.uread:
perms.append("read")
if "*" in vn.axs.udot:
perms.append("hidden")
if "*" in vn.axs.uwrite:
if "*" in vn.axs.udel:
perms.append("overwrite")
else:
perms.append("write")
fmt = "{{}} {{:{},}} {{}}"
fmt = fmt.format(len("{:,}".format(biggest)))
retl = ["# permissions: %s" % (", ".join(perms),)]
retl += [fmt.format(*x) for x in ls]
ret = "\n".join(retl).encode("utf-8", "replace")
return BytesIO(ret)
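The listing above uses a two-pass trick to right-align sizes: first measure the widest thousands-separated size, then bake that width into the format string. A self-contained sketch:

```python
def format_listing(rows):
    # rows: (date, size, name); align the size column to the biggest entry
    biggest = max((sz for _, sz, _ in rows), default=0)
    width = len("{:,}".format(biggest))
    fmt = "{{}} {{:{},}} {{}}".format(width)  # e.g. "{} {:9,} {}"
    return [fmt.format(*row) for row in rows]

rows = [("2024-02-15 00:00:41", 1048576, "a.bin"),
        ("2024-02-15 00:00:42", 512, "b.txt")]
print("\n".join(format_listing(rows)))
```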
def _open(self, vpath: str, mode: str, *a: Any, **ka: Any) -> Any:
rd = wr = False
if mode == "rb":
rd = True
elif mode == "wb":
wr = True
else:
raise Exception("bad mode %s" % (mode,))
vfs, ap = self._v2a("open", vpath, [rd, wr])
if wr:
if "*" not in vfs.axs.uwrite:
yeet("blocked write; folder not world-writable: /%s" % (vpath,))
if bos.path.exists(ap) and "*" not in vfs.axs.udel:
yeet("blocked write; folder not world-deletable: /%s" % (vpath,))
xbu = vfs.flags.get("xbu")
if xbu and not runhook(
self.nlog, xbu, ap, vpath, "", "", 0, 0, "8.3.8.7", 0, ""
):
yeet("blocked by xbu server config: " + vpath)
if not self.args.tftp_nols and bos.path.isdir(ap):
return self._ls(vpath, "", 0, True)
return open(ap, mode, *a, **ka)
def _mkdir(self, vpath: str, *a) -> None:
vfs, ap = self._v2a("mkdir", vpath, [])
if "*" not in vfs.axs.uwrite:
yeet("blocked mkdir; folder not world-writable: /%s" % (vpath,))
return bos.mkdir(ap)
def _unlink(self, vpath: str) -> None:
# return bos.unlink(self._v2a("stat", vpath, *a)[1])
vfs, ap = self._v2a("delete", vpath, [True, False, False, True])
try:
inf = bos.stat(ap)
except:
return
if not stat.S_ISREG(inf.st_mode) or inf.st_size:
yeet("attempted delete of non-empty file")
vpath = vpath.replace("\\", "/").lstrip("/")
self.hub.up2k.handle_rm("*", "8.3.8.7", [vpath], [], False)
def _access(self, *a: Any) -> bool:
return True
def _p_abspath(self, vpath: str) -> str:
return "/" + undot(vpath)
def _p_normpath(self, *a: Any) -> str:
return ""
def _p_exists(self, vpath: str) -> bool:
try:
ap = self._v2a("p.exists", vpath, [False, False])[1]
bos.stat(ap)
return True
except:
return False
def _p_isdir(self, vpath: str) -> bool:
try:
st = bos.stat(self._v2a("p.isdir", vpath, [False, False])[1])
ret = stat.S_ISDIR(st.st_mode)
return ret
except:
return False
def _hook(self, *a: Any, **ka: Any) -> None:
src = inspect.currentframe().f_back.f_code.co_name
error("\033[31m%s:hook(%s)\033[0m", src, a)
raise Exception("nope")
def _disarm(self, fos: SimpleNamespace) -> None:
fos.chmod = self._hook
fos.chown = self._hook
fos.close = self._hook
fos.ftruncate = self._hook
fos.lchown = self._hook
fos.link = self._hook
fos.listdir = self._hook
fos.lstat = self._hook
fos.open = self._hook
fos.remove = self._hook
fos.rename = self._hook
fos.replace = self._hook
fos.scandir = self._hook
fos.stat = self._hook
fos.symlink = self._hook
fos.truncate = self._hook
fos.utime = self._hook
fos.walk = self._hook
fos.path.expanduser = self._hook
fos.path.expandvars = self._hook
fos.path.getatime = self._hook
fos.path.getctime = self._hook
fos.path.getmtime = self._hook
fos.path.getsize = self._hook
fos.path.isabs = self._hook
fos.path.isfile = self._hook
fos.path.islink = self._hook
fos.path.realpath = self._hook
def yeet(msg: str) -> None:
warning(msg)
raise TftpException(msg)
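The module patches `partftpy` at import time instead of forking it: it saves the original `serverInitial`, wraps it with the IP-allowlist check, and writes the wrapper back onto the class. The same pattern in miniature (toy classes, not partftpy):

```python
class ServerState:
    def serverInitial(self, pkt):
        return "handled:%s" % (pkt,)

_orig = ServerState.serverInitial

def _patched(self, pkt):
    # run extra policy checks, then defer to the library's original code
    if pkt == "blocked":
        raise PermissionError("client rejected")
    return _orig(self, pkt)

ServerState.serverInitial = _patched

print(ServerState().serverInitial("hi"))  # handled:hi
```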

View File

@@ -18,7 +18,7 @@ from .bos import bos
from .mtag import HAVE_FFMPEG, HAVE_FFPROBE, ffprobe
from .util import (
    FFMPEG_URL,
-    BytesIO,
+    BytesIO,  # type: ignore
    Cooldown,
    Daemon,
    Pebkac,

@@ -28,6 +28,7 @@ from .util import (
    runcmd,
    statdir,
    vsplit,
+    wunlink,
)

if True:  # pylint: disable=using-constant-test
@@ -102,7 +103,7 @@ def thumb_path(histpath: str, rem: str, mtime: float, fmt: str, ffa: set[str]) -
    rd += "\n" + fmt
    h = hashlib.sha512(afsenc(rd)).digest()
    b64 = base64.urlsafe_b64encode(h).decode("ascii")[:24]
-    rd = "{}/{}/".format(b64[:2], b64[2:4]).lower() + b64
+    rd = ("%s/%s/" % (b64[:2], b64[2:4])).lower() + b64

    # could keep original filenames but this is safer re pathlen
    h = hashlib.sha512(afsenc(fn)).digest()

@@ -115,7 +116,7 @@ def thumb_path(histpath: str, rem: str, mtime: float, fmt: str, ffa: set[str]) -
        fmt = "webp" if fc == "w" else "png" if fc == "p" else "jpg"

    cat = "th"
-    return "{}/{}/{}/{}.{:x}.{}".format(histpath, cat, rd, fn, int(mtime), fmt)
+    return "%s/%s/%s/%s.%x.%s" % (histpath, cat, rd, fn, int(mtime), fmt)
class ThumbSrv(object):

@@ -129,6 +130,8 @@ class ThumbSrv(object):
        self.mutex = threading.Lock()
        self.busy: dict[str, list[threading.Condition]] = {}
+        self.ram: dict[str, float] = {}
+        self.memcond = threading.Condition(self.mutex)
        self.stopping = False
        self.nthr = max(1, self.args.th_mt)

@@ -214,7 +217,7 @@ class ThumbSrv(object):
            with self.mutex:
                try:
                    self.busy[tpath].append(cond)
-                    self.log("wait {}".format(tpath))
+                    self.log("joined waiting room for %s" % (tpath,))
                except:
                    thdir = os.path.dirname(tpath)
                    bos.makedirs(os.path.join(thdir, "w"))

@@ -265,6 +268,23 @@ class ThumbSrv(object):
            "ffa": self.fmt_ffa,
        }
+    def wait4ram(self, need: float, ttpath: str) -> None:
+        ram = self.args.th_ram_max
+        if need > ram * 0.99:
+            t = "file too big; need %.2f GiB RAM, but --th-ram-max is only %.1f"
+            raise Exception(t % (need, ram))
+
+        while True:
+            with self.mutex:
+                used = sum([v for k, v in self.ram.items() if k != ttpath]) + need
+                if used < ram:
+                    # self.log("XXX self.ram: %s" % (self.ram,), 5)
+                    self.ram[ttpath] = need
+                    return
+            with self.memcond:
+                # self.log("at RAM limit; used %.2f GiB, need %.2f more" % (used-need, need), 1)
+                self.memcond.wait(3)

    def worker(self) -> None:
        while not self.stopping:
            task = self.q.get()

@@ -298,7 +318,7 @@ class ThumbSrv(object):
        tdir, tfn = os.path.split(tpath)
        ttpath = os.path.join(tdir, "w", tfn)
        try:
-            bos.unlink(ttpath)
+            wunlink(self.log, ttpath, vn.flags)
        except:
            pass
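`wait4ram` is a soft memory budget: each conversion registers its estimated footprint, and newcomers block on a condition variable until the total fits. A self-contained sketch of that gate (simplified; class and method names are illustrative):

```python
import threading

class RamGate:
    def __init__(self, budget_gib: float):
        self.budget = budget_gib
        self.mutex = threading.Lock()
        self.cond = threading.Condition(self.mutex)
        self.ram = {}  # job-id -> reserved GiB

    def acquire(self, need: float, key: str) -> None:
        if need > self.budget * 0.99:
            raise Exception("job needs %.2f GiB, budget is %.1f" % (need, self.budget))
        with self.cond:
            # block until everyone else's reservations leave room for us;
            # re-check every 3s like the original, in case a notify was missed
            while sum(v for k, v in self.ram.items() if k != key) + need >= self.budget:
                self.cond.wait(3)
            self.ram[key] = need

    def release(self, key: str) -> None:
        with self.cond:
            self.ram.pop(key, None)
            self.cond.notify_all()  # wake any converter waiting for RAM

gate = RamGate(2.0)
gate.acquire(0.5, "a")
gate.acquire(0.5, "b")
gate.release("a")
print(gate.ram)  # {'b': 0.5}
```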
@@ -318,7 +338,7 @@ class ThumbSrv(object):
            else:
                # ffmpeg may spawn empty files on windows
                try:
-                    os.unlink(ttpath)
+                    wunlink(self.log, ttpath, vn.flags)
                except:
                    pass

@@ -330,11 +350,15 @@ class ThumbSrv(object):
            with self.mutex:
                subs = self.busy[tpath]
                del self.busy[tpath]
+                self.ram.pop(ttpath, None)

            for x in subs:
                with x:
                    x.notify_all()

+            with self.memcond:
+                self.memcond.notify_all()
+
        with self.mutex:
            self.nthr -= 1
@@ -366,6 +390,7 @@ class ThumbSrv(object):
        return im

    def conv_pil(self, abspath: str, tpath: str, fmt: str, vn: VFS) -> None:
+        self.wait4ram(0.2, tpath)
        with Image.open(fsenc(abspath)) as im:
            try:
                im = self.fancy_pillow(im, fmt, vn)

@@ -382,7 +407,7 @@ class ThumbSrv(object):
                # method 0 = pillow-default, fast
                # method 4 = ffmpeg-default
                # method 6 = max, slow
-                fmts += ["RGBA", "LA"]
+                fmts.extend(("RGBA", "LA"))
                args["method"] = 6
            else:
                # default q = 75

@@ -395,6 +420,7 @@ class ThumbSrv(object):
            im.save(tpath, **args)

    def conv_vips(self, abspath: str, tpath: str, fmt: str, vn: VFS) -> None:
+        self.wait4ram(0.2, tpath)
        crops = ["centre", "none"]
        if fmt.endswith("f"):
            crops = ["none"]

@@ -411,9 +437,11 @@ class ThumbSrv(object):
                if c == crops[-1]:
                    raise

+        assert img  # type: ignore
        img.write_to_file(tpath, Q=40)

    def conv_ffmpeg(self, abspath: str, tpath: str, fmt: str, vn: VFS) -> None:
+        self.wait4ram(0.2, tpath)
        ret, _ = ffprobe(abspath, int(vn.flags["convt"] / 2))
        if not ret:
            return

@@ -466,9 +494,9 @@ class ThumbSrv(object):
        cmd += [fsenc(tpath)]
        self._run_ff(cmd, vn)

-    def _run_ff(self, cmd: list[bytes], vn: VFS) -> None:
+    def _run_ff(self, cmd: list[bytes], vn: VFS, oom: int = 400) -> None:
        # self.log((b" ".join(cmd)).decode("utf-8"))
-        ret, _, serr = runcmd(cmd, timeout=vn.flags["convt"])
+        ret, _, serr = runcmd(cmd, timeout=vn.flags["convt"], nice=True, oom=oom)
        if not ret:
            return
@@ -516,8 +544,21 @@ class ThumbSrv(object):
        if "ac" not in ret:
            raise Exception("not audio")

-        flt = (
-            b"[0:a:0]"
+        # jt_versi.xm: 405M/839s
+        dur = ret[".dur"][1] if ".dur" in ret else 300
+        need = 0.2 + dur / 3000
+        speedup = b""
+        if need > self.args.th_ram_max * 0.7:
+            self.log("waves too big (need %.2f GiB); trying to optimize" % (need,))
+            need = 0.2 + dur / 4200  # only helps about this much...
+            speedup = b"aresample=8000,"
+            if need > self.args.th_ram_max * 0.96:
+                raise Exception("file too big; cannot waves")
+
+        self.wait4ram(need, tpath)
+
+        flt = b"[0:a:0]" + speedup
+        flt += (
            b"compand=.3|.3:1|1:-90/-60|-60/-40|-40/-30|-20/-20:6:0:-90:0.2"
            b",volume=2"
            b",showwavespic=s=2048x64:colors=white"
@@ -544,6 +585,15 @@ class ThumbSrv(object):
        if "ac" not in ret:
            raise Exception("not audio")

+        # https://trac.ffmpeg.org/ticket/10797
+        # expect 1 GiB every 600 seconds when duration is tricky;
+        # simple filetypes are generally safer so let's special-case those
+        safe = ("flac", "wav", "aif", "aiff", "opus")
+        coeff = 1800 if abspath.split(".")[-1].lower() in safe else 600
+        dur = ret[".dur"][1] if ".dur" in ret else 300
+        need = 0.2 + dur / coeff
+        self.wait4ram(need, tpath)
+
        fc = "[0:a:0]aresample=48000{},showspectrumpic=s=640x512,crop=780:544:70:50[o]"
        if self.args.th_ff_swr:

@@ -586,6 +636,7 @@ class ThumbSrv(object):
        if self.args.no_acode:
            raise Exception("disabled in server config")

+        self.wait4ram(0.2, tpath)
        ret, _ = ffprobe(abspath, int(vn.flags["convt"] / 2))
        if "ac" not in ret:
            raise Exception("not audio")
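The spectrogram path estimates its ffmpeg footprint linearly from track duration, with a gentler coefficient for formats whose duration ffprobe reports reliably. The arithmetic, isolated (coefficients taken from the diff above; function name is illustrative):

```python
def spectrogram_ram_gib(path: str, dur_sec: float) -> float:
    # ~1 GiB per 600s for tricky containers, per 1800s for simple ones,
    # plus a 0.2 GiB base cost
    safe = ("flac", "wav", "aif", "aiff", "opus")
    coeff = 1800 if path.split(".")[-1].lower() in safe else 600
    return 0.2 + dur_sec / coeff

print(spectrogram_ram_gib("song.mp3", 300))   # 0.7
print(spectrogram_ram_gib("song.flac", 300))  # ~0.367
```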
@@ -601,7 +652,7 @@ class ThumbSrv(object):
        if want_caf:
            tmp_opus = tpath + ".opus"
            try:
-                bos.unlink(tmp_opus)
+                wunlink(self.log, tmp_opus, vn.flags)
            except:
                pass

@@ -622,7 +673,7 @@ class ThumbSrv(object):
            fsenc(tmp_opus)
        ]
        # fmt: on
-        self._run_ff(cmd, vn)
+        self._run_ff(cmd, vn, oom=300)

        # iOS fails to play some "insufficiently complex" files
        # (average file shorter than 8 seconds), so of course we

@@ -646,7 +697,7 @@ class ThumbSrv(object):
                fsenc(tpath)
            ]
            # fmt: on
-            self._run_ff(cmd, vn)
+            self._run_ff(cmd, vn, oom=300)

        elif want_caf:
            # simple remux should be safe

@@ -664,11 +715,11 @@ class ThumbSrv(object):
                fsenc(tpath)
            ]
            # fmt: on
-            self._run_ff(cmd, vn)
+            self._run_ff(cmd, vn, oom=300)

        if tmp_opus != tpath:
            try:
-                bos.unlink(tmp_opus)
+                wunlink(self.log, tmp_opus, vn.flags)
            except:
                pass

@@ -695,7 +746,10 @@ class ThumbSrv(object):
        else:
            self.log("\033[Jcln {} ({})/\033[A".format(histpath, vol))

-        ndirs += self.clean(histpath)
+        try:
+            ndirs += self.clean(histpath)
+        except Exception as ex:
+            self.log("\033[Jcln err in %s: %r" % (histpath, ex), 3)

        self.log("\033[Jcln ok; rm {} dirs".format(ndirs))

View File

@@ -9,7 +9,7 @@ import time
from operator import itemgetter

from .__init__ import ANYWIN, TYPE_CHECKING, unicode
-from .authsrv import LEELOO_DALLAS
+from .authsrv import LEELOO_DALLAS, VFS
from .bos import bos
from .up2k import up2k_wark_from_hashlist
from .util import (

@@ -63,7 +63,7 @@ class U2idx(object):
        self.log_func("u2idx", msg, c)

    def fsearch(
-        self, vols: list[tuple[str, str, dict[str, Any]]], body: dict[str, Any]
+        self, uname: str, vols: list[VFS], body: dict[str, Any]
    ) -> list[dict[str, Any]]:
        """search by up2k hashlist"""
        if not HAVE_SQLITE3:

@@ -77,7 +77,7 @@ class U2idx(object):
        uv: list[Union[str, int]] = [wark[:16], wark]

        try:
-            return self.run_query(vols, uq, uv, True, False, 99999)[0]
+            return self.run_query(uname, vols, uq, uv, False, 99999)[0]
        except:
            raise Pebkac(500, min_ex())

@@ -103,7 +103,7 @@ class U2idx(object):
        uri = ""
        try:
            uri = "{}?mode=ro&nolock=1".format(Path(db_path).as_uri())
-            db = sqlite3.connect(uri, 2, uri=True, check_same_thread=False)
+            db = sqlite3.connect(uri, timeout=2, uri=True, check_same_thread=False)
            cur = db.cursor()
            cur.execute('pragma table_info("up")').fetchone()
            self.log("ro: {}".format(db_path))

@@ -115,14 +115,14 @@ class U2idx(object):
        if not cur:
            # on windows, this steals the write-lock from up2k.deferred_init --
            # seen on win 10.0.17763.2686, py 3.10.4, sqlite 3.37.2
-            cur = sqlite3.connect(db_path, 2, check_same_thread=False).cursor()
+            cur = sqlite3.connect(db_path, timeout=2, check_same_thread=False).cursor()
            self.log("opened {}".format(db_path))

        self.cur[ptop] = cur
        return cur

    def search(
-        self, vols: list[tuple[str, str, dict[str, Any]]], uq: str, lim: int
+        self, uname: str, vols: list[VFS], uq: str, lim: int
    ) -> tuple[list[dict[str, Any]], list[str], bool]:
        """search by query params"""
        if not HAVE_SQLITE3:
@@ -131,7 +131,6 @@ class U2idx(object):
        q = ""
        v: Union[str, int] = ""
        va: list[Union[str, int]] = []
-        have_up = False  # query has up.* operands
        have_mt = False
        is_key = True
        is_size = False

@@ -176,26 +175,21 @@ class U2idx(object):
                if v == "size":
                    v = "up.sz"
                    is_size = True
-                    have_up = True

                elif v == "date":
                    v = "up.mt"
                    is_date = True
-                    have_up = True

                elif v == "up_at":
                    v = "up.at"
                    is_date = True
-                    have_up = True

                elif v == "path":
                    v = "trim(?||up.rd,'/')"
                    va.append("\nrd")
-                    have_up = True

                elif v == "name":
                    v = "up.fn"
-                    have_up = True

                elif v == "tags" or ptn_mt.match(v):
                    have_mt = True

@@ -271,22 +265,22 @@ class U2idx(object):
            q += " lower({}) {} ? ) ".format(field, oper)

        try:
-            return self.run_query(vols, q, va, have_up, have_mt, lim)
+            return self.run_query(uname, vols, q, va, have_mt, lim)
        except Exception as ex:
            raise Pebkac(500, repr(ex))

    def run_query(
        self,
-        vols: list[tuple[str, str, dict[str, Any]]],
+        uname: str,
+        vols: list[VFS],
        uq: str,
        uv: list[Union[str, int]],
-        have_up: bool,
        have_mt: bool,
        lim: int,
    ) -> tuple[list[dict[str, Any]], list[str], bool]:
        if self.args.srch_dbg:
            t = "searching across all %s volumes in which the user has 'r' (full read access):\n  %s"
-            zs = "\n  ".join(["/%s = %s" % (x[0], x[1]) for x in vols])
+            zs = "\n  ".join(["/%s = %s" % (x.vpath, x.realpath) for x in vols])
            self.log(t % (len(vols), zs), 5)
        done_flag: list[bool] = []

@@ -307,12 +301,22 @@ class U2idx(object):
        ret = []
        seen_rps: set[str] = set()
-        lim = min(lim, int(self.args.srch_hits))
+        clamp = int(self.args.srch_hits)
+        if lim >= clamp:
+            lim = clamp
+            clamped = True
+        else:
+            clamped = False

        taglist = {}
-        for (vtop, ptop, flags) in vols:
+        for vol in vols:
            if lim < 0:
                break

+            vtop = vol.vpath
+            ptop = vol.realpath
+            flags = vol.flags
+
            cur = self.get_cur(ptop)
            if not cur:
                continue

@@ -337,7 +341,7 @@ class U2idx(object):
            sret = []
            fk = flags.get("fk")
-            dots = flags.get("dotsrch")
+            dots = flags.get("dotsrch") and uname in vol.axs.udot
            fk_alg = 2 if "fka" in flags else 1

            c = cur.execute(uq, tuple(vuv))
            for hit in c:

@@ -420,7 +424,7 @@ class U2idx(object):
        ret.sort(key=itemgetter("rp"))

-        return ret, list(taglist.keys()), lim < 0
+        return ret, list(taglist.keys()), lim < 0 and not clamped

    def terminator(self, identifier: str, done_flag: list[bool]) -> None:
        for _ in range(self.timeout):
View File

@@ -21,7 +21,7 @@ from copy import deepcopy
 from queue import Queue

-from .__init__ import ANYWIN, PY2, TYPE_CHECKING, WINDOWS
+from .__init__ import ANYWIN, PY2, TYPE_CHECKING, WINDOWS, E
 from .authsrv import LEELOO_DALLAS, SSEELOG, VFS, AuthSrv
 from .bos import bos
 from .cfg import vf_bmap, vf_cmap, vf_vmap
@@ -35,8 +35,10 @@ from .util import (
     Pebkac,
     ProgressPrinter,
     absreal,
+    alltrace,
     atomic_move,
     db_ex_chk,
+    dir_is_empty,
     djoin,
     fsenc,
     gen_filekey,
@@ -63,6 +65,7 @@ from .util import (
     vsplit,
     w8b64dec,
     w8b64enc,
+    wunlink,
 )

 try:
@@ -85,6 +88,9 @@ zsg = "avif,avifs,bmp,gif,heic,heics,heif,heifs,ico,j2p,j2k,jp2,jpeg,jpg,jpx,png
 CV_EXTS = set(zsg.split(","))

+HINT_HISTPATH = "you could try moving the database to another location (preferably an SSD or NVME drive) using either the --hist argument (global option for all volumes), or the hist volflag (just for this volume)"
+
 class Dbw(object):
     def __init__(self, c: "sqlite3.Cursor", n: int, t: float) -> None:
         self.c = c
@@ -145,9 +151,12 @@ class Up2k(object):
         self.entags: dict[str, set[str]] = {}
         self.mtp_parsers: dict[str, dict[str, MParser]] = {}
         self.pending_tags: list[tuple[set[str], str, str, dict[str, Any]]] = []
-        self.hashq: Queue[tuple[str, str, str, str, str, float, str, bool]] = Queue()
+        self.hashq: Queue[
+            tuple[str, str, dict[str, Any], str, str, str, float, str, bool]
+        ] = Queue()
         self.tagq: Queue[tuple[str, str, str, str, str, float]] = Queue()
         self.tag_event = threading.Condition()
+        self.hashq_mutex = threading.Lock()
         self.n_hashq = 0
         self.n_tagq = 0
         self.mpool_used = False
@@ -419,50 +428,49 @@ class Up2k(object):
     def _check_lifetimes(self) -> float:
         now = time.time()
         timeout = now + 9001
-        if now:  # diff-golf
         for vp, vol in sorted(self.asrv.vfs.all_vols.items()):
             lifetime = vol.flags.get("lifetime")
             if not lifetime:
                 continue

             cur = self.cur.get(vol.realpath)
             if not cur:
                 continue

             nrm = 0
             deadline = time.time() - lifetime
             timeout = min(timeout, now + lifetime)
             q = "select rd, fn from up where at > 0 and at < ? limit 100"
             while True:
                 with self.mutex:
                     hits = cur.execute(q, (deadline,)).fetchall()

                 if not hits:
                     break

                 for rd, fn in hits:
                     if rd.startswith("//") or fn.startswith("//"):
                         rd, fn = s3dec(rd, fn)

                     fvp = ("%s/%s" % (rd, fn)).strip("/")
                     if vp:
                         fvp = "%s/%s" % (vp, fvp)

                     self._handle_rm(LEELOO_DALLAS, "", fvp, [], True)
                     nrm += 1

             if nrm:
                 self.log("{} files graduated in {}".format(nrm, vp))

             if timeout < 10:
                 continue

             q = "select at from up where at > 0 order by at limit 1"
             with self.mutex:
                 hits = cur.execute(q).fetchone()

             if hits:
                 timeout = min(timeout, now + lifetime - (now - hits[0]))

         return timeout
@@ -579,7 +587,7 @@ class Up2k(object):
         if gid:
             self.log("reload #{} running".format(self.gid))

-        self.pp = ProgressPrinter()
+        self.pp = ProgressPrinter(self.log, self.args)
         vols = list(all_vols.values())
         t0 = time.time()
         have_e2d = False
@@ -602,7 +610,7 @@ class Up2k(object):
         for vol in vols:
             try:
                 bos.makedirs(vol.realpath)  # gonna happen at snap anyways
-                bos.listdir(vol.realpath)
+                dir_is_empty(self.log_func, not self.args.no_scandir, vol.realpath)
             except:
                 self.volstate[vol.vpath] = "OFFLINE (cannot access folder)"
                 self.log("cannot access " + vol.realpath, c=1)
@@ -805,7 +813,7 @@ class Up2k(object):
         ft = "\033[0;32m{}{:.0}"
         ff = "\033[0;35m{}{:.0}"
         fv = "\033[0;36m{}:\033[90m{}"
-        fx = set(("html_head",))
+        fx = set(("html_head", "rm_re_t", "rm_re_r"))
         fd = vf_bmap()
         fd.update(vf_cmap())
         fd.update(vf_vmap())
@@ -882,11 +890,13 @@ class Up2k(object):
         try:
             if bos.makedirs(histpath):
                 hidedir(histpath)
-        except:
+        except Exception as ex:
+            t = "failed to initialize volume '/%s': %s"
+            self.log(t % (vpath, ex), 1)
             return None

         try:
-            cur = self._open_db(db_path)
+            cur = self._open_db_wd(db_path)

             # speeds measured uploading 520 small files on a WD20SPZX (SMR 2.5" 5400rpm 4kb)
             dbd = flags["dbd"]
@@ -929,8 +939,8 @@ class Up2k(object):
             return cur, db_path
         except:
-            msg = "cannot use database at [{}]:\n{}"
-            self.log(msg.format(ptop, traceback.format_exc()))
+            msg = "ERROR: cannot use database at [%s]:\n%s\n\033[33mhint: %s\n"
+            self.log(msg % (db_path, traceback.format_exc(), HINT_HISTPATH), 1)

         return None
@@ -982,12 +992,12 @@ class Up2k(object):
             excl = [x.replace("/", "\\") for x in excl]
         else:
             # ~/.wine/dosdevices/z:/ and such
-            excl += ["/dev", "/proc", "/run", "/sys"]
+            excl.extend(("/dev", "/proc", "/run", "/sys"))

         rtop = absreal(top)
         n_add = n_rm = 0
         try:
-            if not bos.listdir(rtop):
+            if dir_is_empty(self.log_func, not self.args.no_scandir, rtop):
                 t = "volume /%s at [%s] is empty; will not be indexed as this could be due to an offline filesystem"
                 self.log(t % (vol.vpath, rtop), 6)
                 return True, False
@@ -1084,7 +1094,7 @@ class Up2k(object):
         cv = ""

         assert self.pp and self.mem_cur
-        self.pp.msg = "a{} {}".format(self.pp.n, cdir)
+        self.pp.msg = "a%d %s" % (self.pp.n, cdir)

         rd = cdir[len(top) :].strip("/")
         if WINDOWS:
@@ -1117,7 +1127,11 @@ class Up2k(object):
             if stat.S_ISDIR(inf.st_mode):
                 rap = absreal(abspath)
-                if dev and inf.st_dev != dev:
+                if (
+                    dev
+                    and inf.st_dev != dev
+                    and not (ANYWIN and bos.stat(rap).st_dev == dev)
+                ):
                     self.log("skip xdev {}->{}: {}".format(dev, inf.st_dev, abspath), 6)
                     continue
                 if abspath in excl or rap in excl:
@@ -1155,8 +1169,8 @@ class Up2k(object):
                     continue
                 if not sz and (
-                    "{}.PARTIAL".format(iname) in partials
-                    or ".{}.PARTIAL".format(iname) in partials
+                    "%s.PARTIAL" % (iname,) in partials
+                    or ".%s.PARTIAL" % (iname,) in partials
                 ):
                     # placeholder for unfinished upload
                     continue
@@ -1164,8 +1178,12 @@ class Up2k(object):
                 files.append((sz, lmod, iname))
                 liname = iname.lower()
                 if sz and (
-                    iname in self.args.th_covers
-                    or (not cv and liname.rsplit(".", 1)[-1] in CV_EXTS)
+                    iname in self.args.th_coversd
+                    or (
+                        not cv
+                        and liname.rsplit(".", 1)[-1] in CV_EXTS
+                        and not iname.startswith(".")
+                    )
                 ):
                     cv = iname
@@ -1215,76 +1233,80 @@ class Up2k(object):
                 abspath = os.path.join(cdir, fn)
                 nohash = reh.search(abspath) if reh else False
-                if fn:  # diff-golf
-                sql = "select w, mt, sz, at from up where rd = ? and fn = ?"
+                sql = "select w, mt, sz, ip, at from up where rd = ? and fn = ?"
                 try:
                     c = db.c.execute(sql, (rd, fn))
                 except:
                     c = db.c.execute(sql, s3enc(self.mem_cur, rd, fn))

                 in_db = list(c.fetchall())
                 if in_db:
                     self.pp.n -= 1
-                    dw, dts, dsz, at = in_db[0]
+                    dw, dts, dsz, ip, at = in_db[0]
                     if len(in_db) > 1:
                         t = "WARN: multiple entries: [{}] => [{}] |{}|\n{}"
                         rep_db = "\n".join([repr(x) for x in in_db])
                         self.log(t.format(top, rp, len(in_db), rep_db))
                         dts = -1

                     if fat32 and abs(dts - lmod) == 1:
                         dts = lmod

                     if dts == lmod and dsz == sz and (nohash or dw[0] != "#" or not sz):
                         continue

                     t = "reindex [{}] => [{}] ({}/{}) ({}/{})".format(
                         top, rp, dts, lmod, dsz, sz
                     )
                     self.log(t)
                     self.db_rm(db.c, rd, fn, 0)
                     ret += 1
                     db.n += 1
                     in_db = []
                 else:
+                    dw = ""
+                    ip = ""
                     at = 0

-                self.pp.msg = "a{} {}".format(self.pp.n, abspath)
+                self.pp.msg = "a%d %s" % (self.pp.n, abspath)
                 if nohash or not sz:
                     wark = up2k_wark_from_metadata(self.salt, sz, lmod, rd, fn)
                 else:
                     if sz > 1024 * 1024:
                         self.log("file: {}".format(abspath))

                     try:
                         hashes = self._hashlist_from_file(
                             abspath, "a{}, ".format(self.pp.n)
                         )
                     except Exception as ex:
                         self.log("hash: {} @ [{}]".format(repr(ex), abspath))
                         continue

                     if not hashes:
                         return -1

                     wark = up2k_wark_from_hashlist(self.salt, sz, hashes)

+                    if dw and dw != wark:
+                        ip = ""
+                        at = 0
+
                 # skip upload hooks by not providing vflags
-                self.db_add(db.c, {}, rd, fn, lmod, sz, "", "", wark, "", "", "", at)
+                self.db_add(db.c, {}, rd, fn, lmod, sz, "", "", wark, "", "", ip, at)
                 db.n += 1
                 ret += 1
                 td = time.time() - db.t
                 if db.n >= 4096 or td >= 60:
                     self.log("commit {} new files".format(db.n))
                     db.c.connection.commit()
                     db.n = 0
                     db.t = time.time()

         if not self.args.no_dhash:
-            db.c.execute("delete from dh where d = ?", (drd,))
-            db.c.execute("insert into dh values (?,?)", (drd, dhash))
+            db.c.execute("delete from dh where d = ?", (drd,))  # type: ignore
+            db.c.execute("insert into dh values (?,?)", (drd, dhash))  # type: ignore

         if self.stop:
             return -1
@@ -1303,7 +1325,7 @@ class Up2k(object):
             if n:
                 t = "forgetting {} shadowed autoindexed files in [{}] > [{}]"
                 self.log(t.format(n, top, sh_rd))

-                assert sh_erd
+                assert sh_erd  # type: ignore

                 q = "delete from dh where (d = ? or d like ?||'%')"
                 db.c.execute(q, (sh_erd, sh_erd + "/"))
@@ -1350,7 +1372,7 @@ class Up2k(object):
             rd = drd

             abspath = djoin(top, rd)
-            self.pp.msg = "b{} {}".format(ndirs - nchecked, abspath)
+            self.pp.msg = "b%d %s" % (ndirs - nchecked, abspath)
             try:
                 if os.path.isdir(abspath):
                     continue
@@ -1702,7 +1724,7 @@ class Up2k(object):
             cur.execute(q, (w[:16],))

             abspath = djoin(ptop, rd, fn)
-            self.pp.msg = "c{} {}".format(nq, abspath)
+            self.pp.msg = "c%d %s" % (nq, abspath)
             if not mpool:
                 n_tags = self._tagscan_file(cur, entags, w, abspath, ip, at)
             else:
@@ -1759,7 +1781,7 @@ class Up2k(object):
                 if c2.execute(q, (row[0][:16],)).fetchone():
                     continue

-                gf.write("{}\n".format("\x00".join(row)).encode("utf-8"))
+                gf.write(("%s\n" % ("\x00".join(row),)).encode("utf-8"))
                 n += 1

         c2.close()
@@ -2137,8 +2159,50 @@ class Up2k(object):
     def _trace(self, msg: str) -> None:
         self.log("ST: {}".format(msg))

+    def _open_db_wd(self, db_path: str) -> "sqlite3.Cursor":
+        ok: list[int] = []
+        Daemon(self._open_db_timeout, "opendb_watchdog", [db_path, ok])
+        try:
+            return self._open_db(db_path)
+        finally:
+            ok.append(1)
+
+    def _open_db_timeout(self, db_path, ok: list[int]) -> None:
+        # give it plenty of time due to the count statement (and wisdom from byte's box)
+        for _ in range(60):
+            time.sleep(1)
+            if ok:
+                return
+
+        t = "WARNING:\n\n  initializing an up2k database is taking longer than one minute; something has probably gone wrong:\n\n"
+        self._log_sqlite_incompat(db_path, t)
+
+    def _log_sqlite_incompat(self, db_path, t0) -> None:
+        txt = t0 or ""
+        digest = hashlib.sha512(db_path.encode("utf-8", "replace")).digest()
+        stackname = base64.urlsafe_b64encode(digest[:9]).decode("utf-8")
+        stackpath = os.path.join(E.cfg, "stack-%s.txt" % (stackname,))
+
+        t = "  the filesystem at %s may not support locking, or is otherwise incompatible with sqlite\n\n  %s\n\n"
+        t += "  PS: if you think this is a bug and wish to report it, please include your configuration + the following file: %s\n"
+        txt += t % (db_path, HINT_HISTPATH, stackpath)
+        self.log(txt, 3)
+
+        try:
+            stk = alltrace()
+            with open(stackpath, "wb") as f:
+                f.write(stk.encode("utf-8", "replace"))
+        except Exception as ex:
+            self.log("warning: failed to write %s: %s" % (stackpath, ex), 3)
+
+        if self.args.q:
+            t = "-" * 72
+            raise Exception("%s\n%s\n%s" % (t, txt, t))
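The `_open_db_wd` wrapper added here is a watchdog pattern: a daemon thread polls a shared completion flag once per second and only raises the alarm if the guarded call has not checked in before the deadline. A minimal self-contained sketch of the same idea, using a plain `threading.Thread` instead of copyparty's `Daemon` helper (all names here are hypothetical):

```python
import threading
import time

def open_with_watchdog(opener, timeout_s=60, on_stall=None):
    # `ok` doubles as the completion flag; appending to it tells the
    # watchdog that the opener finished (or failed) in time
    ok = []

    def watchdog():
        for _ in range(timeout_s):
            time.sleep(1)
            if ok:
                return  # opener checked in; stand down
        if on_stall:
            on_stall()  # e.g. warn + dump stacktraces of all threads

    threading.Thread(target=watchdog, daemon=True).start()
    try:
        return opener()
    finally:
        ok.append(1)  # check in even if opener raised
```

Using a list instead of an `Event` keeps the hot path allocation-free and mirrors the `ok: list[int]` sentinel in the diff; the daemon flag ensures a stalled watchdog never blocks interpreter shutdown.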
     def _orz(self, db_path: str) -> "sqlite3.Cursor":
-        c = sqlite3.connect(db_path, self.timeout, check_same_thread=False).cursor()
+        c = sqlite3.connect(
+            db_path, timeout=self.timeout, check_same_thread=False
+        ).cursor()
         # c.connection.set_trace_callback(self._trace)
         return c
@@ -2147,7 +2211,7 @@ class Up2k(object):
         cur = self._orz(db_path)
         ver = self._read_ver(cur)
         if not existed and ver is None:
-            return self._create_db(db_path, cur)
+            return self._try_create_db(db_path, cur)

         if ver == 4:
             try:
@@ -2185,8 +2249,16 @@ class Up2k(object):
         db = cur.connection
         cur.close()
         db.close()
-        bos.unlink(db_path)
-        return self._create_db(db_path, None)
+        self._delete_db(db_path)
+        return self._try_create_db(db_path, None)
+
+    def _delete_db(self, db_path: str):
+        for suf in ("", "-shm", "-wal", "-journal"):
+            try:
+                bos.unlink(db_path + suf)
+            except:
+                if not suf:
+                    raise

     def _backup_db(
         self, db_path: str, cur: "sqlite3.Cursor", ver: Optional[int], msg: str
@@ -2202,7 +2274,7 @@ class Up2k(object):
             t = "native sqlite3 backup failed; using fallback method:\n"
             self.log(t + min_ex())
         finally:
-            c2.close()
+            c2.close()  # type: ignore

         db = cur.connection
         cur.close()
@@ -2223,6 +2295,18 @@ class Up2k(object):
                 return int(rows[0][0])
         return None

+    def _try_create_db(
+        self, db_path: str, cur: Optional["sqlite3.Cursor"]
+    ) -> "sqlite3.Cursor":
+        try:
+            return self._create_db(db_path, cur)
+        except:
+            try:
+                self._delete_db(db_path)
+            except:
+                pass
+            raise
+
     def _create_db(
         self, db_path: str, cur: Optional["sqlite3.Cursor"]
     ) -> "sqlite3.Cursor":
@@ -2342,7 +2426,8 @@ class Up2k(object):
                 t = "cannot receive uploads right now;\nserver busy with {}.\nPlease wait; the client will retry..."
                 raise Pebkac(503, t.format(self.blocked or "[unknown]"))
             except TypeError:
-                # py2
+                if not PY2:
+                    raise

         with self.mutex:
             self._job_volchk(cj)
@@ -2565,12 +2650,13 @@ class Up2k(object):
                     raise Pebkac(403, t)

                 if not self.args.nw:
+                    dvf: dict[str, Any] = vfs.flags
                     try:
                         dvf = self.flags[job["ptop"]]
                         self._symlink(src, dst, dvf, lmod=cj["lmod"], rm=True)
                     except:
                         if bos.path.exists(dst):
-                            bos.unlink(dst)
+                            wunlink(self.log, dst, dvf)
                         if not n4g:
                             raise
@@ -2659,12 +2745,7 @@ class Up2k(object):
             not ret["hash"]
             and "fk" in vfs.flags
             and not self.args.nw
-            and (
-                cj["user"] in vfs.axs.uread
-                or cj["user"] in vfs.axs.upget
-                or "*" in vfs.axs.uread
-                or "*" in vfs.axs.upget
-            )
+            and (cj["user"] in vfs.axs.uread or cj["user"] in vfs.axs.upget)
         ):
             alg = 2 if "fka" in vfs.flags else 1
             ap = absreal(djoin(job["ptop"], job["prel"], job["name"]))
@@ -2672,6 +2753,28 @@ class Up2k(object):
             fk = self.gen_fk(alg, self.args.fk_salt, ap, job["size"], ino)
             ret["fk"] = fk[: vfs.flags["fk"]]

+        if (
+            not ret["hash"]
+            and cur
+            and cj.get("umod")
+            and int(cj["lmod"]) != int(job["lmod"])
+            and not self.args.nw
+            and cj["user"] in vfs.axs.uwrite
+            and cj["user"] in vfs.axs.udel
+        ):
+            sql = "update up set mt=? where substr(w,1,16)=? and +rd=? and +fn=?"
+            try:
+                cur.execute(sql, (cj["lmod"], wark[:16], job["prel"], job["name"]))
+                cur.connection.commit()
+
+                ap = djoin(job["ptop"], job["prel"], job["name"])
+                times = (int(time.time()), int(cj["lmod"]))
+                bos.utime(ap, times, False)
+
+                self.log("touched %s from %d to %d" % (ap, job["lmod"], cj["lmod"]))
+            except Exception as ex:
+                self.log("umod failed, %r" % (ex,), 3)
+
         return ret

     def _untaken(self, fdir: str, job: dict[str, Any], ts: float) -> str:
@@ -2684,14 +2787,14 @@ class Up2k(object):
         fp = djoin(fdir, fname)
         if job.get("replace") and bos.path.exists(fp):
             self.log("replacing existing file at {}".format(fp))
-            bos.unlink(fp)
+            wunlink(self.log, fp, self.flags.get(job["ptop"]) or {})

         if self.args.plain_ip:
             dip = ip.replace(":", ".")
         else:
             dip = self.hub.iphash.s(ip)

-        suffix = "-{:.6f}-{}".format(ts, dip)
+        suffix = "-%.6f-%s" % (ts, dip)
         with ren_open(fname, "wb", fdir=fdir, suffix=suffix) as zfw:
             return zfw["orz"][1]
@@ -2742,7 +2845,7 @@ class Up2k(object):
             ldst = ldst.replace("/", "\\")

         if rm and bos.path.exists(dst):
-            bos.unlink(dst)
+            wunlink(self.log, dst, flags)

         try:
             if "hardlink" in flags:
@@ -2758,7 +2861,7 @@ class Up2k(object):
                 Path(ldst).symlink_to(lsrc)
                 if not bos.path.exists(dst):
                     try:
-                        bos.unlink(dst)
+                        wunlink(self.log, dst, flags)
                     except:
                         pass
                     t = "the created symlink [%s] did not resolve to [%s]"
@@ -2906,7 +3009,7 @@ class Up2k(object):
                 except:
                     pass

-                z2 += [upt]
+                z2.append(upt)
                 if self.idx_wark(vflags, *z2):
                     del self.registry[ptop][wark]
                 else:
@@ -2929,7 +3032,6 @@ class Up2k(object):
                     self._symlink(dst, d2, self.flags[ptop], lmod=lmod)
                     if cur:
-                        self.db_rm(cur, rd, fn, job["size"])
                         self.db_add(cur, vflags, rd, fn, lmod, *z2[3:])

             if cur:
@@ -2972,7 +3074,6 @@ class Up2k(object):
         self.db_act = self.vol_act[ptop] = time.time()
         try:
-            self.db_rm(cur, rd, fn, sz)
             self.db_add(
                 cur,
                 vflags,
@@ -3031,6 +3132,8 @@ class Up2k(object):
         at: float,
         skip_xau: bool = False,
     ) -> None:
+        self.db_rm(db, rd, fn, sz)
+
         sql = "insert into up values (?,?,?,?,?,?,?)"
         v = (wark, int(ts), sz, rd, fn, ip or "", int(at or 0))
         try:
@@ -3061,7 +3164,7 @@ class Up2k(object):
         ):
             t = "upload blocked by xau server config"
             self.log(t, 1)
-            bos.unlink(dst)
+            wunlink(self.log, dst, vflags)
             self.registry[ptop].pop(wark, None)
             raise Pebkac(403, t)
@@ -3087,7 +3190,7 @@ class Up2k(object):
         with self.rescan_cond:
             self.rescan_cond.notify_all()

-        if rd and sz and fn.lower() in self.args.th_covers:
+        if rd and sz and fn.lower() in self.args.th_coversd:
             # wasteful; db_add will re-index actual covers
             # but that won't catch existing files
             crd, cdn = rd.rsplit("/", 1) if "/" in rd else ("", rd)
@@ -3192,10 +3295,16 @@ class Up2k(object):
                     break

             abspath = djoin(adir, fn)
-            st = bos.stat(abspath)
-            volpath = "{}/{}".format(vrem, fn).strip("/")
-            vpath = "{}/{}".format(dbv.vpath, volpath).strip("/")
-            self.log("rm {}\n  {}".format(vpath, abspath))
+            st = stl = bos.lstat(abspath)
+            if stat.S_ISLNK(st.st_mode):
+                try:
+                    st = bos.stat(abspath)
+                except:
+                    pass
+
+            volpath = ("%s/%s" % (vrem, fn)).strip("/")
+            vpath = ("%s/%s" % (dbv.vpath, volpath)).strip("/")
+            self.log("rm %s\n  %s" % (vpath, abspath))

             _ = dbv.get(volpath, uname, *permsets[0])
             if xbd:
                 if not runhook(
@@ -3205,7 +3314,7 @@ class Up2k(object):
                     vpath,
                     "",
                     uname,
-                    st.st_mtime,
+                    stl.st_mtime,
                     st.st_size,
                     ip,
                     0,
@@ -3226,7 +3335,7 @@ class Up2k(object):
             if cur:
                 cur.connection.commit()

-            bos.unlink(abspath)
+            wunlink(self.log, abspath, dbv.flags)
             if xad:
                 runhook(
                     self.log,
@@ -3235,7 +3344,7 @@ class Up2k(object):
                     vpath,
                     "",
                     uname,
-                    st.st_mtime,
+                    stl.st_mtime,
                     st.st_size,
                     ip,
                     0,
@@ -3352,28 +3461,27 @@ class Up2k(object):
         if bos.path.exists(dabs):
             raise Pebkac(400, "mv2: target file exists")

-        stl = bos.lstat(sabs)
-        try:
-            st = bos.stat(sabs)
-        except:
-            st = stl
+        is_link = is_dirlink = False
+        st = stl = bos.lstat(sabs)
+        if stat.S_ISLNK(stl.st_mode):
+            is_link = True
+            try:
+                st = bos.stat(sabs)
+                is_dirlink = stat.S_ISDIR(st.st_mode)
+            except:
+                pass  # broken symlink; keep as-is

         xbr = svn.flags.get("xbr")
         xar = dvn.flags.get("xar")
         if xbr:
             if not runhook(
-                self.log, xbr, sabs, svp, "", uname, st.st_mtime, st.st_size, "", 0, ""
+                self.log, xbr, sabs, svp, "", uname, stl.st_mtime, st.st_size, "", 0, ""
             ):
                 t = "move blocked by xbr server config: {}".format(svp)
                 self.log(t, 1)
                 raise Pebkac(405, t)

         is_xvol = svn.realpath != dvn.realpath

-        if stat.S_ISLNK(stl.st_mode):
-            is_dirlink = stat.S_ISDIR(st.st_mode)
-            is_link = True
-        else:
-            is_link = is_dirlink = False

         bos.makedirs(os.path.dirname(dabs))
@@ -3382,7 +3490,7 @@ class Up2k(object):
             t = "moving symlink from [{}] to [{}], target [{}]"
             self.log(t.format(sabs, dabs, dlabs))
             mt = bos.path.getmtime(sabs, False)
-            bos.unlink(sabs)
+            wunlink(self.log, sabs, svn.flags)
             self._symlink(dlabs, dabs, dvn.flags, False, lmod=mt)

             # folders are too scary, schedule rescan of both vols
@@ -3400,7 +3508,7 @@ class Up2k(object):
         c2 = self.cur.get(dvn.realpath)

         if ftime_ is None:
-            ftime = st.st_mtime
+            ftime = stl.st_mtime
             fsize = st.st_size
         else:
             ftime = ftime_
@@ -3442,7 +3550,16 @@ class Up2k(object):
             if is_xvol and has_dupes:
                 raise OSError(errno.EXDEV, "src is symlink")

-            atomic_move(sabs, dabs)
+            if is_link and st != stl:
+                # relink non-broken symlinks to still work after the move,
+                # but only resolve 1st level to maintain relativity
+                dlink = bos.readlink(sabs)
+                dlink = os.path.join(os.path.dirname(sabs), dlink)
+                dlink = bos.path.abspath(dlink)
+                self._symlink(dlink, dabs, dvn.flags, lmod=ftime)
+                wunlink(self.log, sabs, svn.flags)
+            else:
+                atomic_move(sabs, dabs)

         except OSError as ex:
             if ex.errno != errno.EXDEV:
@@ -3455,7 +3572,7 @@ class Up2k(object):
                     shutil.copy2(b1, b2)
                 except:
                     try:
-                        os.unlink(b2)
+                        wunlink(self.log, dabs, dvn.flags)
                     except:
                         pass
@@ -3467,7 +3584,7 @@ class Up2k(object):
                     zb = os.readlink(b1)
                     os.symlink(zb, b2)
                 except:
-                    os.unlink(b2)
+                    wunlink(self.log, dabs, dvn.flags)
                     raise

         if is_link:
@@ -3477,7 +3594,7 @@ class Up2k(object):
             except:
                 pass

-            os.unlink(b1)
+            wunlink(self.log, sabs, svn.flags)

         if xar:
             runhook(self.log, xar, dabs, dvp, "", uname, 0, 0, "", 0, "")
@@ -3609,16 +3726,19 @@ class Up2k(object):
         except:
             self.log("relink: not found: [{}]".format(ap))

+        # self.log("full:\n" + "\n".join("  {:90}: {}".format(*x) for x in full.items()))
+        # self.log("links:\n" + "\n".join("  {:90}: {}".format(*x) for x in links.items()))
+
         if not dabs and not full and links:
             # deleting final remaining full copy; swap it with a symlink
             slabs = list(sorted(links.keys()))[0]
             ptop, rem = links.pop(slabs)
             self.log("linkswap [{}] and [{}]".format(sabs, slabs))
             mt = bos.path.getmtime(slabs, False)
-            bos.unlink(slabs)
+            flags = self.flags.get(ptop) or {}
+            wunlink(self.log, slabs, flags)
             bos.rename(sabs, slabs)
             bos.utime(slabs, (int(time.time()), int(mt)), False)
-            self._symlink(slabs, sabs, self.flags.get(ptop) or {}, False)
+            self._symlink(slabs, sabs, flags, False)
             full[slabs] = (ptop, rem)
             sabs = slabs
@@ -3626,18 +3746,51 @@ class Up2k(object):
             dabs = list(sorted(full.keys()))[0]

         for alink, parts in links.items():
-            lmod = None
+            lmod = 0.0
             try:
-                if alink != sabs and absreal(alink) != sabs:
-                    continue
-
-                self.log("relinking [{}] to [{}]".format(alink, dabs))
+                faulty = False
+                ldst = alink
+                try:
+                    for n in range(40):  # MAXSYMLINKS
+                        zs = bos.readlink(ldst)
+                        ldst = os.path.join(os.path.dirname(ldst), zs)
+                        ldst = bos.path.abspath(ldst)
+                        if not bos.path.islink(ldst):
+                            break
+
+                        if ldst == sabs:
+                            t = "relink because level %d would break:"
+                            self.log(t % (n,), 6)
+                            faulty = True
+                except Exception as ex:
+                    self.log("relink because walk failed: %s; %r" % (ex, ex), 3)
+                    faulty = True
+
+                zs = absreal(alink)
+                if ldst != zs:
+                    t = "relink because computed != actual destination:\n  %s\n  %s"
+                    self.log(t % (ldst, zs), 3)
+                    ldst = zs
+                    faulty = True
+
+                if bos.path.islink(ldst):
+                    raise Exception("broken symlink: %s" % (alink,))
+
+                if alink != sabs and ldst != sabs and not faulty:
+                    continue  # original symlink OK; leave it be
+            except Exception as ex:
+                t = "relink because symlink verification failed: %s; %r"
+                self.log(t % (ex, ex), 3)
+
+            self.log("relinking [%s] to [%s]" % (alink, dabs))
+            flags = self.flags.get(parts[0]) or {}
+            try:
                 lmod = bos.path.getmtime(alink, False)
-                bos.unlink(alink)
+                wunlink(self.log, alink, flags)
             except:
                 pass

-            flags = self.flags.get(parts[0]) or {}
             self._symlink(dabs, alink, flags, False, lmod=lmod or 0)

         return len(full) + len(links)
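The relink verification above resolves a symlink chain one hop at a time (capped at 40 hops, mirroring MAXSYMLINKS) so that relative targets are interpreted against the directory containing each link, rather than trusting a single realpath call. A rough standalone sketch of that walk, assuming POSIX symlinks (the function name is hypothetical):

```python
import os

def walk_symlink(path, max_hops=40):
    # resolve one link level at a time so relative targets stay
    # relative to the directory that contains each link
    for _ in range(max_hops):
        if not os.path.islink(path):
            return path
        target = os.readlink(path)
        path = os.path.abspath(os.path.join(os.path.dirname(path), target))
    raise OSError("too many levels of symbolic links: %s" % (path,))
```

Comparing the hop-by-hop result against `absreal()` (as the diff does) catches links whose intermediate levels would dangle after the move, even when the final destination still resolves.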
@@ -3684,7 +3837,7 @@ class Up2k(object):
             return []

         if self.pp:
-            mb = int(fsz / 1024 / 1024)
+            mb = fsz // (1024 * 1024)
             self.pp.msg = prefix + str(mb) + suffix

         hashobj = hashlib.sha512()
@@ -3706,8 +3859,14 @@ class Up2k(object):
     def _new_upload(self, job: dict[str, Any]) -> None:
         pdir = djoin(job["ptop"], job["prel"])
-        if not job["size"] and bos.path.isfile(djoin(pdir, job["name"])):
-            return
+        if not job["size"]:
+            try:
+                inf = bos.stat(djoin(pdir, job["name"]))
+                if stat.S_ISREG(inf.st_mode):
+                    job["lmod"] = inf.st_size
+                    return
+            except:
+                pass

         self.registry[job["ptop"]][job["wark"]] = job
         job["name"] = self._untaken(pdir, job, job["t0"])
@@ -3749,7 +3908,7 @@ class Up2k(object):
         else:
             dip = self.hub.iphash.s(job["addr"])

-        suffix = "-{:.6f}-{}".format(job["t0"], dip)
+        suffix = "-%.6f-%s" % (job["t0"], dip)
         with ren_open(tnam, "wb", fdir=pdir, suffix=suffix) as zfw:
             f, job["tnam"] = zfw["orz"]
             abspath = djoin(pdir, job["tnam"])
@@ -3938,16 +4097,16 @@ class Up2k(object):
self.log("tagged {} ({}+{})".format(abspath, ntags1, len(tags) - ntags1)) self.log("tagged {} ({}+{})".format(abspath, ntags1, len(tags) - ntags1))
def _hasher(self) -> None: def _hasher(self) -> None:
with self.mutex: with self.hashq_mutex:
self.n_hashq += 1 self.n_hashq += 1
while True: while True:
with self.mutex: with self.hashq_mutex:
self.n_hashq -= 1 self.n_hashq -= 1
# self.log("hashq {}".format(self.n_hashq)) # self.log("hashq {}".format(self.n_hashq))
task = self.hashq.get() task = self.hashq.get()
if len(task) != 8: if len(task) != 9:
raise Exception("invalid hash task") raise Exception("invalid hash task")
try: try:
@@ -3956,11 +4115,14 @@ class Up2k(object):
            except Exception as ex:
                self.log("failed to hash %s: %s" % (task, ex), 1)

-    def _hash_t(self, task: tuple[str, str, str, str, str, float, str, bool]) -> bool:
-        ptop, vtop, rd, fn, ip, at, usr, skip_xau = task
+    def _hash_t(
+        self, task: tuple[str, str, dict[str, Any], str, str, str, float, str, bool]
+    ) -> bool:
+        ptop, vtop, flags, rd, fn, ip, at, usr, skip_xau = task
        # self.log("hashq {} pop {}/{}/{}".format(self.n_hashq, ptop, rd, fn))
-        if "e2d" not in self.flags[ptop]:
-            return True
+        with self.mutex:
+            if not self.register_vpath(ptop, flags):
+                return True

        abspath = djoin(ptop, rd, fn)
        self.log("hashing " + abspath)
@@ -4011,11 +4173,22 @@ class Up2k(object):
        usr: str,
        skip_xau: bool = False,
    ) -> None:
-        with self.mutex:
-            self.register_vpath(ptop, flags)
-        self.hashq.put((ptop, vtop, rd, fn, ip, at, usr, skip_xau))
-        self.n_hashq += 1
+        if "e2d" not in flags:
+            return
+
+        if self.n_hashq > 1024:
+            t = "%d files in hashq; taking a nap"
+            self.log(t % (self.n_hashq,), 6)
+
+            for _ in range(self.n_hashq // 1024):
+                time.sleep(0.1)
+                if self.n_hashq < 1024:
+                    break
+
+        zt = (ptop, vtop, flags, rd, fn, ip, at, usr, skip_xau)
+        with self.hashq_mutex:
+            self.hashq.put(zt)
+            self.n_hashq += 1
+        # self.log("hashq {} push {}/{}/{}".format(self.n_hashq, ptop, rd, fn))
    def shutdown(self) -> None:
        self.stop = True
@@ -4066,6 +4239,6 @@ def up2k_wark_from_hashlist(salt: str, filesize: int, hashes: list[str]) -> str:

def up2k_wark_from_metadata(salt: str, sz: int, lastmod: int, rd: str, fn: str) -> str:
-    ret = sfsenc("{}\n{}\n{}\n{}\n{}".format(salt, lastmod, sz, rd, fn))
+    ret = sfsenc("%s\n%d\n%d\n%s\n%s" % (salt, lastmod, sz, rd, fn))
    ret = base64.urlsafe_b64encode(hashlib.sha512(ret).digest())
-    return "#{}".format(ret.decode("ascii"))[:44]
+    return ("#%s" % (ret.decode("ascii"),))[:44]
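The `.format` to `%`-style change in `up2k_wark_from_metadata` is behavior-preserving; a standalone sketch of the wark derivation (simplified: plain UTF-8 encoding stands in for copyparty's `sfsenc`):

```python
import base64
import hashlib

def wark_from_metadata(salt, sz, lastmod, rd, fn):
    # same recipe as the diff: salt + mtime + size + relpath + filename,
    # sha512 -> urlsafe base64, "#" prefix, truncated to 44 chars
    ret = ("%s\n%d\n%d\n%s\n%s" % (salt, lastmod, sz, rd, fn)).encode("utf-8")
    ret = base64.urlsafe_b64encode(hashlib.sha512(ret).digest())
    return ("#%s" % (ret.decode("ascii"),))[:44]

w = wark_from_metadata("salt", 1024, 1700000000, "music", "a.flac")
```

The result is deterministic for the same metadata, so it can serve as a stable file identifier without hashing file contents.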


@@ -1,6 +1,7 @@
# coding: utf-8
from __future__ import print_function, unicode_literals

+import argparse
import base64
import contextlib
import errno
@@ -115,6 +116,11 @@ if True: # pylint: disable=using-constant-test
    import typing
    from typing import Any, Generator, Optional, Pattern, Protocol, Union
try:
from typing import LiteralString
except:
pass
    class RootLogger(Protocol):
        def __call__(self, src: str, msg: str, c: Union[int, str] = 0) -> None:
            return None
@@ -144,15 +150,15 @@ if not PY2:
    from urllib.parse import quote_from_bytes as quote
    from urllib.parse import unquote_to_bytes as unquote
else:
-    from StringIO import StringIO as BytesIO
-    from urllib import quote  # pylint: disable=no-name-in-module
-    from urllib import unquote  # pylint: disable=no-name-in-module
+    from StringIO import StringIO as BytesIO  # type: ignore
+    from urllib import quote  # type: ignore # pylint: disable=no-name-in-module
+    from urllib import unquote  # type: ignore # pylint: disable=no-name-in-module

try:
    struct.unpack(b">i", b"idgi")
-    spack = struct.pack
-    sunpack = struct.unpack
+    spack = struct.pack  # type: ignore
+    sunpack = struct.unpack  # type: ignore
except:

    def spack(fmt: bytes, *a: Any) -> bytes:
@@ -162,6 +168,12 @@ except:
        return struct.unpack(fmt.decode("ascii"), a)
try:
BITNESS = struct.calcsize(b"P") * 8
except:
BITNESS = struct.calcsize("P") * 8
ansi_re = re.compile("\033\\[[^mK]*[mK]")
@@ -338,6 +350,11 @@ CMD_EXEB = set(_exestr.encode("utf-8").split())
CMD_EXES = set(_exestr.split())
# mostly from https://github.com/github/gitignore/blob/main/Global/macOS.gitignore
APPLESAN_TXT = r"/(__MACOS|Icon\r\r)|/\.(_|DS_Store|AppleDouble|LSOverride|DocumentRevisions-|fseventsd|Spotlight-|TemporaryItems|Trashes|VolumeIcon\.icns|com\.apple\.timemachine\.donotpresent|AppleDB|AppleDesktop|apdisk)"
APPLESAN_RE = re.compile(APPLESAN_TXT)
pybin = sys.executable or ""
if EXE:
    pybin = ""
@@ -361,11 +378,6 @@ def py_desc() -> str:
    if ofs > 0:
        py_ver = py_ver[:ofs]

-    try:
-        bitness = struct.calcsize(b"P") * 8
-    except:
-        bitness = struct.calcsize("P") * 8

    host_os = platform.system()
    compiler = platform.python_compiler().split("http")[0]
@@ -373,11 +385,12 @@ def py_desc() -> str:
    os_ver = m.group(1) if m else ""

    return "{:>9} v{} on {}{} {} [{}]".format(
-        interp, py_ver, host_os, bitness, os_ver, compiler
+        interp, py_ver, host_os, BITNESS, os_ver, compiler
    )

def _sqlite_ver() -> str:
+    assert sqlite3  # type: ignore
    try:
        co = sqlite3.connect(":memory:")
        cur = co.cursor()
@@ -410,14 +423,32 @@ try:
except:
    PYFTPD_VER = "(None)"

try:
    from partftpy.__init__ import __version__ as PARTFTPY_VER
except:
    PARTFTPY_VER = "(None)"

-VERSIONS = "copyparty v{} ({})\n{}\n sqlite v{} | jinja v{} | pyftpd v{}".format(
-    S_VERSION, S_BUILD_DT, py_desc(), SQLITE_VER, JINJA_VER, PYFTPD_VER
-)
+PY_DESC = py_desc()
+
+VERSIONS = (
+    "copyparty v{} ({})\n{}\n sqlite {} | jinja {} | pyftpd {} | tftp {}".format(
+        S_VERSION, S_BUILD_DT, PY_DESC, SQLITE_VER, JINJA_VER, PYFTPD_VER, PARTFTPY_VER
+    )
+)

-_: Any = (mp, BytesIO, quote, unquote, SQLITE_VER, JINJA_VER, PYFTPD_VER)
-__all__ = ["mp", "BytesIO", "quote", "unquote", "SQLITE_VER", "JINJA_VER", "PYFTPD_VER"]
+_: Any = (mp, BytesIO, quote, unquote, SQLITE_VER, JINJA_VER, PYFTPD_VER, PARTFTPY_VER)
+__all__ = [
+    "mp",
+    "BytesIO",
+    "quote",
+    "unquote",
+    "SQLITE_VER",
+    "JINJA_VER",
+    "PYFTPD_VER",
+    "PARTFTPY_VER",
+]
class Daemon(threading.Thread):
@@ -521,6 +552,8 @@ class HLog(logging.Handler):
        elif record.name.startswith("impacket"):
            if self.ptn_smb_ign.match(msg):
                return
+        elif record.name.startswith("partftpy."):
+            record.name = record.name[9:]

        self.log_func(record.name[-21:], msg, c)
@@ -617,9 +650,14 @@ class _Unrecv(object):
            while nbytes > len(ret):
                ret += self.recv(nbytes - len(ret))
        except OSError:
-            t = "client only sent {} of {} expected bytes".format(len(ret), nbytes)
-            if len(ret) <= 16:
-                t += "; got {!r}".format(ret)
+            t = "client stopped sending data; expected at least %d more bytes"
+            if not ret:
+                t = t % (nbytes,)
+            else:
+                t += ", only got %d"
+                t = t % (nbytes, len(ret))
+
+            if len(ret) <= 16:
+                t += "; %r" % (ret,)

            if raise_on_trunc:
                raise UnrecvEOF(5, t)
@@ -769,16 +807,20 @@ class ProgressPrinter(threading.Thread):
    periodically print progress info without linefeeds
    """

-    def __init__(self) -> None:
+    def __init__(self, log: "NamedLogger", args: argparse.Namespace) -> None:
        threading.Thread.__init__(self, name="pp")
        self.daemon = True
+        self.log = log
+        self.args = args
        self.msg = ""
        self.end = False
        self.n = -1
        self.start()

    def run(self) -> None:
+        tp = 0
        msg = None
+        no_stdout = self.args.q
        fmt = " {}\033[K\r" if VT100 else " {} $\r"
        while not self.end:
            time.sleep(0.1)
@@ -786,10 +828,21 @@ class ProgressPrinter(threading.Thread):
                continue

            msg = self.msg
+            now = time.time()
+            if msg and now - tp > 10:
+                tp = now
+                self.log("progress: %s" % (msg,), 6)
+
+            if no_stdout:
+                continue
+
            uprint(fmt.format(msg))
            if PY2:
                sys.stdout.flush()

+        if no_stdout:
+            return
+
        if VT100:
            print("\033[K", end="")
        elif msg:
@@ -843,7 +896,7 @@ class MTHash(object):
            ex = ex or str(qe)

        if pp:
-            mb = int((fsz - nch * chunksz) / 1024 / 1024)
+            mb = (fsz - nch * chunksz) // (1024 * 1024)
            pp.msg = prefix + str(mb) + suffix

        if ex:
@@ -1065,7 +1118,18 @@ def uprint(msg: str) -> None:
def nuprint(msg: str) -> None:
-    uprint("{}\n".format(msg))
+    uprint("%s\n" % (msg,))
def dedent(txt: str) -> str:
pad = 64
lns = txt.replace("\r", "").split("\n")
for ln in lns:
zs = ln.lstrip()
pad2 = len(ln) - len(zs)
if zs and pad > pad2:
pad = pad2
return "\n".join([ln[pad:] for ln in lns])
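The new `dedent` helper above finds the smallest leading indent across non-blank lines and strips that amount everywhere; its behavior in isolation:

```python
def dedent(txt):
    # find the smallest indent across non-blank lines,
    # then strip that many columns from every line
    pad = 64
    lns = txt.replace("\r", "").split("\n")
    for ln in lns:
        zs = ln.lstrip()
        pad2 = len(ln) - len(zs)
        if zs and pad > pad2:
            pad = pad2

    return "\n".join([ln[pad:] for ln in lns])

out = dedent("    a\n      b\n    c")
```

Unlike `textwrap.dedent`, this counts columns rather than matching a common whitespace prefix, so mixed tabs and spaces are treated purely by width.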
def rice_tid() -> str:
@@ -1077,10 +1141,10 @@ def rice_tid() -> str:
def trace(*args: Any, **kwargs: Any) -> None:
    t = time.time()
    stack = "".join(
-        "\033[36m{}\033[33m{}".format(x[0].split(os.sep)[-1][:-3], x[1])
+        "\033[36m%s\033[33m%s" % (x[0].split(os.sep)[-1][:-3], x[1])
        for x in traceback.extract_stack()[3:-1]
    )
-    parts = ["{:.6f}".format(t), rice_tid(), stack]
+    parts = ["%.6f" % (t,), rice_tid(), stack]

    if args:
        parts.append(repr(args))
@@ -1097,17 +1161,17 @@ def alltrace() -> str:
    threads: dict[str, types.FrameType] = {}
    names = dict([(t.ident, t.name) for t in threading.enumerate()])
    for tid, stack in sys._current_frames().items():
-        name = "{} ({:x})".format(names.get(tid), tid)
+        name = "%s (%x)" % (names.get(tid), tid)
        threads[name] = stack

    rret: list[str] = []
    bret: list[str] = []
    for name, stack in sorted(threads.items()):
-        ret = ["\n\n# {}".format(name)]
+        ret = ["\n\n# %s" % (name,)]
        pad = None
        for fn, lno, name, line in traceback.extract_stack(stack):
            fn = os.sep.join(fn.split(os.sep)[-3:])
-            ret.append('File: "{}", line {}, in {}'.format(fn, lno, name))
+            ret.append('File: "%s", line %d, in %s' % (fn, lno, name))
            if line:
                ret.append("  " + str(line.strip()))
                if "self.not_empty.wait()" in line:
@@ -1116,7 +1180,7 @@ def alltrace() -> str:
        if pad:
            bret += [ret[0]] + [pad + x for x in ret[1:]]
        else:
-            rret += ret
+            rret.extend(ret)

    return "\n".join(rret + bret) + "\n"
@@ -1199,12 +1263,20 @@ def log_thrs(log: Callable[[str, str, int], None], ival: float, name: str) -> No
def vol_san(vols: list["VFS"], txt: bytes) -> bytes:
+    txt0 = txt
    for vol in vols:
-        txt = txt.replace(vol.realpath.encode("utf-8"), vol.vpath.encode("utf-8"))
-        txt = txt.replace(
-            vol.realpath.encode("utf-8").replace(b"\\", b"\\\\"),
-            vol.vpath.encode("utf-8"),
-        )
+        bap = vol.realpath.encode("utf-8")
+        bhp = vol.histpath.encode("utf-8")
+        bvp = vol.vpath.encode("utf-8")
+        bvph = b"$hist(/" + bvp + b")"
+
+        txt = txt.replace(bap, bvp)
+        txt = txt.replace(bhp, bvph)
+        txt = txt.replace(bap.replace(b"\\", b"\\\\"), bvp)
+        txt = txt.replace(bhp.replace(b"\\", b"\\\\"), bvph)
+
+    if txt != txt0:
+        txt += b"\r\nNOTE: filepaths sanitized; see serverlog for correct values"

    return txt
@@ -1212,9 +1284,9 @@ def vol_san(vols: list["VFS"], txt: bytes) -> bytes:
def min_ex(max_lines: int = 8, reverse: bool = False) -> str:
    et, ev, tb = sys.exc_info()
    stb = traceback.extract_tb(tb)
-    fmt = "{} @ {} <{}>: {}"
-    ex = [fmt.format(fp.split(os.sep)[-1], ln, fun, txt) for fp, ln, fun, txt in stb]
-    ex.append("[{}] {}".format(et.__name__ if et else "(anonymous)", ev))
+    fmt = "%s @ %d <%s>: %s"
+    ex = [fmt % (fp.split(os.sep)[-1], ln, fun, txt) for fp, ln, fun, txt in stb]
+    ex.append("[%s] %s" % (et.__name__ if et else "(anonymous)", ev))
    return "\n".join(ex[-max_lines:][:: -1 if reverse else 1])
@@ -1265,7 +1337,7 @@ def ren_open(
            with fun(fsenc(fpath), *args, **kwargs) as f:
                if b64:
                    assert fdir
-                    fp2 = "fn-trunc.{}.txt".format(b64)
+                    fp2 = "fn-trunc.%s.txt" % (b64,)
                    fp2 = os.path.join(fdir, fp2)
                    with open(fsenc(fp2), "wb") as f2:
                        f2.write(orig_name.encode("utf-8"))
@@ -1294,7 +1366,7 @@ def ren_open(
                raise

            if not b64:
-                zs = "{}\n{}".format(orig_name, suffix).encode("utf-8", "replace")
+                zs = ("%s\n%s" % (orig_name, suffix)).encode("utf-8", "replace")
                zs = hashlib.sha512(zs).digest()[:12]
                b64 = base64.urlsafe_b64encode(zs).decode("utf-8")
@@ -1314,7 +1386,7 @@ def ren_open(
                # okay do the first letter then
                ext = "." + ext[2:]

-            fname = "{}~{}{}".format(bname, b64, ext)
+            fname = "%s~%s%s" % (bname, b64, ext)

class MultipartParser(object):
@@ -1500,15 +1572,20 @@ class MultipartParser(object):
        return ret

    def parse(self) -> None:
+        boundary = get_boundary(self.headers)
+        self.log("boundary=%r" % (boundary,))
+
        # spec says there might be junk before the first boundary,
        # can't have the leading \r\n if that's not the case
-        self.boundary = b"--" + get_boundary(self.headers).encode("utf-8")
+        self.boundary = b"--" + boundary.encode("utf-8")

        # discard junk before the first boundary
        for junk in self._read_data():
-            self.log(
-                "discarding preamble: [{}]".format(junk.decode("utf-8", "replace"))
-            )
+            if not junk:
+                continue
+
+            jtxt = junk.decode("utf-8", "replace")
+            self.log("discarding preamble |%d| %r" % (len(junk), jtxt))

        # nice, now make it fast
        self.boundary = b"\r\n" + self.boundary
@@ -1520,11 +1597,9 @@ class MultipartParser(object):
        raises if the field name is not as expected
        """
        assert self.gen
-        p_field, _, p_data = next(self.gen)
+        p_field, p_fname, p_data = next(self.gen)
        if p_field != field_name:
-            raise Pebkac(
-                422, 'expected field "{}", got "{}"'.format(field_name, p_field)
-            )
+            raise WrongPostKey(field_name, p_field, p_fname, p_data)

        return self._read_value(p_data, max_len).decode("utf-8", "surrogateescape")
@@ -1593,7 +1668,7 @@ def rand_name(fdir: str, fn: str, rnd: int) -> str:
            break

        nc = rnd + extra
-        nb = int((6 + 6 * nc) / 8)
+        nb = (6 + 6 * nc) // 8
        zb = os.urandom(nb)
        zb = base64.urlsafe_b64encode(zb)
        fn = zb[:nc].decode("utf-8") + ext
@@ -1647,16 +1722,15 @@ def gen_filekey_dbg(
    return ret

-def gencookie(k: str, v: str, r: str, tls: bool, dur: Optional[int]) -> str:
+def gencookie(k: str, v: str, r: str, tls: bool, dur: int = 0, txt: str = "") -> str:
    v = v.replace("%", "%25").replace(";", "%3B")
    if dur:
        exp = formatdate(time.time() + dur, usegmt=True)
    else:
        exp = "Fri, 15 Aug 1997 01:00:00 GMT"

-    return "{}={}; Path=/{}; Expires={}{}; SameSite=Lax".format(
-        k, v, r, exp, "; Secure" if tls else ""
-    )
+    t = "%s=%s; Path=/%s; Expires=%s%s%s; SameSite=Lax"
+    return t % (k, v, r, exp, "; Secure" if tls else "", txt)
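The reworked `gencookie` gains a `txt` parameter for appending extra cookie attributes; a standalone sketch of the same template (the `"; HttpOnly"` value below is a hypothetical example, not taken from the diff):

```python
import time
from email.utils import formatdate

def gencookie(k, v, r, tls, dur=0, txt=""):
    # %-escape characters that would terminate the header value early
    v = v.replace("%", "%25").replace(";", "%3B")
    if dur:
        exp = formatdate(time.time() + dur, usegmt=True)
    else:
        # a date in the past tells the browser to delete the cookie
        exp = "Fri, 15 Aug 1997 01:00:00 GMT"

    t = "%s=%s; Path=/%s; Expires=%s%s%s; SameSite=Lax"
    return t % (k, v, r, exp, "; Secure" if tls else "", txt)

c = gencookie("cppwd", "hunter;2", "", True, 0, "; HttpOnly")
```

Note that `txt` is spliced in before `SameSite=Lax`, so it must carry its own leading `"; "`.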
def humansize(sz: float, terse: bool = False) -> str:
@@ -1697,7 +1771,7 @@ def get_spd(nbyte: int, t0: float, t: Optional[float] = None) -> str:
    bps = nbyte / ((t - t0) + 0.001)
    s1 = humansize(nbyte).replace(" ", "\033[33m").replace("iB", "")
    s2 = humansize(bps).replace(" ", "\033[35m").replace("iB", "")
-    return "{} \033[0m{}/s\033[0m".format(s1, s2)
+    return "%s \033[0m%s/s\033[0m" % (s1, s2)

def s2hms(s: float, optional_h: bool = False) -> str:
@@ -1705,9 +1779,9 @@ def s2hms(s: float, optional_h: bool = False) -> str:
    h, s = divmod(s, 3600)
    m, s = divmod(s, 60)
    if not h and optional_h:
-        return "{}:{:02}".format(m, s)
+        return "%d:%02d" % (m, s)

-    return "{}:{:02}:{:02}".format(h, m, s)
+    return "%d:%02d:%02d" % (h, m, s)

def djoin(*paths: str) -> str:
@@ -1818,7 +1892,9 @@ def exclude_dotfiles(filepaths: list[str]) -> list[str]:
    return [x for x in filepaths if not x.split("/")[-1].startswith(".")]

-def odfusion(base: ODict[str, bool], oth: str) -> ODict[str, bool]:
+def odfusion(
+    base: Union[ODict[str, bool], ODict["LiteralString", bool]], oth: str
+) -> ODict[str, bool]:
    # merge an "ordered set" (just a dict really) with another list of keys
    words0 = [x for x in oth.split(",") if x]
    words1 = [x for x in oth[1:].split(",") if x]
@@ -1988,10 +2064,10 @@ else:
    # moonrunes become \x3f with bytestrings,
    # losing mojibake support is worth
    def _not_actually_mbcs_enc(txt: str) -> bytes:
-        return txt
+        return txt  # type: ignore

    def _not_actually_mbcs_dec(txt: bytes) -> str:
-        return txt
+        return txt  # type: ignore

    fsenc = afsenc = sfsenc = _not_actually_mbcs_enc
    fsdec = _not_actually_mbcs_dec
@@ -2047,9 +2123,51 @@ def atomic_move(usrc: str, udst: str) -> None:
    os.rename(src, dst)
def wunlink(log: "NamedLogger", abspath: str, flags: dict[str, Any]) -> bool:
maxtime = flags.get("rm_re_t", 0.0)
bpath = fsenc(abspath)
if not maxtime:
os.unlink(bpath)
return True
chill = flags.get("rm_re_r", 0.0)
if chill < 0.001:
chill = 0.1
ino = 0
t0 = now = time.time()
for attempt in range(90210):
try:
if ino and os.stat(bpath).st_ino != ino:
log("inode changed; aborting delete")
return False
os.unlink(bpath)
if attempt:
now = time.time()
t = "deleted in %.2f sec, attempt %d"
log(t % (now - t0, attempt + 1))
return True
except OSError as ex:
now = time.time()
if ex.errno == errno.ENOENT:
return False
if now - t0 > maxtime or attempt == 90209:
raise
if not attempt:
if not PY2:
ino = os.stat(bpath).st_ino
t = "delete failed (err.%d); retrying for %d sec: %s"
log(t % (ex.errno, maxtime + 0.99, abspath))
time.sleep(chill)
return False # makes pylance happy
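The `wunlink` addition above retries a delete for up to `rm_re_t` seconds, sleeping `rm_re_r` between attempts, which helps on filesystems where another process briefly holds the file open. A minimal standalone version of the same pattern (simplified: no inode check, no logging):

```python
import errno
import os
import tempfile
import time

def retry_unlink(path, maxtime=0.0, chill=0.1):
    # keep retrying the delete until it works or maxtime passes;
    # ENOENT means someone else already deleted it, so report False
    if not maxtime:
        os.unlink(path)
        return True

    t0 = time.time()
    while True:
        try:
            os.unlink(path)
            return True
        except OSError as ex:
            if ex.errno == errno.ENOENT:
                return False
            if time.time() - t0 > maxtime:
                raise
            time.sleep(chill)

# demo: delete a real tempfile, then observe ENOENT on the second try
fd, p = tempfile.mkstemp()
os.close(fd)
deleted = retry_unlink(p)
gone = retry_unlink(p, maxtime=0.2)
```

The full version in the diff also re-stats the file and aborts if the inode changed, so a retried delete cannot remove a file that was replaced mid-loop.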
def get_df(abspath: str) -> tuple[Optional[int], Optional[int]]:
    try:
        # some fuses misbehave
+        assert ctypes
        if ANYWIN:
            bfree = ctypes.c_ulonglong(0)
            ctypes.windll.kernel32.GetDiskFreeSpaceExW(  # type: ignore
@@ -2174,7 +2292,7 @@ def read_socket_chunked(
            raise Pebkac(400, t.format(x))

        if log:
-            log("receiving {} byte chunk".format(chunklen))
+            log("receiving %d byte chunk" % (chunklen,))

        for chunk in read_socket(sr, chunklen):
            yield chunk
@@ -2346,6 +2464,12 @@ def statdir(
        print(t)
def dir_is_empty(logger: "RootLogger", scandir: bool, top: str):
for _ in statdir(logger, scandir, False, top):
return False
return True
def rmdirs(
    logger: "RootLogger", scandir: bool, lstat: bool, top: str, depth: int
) -> tuple[list[str], list[str]]:
@@ -2452,6 +2576,7 @@ def getalive(pids: list[int], pgid: int) -> list[int]:
                alive.append(pid)
            else:
                # windows doesn't have pgroups; assume
+                assert psutil
                psutil.Process(pid)
                alive.append(pid)
        except:
@@ -2469,6 +2594,7 @@ def killtree(root: int) -> None:
        pgid = 0

    if HAVE_PSUTIL:
+        assert psutil
        pids = [root]
        parent = psutil.Process(root)
        for child in parent.children(recursive=True):
@@ -2508,9 +2634,35 @@ def killtree(root: int) -> None:
        pass
def _find_nice() -> str:
if WINDOWS:
return "" # use creationflags
try:
zs = shutil.which("nice")
if zs:
return zs
except:
pass
# busted PATHs and/or py2
for zs in ("/bin", "/sbin", "/usr/bin", "/usr/sbin"):
zs += "/nice"
if os.path.exists(zs):
return zs
return ""
NICES = _find_nice()
NICEB = NICES.encode("utf-8")
def runcmd(
    argv: Union[list[bytes], list[str]], timeout: Optional[float] = None, **ka: Any
) -> tuple[int, str, str]:
+    isbytes = isinstance(argv[0], (bytes, bytearray))
+    oom = ka.pop("oom", 0)  # 0..1000
    kill = ka.pop("kill", "t")  # [t]ree [m]ain [n]one
    capture = ka.pop("capture", 3)  # 0=none 1=stdout 2=stderr 3=both
@@ -2524,14 +2676,31 @@ def runcmd(
    berr: bytes

    if ANYWIN:
-        if isinstance(argv[0], (bytes, bytearray)):
+        if isbytes:
            if argv[0] in CMD_EXEB:
                argv[0] += b".exe"
        else:
            if argv[0] in CMD_EXES:
                argv[0] += ".exe"

+    if ka.pop("nice", None):
+        if WINDOWS:
+            ka["creationflags"] = 0x4000
+        elif NICEB:
+            if isbytes:
+                argv = [NICEB] + argv
+            else:
+                argv = [NICES] + argv

    p = sp.Popen(argv, stdout=cout, stderr=cerr, **ka)

+    if oom and not ANYWIN and not MACOS:
+        try:
+            with open("/proc/%d/oom_score_adj" % (p.pid,), "wb") as f:
+                f.write(("%d\n" % (oom,)).encode("utf-8"))
+        except:
+            pass

    if not timeout or PY2:
        bout, berr = p.communicate(sin)
    else:
@@ -2678,13 +2847,14 @@ def _parsehook(
    sp_ka = {
        "env": env,
+        "nice": True,
+        "oom": 300,
        "timeout": tout,
        "kill": kill,
        "capture": cap,
    }

-    if cmd.startswith("~"):
-        cmd = os.path.expanduser(cmd)
+    cmd = os.path.expandvars(os.path.expanduser(cmd))

    return chk, fork, jtxt, wait, sp_ka, cmd
@@ -2823,9 +2993,7 @@ def loadpy(ap: str, hot: bool) -> Any:
    depending on what other inconveniently named files happen
    to be in the same folder
    """
-    if ap.startswith("~"):
-        ap = os.path.expanduser(ap)
+    ap = os.path.expandvars(os.path.expanduser(ap))

    mdir, mfile = os.path.split(absreal(ap))
    mname = mfile.rsplit(".", 1)[0]
    sys.path.insert(0, mdir)
@@ -2833,7 +3001,7 @@ def loadpy(ap: str, hot: bool) -> Any:
    if PY2:
        mod = __import__(mname)
        if hot:
-            reload(mod)
+            reload(mod)  # type: ignore
    else:
        import importlib
@@ -2897,7 +3065,7 @@ def visual_length(txt: str) -> int:
            pend = None
        else:
            if ch == "\033":
-                pend = "{0}".format(ch)
+                pend = "%s" % (ch,)
            else:
                co = ord(ch)
                # the safe parts of latin1 and cp437 (no greek stuff)
@@ -2976,6 +3144,7 @@ def termsize() -> tuple[int, int]:
def hidedir(dp) -> None:
    if ANYWIN:
        try:
+            assert ctypes
            k32 = ctypes.WinDLL("kernel32")
            attrs = k32.GetFileAttributesW(dp)
            if attrs >= 0:
@@ -2994,3 +3163,20 @@ class Pebkac(Exception):
    def __repr__(self) -> str:
        return "Pebkac({}, {})".format(self.code, repr(self.args))
class WrongPostKey(Pebkac):
def __init__(
self,
expected: str,
got: str,
fname: Optional[str],
datagen: Generator[bytes, None, None],
) -> None:
msg = 'expected field "{}", got "{}"'.format(expected, got)
super(WrongPostKey, self).__init__(422, msg)
self.expected = expected
self.got = got
self.fname = fname
self.datagen = datagen
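The new `WrongPostKey` replaces a bare `Pebkac(422, ...)` so that callers can inspect which field actually arrived, its filename, and its data generator, and recover instead of failing outright. A sketch of the subclassing pattern (with a simplified `Pebkac` stand-in, not the real class):

```python
class Pebkac(Exception):
    # simplified stand-in for copyparty's Pebkac base exception
    def __init__(self, code, msg=None):
        super(Pebkac, self).__init__(msg or "")
        self.code = code

class WrongPostKey(Pebkac):
    def __init__(self, expected, got, fname, datagen):
        msg = 'expected field "{}", got "{}"'.format(expected, got)
        super(WrongPostKey, self).__init__(422, msg)
        self.expected = expected
        self.got = got
        self.fname = fname
        self.datagen = datagen

try:
    raise WrongPostKey("act", "f", "a.txt", iter([b"data"]))
except Pebkac as ex:
    # existing handlers that catch Pebkac keep working,
    # while new code can inspect ex.got / ex.fname / ex.datagen
    code = ex.code
    got = ex.got
```

Because it subclasses `Pebkac`, existing error handlers continue to catch it unchanged; only callers that care about the extra context need updating.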


@@ -255,19 +255,19 @@ window.baguetteBox = (function () {
        if (anymod(e, true))
            return;

-        var k = e.code + '', v = vid(), pos = -1;
+        var k = (e.code || e.key) + '', v = vid(), pos = -1;

        if (k == "BracketLeft")
            setloop(1);
        else if (k == "BracketRight")
            setloop(2);
-        else if (e.shiftKey && k != 'KeyR')
+        else if (e.shiftKey && k != "KeyR" && k != "R")
            return;
-        else if (k == "ArrowLeft" || k == "KeyJ")
+        else if (k == "ArrowLeft" || k == "KeyJ" || k == "Left" || k == "j")
            showPreviousImage();
-        else if (k == "ArrowRight" || k == "KeyL")
+        else if (k == "ArrowRight" || k == "KeyL" || k == "Right" || k == "l")
            showNextImage();
-        else if (k == "Escape")
+        else if (k == "Escape" || k == "Esc")
            hideOverlay();
        else if (k == "Home")
            showFirstImage(e);
@@ -295,9 +295,9 @@ window.baguetteBox = (function () {
        }
        else if (k == "KeyF")
            tglfull();
-        else if (k == "KeyS")
+        else if (k == "KeyS" || k == "s")
            tglsel();
-        else if (k == "KeyR")
+        else if (k == "KeyR" || k == "r" || k == "R")
            rotn(e.shiftKey ? -1 : 1);
        else if (k == "KeyY")
            dlpic();
@@ -615,7 +615,43 @@ window.baguetteBox = (function () {
            documentLastFocus && documentLastFocus.focus();
            isOverlayVisible = false;
+            unvid();
+            unfig();
-        }, 500);
+        }, 250);
+    }
}
function unvid(keep) {
var vids = QSA('#bbox-overlay video');
for (var a = vids.length - 1; a >= 0; a--) {
var v = vids[a];
if (v == keep)
continue;
v.src = '';
v.load();
var p = v.parentNode;
p.removeChild(v);
p.parentNode.removeChild(p);
}
}
function unfig(keep) {
var figs = QSA('#bbox-overlay figure'),
npre = options.preload || 0,
k = [];
if (keep === undefined)
keep = -9;
for (var a = keep - npre; a <= keep + npre; a++)
k.push('bbox-figure-' + a);
for (var a = figs.length - 1; a >= 0; a--) {
var f = figs[a];
if (!has(k, f.getAttribute('id')))
f.parentNode.removeChild(f);
}
    }
    function loadImage(index, callback) {
@@ -708,6 +744,7 @@ window.baguetteBox = (function () {
    }

    function show(index, gallery) {
+        gallery = gallery || currentGallery;
        if (!isOverlayVisible && index >= 0 && index < gallery.length) {
            prepareOverlay(gallery, options);
            showOverlay(index);
@@ -720,12 +757,10 @@ window.baguetteBox = (function () {
        if (index >= imagesElements.length)
            return bounceAnimation('right');

-        var v = vid();
-        if (v) {
-            v.src = '';
-            v.load();
-            v.parentNode.removeChild(v);
-        }
+        try {
+            vid().pause();
+        }
+        catch (ex) { }

        currentIndex = index;
        loadImage(currentIndex, function () {
@@ -734,6 +769,15 @@ window.baguetteBox = (function () {
        });
        updateOffset();

+        if (options.animation == 'none')
+            unvid(vid());
+        else
+            setTimeout(function () {
+                unvid(vid());
+            }, 100);
+
+        unfig(index);

        if (options.onChange)
            options.onChange(currentIndex, imagesElements.length);


@@ -818,6 +818,10 @@ html.y #path a:hover {
.logue:empty {
	display: none;
}
.logue.raw {
white-space: pre;
font-family: 'scp', 'consolas', monospace;
}
#doc>iframe,
.logue>iframe {
	background: var(--bgg);
@@ -1653,7 +1657,9 @@ html.cz .tgl.btn.on {
	color: var(--fg-max);
}
#tree ul a.hl {
+	color: #fff;
	color: var(--btn-1-fg);
+	background: #000;
	background: var(--btn-1-bg);
	text-shadow: none;
}
@@ -1769,6 +1775,7 @@ html.y #tree.nowrap .ntree a+a:hover {
	padding: 0;
}
#thumbs,
+#au_prescan,
#au_fullpre,
#au_os_seek,
#au_osd_cv,
@@ -1776,7 +1783,8 @@ html.y #tree.nowrap .ntree a+a:hover {
	opacity: .3;
}
#griden.on+#thumbs,
-#au_preload.on+#au_fullpre,
+#au_preload.on+#au_prescan,
+#au_preload.on+#au_prescan+#au_fullpre,
#au_os_ctl.on+#au_os_seek,
#au_os_ctl.on+#au_os_seek+#au_osd_cv,
#u2turbo.on+#u2tdate {
@@ -2174,6 +2182,7 @@ html.y #bbox-overlay figcaption a {
}
#bbox-halp {
	color: var(--fg-max);
+	background: #fff;
	background: var(--bg);
	position: absolute;
	top: 0;


@@ -148,7 +148,8 @@
		logues = {{ logues|tojson if sb_lg else "[]" }},
		ls0 = {{ ls0|tojson }};

-	document.documentElement.className = localStorage.cpp_thm || dtheme;
+	var STG = window.localStorage;
+	document.documentElement.className = (STG && STG.cpp_thm) || dtheme;
</script>
<script src="{{ r }}/.cpr/util.js?_={{ ts }}"></script>
<script src="{{ r }}/.cpr/baguettebox.js?_={{ ts }}"></script>

File diff suppressed because it is too large


@@ -139,16 +139,15 @@ var md_opt = {
};

(function () {
-    var l = localStorage,
-        drk = l.light != 1,
+    var l = window.localStorage,
+        drk = (l && l.light) != 1,
        btn = document.getElementById("lightswitch"),
        f = function (e) {
            if (e) { e.preventDefault(); drk = !drk; }
            document.documentElement.className = drk? "z":"y";
            btn.innerHTML = "go " + (drk ? "light":"dark");
-            l.light = drk? 0:1;
+            try { l.light = drk? 0:1; } catch (ex) { }
        };

    btn.onclick = f;
    f();
})();


@@ -216,6 +216,11 @@ function convert_markdown(md_text, dest_dom) {
md_html = DOMPurify.sanitize(md_html); md_html = DOMPurify.sanitize(md_html);
} }
catch (ex) { catch (ex) {
if (IE) {
dest_dom.innerHTML = 'IE cannot into markdown ;_;';
return;
}
if (ext) if (ext)
md_plug_err(ex, ext[1]); md_plug_err(ex, ext[1]);

@@ -163,7 +163,7 @@ redraw = (function () {
     dom_sbs.onclick = setsbs;
     dom_nsbs.onclick = modetoggle;

-    onresize();
+    (IE ? modetoggle : onresize)();
     return onresize;
 })();

@@ -931,7 +931,13 @@ var set_lno = (function () {
 // hotkeys / toolbar
 (function () {
     var keydown = function (ev) {
-        ev = ev || window.event;
+        if (!ev && window.event) {
+            ev = window.event;
+            if (dev_fbw == 1) {
+                toast.warn(10, 'hello from fallback code ;_;\ncheck console trace');
+                console.error('using window.event');
+            }
+        }

         var kc = ev.code || ev.keyCode || ev.which,
             editing = document.activeElement == dom_src;

@@ -1003,7 +1009,7 @@ var set_lno = (function () {
             md_home(ev.shiftKey);
             return false;
         }

-        if (!ev.shiftKey && (ev.code == "Enter" || kc == 13)) {
+        if (!ev.shiftKey && ((ev.code + '').endsWith("Enter") || kc == 13)) {
             return md_newline();
         }

         if (!ev.shiftKey && kc == 8) {

@@ -37,12 +37,12 @@ var md_opt = {
 };

 var lightswitch = (function () {
-    var l = localStorage,
-        drk = l.light != 1,
+    var l = window.localStorage,
+        drk = (l && l.light) != 1,
         f = function (e) {
             if (e) drk = !drk;
             document.documentElement.className = drk? "z":"y";
-            l.light = drk? 0:1;
+            try { l.light = drk? 0:1; } catch (ex) { }
         };
     f();
     return f;

@@ -110,7 +110,8 @@ var SR = {{ r|tojson }},
     lang="{{ lang }}",
     dfavico="{{ favico }}";

-document.documentElement.className=localStorage.cpp_thm||"{{ this.args.theme }}";
+var STG = window.localStorage;
+document.documentElement.className = (STG && STG.cpp_thm) || "{{ this.args.theme }}";
 </script>
 <script src="{{ r }}/.cpr/util.js?_={{ ts }}"></script>

@@ -238,7 +238,8 @@ var SR = {{ r|tojson }},
     lang="{{ lang }}",
     dfavico="{{ favico }}";

-document.documentElement.className=localStorage.cpp_thm||"{{ args.theme }}";
+var STG = window.localStorage;
+document.documentElement.className = (STG && STG.cpp_thm) || "{{ args.theme }}";
 </script>
 <script src="{{ r }}/.cpr/util.js?_={{ ts }}"></script>

@@ -105,6 +105,9 @@ html {
 #toast pre {
     margin: 0;
 }
+#toast.hide {
+    display: none;
+}
 #toast.vis {
     right: 1.3em;
     transform: inherit;

@@ -144,6 +147,10 @@ html {
 #toast.err #toastc {
     background: #d06;
 }
+#toast code {
+    padding: 0 .2em;
+    background: rgba(0,0,0,0.2);
+}
 #tth {
     color: #fff;
     background: #111;

@@ -431,7 +431,7 @@ function U2pvis(act, btns, uc, st) {
         if (sread('potato') === null) {
             btn.click();
             toast.inf(30, L.u_gotpot);
-            localStorage.removeItem('potato');
+            sdrop('potato');
         }
         u2f.appendChild(ode);

@@ -852,7 +852,7 @@ function up2k_init(subtle) {
     setmsg(suggest_up2k, 'msg');

-    var parallel_uploads = icfg_get('nthread'),
+    var parallel_uploads = ebi('nthread').value = icfg_get('nthread', u2j),
         uc = {},
         fdom_ctr = 0,
         biggest_file = 0;

@@ -861,6 +861,7 @@ function up2k_init(subtle) {
     bcfg_bind(uc, 'multitask', 'multitask', true, null, false);
     bcfg_bind(uc, 'potato', 'potato', false, set_potato, false);
     bcfg_bind(uc, 'ask_up', 'ask_up', true, null, false);
+    bcfg_bind(uc, 'umod', 'umod', false, null, false);
     bcfg_bind(uc, 'u2ts', 'u2ts', !u2ts.endsWith('u'), set_u2ts, false);
     bcfg_bind(uc, 'fsearch', 'fsearch', false, set_fsearch, false);

@@ -894,6 +895,7 @@ function up2k_init(subtle) {
         "bytes": {
             "total": 0,
             "hashed": 0,
+            "inflight": 0,
             "uploaded": 0,
             "finished": 0
         },
@@ -1043,7 +1045,7 @@ function up2k_init(subtle) {
         clmod(ebi(v), 'hl', 1);
     }
     function offdrag(e) {
-        ev(e);
+        noope(e);

         var v = this.getAttribute('v');
         if (v)

@@ -1332,7 +1334,8 @@ function up2k_init(subtle) {
             return modal.confirm(msg.join('') + '</ul>', function () {
                 start_actx();
                 up_them(good_files);
-                toast.inf(15, L.u_unpt, L.u_unpt);
+                if (have_up2k_idx)
+                    toast.inf(15, L.u_unpt, L.u_unpt);
             }, null);

         up_them(good_files);

@@ -1391,6 +1394,8 @@ function up2k_init(subtle) {
             entry.rand = true;
             entry.name = 'a\n' + entry.name;
         }
+        else if (uc.umod)
+            entry.umod = true;

         if (biggest_file < entry.size)
             biggest_file = entry.size;
@@ -1539,17 +1544,21 @@ function up2k_init(subtle) {
             if (uc.fsearch)
                 t.push(['u2etat', st.bytes.hashed, st.bytes.hashed, st.time.hashing]);
         }
+        var b_up = st.bytes.inflight + st.bytes.uploaded,
+            b_fin = st.bytes.inflight + st.bytes.finished;

         if (nsend) {
             st.time.uploading += td;
-            t.push(['u2etau', st.bytes.uploaded, st.bytes.finished, st.time.uploading]);
+            t.push(['u2etau', b_up, b_fin, st.time.uploading]);
         }
         if ((nhash || nsend) && !uc.fsearch) {
-            if (!st.bytes.finished) {
+            if (!b_fin) {
                 ebi('u2etat').innerHTML = L.u_etaprep;
             }
             else {
                 st.time.busy += td;
-                t.push(['u2etat', st.bytes.finished, st.bytes.finished, st.time.busy]);
+                t.push(['u2etat', b_fin, b_fin, st.time.busy]);
             }
         }
         for (var a = 0; a < t.length; a++) {
@@ -2467,6 +2476,8 @@ function up2k_init(subtle) {
             req.srch = 1;
         else if (t.rand)
             req.rand = true;
+        else if (t.umod)
+            req.umod = true;

         xhr.open('POST', t.purl, true);
         xhr.responseType = 'text';
@@ -2533,6 +2544,7 @@ function up2k_init(subtle) {
             cdr = t.size;

         var orz = function (xhr) {
+            st.bytes.inflight -= xhr.bsent;
             var txt = unpre((xhr.response && xhr.response.err) || xhr.responseText);
             if (txt.indexOf('upload blocked by x') + 1) {
                 apop(st.busy.upload, upt);

@@ -2577,7 +2589,10 @@ function up2k_init(subtle) {
             btot = Math.floor(st.bytes.total / 1024 / 1024);

         xhr.upload.onprogress = function (xev) {
-            pvis.prog(t, npart, xev.loaded);
+            var nb = xev.loaded;
+            st.bytes.inflight += nb - xhr.bsent;
+            xhr.bsent = nb;
+            pvis.prog(t, npart, nb);
         };
         xhr.onload = function (xev) {
             try { orz(xhr); } catch (ex) { vis_exh(ex + '', 'up2k.js', '', '', ex); }

@@ -2586,6 +2601,8 @@ function up2k_init(subtle) {
             if (crashed)
                 return;

+            st.bytes.inflight -= (xhr.bsent || 0);
+
             if (!toast.visible)
                 toast.warn(9.98, L.u_cuerr.format(npart, Math.ceil(t.size / chunksize), t.name), t);

@@ -2602,6 +2619,7 @@ function up2k_init(subtle) {
         if (xhr.overrideMimeType)
             xhr.overrideMimeType('Content-Type', 'application/octet-stream');

+        xhr.bsent = 0;
         xhr.responseType = 'text';
         xhr.send(t.fobj.slice(car, cdr));
     }
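the `xhr.bsent` bookkeeping above keeps `st.bytes.inflight` consistent: each progress event adds only the delta since the last one, and a failed request rolls its contribution back. this can be simulated without a network; `attach` and the plain-object xhr below are stand-ins, not a real `XMLHttpRequest`:

```javascript
// simulate one chunk upload: progress events add the delta since the
// last event, and an error removes whatever was still being counted
var st = { inflight: 0 };

function attach(xhr) {
    xhr.bsent = 0;                       // bytes reported so far
    xhr.onprogress = function (loaded) {
        st.inflight += loaded - xhr.bsent;
        xhr.bsent = loaded;
    };
    xhr.onerror = function () {
        st.inflight -= (xhr.bsent || 0); // roll back on failure
    };
}

var xhr = {};
attach(xhr);
xhr.onprogress(1000);
xhr.onprogress(4000);
console.log(st.inflight);  // 4000 bytes in the air
xhr.onerror();
console.log(st.inflight);  // back to 0 after the rollback
```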
@@ -2685,7 +2703,11 @@ function up2k_init(subtle) {
         }
         parallel_uploads = v;
-        swrite('nthread', v);
+        if (v == u2j)
+            sdrop('nthread');
+        else
+            swrite('nthread', v);

         clmod(obj, 'err');
         return;
     }

@@ -2698,8 +2720,11 @@ function up2k_init(subtle) {
     if (parallel_uploads > 16)
         parallel_uploads = 16;

+    if (parallel_uploads > 7)
+        toast.warn(10, L.u_maxconn);
+
     obj.value = parallel_uploads;
-    bumpthread({ "target": 1 })
+    bumpthread({ "target": 1 });
 }

 function tgl_fsearch() {

@@ -12,9 +12,11 @@ if (window.CGV)
 var wah = '',
+    STG = null,
     NOAC = 'autocorrect="off" autocapitalize="off"',
     L, tt, treectl, thegrid, up2k, asmCrypto, hashwasm, vbar, marked,
-    CB = '?_=' + Date.now(),
+    T0 = Date.now(),
+    CB = '?_=' + Math.floor(T0 / 1000).toString(36),
     R = SR.slice(1),
     RS = R ? "/" + R : "",
     HALFMAX = 8192 * 8192 * 8192 * 8192,

@@ -39,6 +41,16 @@ if (!window.Notification || !Notification.permission)
 if (!window.FormData)
     window.FormData = false;

+try {
+    STG = window.localStorage;
+    STG.STG;
+}
+catch (ex) {
+    STG = null;
+    if ((ex + '').indexOf('sandbox') < 0)
+        console.log('no localStorage: ' + ex);
+}
+
 try {
     CB = '?' + document.currentScript.src.split('?').pop();
@@ -145,6 +157,10 @@ catch (ex) {
 }

 var crashed = false, ignexd = {}, evalex_fatal = false;
 function vis_exh(msg, url, lineNo, columnNo, error) {
+    var ekey = url + '\n' + lineNo + '\n' + msg;
+    if (ignexd[ekey] || crashed)
+        return;
+
     msg = String(msg);
     url = String(url);

@@ -160,10 +176,12 @@ function vis_exh(msg, url, lineNo, columnNo, error) {
     if (url.indexOf(' > eval') + 1 && !evalex_fatal)
         return; // md timer

-    var ekey = url + '\n' + lineNo + '\n' + msg;
-    if (ignexd[ekey] || crashed)
+    if (IE && url.indexOf('prism.js') + 1)
         return;

+    if (url.indexOf('easymde.js') + 1)
+        return; // clicking the preview pane
+
     if (url.indexOf('deps/marked.js') + 1 && !window.WebAssembly)
         return; // ff<52

@@ -202,19 +220,24 @@ function vis_exh(msg, url, lineNo, columnNo, error) {
     }
     ignexd[ekey] = true;

-    var ls = jcp(localStorage);
-    if (ls.fman_clip)
-        ls.fman_clip = ls.fman_clip.length + ' items';
-
-    var lsk = Object.keys(ls);
-    lsk.sort();
-    html.push('<p class="b">');
-    for (var a = 0; a < lsk.length; a++) {
-        if (ls[lsk[a]].length > 9000)
-            continue;
-        html.push(' <b>' + esc(lsk[a]) + '</b> <code>' + esc(ls[lsk[a]]) + '</code> ');
-    }
+    var ls = {},
+        lsk = Object.keys(localStorage),
+        nka = lsk.length,
+        nk = Math.min(200, nka);
+
+    for (var a = 0; a < nk; a++) {
+        var k = lsk[a],
+            v = localStorage.getItem(k);
+
+        ls[k] = v.length > 256 ? v.slice(0, 32) + '[...' + v.length + 'b]' : v;
+    }
+
+    lsk = Object.keys(ls);
+    lsk.sort();
+    html.push('<p class="b"><b>' + nka + ':&nbsp;</b>');
+    for (var a = 0; a < nk; a++)
+        html.push(' <b>' + esc(lsk[a]) + '</b> <code>' + esc(ls[lsk[a]]) + '</code> ');
+
     html.push('</p>');
 }
 catch (e) { }
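the crash-reporter hunk above caps the dump at 200 localStorage keys and truncates each long value to a 32-char prefix plus a length marker. the truncation rule in isolation (`truncVal` is an illustrative name, not in the source):

```javascript
// same rule as the diff: values over 256 chars are cut to a 32-char
// prefix plus a "[...NNNb]" marker so crash reports stay small
function truncVal(v) {
    return v.length > 256 ? v.slice(0, 32) + '[...' + v.length + 'b]' : v;
}

var big = new Array(1002).join('x');  // a 1001-char string
console.log(truncVal('short'));       // unchanged: "short"
console.log(truncVal(big).length);    // 32 + "[...1001b]".length = 42
```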
@@ -276,8 +299,15 @@ function anymod(e, shift_ok) {
 }

+var dev_fbw = sread('dev_fbw');
 function ev(e) {
-    e = e || window.event;
+    if (!e && window.event) {
+        e = window.event;
+        if (dev_fbw == 1) {
+            toast.warn(10, 'hello from fallback code ;_;\ncheck console trace');
+            console.error('using window.event');
+        }
+    }

     if (!e)
         return;

@@ -296,7 +326,7 @@ function ev(e) {
 function noope(e) {
-    ev(e);
+    try { ev(e); } catch (ex) { }
 }

@@ -364,6 +394,22 @@ catch (ex) {
     }
 }

+if (!window.Set)
+    window.Set = function () {
+        var r = this;
+        r.size = 0;
+        r.d = {};
+
+        r.add = function (k) {
+            if (!r.d[k]) {
+                r.d[k] = 1;
+                r.size++;
+            }
+        };
+        r.has = function (k) {
+            return r.d[k];
+        };
+    };
+
 // https://stackoverflow.com/a/950146
 function import_js(url, cb, ecb) {
     var head = document.head || document.getElementsByTagName('head')[0];
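the Set polyfill added above covers only what the codebase needs (`add`, `has`, `size`), and its behavior can be exercised directly; `MiniSet` below is a copy with an illustrative name:

```javascript
// same minimal shape as the polyfill in the diff: membership tests and
// counting on string keys, no delete/iteration
function MiniSet() {
    var r = this;
    r.size = 0;
    r.d = {};
    r.add = function (k) {
        if (!r.d[k]) {
            r.d[k] = 1;
            r.size++;
        }
    };
    r.has = function (k) {
        return r.d[k];
    };
}

var s = new MiniSet();
s.add('a'); s.add('b'); s.add('a');
console.log(s.size, !!s.has('a'), !!s.has('c'));  // 2 true false
```

note that the backing object inherits `Object.prototype`, so keys like `'toString'` read as truthy before being added; fine for the internal use here, but worth knowing if reusing the pattern.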
@@ -389,6 +435,25 @@ function unsmart(txt) {
 }

+function namesan(txt, win, fslash) {
+    if (win)
+        txt = (txt.
+            replace(/</g, "").
+            replace(/>/g, "").
+            replace(/:/g, "").
+            replace(/"/g, "").
+            replace(/\\/g, "").
+            replace(/\|/g, "").
+            replace(/\?/g, "").
+            replace(/\*/g, ""));
+
+    if (fslash)
+        txt = txt.replace(/\//g, "");
+
+    return txt;
+}
+
 var crctab = (function () {
     var c, tab = [];
     for (var n = 0; n < 256; n++) {
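the new `namesan` strips characters that are invalid in windows filenames, and optionally forward-slashes too; copied here verbatim so its effect can be checked:

```javascript
// copied from the diff above: sanitize a filename for windows (win=true)
// and/or strip path separators (fslash=true)
function namesan(txt, win, fslash) {
    if (win)
        txt = (txt.
            replace(/</g, "").
            replace(/>/g, "").
            replace(/:/g, "").
            replace(/"/g, "").
            replace(/\\/g, "").
            replace(/\|/g, "").
            replace(/\?/g, "").
            replace(/\*/g, ""));

    if (fslash)
        txt = txt.replace(/\//g, "");

    return txt;
}

console.log(namesan('a<b>:c|d?.txt', true, false));  // "abcd.txt"
console.log(namesan('dir/name*.txt', true, true));   // "dirname.txt"
```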
@@ -755,17 +820,6 @@ function noq_href(el) {
 }

-function get_pwd() {
-    var k = HTTPS ? 's=' : 'd=',
-        pwd = ('; ' + document.cookie).split('; cppw' + k);
-
-    if (pwd.length < 2)
-        return null;
-
-    return decodeURIComponent(pwd[1].split(';')[0]);
-}
-
 function unix2iso(ts) {
     return new Date(ts * 1000).toISOString().replace("T", " ").slice(0, -5);
 }

@@ -886,9 +940,16 @@ function jcp(obj) {
 }

+function sdrop(key) {
+    try {
+        STG.removeItem(key);
+    }
+    catch (ex) { }
+}
+
 function sread(key, al) {
     try {
-        var ret = localStorage.getItem(key);
+        var ret = STG.getItem(key);
         return (!al || has(al, ret)) ? ret : null;
     }
     catch (e) {

@@ -899,9 +960,9 @@ function sread(key, al) {
 function swrite(key, val) {
     try {
         if (val === undefined || val === null)
-            STG.removeItem(key);
+            STG.removeItem(key);
         else
-            localStorage.setItem(key, val);
+            STG.setItem(key, val);
     }
     catch (e) { }
 }
@@ -936,7 +997,7 @@ function fcfg_get(name, defval) {
     val = parseFloat(sread(name));
     if (!isNum(val))
-        return parseFloat(o ? o.value : defval);
+        return parseFloat(o && o.value !== '' ? o.value : defval);

     if (o)
         o.value = val;

@@ -1062,7 +1123,7 @@ function dl_file(url) {
 function cliptxt(txt, ok) {
     var fb = function () {
-        console.log('fb');
+        console.log('clip-fb');
         var o = mknod('input');
         o.value = txt;
         document.body.appendChild(o);

@@ -1397,15 +1458,23 @@ var toast = (function () {
     }

     r.hide = function (e) {
-        ev(e);
+        if (this === ebi('toastc'))
+            ev(e);
+
         unscroll();
         clearTimeout(te);
         clmod(obj, 'vis');
         r.visible = false;
         r.tag = obj;
+        if (!window.WebAssembly)
+            te = setTimeout(function () {
+                obj.className = 'hide';
+            }, 500);
     };

     r.show = function (cl, sec, txt, tag) {
+        txt = (txt + '').slice(0, 16384);
         var same = r.visible && txt == r.p_txt && r.p_sec == sec,
             delta = Date.now() - r.p_t;
@@ -1473,6 +1542,7 @@ var modal = (function () {
     r.load();
     r.busy = false;
+    r.nofocus = 0;

     r.show = function (html) {
         o = mknod('div', 'modal');

@@ -1486,6 +1556,7 @@ var modal = (function () {
         a.onclick = ng;
         a = ebi('modal-ok');
+        a.addEventListener('blur', onblur);
         a.onclick = ok;

         var inp = ebi('modali');

@@ -1496,6 +1567,7 @@ var modal = (function () {
         }, 0);

         document.addEventListener('focus', onfocus);
+        document.addEventListener('selectionchange', onselch);
         timer.add(onfocus);
         if (cb_up)
             setTimeout(cb_up, 1);

@@ -1503,6 +1575,11 @@ var modal = (function () {
     r.hide = function () {
         timer.rm(onfocus);
+        try {
+            ebi('modal-ok').removeEventListener('blur', onblur);
+        }
+        catch (ex) { }
+        document.removeEventListener('selectionchange', onselch);
         document.removeEventListener('focus', onfocus);
         document.removeEventListener('keydown', onkey);
         o.parentNode.removeChild(o);

@@ -1524,20 +1601,38 @@ var modal = (function () {
         cb_ng(null);
     }

+    var onselch = function () {
+        try {
+            if (window.getSelection() + '')
+                r.nofocus = 15;
+        }
+        catch (ex) { }
+    };
+
+    var onblur = function () {
+        r.nofocus = 3;
+    };
+
     var onfocus = function (e) {
+        if (MOBILE)
+            return;
+
         var ctr = ebi('modalc');
         if (!ctr || !ctr.contains || !document.activeElement || ctr.contains(document.activeElement))
             return;

         setTimeout(function () {
+            if (--r.nofocus >= 0)
+                return;
+
             if (ctr = ebi('modal-ok'))
                 ctr.focus();
         }, 20);
         ev(e);
-    }
+    };

     var onkey = function (e) {
-        var k = e.code,
+        var k = (e.code || e.key) + '',
             eok = ebi('modal-ok'),
             eng = ebi('modal-ng'),
             ae = document.activeElement;

@@ -1545,18 +1640,18 @@ var modal = (function () {
         if (k == 'Space' && ae && (ae === eok || ae === eng))
             k = 'Enter';

-        if (k == 'Enter') {
+        if (k.endsWith('Enter')) {
             if (ae && ae == eng)
-                return ng();
+                return ng(e);

-            return ok();
+            return ok(e);
         }

-        if ((k == 'ArrowLeft' || k == 'ArrowRight') && eng && (ae == eok || ae == eng))
+        if ((k == 'ArrowLeft' || k == 'ArrowRight' || k == 'Left' || k == 'Right') && eng && (ae == eok || ae == eng))
             return (ae == eok ? eng : eok).focus() || ev(e);

-        if (k == 'Escape')
-            return ng();
+        if (k == 'Escape' || k == 'Esc')
+            return ng(e);
     }
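the keyboard hunk above exists because old browsers (IE11) lack `e.code` and report legacy names via `e.key`: "Esc" instead of "Escape", "Left"/"Right" instead of "ArrowLeft"/"ArrowRight", and the numpad fires "NumpadEnter". the normalization can be factored out like this (the helper names are illustrative, not in the source):

```javascript
// mirror of the checks in the diff: coerce code/key to a string and
// accept both modern and legacy names for the same physical key
function isEnter(k) { return (k + '').endsWith('Enter'); }    // "Enter", "NumpadEnter"
function isEscape(k) { return k == 'Escape' || k == 'Esc'; }  // IE11 says "Esc"
function isHorizontal(k) {
    return k == 'ArrowLeft' || k == 'ArrowRight' || k == 'Left' || k == 'Right';
}

console.log(isEnter('NumpadEnter'), isEscape('Esc'), isHorizontal('Left'));  // true true true
```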
     var next = function () {

@@ -1833,21 +1928,17 @@ var favico = (function () {
     var b64;
     try {
         b64 = btoa(svg ? svg_decl + svg : gx(r.txt));
-        //console.log('f1');
     }
     catch (e1) {
         try {
             b64 = btoa(gx(encodeURIComponent(r.txt).replace(/%([0-9A-F]{2})/g,
                 function x(m, v) { return String.fromCharCode('0x' + v); })));
-            //console.log('f2');
         }
         catch (e2) {
             try {
                 b64 = btoa(gx(unescape(encodeURIComponent(r.txt))));
-                //console.log('f3');
             }
             catch (e3) {
-                //console.log('fe');
                 return;
             }
         }
     }

@@ -1901,6 +1992,9 @@ function xhrchk(xhr, prefix, e404, lvl, tag) {
     if (xhr.status < 400 && xhr.status >= 200)
         return true;

+    if (tag === undefined)
+        tag = prefix;
+
     var errtxt = (xhr.response && xhr.response.err) || xhr.responseText,
         fun = toast[lvl || 'err'],
         is_cf = /[Cc]loud[f]lare|>Just a mo[m]ent|#cf-b[u]bbles|Chec[k]ing your br[o]wser|\/chall[e]nge-platform|"chall[e]nge-error|nable Ja[v]aScript and cook/.test(errtxt);

(file diff suppressed because it is too large)

@@ -162,8 +162,8 @@ authenticate using header `Cookie: cppwd=foo` or url param `&pw=foo`
 | PUT | | (binary data) | upload into file at URL |
 | PUT | `?gz` | (binary data) | compress with gzip and write into file at URL |
 | PUT | `?xz` | (binary data) | compress with xz and write into file at URL |
-| mPOST | | `act=bput`, `f=FILE` | upload `FILE` into the folder at URL |
-| mPOST | `?j` | `act=bput`, `f=FILE` | ...and reply with json |
+| mPOST | | `f=FILE` | upload `FILE` into the folder at URL |
+| mPOST | `?j` | `f=FILE` | ...and reply with json |
 | mPOST | | `act=mkdir`, `name=foo` | create directory `foo` at URL |
 | POST | `?delete` | | delete URL recursively |
 | jPOST | `?delete` | `["/foo","/bar"]` | delete `/foo` and `/bar` recursively |
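with `act=bput` now optional, a browser-side upload reduces to a bare multipart form: one `f` field, and `?j` if a json reply is wanted. a sketch of building that request (the folder URL and file are placeholders):

```javascript
// build the request for "upload FILE into the folder at URL";
// appending ?j makes the server reply with json instead of html
function buildUpload(folderUrl, file, wantJson) {
    var fd = new FormData();
    fd.append('f', file);  // act=bput is implied by the f field
    return {
        url: folderUrl + (wantJson ? '?j' : ''),
        method: 'POST',
        body: fd
    };
}

// usage against a placeholder server:
// var req = buildUpload('https://example.com/inc/', someFile, true);
// fetch(req.url, { method: req.method, body: req.body });
```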
@@ -242,6 +242,7 @@ python3 -m venv .venv
 pip install jinja2 strip_hints  # MANDATORY
 pip install mutagen             # audio metadata
 pip install pyftpdlib           # ftp server
+pip install partftpy            # tftp server
 pip install impacket            # smb server -- disable Windows Defender if you REALLY need this on windows
 pip install Pillow pyheif-pillow-opener pillow-avif-plugin  # thumbnails
 pip install pyvips              # faster thumbnails

@@ -0,0 +1,33 @@
+# not actually YAML but lets pretend:
+# -*- mode: yaml -*-
+# vim: ft=yaml:
+
+[global]
+  e2dsa  # enable file indexing and filesystem scanning
+  e2ts   # enable multimedia indexing
+  ansi   # enable colors in log messages
+  # q, lo: /cfg/log/%Y-%m%d.log  # log to file instead of docker
+
+  # ftp: 3921            # enable ftp server on port 3921
+  # p: 3939              # listen on another port
+  # ipa: 10.89.          # only allow connections from 10.89.*
+  # df: 16               # stop accepting uploads if less than 16 GB free disk space
+  # ver                  # show copyparty version in the controlpanel
+  # grid                 # show thumbnails/grid-view by default
+  # theme: 2             # monokai
+  # name: datasaver      # change the server-name that's displayed in the browser
+  # stats, nos-dup       # enable the prometheus endpoint, but disable the dupes counter (too slow)
+  # no-robots, force-js  # make it harder for search engines to read your server
+
+[accounts]
+  ed: wark  # username: password
+
+[/]     # create a volume at "/" (the webroot), which will
+  /w    # share /w (the docker data volume)
+  accs:
+    rw: *      # everyone gets read-write access, but
+    rwmda: ed  # the user "ed" gets read-write-move-delete-admin

@@ -0,0 +1,20 @@
+version: '3'
+
+services:
+  copyparty:
+    image: copyparty/ac:latest
+    container_name: copyparty
+    user: "1000:1000"
+    ports:
+      - 3923:3923
+    volumes:
+      - ./:/cfg:z
+      - /path/to/your/fileshare/top/folder:/w:z
+    stop_grace_period: 15s  # thumbnailer is allowed to continue finishing up for 10s after the shutdown signal
+    healthcheck:
+      test: ["CMD-SHELL", "wget --spider -q 127.0.0.1:3923/?reset"]
+      interval: 1m
+      timeout: 2s
+      retries: 5
+      start_period: 15s

@@ -0,0 +1,72 @@
+# not actually YAML but lets pretend:
+# -*- mode: yaml -*-
+# vim: ft=yaml:
+
+# example config for how copyparty can be used with an identity
+# provider, replacing the built-in authentication/authorization
+# mechanism, and instead expecting the reverse-proxy to provide
+# the requester's username (and possibly a group-name, for
+# optional group-based access control)
+#
+# the filesystem-path `/w` is used as the storage location
+# because that is the data-volume in the docker containers,
+# because a deployment like this (with an IdP) is more commonly
+# seen in containerized environments -- but this is not required
+
+[global]
+  e2dsa  # enable file indexing and filesystem scanning
+  e2ts   # enable multimedia indexing
+  ansi   # enable colors in log messages
+
+  # enable IdP support by expecting username/groupname in
+  # http-headers provided by the reverse-proxy; header "X-IdP-User"
+  # will contain the username, "X-IdP-Group" the groupname
+  idp-h-usr: x-idp-user
+  idp-h-grp: x-idp-group
+
+[/]     # create a volume at "/" (the webroot), which will
+  /w    # share /w (the docker data volume)
+  accs:
+    rw: *       # everyone gets read-write access, but
+    rwmda: @su  # the group "su" gets read-write-move-delete-admin
+
+[/u/${u}]    # each user gets their own home-folder at /u/username
+  /w/u/${u}  # which will be "u/username" in the docker data volume
+  accs:
+    r: *              # read-access for anyone, and
+    rwmda: ${u}, @su  # read-write-move-delete-admin for that username + the "su" group
+
+[/u/${u}/priv]    # each user also gets a private area at /u/username/priv
+  /w/u/${u}/priv  # stored at DATAVOLUME/u/username/priv
+  accs:
+    rwmda: ${u}, @su  # read-write-move-delete-admin for that username + the "su" group
+
+[/lounge/${g}]    # each group gets their own shared volume
+  /w/lounge/${g}  # stored at DATAVOLUME/lounge/groupname
+  accs:
+    r: *               # read-access for anyone, and
+    rwmda: @${g}, @su  # read-write-move-delete-admin for that group + the "su" group
+
+[/lounge/${g}/priv]    # and a private area for each group too
+  /w/lounge/${g}/priv  # stored at DATAVOLUME/lounge/groupname/priv
+  accs:
+    rwmda: @${g}, @su  # read-write-move-delete-admin for that group + the "su" group
+
+# and create some strategic volumes to prevent anyone from gaining
+# unintended access to priv folders if the users/groups db is lost
+[/u]
+  /w/u
+  accs:
+    rwmda: @su
+
+[/lounge]
+  /w/lounge
+  accs:
+    rwmda: @su

@@ -6,7 +6,7 @@ you will definitely need either [copyparty.exe](https://github.com/9001/copypart
 * if you decided to grab `copyparty-sfx.py` instead of the exe you will also need to install the ["Latest Python 3 Release"](https://www.python.org/downloads/windows/)

-then you probably want to download [FFmpeg](https://github.com/BtbN/FFmpeg-Builds/releases/download/latest/ffmpeg-master-latest-win64-gpl.zip) and put `ffmpeg.exe` and `ffprobe.exe` in your PATH (so for example `C:\Windows\System32\`) -- this enables thumbnails, audio transcoding, and making music metadata searchable
+then you probably want to download [FFmpeg](https://www.gyan.dev/ffmpeg/builds/ffmpeg-git-full.7z) and put `ffmpeg.exe` and `ffprobe.exe` in your PATH (so for example `C:\Windows\System32\`) -- this enables thumbnails, audio transcoding, and making music metadata searchable

 ## the config file

@@ -24,6 +24,10 @@ https://github.com/giampaolo/pyftpdlib/
 C: 2007 Giampaolo Rodola
 L: MIT

+https://github.com/9001/partftpy
+C: 2010-2021 Michael P. Soulier
+L: MIT
+
 https://github.com/nayuki/QR-Code-generator/
 C: Project Nayuki
 L: MIT

@@ -1,3 +1,19 @@
+this file accidentally got committed at some point, so let's put it to use
+
+# trivia / lore
+
+copyparty started as [three separate php projects](https://a.ocv.me/pub/stuff/old-php-projects/); an nginx custom directory listing (which became a php script), and a php music/picture viewer, and an additional php project for resumable uploads:
+* findex -- directory browser / gallery with thumbnails and a music player which sometime back in 2009 had a canvas visualizer grabbing fft data from a flash audio player
+* findex.mini -- plain-listing fork of findex with streaming zip-download of folders (the js and design should look familiar)
+* upper and up2k -- up2k being the star of the show and where copyparty's chunked resumable uploads came from
+
+the first link has screenshots but if that doesn't work there's also a [tar here](https://ocv.me/dev/old-php-projects.tgz)
+
+----
+
+below this point is misc useless scribbles
+
 # up2k.js

 ## potato detection

@@ -24,6 +24,27 @@ gzip -d < .hist/up2k.snap | jq -r '.[].name' | while IFS= read -r f; do wc -c --
 echo; find -type f | while IFS= read -r x; do printf '\033[A\033[36m%s\033[K\033[0m\n' "$x"; tail -c$((1024*1024)) <"$x" | xxd -a | awk 'NR==1&&/^[0: ]+.{16}$/{next} NR==2&&/^\*$/{next} NR==3&&/^[0f]+: [0 ]+65 +.{16}$/{next} {e=1} END {exit e}' || continue; printf '\033[A\033[31msus:\033[33m %s \033[0m\n\n' "$x"; done

+##
+## sync pics/vids from phone
+## (takes all files named (IMG|PXL|PANORAMA|Screenshot)_20231224_*)
+
+cd /storage/emulated/0/DCIM/Camera
+find -mindepth 1 -maxdepth 1 | sort | cut -c3- > ls
+url=https://192.168.1.3:3923/rw/pics/Camera/$d/; awk -F_ '!/^[A-Z][A-Za-z]{1,16}_[0-9]{8}[_-]/{next} {d=substr($2,1,6)} !t[d]++{print d}' ls | while read d; do grep -E "^[A-Z][A-Za-z]{1,16}_$d" ls | tr '\n' '\0' | xargs -0 python3 ~/dev/copyparty/bin/u2c.py -td $url --; done
+
+##
+## convert symlinks to hardlinks (probably safe, no guarantees)
+
+find -type l | while IFS= read -r lnk; do [ -h "$lnk" ] || { printf 'nonlink: %s\n' "$lnk"; continue; }; dst="$(readlink -f -- "$lnk")"; [ -e "$dst" ] || { printf '???\n%s\n%s\n' "$lnk" "$dst"; continue; }; printf 'relinking:\n  %s\n  %s\n' "$lnk" "$dst"; rm -- "$lnk"; ln -- "$dst" "$lnk"; done
+
+##
+## convert hardlinks to symlinks (maybe not as safe? use with caution)
+
+e=; p=; find -printf '%i %p\n' | awk '{i=$1;sub(/[^ ]+ /,"")} !n[i]++{p[i]=$0;next} {printf "real %s\nlink %s\n",p[i],$0}' | while read cls p; do [ -e "$p" ] || e=1; p="$(realpath -- "$p")" || e=1; [ -e "$p" ] || e=1; [ $cls = real ] && { real="$p"; continue; }; [ $cls = link ] || e=1; [ "$p" ] || e=1; [ $e ] && { echo "ERROR $p"; break; }; printf '\033[36m%s \033[0m -> \033[35m%s\033[0m\n' "$p" "$real"; rm "$p"; ln -s "$real" "$p" || { echo LINK FAILED; break; }; done
+
 ##
 ## create a test payload

@@ -200,9 +200,10 @@ symbol legend,
 | ----------------------- | - | - | - | - | - | - | - | - | - | - | - | - |
 | serve https | █ | | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ |
 | serve webdav | █ | | | █ | █ | █ | █ | | █ | | | █ |
-| serve ftp | █ | | | | | █ | | | | | | █ |
-| serve ftps | █ | | | | | █ | | | | | | █ |
-| serve sftp | | | | | | █ | | | | | | |
+| serve ftp (tcp) | █ | | | | | █ | | | | | | █ |
+| serve ftps (tls) | █ | | | | | █ | | | | | | █ |
+| serve tftp (udp) | █ | | | | | | | | | | | |
+| serve sftp (ssh) | | | | | | █ | | | | | | █ |
 | serve smb/cifs | | | | | | █ | | | | | | |
 | serve dlna | | | | | | █ | | | | | | |
 | listen on unix-socket | | | | █ | █ | | █ | █ | █ | | █ | █ |


@@ -28,6 +28,7 @@ classifiers = [
 "Programming Language :: Python :: Implementation :: CPython",
 "Programming Language :: Python :: Implementation :: Jython",
 "Programming Language :: Python :: Implementation :: PyPy",
+"Operating System :: OS Independent",
 "Environment :: Console",
 "Environment :: No Input/Output (Daemon)",
 "Intended Audience :: End Users/Desktop",
@@ -48,6 +49,7 @@ thumbnails2 = ["pyvips"]
 audiotags = ["mutagen"]
 ftpd = ["pyftpdlib"]
 ftps = ["pyftpdlib", "pyopenssl"]
+tftpd = ["partftpy>=0.2.0"]
 pwhash = ["argon2-cffi"]
 [project.scripts]
@@ -100,6 +102,10 @@ include_trailing_comma = true
 [tool.bandit]
 skips = ["B104", "B110", "B112"]
+[tool.ruff]
+line-length = 120
+ignore = ["E402", "E722"]
 # =====================================================================
 [tool.pylint.MAIN]


@@ -12,6 +12,11 @@ set -euo pipefail
 #
 # can be adjusted with --hash-mt (but alpine caps out at 5)
+fsize=256
+nfiles=128
+pybin=$(command -v python3 || command -v python)
+#pybin=~/.pyenv/versions/nogil-3.9.10-2/bin/python3
 [ $# -ge 1 ] || {
 echo 'need arg 1: path to copyparty-sfx.py'
 echo ' (remaining args will be passed on to copyparty,'
@@ -22,6 +27,8 @@ sfx="$1"
 shift
 sfx="$(realpath "$sfx" || readlink -e "$sfx" || echo "$sfx")"
 awk=$(command -v gawk || command -v awk)
+uname -s | grep -E MSYS && win=1 || win=
+totalsize=$((fsize*nfiles))
 # try to use /dev/shm to avoid hitting filesystems at all,
 # otherwise fallback to mktemp which probably uses /tmp
@@ -30,20 +37,24 @@ mkdir $td || td=$(mktemp -d)
 trap "rm -rf $td" INT TERM EXIT
 cd $td
-echo creating 256 MiB testfile in $td
-head -c $((1024*1024*256)) /dev/urandom > 1
+echo creating $fsize MiB testfile in $td
+sz=$((1024*1024*fsize))
+head -c $sz /dev/zero | openssl enc -aes-256-ctr -iter 1 -pass pass:k -nosalt 2>/dev/null >1 || true
+wc -c 1 | awk '$1=='$sz'{r=1}END{exit 1-r}' || head -c $sz /dev/urandom >1
-echo creating 127 symlinks to it
+echo creating $((nfiles-1)) symlinks to it
-for n in $(seq 2 128); do ln -s 1 $n; done
+for n in $(seq 2 $nfiles); do MSYS=winsymlinks:nativestrict ln -s 1 $n; done
 echo warming up cache
 cat 1 >/dev/null
 echo ok lets go
-python3 "$sfx" -p39204 -e2dsa --dbd=yolo --exit=idx -lo=t -q "$@"
+$pybin "$sfx" -p39204 -e2dsa --dbd=yolo --exit=idx -lo=t -q "$@" && err= || err=$?
+[ $win ] && [ $err = 15 ] && err= # sigterm doesn't hook on windows, ah whatever
+[ $err ] && echo ERROR $err && exit $err
 echo and the results are...
-$awk '/1 volumes in / {printf "%s MiB/s\n", 256*128/$(NF-1)}' <t
+LC_ALL=C $awk '/1 volumes in / {s=$(NF-1); printf "speed: %.1f MiB/s (time=%.2fs)\n", '$totalsize'/s, s}' <t
 echo deleting $td and exiting
@@ -52,16 +63,30 @@ echo deleting $td and exiting
 # MiB/s @ cpu or device (copyparty, pythonver, distro/os) // comment
+# 3887 @ Ryzen 5 4500U (cpp 1.9.5, nogil 3.9, fedora 39) // --hash-mt=6; laptop
+# 3732 @ Ryzen 5 4500U (cpp 1.9.5, py 3.12.1, fedora 39) // --hash-mt=6; laptop
 # 3608 @ Ryzen 5 4500U (cpp 1.9.5, py 3.11.5, fedora 38) // --hash-mt=6; laptop
 # 2726 @ Ryzen 5 4500U (cpp 1.9.5, py 3.11.5, fedora 38) // --hash-mt=4 (old-default)
 # 2202 @ Ryzen 5 4500U (cpp 1.9.5, py 3.11.5, docker-alpine 3.18.3) ??? alpine slow
 # 2719 @ Ryzen 5 4500U (cpp 1.9.5, py 3.11.2, docker-debian 12.1)
+# 7746 @ mbp 2023 m3pro (cpp 1.9.5, py 3.11.7, macos 14.1) // --hash-mt=6
+# 6687 @ mbp 2023 m3pro (cpp 1.9.5, py 3.11.7, macos 14.1) // --hash-mt=5 (default)
 # 5544 @ Intel i5-12500 (cpp 1.9.5, py 3.11.2, debian 12.0) // --hash-mt=12; desktop
 # 5197 @ Ryzen 7 3700X (cpp 1.9.5, py 3.9.18, freebsd 13.2) // --hash-mt=8; 2u server
+# 4551 @ mbp 2020 m1 (cpp 1.9.5, py 3.11.7, macos 14.2.1)
+# 4190 @ Ryzen 7 5800X (cpp 1.9.5, py 3.11.6, fedora 37) // --hash-mt=8 (vbox-VM on win10-17763.4974)
+# 3028 @ Ryzen 7 5800X (cpp 1.9.5, py 3.11.6, fedora 37) // --hash-mt=5 (vbox-VM on win10-17763.4974)
+# 2629 @ Ryzen 7 5800X (cpp 1.9.5, py 3.11.7, win10-ltsc-1809-17763.4974) // --hash-mt=5 (default)
+# 2576 @ Ryzen 7 5800X (cpp 1.9.5, py 3.11.7, win10-ltsc-1809-17763.4974) // --hash-mt=8 (hello??)
 # 2606 @ Ryzen 7 3700X (cpp 1.9.5, py 3.9.18, freebsd 13.2) // --hash-mt=4 (old-default)
 # 1436 @ Ryzen 5 5500U (cpp 1.9.5, py 3.11.4, alpine 3.18.3) // nuc
 # 1065 @ Pixel 7 (cpp 1.9.5, py 3.11.5, termux 2023-09)
+# 945 @ Pi 5B v1.0 (cpp 1.9.5, py 3.11.6, alpine 3.19.0)
+# 548 @ Pi 4B v1.5 (cpp 1.9.5, py 3.11.6, debian 11)
+# 435 @ Pi 4B v1.5 (cpp 1.9.5, py 3.11.6, alpine 3.19.0)
+# 212 @ Pi Zero2W v1.0 (cpp 1.9.5, py 3.11.6, alpine 3.19.0)
+# 10.0 @ Pi Zero W v1.1 (cpp 1.9.5, py 3.11.6, alpine 3.19.0)
 # notes,
 # podman run --rm -it --shm-size 512m --entrypoint /bin/ash localhost/copyparty-min
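the awk one-liner in the benchmark takes the second-to-last field of the "1 volumes in" log line as the elapsed seconds and divides the total payload size by it. A hedged python sketch of that same computation (the example log line below is a made-up illustration of the shape awk expects, not copyparty's exact output):

```python
# python equivalent of the awk speed calculation above: find the
# "1 volumes in" line, take its second-to-last whitespace-separated
# field as elapsed seconds, and divide the payload size by it
def parse_speed(logtext, totalsize_mib):
    for ln in logtext.splitlines():
        if "1 volumes in " in ln:
            secs = float(ln.split()[-2])
            return totalsize_mib / secs
    return None  # log line not found

# hypothetical log line; 256 MiB x 128 files = 32768 MiB total
print(parse_speed("indexed 1 volumes in 12.5 sec", 256 * 128))
```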


@@ -1,11 +1,11 @@
 FROM alpine:3.18
 WORKDIR /z
 ENV ver_asmcrypto=c72492f4a66e17a0e5dd8ad7874de354f3ccdaa5 \
-ver_hashwasm=4.9.0 \
+ver_hashwasm=4.10.0 \
 ver_marked=4.3.0 \
-ver_dompf=3.0.5 \
+ver_dompf=3.0.8 \
 ver_mde=2.18.0 \
-ver_codemirror=5.65.12 \
+ver_codemirror=5.65.16 \
 ver_fontawesome=5.13.0 \
 ver_prism=1.29.0 \
 ver_zopfli=1.0.3
@@ -80,7 +80,7 @@ RUN cd asmcrypto.js-$ver_asmcrypto \
 # build hash-wasm
-RUN cd hash-wasm \
+RUN cd hash-wasm/dist \
 && mv sha512.umd.min.js /z/dist/sha512.hw.js


@@ -1,6 +1,6 @@
 all: $(addsuffix .gz, $(wildcard *.js *.css))
 %.gz: %
-	pigz -11 -I 573 $<
+	pigz -11 -I 2048 $<
 # pigz -11 -J 34 -I 100 -F < $< > $@.first


@@ -17,11 +17,15 @@ docker run --rm -it -u 1000 -p 3923:3923 -v /mnt/nas:/w -v $PWD/cfgdir:/cfg copy
 * if you are using rootless podman, remove `-u 1000`
 * if you have selinux, append `:z` to all `-v` args (for example `-v /mnt/nas:/w:z`)
-i'm unfamiliar with docker-compose and alternatives so let me know if this section could be better 🙏
+this example is also available as a podman-compatible [docker-compose yaml](https://github.com/9001/copyparty/blob/hovudstraum/docs/examples/docker/basic-docker-compose); example usage: `docker-compose up` (you may need to `systemctl enable --now podman.socket` or similar)
+i'm not very familiar with containers, so let me know if this section could be better 🙏
 ## configuration
+> this section basically explains how the [docker-compose yaml](https://github.com/9001/copyparty/blob/hovudstraum/docs/examples/docker/basic-docker-compose) works, so you may look there instead
 the container has the same default config as the sfx and the pypi module, meaning it will listen on port 3923 and share the "current folder" (`/w` inside the container) as read-write for anyone
 the recommended way to configure copyparty inside a container is to mount a folder which has one or more [config files](https://github.com/9001/copyparty/blob/hovudstraum/docs/example.conf) inside; `-v /your/config/folder:/cfg`
@@ -75,6 +79,15 @@ or using commandline arguments,
 ```
+# faq
+the following advice is best-effort and not guaranteed to be entirely correct
+* q: starting a rootless container on debian 12 fails with `failed to register layer: lsetxattr user.overlay.impure /etc: operation not supported`
+* a: docker's default rootless configuration on debian is to use the overlay2 storage driver; this does not work. Your options are to replace docker with podman (good choice), or to configure docker to use the `fuse-overlayfs` storage driver
 # build the images yourself
 basically `./make.sh hclean pull img push` but see [devnotes.md](./devnotes.md)
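switching rootless docker to `fuse-overlayfs` as suggested in the faq is a one-key config change; a minimal sketch (the file path and key are per docker's rootless-mode docs, so verify against your setup, and note that the `fuse-overlayfs` package must be installed and the docker service restarted afterwards):

```json
{
  "storage-driver": "fuse-overlayfs"
}
```

for rootless docker this typically goes in `~/.config/docker/daemon.json`; for rootful docker it would be `/etc/docker/daemon.json` instead.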

scripts/logpack.sh Executable file

@@ -0,0 +1,73 @@
#!/bin/bash
set -e
# recompress logs so they decompress faster + save some space;
# * will not recurse into subfolders
# * each file in current folder gets recompressed to zstd; input file is DELETED
# * any xz-compressed logfiles are decompressed before converting to zstd
# * SHOULD ignore and skip files which are currently open; SHOULD be safe to run while copyparty is running
# for files larger than $cutoff, compress with `zstd -T0`
# (otherwise do several files in parallel (scales better))
cutoff=400M
# osx support:
# port install findutils gsed coreutils
command -v gfind >/dev/null &&
command -v gsed >/dev/null &&
command -v gsort >/dev/null && {
find() { gfind "$@"; }
sed() { gsed "$@"; }
sort() { gsort "$@"; }
}
packfun() {
local jobs=$1 fn="$2"
printf '%s\n' "$fn" | grep -qF .zst && return
local of="$(printf '%s\n' "$fn" | sed -r 's/\.(xz|txt)/.zst/')"
[ "$fn" = "$of" ] &&
of="$of.zst"
[ -e "$of" ] &&
echo "SKIP: output file exists: $of" &&
return
lsof -- "$fn" 2>/dev/null | grep -E .. &&
printf "SKIP: file in use: %s\n\n" $fn &&
return
# determine by header; old copyparty versions would produce xz output without .xz names
head -c3 "$fn" | grep -qF 7z &&
cmd="xz -dkc" || cmd="cat"
printf '<%s> T%d: %s\n' "$cmd" $jobs "$of"
$cmd <"$fn" >/dev/null || {
echo "ERROR: uncompress failed: $fn"
return
}
$cmd <"$fn" | zstd --long -19 -T$jobs >"$of"
touch -r "$fn" -- "$of"
cmp <($cmd <"$fn") <(zstd -d <"$of") || {
echo "ERROR: data mismatch: $of"
mv "$of"{,.BAD}
return
}
rm -- "$fn"
}
# do small files in parallel first (in descending size);
# each file can use 4 threads in case the cutoff is poor
export -f packfun
export -f sed 2>/dev/null || true
find -maxdepth 1 -type f -size -$cutoff -printf '%s %p\n' |
sort -nr | sed -r 's`[^ ]+ ``; s`^\./``' | tr '\n' '\0' |
xargs "$@" -0i -P$(nproc) bash -c 'packfun 4 "$@"' _ {}
# then the big ones, letting each file use the whole cpu
for f in *; do packfun 0 "$f"; done
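the safety property of logpack.sh above is that an input file is only deleted after the recompressed copy has been decompressed and byte-compared against the original (the `cmp` step). The same guarantee in a hedged python sketch (a hypothetical standalone helper; gzip stands in for zstd so it stays stdlib-only):

```python
# compress-verify-delete, mirroring the pattern in logpack.sh:
# never remove the input until the round-trip has been verified
import gzip
import os

def pack(fn):
    of = fn + ".gz"
    if os.path.exists(of):
        print("SKIP: output file exists:", of)
        return
    with open(fn, "rb") as f:
        data = f.read()
    with gzip.open(of, "wb") as f:
        f.write(data)
    # verify the round-trip before deleting the original
    with gzip.open(of, "rb") as f:
        if f.read() != data:
            os.rename(of, of + ".BAD")
            raise IOError("data mismatch: " + of)
    os.remove(fn)
```

like the shell script, a mismatch renames the bad output to `*.BAD` and keeps the original intact.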


@@ -77,13 +77,14 @@ function have() {
 }
 function load_env() {
-. buildenv/bin/activate
-have setuptools
-have wheel
-have build
-have twine
-have jinja2
-have strip_hints
+. buildenv/bin/activate || return 1
+have setuptools &&
+have wheel &&
+have build &&
+have twine &&
+have jinja2 &&
+have strip_hints &&
+return 0 || return 1
 }
 load_env || {


@@ -26,8 +26,9 @@ help() { exec cat <<'EOF'
 # _____________________________________________________________________
 # core features:
 #
-# `no-ftp` saves ~33k by removing the ftp server and filetype detector,
-#   disabling --ftpd and --magic
+# `no-ftp` saves ~30k by removing the ftp server, disabling --ftp
+#
+# `no-tfp` saves ~10k by removing the tftp server, disabling --tftp
 #
 # `no-smb` saves ~3.5k by removing the smb / cifs server
 #
@@ -114,6 +115,7 @@ while [ ! -z "$1" ]; do
 gz) use_gz=1 ; ;;
 gzz) shift;use_gzz=$1;use_gz=1; ;;
 no-ftp) no_ftp=1 ; ;;
+no-tfp) no_tfp=1 ; ;;
 no-smb) no_smb=1 ; ;;
 no-zm) no_zm=1 ; ;;
 no-fnt) no_fnt=1 ; ;;
@@ -165,7 +167,8 @@ necho() {
 [ $repack ] && {
 old="$tmpdir/pe-copyparty.$(id -u)"
 echo "repack of files in $old"
-cp -pR "$old/"*{py2,py37,j2,copyparty} .
+cp -pR "$old/"*{py2,py37,magic,j2,copyparty} .
+cp -pR "$old/"*partftpy . || true
 cp -pR "$old/"*ftp . || true
 }
@@ -221,6 +224,16 @@ necho() {
 mkdir ftp/
 mv pyftpdlib ftp/
+necho collecting partftpy
+f="../build/partftpy-0.2.0.tar.gz"
+[ -e "$f" ] ||
+(url=https://files.pythonhosted.org/packages/64/4a/360dde1e7277758a4ccb0d6434ec661042d9d745aa6c3baa9ec0699df3e9/partftpy-0.2.0.tar.gz;
+wget -O$f "$url" || curl -L "$url" >$f)
+tar -zxf $f
+mv partftpy-*/partftpy .
+rm -rf partftpy-* partftpy/bin
 necho collecting python-magic
 v=0.4.27
 f="../build/python-magic-$v.tar.gz"
@@ -234,7 +247,6 @@ necho() {
 rm -rf python-magic-*
 rm magic/compat.py
 iawk '/^def _add_compat/{o=1} !o; /^_add_compat/{o=0}' magic/__init__.py
-mv magic ftp/ # doesn't provide a version label anyways
 # enable this to dynamically remove type hints at startup,
 # in case a future python version can use them for performance
@@ -409,8 +421,10 @@ iawk '/^ {0,4}[^ ]/{s=0}/^ {4}def (serve_forever|_loop)/{s=1}!s' ftp/pyftpdlib/s
 rm -f ftp/pyftpdlib/{__main__,prefork}.py
 [ $no_ftp ] &&
-rm -rf copyparty/ftpd.py ftp &&
-sed -ri '/\.ftp/d' copyparty/svchub.py
+rm -rf copyparty/ftpd.py ftp
+[ $no_tfp ] &&
+rm -rf copyparty/tftpd.py partftpy
 [ $no_smb ] &&
 rm -f copyparty/smbd.py
@@ -584,7 +598,7 @@ nf=$(ls -1 "$zdir"/arc.* 2>/dev/null | wc -l)
 echo gen tarlist
-for d in copyparty j2 py2 py37 ftp; do find $d -type f; done | # strip_hints
+for d in copyparty partftpy magic j2 py2 py37 ftp; do find $d -type f || true; done | # strip_hints
 sed -r 's/(.*)\.(.*)/\2 \1/' | LC_ALL=C sort |
 sed -r 's/([^ ]*) (.*)/\2.\1/' | grep -vE '/list1?$' > list1


@@ -28,5 +28,5 @@ ba91ab0518c61eff13e5612d9e6b532940813f6b56e6ed81ea6c7c4d45acee4d98136a383a250675
 7f8f4daa4f4f2dbf24cdd534b2952ee3fba6334eb42b37465ccda3aa1cccc3d6204aa6bfffb8a83bf42ec59c702b5b5247d4c8ee0d4df906334ae53072ef8c4c MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl
 8a6e2b13a2ec4ef914a5d62aad3db6464d45e525a82e07f6051ed10474eae959069e165dba011aefb8207cdfd55391d73d6f06362c7eb247b08763106709526e mutagen-1.47.0-py3-none-any.whl
 656015f5cc2c04aa0653ee5609c39a7e5f0b6a58c84fe26b20bd070c52d20b4effb810132f7fb771168483e9fd975cc3302837dd7a1a687ee058b0460c857cc4 packaging-23.2-py3-none-any.whl
-6401616fdfdd720d1aaa9a0ed1398d00664b28b6d84517dff8d1f9c416452610c6afa64cfb012a78e61d1cf4f6d0784eca6e7610957859e511f15bc6f3b3bd53 Pillow-10.1.0-cp311-cp311-win_amd64.whl
+424e20dc7263a31d524307bc39ed755a9dd82f538086fff68d98dd97e236c9b00777a8ac2e3853081b532b0e93cef44983e74d0ab274877440e8b7341b19358a pillow-10.2.0-cp311-cp311-win_amd64.whl
-36442c017d8fc603745d33ca888b5b1194644103cbe1ff53e32d9b0355e290d5efac655fa1ae1b8e552ad8468878dc600d550c1158224260ca463991442e5264 python-3.11.6-amd64.exe
+e6bdbae1affd161e62fc87407c912462dfe875f535ba9f344d0c4ade13715c947cd3ae832eff60f1bad4161938311d06ac8bc9b52ef203f7b0d9de1409f052a5 python-3.11.8-amd64.exe


@@ -28,8 +28,8 @@ fns=(
 MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl
 mutagen-1.47.0-py3-none-any.whl
 packaging-23.2-py3-none-any.whl
-Pillow-10.1.0-cp311-cp311-win_amd64.whl
+pillow-10.2.0-cp311-cp311-win_amd64.whl
-python-3.11.6-amd64.exe
+python-3.11.8-amd64.exe
 )
 [ $w7 ] && fns+=(
 pyinstaller-5.13.2-py3-none-win32.whl


@@ -1,6 +1,17 @@
 #!/bin/bash
 set -ex
+# osx support
+gtar=$(command -v gtar || command -v gnutar) || true
+[ ! -z "$gtar" ] && command -v gfind >/dev/null && {
+tar() { $gtar "$@"; }
+sed() { gsed "$@"; }
+find() { gfind "$@"; }
+sort() { gsort "$@"; }
+command -v grealpath >/dev/null &&
+realpath() { grealpath "$@"; }
+}
 rm -rf unt
 mkdir -p unt/srv
 cp -pR copyparty tests unt/
@@ -30,9 +41,11 @@ for py in python{2,3}; do
 [ "${1:0:6}" = python ] && [ "$1" != $py ] && continue
 PYTHONPATH=
-[ $py = python2 ] && PYTHONPATH=../scripts/py2:../sfx/py37
+[ $py = python2 ] && PYTHONPATH=../scripts/py2:../sfx/py37:../sfx/j2
 export PYTHONPATH
+[ $py = python2 ] && py=$(command -v python2.7 || echo $py)
 nice $py -m unittest discover -s tests >/dev/null &
 pids+=($!)
 done


@@ -54,6 +54,7 @@ copyparty/sutil.py,
 copyparty/svchub.py,
 copyparty/szip.py,
 copyparty/tcpsrv.py,
+copyparty/tftpd.py,
 copyparty/th_cli.py,
 copyparty/th_srv.py,
 copyparty/u2idx.py,


@@ -262,7 +262,7 @@ def unpack():
 final = opj(top, name)
 san = opj(final, "copyparty/up2k.py")
 for suf in range(0, 9001):
-withpid = "{}.{}.{}".format(name, os.getpid(), suf)
+withpid = "%s.%d.%s" % (name, os.getpid(), suf)
 mine = opj(top, withpid)
 if not ofe(mine):
 break
@@ -285,8 +285,8 @@ def unpack():
 ck = hashfile(tar)
 if ck != CKSUM:
-t = "\n\nexpected {} ({} byte)\nobtained {} ({} byte)\nsfx corrupt"
-raise Exception(t.format(CKSUM, SIZE, ck, sz))
+t = "\n\nexpected %s (%d byte)\nobtained %s (%d byte)\nsfx corrupt"
+raise Exception(t % (CKSUM, SIZE, ck, sz))
 with tarfile.open(tar, "r:bz2") as tf:
 # this is safe against traversal


@@ -84,7 +84,7 @@ args = {
 "version": about["__version__"],
 "description": (
 "Portable file server with accelerated resumable uploads, "
-+ "deduplication, WebDAV, FTP, zeroconf, media indexer, "
++ "deduplication, WebDAV, FTP, TFTP, zeroconf, media indexer, "
 + "video thumbnails, audio transcoding, and write-only folders"
 ),
 "long_description": long_description,
@@ -111,6 +111,7 @@ args = {
 "Programming Language :: Python :: Implementation :: CPython",
 "Programming Language :: Python :: Implementation :: Jython",
 "Programming Language :: Python :: Implementation :: PyPy",
+"Operating System :: OS Independent",
 "Environment :: Console",
 "Environment :: No Input/Output (Daemon)",
 "Intended Audience :: End Users/Desktop",
@@ -140,6 +141,7 @@ args = {
 "audiotags": ["mutagen"],
 "ftpd": ["pyftpdlib"],
 "ftps": ["pyftpdlib", "pyopenssl"],
+"tftpd": ["partftpy>=0.2.0"],
 "pwhash": ["argon2-cffi"],
 },
 "entry_points": {"console_scripts": ["copyparty = copyparty.__main__:main"]},


@@ -1,16 +1,16 @@
 #!/usr/bin/env python3
+import itertools
 import re
 import sys
 import time
-import itertools
-from . import util as tu
-from .util import Cfg
 from copyparty.authsrv import AuthSrv
 from copyparty.httpcli import HttpCli
+from . import util as tu
+from .util import Cfg
 atlas = ["%", "25", "2e", "2f", ".", "/"]


@@ -1,7 +1,7 @@
 #!/usr/bin/env python3
-import sys
 import runpy
+import sys
 host = sys.argv[1]
 sys.argv = sys.argv[:1] + sys.argv[2:]

tests/test_dots.py Normal file

@@ -0,0 +1,111 @@
#!/usr/bin/env python3
# coding: utf-8
from __future__ import print_function, unicode_literals
import io
import os
import shutil
import tarfile
import tempfile
import unittest
from copyparty.authsrv import AuthSrv
from copyparty.httpcli import HttpCli
from copyparty.u2idx import U2idx
from copyparty.up2k import Up2k
from tests import util as tu
from tests.util import Cfg
def hdr(query, uname):
h = "GET /%s HTTP/1.1\r\nPW: %s\r\nConnection: close\r\n\r\n"
return (h % (query, uname)).encode("utf-8")
class TestHttpCli(unittest.TestCase):
def setUp(self):
self.td = tu.get_ramdisk()
def tearDown(self):
os.chdir(tempfile.gettempdir())
shutil.rmtree(self.td)
def test(self):
td = os.path.join(self.td, "vfs")
os.mkdir(td)
os.chdir(td)
# topDir volA volA/*dirA .volB .volB/*dirB
spaths = " t .t a a/da a/.da .b .b/db .b/.db"
for n, dirpath in enumerate(spaths.split(" ")):
if dirpath:
os.makedirs(dirpath)
for pfx in "f", ".f":
filepath = pfx + str(n)
if dirpath:
filepath = os.path.join(dirpath, filepath)
with open(filepath, "wb") as f:
f.write(filepath.encode("utf-8"))
vcfg = [
".::r,u1:r.,u2",
"a:a:r,u1:r,u2",
".b:.b:r.,u1:r,u2"
]
self.args = Cfg(v=vcfg, a=["u1:u1", "u2:u2"], e2dsa=True)
self.asrv = AuthSrv(self.args, self.log)
self.assertEqual(self.tardir("", "u1"), "f0 t/f1 a/f3 a/da/f4")
self.assertEqual(self.tardir(".t", "u1"), "f2")
self.assertEqual(self.tardir(".b", "u1"), ".f6 f6 .db/.f8 .db/f8 db/.f7 db/f7")
zs = ".f0 f0 .t/.f2 .t/f2 t/.f1 t/f1 .b/f6 .b/db/f7 a/f3 a/da/f4"
self.assertEqual(self.tardir("", "u2"), zs)
self.assertEqual(self.curl("?tar", "x")[1][:17], "\nJ2EOT")
# search
up2k = Up2k(self)
u2idx = U2idx(self)
allvols = list(self.asrv.vfs.all_vols.values())
x = u2idx.search("u1", allvols, "", 999)
x = " ".join(sorted([x["rp"] for x in x[0]]))
# u1 can see dotfiles in volB so they should be included
xe = ".b/.db/.f8 .b/.db/f8 .b/.f6 .b/db/.f7 .b/db/f7 .b/f6 a/da/f4 a/f3 f0 t/f1"
self.assertEqual(x, xe)
x = u2idx.search("u2", allvols, "", 999)
x = " ".join(sorted([x["rp"] for x in x[0]]))
self.assertEqual(x, ".f0 .t/.f2 .t/f2 a/da/f4 a/f3 f0 t/.f1 t/f1")
self.args = Cfg(v=vcfg, a=["u1:u1", "u2:u2"], dotsrch=False)
self.asrv = AuthSrv(self.args, self.log)
u2idx = U2idx(self)
x = u2idx.search("u1", self.asrv.vfs.all_vols.values(), "", 999)
x = " ".join(sorted([x["rp"] for x in x[0]]))
# u1 can see dotfiles in volB so they should be included
xe = "a/da/f4 a/f3 f0 t/f1"
self.assertEqual(x, xe)
def tardir(self, url, uname):
h, b = self.curl("/" + url + "?tar", uname, True)
tar = tarfile.open(fileobj=io.BytesIO(b), mode="r|").getnames()
top = ("top" if not url else url.lstrip(".").split("/")[0]) + "/"
assert len(tar) == len([x for x in tar if x.startswith(top)])
return " ".join([x[len(top):] for x in tar])
def curl(self, url, uname, binary=False):
conn = tu.VHttpConn(self.args, self.asrv, self.log, hdr(url, uname))
HttpCli(conn).run()
if binary:
h, b = conn.s._reply.split(b"\r\n\r\n", 1)
return [h.decode("utf-8"), b]
return conn.s._reply.decode("utf-8").split("\r\n\r\n", 1)
def log(self, src, msg, c=0):
print(msg)
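the `curl()` helper in test_dots.py above talks to copyparty over an in-memory connection and splits the raw reply on the first blank line into headers and body; that split is plain python and easy to demo standalone (a hedged sketch of the pattern, not the test helper itself):

```python
# split a raw HTTP/1.1 reply into (header-text, body-bytes), the same
# way the curl() test helper above does with conn.s._reply
def split_reply(raw: bytes):
    head, body = raw.split(b"\r\n\r\n", 1)
    return head.decode("utf-8"), body

h, b = split_reply(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nhi")
print(h.split("\r\n")[0])  # HTTP/1.1 200 OK
print(b)  # b'hi'
```

keeping the body as bytes matters for the binary case (tar downloads); only the header block is safe to decode as utf-8.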


@@ -4,9 +4,9 @@ from __future__ import print_function, unicode_literals
 import re
 import unittest
 from xml.etree import ElementTree as ET
-from copyparty.dxml import parse_xml, BadXML, mkenod, mktnod
+from copyparty.dxml import BadXML, mkenod, mktnod, parse_xml
 ET.register_namespace("D", "DAV:")


@@ -4,18 +4,17 @@ from __future__ import print_function, unicode_literals
 import io
 import os
-import time
-import shutil
 import pprint
+import shutil
 import tarfile
 import tempfile
+import time
 import unittest
-from tests import util as tu
-from tests.util import Cfg, eprint
 from copyparty.authsrv import AuthSrv
 from copyparty.httpcli import HttpCli
+from tests import util as tu
+from tests.util import Cfg, eprint
 def hdr(query):


@@ -2,19 +2,17 @@
 # coding: utf-8
 from __future__ import print_function, unicode_literals
-import os
 import json
+import os
 import shutil
 import tempfile
 import unittest
-from textwrap import dedent
+from copyparty import util
+from copyparty.authsrv import VFS, AuthSrv
 from tests import util as tu
 from tests.util import Cfg
-from copyparty.authsrv import AuthSrv, VFS
-from copyparty import util
 class TestVFS(unittest.TestCase):
 def setUp(self):
@@ -176,11 +174,11 @@ class TestVFS(unittest.TestCase):
 self.assertEqual(len(vfs.nodes), 1)
 self.assertEqual(n.vpath, "a")
 self.assertEqual(n.realpath, os.path.join(td, "a"))
-self.assertAxs(n.axs.uread, ["*"])
+self.assertAxs(n.axs.uread, ["*", "k"])
 self.assertAxs(n.axs.uwrite, [])
-perm_na = (False, False, False, False, False, False, False)
-perm_rw = (True, True, False, False, False, False, False)
-perm_ro = (True, False, False, False, False, False, False)
+perm_na = (False, False, False, False, False, False, False, False)
+perm_rw = (True, True, False, False, False, False, False, False)
+perm_ro = (True, False, False, False, False, False, False, False)
 self.assertEqual(vfs.can_access("/", "*"), perm_na)
 self.assertEqual(vfs.can_access("/", "k"), perm_rw)
 self.assertEqual(vfs.can_access("/a", "*"), perm_ro)
@@ -233,7 +231,7 @@ class TestVFS(unittest.TestCase):
 cfg_path = os.path.join(self.td, "test.cfg")
 with open(cfg_path, "wb") as f:
 f.write(
-dedent(
+util.dedent(
 """
 u a:123
 u asd:fgh:jkl


@@ -3,23 +3,23 @@
 from __future__ import print_function, unicode_literals
 import os
-import re
-import sys
-import time
-import shutil
-import jinja2
-import threading
-import tempfile
 import platform
+import re
+import shutil
 import subprocess as sp
+import sys
+import tempfile
+import threading
+import time
 from argparse import Namespace
+import jinja2
 WINDOWS = platform.system() == "Windows"
 ANYWIN = WINDOWS or sys.platform in ["msys"]
 MACOS = platform.system() == "Darwin"
-J2_ENV = jinja2.Environment(loader=jinja2.BaseLoader)
+J2_ENV = jinja2.Environment(loader=jinja2.BaseLoader)  # type: ignore
 J2_FILES = J2_ENV.from_string("{{ files|join('\n') }}\nJ2EOT")
@@ -43,7 +43,8 @@ if MACOS:
 from copyparty.__init__ import E
 from copyparty.__main__ import init_E
-from copyparty.util import Unrecv, FHC, Garda
+from copyparty.u2idx import U2idx
+from copyparty.util import FHC, Garda, Unrecv
 init_E(E)
@@ -83,8 +84,8 @@ def get_ramdisk():
 for _ in range(10):
 try:
 _, _ = chkcmd(["diskutil", "eraseVolume", "HFS+", "cptd", devname])
-with open("/Volumes/cptd/.metadata_never_index", "w") as f:
-f.write("orz")
+with open("/Volumes/cptd/.metadata_never_index", "wb") as f:
+f.write(b"orz")
 try:
 shutil.rmtree("/Volumes/cptd/.fseventsd")
@@ -99,67 +100,76 @@ def get_ramdisk():
raise Exception("ramdisk creation failed") raise Exception("ramdisk creation failed")
ret = os.path.join(tempfile.gettempdir(), "copyparty-test") ret = os.path.join(tempfile.gettempdir(), "copyparty-test")
try: if not os.path.isdir(ret):
os.mkdir(ret) os.mkdir(ret)
finally:
return subdir(ret) return subdir(ret)
 class Cfg(Namespace):
-    def __init__(self, a=None, v=None, c=None):
+    def __init__(self, a=None, v=None, c=None, **ka0):
         ka = {}

-        ex = "daw dav_auth dav_inf dav_mac dav_rt dotsrch e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp ed emp exp force_js getmod grid hardlink ih ihead magic never_symlink nid nih no_acode no_athumb no_dav no_dedup no_del no_dupe no_logues no_mv no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw rand smb th_no_crop vague_403 vc ver xdev xlink xvol"
+        ex = "daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp ed emp exp force_js getmod grid hardlink ih ihead magic never_symlink nid nih no_acode no_athumb no_dav no_dedup no_del no_dupe no_lifetime no_logues no_mv no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw q rand smb srch_dbg stats th_no_crop vague_403 vc ver xdev xlink xvol"
         ka.update(**{k: False for k in ex.split()})

-        ex = "dotpart no_rescan no_sendfile no_voldump plain_ip"
+        ex = "dotpart dotsrch no_dhash no_fastboot no_rescan no_sendfile no_voldump re_dhash plain_ip"
         ka.update(**{k: True for k in ex.split()})

-        ex = "ah_cli ah_gen css_browser hist js_browser no_forget no_hash no_idx nonsus_urls"
+        ex = "ah_cli ah_gen css_browser hist ipa_re js_browser no_forget no_hash no_idx nonsus_urls"
         ka.update(**{k: None for k in ex.split()})

-        ex = "s_thead s_tbody th_convt"
+        ex = "hash_mt srch_time u2j"
+        ka.update(**{k: 1 for k in ex.split()})
+
+        ex = "reg_cap s_thead s_tbody th_convt"
         ka.update(**{k: 9 for k in ex.split()})

-        ex = "df loris re_maxage rproxy rsp_jtr rsp_slp s_wr_slp theme themes turbo"
+        ex = "db_act df loris re_maxage rproxy rsp_jtr rsp_slp s_wr_slp snap_wri theme themes turbo"
         ka.update(**{k: 0 for k in ex.split()})

-        ex = "ah_alg bname doctitle favico html_head lg_sbf log_fk md_sbf name textfiles unlist vname R RS SR"
+        ex = "ah_alg bname doctitle exit favico idp_h_usr html_head lg_sbf log_fk md_sbf name textfiles unlist vname R RS SR"
         ka.update(**{k: "" for k in ex.split()})

         ex = "on403 on404 xad xar xau xban xbd xbr xbu xiu xm"
         ka.update(**{k: [] for k in ex.split()})

-        ex = "exp_lg exp_md"
+        ex = "exp_lg exp_md th_coversd"
         ka.update(**{k: {} for k in ex.split()})

+        ka.update(ka0)
+
         super(Cfg, self).__init__(
             a=a or [],
             v=v or [],
             c=c,
             E=E,
             dbd="wal",
-            s_wr_sz=512 * 1024,
-            th_size="320x256",
             fk_salt="a" * 16,
-            unpost=600,
-            u2sort="s",
-            u2ts="c",
-            sort="href",
-            mtp=[],
+            lang="eng",
+            log_badpwd=1,
+            logout=573,
             mte={"a": True},
             mth={},
-            lang="eng",
-            logout=573,
+            mtp=[],
+            rm_retry="0/0",
+            s_wr_sz=512 * 1024,
+            sort="href",
+            srch_hits=99999,
+            th_size="320x256",
+            u2sort="s",
+            u2ts="c",
+            unpost=600,
+            warksalt="hunter2",
             **ka
         )
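The `**ka0` parameter added to `Cfg.__init__` lets each test override any of the bulk defaults with a plain keyword argument, since `ka.update(ka0)` runs after all the default-group updates. A minimal standalone sketch of that pattern (`MiniCfg` is a hypothetical stand-in, not the real `Cfg`; the flag names are borrowed from the diff):

```python
from argparse import Namespace

class MiniCfg(Namespace):
    def __init__(self, **ka0):
        ka = {}
        # bulk defaults built from space-separated flag lists
        ka.update({k: False for k in "e2d e2ds e2dsa".split()})
        ka.update({k: 0 for k in "df loris turbo".split()})
        ka.setdefault("dbd", "wal")
        # per-test overrides are applied last, so they win
        ka.update(ka0)
        super(MiniCfg, self).__init__(**ka)
```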
 class NullBroker(object):
-    def say(*args):
+    def say(self, *args):
         pass

-    def ask(*args):
+    def ask(self, *args):
         pass
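The `NullBroker` fix above is subtle: a method defined without `self` still accepts bound calls, because the instance is silently swallowed into `*args`, so the bug only shows up when the argument count matters or a type checker looks at the signature. A standalone sketch contrasting the two forms (class names here are illustrative, not from the codebase):

```python
class Sloppy(object):
    def say(*args):  # old style: the instance hides inside *args
        return len(args)

class Fixed(object):
    def say(self, *args):  # new style: explicit receiver
        return len(args)
```

Calling `Sloppy().say("x")` reports two arguments (the instance plus `"x"`), while `Fixed().say("x")` correctly reports one.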
@@ -186,11 +196,16 @@ class VSock(object):

 class VHttpSrv(object):
-    def __init__(self):
+    def __init__(self, args, asrv, log):
+        self.args = args
+        self.asrv = asrv
+        self.log = log
+
         self.broker = NullBroker()
         self.prism = None
         self.bans = {}
         self.nreq = 0
+        self.nsus = 0

         aliases = ["splash", "browser", "browser2", "msg", "md", "mde"]
         self.j2 = {x: J2_FILES for x in aliases}
@@ -200,31 +215,39 @@ class VHttpSrv(object):
         self.g403 = Garda("")
         self.gurl = Garda("")

+        self.u2idx = None
         self.ptn_cc = re.compile(r"[\x00-\x1f]")

     def cachebuster(self):
         return "a"

+    def get_u2idx(self):
+        self.u2idx = self.u2idx or U2idx(self)
+        return self.u2idx
+
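`get_u2idx` above is a lazy-initialization accessor: the index object is built on the first request and cached on the instance, so tests that never search never pay for constructing it. A minimal sketch of the same pattern, with `ExpensiveIndex` as a hypothetical stand-in for `U2idx`:

```python
class ExpensiveIndex(object):
    instances = 0  # track constructions to show caching works

    def __init__(self, hsrv):
        ExpensiveIndex.instances += 1
        self.hsrv = hsrv

class Srv(object):
    def __init__(self):
        self.u2idx = None  # not built until first use

    def get_u2idx(self):
        # "x = x or ctor()" caches the first result; later calls reuse it
        self.u2idx = self.u2idx or ExpensiveIndex(self)
        return self.u2idx
```

One design note: the `or` idiom re-creates the object if it is ever falsy, so it suits objects (always truthy) but not cached numbers or strings.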
 class VHttpConn(object):
     def __init__(self, args, asrv, log, buf):
+        self.t0 = time.time()
+
         self.s = VSock(buf)
-        self.sr = Unrecv(self.s, None)
+        self.sr = Unrecv(self.s, None)  # type: ignore
+        self.aclose = {}
         self.addr = ("127.0.0.1", "42069")
         self.args = args
         self.asrv = asrv
-        self.nid = None
+        self.bans = {}
+        self.freshen_pwd = 0.0
+        self.hsrv = VHttpSrv(args, asrv, log)
+        self.ico = None
+        self.lf_url = None
         self.log_func = log
         self.log_src = "a"
-        self.lf_url = None
-        self.hsrv = VHttpSrv()
-        self.bans = {}
-        self.aclose = {}
-        self.u2fh = FHC()
         self.mutex = threading.Lock()
-        self.nreq = -1
+        self.u2mutex = threading.Lock()
         self.nbyte = 0
-        self.ico = None
+        self.nid = None
+        self.nreq = -1
         self.thumbcli = None
-        self.freshen_pwd = 0.0
-        self.t0 = time.time()
+        self.u2fh = FHC()
+        self.get_u2idx = self.hsrv.get_u2idx