Compare commits

133 Commits

Author SHA1 Message Date
ed
5a009a2a64 v1.14.4 2024-09-02 01:08:41 +00:00
ed
d9e9526247 fix js typo (could panic on network glitches) 2024-09-02 00:58:15 +00:00
ed
5a8c3b8be0 optimize test_httpcli.py too, from 1.64 to 1.51s 2024-08-31 22:03:06 +00:00
ed
1c9c17fb9b optimize test_dedup.py
* 7.71s originally
* 4.51s with fstab reuse
* 4.34s without db_wd
* 4.02s with no pp start
* 3.73s with Cfg reuse
2024-08-31 21:54:47 +00:00
ed
7f82449179 changelog: cleanup historic entries 2024-08-31 20:39:37 +00:00
ed
e455ec994e logo tweaks (kerning, footer-slant) 2024-08-31 20:37:58 +00:00
ed
c111027420 update pkgs to 1.14.3 2024-08-30 23:29:47 +00:00
ed
abcdf479e6 v1.14.3 2024-08-30 23:11:22 +00:00
ed
ad2371f810 shares: add revival and expiration extension 2024-08-30 22:25:50 +00:00
ed
c4e2b0f95f doc-viewer: always wordwrap code 2024-08-30 22:13:10 +00:00
ed
3da62ec234 fix dedup bug as of v1.13.8:
* v1.13.8 broke collision resolving for non-identical files;
   the correct filename was reserved but not symlinked to
   the original file, leaving a zerobyte file instead.
   See v1.14.3 github release notes for remediation info

* add sanchecks for early detection of index/fs desync;
   saves performance and gives less confusing logs
2024-08-30 22:06:25 +00:00
ed
01233991f3 tftp: support unmapped root 2024-08-30 16:08:50 +00:00
ed
ee35974273 readme hacking 2024-08-29 22:17:13 +00:00
ed
7037e7365e add logo 2024-08-29 22:00:08 +00:00
ed
03b13e8a1c sfx-customizer:
* better translation stripping
* add support in bruteforcer
* add examples

and fix login-banner usage example
2024-08-28 05:53:26 +00:00
ed
cdd2da0208 update pkgs to 1.14.2 2024-08-23 23:43:46 +00:00
ed
cec0e0cf02 v1.14.2 2024-08-23 23:07:18 +00:00
ed
8122ddedfe share multiple files (#84);
if files (one or more) are selected for sharing, then
a virtual folder is created to hold the selected files

if a single file is selected for sharing, then
the returned URL will point directly to that file

and fix some shares-related bugs:
* password coalescing
* log-spam on reload
2024-08-23 22:55:31 +00:00
ultwcz
55a77c5e89 Chinese translation fixes (#95)
* fix: translation: changing from `" "` to `' '` for some strings;
	using `./scripts/tlcheck.sh eng chi copyparty/web/browser.js`

* fix: translation: Check the newly added Chinese translation
2024-08-23 08:14:24 +00:00
ed
461f31582d add IDs for ricing (#93) + fix a11y bleed 2024-08-22 20:14:08 +00:00
ed
f356faa278 u2c: support multiple exclusion patterns 2024-08-22 20:03:25 +00:00
ed
9f034d9c4c fix confusing logmsg for zerobyte files 2024-08-22 19:54:10 +00:00
ed
ba52590ae4 translation tweaks 2024-08-22 19:52:20 +00:00
ultwcz
92edea1de5 add translation: Chinese (#94) 2024-08-22 17:19:16 +00:00
ed
7ff46966da fix some issues with shares mentioned in #84;
* crash when root volume is unmapped
* rephrase login-page for shares
* add chrome support (lol)
* fix confusing helptext
* improve ux
  * placeholders in share creator
  * button to disable expiration in share creator
  * human-readable timestamps in share listing
2024-08-19 21:38:47 +00:00
ed
fca70b3508 update pkgs to 1.14.1 2024-08-19 00:24:52 +00:00
ed
70009cd984 v1.14.1 2024-08-19 00:14:44 +00:00
ed
8d8b88c4fd update pkgs to 1.14.0 2024-08-18 23:36:57 +00:00
ed
c4b0cccefd v1.14.0 2024-08-18 23:11:36 +00:00
ed
7c2beba555 add file/folder sharing; closes #84 2024-08-18 22:49:13 +00:00
ed
7d8d94388b invert volume scrollwheel
<daniiooo> also iirc some time ago we were talking about the scroll for volume ed
<daniiooo> and how its reversed
<ed> is it reversed though? most people said it worked the way they expected
<daniiooo> fuck maybe i agreed back then too
<daniiooo> its the opposite in both aimp and mpv though
<ed> is it w
<tatsu> its a feature
<Devices> it's to keep you on your toes
<Devices> consciously use copyparty
<ed> i can invert it no problem
<ed> would be a nice surprise for anyone who's used it
<Flaminator> Scroll down turns the audio down right?
<daniiooo> ye it makes it louder in cpp
<Devices> why would scrolling down make something louder
<Vin> yeah that's odd
<Vin> scrolling up should make it louder
<Flaminator> It's what it does for me in winamp, mpc-hc and foobar2000.
<daniiooo> so now the question is who itc agreed to whats currently in cpp
<daniiooo> haha
<ed> idk but i'm inverting it
<ed> let's invert it every 6 months
2024-08-17 20:36:59 +00:00
ed
0b46b1a614 fix some vproxy issues (#93):
* navpane would always feed the vproxy paths into the tree
   instead of only when necessary (the initial load)

* mkdir would return `X-New-Dir` without the `rp-loc` prefix
  * chpw and some other redirects also sent raw vpaths

Reported-by: @iridial
2024-08-17 18:17:40 +00:00
ed
5153db6bff ux: login margin; theme2: yellow buttons
the red buttons from protonmail's monokai theme look better,
but they're confusing because intuitively red means off
2024-08-17 15:55:55 +00:00
ed
b0af4b3712 hook/reloc: dupe in one vol doesn't mean dupe in another 2024-08-16 21:08:22 +00:00
ed
c8f4aeaefa hook/reloc: fix up2k jank
* wark landed in the wrong registry when moved to another volume
   (harmless; upload would succeed on the next handshake)

* dedup did not apply correctly when moved into another volume,
   since all the checks were done based on the previous vol;
   fix this by recursing the whole thing

also update the reloc example after some real-world experience

Reported-by: @daniiooo
2024-08-15 19:26:06 +00:00
ed
00da74400c password-changer fixes:
* fix `--chpw-no` which did nothing
* print list of users with unchanged passwords by default
* more granular verbosity levels
2024-08-15 17:30:01 +00:00
ed
83fb569d61 make passwords user-changeable; closes #92 2024-08-14 20:09:57 +00:00
ed
5a62cb4869 fix custom fonts in sandboxed docs;
`@import` must be at the very start of a `<style>` tag

Reported-by: @thaddeuskkr (thx!)
2024-08-14 15:30:04 +00:00
ed
687df2fabd unix-socket fixes:
* support x-forwarded-for
* option to specify socket permissions and group
* in containers, avoid collision during restart
* add --help-bind with examples
2024-08-14 04:47:10 +00:00
ed
cdd0794d6e update pkgs to 1.13.8 2024-08-13 00:20:04 +00:00
ed
dcc988135e v1.13.8 2024-08-13 00:08:23 +00:00
ed
3db117d85f list status of optional dependencies 2024-08-12 22:48:53 +00:00
ed
ee9aad82dd support listening on unix sockets 2024-08-12 21:58:02 +00:00
ed
2d6eb63fce scripts/uncomment: python 3.12 support;
`tokenize.FSTRING_MIDDLE` was introduced, changing the
representation of `f"x{{y"` from `STRING(f"x{{y")` to:

* `FSTRING_START('f"')`
* `FSTRING_MIDDLE('x{')`
* `FSTRING_MIDDLE('y')`
* `FSTRING_END('"')`

each literal `{` (encoded as `{{` in the input) now appears as a
single `{` as the final character of its `FSTRING_MIDDLE`, with
additional consecutive `FSTRING_MIDDLE` tokens if necessary

regular interpolating `{` are encoded as separate `OP` tokens

the fact that the literal `{` is encoded as a single `{` instead
of `{{` breaks the assumption that the string-value of each token
maps directly to the original code

fix this by replacing `{` with `{{` and `}` with `}}` in
`FSTRING_MIDDLE` tokens, and not adding whitespace after
`FSTRING_MIDDLE` tokens
2024-08-12 19:55:17 +00:00
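Illustration only, not the actual scripts/uncomment code: a minimal sketch of the re-escaping fix described above, using the f-string from the commit message; `tokenize.FSTRING_MIDDLE` only exists on python 3.12+, so the check is guarded.

```python
# sketch: re-escape literal braces in FSTRING_MIDDLE tokens so that
# joining the token strings reproduces the original source text
import io
import sys
import tokenize

def reescape_fstring_middles(src):
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(src).readline):
        s = tok.string
        if sys.version_info >= (3, 12) and tok.type == tokenize.FSTRING_MIDDLE:
            # the tokenizer hands back "x{" for the source "x{{",
            # so double the braces again before reassembly
            s = s.replace("{", "{{").replace("}", "}}")
        out.append(s)
    return out

print(reescape_fstring_middles('f"x{{y"\n'))
```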
ed
ca001c8504 update deps (pyftpdlib, win10-python) 2024-08-12 18:51:52 +00:00
ed
4e581c59da fix s390x w/a, up2k name-randomizer 2024-08-12 17:45:19 +00:00
ed
dbd42bc6bf add option to load custom js on all pages 2024-08-11 23:51:17 +00:00
ed
c862ec1b64 up2k.js: optimal pipelining 2024-08-11 21:15:44 +00:00
ed
f709140571 hook/reloc: helptext mentioned jank that doesn't exist anymore 2024-08-11 15:07:21 +00:00
ed
ef1c4b7a20 this guy didn't make it in 2024-08-11 14:55:51 +00:00
ed
6c94a63f1c add hook side-effects; closes #86
hooks can now interrupt or redirect actions, and initiate
related actions, by printing json on stdout with commands

mainly to mitigate limitations such as sharex/sharex#3992

xbr/xau can redirect uploads to other destinations with `reloc`
and most hooks can initiate indexing or deletion of additional
files by giving a list of vpaths in json-keys `idx` or `del`

there are limitations;
* xbu/xau effects don't apply to ftp, tftp, smb
* xau will intentionally fail if a reloc destination exists
* xau effects do not apply to up2k

also provides more details for hooks:
* xbu/xau: basic-uploader vpath with filename
* xbr/xar: add client ip
2024-08-11 14:52:32 +00:00
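As a rough illustration of the json-on-stdout mechanism (not one of the bundled hooks; the vpath below is a made-up placeholder, and like reloc-by-ext.py further down it would need the `j` and `c1` hook flags so copyparty passes upload-info as json and reads the reply):

```python
#!/usr/bin/env python3
# minimal xau-hook sketch: ask copyparty to index one additional file
# by replying with the "idx" json-key described in the commit above;
# "del" works the same way for deletions
import json
import sys

def main():
    inf = json.loads(sys.argv[1])  # upload details (path, size, user, ...) via the "j" flag
    # reply on stdout; the "c1" flag tells copyparty to read this
    print(json.dumps({"idx": ["logs/upload-history.md"]}))  # placeholder vpath

if __name__ == "__main__":
    main()
```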
ed
20669c73d3 rm dead code (gridview conditional dl/play)
and maybe fix negative eta when a chunk gets eaten by the network
2024-08-09 21:57:42 +00:00
ed
0da719f4c2 up2k: shrink request headers
v1.13.5 made some proxies angry with its massive chunklists

when stitching chunks, only list the first chunk hash in full,
and include a truncated hash for the consecutive chunks

should be enough for logfiles to make sense
and to smoketest that clients are behaving
2024-08-08 18:24:18 +00:00
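Purely for illustration of the resulting header format (the hash values are made-up placeholders; the header name and comma-separation come from the chunk-stitching commit further down):

```python
# first sibling chunk hash listed in full, consecutive ones truncated,
# joined into a single comma-separated x-up2k-hash value
first_full = "MCu0lgHQTC6GrSzwlIrb9H1F"   # placeholder
rest_short = ["fKIl9ZMI", "3om3oFGB"]     # placeholders
headers = {"X-Up2k-Hash": ",".join([first_full] + rest_short)}
print(headers)
```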
ed
373194c38a better up2k stitching on fat32 servers:
* the batches don't need to be window-aligned
* improve js backoff (in case of more funnies)
2024-08-05 19:52:50 +00:00
ed
3d245431fc linter fixes 2024-08-05 18:48:16 +00:00
ed
250c8c56f0 fix deadlock on IBM mainframes (s390x) 2024-08-02 23:05:44 +00:00
ed
e136231c8e docker: add portainer howto 2024-08-02 23:01:32 +00:00
ed
98ffaadf52 docker: use less RAM at runtime
compile to bytecode so cpython doesn't have to keep it in memory

ram usage reduced by:
* min: 5.4 MiB (32.6 to 27.2)
* ac/im: 5.2 MiB (39.0 to 33.8)
* dj/iv: 10.6 MiB (67.3 to 56.7)

startup time reduced from:
* min: 1.3s to 0.6s
* ac/im: 1.6s to 0.9s
* dj/iv: 2.0s to 1.1s

image size increased by 4 MiB (min), 6 MiB (ac/im/iv), 9 MiB (dj)

ram usage measured on idle with:
while true; do ps aux | grep -E 'R[S]S|no[-]crt'; read -n1; echo; done

startup time measured with:
time podman run --rm -it localhost/copyparty-min-amd64 --exit=idx
2024-08-02 22:11:23 +00:00
ed
ebb1981803 py2: reduce ram usage 2024-08-01 20:01:42 +00:00
ed
72361c99e1 add import chickenbits 2024-08-01 18:29:25 +00:00
ed
d5c9c8ebbd make it 5% faster 2024-07-31 17:51:53 +00:00
ed
746229846d add test for zip-download 2024-07-30 22:44:29 +00:00
ed
ffd7cd3ca8 update pkgs to 1.13.6 2024-07-29 20:56:00 +00:00
ed
b3cecabca3 v1.13.6 2024-07-29 20:28:51 +00:00
ed
662541c64c audio-player: show status while loading 2024-07-29 20:14:39 +00:00
ed
225bd80ea8 up2k.js: fix overshoot in chunk stitcher 2024-07-29 19:19:22 +00:00
ed
85e54980cc up2k.js: set timeouts for uploads
in the event that an upload chunk gets stuck, the js would
never stop waiting for a response, requiring a page reload

improves reliability when running behind a reverse-proxy
which is configured to never timeout requests (can make
sense when combined with other services on the same box)
2024-07-29 19:17:03 +00:00
ed
a19a0fa9f3 fix modal wordwrap in firefox;
with overflow:auto, firefox picks the div-width before estimating
the height, causing it to undershoot by the scrollbar width
and then messing up the text alignment

fix: conditionally set overflow-y:scroll using js
2024-07-29 18:04:35 +00:00
ed
9bb6e0dc62 misc ux:
* wait until page (au) has loaded to register hotkeys
* hotkey `m` would grow sidebar if tree was minimized
* more exact warning about num.parallel uploads
* keep more console logs in memory
* message phrasing
2024-07-29 17:59:34 +00:00
ed
15ddcf53e7 add bsod theme 2024-07-26 22:09:59 +00:00
ed
6b54972ec0 update comparison vs similar software:
* general changes:
  * upload speed comparisons considering v1.13.5

* hfs2:
  * dead project with unfixed vulnerabilities

* hfs3:
  * has replaced hfs2
  * uploads are now resumable
  * add new functionality:
    * write-only folders
    * unmap subfolders
    * move and delete files
    * folder-rproxy
    * themes
    * basic audio player, image viewer

* filebrowser:
  * uploads are now parallelized, resumable, segmented
    * but single large files are not accelerated
  * can listen on unix sockets
  * folder-rproxy is supported
  * more cpu efficient than copyparty
2024-07-26 19:46:03 +00:00
ed
0219eada23 cleanup: strip trailing whitespace 2024-07-26 19:33:56 +00:00
ed
8916bce306 u2c fixes:
* `--sz` was num.chunks, not the intended MiB
* crash on exit with `-z` and no modified files
* summary upload elapsed-time could exceed wallclock
2024-07-26 19:28:47 +00:00
ed
99edba4fd9 change xm examples to reject users without write-access; #68 2024-07-25 19:23:08 +00:00
ed
64de3e01e8 update pkgs to 1.13.5 2024-07-22 23:48:24 +00:00
ed
8222ccc40b v1.13.5 2024-07-22 23:23:53 +00:00
ed
dc449bf8b0 fix grid toolbar undocking after viewing a pic/vid 2024-07-22 23:09:25 +00:00
ed
ef0ecf878b recommend rclone over davfs2; closes #90 2024-07-22 22:46:24 +00:00
ed
53f1e3c91d ui option to play video as audio
audio extraction happens serverside to opus or mp3
depending on browser support

remuxing (extracting audio without transcoding)
is currently not supported, and is not planned
2024-07-22 22:30:21 +00:00
ed
eeef80919f css-fix for firefox52 (centos6) 2024-07-22 20:59:05 +00:00
ed
987bce2182 u2c fixes:
* don't stitch across deduplicated blocks
* print speed/time for hash/upload
* more compact json in handshakes
2024-07-22 20:55:32 +00:00
ed
b511d686f0 up2k fixes:
* progress donuts should include inflight bytes
* changes to stitch-size in settings didn't apply until next refresh
* serverlog was too verbose; truncate chunk hashes
* mention absolute cloudflare limit in readme
2024-07-22 19:06:01 +00:00
ed
132a83501e add chunk stitching; twice as fast long-distance uploads:
rather than sending each file chunk as a separate HTTP request,
sibling chunks will now be fused together into larger HTTP POSTs
which results in unreasonably huge speed boosts on some routes
( `2.6x` from Norway to US-East,  `1.6x` from US-West to Finland )

the `x-up2k-hash` request header now takes a comma-separated list
of chunk hashes, which must all be sibling chunks, resulting in
one large consecutive range of file data as the post body

a new global-option `--u2sz`, default `1,64,96`, sets the target
request size as 64 MiB, allowing the settings ui to specify any
value between 1 and 96 MiB, which is cloudflare's max value

this does not cause any issues for resumable uploads; thanks to the
streaming HTTP POST parser, each chunk will be verified and written
to disk as they arrive, meaning only the untransmitted chunks will
have to be resent in the event of a connection drop -- of course
assuming there are no misconfigured WAFs or caching-proxies

the previous up2k approach of uploading each chunk in a separate HTTP
POST was inefficient in many real-world scenarios, mainly due to TCP
window-scaling behaving erratically in some IXPs / along some routes

a particular link from Norway to Virginia,US is unusably slow for
the first 4 MiB, only reaching optimal speeds after 100 MiB, and
then immediately resets the scale when the request has been sent;
connection reuse does not help in this case

on this route, the basic-uploader was somehow faster than up2k
with 6 parallel uploads; only time i've seen this
2024-07-21 23:35:37 +00:00
ed
e565ad5f55 better errors through broker 2024-07-21 20:36:50 +00:00
ed
f955d2bd58 dangit 2024-07-20 22:28:40 +00:00
ed
5953399090 add helptext exporters (html, txt) 2024-07-17 23:06:01 +00:00
ed
d26a944d95 hooks: add cache-warmer 2024-07-17 21:00:59 +00:00
ed
50dac15568 update pkgs to 1.13.4 2024-07-16 05:48:45 +00:00
ed
ac1e11e4ce v1.13.4 2024-07-16 04:57:26 +00:00
ed
d749683d48 hooks: add permission filtering, argv-prepend;
hooks can be restricted to users with certain permissions, for example
`--xm aw,notify-send` will only `notify-send` if user has write-access

the user's list of permissions are now also included in the json
that is passed to the hook if enabled; `--xm aw,j,notify-send`

will now also stop parsing flags when encountering a blank value,
allowing to specify any initial arguments to the command:
`--xm aw,j,,notify-send,hey` would run `notify-send` with `hey`
as its first argument, and the json would be the 2nd argument,
similarly `--xm ,notify-send,hey` when no flags specified

this is somewhat explained in `--help-hooks`, but
additional related features are planned in the near future
and will all be better documented when the dust settles
2024-07-16 04:45:02 +00:00
ed
84e8e1ddfb ftpd: only mention vols that user can access
if an ftp client tried to list the toplevel folder on a server
where nothing is mounted toplevel, it would synthesize a
directory listing which included all volumes, even those
which the user would not be able to access

so basically not a problem, just very confusing
2024-07-15 21:24:26 +00:00
ed
6e58514b84 update deps:
* win10:
  * python 3.12
  * pillow 10.4
  * pyinstaller 6.9
* win: upx 4.2.4
* web: dompurify 3.1.6
2024-07-15 21:16:19 +00:00
ed
803e156509 hooks: improve torrent downloader 2024-07-14 17:57:36 +00:00
ed
c06aa683eb allow audio-DL regardless of current folder 2024-07-13 17:10:24 +02:00
ed
6644ceef49 mention davfs2 workaround, closes #91 2024-07-13 16:55:27 +02:00
ed
bd3b3863ae hooks: add bittorrent downloader 2024-07-13 01:37:17 +02:00
ed
ffd4f9c8b9 hooks: describe examples better 2024-07-13 01:32:26 +02:00
ed
760ff2db72 other linter nitpicks (not actually bugs) 2024-07-13 01:18:14 +02:00
ed
f37187a041 fix bugs detected by pyright but not pylance:
* race-the-beam broke in v1.13.3 (i'm good at this)

* wrong logger type in certgen
2024-07-13 01:09:19 +02:00
ed
1cdb170290 order-significant --th-covers;
the first matching filename as listed in the
`--th-covers` global-option will always be selected
2024-07-13 00:54:38 +02:00
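A sketch of the selection rule (not copyparty's actual code), using the cover names in the order they're listed in the readme below:

```python
# the first --th-covers entry that exists in the folder wins,
# so listing order decides which image becomes the folder thumbnail
COVERS = ["folder.png", "folder.jpg", "cover.png", "cover.jpg"]

def pick_cover(files_in_folder):
    present = set(files_in_folder)
    for name in COVERS:
        if name in present:
            return name
    return None

print(pick_cover(["cover.png", "folder.jpg"]))  # -> folder.jpg
```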
ed
d5de3f2fe0 improve --cgen (configfile generator) 2024-07-12 22:57:57 +02:00
ed
d76673e62d use correct mtime for folder thumbs;
mtime the file that was used to produce the folder thumbnail
(rather than the folder itself) since the folder-thumb is
always resolved to the file's thumb in the on-disk cache
2024-07-11 23:12:51 +02:00
ed
c549f367c1 reduce timeout of unbounded socket reads;
if a request body is expected, but request has no content-length,
set the timeout to 1/20 of `--s-tbody`, so 9 seconds by default,
or 3 seconds if it's 60 as recommended in helptext

this gives less confusing behavior if a client accidentally does
something invalid, replying with an error response before the
previous timeout of 186 seconds

also raise the slowloris flag, in case a client bugs out and
keeps making such requests
2024-07-10 11:14:42 +02:00
ed
927c3bce96 support descript.ion; makes listings 2% slower 2024-07-06 17:02:33 +02:00
ed
d75a2c77da og: fix viewing readmes 2024-07-06 16:55:15 +02:00
ed
e6c55d7ff9 systemd service: fix install notes, closes #88
the linked issue mentions that creating the `th` folder inside `.hist`
failed when RestrictSUIDSGID=true was enabled; this was on raspbian11
inside a chmod-777 ext4 folder owned by another user, so I have no idea why
that option would make any difference... but might as well mention it
2024-06-27 17:35:23 +02:00
ed
4c2cb26991 readme: add mimetype mapping examples; closes #89 2024-06-27 15:24:15 +02:00
ed
dfe7f1d9af point out that HTTP/2 tends to be slower than HTTP/1.1
re discord, someone with a fairly standard setup (cpp behind nginx)
found that switching from HTTP/2 to HTTP/1.1 made it 5x faster
2024-06-27 15:19:21 +02:00
ed
666297f6fb remove excessive warning on ancient machines;
sqlite<3.9 combined with python<3.6 would always warn
that `-e2t` is not supported, even when not requested
2024-06-27 14:55:12 +02:00
ed
55a011b9c1 fix jank when trying to play a corrupt audio file
if a song fails to play for some reason (network loss,
corrupt file), a timer plays the next track after 5s

the timer was not cancelled if the user
started another track in the meantime
2024-06-23 01:59:02 +02:00
ed
27aff12a1e fix helptext, closes #87 2024-06-19 10:42:41 +02:00
ed
9a87ee2fe4 add gsel option; closes #85
global-option `--gsel`, volflag `gsel` default-enables the
client setting to select files by ctrl-clicking them in the grid
2024-06-18 22:47:17 +02:00
ed
0a9f4c6074 ftpd: allow implicit overwrite if user has delete perms
the spec doesn't say what you're supposed to do if the target filename of an upload is already taken, but this seems to be the most common behavior on other ftp servers, and is required by windows 2000 (otherwise it'll freak out and issue a delete and then not actually upload it, nice)

new option `--ftp-no-ow` restores old default behavior of rejecting upload if target filename exists
2024-06-18 12:07:45 +02:00
ed
7219331057 bugfixes;
* `--og` went 500 if thumbnails were disabled / not available
* strip_hints wasn't very helpful explaining why it crashed
2024-06-18 12:01:48 +02:00
ed
2fd12a839c more windows2000 support 2024-06-18 12:01:21 +02:00
ed
8c73e0cbc2 support windows 2000 and XP 2024-06-17 00:09:52 +02:00
ed
52e06226a2 make thumbnails compatible with dirkeys/filekeys
was intentionally skipped to avoid complexity but enough people have
asked why it doesn't work that it's time to do something about it

turns out it wasn't that bad
2024-06-16 21:35:43 +02:00
ed
452592519d tftp:
* upgrade to partftpy 0.4.0
  * workarounds for buggy clients/servers
  * improved ipv6 support, especially on macos
  * improved robustness on unreliable networks

* make `--tftp4` separate from `--ftp4`
2024-06-16 21:20:09 +02:00
ed
c9281f8912 option to return media-links for uploads 2024-06-07 12:56:02 +00:00
ed
36d6d29a0c set audio volume by scrollwheel 2024-06-07 12:23:55 +00:00
ed
db6059e100 music preloader fixes:
* stop scanning after 5 folders
* don't walk into errorpages (such as unmapped root)

and improve errortoast in case of network issues
2024-06-07 11:38:40 +00:00
ed
aab57cb24b update pkgs to 1.13.3 2024-06-01 23:51:14 +00:00
ed
f00b939402 v1.13.3 2024-06-01 23:24:35 +00:00
ed
bef9617638 u2c.exe: explain that https is disabled 2024-06-01 22:26:47 +00:00
ed
692175f5b0 md-editor autoindent was duplicating hr markers
only keep characters `>+-*` if there's less than three of them,
and discard entire prefix if there's more

markdown spec only cares about exactly-one or three-or-more, but
let's keep pairs in case anyone uses that as unconventional markup
2024-06-01 20:56:15 +00:00
ed
5ad65450c4 more intuitive df option/volflag, closes #83 2024-06-01 01:15:34 +00:00
ed
60c96f990a ux: hide video ui + floor seekbar text
* hide lightbox buttons when a video is playing

* move audio seekbar text to the bottom, so it
   hides less of the waveform and minute-markers
2024-06-01 00:35:44 +00:00
ed
07b2bf1104 better support for 700+ connections
when there was more than ~700 active connections,
* sendfile (non-https downloads) could fail
* mdns and ssdp could fail to reinitialize on network changes

...because `select` can't handle FDs higher than 512 on windows
(1024 on linux/macos), so prefer `poll` where possible (linux/macos)

but apple keeps breaking and unbreaking `poll` in macos,
so use `--no-poll` if necessary to force `select` instead
2024-05-31 23:31:32 +00:00
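A minimal sketch of the poll-vs-select preference described above (copyparty's actual networking code is more involved, and `--no-poll` forces the select path):

```python
# poll has no FD_SETSIZE ceiling, while select breaks past ~512 fds
# on windows and ~1024 on linux/macos
import select

def wait_readable(socks, timeout_s=1.0):
    if hasattr(select, "poll"):  # linux/macos; windows only offers select
        p = select.poll()
        by_fd = {s.fileno(): s for s in socks}
        for fd in by_fd:
            p.register(fd, select.POLLIN)
        return [by_fd[fd] for fd, _ in p.poll(int(timeout_s * 1000))]
    ready, _, _ = select.select(socks, [], [], timeout_s)
    return ready
```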
ed
ac1bc232a9 black 2024-05-31 08:57:33 +00:00
ed
5919607ad0 sanitize fs-paths in archive error summary
also gets rid of a dumb debug print i forgot
2024-05-30 23:55:37 +00:00
ed
07ea629ca5 keep most tags during audio transcode
metadata is no longer discarded when transcoding to opus or mp3;
this was a good idea back when the transcodes were only used by
the webplayer, but now that folders can be batch-downloaded with
on-the-fly transcoding, it makes sense to keep most of the tags

individual tags are discarded if their value exceeds 1023 letters

this should mainly affect the following:
* traktor beatmaps, size usually somewhere around 100 KiB
* non-standard cover-art embeddings, size around 250 KiB
* XMP (project data from adobe premiere), around 48 KiB
2024-05-30 23:46:56 +00:00
ed
b629d18df6 print helpful warning if unix env is inhospitable
thx kipu you're the best
2024-05-11 18:34:41 +00:00
ed
566cbb6507 update pkgs to 1.13.2 2024-05-10 15:04:33 +00:00
106 changed files with 6996 additions and 1158 deletions

.vscode/settings.json vendored

@@ -22,6 +22,9 @@
"terminal.ansiBrightCyan": "#9cf0ed",
"terminal.ansiBrightWhite": "#ffffff",
},
"python.terminal.activateEnvironment": false,
"python.analysis.enablePytestSupport": false,
"python.analysis.typeCheckingMode": "standard",
"python.testing.pytestEnabled": false,
"python.testing.unittestEnabled": true,
"python.testing.unittestArgs": [
@@ -31,23 +34,8 @@
"-p",
"test_*.py"
],
"python.linting.pylintEnabled": true,
"python.linting.flake8Enabled": true,
"python.linting.banditEnabled": true,
"python.linting.mypyEnabled": true,
"python.linting.flake8Args": [
"--max-line-length=120",
"--ignore=E722,F405,E203,W503,W293,E402,E501,E128,E226",
],
"python.linting.banditArgs": [
"--ignore=B104,B110,B112"
],
// python3 -m isort --py=27 --profile=black copyparty/
"python.formatting.provider": "none",
"[python]": {
"editor.defaultFormatter": "ms-python.black-formatter"
},
"editor.formatOnSave": true,
// python3 -m isort --py=27 --profile=black ~/dev/copyparty/{copyparty,tests}/*.py && python3 -m black -t py27 ~/dev/copyparty/{copyparty,tests,bin}/*.py $(find ~/dev/copyparty/copyparty/stolen -iname '*.py')
"editor.formatOnSave": false,
"[html]": {
"editor.formatOnSave": false,
"editor.autoIndent": "keep",
@@ -58,6 +46,4 @@
"files.associations": {
"*.makefile": "makefile"
},
"python.linting.enabled": true,
"python.pythonPath": "/usr/bin/python3"
}

README.md

@@ -1,4 +1,6 @@
# 💾🎉 copyparty
<img src="docs/logo.svg" width="250" align="right"/>
### 💾🎉 copyparty
turn almost any device into a file server with resumable uploads/downloads using [*any*](#browser-support) web browser
@@ -42,6 +44,7 @@ turn almost any device into a file server with resumable uploads/downloads using
* [self-destruct](#self-destruct) - uploads can be given a lifetime
* [race the beam](#race-the-beam) - download files while they're still uploading ([demo video](http://a.ocv.me/pub/g/nerd-stuff/cpp/2024-0418-race-the-beam.webm))
* [file manager](#file-manager) - cut/paste, rename, and delete files/folders (if you have permission)
* [shares](#shares) - share a file or folder by creating a temporary link
* [batch rename](#batch-rename) - select some files and press `F2` to bring up the rename UI
* [media player](#media-player) - plays almost every audio format there is
* [audio equalizer](#audio-equalizer) - and [dynamic range compressor](https://en.wikipedia.org/wiki/Dynamic_range_compression)
@@ -76,6 +79,7 @@ turn almost any device into a file server with resumable uploads/downloads using
* [upload events](#upload-events) - the older, more powerful approach ([examples](./bin/mtag/))
* [handlers](#handlers) - redefine behavior with plugins ([examples](./bin/handlers/))
* [identity providers](#identity-providers) - replace copyparty passwords with oauth and such
* [user-changeable passwords](#user-changeable-passwords) - if permitted, users can change their own passwords
* [using the cloud as storage](#using-the-cloud-as-storage) - connecting to an aws s3 bucket and similar
* [hiding from google](#hiding-from-google) - tell search engines you dont wanna be indexed
* [themes](#themes)
@@ -83,6 +87,9 @@ turn almost any device into a file server with resumable uploads/downloads using
* [reverse-proxy](#reverse-proxy) - running copyparty next to other websites
* [real-ip](#real-ip) - teaching copyparty how to see client IPs
* [prometheus](#prometheus) - metrics/stats can be enabled
* [other extremely specific features](#other-extremely-specific-features) - you'll never find a use for these
* [custom mimetypes](#custom-mimetypes) - change the association of a file extension
* [feature chickenbits](#feature-chickenbits) - buggy feature? rip it out
* [packages](#packages) - the party might be closer than you think
* [arch package](#arch-package) - now [available on aur](https://aur.archlinux.org/packages/copyparty) maintained by [@icxes](https://github.com/icxes)
* [fedora package](#fedora-package) - does not exist yet
@@ -109,6 +116,7 @@ turn almost any device into a file server with resumable uploads/downloads using
* [HTTP API](#HTTP-API) - see [devnotes](./docs/devnotes.md#http-api)
* [dependencies](#dependencies) - mandatory deps
* [optional dependencies](#optional-dependencies) - install these to enable bonus features
* [dependency chickenbits](#dependency-chickenbits) - prevent loading an optional dependency
* [optional gpl stuff](#optional-gpl-stuff)
* [sfx](#sfx) - the self-contained "binary" (recommended!)
* [copyparty.exe](#copypartyexe) - download [copyparty.exe](https://github.com/9001/copyparty/releases/latest/download/copyparty.exe) (win8+) or [copyparty32.exe](https://github.com/9001/copyparty/releases/latest/download/copyparty32.exe) (win7+)
@@ -122,7 +130,7 @@ turn almost any device into a file server with resumable uploads/downloads using
just run **[copyparty-sfx.py](https://github.com/9001/copyparty/releases/latest/download/copyparty-sfx.py)** -- that's it! 🎉
* or install through pypi: `python3 -m pip install --user -U copyparty`
* or install through [pypi](https://pypi.org/project/copyparty/): `python3 -m pip install --user -U copyparty`
* or if you cannot install python, you can use [copyparty.exe](#copypartyexe) instead
* or install [on arch](#arch-package) [on NixOS](#nixos-module) [through nix](#nix-package)
* or if you are on android, [install copyparty in termux](#install-on-android)
@@ -192,7 +200,7 @@ firewall-cmd --reload
also see [comparison to similar software](./docs/versus.md)
* backend stuff
* ☑ IPv6
* ☑ IPv6 + unix-sockets
* ☑ [multiprocessing](#performance) (actual multithreading)
* ☑ volumes (mountpoints)
* ☑ [accounts](#accounts-and-volumes)
@@ -207,7 +215,7 @@ also see [comparison to similar software](./docs/versus.md)
* upload
* ☑ basic: plain multipart, ie6 support
* ☑ [up2k](#uploading): js, resumable, multithreaded
* unaffected by cloudflare's max-upload-size (100 MiB)
* **no filesize limit!** ...unless you use Cloudflare, then it's 383.9 GiB
* ☑ stash: simple PUT filedropper
* ☑ filename randomizer
* ☑ write-only folders
@@ -223,6 +231,7 @@ also see [comparison to similar software](./docs/versus.md)
* ☑ [navpane](#navpane) (directory tree sidebar)
* ☑ file manager (cut/paste, delete, [batch-rename](#batch-rename))
* ☑ audio player (with [OS media controls](https://user-images.githubusercontent.com/241032/215347492-b4250797-6c90-4e09-9a4c-721edf2fb15c.png) and opus/mp3 transcoding)
* ☑ play video files as audio (converted on server)
* ☑ image gallery with webm player
* ☑ textfile browser with syntax highlighting
* ☑ [thumbnails](#thumbnails)
@@ -573,13 +582,12 @@ it does static images with Pillow / pyvips / FFmpeg, and uses FFmpeg for video f
audio files are converted into spectrograms using FFmpeg unless you `--no-athumb` (and some FFmpeg builds may need `--th-ff-swr`)
images with the following names (see `--th-covers`) become the thumbnail of the folder they're in: `folder.png`, `folder.jpg`, `cover.png`, `cover.jpg`
* the order is significant, so if both `cover.png` and `folder.jpg` exist in a folder, it will pick the first matching `--th-covers` entry (`folder.jpg`)
* and, if you enable [file indexing](#file-indexing), it will also try those names as dotfiles (`.folder.jpg` and so on), and then fall back to the first picture in the folder (if it has any pictures at all)
in the grid/thumbnail view, if the audio player panel is open, songs will start playing when clicked
* indicated by the audio files having the ▶ icon instead of 💾
enabling `multiselect` lets you click files to select them, and then shift-click another file for range-select
* `multiselect` is mostly intended for phones/tablets, but the `sel` option in the `[⚙️] settings` tab is better suited for desktop use, allowing selection by CTRL-clicking and range-selection with SHIFT-click, all without affecting regular clicking
* the `sel` option can be made default globally with `--gsel` or per-volume with volflag `gsel`
## zip downloads
@@ -642,6 +650,7 @@ up2k has several advantages:
* uploads resume if you reboot your browser or pc, just upload the same files again
* server detects any corruption; the client reuploads affected chunks
* the client doesn't upload anything that already exists on the server
* no filesize limit unless imposed by a proxy, for example Cloudflare, which blocks uploads over 383.9 GiB
* much higher speeds than ftp/scp/tarpipe on some internet connections (mainly american ones) thanks to parallel connections
* the last-modified timestamp of the file is preserved
@@ -709,7 +718,7 @@ uploads can be given a lifetime, after which they expire / self-destruct
the feature must be enabled per-volume with the `lifetime` [upload rule](#upload-rules) which sets the upper limit for how long a file gets to stay on the server
clients can specify a shorter expiration time using the [up2k ui](#uploading) -- the relevant options become visible upon navigating into a folder with `lifetimes` enabled -- or by using the `life` [upload modifier](#write)
clients can specify a shorter expiration time using the [up2k ui](#uploading) -- the relevant options become visible upon navigating into a folder with `lifetimes` enabled -- or by using the `life` [upload modifier](./docs/devnotes.md#write)
specifying a custom expiration time client-side will affect the timespan in which unposts are permitted, so keep an eye on the estimates in the up2k ui
@@ -739,6 +748,42 @@ file selection: click somewhere on the line (not the link itself), then:
you can move files across browser tabs (cut in one tab, paste in another)
## shares
share a file or folder by creating a temporary link
when enabled in the server settings (`--shr`), click the bottom-right `share` button to share the folder you're currently in, or alternatively:
* select a folder first to share that folder instead
* select one or more files to share only those files
this feature was made with [identity providers](#identity-providers) in mind -- configure your reverseproxy to skip the IdP's access-control for a given URL prefix and use that to safely share specific files/folders sans the usual auth checks
when creating a share, the creator can choose any of the following options:
* password-protection
* expire after a certain time; `0` or blank means infinite
* allow visitors to upload (if the user who creates the share has write-access)
semi-intentional limitations:
* cleanup of expired shares only works when global option `e2d` is set, and/or at least one volume on the server has volflag `e2d`
* only folders from the same volume are shared; if you are sharing a folder which contains other volumes, then the contents of those volumes will not be available
* no option to "delete after first access" because tricky
* when linking something to discord (for example) it'll get accessed by their scraper and that would count as a hit
* browsers wouldn't be able to resume a broken download unless the requester's IP gets allowlisted for X minutes (ref. tricky)
specify `--shr /foobar` to enable this feature; a toplevel virtual folder named `foobar` is then created, and that's where all the shares will be served from
* you can name it whatever, `foobar` is just an example
* if you're using config files, put `shr: /foobar` inside the `[global]` section instead
users can delete their own shares in the controlpanel, and the privileged users listed in `--shr-adm` are allowed to see and/or delete any share on the server
after a share has expired, it remains visible in the controlpanel for `--shr-rt` minutes (default is 1 day), and the owner can revive it by extending the expiration time there
**security note:** using this feature does not mean that you can skip the [accounts and volumes](#accounts-and-volumes) section -- you still need to restrict access to volumes that you do not intend to share with unauthenticated users! it is not sufficient to use rules in the reverseproxy to restrict access to just the `/share` folder.
## batch rename
select some files and press `F2` to bring up the rename UI
@@ -796,6 +841,7 @@ some hilights:
* OS integration; control playback from your phone's lockscreen ([windows](https://user-images.githubusercontent.com/241032/233213022-298a98ba-721a-4cf1-a3d4-f62634bc53d5.png) // [iOS](https://user-images.githubusercontent.com/241032/142711926-0700be6c-3e31-47b3-9928-53722221f722.png) // [android](https://user-images.githubusercontent.com/241032/233212311-a7368590-08c7-4f9f-a1af-48ccf3f36fad.png))
* shows the audio waveform in the seekbar
* not perfectly gapless but can get really close (see settings + eq below); good enough to enjoy gapless albums as intended
* videos can be played as audio, without wasting bandwidth on the video
click the `play` link next to an audio file, or copy the link target to [share it](https://a.ocv.me/pub/demo/music/Ubiktune%20-%20SOUNDSHOCK%202%20-%20FM%20FUNK%20TERRROR!!/#af-1fbfba61&t=18) (optionally with a timestamp to start playing from, like that example does)
@@ -877,6 +923,8 @@ see [./srv/expand/](./srv/expand/) for usage and examples
* files named `.prologue.html` / `.epilogue.html` will be rendered before/after directory listings unless `--no-logues`
* files named `descript.ion` / `DESCRIPT.ION` are parsed and displayed in the file listing, or as the epilogue if nonstandard
* files named `README.md` / `readme.md` will be rendered after directory listings unless `--no-readme` (but `.epilogue.html` takes precedence)
* `README.md` and `*logue.html` can contain placeholder values which are replaced server-side before embedding into directory listings; see `--help-exp`
@@ -981,7 +1029,7 @@ some recommended FTP / FTPS clients; `wark` = example password:
## webdav server
with read-write support, supports winXP and later, macos, nautilus/gvfs
with read-write support, supports winXP and later, macos, nautilus/gvfs ... a great way to [access copyparty straight from the file explorer in your OS](#mount-as-drive)
click the [connect](http://127.0.0.1:3923/?hc) button in the control-panel to see connection instructions for windows, linux, macos
@@ -1305,6 +1353,8 @@ you can set hooks before and/or after an event happens, and currently you can ho
there's a bunch of flags and stuff, see `--help-hooks`
if you want to write your own hooks, see [devnotes](./docs/devnotes.md#event-hooks)
### upload events
@@ -1345,6 +1395,29 @@ there is a [docker-compose example](./docs/examples/docker/idp-authelia-traefik)
a more complete example of the copyparty configuration options [look like this](./docs/examples/docker/idp/copyparty.conf)
but if you just want to let users change their own passwords, then you probably want [user-changeable passwords](#user-changeable-passwords) instead
## user-changeable passwords
if permitted, users can change their own passwords in the control-panel
* not compatible with [identity providers](#identity-providers)
* must be enabled with `--chpw` because account-sharing is a popular usecase
* if you want to enable the feature but deny password-changing for a specific list of accounts, you can do that with `--chpw-no name1,name2,name3,...`
* to perform a password reset, edit the server config and give the user another password there, then do a [config reload](#server-config) or server restart
* the custom passwords are kept in a textfile at filesystem-path `--chpw-db`, by default `chpw.json` in the copyparty config folder
* if you run multiple copyparty instances with different users you *almost definitely* want to specify separate DBs for each instance
* if [password hashing](#password-hashing) is enabled, the passwords in the db are also hashed
* ...which means that all user-defined passwords will be forgotten if you change password-hashing settings
## using the cloud as storage
@@ -1445,8 +1518,11 @@ you can either:
* or do location-based proxying, using `--rp-loc=/stuff` to tell copyparty where it is mounted -- has a slight performance cost and higher chance of bugs
* if copyparty says `incorrect --rp-loc or webserver config; expected vpath starting with [...]` it's likely because the webserver is stripping away the proxy location from the request URLs -- see the `ProxyPass` in the apache example below
some reverse proxies (such as [Caddy](https://caddyserver.com/)) can automatically obtain a valid https/tls certificate for you, and some support HTTP/2 and QUIC which could be a nice speed boost
* **warning:** nginx-QUIC is still experimental and can make uploads much slower, so HTTP/2 is recommended for now
some reverse proxies (such as [Caddy](https://caddyserver.com/)) can automatically obtain a valid https/tls certificate for you, and some support HTTP/2 and QUIC which *could* be a nice speed boost, depending on a lot of factors
* **warning:** nginx-QUIC (HTTP/3) is still experimental and can make uploads much slower, so HTTP/1.1 is recommended for now
* depending on server/client, HTTP/1.1 can also be 5x faster than HTTP/2
for improved security (and a 10% performance boost) consider listening on a unix-socket with `-i unix:770:www:/tmp/party.sock` (permission `770` means only members of group `www` can access it)
example webserver configs:
@@ -1526,6 +1602,45 @@ the following options are available to disable some of the metrics:
note: the following metrics are counted incorrectly if multiprocessing is enabled with `-j`: `cpp_http_conns`, `cpp_http_reqs`, `cpp_sus_reqs`, `cpp_active_bans`, `cpp_total_bans`
## other extremely specific features
you'll never find a use for these:
### custom mimetypes
change the association of a file extension
using commandline args, you can do something like `--mime gif=image/jif` and `--mime ts=text/x.typescript` (can be specified multiple times)
in a config-file, this is the same as:
```yaml
[global]
mime: gif=image/jif
mime: ts=text/x.typescript
```
run copyparty with `--mimes` to list all the default mappings
### feature chickenbits
buggy feature? rip it out by setting any of the following environment variables to disable its associated bell or whistle,
| env-var | what it does |
| -------------------- | ------------ |
| `PRTY_NO_IFADDR` | disable ip/nic discovery by poking into your OS with ctypes |
| `PRTY_NO_IPV6` | disable some ipv6 support (should not be necessary since windows 2000) |
| `PRTY_NO_LZMA` | disable streaming xz compression of incoming uploads |
| `PRTY_NO_MP` | disable all use of the python `multiprocessing` module (actual multithreading, cpu-count for parsers/thumbnailers) |
| `PRTY_NO_SQLITE` | disable all database-related functionality (file indexing, metadata indexing, most file deduplication logic) |
| `PRTY_NO_TLS` | disable native HTTPS support; if you still want to accept HTTPS connections then TLS must now be terminated by a reverse-proxy |
| `PRTY_NO_TPOKE` | disable systemd-tmpfilesd avoider |
example: `PRTY_NO_IFADDR=1 python3 copyparty-sfx.py`
# packages
the party might be closer than you think
@@ -1769,12 +1884,14 @@ alternatively, some alternatives roughly sorted by speed (unreproducible benchma
* [rclone-http](./docs/rclone.md) (26s), read-only
* [partyfuse.py](./bin/#partyfusepy) (35s), read-only
* [rclone-ftp](./docs/rclone.md) (47s), read/WRITE
* davfs2 (103s), read/WRITE, *very fast* on small files
* davfs2 (103s), read/WRITE
* [win10-webdav](#webdav-server) (138s), read/WRITE
* [win10-smb2](#smb-server) (387s), read/WRITE
most clients will fail to mount the root of a copyparty server unless there is a root volume (so you get the admin-panel instead of a browser when accessing it) -- in that case, mount a specific volume instead
if you have volumes that are accessible without a password, then some webdav clients (such as davfs2) require the global-option `--dav-auth` to access any password-protected areas
# android app
@@ -1803,6 +1920,7 @@ defaults are usually fine - expect `8 GiB/s` download, `1 GiB/s` upload
below are some tweaks roughly ordered by usefulness:
* disabling HTTP/2 and HTTP/3 can make uploads 5x faster, depending on server/client software
* `-q` disables logging and can help a bunch, even when combined with `-lo` to redirect logs to file
* `--hist` pointing to a fast location (ssd) will make directory listings and searches faster when `-e2d` or `-e2t` is set
* and also makes thumbnails load faster, regardless of e2d/e2t
@@ -1845,6 +1963,7 @@ some notes on hardening
* cors doesn't work right otherwise
* if you allow anonymous uploads or otherwise don't trust the contents of a volume, you can prevent XSS with volflag `nohtml`
* this returns html documents as plaintext, and also disables markdown rendering
* when running behind a reverse-proxy, listen on a unix-socket for tighter access control (and more performance); see [reverse-proxy](#reverse-proxy) or `--help-bind`
safety profiles:
@@ -1918,7 +2037,7 @@ volflag `dk` generates dirkeys (per-directory accesskeys) for all folders, grant
volflag `dky` disables the actual key-check, meaning anyone can see the contents of a folder where they have `g` access, but not its subdirectories
* `dk` + `dky` gives the same behavior as if all users with `g` access have full read-access, but subfolders are hidden files (their names start with a dot), so `dky` is an alternative to renaming all the folders for that purpose, maybe just for some users
* `dk` + `dky` gives the same behavior as if all users with `g` access have full read-access, but subfolders are hidden files (as if their names start with a dot), so `dky` is an alternative to renaming all the folders for that purpose, maybe just for some users
volflag `dks` lets people enter subfolders as well, and also enables download-as-zip/tar
@@ -1943,7 +2062,7 @@ the default configs take about 0.4 sec and 256 MiB RAM to process a new password
both HTTP and HTTPS are accepted by default, but letting a [reverse proxy](#reverse-proxy) handle the https/tls/ssl would be better (probably more secure by default)
copyparty doesn't speak HTTP/2 or QUIC, so using a reverse proxy would solve that as well
copyparty doesn't speak HTTP/2 or QUIC, so using a reverse proxy would solve that as well -- but note that HTTP/1 is usually faster than both HTTP/2 and HTTP/3
if [cfssl](https://github.com/cloudflare/cfssl/releases/latest) is installed, copyparty will automatically create a CA and server-cert on startup
* the certs are written to `--crt-dir` for distribution, see `--help` for the other `--crt` options
@@ -2009,6 +2128,37 @@ enable [smb](#smb-server) support (**not** recommended):
`pyvips` gives higher quality thumbnails than `Pillow` and is 320% faster, using 270% more ram: `sudo apt install libvips42 && python3 -m pip install --user -U pyvips`
### dependency chickenbits
prevent loading an optional dependency, for example if:
* you have an incompatible version installed and it causes problems
* you just don't want copyparty to use it, maybe to save ram
set any of the following environment variables to disable its associated optional feature,
| env-var | what it does |
| -------------------- | ------------ |
| `PRTY_NO_ARGON2` | disable argon2-cffi password hashing |
| `PRTY_NO_CFSSL` | never attempt to generate self-signed certificates using [cfssl](https://github.com/cloudflare/cfssl) |
| `PRTY_NO_FFMPEG` | **audio transcoding** goes byebye, **thumbnailing** must be handled by Pillow/libvips |
| `PRTY_NO_FFPROBE` | **audio transcoding** goes byebye, **thumbnailing** must be handled by Pillow/libvips, **metadata-scanning** must be handled by mutagen |
| `PRTY_NO_MUTAGEN` | do not use [mutagen](https://pypi.org/project/mutagen/) for reading metadata from media files; will fallback to ffprobe |
| `PRTY_NO_PIL` | disable all [Pillow](https://pypi.org/project/pillow/)-based thumbnail support; will fallback to libvips or ffmpeg |
| `PRTY_NO_PILF` | disable Pillow `ImageFont` text rendering, used for folder thumbnails |
| `PRTY_NO_PIL_AVIF` | disable 3rd-party Pillow plugin for [AVIF support](https://pypi.org/project/pillow-avif-plugin/) |
| `PRTY_NO_PIL_HEIF` | disable 3rd-party Pillow plugin for [HEIF support](https://pypi.org/project/pyheif-pillow-opener/) |
| `PRTY_NO_PIL_WEBP` | disable use of native webp support in Pillow |
| `PRTY_NO_PSUTIL` | do not use [psutil](https://pypi.org/project/psutil/) for reaping stuck hooks and plugins on Windows |
| `PRTY_NO_VIPS` | disable all [libvips](https://pypi.org/project/pyvips/)-based thumbnail support; will fallback to Pillow or ffmpeg |
example: `PRTY_NO_PIL=1 python3 copyparty-sfx.py`
* `PRTY_NO_PIL` saves ram
* `PRTY_NO_VIPS` saves ram and startup time
* python2.7 on windows: `PRTY_NO_FFMPEG` + `PRTY_NO_FFPROBE` saves startup time
## optional gpl stuff
some bundled tools have copyleft dependencies, see [./bin/#mtag](bin/#mtag)

bin/hooks/README.md

@@ -2,7 +2,7 @@ standalone programs which are executed by copyparty when an event happens (uploa
these programs either take zero arguments, or a filepath (the affected file), or a json message with filepath + additional info
run copyparty with `--help-hooks` for usage details / hook type explanations (xbu/xau/xiu/xbr/xar/xbd/xad)
run copyparty with `--help-hooks` for usage details / hook type explanations (xm/xbu/xau/xiu/xbr/xar/xbd/xad/xban)
> **note:** in addition to event hooks (the stuff described here), copyparty has another api to run your programs/scripts while providing way more information such as audio tags / video codecs / etc and optionally daisychaining data between scripts in a processing pipeline; if that's what you want then see [mtp plugins](../mtag/) instead
@@ -13,6 +13,7 @@ run copyparty with `--help-hooks` for usage details / hook type explanations (xb
* [image-noexif.py](image-noexif.py) removes image exif by overwriting / directly editing the uploaded file
* [discord-announce.py](discord-announce.py) announces new uploads on discord using webhooks ([example](https://user-images.githubusercontent.com/241032/215304439-1c1cb3c8-ec6f-4c17-9f27-81f969b1811a.png))
* [reject-mimetype.py](reject-mimetype.py) rejects uploads unless the mimetype is acceptable
* [into-the-cache-it-goes.py](into-the-cache-it-goes.py) avoids bugs in caching proxies by immediately downloading each file that is uploaded
# upload batches
@@ -23,7 +24,10 @@ these are `--xiu` hooks; unlike `xbu` and `xau` (which get executed on every sin
# before upload
* [reject-extension.py](reject-extension.py) rejects uploads if they match a list of file extensions
* [reloc-by-ext.py](reloc-by-ext.py) redirects an upload to another destination based on the file extension
# on message
* [wget.py](wget.py) lets you download files by POSTing URLs to copyparty
* [qbittorrent-magnet.py](qbittorrent-magnet.py) starts downloading a torrent if you post a magnet url
* [msg-log.py](msg-log.py) is a guestbook; logs messages to a doc in the same folder

bin/hooks/discord-announce.py

@@ -12,19 +12,28 @@ announces a new upload on discord
example usage as global config:
--xau f,t5,j,bin/hooks/discord-announce.py
parameters explained,
xau = execute after upload
f = fork; don't delay other hooks while this is running
t5 = timeout if it's still running after 5 sec
j = this hook needs upload information as json (not just the filename)
example usage as a volflag (per-volume config):
-v srv/inc:inc:r:rw,ed:c,xau=f,t5,j,bin/hooks/discord-announce.py
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(share filesystem-path srv/inc as volume /inc,
readable by everyone, read-write for user 'ed',
running this plugin on all uploads with the params listed below)
running this plugin on all uploads with the params explained above)
parameters explained,
xbu = execute after upload
f = fork; don't wait for it to finish
t5 = timeout if it's still running after 5 sec
j = provide upload information as json; not just the filename
example usage as a volflag in a copyparty config file:
[/inc]
srv/inc
accs:
r: *
rw: ed
flags:
xau: f,t5,j,bin/hooks/discord-announce.py
replace "xau" with "xbu" to announce Before upload starts instead of After completion

bin/hooks/into-the-cache-it-goes.py

@@ -0,0 +1,140 @@
#!/usr/bin/env python3
import sys
import json
import shutil
import platform
import subprocess as sp
from urllib.parse import quote
_ = r"""
try to avoid race conditions in caching proxies
(primarily cloudflare, but probably others too)
by means of the most obvious solution possible:
just as each file has finished uploading, use
the server's external URL to download the file
so that it ends up in the cache, warm and snug
this intentionally delays the upload response
as it waits for the file to finish downloading
before copyparty is allowed to return the URL
NOTE: you must edit this script before use,
replacing https://example.com with your URL
NOTE: if the files are only accessible with a
password and/or filekey, you must also add
a cromulent password in the PASSWORD field
NOTE: needs either wget, curl, or "requests":
python3 -m pip install --user -U requests
example usage as global config:
--xau j,t10,bin/hooks/into-the-cache-it-goes.py
parameters explained,
xau = execute after upload
j = this hook needs upload information as json (not just the filename)
t10 = abort download and continue if it takes longer than 10sec
example usage as a volflag (per-volume config):
-v srv/inc:inc:r:rw,ed:c,xau=j,t10,bin/hooks/into-the-cache-it-goes.py
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(share filesystem-path srv/inc as volume /inc,
readable by everyone, read-write for user 'ed',
running this plugin on all uploads with params explained above)
example usage as a volflag in a copyparty config file:
[/inc]
srv/inc
accs:
r: *
rw: ed
flags:
xau: j,t10,bin/hooks/into-the-cache-it-goes.py
"""
# replace this with your site's external URL
# (including the :portnumber if necessary)
SITE_URL = "https://example.com"
# if downloading is protected by passwords or filekeys,
# specify a valid password between the quotes below:
PASSWORD = ""
# if file is larger than this, skip download
MAX_MEGABYTES = 8
# =============== END OF CONFIG ===============
WINDOWS = platform.system() == "Windows"
def main():
    fun = download_with_python
    if shutil.which("curl"):
        fun = download_with_curl
    elif shutil.which("wget"):
        fun = download_with_wget

    inf = json.loads(sys.argv[1])
    if inf["sz"] > 1024 * 1024 * MAX_MEGABYTES:
        print("[into-the-cache] file is too large; will not download")
        return

    file_url = "/"
    if inf["vp"]:
        file_url += inf["vp"] + "/"
    file_url += inf["ap"].replace("\\", "/").split("/")[-1]
    file_url = SITE_URL.rstrip("/") + quote(file_url, safe=b"/")

    print("[into-the-cache] %s(%s)" % (fun.__name__, file_url))
    fun(file_url, PASSWORD.strip())

    print("[into-the-cache] Download OK")

def download_with_curl(url, pw):
    cmd = ["curl"]
    if pw:
        cmd += ["-HPW:%s" % (pw,)]
    nah = sp.DEVNULL
    sp.check_call(cmd + [url], stdout=nah, stderr=nah)

def download_with_wget(url, pw):
    cmd = ["wget", "-O"]
    cmd += ["nul" if WINDOWS else "/dev/null"]
    if pw:
        cmd += ["--header=PW:%s" % (pw,)]
    nah = sp.DEVNULL
    sp.check_call(cmd + [url], stdout=nah, stderr=nah)

def download_with_python(url, pw):
    import requests

    headers = {}
    if pw:
        headers["PW"] = pw
    with requests.get(url, headers=headers, stream=True) as r:
        r.raise_for_status()
        for _ in r.iter_content(chunk_size=1024 * 256):
            pass

if __name__ == "__main__":
    main()

bin/hooks/msg-log.py

@@ -14,19 +14,32 @@ except:
from datetime import datetime
"""
_ = r"""
use copyparty as a dumb messaging server / guestbook thing;
accepts guestbook entries from 📟 (message-to-server-log) in the web-ui
initially contributed by @clach04 in https://github.com/9001/copyparty/issues/35 (thanks!)
Sample usage:
example usage as global config:
python copyparty-sfx.py --xm j,bin/hooks/msg-log.py
Where:
parameters explained,
xm = execute on message (📟)
j = this hook needs message information as json (not just the message-text)
xm = execute on message-to-server-log
j = provide message information as json; not just the text - this script REQUIRES json
t10 = timeout and kill download after 10 secs
example usage as a volflag (per-volume config):
python copyparty-sfx.py -v srv/log:log:r:c,xm=j,bin/hooks/msg-log.py
^^^^^^^^^^^^^^^^^^^^^^^^^^^
(share filesystem-path srv/log as volume /log, readable by everyone,
running this plugin on all messages with the params explained above)
example usage as a volflag in a copyparty config file:
[/log]
srv/log
accs:
r: *
flags:
xm: j,bin/hooks/msg-log.py
"""

bin/hooks/qbittorrent-magnet.py Executable file

@@ -0,0 +1,128 @@
#!/usr/bin/env python3
# coding: utf-8
import os
import sys
import json
import shutil
import subprocess as sp
_ = r"""
start downloading a torrent by POSTing a magnet URL to copyparty,
for example using 📟 (message-to-server-log) in the web-ui
by default it will download the torrent to the folder you were in
when you pasted the magnet into the message-to-server-log field
you can optionally specify another location by adding a whitespace
after the magnet URL followed by the name of the subfolder to DL into;
for example "anime/airing" would download to /srv/media/anime/airing
because the keyword "anime" is in the DESTS config below
needs python3
example usage as global config (not a good idea):
python copyparty-sfx.py --xm aw,f,j,t60,bin/hooks/qbittorrent-magnet.py
parameters explained,
xm = execute on message (📟)
aw = only users with write-access can use this
f = fork; don't delay other hooks while this is running
j = provide message information as json (not just the text)
t60 = abort if qbittorrent has to think about it for more than 1 min
example usage as a volflag (per-volume config, much better):
-v srv/qb:qb:A,ed:c,xm=aw,f,j,t60,bin/hooks/qbittorrent-magnet.py
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(share filesystem-path srv/qb as volume /qb with Admin for user 'ed',
running this plugin on all messages with the params explained above)
example usage as a volflag in a copyparty config file:
[/qb]
srv/qb
accs:
A: ed
flags:
xm: aw,f,j,t60,bin/hooks/qbittorrent-magnet.py
the volflag examples only kick in if you send the torrent magnet
while you're in the /qb folder (or any folder below there)
"""
# list of usernames to allow
ALLOWLIST = [ "ed", "morpheus" ]
# list of destination aliases to translate into full filesystem
# paths; takes effect if the first folder component in the
# custom download location matches anything in this dict
DESTS = {
    "iso": "/srv/pub/linux-isos",
    "anime": "/srv/media/anime",
}


def main():
    inf = json.loads(sys.argv[1])
    url = inf["txt"]
    if not url.lower().startswith("magnet:?"):
        # not a magnet, abort
        return

    if inf["user"] not in ALLOWLIST:
        print("🧲 denied for user", inf["user"])
        return

    # might as well run the command inside the filesystem folder
    # which matches the URL that the magnet message was sent to
    os.chdir(inf["ap"])

    # is there a custom download location in the url?
    dst = ""
    if " " in url:
        url, dst = url.split(" ", 1)

    # is the location in the predefined list of locations?
    parts = dst.replace("\\", "/").split("/")
    if parts[0] in DESTS:
        dst = os.path.join(DESTS[parts[0]], *(parts[1:]))
    else:
        # nope, so download to the current folder instead;
        # comment the dst line below to instead use the default
        # download location from your qbittorrent settings
        dst = inf["ap"]
        pass

    # archlinux has a -nox suffix for qbittorrent if headless
    # so check if we should be using that
    if shutil.which("qbittorrent-nox"):
        torrent_bin = "qbittorrent-nox"
    else:
        torrent_bin = "qbittorrent"

    # the command to add a new torrent, adjust if necessary
    cmd = [torrent_bin, url]
    if dst:
        cmd += ["--save-path=%s" % (dst,)]

    # if copyparty and qbittorrent are running as different users
    # you may have to do something like the following
    # (assuming qbittorrent* is nopasswd-allowed in sudoers):
    #
    # cmd = ["sudo", "-u", "qbitter"] + cmd

    print("🧲", cmd)

    try:
        sp.check_call(cmd)
    except:
        print("🧲 FAILED TO ADD", url)


if __name__ == "__main__":
    main()

bin/hooks/reloc-by-ext.py (new file, 127 lines)
View File

@@ -0,0 +1,127 @@
#!/usr/bin/env python3
import json
import os
import re
import sys
_ = r"""
relocate/redirect incoming uploads according to file extension or name
example usage as global config:
--xbu j,c1,bin/hooks/reloc-by-ext.py
parameters explained,
xbu = execute before upload
j = this hook needs upload information as json (not just the filename)
c1 = this hook returns json on stdout, so tell copyparty to read that
example usage as a volflag (per-volume config):
-v srv/inc:inc:r:rw,ed:c,xbu=j,c1,bin/hooks/reloc-by-ext.py
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(share filesystem-path srv/inc as volume /inc,
readable by everyone, read-write for user 'ed',
running this plugin on all uploads with the params explained above)
example usage as a volflag in a copyparty config file:
[/inc]
srv/inc
accs:
r: *
rw: ed
flags:
xbu: j,c1,bin/hooks/reloc-by-ext.py
note: this could also work as an xau hook (after-upload), but
because it doesn't need to read the file contents it's better
as xbu (before-upload) since that's safer / less buggy,
and only xbu works with up2k (dragdrop into browser)
"""
PICS = "avif bmp gif heic heif jpeg jpg jxl png psd qoi tga tif tiff webp"
VIDS = "3gp asf avi flv mkv mov mp4 mpeg mpeg2 mpegts mpg mpg2 nut ogm ogv rm ts vob webm wmv"
MUSIC = "aac aif aiff alac amr ape dfpwm flac m4a mp3 ogg opus ra tak tta wav wma wv"
def main():
    inf = json.loads(sys.argv[1])
    vdir, fn = os.path.split(inf["vp"])
    try:
        fn, ext = fn.rsplit(".", 1)
    except:
        # no file extension; pretend it's "bin"
        ext = "bin"

    ext = ext.lower()

    # this function must end by printing the action to perform;
    # that's handled by the print(json.dumps(... at the bottom
    #
    # the action can contain the following keys:
    # "vp" is the folder URL to move the upload to,
    # "ap" is the filesystem-path to move it to (but "vp" is safer),
    # "fn" overrides the final filename to use

    ##
    ## some example actions to take; pick one by
    ## selecting it inside the print at the end:
    ##

    # create a subfolder named after the filetype and move it into there
    into_subfolder = {"vp": ext}

    # move it into a toplevel folder named after the filetype
    into_toplevel = {"vp": "/" + ext}

    # move it into a filetype-named folder next to the target folder
    into_sibling = {"vp": "../" + ext}

    # move images into "/just/pics", vids into "/just/vids",
    # music into "/just/tunes", and anything else as-is
    if ext in PICS.split():
        by_category = {"vp": "/just/pics"}
    elif ext in VIDS.split():
        by_category = {"vp": "/just/vids"}
    elif ext in MUSIC.split():
        by_category = {"vp": "/just/tunes"}
    else:
        by_category = {}  # no action

    # now choose the default effect to apply; can be any of these:
    # into_subfolder into_toplevel into_sibling by_category
    effect = {"vp": "/junk"}

    ##
    ## but we can keep going, adding more specific rules
    ## which can take precedence, replacing the fallback
    ## effect we just specified:
    ##

    fn = fn.lower()  # lowercase filename to make this easier

    if "screenshot" in fn:
        effect = {"vp": "/ss"}

    if "mpv_" in fn:
        effect = {"vp": "/anishots"}
    elif "debian" in fn or "biebian" in fn:
        effect = {"vp": "/linux-ISOs"}
    elif re.search(r"ep(isode |\.)?[0-9]", fn):
        effect = {"vp": "/podcasts"}

    # regex lets you grab a part of the matching
    # text and use that in the upload path:
    m = re.search(r"\b(op|ed)([^a-z]|$)", fn)
    if m:
        # the regex matched; use "anime-op" or "anime-ed"
        effect = {"vp": "/anime-" + m[1]}

    # aaand DO IT
    print(json.dumps({"reloc": effect}))


if __name__ == "__main__":
    main()
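to see the j/c1 contract in action, here is a hypothetical local test which mimics how copyparty calls the hook (upload info as a json argument, relocation decision read back from stdout); it assumes the script is saved as bin/hooks/reloc-by-ext.py relative to the current directory and that python3 is on PATH:

import json
import subprocess

# pretend an upload named Screenshot_2024-08-30.png landed in the "inc" volume;
# "vp" is the only json field this particular hook reads
fake_upload = {"vp": "inc/Screenshot_2024-08-30.png"}
out = subprocess.check_output(
    ["python3", "bin/hooks/reloc-by-ext.py", json.dumps(fake_upload)]
)
# the filename contains "screenshot", so with the rules above the
# hook should reply: {"reloc": {"vp": "/ss"}}
print(out.decode().strip())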

View File

@@ -9,25 +9,38 @@ import subprocess as sp
_ = r"""
use copyparty as a file downloader by POSTing URLs as
application/x-www-form-urlencoded (for example using the
message/pager function on the website)
📟 message-to-server-log in the web-ui)
example usage as global config:
--xm f,j,t3600,bin/hooks/wget.py
example usage as a volflag (per-volume config):
-v srv/inc:inc:r:rw,ed:c,xm=f,j,t3600,bin/hooks/wget.py
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(share filesystem-path srv/inc as volume /inc,
readable by everyone, read-write for user 'ed',
running this plugin on all messages with the params listed below)
--xm aw,f,j,t3600,bin/hooks/wget.py
parameters explained,
xm = execute on message-to-server-log
f = fork so it doesn't block uploads
j = provide message information as json; not just the text
aw = only users with write-access can use this
f = fork; don't delay other hooks while this is running
j = provide message information as json (not just the text)
c3 = mute all output
t3600 = timeout and kill download after 1 hour
t3600 = timeout and abort download after 1 hour
example usage as a volflag (per-volume config):
-v srv/inc:inc:r:rw,ed:c,xm=aw,f,j,t3600,bin/hooks/wget.py
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(share filesystem-path srv/inc as volume /inc,
readable by everyone, read-write for user 'ed',
running this plugin on all messages with the params explained above)
example usage as a volflag in a copyparty config file:
[/inc]
srv/inc
accs:
r: *
rw: ed
flags:
xm: aw,f,j,t3600,bin/hooks/wget.py
the volflag examples only kick in if you send the message
while you're in the /inc folder (or any folder below there)
"""

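as a rough illustration of the mechanism described above (NOT the actual wget.py), an xm hook of this kind boils down to reading the json from argv, pulling the URL out of the message text, and launching the downloader in the folder the message was posted to; "txt" and "ap" are the same json fields used by the other hooks in this repo, everything else is an assumption:

#!/usr/bin/env python3
# url-downloader xm hook (sketch only; the real wget.py has more features)
import json
import subprocess as sp
import sys


def main():
    inf = json.loads(sys.argv[1])  # message info as json (the "j" flag)
    url = inf["txt"].strip()  # the text typed into the 📟 field
    if not url.lower().startswith(("http://", "https://")):
        return  # not a URL; ignore the message
    # download into the folder the message was posted to
    sp.check_call(["wget", "--", url], cwd=inf["ap"])


if __name__ == "__main__":
    main()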
View File

@@ -1,8 +1,8 @@
#!/usr/bin/env python3
from __future__ import print_function, unicode_literals
S_VERSION = "1.17"
S_BUILD_DT = "2024-05-09"
S_VERSION = "1.23"
S_BUILD_DT = "2024-08-22"
"""
u2c.py: upload to copyparty
@@ -20,6 +20,7 @@ import sys
import stat
import math
import time
import json
import atexit
import signal
import socket
@@ -79,7 +80,7 @@ req_ses = requests.Session()
class Daemon(threading.Thread):
def __init__(self, target, name = None, a = None):
def __init__(self, target, name=None, a=None):
threading.Thread.__init__(self, name=name)
self.a = a or ()
self.fun = target
@@ -110,18 +111,22 @@ class File(object):
# set by get_hashlist
self.cids = [] # type: list[tuple[str, int, int]] # [ hash, ofs, sz ]
self.kchunks = {} # type: dict[str, tuple[int, int]] # hash: [ ofs, sz ]
self.t_hash = 0.0 # type: float
# set by handshake
self.recheck = False # duplicate; redo handshake after all files done
self.ucids = [] # type: list[str] # chunks which need to be uploaded
self.wark = "" # type: str
self.url = "" # type: str
self.nhs = 0
self.nhs = 0 # type: int
# set by upload
self.t0_up = 0.0 # type: float
self.t1_up = 0.0 # type: float
self.nojoin = 0 # type: int
self.up_b = 0 # type: int
self.up_c = 0 # type: int
self.cd = 0
self.cd = 0 # type: int
# t = "size({}) lmod({}) top({}) rel({}) abs({}) name({})\n"
# eprint(t.format(self.size, self.lmod, self.top, self.rel, self.abs, self.name))
@@ -130,10 +135,20 @@ class File(object):
class FileSlice(object):
"""file-like object providing a fixed window into a file"""
def __init__(self, file, cid):
def __init__(self, file, cids):
# type: (File, str) -> None
self.car, self.len = file.kchunks[cid]
self.file = file
self.cids = cids
self.car, tlen = file.kchunks[cids[0]]
for cid in cids[1:]:
ofs, clen = file.kchunks[cid]
if ofs != self.car + tlen:
raise Exception(9)
tlen += clen
self.len = tlen
self.cdr = self.car + self.len
self.ofs = 0 # type: int
self.f = open(file.abs, "rb", 512 * 1024)
@@ -357,7 +372,7 @@ def undns(url):
usp = urlsplit(url)
hn = usp.hostname
gai = None
eprint("resolving host [{0}] ...".format(hn), end="")
eprint("resolving host [%s] ..." % (hn,))
try:
gai = socket.getaddrinfo(hn, None)
hn = gai[0][4][0]
@@ -375,7 +390,7 @@ def undns(url):
usp = usp._replace(netloc=hn)
url = urlunsplit(usp)
eprint(" {0}".format(url))
eprint(" %s\n" % (url,))
return url
@@ -518,6 +533,8 @@ def get_hashlist(file, pcb, mth):
file_ofs = 0
ret = []
with open(file.abs, "rb", 512 * 1024) as f:
t0 = time.time()
if mth and file.size >= 1024 * 512:
ret = mth.hash(f, file.size, chunk_sz, pcb, file)
file_rem = 0
@@ -544,10 +561,12 @@ def get_hashlist(file, pcb, mth):
if pcb:
pcb(file, file_ofs)
file.t_hash = time.time() - t0
file.cids = ret
file.kchunks = {}
for k, v1, v2 in ret:
file.kchunks[k] = [v1, v2]
if k not in file.kchunks:
file.kchunks[k] = [v1, v2]
def handshake(ar, file, search):
@@ -589,7 +608,8 @@ def handshake(ar, file, search):
sc = 600
txt = ""
try:
r = req_ses.post(url, headers=headers, json=req)
zs = json.dumps(req, separators=(",\n", ": "))
r = req_ses.post(url, headers=headers, data=zs)
sc = r.status_code
txt = r.text
if sc < 400:
@@ -636,13 +656,20 @@ def handshake(ar, file, search):
return r["hash"], r["sprs"]
def upload(file, cid, pw, stats):
# type: (File, str, str, str) -> None
"""upload one specific chunk, `cid` (a chunk-hash)"""
def upload(fsl, pw, stats):
# type: (FileSlice, str, str) -> None
"""upload a range of file data, defined by one or more `cid` (chunk-hash)"""
ctxt = fsl.cids[0]
if len(fsl.cids) > 1:
n = 192 // len(fsl.cids)
n = 9 if n > 9 else 2 if n < 2 else n
zsl = [zs[:n] for zs in fsl.cids[1:]]
ctxt += ",%d,%s" % (n, "".join(zsl))
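# e.g. for three joined chunks this becomes "<cid0>,9,<cid1[:9]><cid2[:9]>";
# presumably this lets the server know which additional chunks are covered
# by this single POST body (inferred from this diff, not from the protocol docs)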
headers = {
"X-Up2k-Hash": cid,
"X-Up2k-Wark": file.wark,
"X-Up2k-Hash": ctxt,
"X-Up2k-Wark": fsl.file.wark,
"Content-Type": "application/octet-stream",
}
@@ -652,15 +679,24 @@ def upload(file, cid, pw, stats):
if pw:
headers["Cookie"] = "=".join(["cppwd", pw])
f = FileSlice(file, cid)
try:
r = req_ses.post(file.url, headers=headers, data=f)
r = req_ses.post(fsl.file.url, headers=headers, data=fsl)
if r.status_code == 400:
txt = r.text
if (
"already being written" in txt
or "already got that" in txt
or "only sibling chunks" in txt
):
fsl.file.nojoin = 1
if not r:
raise Exception(repr(r))
_ = r.content
finally:
f.f.close()
fsl.f.close()
class Ctl(object):
@@ -724,6 +760,9 @@ class Ctl(object):
if ar.safe:
self._safe()
else:
self.at_hash = 0.0
self.at_up = 0.0
self.at_upr = 0.0
self.hash_f = 0
self.hash_c = 0
self.hash_b = 0
@@ -743,7 +782,7 @@ class Ctl(object):
self.mutex = threading.Lock()
self.q_handshake = Queue() # type: Queue[File]
self.q_upload = Queue() # type: Queue[tuple[File, str]]
self.q_upload = Queue() # type: Queue[FileSlice]
self.st_hash = [None, "(idle, starting...)"] # type: tuple[File, int]
self.st_up = [None, "(idle, starting...)"] # type: tuple[File, int]
@@ -788,7 +827,8 @@ class Ctl(object):
for nc, cid in enumerate(hs):
print(" {0} up {1}".format(ncs - nc, cid))
stats = "{0}/0/0/{1}".format(nf, self.nfiles - nf)
upload(file, cid, self.ar.a, stats)
fslice = FileSlice(file, [cid])
upload(fslice, self.ar.a, stats)
print(" ok!")
if file.recheck:
@@ -797,7 +837,7 @@ class Ctl(object):
if not self.recheck:
return
eprint("finalizing {0} duplicate files".format(len(self.recheck)))
eprint("finalizing %d duplicate files\n" % (len(self.recheck),))
for file in self.recheck:
handshake(self.ar, file, search)
@@ -871,10 +911,17 @@ class Ctl(object):
t = "{0} eta @ {1}/s, {2}, {3}# left".format(self.eta, spd, sleft, nleft)
eprint(txt + "\033]0;{0}\033\\\r{0}{1}".format(t, tail))
if self.hash_b and self.at_hash:
spd = humansize(self.hash_b / self.at_hash)
eprint("\nhasher: %.2f sec, %s/s\n" % (self.at_hash, spd))
if self.up_b and self.at_up:
spd = humansize(self.up_b / self.at_up)
eprint("upload: %.2f sec, %s/s\n" % (self.at_up, spd))
if not self.recheck:
return
eprint("finalizing {0} duplicate files".format(len(self.recheck)))
eprint("finalizing %d duplicate files\n" % (len(self.recheck),))
for file in self.recheck:
handshake(self.ar, file, False)
@@ -1060,21 +1107,62 @@ class Ctl(object):
self.handshaker_busy -= 1
if not hs:
kw = "uploaded" if file.up_b else " found"
print("{0} {1}".format(kw, upath))
for cid in hs:
self.q_upload.put([file, cid])
self.at_hash += file.t_hash
if self.ar.spd:
if VT100:
c1 = "\033[36m"
c2 = "\033[0m"
else:
c1 = c2 = ""
spd_h = humansize(file.size / file.t_hash, True)
if file.up_b:
t_up = file.t1_up - file.t0_up
spd_u = humansize(file.size / t_up, True)
t = "uploaded %s %s(h:%.2fs,%s/s,up:%.2fs,%s/s)%s"
print(t % (upath, c1, file.t_hash, spd_h, t_up, spd_u, c2))
else:
t = " found %s %s(%.2fs,%s/s)%s"
print(t % (upath, c1, file.t_hash, spd_h, c2))
else:
kw = "uploaded" if file.up_b else " found"
print("{0} {1}".format(kw, upath))
chunksz = up2k_chunksize(file.size)
njoin = (self.ar.sz * 1024 * 1024) // chunksz
cs = hs[:]
while cs:
fsl = FileSlice(file, cs[:1])
try:
if file.nojoin:
raise Exception()
for n in range(2, min(len(cs), njoin + 1)):
fsl = FileSlice(file, cs[:n])
except:
pass
cs = cs[len(fsl.cids) :]
self.q_upload.put(fsl)
def uploader(self):
while True:
task = self.q_upload.get()
if not task:
fsl = self.q_upload.get()
if not fsl:
self.st_up = [None, "(finished)"]
break
file = fsl.file
cids = fsl.cids
with self.mutex:
if not self.uploader_busy:
self.at_upr = time.time()
self.uploader_busy += 1
self.t0_up = self.t0_up or time.time()
if not file.t0_up:
file.t0_up = time.time()
if not self.t0_up:
self.t0_up = file.t0_up
stats = "%d/%d/%d/%d %d/%d %s" % (
self.up_f,
@@ -1086,28 +1174,30 @@ class Ctl(object):
self.eta,
)
file, cid = task
try:
upload(file, cid, self.ar.a, stats)
upload(fsl, self.ar.a, stats)
except Exception as ex:
t = "upload failed, retrying: {0} #{1} ({2})\n"
eprint(t.format(file.name, cid[:8], ex))
t = "upload failed, retrying: %s #%s+%d (%s)\n"
eprint(t % (file.name, cids[0][:8], len(cids) - 1, ex))
file.cd = time.time() + self.ar.cd
# handshake will fix it
with self.mutex:
sz = file.kchunks[cid][1]
file.ucids = [x for x in file.ucids if x != cid]
sz = fsl.len
file.ucids = [x for x in file.ucids if x not in cids]
if not file.ucids:
file.t1_up = time.time()
self.q_handshake.put(file)
self.st_up = [file, cid]
self.st_up = [file, cids[0]]
file.up_b += sz
self.up_b += sz
self.up_br += sz
file.up_c += 1
self.up_c += 1
self.uploader_busy -= 1
if not self.uploader_busy:
self.at_up += time.time() - self.at_upr
def up_done(self, file):
if self.ar.dl:
@@ -1144,12 +1234,13 @@ source file/folder selection uses rsync syntax, meaning that:
ap.add_argument("url", type=unicode, help="server url, including destination folder")
ap.add_argument("files", type=unicode, nargs="+", help="files and/or folders to process")
ap.add_argument("-v", action="store_true", help="verbose")
ap.add_argument("-a", metavar="PASSWORD", help="password or $filepath")
ap.add_argument("-a", metavar="PASSWD", help="password or $filepath")
ap.add_argument("-s", action="store_true", help="file-search (disables upload)")
ap.add_argument("-x", type=unicode, metavar="REGEX", default="", help="skip file if filesystem-abspath matches REGEX, example: '.*/\\.hist/.*'")
ap.add_argument("-x", type=unicode, metavar="REGEX", action="append", help="skip file if filesystem-abspath matches REGEX (option can be repeated), example: '.*/\\.hist/.*'")
ap.add_argument("--ok", action="store_true", help="continue even if some local files are inaccessible")
ap.add_argument("--touch", action="store_true", help="if last-modified timestamps differ, push local to server (need write+delete perms)")
ap.add_argument("--ow", action="store_true", help="overwrite existing files instead of autorenaming")
ap.add_argument("--spd", action="store_true", help="print speeds for each file")
ap.add_argument("--version", action="store_true", help="show version and exit")
ap = app.add_argument_group("compatibility")
@@ -1162,8 +1253,9 @@ source file/folder selection uses rsync syntax, meaning that:
ap.add_argument("--drd", action="store_true", help="delete remote files during upload instead of afterwards; reduces peak disk space usage, but will reupload instead of detecting renames")
ap = app.add_argument_group("performance tweaks")
ap.add_argument("-j", type=int, metavar="THREADS", default=4, help="parallel connections")
ap.add_argument("-J", type=int, metavar="THREADS", default=hcores, help="num cpu-cores to use for hashing; set 0 or 1 for single-core hashing")
ap.add_argument("-j", type=int, metavar="CONNS", default=2, help="parallel connections")
ap.add_argument("-J", type=int, metavar="CORES", default=hcores, help="num cpu-cores to use for hashing; set 0 or 1 for single-core hashing")
ap.add_argument("--sz", type=int, metavar="MiB", default=64, help="try to make each POST this big")
ap.add_argument("-nh", action="store_true", help="disable hashing while uploading")
ap.add_argument("-ns", action="store_true", help="no status panel (for slow consoles and macos)")
ap.add_argument("--cd", type=float, metavar="SEC", default=5, help="delay before reattempting a failed handshake/upload")
@@ -1171,7 +1263,7 @@ source file/folder selection uses rsync syntax, meaning that:
ap.add_argument("-z", action="store_true", help="ZOOMIN' (skip uploading files if they exist at the destination with the ~same last-modified timestamp, so same as yolo / turbo with date-chk but even faster)")
ap = app.add_argument_group("tls")
ap.add_argument("-te", metavar="PEM_FILE", help="certificate to expect/verify")
ap.add_argument("-te", metavar="PATH", help="path to ca.pem or cert.pem to expect/verify")
ap.add_argument("-td", action="store_true", help="disable certificate check")
# fmt: on
@@ -1191,6 +1283,8 @@ source file/folder selection uses rsync syntax, meaning that:
if ar.dr:
ar.ow = True
ar.x = "|".join(ar.x or [])
for k in "dl dr drd".split():
errs = []
if ar.safe and getattr(ar, k):
@@ -1209,6 +1303,14 @@ source file/folder selection uses rsync syntax, meaning that:
if "://" not in ar.url:
ar.url = "http://" + ar.url
if "https://" in ar.url.lower():
try:
import ssl, zipfile
except:
t = "ERROR: https is not available for some reason; please use http"
print("\n\n %s\n\n" % (t,))
raise
if ar.a and ar.a.startswith("$"):
fn = ar.a[1:]
print("reading password from file [{0}]".format(fn))

View File

@@ -1,6 +1,6 @@
# Maintainer: icxes <dev.null@need.moe>
pkgname=copyparty
pkgver="1.13.1"
pkgver="1.14.3"
pkgrel=1
pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
arch=("any")
@@ -21,7 +21,7 @@ optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tag
)
source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
backup=("etc/${pkgname}.d/init" )
sha256sums=("f103b784c423a45fbab47c584e4cc53d887fe0616f803bffe009fbfdab3963d7")
sha256sums=("7fcdcec0d7b118bf17a98b1a409331dcfc0fbbd431b362b991f5800006fd8c98")
build() {
cd "${srcdir}/${pkgname}-${pkgver}"

View File

@@ -1,5 +1,5 @@
{
"url": "https://github.com/9001/copyparty/releases/download/v1.13.1/copyparty-sfx.py",
"version": "1.13.1",
"hash": "sha256-NFfnveCrR1SbiNlibVyU3UPePLUGJMc4XZvWdksXNd8="
"url": "https://github.com/9001/copyparty/releases/download/v1.14.3/copyparty-sfx.py",
"version": "1.14.3",
"hash": "sha256-1yVeJfYnyNNKYX3KdmYP0ECx7K8EjuWvApMw0diJ1sk="
}

View File

@@ -20,6 +20,13 @@ point `--js-browser` to one of these by URL:
## example any-js
point `--js-browser` and/or `--js-other` to one of these by URL:
* [`banner.js`](banner.js) shows a very enterprise [legal-banner](https://github.com/user-attachments/assets/8ae8e087-b209-449c-b08d-74e040f0284b)
## example browser-css
point `--css-browser` to one of these by URL:

contrib/plugins/banner.js (new file, 93 lines)
View File

@@ -0,0 +1,93 @@
(function() {
// usage: copy this to '.banner.js' in your webroot,
// and run copyparty with the following arguments:
// --js-browser /.banner.js --js-other /.banner.js
// had to pick the most chuuni one as the default
var bannertext = '' +
'<h3>You are accessing a U.S. Government (USG) Information System (IS) that is provided for USG-authorized use only.</h3>' +
'<p>By using this IS (which includes any device attached to this IS), you consent to the following conditions:</p>' +
'<ul>' +
'<li>The USG routinely intercepts and monitors communications on this IS for purposes including, but not limited to, penetration testing, COMSEC monitoring, network operations and defense, personnel misconduct (PM), law enforcement (LE), and counterintelligence (CI) investigations.</li>' +
'<li>At any time, the USG may inspect and seize data stored on this IS.</li>' +
'<li>Communications using, or data stored on, this IS are not private, are subject to routine monitoring, interception, and search, and may be disclosed or used for any USG-authorized purpose.</li>' +
'<li>This IS includes security measures (e.g., authentication and access controls) to protect USG interests -- not for your personal benefit or privacy.</li>' +
'<li>Notwithstanding the above, using this IS does not constitute consent to PM, LE or CI investigative searching or monitoring of the content of privileged communications, or work product, related to personal representation or services by attorneys, psychotherapists, or clergy, and their assistants. Such communications and work product are private and confidential. See User Agreement for details.</li>' +
'</ul>';
// fancy div to insert into pages
function bannerdiv(border) {
var ret = mknod('div', null, bannertext);
if (border)
ret.setAttribute("style", "border:1em solid var(--fg); border-width:.3em 0; margin:3em 0");
return ret;
}
// keep all of these false and then selectively enable them in the if-blocks below
var show_msgbox = false,
login_top = false,
top = false,
bottom = false,
top_bordered = false,
bottom_bordered = false;
if (QS("h1#cc") && QS("a#k")) {
// this is the controlpanel
// (you probably want to keep just one of these enabled)
show_msgbox = true;
login_top = true;
bottom = true;
}
else if (ebi("swin") && ebi("smac")) {
// this is the connect-page, same deal here
show_msgbox = true;
top_bordered = true;
bottom_bordered = true;
}
else if (ebi("op_cfg") || ebi("div#mw") ) {
// we're running in the main filebrowser (op_cfg) or markdown-viewer/editor (div#mw),
// fragile pages which break if you do something too fancy
show_msgbox = true;
}
// shows a fullscreen messagebox; works on all pages
if (show_msgbox) {
var now = Math.floor(Date.now() / 1000),
last_shown = sread("bannerts") || 0;
// 60 * 60 * 17 = 17 hour cooldown
if (now - last_shown > 60 * 60 * 17) {
swrite("bannerts", now);
modal.confirm(bannertext, null, function () {
location = 'https://this-page-intentionally-left-blank.org/';
});
}
}
// show a message at the top of the page; only works on the connect-page
if (top || top_bordered) {
var dst = ebi('wrap');
dst.insertBefore(bannerdiv(top_bordered), dst.firstChild);
}
// show a message on the page footer; only works on the controlpanel and connect-page
if (bottom || bottom_bordered) {
ebi('wrap').appendChild(bannerdiv(bottom_bordered));
}
// show a message on the top of the page; only works on the controlpanel
if (login_top) {
var dst = QS('h1');
dst.parentNode.insertBefore(bannerdiv(false), dst);
}
})();

View File

@@ -4,7 +4,7 @@
#
# installation:
# wget https://github.com/9001/copyparty/releases/latest/download/copyparty-sfx.py -O /usr/local/bin/copyparty-sfx.py
# useradd -r -s /sbin/nologin -d /var/lib/copyparty copyparty
# useradd -r -s /sbin/nologin -m -d /var/lib/copyparty copyparty
# firewall-cmd --permanent --add-port=3923/tcp # --zone=libvirt
# firewall-cmd --reload
# cp -pv copyparty.service /etc/systemd/system/
@@ -12,11 +12,18 @@
# restorecon -vr /etc/systemd/system/copyparty.service # on fedora/rhel
# systemctl daemon-reload && systemctl enable --now copyparty
#
# every time you edit this file, you must "systemctl daemon-reload"
# for the changes to take effect and then "systemctl restart copyparty"
#
# if it fails to start, first check this: systemctl status copyparty
# then try starting it while viewing logs:
# journalctl -fan 100
# tail -Fn 100 /var/log/copyparty/$(date +%Y-%m%d.log)
#
# if you run into any issues, for example thumbnails not working,
# try removing the "some quick hardening" section and then please
# let me know if that actually helped so we can look into it
#
# you may want to:
# - change "User=copyparty" and "/var/lib/copyparty/" to another user
# - edit /etc/copyparty.conf to configure copyparty

contrib/themes/bsod.css (new file, 116 lines)
View File

@@ -0,0 +1,116 @@
/* copy bsod.* into a folder named ".themes" in your webroot and then
--themes=10 --theme=9 --css-browser=/.themes/bsod.css
*/
html.ey {
--w2: #3d7bbc;
--w3: #5fcbec;
--fg: #fff;
--fg-max: #fff;
--fg-weak: var(--w3);
--bg: #2067b2;
--bg-d3: var(--bg);
--bg-d2: var(--w2);
--bg-d1: var(--fg-weak);
--bg-u2: var(--bg);
--bg-u3: var(--bg);
--bg-u5: var(--w2);
--tab-alt: var(--fg-weak);
--row-alt: var(--w2);
--scroll: var(--w3);
--a: #fff;
--a-b: #fff;
--a-hil: #fff;
--a-h-bg: var(--fg-weak);
--a-dark: var(--a);
--a-gray: var(--fg-weak);
--btn-fg: var(--a);
--btn-bg: var(--w2);
--btn-h-fg: var(--w2);
--btn-1-fg: var(--bg);
--btn-1-bg: var(--a);
--txt-sh: a;
--txt-bg: var(--w2);
--u2-b1-bg: var(--w2);
--u2-b2-bg: var(--w2);
--u2-txt-bg: var(--w2);
--u2-tab-bg: a;
--u2-tab-1-bg: var(--w2);
--sort-1: var(--a);
--sort-1: var(--fg-weak);
--tree-bg: var(--bg);
--g-b1: a;
--g-b2: a;
--g-f-bg: var(--w2);
--f-sh1: 0.1;
--f-sh2: 0.02;
--f-sh3: 0.1;
--f-h-b1: a;
--srv-1: var(--a);
--srv-3: var(--a);
--mp-sh: a;
}
html.ey {
background: url('bsod.png') top 5em right 4.5em no-repeat fixed var(--bg);
}
html.ey body#b {
background: var(--bg); /*sandbox*/
}
html.ey #ops {
margin: 1.7em 1.5em 0 1.5em;
border-radius: .3em;
border-width: 1px 0;
}
html.ey #ops a {
text-shadow: 1px 1px 0 rgba(0,0,0,0.5);
}
html.ey .opbox {
margin: 1.5em 0 0 0;
}
html.ey #tree {
box-shadow: none;
}
html.ey #tt {
border-color: var(--w2);
background: var(--w2);
}
html.ey .mdo a {
background: none;
text-decoration: underline;
}
html.ey .mdo pre,
html.ey .mdo code {
color: #fff;
background: var(--w2);
border: none;
}
html.ey .mdo h1,
html.ey .mdo h2 {
background: none;
border-color: var(--w2);
}
html.ey .mdo ul ul,
html.ey .mdo ul ol,
html.ey .mdo ol ul,
html.ey .mdo ol ol {
border-color: var(--w2);
}
html.ey .mdo p>em,
html.ey .mdo li>em,
html.ey .mdo td>em {
color: #fd0;
}

contrib/themes/bsod.png (new binary file, 1.2 KiB; not shown)

View File

@@ -13,6 +13,7 @@ import base64
import locale
import os
import re
import select
import socket
import sys
import threading
@@ -41,6 +42,7 @@ from .util import (
DEF_EXP,
DEF_MTE,
DEF_MTH,
HAVE_IPV6,
IMPLICATIONS,
JINJA_VER,
MIMES,
@@ -65,7 +67,13 @@ if True: # pylint: disable=using-constant-test
from typing import Any, Optional
if PY2:
range = xrange # type: ignore
try:
if os.environ.get("PRTY_NO_TLS"):
raise Exception()
HAVE_SSL = True
import ssl
except:
@@ -173,8 +181,10 @@ def init_E(EE: EnvParams) -> None:
(os.environ.get, "TMP"),
(unicode, "/tmp"),
]
errs = []
for chk in [os.listdir, os.mkdir]:
for pf, pa in paths:
for npath, (pf, pa) in enumerate(paths):
p = ""
try:
p = pf(pa)
# print(chk.__name__, p, pa)
@@ -187,9 +197,20 @@ def init_E(EE: EnvParams) -> None:
if not os.path.isdir(p):
os.mkdir(p)
if npath > 1:
t = "Using [%s] for config; filekeys/dirkeys will change on every restart. Consider setting XDG_CONFIG_HOME or giving the unix-user a ~/.config/"
errs.append(t % (p,))
elif errs:
errs.append("Using [%s] instead" % (p,))
if errs:
print("WARNING: " + ". ".join(errs))
return p # type: ignore
except:
pass
except Exception as ex:
if p and npath < 2:
t = "Unable to store config in [%s] due to %r"
errs.append(t % (p, ex))
raise Exception("could not find a writable path for config")
@@ -279,6 +300,9 @@ def get_ah_salt() -> str:
def ensure_locale() -> None:
if ANYWIN and PY2:
return # maybe XP, so busted 65001
safe = "en_US.UTF-8"
for x in [
safe,
@@ -326,7 +350,7 @@ def configure_ssl_ver(al: argparse.Namespace) -> None:
# oh man i love openssl
# check this out
# hold my beer
assert ssl
assert ssl # type: ignore
ptn = re.compile(r"^OP_NO_(TLS|SSL)v")
sslver = terse_sslver(al.ssl_ver).split(",")
flags = [k for k in ssl.__dict__ if ptn.match(k)]
@@ -360,7 +384,7 @@ def configure_ssl_ver(al: argparse.Namespace) -> None:
def configure_ssl_ciphers(al: argparse.Namespace) -> None:
assert ssl
assert ssl # type: ignore
ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
if al.ssl_ver:
ctx.options &= ~al.ssl_flags_en
@@ -473,11 +497,20 @@ def disable_quickedit() -> None:
def sfx_tpoke(top: str):
files = [os.path.join(dp, p) for dp, dd, df in os.walk(top) for p in dd + df]
if os.environ.get("PRTY_NO_TPOKE"):
return
files = [top] + [
os.path.join(dp, p) for dp, dd, df in os.walk(top) for p in dd + df
]
while True:
t = int(time.time())
for f in [top] + files:
os.utime(f, (t, t))
for f in list(files):
try:
os.utime(f, (t, t))
except Exception as ex:
lprint("<TPOKE> [%s] %r" % (f, ex))
files.remove(f)
time.sleep(78123)
@@ -494,6 +527,41 @@ def showlic() -> None:
def get_sects():
return [
[
"bind",
"configure listening",
dedent(
"""
\033[33m-i\033[0m takes a comma-separated list of interfaces to listen on;
IP-addresses and/or unix-sockets (Unix Domain Sockets)
the default (\033[32m-i ::\033[0m) means all IPv4 and IPv6 addresses
\033[32m-i 0.0.0.0\033[0m listens on all IPv4 NICs/subnets
\033[32m-i 127.0.0.1\033[0m listens on IPv4 localhost only
\033[32m-i 127.1\033[0m listens on IPv4 localhost only
\033[32m-i 127.1,192.168.123.1\033[0m = IPv4 localhost and 192.168.123.1
\033[33m-p\033[0m takes a comma-separated list of tcp ports to listen on;
the default is \033[32m-p 3923\033[0m but as root you can \033[32m-p 80,443,3923\033[0m
when running behind a reverse-proxy, it's recommended to
use unix-sockets for improved performance and security;
\033[32m-i unix:770:www:\033[33m/tmp/a.sock\033[0m listens on \033[33m/tmp/a.sock\033[0m with
permissions \033[33m0770\033[0m; only accessible to members of the \033[33mwww\033[0m
group. This is the best approach. Alternatively,
\033[32m-i unix:777:\033[33m/tmp/a.sock\033[0m sets perms \033[33m0777\033[0m so anyone can
access it; bad unless it's inside a restricted folder
\033[32m-i unix:\033[33m/tmp/a.sock\033[0m keeps umask-defined permissions
(usually \033[33m0600\033[0m) and the same user/group as copyparty
\033[33m-p\033[0m (tcp ports) is ignored for unix sockets
"""
),
],
[
"accounts",
"accounts and volumes",
@@ -616,12 +684,12 @@ def get_sects():
\033[36mxban\033[35m executes CMD if someone gets banned
\033[0m
can be defined as --args or volflags; for example \033[36m
--xau notify-send
-v .::r:c,xau=notify-send
--xau foo.py
-v .::r:c,xau=bar.py
\033[0m
commands specified as --args are appended to volflags;
each --arg and volflag can be specified multiple times,
each command will execute in order unless one returns non-zero
hooks specified as commandline --args are appended to volflags;
each commandline --arg and volflag can be specified multiple times,
each hook will execute in order unless one returns non-zero
optionally prefix the command with comma-sep. flags similar to -mtp:
@@ -632,6 +700,10 @@ def get_sects():
\033[36mtN\033[35m sets an N sec timeout before the command is abandoned
\033[36miN\033[35m xiu only: volume must be idle for N sec (default = 5)
\033[36mar\033[35m only run hook if user has read-access
\033[36marw\033[35m only run hook if user has read-write-access
\033[36marwmd\033[35m ...and so on... (doesn't work for xiu or xban)
\033[36mkt\033[35m kills the entire process tree on timeout (default),
\033[36mkm\033[35m kills just the main process
\033[36mkn\033[35m lets it continue running until copyparty is terminated
@@ -641,6 +713,21 @@ def get_sects():
\033[36mc2\033[35m show only stdout
\033[36mc3\033[35m mute all process output
\033[0m
examples:
\033[36m--xm some.py\033[35m runs \033[33msome.py msgtxt\033[35m on each 📟 message;
\033[33mmsgtxt\033[35m is the message that was written into the web-ui
\033[36m--xm j,some.py\033[35m runs \033[33msome.py jsontext\033[35m on each 📟 message;
\033[33mjsontext\033[35m is the message info (ip, user, ..., msg-text)
\033[36m--xm aw,j,some.py\033[35m requires user to have write-access
\033[36m--xm aw,,notify-send,hey,--\033[35m shows an OS alert on linux;
the \033[33m,,\033[35m stops copyparty from reading the rest as flags and
the \033[33m--\033[35m stops notify-send from reading the message as args
and the alert will be "hey" followed by the messagetext
\033[0m
each hook is executed once for each event, except for \033[36mxiu\033[0m
which builds up a backlog of uploads, running the hook just once
as soon as the volume has been idle for iN seconds (5 by default)
@@ -652,6 +739,11 @@ def get_sects():
\033[36mxban\033[0m can be used to overrule / cancel a user ban event;
if the program returns 0 (true/OK) then the ban will NOT happen
effects can be used to redirect uploads into other
locations, and to delete or index other files based
on new uploads, but with certain limitations. See
bin/hooks/reloc* and docs/devnotes.md#hook-effects
except for \033[36mxm\033[0m, only one hook / one action can run at a time,
so it's recommended to use the \033[36mf\033[0m flag unless you really need
to wait for the hook to finish before continuing (without \033[36mf\033[0m
@@ -667,7 +759,10 @@ def get_sects():
\033[36mstash\033[35m dumps the data to file and returns length + checksum
\033[36msave,get\033[35m dumps to file and returns the page like a GET
\033[36mprint,get\033[35m prints the data in the log and returns GET
(leave out the ",get" to return an error instead)
(leave out the ",get" to return an error instead)\033[0m
note that the \033[35m--xm\033[0m hook will only run if \033[35m--urlform\033[0m
is either \033[36mprint\033[0m or the default \033[36mprint,get\033[0m
"""
),
],
@@ -877,6 +972,16 @@ def add_fs(ap):
ap2.add_argument("--mtab-age", metavar="SEC", type=int, default=60, help="rebuild mountpoint cache every \033[33mSEC\033[0m to keep track of sparse-files support; keep low on servers with removable media")
def add_share(ap):
db_path = os.path.join(E.cfg, "shares.db")
ap2 = ap.add_argument_group('share-url options')
ap2.add_argument("--shr", metavar="DIR", type=u, default="", help="toplevel virtual folder for shared files/folders, for example [\033[32m/share\033[0m]")
ap2.add_argument("--shr-db", metavar="FILE", type=u, default=db_path, help="database to store shares in")
ap2.add_argument("--shr-adm", metavar="U,U", type=u, default="", help="comma-separated list of users allowed to view/delete any share")
ap2.add_argument("--shr-rt", metavar="MIN", type=int, default=1440, help="shares can be revived by their owner if they expired less than MIN minutes ago; [\033[32m60\033[0m]=hour, [\033[32m1440\033[0m]=day, [\033[32m10080\033[0m]=week")
ap2.add_argument("--shr-v", action="store_true", help="debug")
def add_upload(ap):
ap2 = ap.add_argument_group('upload options')
ap2.add_argument("--dotpart", action="store_true", help="dotfile incomplete uploads, hiding them from clients unless \033[33m-ed\033[0m")
@@ -893,23 +998,24 @@ def add_upload(ap):
ap2.add_argument("--no-dupe", action="store_true", help="reject duplicate files during upload; only matches within the same volume (volflag=nodupe)")
ap2.add_argument("--no-snap", action="store_true", help="disable snapshots -- forget unfinished uploads on shutdown; don't create .hist/up2k.snap files -- abandoned/interrupted uploads must be cleaned up manually")
ap2.add_argument("--snap-wri", metavar="SEC", type=int, default=300, help="write upload state to ./hist/up2k.snap every \033[33mSEC\033[0m seconds; allows resuming incomplete uploads after a server crash")
ap2.add_argument("--snap-drop", metavar="MIN", type=float, default=1440, help="forget unfinished uploads after \033[33mMIN\033[0m minutes; impossible to resume them after that (360=6h, 1440=24h)")
ap2.add_argument("--snap-drop", metavar="MIN", type=float, default=1440.0, help="forget unfinished uploads after \033[33mMIN\033[0m minutes; impossible to resume them after that (360=6h, 1440=24h)")
ap2.add_argument("--u2ts", metavar="TXT", type=u, default="c", help="how to timestamp uploaded files; [\033[32mc\033[0m]=client-last-modified, [\033[32mu\033[0m]=upload-time, [\033[32mfc\033[0m]=force-c, [\033[32mfu\033[0m]=force-u (volflag=u2ts)")
ap2.add_argument("--rand", action="store_true", help="force randomized filenames, \033[33m--nrand\033[0m chars long (volflag=rand)")
ap2.add_argument("--nrand", metavar="NUM", type=int, default=9, help="randomized filenames length (volflag=nrand)")
ap2.add_argument("--magic", action="store_true", help="enable filetype detection on nameless uploads (volflag=magic)")
ap2.add_argument("--df", metavar="GiB", type=float, default=0, help="ensure \033[33mGiB\033[0m free disk space by rejecting upload requests")
ap2.add_argument("--df", metavar="GiB", type=u, default="0", help="ensure \033[33mGiB\033[0m free disk space by rejecting upload requests; assumes gigabytes unless a unit suffix is given: [\033[32m256m\033[0m], [\033[32m4\033[0m], [\033[32m2T\033[0m] (volflag=df)")
ap2.add_argument("--sparse", metavar="MiB", type=int, default=4, help="windows-only: minimum size of incoming uploads through up2k before they are made into sparse files")
ap2.add_argument("--turbo", metavar="LVL", type=int, default=0, help="configure turbo-mode in up2k client; [\033[32m-1\033[0m] = forbidden/always-off, [\033[32m0\033[0m] = default-off and warn if enabled, [\033[32m1\033[0m] = default-off, [\033[32m2\033[0m] = on, [\033[32m3\033[0m] = on and disable datecheck")
ap2.add_argument("--u2j", metavar="JOBS", type=int, default=2, help="web-client: number of file chunks to upload in parallel; 1 or 2 is good for low-latency (same-country) connections, 4-8 for android clients, 16 for cross-atlantic (max=64)")
ap2.add_argument("--u2sz", metavar="N,N,N", type=u, default="1,64,96", help="web-client: default upload chunksize (MiB); sets \033[33mmin,default,max\033[0m in the settings gui. Each HTTP POST will aim for this size. Cloudflare max is 96. Big values are good for cross-atlantic but may increase HDD fragmentation on some FS. Disable this optimization with [\033[32m1,1,1\033[0m]")
ap2.add_argument("--u2sort", metavar="TXT", type=u, default="s", help="upload order; [\033[32ms\033[0m]=smallest-first, [\033[32mn\033[0m]=alphabetical, [\033[32mfs\033[0m]=force-s, [\033[32mfn\033[0m]=force-n -- alphabetical is a bit slower on fiber/LAN but makes it easier to eyeball if everything went fine")
ap2.add_argument("--write-uplog", action="store_true", help="write POST reports to textfiles in working-directory")
def add_network(ap):
ap2 = ap.add_argument_group('network options')
ap2.add_argument("-i", metavar="IP", type=u, default="::", help="ip to bind (comma-sep.), default: all IPv4 and IPv6")
ap2.add_argument("-p", metavar="PORT", type=u, default="3923", help="ports to bind (comma/range)")
ap2.add_argument("-i", metavar="IP", type=u, default="::", help="IPs and/or unix-sockets to listen on (see \033[33m--help-bind\033[0m). Default: all IPv4 and IPv6")
ap2.add_argument("-p", metavar="PORT", type=u, default="3923", help="ports to listen on (comma/range); ignored for unix-sockets")
ap2.add_argument("--ll", action="store_true", help="include link-local IPv4/IPv6 in mDNS replies, even if the NIC has routable IPs (breaks some mDNS clients)")
ap2.add_argument("--rproxy", metavar="DEPTH", type=int, default=1, help="which ip to associate clients with; [\033[32m0\033[0m]=tcp, [\033[32m1\033[0m]=origin (first x-fwd, unsafe), [\033[32m2\033[0m]=outermost-proxy, [\033[32m3\033[0m]=second-proxy, [\033[32m-1\033[0m]=closest-proxy")
ap2.add_argument("--xff-hdr", metavar="NAME", type=u, default="x-forwarded-for", help="if reverse-proxied, which http header to read the client's real ip from")
@@ -921,12 +1027,12 @@ def add_network(ap):
else:
ap2.add_argument("--freebind", action="store_true", help="allow listening on IPs which do not yet exist, for example if the network interfaces haven't finished going up. Only makes sense for IPs other than '0.0.0.0', '127.0.0.1', '::', and '::1'. May require running as root (unless net.ipv6.ip_nonlocal_bind)")
ap2.add_argument("--s-thead", metavar="SEC", type=int, default=120, help="socket timeout (read request header)")
ap2.add_argument("--s-tbody", metavar="SEC", type=float, default=186, help="socket timeout (read/write request/response bodies). Use 60 on fast servers (default is extremely safe). Disable with 0 if reverse-proxied for a 2%% speed boost")
ap2.add_argument("--s-tbody", metavar="SEC", type=float, default=186.0, help="socket timeout (read/write request/response bodies). Use 60 on fast servers (default is extremely safe). Disable with 0 if reverse-proxied for a 2%% speed boost")
ap2.add_argument("--s-rd-sz", metavar="B", type=int, default=256*1024, help="socket read size in bytes (indirectly affects filesystem writes; recommendation: keep equal-to or lower-than \033[33m--iobuf\033[0m)")
ap2.add_argument("--s-wr-sz", metavar="B", type=int, default=256*1024, help="socket write size in bytes")
ap2.add_argument("--s-wr-slp", metavar="SEC", type=float, default=0, help="debug: socket write delay in seconds")
ap2.add_argument("--rsp-slp", metavar="SEC", type=float, default=0, help="debug: response delay in seconds")
ap2.add_argument("--rsp-jtr", metavar="SEC", type=float, default=0, help="debug: response delay, random duration 0..\033[33mSEC\033[0m")
ap2.add_argument("--s-wr-slp", metavar="SEC", type=float, default=0.0, help="debug: socket write delay in seconds")
ap2.add_argument("--rsp-slp", metavar="SEC", type=float, default=0.0, help="debug: response delay in seconds")
ap2.add_argument("--rsp-jtr", metavar="SEC", type=float, default=0.0, help="debug: response delay, random duration 0..\033[33mSEC\033[0m")
def add_tls(ap, cert_path):
@@ -934,10 +1040,10 @@ def add_tls(ap, cert_path):
ap2.add_argument("--http-only", action="store_true", help="disable ssl/tls -- force plaintext")
ap2.add_argument("--https-only", action="store_true", help="disable plaintext -- force tls")
ap2.add_argument("--cert", metavar="PATH", type=u, default=cert_path, help="path to TLS certificate")
ap2.add_argument("--ssl-ver", metavar="LIST", type=u, help="set allowed ssl/tls versions; [\033[32mhelp\033[0m] shows available versions; default is what your python version considers safe")
ap2.add_argument("--ciphers", metavar="LIST", type=u, help="set allowed ssl/tls ciphers; [\033[32mhelp\033[0m] shows available ciphers")
ap2.add_argument("--ssl-ver", metavar="LIST", type=u, default="", help="set allowed ssl/tls versions; [\033[32mhelp\033[0m] shows available versions; default is what your python version considers safe")
ap2.add_argument("--ciphers", metavar="LIST", type=u, default="", help="set allowed ssl/tls ciphers; [\033[32mhelp\033[0m] shows available ciphers")
ap2.add_argument("--ssl-dbg", action="store_true", help="dump some tls info")
ap2.add_argument("--ssl-log", metavar="PATH", type=u, help="log master secrets for later decryption in wireshark")
ap2.add_argument("--ssl-log", metavar="PATH", type=u, default="", help="log master secrets for later decryption in wireshark")
def add_cert(ap, cert_path):
@@ -950,12 +1056,12 @@ def add_cert(ap, cert_path):
ap2.add_argument("--crt-nolo", action="store_true", help="do not add 127.0.0.1 / localhost into cert")
ap2.add_argument("--crt-nohn", action="store_true", help="do not add mDNS names / hostname into cert")
ap2.add_argument("--crt-dir", metavar="PATH", default=cert_dir, help="where to save the CA cert")
ap2.add_argument("--crt-cdays", metavar="D", type=float, default=3650, help="ca-certificate expiration time in days")
ap2.add_argument("--crt-sdays", metavar="D", type=float, default=365, help="server-cert expiration time in days")
ap2.add_argument("--crt-cdays", metavar="D", type=float, default=3650.0, help="ca-certificate expiration time in days")
ap2.add_argument("--crt-sdays", metavar="D", type=float, default=365.0, help="server-cert expiration time in days")
ap2.add_argument("--crt-cn", metavar="TXT", type=u, default="partyco", help="CA/server-cert common-name")
ap2.add_argument("--crt-cnc", metavar="TXT", type=u, default="--crt-cn", help="override CA name")
ap2.add_argument("--crt-cns", metavar="TXT", type=u, default="--crt-cn cpp", help="override server-cert name")
ap2.add_argument("--crt-back", metavar="HRS", type=float, default=72, help="backdate in hours")
ap2.add_argument("--crt-back", metavar="HRS", type=float, default=72.0, help="backdate in hours")
ap2.add_argument("--crt-alg", metavar="S-N", type=u, default="ecdsa-256", help="algorithm and keysize; one of these: \033[32mecdsa-256 rsa-4096 rsa-2048\033[0m")
@@ -969,6 +1075,16 @@ def add_auth(ap):
ap2.add_argument("--bauth-last", action="store_true", help="keeps basic-authentication enabled, but only as a last-resort; if a cookie is also provided then the cookie wins")
def add_chpw(ap):
db_path = os.path.join(E.cfg, "chpw.json")
ap2 = ap.add_argument_group('user-changeable passwords options')
ap2.add_argument("--chpw", action="store_true", help="allow users to change their own passwords")
ap2.add_argument("--chpw-no", metavar="U,U,U", type=u, action="append", help="do not allow password-changes for this comma-separated list of usernames")
ap2.add_argument("--chpw-db", metavar="PATH", type=u, default=db_path, help="where to store the passwords database (if you run multiple copyparty instances, make sure they use different DBs)")
ap2.add_argument("--chpw-len", metavar="N", type=int, default=8, help="minimum password length")
ap2.add_argument("--chpw-v", metavar="LVL", type=int, default=2, help="verbosity of summary on config load [\033[32m0\033[0m] = nothing at all, [\033[32m1\033[0m] = number of users, [\033[32m2\033[0m] = list users with default-pw, [\033[32m3\033[0m] = list all users")
def add_zeroconf(ap):
ap2 = ap.add_argument_group("Zeroconf options")
ap2.add_argument("-z", action="store_true", help="enable all zeroconf backends (mdns, ssdp)")
@@ -996,7 +1112,7 @@ def add_zc_mdns(ap):
ap2.add_argument("--zm-mnic", action="store_true", help="merge NICs which share subnets; assume that same subnet means same network")
ap2.add_argument("--zm-msub", action="store_true", help="merge subnets on each NIC -- always enabled for ipv6 -- reduces network load, but gnome-gvfs clients may stop working, and clients cannot be in subnets that the server is not")
ap2.add_argument("--zm-noneg", action="store_true", help="disable NSEC replies -- try this if some clients don't see copyparty")
ap2.add_argument("--zm-spam", metavar="SEC", type=float, default=0, help="send unsolicited announce every \033[33mSEC\033[0m; useful if clients have IPs in a subnet which doesn't overlap with the server, or to avoid some firewall issues")
ap2.add_argument("--zm-spam", metavar="SEC", type=float, default=0.0, help="send unsolicited announce every \033[33mSEC\033[0m; useful if clients have IPs in a subnet which doesn't overlap with the server, or to avoid some firewall issues")
def add_zc_ssdp(ap):
@@ -1011,14 +1127,15 @@ def add_zc_ssdp(ap):
def add_ftp(ap):
ap2 = ap.add_argument_group('FTP options (TCP only)')
ap2.add_argument("--ftp", metavar="PORT", type=int, help="enable FTP server on \033[33mPORT\033[0m, for example \033[32m3921")
ap2.add_argument("--ftps", metavar="PORT", type=int, help="enable FTPS server on \033[33mPORT\033[0m, for example \033[32m3990")
ap2.add_argument("--ftp", metavar="PORT", type=int, default=0, help="enable FTP server on \033[33mPORT\033[0m, for example \033[32m3921")
ap2.add_argument("--ftps", metavar="PORT", type=int, default=0, help="enable FTPS server on \033[33mPORT\033[0m, for example \033[32m3990")
ap2.add_argument("--ftpv", action="store_true", help="verbose")
ap2.add_argument("--ftp4", action="store_true", help="only listen on IPv4")
ap2.add_argument("--ftp-ipa", metavar="CIDR", type=u, default="", help="only accept connections from IP-addresses inside \033[33mCIDR\033[0m; specify [\033[32many\033[0m] to disable inheriting \033[33m--ipa\033[0m. Examples: [\033[32mlan\033[0m] or [\033[32m10.89.0.0/16, 192.168.33.0/24\033[0m]")
ap2.add_argument("--ftp-no-ow", action="store_true", help="if target file exists, reject upload instead of overwrite")
ap2.add_argument("--ftp-wt", metavar="SEC", type=int, default=7, help="grace period for resuming interrupted uploads (any client can write to any file last-modified more recently than \033[33mSEC\033[0m seconds ago)")
ap2.add_argument("--ftp-nat", metavar="ADDR", type=u, help="the NAT address to use for passive connections")
ap2.add_argument("--ftp-pr", metavar="P-P", type=u, help="the range of TCP ports to use for passive connections, for example \033[32m12000-13000")
ap2.add_argument("--ftp-nat", metavar="ADDR", type=u, default="", help="the NAT address to use for passive connections")
ap2.add_argument("--ftp-pr", metavar="P-P", type=u, default="", help="the range of TCP ports to use for passive connections, for example \033[32m12000-13000")
def add_webdav(ap):
@@ -1032,14 +1149,15 @@ def add_webdav(ap):
def add_tftp(ap):
ap2 = ap.add_argument_group('TFTP options (UDP only)')
ap2.add_argument("--tftp", metavar="PORT", type=int, help="enable TFTP server on \033[33mPORT\033[0m, for example \033[32m69 \033[0mor \033[32m3969")
ap2.add_argument("--tftp", metavar="PORT", type=int, default=0, help="enable TFTP server on \033[33mPORT\033[0m, for example \033[32m69 \033[0mor \033[32m3969")
ap2.add_argument("--tftp4", action="store_true", help="only listen on IPv4")
ap2.add_argument("--tftpv", action="store_true", help="verbose")
ap2.add_argument("--tftpvv", action="store_true", help="verboser")
ap2.add_argument("--tftp-no-fast", action="store_true", help="debug: disable optimizations")
ap2.add_argument("--tftp-lsf", metavar="PTN", type=u, default="\\.?(dir|ls)(\\.txt)?", help="return a directory listing if a file with this name is requested and it does not exist; default matches .ls, dir, .dir.txt, ls.txt, ...")
ap2.add_argument("--tftp-nols", action="store_true", help="if someone tries to download a directory, return an error instead of showing its directory listing")
ap2.add_argument("--tftp-ipa", metavar="CIDR", type=u, default="", help="only accept connections from IP-addresses inside \033[33mCIDR\033[0m; specify [\033[32many\033[0m] to disable inheriting \033[33m--ipa\033[0m. Examples: [\033[32mlan\033[0m] or [\033[32m10.89.0.0/16, 192.168.33.0/24\033[0m]")
ap2.add_argument("--tftp-pr", metavar="P-P", type=u, help="the range of UDP ports to use for data transfer, for example \033[32m12000-13000")
ap2.add_argument("--tftp-pr", metavar="P-P", type=u, default="", help="the range of UDP ports to use for data transfer, for example \033[32m12000-13000")
def add_smb(ap):
@@ -1074,6 +1192,7 @@ def add_hooks(ap):
ap2.add_argument("--xad", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m after a file delete")
ap2.add_argument("--xm", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m on message")
ap2.add_argument("--xban", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m if someone gets banned (pw/404/403/url)")
ap2.add_argument("--hook-v", action="store_true", help="verbose hooks")
def add_stats(ap):
@@ -1115,7 +1234,7 @@ def add_safety(ap):
ap2.add_argument("-s", action="count", default=0, help="increase safety: Disable thumbnails / potentially dangerous software (ffmpeg/pillow/vips), hide partial uploads, avoid crawlers.\n └─Alias of\033[32m --dotpart --no-thumb --no-mtag-ff --no-robots --force-js")
ap2.add_argument("-ss", action="store_true", help="further increase safety: Prevent js-injection, accidental move/delete, broken symlinks, webdav, 404 on 403, ban on excessive 404s.\n └─Alias of\033[32m -s --unpost=0 --no-del --no-mv --hardlink --vague-403 -nih")
ap2.add_argument("-sss", action="store_true", help="further increase safety: Enable logging to disk, scan for dangerous symlinks.\n └─Alias of\033[32m -ss --no-dav --no-logues --no-readme -lo=cpp-%%Y-%%m%%d-%%H%%M%%S.txt.xz --ls=**,*,ln,p,r")
ap2.add_argument("--ls", metavar="U[,V[,F]]", type=u, help="do a sanity/safety check of all volumes on startup; arguments \033[33mUSER\033[0m,\033[33mVOL\033[0m,\033[33mFLAGS\033[0m (see \033[33m--help-ls\033[0m); example [\033[32m**,*,ln,p,r\033[0m]")
ap2.add_argument("--ls", metavar="U[,V[,F]]", type=u, default="", help="do a sanity/safety check of all volumes on startup; arguments \033[33mUSER\033[0m,\033[33mVOL\033[0m,\033[33mFLAGS\033[0m (see \033[33m--help-ls\033[0m); example [\033[32m**,*,ln,p,r\033[0m]")
ap2.add_argument("--xvol", action="store_true", help="never follow symlinks leaving the volume root, unless the link is into another volume where the user has similar access (volflag=xvol)")
ap2.add_argument("--xdev", action="store_true", help="stay within the filesystem of the volume root; do not descend into other devices (symlink or bind-mount to another HDD, ...) (volflag=xdev)")
ap2.add_argument("--no-dot-mv", action="store_true", help="disallow moving dotfiles; makes it impossible to move folders containing dotfiles")
@@ -1125,7 +1244,7 @@ def add_safety(ap):
ap2.add_argument("--vague-403", action="store_true", help="send 404 instead of 403 (security through ambiguity, very enterprise)")
ap2.add_argument("--force-js", action="store_true", help="don't send folder listings as HTML, force clients to use the embedded json instead -- slight protection against misbehaving search engines which ignore \033[33m--no-robots\033[0m")
ap2.add_argument("--no-robots", action="store_true", help="adds http and html headers asking search engines to not index anything (volflag=norobots)")
ap2.add_argument("--logout", metavar="H", type=float, default="8086", help="logout clients after \033[33mH\033[0m hours of inactivity; [\033[32m0.0028\033[0m]=10sec, [\033[32m0.1\033[0m]=6min, [\033[32m24\033[0m]=day, [\033[32m168\033[0m]=week, [\033[32m720\033[0m]=month, [\033[32m8760\033[0m]=year)")
ap2.add_argument("--logout", metavar="H", type=float, default=8086.0, help="logout clients after \033[33mH\033[0m hours of inactivity; [\033[32m0.0028\033[0m]=10sec, [\033[32m0.1\033[0m]=6min, [\033[32m24\033[0m]=day, [\033[32m168\033[0m]=week, [\033[32m720\033[0m]=month, [\033[32m8760\033[0m]=year)")
ap2.add_argument("--ban-pw", metavar="N,W,B", type=u, default="9,60,1440", help="more than \033[33mN\033[0m wrong passwords in \033[33mW\033[0m minutes = ban for \033[33mB\033[0m minutes; disable with [\033[32mno\033[0m]")
ap2.add_argument("--ban-404", metavar="N,W,B", type=u, default="50,60,1440", help="hitting more than \033[33mN\033[0m 404's in \033[33mW\033[0m minutes = ban for \033[33mB\033[0m minutes; only affects users who cannot see directory listings because their access is either g/G/h")
ap2.add_argument("--ban-403", metavar="N,W,B", type=u, default="9,2,1440", help="hitting more than \033[33mN\033[0m 403's in \033[33mW\033[0m minutes = ban for \033[33mB\033[0m minutes; [\033[32m1440\033[0m]=day, [\033[32m10080\033[0m]=week, [\033[32m43200\033[0m]=month")
@@ -1161,7 +1280,7 @@ def add_shutdown(ap):
def add_logging(ap):
ap2 = ap.add_argument_group('logging options')
ap2.add_argument("-q", action="store_true", help="quiet; disable most STDOUT messages")
ap2.add_argument("-lo", metavar="PATH", type=u, help="logfile, example: \033[32mcpp-%%Y-%%m%%d-%%H%%M%%S.txt.xz\033[0m (NB: some errors may appear on STDOUT only)")
ap2.add_argument("-lo", metavar="PATH", type=u, default="", help="logfile, example: \033[32mcpp-%%Y-%%m%%d-%%H%%M%%S.txt.xz\033[0m (NB: some errors may appear on STDOUT only)")
ap2.add_argument("--no-ansi", action="store_true", default=not VT100, help="disable colors; same as environment-variable NO_COLOR")
ap2.add_argument("--ansi", action="store_true", help="force colors; overrides environment-variable NO_COLOR")
ap2.add_argument("--no-logflush", action="store_true", help="don't flush the logfile after each write; tiny bit faster")
@@ -1188,10 +1307,10 @@ def add_thumbnail(ap):
ap2.add_argument("--no-athumb", action="store_true", help="disable audio thumbnails (spectrograms) (volflag=dathumb)")
ap2.add_argument("--th-size", metavar="WxH", default="320x256", help="thumbnail res (volflag=thsize)")
ap2.add_argument("--th-mt", metavar="CORES", type=int, default=CORES, help="num cpu cores to use for generating thumbnails")
ap2.add_argument("--th-convt", metavar="SEC", type=float, default=60, help="conversion timeout in seconds (volflag=convt)")
ap2.add_argument("--th-ram-max", metavar="GB", type=float, default=6, help="max memory usage (GiB) permitted by thumbnailer; not very accurate")
ap2.add_argument("--th-crop", metavar="TXT", type=u, default="y", help="crop thumbnails to 4:3 or keep dynamic height; client can override in UI unless force. [\033[32mfy\033[0m]=crop, [\033[32mfn\033[0m]=nocrop, [\033[32mfy\033[0m]=force-y, [\033[32mfn\033[0m]=force-n (volflag=crop)")
ap2.add_argument("--th-x3", metavar="TXT", type=u, default="n", help="show thumbs at 3x resolution; client can override in UI unless force. [\033[32mfy\033[0m]=yes, [\033[32mfn\033[0m]=no, [\033[32mfy\033[0m]=force-yes, [\033[32mfn\033[0m]=force-no (volflag=th3x)")
ap2.add_argument("--th-convt", metavar="SEC", type=float, default=60.0, help="conversion timeout in seconds (volflag=convt)")
ap2.add_argument("--th-ram-max", metavar="GB", type=float, default=6.0, help="max memory usage (GiB) permitted by thumbnailer; not very accurate")
ap2.add_argument("--th-crop", metavar="TXT", type=u, default="y", help="crop thumbnails to 4:3 or keep dynamic height; client can override in UI unless force. [\033[32my\033[0m]=crop, [\033[32mn\033[0m]=nocrop, [\033[32mfy\033[0m]=force-y, [\033[32mfn\033[0m]=force-n (volflag=crop)")
ap2.add_argument("--th-x3", metavar="TXT", type=u, default="n", help="show thumbs at 3x resolution; client can override in UI unless force. [\033[32my\033[0m]=yes, [\033[32mn\033[0m]=no, [\033[32mfy\033[0m]=force-yes, [\033[32mfn\033[0m]=force-no (volflag=th3x)")
ap2.add_argument("--th-dec", metavar="LIBS", default="vips,pil,ff", help="image decoders, in order of preference")
ap2.add_argument("--th-no-jpg", action="store_true", help="disable jpg output")
ap2.add_argument("--th-no-webp", action="store_true", help="disable webp output")
@@ -1230,8 +1349,8 @@ def add_db_general(ap, hcores):
ap2.add_argument("-e2v", action="store_true", help="verify file integrity; rehash all files and compare with db")
ap2.add_argument("-e2vu", action="store_true", help="on hash mismatch: update the database with the new hash")
ap2.add_argument("-e2vp", action="store_true", help="on hash mismatch: panic and quit copyparty")
ap2.add_argument("--hist", metavar="PATH", type=u, help="where to store volume data (db, thumbs) (volflag=hist)")
ap2.add_argument("--no-hash", metavar="PTN", type=u, help="regex: disable hashing of matching absolute-filesystem-paths during e2ds folder scans (volflag=nohash)")
ap2.add_argument("--hist", metavar="PATH", type=u, default="", help="where to store volume data (db, thumbs); default is a folder named \".hist\" inside each volume (volflag=hist)")
ap2.add_argument("--no-hash", metavar="PTN", type=u, default="", help="regex: disable hashing of matching absolute-filesystem-paths during e2ds folder scans (volflag=nohash)")
ap2.add_argument("--no-idx", metavar="PTN", type=u, default=noidx, help="regex: disable indexing of matching absolute-filesystem-paths during e2ds folder scans (volflag=noidx)")
ap2.add_argument("--no-dhash", action="store_true", help="disable rescan acceleration; do full database integrity check -- makes the db ~5%% smaller and bootup/rescans 3~10x slower")
ap2.add_argument("--re-dhash", action="store_true", help="force a cache rebuild on startup; enable this once if it gets out of sync (should never be necessary)")
@@ -1240,7 +1359,7 @@ def add_db_general(ap, hcores):
ap2.add_argument("--xlink", action="store_true", help="on upload: check all volumes for dupes, not just the target volume (volflag=xlink)")
ap2.add_argument("--hash-mt", metavar="CORES", type=int, default=hcores, help="num cpu cores to use for file hashing; set 0 or 1 for single-core hashing")
ap2.add_argument("--re-maxage", metavar="SEC", type=int, default=0, help="rescan filesystem for changes every \033[33mSEC\033[0m seconds; 0=off (volflag=scan)")
ap2.add_argument("--db-act", metavar="SEC", type=float, default=10, help="defer any scheduled volume reindexing until \033[33mSEC\033[0m seconds after last db write (uploads, renames, ...)")
ap2.add_argument("--db-act", metavar="SEC", type=float, default=10.0, help="defer any scheduled volume reindexing until \033[33mSEC\033[0m seconds after last db write (uploads, renames, ...)")
ap2.add_argument("--srch-time", metavar="SEC", type=int, default=45, help="search deadline -- terminate searches running for more than \033[33mSEC\033[0m seconds")
ap2.add_argument("--srch-hits", metavar="N", type=int, default=7999, help="max search results to allow clients to fetch; 125 results will be shown initially")
ap2.add_argument("--dotsrch", action="store_true", help="show dotfiles in search results (volflags: dotsrch | nodotsrch)")
@@ -1293,7 +1412,8 @@ def add_og(ap):
def add_ui(ap, retry):
ap2 = ap.add_argument_group('ui options')
ap2.add_argument("--grid", action="store_true", help="show grid/thumbnails by default (volflag=grid)")
ap2.add_argument("--lang", metavar="LANG", type=u, default="eng", help="language; one of the following: \033[32meng nor\033[0m")
ap2.add_argument("--gsel", action="store_true", help="select files in grid by ctrl-click (volflag=gsel)")
ap2.add_argument("--lang", metavar="LANG", type=u, default="eng", help="language; one of the following: \033[32meng nor chi\033[0m")
ap2.add_argument("--theme", metavar="NUM", type=int, default=0, help="default theme to use (0..7)")
ap2.add_argument("--themes", metavar="NUM", type=int, default=8, help="number of themes installed")
ap2.add_argument("--au-vol", metavar="0-100", type=int, default=50, choices=range(0, 101), help="default audio/video volume percent")
@@ -1301,9 +1421,10 @@ def add_ui(ap, retry):
ap2.add_argument("--unlist", metavar="REGEX", type=u, default="", help="don't show files matching \033[33mREGEX\033[0m in file list. Purely cosmetic! Does not affect API calls, just the browser. Example: [\033[32m\\.(js|css)$\033[0m] (volflag=unlist)")
ap2.add_argument("--favico", metavar="TXT", type=u, default="c 000 none" if retry else "🎉 000 none", help="\033[33mfavicon-text\033[0m [ \033[33mforeground\033[0m [ \033[33mbackground\033[0m ] ], set blank to disable")
ap2.add_argument("--mpmc", metavar="URL", type=u, default="", help="change the mediaplayer-toggle mouse cursor; URL to a folder with {2..5}.png inside (or disable with [\033[32m.\033[0m])")
ap2.add_argument("--js-browser", metavar="L", type=u, help="URL to additional JS to include")
ap2.add_argument("--css-browser", metavar="L", type=u, help="URL to additional CSS to include")
ap2.add_argument("--html-head", metavar="TXT", type=u, default="", help="text to append to the <head> of all HTML pages; can be @PATH to send the contents of a file at PATH, and/or begin with %% to render as jinja2 template (volflag=html_head)")
ap2.add_argument("--css-browser", metavar="L", type=u, default="", help="URL to additional CSS to include in the filebrowser html")
ap2.add_argument("--js-browser", metavar="L", type=u, default="", help="URL to additional JS to include in the filebrowser html")
ap2.add_argument("--js-other", metavar="L", type=u, default="", help="URL to additional JS to include in all other pages")
ap2.add_argument("--html-head", metavar="TXT", type=u, default="", help="text to append to the <head> of all HTML pages (except for basic-browser); can be @PATH to send the contents of a file at PATH, and/or begin with %% to render as jinja2 template (volflag=html_head)")
ap2.add_argument("--ih", action="store_true", help="if a folder contains index.html, show that instead of the directory listing by default (can be changed in the client settings UI, or add ?v to URL for override)")
ap2.add_argument("--textfiles", metavar="CSV", type=u, default="txt,nfo,diz,cue,readme", help="file extensions to present as plaintext")
ap2.add_argument("--txt-max", metavar="KiB", type=int, default=64, help="max size of embedded textfiles on ?doc= (anything bigger will be lazy-loaded by JS)")
@@ -1322,14 +1443,18 @@ def add_debug(ap):
ap2 = ap.add_argument_group('debug options')
ap2.add_argument("--vc", action="store_true", help="verbose config file parser (explain config)")
ap2.add_argument("--cgen", action="store_true", help="generate config file from current config (best-effort; probably buggy)")
ap2.add_argument("--deps", action="store_true", help="list information about detected optional dependencies")
if hasattr(select, "poll"):
ap2.add_argument("--no-poll", action="store_true", help="kernel-bug workaround: disable poll; use select instead (limits max num clients to ~700)")
ap2.add_argument("--no-sendfile", action="store_true", help="kernel-bug workaround: disable sendfile; do a safe and slow read-send-loop instead")
ap2.add_argument("--no-scandir", action="store_true", help="kernel-bug workaround: disable scandir; do a listdir + stat on each file instead")
ap2.add_argument("--no-fastboot", action="store_true", help="wait for initial filesystem indexing before accepting client requests")
ap2.add_argument("--no-htp", action="store_true", help="disable httpserver threadpool, create threads as-needed instead")
ap2.add_argument("--rm-sck", action="store_true", help="when listening on unix-sockets, do a basic delete+bind instead of the default atomic bind")
ap2.add_argument("--srch-dbg", action="store_true", help="explain search processing, and do some extra expensive sanity checks")
ap2.add_argument("--rclone-mdns", action="store_true", help="use mdns-domain instead of server-ip on /?hc")
ap2.add_argument("--stackmon", metavar="P,S", type=u, help="write stacktrace to \033[33mP\033[0math every \033[33mS\033[0m second, for example --stackmon=\033[32m./st/%%Y-%%m/%%d/%%H%%M.xz,60")
ap2.add_argument("--log-thrs", metavar="SEC", type=float, help="list active threads every \033[33mSEC\033[0m")
ap2.add_argument("--stackmon", metavar="P,S", type=u, default="", help="write stacktrace to \033[33mP\033[0math every \033[33mS\033[0m second, for example --stackmon=\033[32m./st/%%Y-%%m/%%d/%%H%%M.xz,60")
ap2.add_argument("--log-thrs", metavar="SEC", type=float, default=0.0, help="list active threads every \033[33mSEC\033[0m")
ap2.add_argument("--log-fk", metavar="REGEX", type=u, default="", help="log filekey params for files where path matches \033[33mREGEX\033[0m; [\033[32m.\033[0m] (a single dot) = all files")
ap2.add_argument("--bak-flips", action="store_true", help="[up2k] if a client uploads a bitflipped/corrupted chunk, store a copy according to \033[33m--bf-nc\033[0m and \033[33m--bf-dir\033[0m")
ap2.add_argument("--bf-nc", metavar="NUM", type=int, default=200, help="bak-flips: stop if there's more than \033[33mNUM\033[0m files at \033[33m--kf-dir\033[0m already; default: 6.3 GiB max (200*32M)")
@@ -1368,11 +1493,13 @@ def run_argparse(
add_tls(ap, cert_path)
add_cert(ap, cert_path)
add_auth(ap)
add_chpw(ap)
add_qr(ap, tty)
add_zeroconf(ap)
add_zc_mdns(ap)
add_zc_ssdp(ap)
add_fs(ap)
add_share(ap)
add_upload(ap)
add_db_general(ap, hcores)
add_db_metadata(ap)
@@ -1532,7 +1659,7 @@ def main(argv: Optional[list[str]] = None, rsrc: Optional[str] = None) -> None:
if hard > 0: # -1 == infinite
nc = min(nc, int(hard / 4))
except:
nc = 512
nc = 486 # mdns/ssdp restart headroom; select() maxfd is 512 on windows
retry = False
for fmtr in [RiceFormatter, RiceFormatter, Dodge11874, BasicDodge11874]:
@@ -1577,6 +1704,9 @@ def main(argv: Optional[list[str]] = None, rsrc: Optional[str] = None) -> None:
if getattr(al, k1):
setattr(al, k2, False)
if not HAVE_IPV6 and al.i == "::":
al.i = "0.0.0.0"
al.i = al.i.split(",")
try:
if "-" in al.p:
@@ -1625,6 +1755,9 @@ def main(argv: Optional[list[str]] = None, rsrc: Optional[str] = None) -> None:
if not hasattr(os, "sendfile"):
al.no_sendfile = True
if not hasattr(select, "poll"):
al.no_poll = True
# signal.signal(signal.SIGINT, sighandler)
SvcHub(al, dal, argv, "".join(printed)).run()


@@ -1,8 +1,8 @@
# coding: utf-8
VERSION = (1, 13, 2)
CODENAME = "race the beam"
BUILD_DT = (2024, 5, 10)
VERSION = (1, 14, 4)
CODENAME = "one step forward"
BUILD_DT = (2024, 9, 2)
S_VERSION = ".".join(map(str, VERSION))
S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)


@@ -4,6 +4,7 @@ from __future__ import print_function, unicode_literals
import argparse
import base64
import hashlib
import json
import os
import re
import stat
@@ -12,11 +13,13 @@ import threading
import time
from datetime import datetime
from .__init__ import ANYWIN, TYPE_CHECKING, WINDOWS, E
from .__init__ import ANYWIN, PY2, TYPE_CHECKING, WINDOWS, E
from .bos import bos
from .cfg import flagdescs, permdescs, vf_bmap, vf_cmap, vf_vmap
from .pwhash import PWHash
from .util import (
DEF_MTE,
DEF_MTH,
EXTS,
IMPLICATIONS,
MIMES,
@@ -32,9 +35,11 @@ from .util import (
odfusion,
relchk,
statdir,
ub64enc,
uncyg,
undot,
unhumanize,
vjoin,
vsplit,
)
@@ -54,6 +59,9 @@ if TYPE_CHECKING:
# Vflags: TypeAlias = dict[str, Any]
# Mflags: TypeAlias = dict[str, Vflags]
if PY2:
range = xrange # type: ignore
LEELOO_DALLAS = "leeloo_dallas"
@@ -336,6 +344,8 @@ class VFS(object):
self.histtab: dict[str, str] = {} # all realpath->histpath
self.dbv: Optional[VFS] = None # closest full/non-jump parent
self.lim: Optional[Lim] = None # upload limits; only set for dbv
self.shr_src: Optional[tuple[VFS, str]] = None # source vfs+rem of a share
self.shr_files: set[str] = set() # filenames to include from shr_src
self.aread: dict[str, list[str]] = {}
self.awrite: dict[str, list[str]] = {}
self.amove: dict[str, list[str]] = {}
@@ -360,6 +370,9 @@ class VFS(object):
self.all_aps = []
self.all_vps = []
self.get_dbv = self._get_dbv
self.ls = self._ls
def __repr__(self) -> str:
return "VFS(%s)" % (
", ".join(
@@ -439,7 +452,7 @@ class VFS(object):
def _find(self, vpath: str) -> tuple["VFS", str]:
"""return [vfs,remainder]"""
if vpath == "":
if not vpath:
return self, ""
if "/" in vpath:
@@ -449,7 +462,7 @@ class VFS(object):
rem = ""
if name in self.nodes:
return self.nodes[name]._find(undot(rem))
return self.nodes[name]._find(rem)
return self, vpath
@@ -475,6 +488,13 @@ class VFS(object):
)
# skip uhtml because it's rarely needed
def get_perms(self, vpath: str, uname: str) -> str:
zbl = self.can_access(vpath, uname)
ret = "".join(ch for ch, ok in zip("rwmdgGa.", zbl) if ok)
if "rwmd" in ret and "a." in ret:
ret += "A"
return ret
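A quick illustration of the permission string this helper builds (a standalone sketch with a hardcoded tuple; it assumes can_access returns its booleans in r/w/m/d/g/G/a/dot order, as the zip implies):

# illustrative only, not part of the diff
zbl = (True, True, False, False, False, False, True, False)
perms = "".join(ch for ch, ok in zip("rwmdgGa.", zbl) if ok)
print(perms)  # -> "rwa"; a user with every bit set would get "rwmdgGa." plus the trailing "A" shorthand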
def get(
self,
vpath: str,
@@ -509,12 +529,20 @@ class VFS(object):
t = "{} has no {} in [{}] => [{}] => [{}]"
self.log("vfs", t.format(uname, msg, vpath, cvpath, ap), 6)
t = 'you don\'t have %s-access in "/%s"'
raise Pebkac(err, t % (msg, cvpath))
t = 'you don\'t have %s-access in "/%s" or below "/%s"'
raise Pebkac(err, t % (msg, cvpath, vn.vpath))
return vn, rem
def get_dbv(self, vrem: str) -> tuple["VFS", str]:
def _get_share_src(self, vrem: str) -> tuple["VFS", str]:
src = self.shr_src
if not src:
return self._get_dbv(vrem)
shv, srem = src
return shv, vjoin(srem, vrem)
def _get_dbv(self, vrem: str) -> tuple["VFS", str]:
dbv = self.dbv
if not dbv:
return self, vrem
@@ -540,7 +568,26 @@ class VFS(object):
ad, fn = os.path.split(ap)
return os.path.join(absreal(ad), fn)
def ls(
def _ls_nope(
self, *a, **ka
) -> tuple[str, list[tuple[str, os.stat_result]], dict[str, "VFS"]]:
raise Pebkac(500, "nope.avi")
def _ls_shr(
self,
rem: str,
uname: str,
scandir: bool,
permsets: list[list[bool]],
lstat: bool = False,
) -> tuple[str, list[tuple[str, os.stat_result]], dict[str, "VFS"]]:
"""replaces _ls for certain shares (single-file, or file selection)"""
vn, rem = self.shr_src # type: ignore
abspath, real, _ = vn.ls(rem, "\n", scandir, permsets, lstat)
real = [x for x in real if os.path.basename(x[0]) in self.shr_files]
return abspath, real, {}
def _ls(
self,
rem: str,
uname: str,
@@ -795,6 +842,7 @@ class AuthSrv(object):
self.vfs = VFS(log_func, "", "", AXS(), {})
self.acct: dict[str, str] = {}
self.iacct: dict[str, str] = {}
self.defpw: dict[str, str] = {}
self.grps: dict[str, list[str]] = {}
self.re_pwd: Optional[re.Pattern] = None
@@ -1340,7 +1388,7 @@ class AuthSrv(object):
flags[name] = vals
self._e("volflag [{}] += {} ({})".format(name, vals, desc))
def reload(self) -> None:
def reload(self, verbosity: int = 9) -> None:
"""
construct a flat list of mountpoints and usernames
first from the commandline arguments
@@ -1348,9 +1396,9 @@ class AuthSrv(object):
before finally building the VFS
"""
with self.mutex:
self._reload()
self._reload(verbosity)
def _reload(self) -> None:
def _reload(self, verbosity: int = 9) -> None:
acct: dict[str, str] = {} # username:password
grps: dict[str, list[str]] = {} # groupname:usernames
daxs: dict[str, AXS] = {}
@@ -1428,6 +1476,8 @@ class AuthSrv(object):
raise
self.setup_pwhash(acct)
defpw = acct.copy()
self.setup_chpw(acct)
# case-insensitive; normalize
if WINDOWS:
@@ -1443,9 +1493,8 @@ class AuthSrv(object):
vfs = VFS(self.log_func, absreal("."), "", axs, {})
elif "" not in mount:
# there's volumes but no root; make root inaccessible
vfs = VFS(self.log_func, "", "", AXS(), {})
vfs.flags["tcolor"] = self.args.tcolor
vfs.flags["d2d"] = True
zsd = {"d2d": True, "tcolor": self.args.tcolor}
vfs = VFS(self.log_func, "", "", AXS(), zsd)
maxdepth = 0
for dst in sorted(mount.keys(), key=lambda x: (x.count("/"), len(x))):
@@ -1474,6 +1523,56 @@ class AuthSrv(object):
vol.all_vps.sort(key=lambda x: len(x[0]), reverse=True)
vol.root = vfs
enshare = self.args.shr
shr = enshare[1:-1]
shrs = enshare[1:]
if enshare:
import sqlite3
shv = VFS(self.log_func, "", shr, AXS(), {"d2d": True})
db_path = self.args.shr_db
db = sqlite3.connect(db_path)
cur = db.cursor()
cur2 = db.cursor()
now = time.time()
for row in cur.execute("select * from sh"):
s_k, s_pw, s_vp, s_pr, s_nf, s_un, s_t0, s_t1 = row
if s_t1 and s_t1 < now:
continue
if self.args.shr_v:
t = "loading %s share [%s] by [%s] => [%s]"
self.log(t % (s_pr, s_k, s_un, s_vp))
if s_pw:
# gotta reuse the "account" for all shares with this pw,
# so do a light scramble as this appears in the web-ui
zs = ub64enc(hashlib.sha512(s_pw.encode("utf-8")).digest())[4:16]
sun = "s_%s" % (zs.decode("utf-8"),)
acct[sun] = s_pw
else:
sun = "*"
s_axs = AXS(
[sun] if "r" in s_pr else [],
[sun] if "w" in s_pr else [],
[sun] if "m" in s_pr else [],
[sun] if "d" in s_pr else [],
)
# don't know the abspath yet + wanna ensure the user
# still has the privs they granted, so nullmap it
shv.nodes[s_k] = VFS(
self.log_func, "", "%s/%s" % (shr, s_k), s_axs, shv.flags.copy()
)
vfs.nodes[shr] = vfs.all_vols[shr] = shv
for vol in shv.nodes.values():
vfs.all_vols[vol.vpath] = vol
vol.get_dbv = vol._get_share_src
vol.ls = vol._ls_nope
zss = set(acct)
zss.update(self.idp_accs)
zss.discard("*")
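The scrambled account name used above for password-protected shares is deterministic, so every share protected by the same password collapses onto one synthetic account; a standalone approximation of the derivation (assuming ub64enc is essentially urlsafe base64 of the digest):

# sketch only; ub64enc is approximated with the stdlib
import base64, hashlib

pw = "hunter2"  # hypothetical share password
zs = base64.urlsafe_b64encode(hashlib.sha512(pw.encode("utf-8")).digest())[4:16]
print("s_%s" % (zs.decode("utf-8"),))  # same password => same s_... account name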
@@ -1492,7 +1591,7 @@ class AuthSrv(object):
for usr in unames:
for vp, vol in vfs.all_vols.items():
zx = getattr(vol.axs, axs_key)
if usr in zx:
if usr in zx and (not enshare or not vp.startswith(shrs)):
umap[usr].append(vp)
umap[usr].sort()
setattr(vfs, "a" + perm, umap)
@@ -1542,6 +1641,8 @@ class AuthSrv(object):
for usr in acct:
if usr not in associated_users:
if enshare and usr.startswith("s_"):
continue
if len(vfs.all_vols) > 1:
# user probably familiar enough that the verbose message is not necessary
t = "account [%s] is not mentioned in any volume definitions; see --help-accounts"
@@ -1617,11 +1718,14 @@ class AuthSrv(object):
use = True
lim.nosub = True
zs = vol.flags.get("df") or (
"{}g".format(self.args.df) if self.args.df else ""
)
if zs:
zs = vol.flags.get("df") or self.args.df or ""
if zs not in ("", "0"):
use = True
try:
_ = float(zs)
zs = "%sg" % (zs)
except:
pass
lim.dfl = unhumanize(zs)
zs = vol.flags.get("sz")
@@ -1883,7 +1987,7 @@ class AuthSrv(object):
self.log(t.format(vol.vpath), 1)
del vol.flags["lifetime"]
needs_e2d = [x for x in hooks if x != "xm"]
needs_e2d = [x for x in hooks if x in ("xau", "xiu")]
drop = [x for x in needs_e2d if vol.flags.get(x)]
if drop:
t = 'removing [{}] from volume "/{}" because e2d is disabled'
@@ -1974,9 +2078,12 @@ class AuthSrv(object):
have_e2t = False
t = "volumes and permissions:\n"
for zv in vfs.all_vols.values():
if not self.warn_anonwrite:
if not self.warn_anonwrite or verbosity < 5:
break
if enshare and (zv.vpath == shr or zv.vpath.startswith(shrs)):
continue
t += '\n\033[36m"/{}" \033[33m{}\033[0m'.format(zv.vpath, zv.realpath)
for txt, attr in [
[" read", "uread"],
@@ -2003,7 +2110,7 @@ class AuthSrv(object):
t += "\n"
if self.warn_anonwrite:
if self.warn_anonwrite and verbosity > 4:
if not self.args.no_voldump:
self.log(t)
@@ -2027,7 +2134,7 @@ class AuthSrv(object):
try:
zv, _ = vfs.get("", "*", False, True, err=999)
if self.warn_anonwrite and os.getcwd() == zv.realpath:
if self.warn_anonwrite and verbosity > 4 and os.getcwd() == zv.realpath:
t = "anyone can write to the current directory: {}\n"
self.log(t.format(zv.realpath), c=1)
@@ -2054,6 +2161,7 @@ class AuthSrv(object):
self.vfs = vfs
self.acct = acct
self.defpw = defpw
self.grps = grps
self.iacct = {v: k for k, v in acct.items()}
@@ -2074,6 +2182,169 @@ class AuthSrv(object):
MIMES[ext] = mime
EXTS.update({v: k for k, v in MIMES.items()})
if enshare:
# hide shares from controlpanel
vfs.all_vols = {
x: y
for x, y in vfs.all_vols.items()
if x != shr and not x.startswith(shrs)
}
assert db and cur and cur2 and shv # type: ignore
for row in cur.execute("select * from sh"):
s_k, s_pw, s_vp, s_pr, s_nf, s_un, s_t0, s_t1 = row
shn = shv.nodes.get(s_k, None)
if not shn:
continue
try:
s_vfs, s_rem = vfs.get(
s_vp, s_un, "r" in s_pr, "w" in s_pr, "m" in s_pr, "d" in s_pr
)
except Exception as ex:
t = "removing share [%s] by [%s] to [%s] due to %r"
self.log(t % (s_k, s_un, s_vp, ex), 3)
shv.nodes.pop(s_k)
continue
fns = []
if s_nf:
q = "select vp from sf where k = ?"
for (s_fn,) in cur2.execute(q, (s_k,)):
fns.append(s_fn)
shn.shr_files = set(fns)
shn.ls = shn._ls_shr
else:
shn.ls = shn._ls
shn.shr_src = (s_vfs, s_rem)
shn.realpath = s_vfs.canonical(s_rem)
if self.args.shr_v:
t = "mapped %s share [%s] by [%s] => [%s] => [%s]"
self.log(t % (s_pr, s_k, s_un, s_vp, shn.realpath))
# transplant shadowing into shares
for vn in shv.nodes.values():
svn, srem = vn.shr_src # type: ignore
if srem:
continue # free branch, safe
ap = svn.canonical(srem)
if bos.path.isfile(ap):
continue # also fine
for zs in svn.nodes.keys():
# hide subvolume
vn.nodes[zs] = VFS(self.log_func, "", "", AXS(), {})
cur2.close()
cur.close()
db.close()
def chpw(self, broker: Optional["BrokerCli"], uname, pw) -> tuple[bool, str]:
if not self.args.chpw:
return False, "feature disabled in server config"
if uname == "*" or uname not in self.defpw:
return False, "not logged in"
if uname in self.args.chpw_no:
return False, "not allowed for this account"
if len(pw) < self.args.chpw_len:
t = "minimum password length: %d characters"
return False, t % (self.args.chpw_len,)
hpw = self.ah.hash(pw) if self.ah.on else pw
if hpw == self.acct[uname]:
return False, "that's already your password my dude"
if hpw in self.iacct:
return False, "password is taken"
with self.mutex:
ap = self.args.chpw_db
if not bos.path.exists(ap):
pwdb = {}
else:
with open(ap, "r", encoding="utf-8") as f:
pwdb = json.load(f)
pwdb = [x for x in pwdb if x[0] != uname]
pwdb.append((uname, self.defpw[uname], hpw))
with open(ap, "w", encoding="utf-8") as f:
json.dump(pwdb, f, separators=(",\n", ": "))
self.log("reinitializing due to password-change for user [%s]" % (uname,))
if not broker:
# only true for tests
self._reload()
return True, "new password OK"
broker.ask("_reload_blocking", False, False).get()
return True, "new password OK"
def setup_chpw(self, acct: dict[str, str]) -> None:
ap = self.args.chpw_db
if not self.args.chpw or not bos.path.exists(ap):
return
with open(ap, "r", encoding="utf-8") as f:
pwdb = json.load(f)
useen = set()
urst = set()
uok = set()
for usr, orig, mod in pwdb:
useen.add(usr)
if usr not in acct:
# previous user, no longer known
continue
if acct[usr] != orig:
urst.add(usr)
continue
uok.add(usr)
acct[usr] = mod
if not self.args.chpw_v:
return
for usr in acct:
if usr not in useen:
urst.add(usr)
for zs in uok:
urst.discard(zs)
if self.args.chpw_v == 1 or (self.args.chpw_v == 2 and not urst):
t = "chpw: %d changed, %d unchanged"
self.log(t % (len(uok), len(urst)))
return
elif self.args.chpw_v == 2:
t = "chpw: %d changed" % (len(uok))
if urst:
t += ", \033[0munchanged:\033[35m %s" % (", ".join(list(urst)))
self.log(t, 6)
return
msg = ""
if uok:
t = "\033[0mchanged: \033[32m%s"
msg += t % (", ".join(list(uok)),)
if urst:
t = "%s\033[0munchanged: \033[35m%s"
msg += t % (
", " if msg else "",
", ".join(list(urst)),
)
self.log("chpw: " + msg, 6)
def setup_pwhash(self, acct: dict[str, str]) -> None:
self.ah = PWHash(self.args)
if not self.ah.on:
@@ -2264,10 +2535,11 @@ class AuthSrv(object):
"",
]
csv = set("i p".split())
csv = set("i p th_covers zm_on zm_off zs_on zs_off".split())
zs = "c ihead mtm mtp on403 on404 xad xar xau xiu xban xbd xbr xbu xm"
lst = set(zs.split())
askip = set("a v c vc cgen theme".split())
askip = set("a v c vc cgen exp_lg exp_md theme".split())
fskip = set("exp_lg exp_md mv_re_r mv_re_t rm_re_r rm_re_t".split())
# keymap from argv to vflag
amap = vf_bmap()
@@ -2288,11 +2560,35 @@ class AuthSrv(object):
for k, v in args.items():
if k in askip:
continue
try:
v = v.pattern
if k in ("idp_gsep", "tftp_lsf"):
v = v[1:-1] # close enough
except:
pass
skip = False
for k2, defstr in (("mte", DEF_MTE), ("mth", DEF_MTH)):
if k != k2:
continue
s1 = list(sorted(list(v)))
s2 = list(sorted(defstr.split(",")))
if s1 == s2:
skip = True
break
v = ",".join(s1)
if skip:
continue
if k in csv:
v = ", ".join([str(za) for za in v])
try:
v2 = getattr(self.dargs, k)
if v == v2:
if k == "tcolor" and len(v2) == 3:
v2 = "".join([x * 2 for x in v2])
if v == v2 or v.replace(", ", ",") == v2:
continue
except:
continue
@@ -2351,6 +2647,7 @@ class AuthSrv(object):
pstr += pchar
if "g" in pstr and "G" in pstr:
pstr = pstr.replace("g", "")
pstr = pstr.replace("rwmd.a", "A")
try:
vperms[pstr].append(uname)
except:
@@ -2360,24 +2657,48 @@ class AuthSrv(object):
trues = []
vals = []
for k, v in sorted(vol.flags.items()):
if k in fskip:
continue
try:
v = v.pattern
except:
pass
try:
ak = vmap[k]
if getattr(self.args, ak) is v:
v2 = getattr(self.args, ak)
try:
v2 = v2.pattern
except:
pass
if v2 is v:
continue
except:
pass
skip = False
for k2, defstr in (("mte", DEF_MTE), ("mth", DEF_MTH)):
if k != k2:
continue
s1 = list(sorted(list(v)))
s2 = list(sorted(defstr.split(",")))
if s1 == s2:
skip = True
break
v = ",".join(s1)
if skip:
continue
if k in lst:
for ve in v:
vals.append("{}: {}".format(k, ve))
elif v is True:
trues.append(k)
elif v is not False:
try:
v = v.pattern
except:
pass
vals.append("{}: {}".format(k, v))
pops = []
for k1, k2 in IMPLICATIONS:


@@ -28,7 +28,7 @@ class ExceptionalQueue(Queue, object):
if rv[1] == "pebkac":
raise Pebkac(*rv[2:])
else:
raise Exception(rv[2])
raise rv[2]
return rv
@@ -65,8 +65,8 @@ def try_exec(want_retval: Union[bool, int], func: Any, *args: list[Any]) -> Any:
return ["exception", "pebkac", ex.code, str(ex)]
except:
except Exception as ex:
if not want_retval:
raise
return ["exception", "stack", traceback.format_exc()]
return ["exception", "stack", ex]


@@ -9,10 +9,10 @@ import time
from .__init__ import ANYWIN
from .util import Netdev, runcmd, wrename, wunlink
HAVE_CFSSL = True
HAVE_CFSSL = not os.environ.get("PRTY_NO_CFSSL")
if True: # pylint: disable=using-constant-test
from .util import RootLogger
from .util import NamedLogger, RootLogger
if ANYWIN:
@@ -83,6 +83,8 @@ def _read_crt(args, fn):
def _gen_ca(log: "RootLogger", args):
nlog: "NamedLogger" = lambda msg, c=0: log("cert-gen-ca", msg, c)
expiry = _read_crt(args, "ca.pem")[0]
if time.time() + args.crt_cdays * 60 * 60 * 24 * 0.1 < expiry:
return
@@ -113,16 +115,18 @@ def _gen_ca(log: "RootLogger", args):
bname = os.path.join(args.crt_dir, "ca")
try:
wunlink(log, bname + ".key", VF)
wunlink(nlog, bname + ".key", VF)
except:
pass
wrename(log, bname + "-key.pem", bname + ".key", VF)
wunlink(log, bname + ".csr", VF)
wrename(nlog, bname + "-key.pem", bname + ".key", VF)
wunlink(nlog, bname + ".csr", VF)
log("cert", "new ca OK", 2)
def _gen_srv(log: "RootLogger", args, netdevs: dict[str, Netdev]):
nlog: "NamedLogger" = lambda msg, c=0: log("cert-gen-srv", msg, c)
names = args.crt_ns.split(",") if args.crt_ns else []
if not args.crt_exact:
for n in names[:]:
@@ -196,11 +200,11 @@ def _gen_srv(log: "RootLogger", args, netdevs: dict[str, Netdev]):
bname = os.path.join(args.crt_dir, "srv")
try:
wunlink(log, bname + ".key", VF)
wunlink(nlog, bname + ".key", VF)
except:
pass
wrename(log, bname + "-key.pem", bname + ".key", VF)
wunlink(log, bname + ".csr", VF)
wrename(nlog, bname + "-key.pem", bname + ".key", VF)
wunlink(nlog, bname + ".csr", VF)
with open(os.path.join(args.crt_dir, "ca.pem"), "rb") as f:
ca = f.read()


@@ -35,6 +35,7 @@ def vf_bmap() -> dict[str, str]:
"e2vp",
"exp",
"grid",
"gsel",
"hardlink",
"magic",
"no_sb_md",
@@ -144,6 +145,7 @@ flagcats = {
"maxb=1g,300": "max 1 GiB over 5min (suffixes: b, k, m, g, t)",
"vmaxb=1g": "total volume size max 1 GiB (suffixes: b, k, m, g, t)",
"vmaxn=4k": "max 4096 files in volume (suffixes: b, k, m, g, t)",
"medialinks": "return medialinks for non-up2k uploads (not hotlinks)",
"rand": "force randomized filenames, 9 chars long by default",
"nrand=N": "randomized filenames are N chars long",
"u2ts=fc": "[f]orce [c]lient-last-modified or [u]pload-time",
@@ -213,6 +215,7 @@ flagcats = {
},
"client and ux": {
"grid": "show grid/thumbnails by default",
"gsel": "select files in grid by ctrl-click",
"sort": "default sort order",
"unlist": "dont list files matching REGEX",
"html_head=TXT": "includes TXT in the <head>, or @PATH for file at PATH",


@@ -9,12 +9,12 @@ import time
from .__init__ import ANYWIN, MACOS
from .authsrv import AXS, VFS
from .bos import bos
from .util import chkcmd, min_ex
from .util import chkcmd, min_ex, undot
if True: # pylint: disable=using-constant-test
from typing import Optional, Union
from .util import RootLogger
from .util import RootLogger, undot
class Fstab(object):
@@ -52,7 +52,7 @@ class Fstab(object):
self.log(msg.format(path, fs, min_ex()), 3)
return fs
path = path.lstrip("/")
path = undot(path)
try:
return self.cache[path]
except:
@@ -124,7 +124,7 @@ class Fstab(object):
if ANYWIN:
path = self._winpath(path)
path = path.lstrip("/")
path = undot(path)
ptn = re.compile(r"^[^\\/]*")
vn, rem = self.tab._find(path)
if not self.trusted:


@@ -19,6 +19,7 @@ from .__init__ import PY2, TYPE_CHECKING
from .authsrv import VFS
from .bos import bos
from .util import (
VF_CAREFUL,
Daemon,
ODict,
Pebkac,
@@ -30,6 +31,7 @@ from .util import (
runhook,
sanitize_fn,
vjoin,
wunlink,
)
if TYPE_CHECKING:
@@ -37,7 +39,10 @@ if TYPE_CHECKING:
if True: # pylint: disable=using-constant-test
import typing
from typing import Any, Optional
from typing import Any, Optional, Union
if PY2:
range = xrange # type: ignore
class FSE(FilesystemError):
@@ -139,6 +144,9 @@ class FtpFs(AbstractedFS):
self.listdirinfo = self.listdir
self.chdir(".")
def log(self, msg: str, c: Union[int, str] = 0) -> None:
self.hub.log("ftpd", msg, c)
def v2a(
self,
vpath: str,
@@ -207,17 +215,37 @@ class FtpFs(AbstractedFS):
w = "w" in mode or "a" in mode or "+" in mode
ap = self.rv2a(filename, r, w)[0]
self.validpath(ap)
if w:
try:
st = bos.stat(ap)
td = time.time() - st.st_mtime
need_unlink = True
except:
need_unlink = False
td = 0
if td < -1 or td > self.args.ftp_wt:
raise FSE("Cannot open existing file for writing")
if w and need_unlink:
if td >= -1 and td <= self.args.ftp_wt:
# within permitted timeframe; unlink and accept
do_it = True
elif self.args.no_del or self.args.ftp_no_ow:
# file too old, or overwrite not allowed; reject
do_it = False
else:
# allow overwrite if user has delete permission
# (avoids win2000 freaking out and deleting the server copy without uploading its own)
try:
self.rv2a(filename, False, True, False, True)
do_it = True
except:
do_it = False
if not do_it:
raise FSE("File already exists")
wunlink(self.log, ap, VF_CAREFUL)
self.validpath(ap)
return open(fsenc(ap), mode, self.args.iobuf)
def chdir(self, path: str) -> None:
@@ -282,9 +310,20 @@ class FtpFs(AbstractedFS):
# display write-only folders as empty
return []
# return list of volumes
r = {x.split("/")[0]: 1 for x in self.hub.asrv.vfs.all_vols.keys()}
return list(sorted(list(r.keys())))
# return list of accessible volumes
ret = []
for vn in self.hub.asrv.vfs.all_vols.values():
if "/" in vn.vpath or not vn.vpath:
continue # only include toplevel-mounted vols
try:
self.hub.asrv.vfs.get(vn.vpath, self.uname, True, False)
ret.append(vn.vpath)
except:
pass
ret.sort()
return ret
def rmdir(self, path: str) -> None:
ap = self.rv2a(path, d=True)[0]
@@ -314,7 +353,7 @@ class FtpFs(AbstractedFS):
svp = join(self.cwd, src).lstrip("/")
dvp = join(self.cwd, dst).lstrip("/")
try:
self.hub.up2k.handle_mv(self.uname, svp, dvp)
self.hub.up2k.handle_mv(self.uname, self.h.cli_ip, svp, dvp)
except Exception as ex:
raise FSE(str(ex))
@@ -432,15 +471,19 @@ class FtpHandler(FTPHandler):
xbu = vfs.flags.get("xbu")
if xbu and not runhook(
None,
None,
self.hub.up2k,
"xbu.ftpd",
xbu,
ap,
vfs.canonical(rem),
vp,
"",
self.uname,
self.hub.asrv.vfs.get_perms(vp, self.uname),
0,
0,
self.cli_ip,
0,
time.time(),
"",
):
raise FSE("Upload blocked by xbu server config")
@@ -543,9 +586,15 @@ class Ftpd(object):
if "::" in ips:
ips.append("0.0.0.0")
ips = [x for x in ips if "unix:" not in x]
if self.args.ftp4:
ips = [x for x in ips if ":" not in x]
if not ips:
lgr.fatal("cannot start ftp-server; no compatible IPs in -i")
return
ips = list(ODict.fromkeys(ips)) # dedup
ioloop = IOLoop()

File diff suppressed because it is too large


@@ -9,6 +9,9 @@ import threading # typechk
import time
try:
if os.environ.get("PRTY_NO_TLS"):
raise Exception()
HAVE_SSL = True
import ssl
except:


@@ -12,7 +12,7 @@ import time
import queue
from .__init__ import ANYWIN, CORES, EXE, MACOS, TYPE_CHECKING, EnvParams
from .__init__ import ANYWIN, CORES, EXE, MACOS, PY2, TYPE_CHECKING, EnvParams, unicode
try:
MNFE = ModuleNotFoundError
@@ -84,6 +84,12 @@ if TYPE_CHECKING:
if True: # pylint: disable=using-constant-test
from typing import Any, Optional
if PY2:
range = xrange # type: ignore
if not hasattr(socket, "AF_UNIX"):
setattr(socket, "AF_UNIX", -9001)
class HttpSrv(object):
"""
@@ -148,7 +154,17 @@ class HttpSrv(object):
env = jinja2.Environment()
env.loader = jinja2.FileSystemLoader(os.path.join(self.E.mod, "web"))
jn = ["splash", "svcs", "browser", "browser2", "msg", "md", "mde", "cf"]
jn = [
"splash",
"shares",
"svcs",
"browser",
"browser2",
"msg",
"md",
"mde",
"cf",
]
self.j2 = {x: env.get_template(x + ".html") for x in jn}
zs = os.path.join(self.E.mod, "web", "deps", "prism.js.gz")
self.prism = os.path.exists(zs)
@@ -240,15 +256,24 @@ class HttpSrv(object):
return
def listen(self, sck: socket.socket, nlisteners: int) -> None:
tcp = sck.family != socket.AF_UNIX
if self.args.j != 1:
# lost in the pickle; redefine
if not ANYWIN or self.args.reuseaddr:
sck.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sck.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
if tcp:
sck.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
sck.settimeout(None) # < does not inherit, ^ opts above do
ip, port = sck.getsockname()[:2]
if tcp:
ip, port = sck.getsockname()[:2]
else:
ip = re.sub(r"\.[0-9]+$", "", sck.getsockname().split("/")[-1])
port = 0
self.srvs.append(sck)
self.bound.add((ip, port))
self.nclimax = math.ceil(self.args.nc * 1.0 / nlisteners)
@@ -260,10 +285,19 @@ class HttpSrv(object):
def thr_listen(self, srv_sck: socket.socket) -> None:
"""listens on a shared tcp server"""
ip, port = srv_sck.getsockname()[:2]
fno = srv_sck.fileno()
hip = "[{}]".format(ip) if ":" in ip else ip
msg = "subscribed @ {}:{} f{} p{}".format(hip, port, fno, os.getpid())
if srv_sck.family == socket.AF_UNIX:
ip = re.sub(r"\.[0-9]+$", "", srv_sck.getsockname())
msg = "subscribed @ %s f%d p%d" % (ip, fno, os.getpid())
ip = ip.split("/")[-1]
port = 0
tcp = False
else:
tcp = True
ip, port = srv_sck.getsockname()[:2]
hip = "[%s]" % (ip,) if ":" in ip else ip
msg = "subscribed @ %s:%d f%d p%d" % (hip, port, fno, os.getpid())
self.log(self.name, msg)
Daemon(self.broker.say, "sig-hsrv-up1", ("cb_httpsrv_up",))
@@ -335,11 +369,13 @@ class HttpSrv(object):
try:
sck, saddr = srv_sck.accept()
cip, cport = saddr[:2]
if cip.startswith("::ffff:"):
cip = cip[7:]
addr = (cip, cport)
if tcp:
cip = unicode(saddr[0])
if cip.startswith("::ffff:"):
cip = cip[7:]
addr = (cip, saddr[1])
else:
addr = ("127.8.3.7", sck.fileno())
except (OSError, socket.error) as ex:
if self.stopping:
break


@@ -74,7 +74,7 @@ class Ico(object):
try:
_, _, tw, th = pb.textbbox((0, 0), ext)
except:
tw, th = pb.textsize(ext)
tw, th = pb.textsize(ext) # type: ignore
tw += len(ext)
cw = tw // len(ext)


@@ -292,6 +292,22 @@ class MDNS(MCast):
def run2(self) -> None:
last_hop = time.time()
ihop = self.args.mc_hop
try:
if self.args.no_poll:
raise Exception()
fd2sck = {}
srvpoll = select.poll()
for sck in self.srv:
fd = sck.fileno()
fd2sck[fd] = sck
srvpoll.register(fd, select.POLLIN)
except Exception as ex:
srvpoll = None
if not self.args.no_poll:
t = "WARNING: failed to poll(), will use select() instead: %r"
self.log(t % (ex,), 3)
while self.running:
timeout = (
0.02 + random.random() * 0.07
@@ -300,8 +316,13 @@ class MDNS(MCast):
if self.unsolicited
else (last_hop + ihop if ihop else 180)
)
rdy = select.select(self.srv, [], [], timeout)
rx: list[socket.socket] = rdy[0] # type: ignore
if srvpoll:
pr = srvpoll.poll(timeout * 1000)
rx = [fd2sck[x[0]] for x in pr if x[1] & select.POLLIN]
else:
rdy = select.select(self.srv, [], [], timeout)
rx: list[socket.socket] = rdy[0] # type: ignore
self.rx4.cln()
self.rx6.cln()
buf = b""
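The try-poll-else-select dance above, reduced to a standalone sketch (stdlib only; poll() is simply absent on some platforms, which is what the fallback is for):

# minimal illustration, not copyparty code
import select, socket

a, b = socket.socketpair()
srv = [a]
try:
    fd2sck = {s.fileno(): s for s in srv}
    srvpoll = select.poll()  # raises AttributeError where poll() is unavailable
    for fd in fd2sck:
        srvpoll.register(fd, select.POLLIN)
except Exception:
    srvpoll = None

b.send(b"x")
if srvpoll:
    rx = [fd2sck[fd] for fd, ev in srvpoll.poll(1000) if ev & select.POLLIN]
else:
    rx = select.select(srv, [], [], 1.0)[0]
print([s.recv(1) for s in rx])  # [b'x']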
@@ -340,7 +361,7 @@ class MDNS(MCast):
except:
pass
self.srv = {}
self.srv.clear()
def eat(self, buf: bytes, addr: tuple[str, int], sck: socket.socket) -> None:
cip = addr[0]


@@ -32,6 +32,17 @@ if True: # pylint: disable=using-constant-test
from .util import NamedLogger, RootLogger
try:
if os.environ.get("PRTY_NO_MUTAGEN"):
raise Exception()
from mutagen import version # noqa: F401
HAVE_MUTAGEN = True
except:
HAVE_MUTAGEN = False
def have_ff(scmd: str) -> bool:
if ANYWIN:
scmd += ".exe"
@@ -48,8 +59,8 @@ def have_ff(scmd: str) -> bool:
return bool(shutil.which(scmd))
HAVE_FFMPEG = have_ff("ffmpeg")
HAVE_FFPROBE = have_ff("ffprobe")
HAVE_FFMPEG = not os.environ.get("PRTY_NO_FFMPEG") and have_ff("ffmpeg")
HAVE_FFPROBE = not os.environ.get("PRTY_NO_FFPROBE") and have_ff("ffprobe")
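These PRTY_NO_* guards make it easy to exercise the fallback code paths without uninstalling anything; a sketch of the assumed usage (the override has to be in place before the module is imported, since the HAVE_* flags are evaluated at import time):

# assumption: the package is importable as copyparty, as the relative imports suggest
import os
os.environ["PRTY_NO_FFMPEG"] = "1"
os.environ["PRTY_NO_FFPROBE"] = "1"

from copyparty.mtag import HAVE_FFMPEG, HAVE_FFPROBE
print(HAVE_FFMPEG, HAVE_FFPROBE)  # False False, even with ffmpeg installed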
class MParser(object):
@@ -111,7 +122,9 @@ class MParser(object):
raise Exception()
def au_unpk(log: "NamedLogger", fmt_map: dict[str, str], abspath: str, vn: Optional[VFS] = None) -> str:
def au_unpk(
log: "NamedLogger", fmt_map: dict[str, str], abspath: str, vn: Optional[VFS] = None
) -> str:
ret = ""
try:
ext = abspath.split(".")[-1].lower()
@@ -137,6 +150,9 @@ def au_unpk(log: "NamedLogger", fmt_map: dict[str, str], abspath: str, vn: Optio
zil = [x for x in zil if x.filename.lower().split(".")[-1] == au]
fi = zf.open(zil[0])
else:
raise Exception("unknown compression %s" % (pk,))
with os.fdopen(fd, "wb") as fo:
while True:
buf = fi.read(32768)
@@ -331,9 +347,7 @@ class MTag(object):
if self.backend == "mutagen":
self._get = self.get_mutagen
try:
from mutagen import version # noqa: F401
except:
if not HAVE_MUTAGEN:
self.log("could not load Mutagen, trying FFprobe instead", c=3)
self.backend = "ffprobe"
@@ -573,7 +587,7 @@ class MTag(object):
continue
if k == ".aq":
v /= 1000
v /= 1000 # type: ignore
if k == "ac" and v.startswith("mp4a.40."):
v = "aac"


@@ -4,11 +4,21 @@ from __future__ import print_function, unicode_literals
import argparse
import base64
import hashlib
import os
import sys
import threading
from .__init__ import unicode
try:
if os.environ.get("PRTY_NO_ARGON2"):
raise Exception()
HAVE_ARGON2 = True
from argon2 import __version__ as argon2ver
except:
HAVE_ARGON2 = False
class PWHash(object):
def __init__(self, args: argparse.Namespace):


@@ -187,6 +187,8 @@ class SMB(object):
debug('%s("%s", %s) %s @%s\033[K\033[0m', caller, vpath, str(a), perms, uname)
vfs, rem = self.asrv.vfs.get(vpath, uname, *perms)
if not vfs.realpath:
raise Exception("unmapped vfs")
return vfs, vfs.canonical(rem)
def _listdir(self, vpath: str, *a: Any, **ka: Any) -> list[str]:
@@ -195,6 +197,8 @@ class SMB(object):
uname = self._uname()
# debug('listdir("%s", %s) @%s\033[K\033[0m', vpath, str(a), uname)
vfs, rem = self.asrv.vfs.get(vpath, uname, False, False)
if not vfs.realpath:
raise Exception("unmapped vfs")
_, vfs_ls, vfs_virt = vfs.ls(
rem, uname, not self.args.no_scandir, [[False, False]]
)
@@ -240,7 +244,21 @@ class SMB(object):
xbu = vfs.flags.get("xbu")
if xbu and not runhook(
self.nlog, xbu, ap, vpath, "", "", 0, 0, "1.7.6.2", 0, ""
self.nlog,
None,
self.hub.up2k,
"xbu.smb",
xbu,
ap,
vpath,
"",
"",
"",
0,
0,
"1.7.6.2",
time.time(),
"",
):
yeet("blocked by xbu server config: " + vpath)
@@ -297,7 +315,7 @@ class SMB(object):
t = "blocked rename (no-move-acc %s): /%s @%s"
yeet(t % (vfs1.axs.umove, vp1, uname))
self.hub.up2k.handle_mv(uname, vp1, vp2)
self.hub.up2k.handle_mv(uname, "1.7.6.2", vp1, vp2)
try:
bos.makedirs(ap2)
except:


@@ -5,11 +5,11 @@ import errno
import re
import select
import socket
from email.utils import formatdate
import time
from .__init__ import TYPE_CHECKING
from .multicast import MC_Sck, MCast
from .util import CachedSet, html_escape, min_ex
from .util import CachedSet, formatdate, html_escape, min_ex
if TYPE_CHECKING:
from .broker_util import BrokerCli
@@ -141,9 +141,29 @@ class SSDPd(MCast):
self.log("stopped", 2)
def run2(self) -> None:
try:
if self.args.no_poll:
raise Exception()
fd2sck = {}
srvpoll = select.poll()
for sck in self.srv:
fd = sck.fileno()
fd2sck[fd] = sck
srvpoll.register(fd, select.POLLIN)
except Exception as ex:
srvpoll = None
if not self.args.no_poll:
t = "WARNING: failed to poll(), will use select() instead: %r"
self.log(t % (ex,), 3)
while self.running:
rdy = select.select(self.srv, [], [], self.args.z_chk or 180)
rx: list[socket.socket] = rdy[0] # type: ignore
if srvpoll:
pr = srvpoll.poll((self.args.z_chk or 180) * 1000)
rx = [fd2sck[x[0]] for x in pr if x[1] & select.POLLIN]
else:
rdy = select.select(self.srv, [], [], self.args.z_chk or 180)
rx: list[socket.socket] = rdy[0] # type: ignore
self.rxc.cln()
buf = b""
addr = ("0", 0)
@@ -168,7 +188,7 @@ class SSDPd(MCast):
except:
pass
self.srv = {}
self.srv.clear()
def eat(self, buf: bytes, addr: tuple[str, int]) -> None:
cip = addr[0]
@@ -209,7 +229,7 @@ CONFIGID.UPNP.ORG: 1
"""
v4 = srv.ip.replace("::ffff:", "")
zs = zs.format(formatdate(usegmt=True), v4, srv.hport, self.args.zsid)
zs = zs.format(formatdate(), v4, srv.hport, self.args.zsid)
zb = zs[1:].replace("\n", "\r\n").encode("utf-8", "replace")
srv.sck.sendto(zb, addr[:2])


@@ -1,13 +1,13 @@
# coding: utf-8
from __future__ import print_function, unicode_literals
import argparse
import re
import stat
import tarfile
from queue import Queue
from .authsrv import AuthSrv
from .bos import bos
from .sutil import StreamArc, errdesc
from .util import Daemon, fsenc, min_ex
@@ -45,12 +45,12 @@ class StreamTar(StreamArc):
def __init__(
self,
log: "NamedLogger",
args: argparse.Namespace,
asrv: AuthSrv,
fgen: Generator[dict[str, Any], None, None],
cmp: str = "",
**kwargs: Any
):
super(StreamTar, self).__init__(log, args, fgen)
super(StreamTar, self).__init__(log, asrv, fgen)
self.ci = 0
self.co = 0
@@ -148,7 +148,7 @@ class StreamTar(StreamArc):
errors.append((f["vp"], ex))
if errors:
self.errf, txt = errdesc(errors)
self.errf, txt = errdesc(self.asrv.vfs, errors)
self.log("\n".join(([repr(self.errf)] + txt[1:])))
self.ser(self.errf)


@@ -12,6 +12,12 @@ from .label import DNSBuffer, DNSLabel
from .ranges import IP4, IP6, H, I, check_bytes
try:
range = xrange
except:
pass
class DNSError(Exception):
pass


@@ -11,7 +11,21 @@ import os
from ._shared import IP, Adapter
if os.name == "nt":
def nope(include_unconfigured=False):
return []
try:
S390X = os.uname().machine == "s390x"
except:
S390X = False
if os.environ.get("PRTY_NO_IFADDR") or S390X:
# s390x deadlocks at libc.getifaddrs
get_adapters = nope
elif os.name == "nt":
from ._win32 import get_adapters
elif os.name == "posix":
from ._posix import get_adapters


@@ -17,6 +17,7 @@ if not PY2:
U: Callable[[str], str] = str
else:
U = unicode # noqa: F821 # pylint: disable=undefined-variable,self-assigning-variable
range = xrange # noqa: F821 # pylint: disable=undefined-variable,self-assigning-variable
class Adapter(object):


@@ -16,6 +16,11 @@ if True: # pylint: disable=using-constant-test
from typing import Callable, List, Optional, Tuple, Union
try:
range = xrange
except:
pass
def num_char_count_bits(ver: int) -> int:
return 16 if (ver + 7) // 17 else 8


@@ -1,15 +1,15 @@
# coding: utf-8
from __future__ import print_function, unicode_literals
import argparse
import os
import tempfile
from datetime import datetime
from .__init__ import CORES
from .authsrv import VFS, AuthSrv
from .bos import bos
from .th_cli import ThumbCli
from .util import UTC, vjoin
from .util import UTC, vjoin, vol_san
if True: # pylint: disable=using-constant-test
from typing import Any, Generator, Optional
@@ -21,12 +21,13 @@ class StreamArc(object):
def __init__(
self,
log: "NamedLogger",
args: argparse.Namespace,
asrv: AuthSrv,
fgen: Generator[dict[str, Any], None, None],
**kwargs: Any
):
self.log = log
self.args = args
self.asrv = asrv
self.args = asrv.args
self.fgen = fgen
self.stopped = False
@@ -103,15 +104,20 @@ def enthumb(
return f
def errdesc(errors: list[tuple[str, str]]) -> tuple[dict[str, Any], list[str]]:
def errdesc(
vfs: VFS, errors: list[tuple[str, str]]
) -> tuple[dict[str, Any], list[str]]:
report = ["copyparty failed to add the following files to the archive:", ""]
for fn, err in errors:
report.extend([" file: {}".format(fn), "error: {}".format(err), ""])
btxt = "\r\n".join(report).encode("utf-8", "replace")
btxt = vol_san(list(vfs.all_vols.values()), btxt)
with tempfile.NamedTemporaryFile(prefix="copyparty-", delete=False) as tf:
tf_path = tf.name
tf.write("\r\n".join(report).encode("utf-8", "replace"))
tf.write(btxt)
dt = datetime.now(UTC).strftime("%Y-%m%d-%H%M%S")


@@ -28,18 +28,30 @@ if True: # pylint: disable=using-constant-test
import typing
from typing import Any, Optional, Union
from .__init__ import ANYWIN, EXE, MACOS, TYPE_CHECKING, E, EnvParams, unicode
from .__init__ import ANYWIN, EXE, MACOS, PY2, TYPE_CHECKING, E, EnvParams, unicode
from .authsrv import BAD_CFG, AuthSrv
from .cert import ensure_cert
from .mtag import HAVE_FFMPEG, HAVE_FFPROBE
from .mtag import HAVE_FFMPEG, HAVE_FFPROBE, HAVE_MUTAGEN
from .pwhash import HAVE_ARGON2
from .tcpsrv import TcpSrv
from .th_srv import HAVE_PIL, HAVE_VIPS, HAVE_WEBP, ThumbSrv
from .th_srv import (
HAVE_AVIF,
HAVE_FFMPEG,
HAVE_FFPROBE,
HAVE_HEIF,
HAVE_PIL,
HAVE_VIPS,
HAVE_WEBP,
ThumbSrv,
)
from .up2k import Up2k
from .util import (
DEF_EXP,
DEF_MTE,
DEF_MTH,
FFMPEG_URL,
HAVE_PSUTIL,
HAVE_SQLITE3,
UTC,
VERSIONS,
Daemon,
@@ -65,6 +77,9 @@ if TYPE_CHECKING:
except:
pass
if PY2:
range = xrange # type: ignore
class SvcHub(object):
"""
@@ -91,6 +106,7 @@ class SvcHub(object):
self.no_ansi = args.no_ansi
self.logf: Optional[typing.TextIO] = None
self.logf_base_fn = ""
self.is_dut = False # running in unittest; always False
self.stop_req = False
self.stopping = False
self.stopped = False
@@ -193,6 +209,20 @@ class SvcHub(object):
t = "WARNING: --s-rd-sz (%d) is larger than --iobuf (%d); this may lead to reduced performance"
self.log("root", t % (args.s_rd_sz, args.iobuf), 3)
if args.chpw and args.idp_h_usr:
t = "ERROR: user-changeable passwords is incompatible with IdP/identity-providers; you must disable either --chpw or --idp-h-usr"
self.log("root", t, 1)
raise Exception(t)
noch = set()
for zs in args.chpw_no or []:
zsl = [x.strip() for x in zs.split(",")]
noch.update([x for x in zsl if x])
args.chpw_no = noch
if args.shr:
self.setup_share_db()
bri = "zy"[args.theme % 2 :][:1]
ch = "abcdefghijklmnopqrstuvwx"[int(args.theme / 2)]
args.theme = "{0}{1} {0} {1}".format(ch, bri)
@@ -232,6 +262,8 @@ class SvcHub(object):
self.up2k = Up2k(self)
self._feature_test()
decs = {k: 1 for k in self.args.th_dec.split(",")}
if not HAVE_VIPS:
decs.pop("vips", None)
@@ -336,6 +368,93 @@ class SvcHub(object):
self.broker = Broker(self)
def setup_share_db(self) -> None:
al = self.args
if not HAVE_SQLITE3:
self.log("root", "sqlite3 not available; disabling --shr", 1)
al.shr = ""
return
import sqlite3
al.shr = al.shr.strip("/")
if "/" in al.shr or not al.shr:
t = "config error: --shr must be the name of a virtual toplevel directory to put shares inside"
self.log("root", t, 1)
raise Exception(t)
al.shr = "/%s/" % (al.shr,)
create = True
modified = False
db_path = self.args.shr_db
self.log("root", "opening shares-db %s" % (db_path,))
for n in range(2):
try:
db = sqlite3.connect(db_path)
cur = db.cursor()
try:
cur.execute("select count(*) from sh").fetchone()
create = False
break
except:
pass
except Exception as ex:
if n:
raise
t = "shares-db corrupt; deleting and recreating: %r"
self.log("root", t % (ex,), 3)
try:
cur.close() # type: ignore
except:
pass
try:
db.close() # type: ignore
except:
pass
os.unlink(db_path)
sch1 = [
r"create table kv (k text, v int)",
r"create table sh (k text, pw text, vp text, pr text, st int, un text, t0 int, t1 int)",
# sharekey, password, src, perms, numFiles, owner, created, expires
]
sch2 = [
r"create table sf (k text, vp text)",
r"create index sf_k on sf(k)",
r"create index sh_k on sh(k)",
r"create index sh_t1 on sh(t1)",
]
assert db # type: ignore
assert cur # type: ignore
if create:
dver = 2
modified = True
for cmd in sch1 + sch2:
cur.execute(cmd)
self.log("root", "created new shares-db")
else:
(dver,) = cur.execute("select v from kv where k = 'sver'").fetchall()[0]
if dver == 1:
modified = True
for cmd in sch2:
cur.execute(cmd)
cur.execute("update sh set st = 0")
self.log("root", "shares-db schema upgrade ok")
if modified:
for cmd in [
r"delete from kv where k = 'sver'",
r"insert into kv values ('sver', %d)" % (2,),
]:
cur.execute(cmd)
db.commit()
cur.close()
db.close()
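A throwaway sketch of the same schema plus the expiry filter that AuthSrv applies when loading shares (keys, paths and owners are made up):

# illustration only; mirrors the sh/sf tables created above
import sqlite3, time

db = sqlite3.connect(":memory:")
cur = db.cursor()
cur.execute("create table sh (k text, pw text, vp text, pr text, st int, un text, t0 int, t1 int)")
cur.execute("create table sf (k text, vp text)")
now = int(time.time())
cur.execute("insert into sh values ('abcd1234','','pics/2024','r',0,'ed',?,?)", (now, now + 3600))
cur.execute("insert into sh values ('gone4sho','','docs','r',0,'ed',?,?)", (now - 7200, now - 3600))
for k, vp, t1 in cur.execute("select k, vp, t1 from sh"):
    if t1 and t1 < now:
        continue  # expired, skipped exactly like in _reload()
    print(k, "=>", vp)  # only abcd1234 survives
db.close()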
def start_ftpd(self) -> None:
time.sleep(30)
@@ -420,6 +539,58 @@ class SvcHub(object):
Daemon(self.sd_notify, "sd-notify")
def _feature_test(self) -> None:
fok = []
fng = []
t_ff = "transcode audio, create spectrograms, video thumbnails"
to_check = [
(HAVE_SQLITE3, "sqlite", "file and media indexing"),
(HAVE_PIL, "pillow", "image thumbnails (plenty fast)"),
(HAVE_VIPS, "vips", "image thumbnails (faster, eats more ram)"),
(HAVE_WEBP, "pillow-webp", "create thumbnails as webp files"),
(HAVE_FFMPEG, "ffmpeg", t_ff + ", good-but-slow image thumbnails"),
(HAVE_FFPROBE, "ffprobe", t_ff + ", read audio/media tags"),
(HAVE_MUTAGEN, "mutagen", "read audio tags (ffprobe is better but slower)"),
(HAVE_ARGON2, "argon2", "secure password hashing (advanced users only)"),
(HAVE_HEIF, "pillow-heif", "read .heif images with pillow (rarely useful)"),
(HAVE_AVIF, "pillow-avif", "read .avif images with pillow (rarely useful)"),
]
if ANYWIN:
to_check += [
(HAVE_PSUTIL, "psutil", "improved plugin cleanup (rarely useful)")
]
verbose = self.args.deps
if verbose:
self.log("dependencies", "")
for have, feat, what in to_check:
lst = fok if have else fng
lst.append((feat, what))
if verbose:
zi = 2 if have else 5
sgot = "found" if have else "missing"
t = "%7s: %s \033[36m(%s)"
self.log("dependencies", t % (sgot, feat, what), zi)
if verbose:
self.log("dependencies", "")
return
sok = ", ".join(x[0] for x in fok)
sng = ", ".join(x[0] for x in fng)
t = ""
if sok:
t += "OK: \033[32m" + sok
if sng:
if t:
t += ", "
t += "\033[0mNG: \033[35m" + sng
t += "\033[0m, see --deps"
self.log("dependencies", t, 6)
def _check_env(self) -> None:
try:
files = os.listdir(E.cfg)
@@ -479,8 +650,10 @@ class SvcHub(object):
zsl = al.th_covers.split(",")
zsl = [x.strip() for x in zsl]
zsl = [x for x in zsl if x]
al.th_covers = set(zsl)
al.th_coversd = set(zsl + ["." + x for x in zsl])
al.th_covers = zsl
al.th_coversd = zsl + ["." + x for x in zsl]
al.th_covers_set = set(al.th_covers)
al.th_coversd_set = set(al.th_coversd)
for k in "c".split(" "):
vl = getattr(al, k)
@@ -744,18 +917,21 @@ class SvcHub(object):
Daemon(self._reload, "reloading")
return "reload initiated"
def _reload(self, rescan_all_vols: bool = True) -> None:
def _reload(self, rescan_all_vols: bool = True, up2k: bool = True) -> None:
with self.up2k.mutex:
if self.reloading != 1:
return
self.reloading = 2
self.log("root", "reloading config")
self.asrv.reload()
self.up2k.reload(rescan_all_vols)
self.asrv.reload(9 if up2k else 4)
if up2k:
self.up2k.reload(rescan_all_vols)
else:
self.log("root", "reload done")
self.broker.reload()
self.reloading = 0
def _reload_blocking(self, rescan_all_vols: bool = True) -> None:
def _reload_blocking(self, rescan_all_vols: bool = True, up2k: bool = True) -> None:
while True:
with self.up2k.mutex:
if self.reloading < 2:
@@ -766,7 +942,7 @@ class SvcHub(object):
# try to handle multiple pending IdP reloads at once:
time.sleep(0.2)
self._reload(rescan_all_vols=rescan_all_vols)
self._reload(rescan_all_vols=rescan_all_vols, up2k=up2k)
def stop_thr(self) -> None:
while not self.stop_req:


@@ -1,12 +1,12 @@
# coding: utf-8
from __future__ import print_function, unicode_literals
import argparse
import calendar
import stat
import time
import zlib
from .authsrv import AuthSrv
from .bos import bos
from .sutil import StreamArc, errdesc
from .util import min_ex, sanitize_fn, spack, sunpack, yieldfile
@@ -37,9 +37,7 @@ def dostime2unix(buf: bytes) -> int:
def unixtime2dos(ts: int) -> bytes:
tt = time.gmtime(ts + 1)
dy, dm, dd, th, tm, ts = list(tt)[:6]
dy, dm, dd, th, tm, ts, _, _, _ = time.gmtime(ts + 1)
bd = ((dy - 1980) << 9) + (dm << 5) + dd
bt = (th << 11) + (tm << 5) + ts // 2
try:
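A worked instance of the DOS date/time packing above (the timestamp is arbitrary):

# 2024-09-02 01:08:41 UTC
dy, dm, dd, th, tm, ts = 2024, 9, 2, 1, 8, 41
bd = ((dy - 1980) << 9) + (dm << 5) + dd  # 7-bit years-since-1980, 4-bit month, 5-bit day
bt = (th << 11) + (tm << 5) + ts // 2     # 5-bit hour, 6-bit minute, 5-bit 2-second units
print(bd, bt)  # 22818 2324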
@@ -219,13 +217,13 @@ class StreamZip(StreamArc):
def __init__(
self,
log: "NamedLogger",
args: argparse.Namespace,
asrv: AuthSrv,
fgen: Generator[dict[str, Any], None, None],
utf8: bool = False,
pre_crc: bool = False,
**kwargs: Any
) -> None:
super(StreamZip, self).__init__(log, args, fgen)
super(StreamZip, self).__init__(log, asrv, fgen)
self.utf8 = utf8
self.pre_crc = pre_crc
@@ -302,7 +300,7 @@ class StreamZip(StreamArc):
mbuf = b""
if errors:
errf, txt = errdesc(errors)
errf, txt = errdesc(self.asrv.vfs, errors)
self.log("\n".join(([repr(errf)] + txt[1:])))
for x in self.ser(errf):
yield x


@@ -15,19 +15,25 @@ from .util import (
E_ADDR_IN_USE,
E_ADDR_NOT_AVAIL,
E_UNREACH,
HAVE_IPV6,
IP6ALL,
VF_CAREFUL,
Netdev,
atomic_move,
min_ex,
sunpack,
termsize,
)
if True:
from typing import Generator
from typing import Generator, Union
if TYPE_CHECKING:
from .svchub import SvcHub
if not hasattr(socket, "AF_UNIX"):
setattr(socket, "AF_UNIX", -9001)
if not hasattr(socket, "IPPROTO_IPV6"):
setattr(socket, "IPPROTO_IPV6", 41)
@@ -111,8 +117,10 @@ class TcpSrv(object):
eps = {
"127.0.0.1": Netdev("127.0.0.1", 0, "", "local only"),
"::1": Netdev("::1", 0, "", "local only"),
}
if HAVE_IPV6:
eps["::1"] = Netdev("::1", 0, "", "local only")
nonlocals = [x for x in self.args.i if x not in [k.split("/")[0] for k in eps]]
if nonlocals:
try:
@@ -214,14 +222,41 @@ class TcpSrv(object):
if self.args.qr or self.args.qrs:
self.qr = self._qr(qr1, qr2)
def nlog(self, msg: str, c: Union[int, str] = 0) -> None:
self.log("tcpsrv", msg, c)
def _listen(self, ip: str, port: int) -> None:
ipv = socket.AF_INET6 if ":" in ip else socket.AF_INET
uds_perm = uds_gid = -1
if "unix:" in ip:
tcp = False
ipv = socket.AF_UNIX
uds = ip.split(":")
ip = uds[-1]
if len(uds) > 2:
uds_perm = int(uds[1], 8)
if len(uds) > 3:
try:
uds_gid = int(uds[2])
except:
import grp
uds_gid = grp.getgrnam(uds[2]).gr_gid
elif ":" in ip:
tcp = True
ipv = socket.AF_INET6
else:
tcp = True
ipv = socket.AF_INET
srv = socket.socket(ipv, socket.SOCK_STREAM)
if not ANYWIN or self.args.reuseaddr:
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
if tcp:
srv.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
srv.settimeout(None) # < does not inherit, ^ opts above do
try:
@@ -233,8 +268,25 @@ class TcpSrv(object):
srv.setsockopt(socket.SOL_IP, socket.IP_FREEBIND, 1)
try:
srv.bind((ip, port))
sport = srv.getsockname()[1]
if tcp:
srv.bind((ip, port))
else:
if ANYWIN or self.args.rm_sck:
if os.path.exists(ip):
os.unlink(ip)
srv.bind(ip)
else:
tf = "%s.%d" % (ip, os.getpid())
if os.path.exists(tf):
os.unlink(tf)
srv.bind(tf)
if uds_gid != -1:
os.chown(tf, -1, uds_gid)
if uds_perm != -1:
os.chmod(tf, uds_perm)
atomic_move(self.nlog, tf, ip, VF_CAREFUL)
sport = srv.getsockname()[1] if tcp else port
if port != sport:
# linux 6.0.16 lets you bind a port which is in use
# except it just gives you a random port instead
@@ -246,12 +298,23 @@ class TcpSrv(object):
except:
pass
e = ""
if ex.errno in E_ADDR_IN_USE:
e = "\033[1;31mport {} is busy on interface {}\033[0m".format(port, ip)
if not tcp:
e = "\033[1;31munix-socket {} is busy\033[0m".format(ip)
elif ex.errno in E_ADDR_NOT_AVAIL:
e = "\033[1;31minterface {} does not exist\033[0m".format(ip)
else:
if not e:
if not tcp:
t = "\n\n\n NOTE: this crash may be due to a unix-socket bug; try --rm-sck\n"
self.log("tcpsrv", t, 2)
raise
if not tcp and not self.args.rm_sck:
e += "; maybe this is a bug? try --rm-sck"
raise Exception(e)
def run(self) -> None:
@@ -259,7 +322,14 @@ class TcpSrv(object):
bound: list[tuple[str, int]] = []
srvs: list[socket.socket] = []
for srv in self.srv:
ip, port = srv.getsockname()[:2]
if srv.family == socket.AF_UNIX:
tcp = False
ip = re.sub(r"\.[0-9]+$", "", srv.getsockname())
port = 0
else:
tcp = True
ip, port = srv.getsockname()[:2]
if ip == IP6ALL:
ip = "::" # jython
@@ -291,8 +361,12 @@ class TcpSrv(object):
bound.append((ip, port))
srvs.append(srv)
fno = srv.fileno()
hip = "[{}]".format(ip) if ":" in ip else ip
msg = "listening @ {}:{} f{} p{}".format(hip, port, fno, os.getpid())
if tcp:
hip = "[{}]".format(ip) if ":" in ip else ip
msg = "listening @ {}:{} f{} p{}".format(hip, port, fno, os.getpid())
else:
msg = "listening @ {} f{} p{}".format(ip, fno, os.getpid())
self.log("tcpsrv", msg)
if self.args.q:
print(msg)
@@ -345,6 +419,8 @@ class TcpSrv(object):
def detect_interfaces(self, listen_ips: list[str]) -> dict[str, Netdev]:
from .stolen.ifaddr import get_adapters
listen_ips = [x for x in listen_ips if "unix:" not in x]
nics = get_adapters(True)
eps: dict[str, Netdev] = {}
for nic in nics:

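The listener above accepts "unix:" specs of the form unix:[perm[:group]]:path, binds the socket under a temporary name, fixes group and permissions, then renames it into place so clients never see a half-configured socket. A rough sketch of that parse-and-bind sequence (simplified; error handling and the Windows fallback are omitted, and the real code uses a retrying atomic_move helper instead of a bare rename):

import grp
import os
import socket

def listen_unix(spec):
    # spec examples: "unix:/dev/shm/a.sock", "unix:770:www:/dev/shm/a.sock"
    parts = spec.split(":")
    path = parts[-1]
    perm = int(parts[1], 8) if len(parts) > 2 else -1
    gid = -1
    if len(parts) > 3:
        try:
            gid = int(parts[2])
        except ValueError:
            gid = grp.getgrnam(parts[2]).gr_gid

    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    tmp = "%s.%d" % (path, os.getpid())
    if os.path.exists(tmp):
        os.unlink(tmp)
    srv.bind(tmp)
    if gid != -1:
        os.chown(tmp, -1, gid)  # -1 keeps the current owner
    if perm != -1:
        os.chmod(tmp, perm)
    os.rename(tmp, path)  # publish the fully-configured socket
    srv.listen(8)
    return srv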

@@ -33,10 +33,10 @@ from partftpy import (
)
from partftpy.TftpShared import TftpException
from .__init__ import EXE, TYPE_CHECKING
from .__init__ import EXE, PY2, TYPE_CHECKING
from .authsrv import VFS
from .bos import bos
from .util import BytesIO, Daemon, ODict, exclude_dotfiles, min_ex, runhook, undot
from .util import UTC, BytesIO, Daemon, ODict, exclude_dotfiles, min_ex, runhook, undot
if True: # pylint: disable=using-constant-test
from typing import Any, Union
@@ -44,6 +44,9 @@ if True: # pylint: disable=using-constant-test
if TYPE_CHECKING:
from .svchub import SvcHub
if PY2:
range = xrange # type: ignore
lg = logging.getLogger("tftp")
debug, info, warning, error = (lg.debug, lg.info, lg.warning, lg.error)
@@ -95,7 +98,7 @@ class Tftpd(object):
TftpServer,
]
cbak = []
if not self.args.tftp_no_fast and not EXE:
if not self.args.tftp_no_fast and not EXE and not PY2:
try:
ptn = re.compile(r"(^\s*)log\.debug\(.*\)$")
for C in Cs:
@@ -105,7 +108,7 @@ class Tftpd(object):
cfn = C.__spec__.origin
exec (compile(src2, filename=cfn, mode="exec"), C.__dict__)
except Exception:
t = "failed to optimize tftp code; run with --tftp-noopt if there are issues:\n"
t = "failed to optimize tftp code; run with --tftp-no-fast if there are issues:\n"
self.log("tftp", t + min_ex(), 3)
for n, zd in enumerate(cbak):
Cs[n].__dict__ = zd
@@ -150,11 +153,6 @@ class Tftpd(object):
self._disarm(fos)
ip = next((x for x in self.args.i if ":" not in x), None)
if not ip:
self.log("tftp", "IPv6 not supported for tftp; listening on 0.0.0.0", 3)
ip = "0.0.0.0"
self.port = int(self.args.tftp)
self.srv = []
self.ips = []
@@ -168,9 +166,16 @@ class Tftpd(object):
if "::" in ips:
ips.append("0.0.0.0")
if self.args.ftp4:
ips = [x for x in ips if "unix:" not in x]
if self.args.tftp4:
ips = [x for x in ips if ":" not in x]
if not ips:
t = "cannot start tftp-server; no compatible IPs in -i"
self.nlog(t, 1)
return
ips = list(ODict.fromkeys(ips)) # dedup
for ip in ips:
@@ -246,6 +251,8 @@ class Tftpd(object):
debug('%s("%s", %s) %s\033[K\033[0m', caller, vpath, str(a), perms)
vfs, rem = self.asrv.vfs.get(vpath, "*", *perms)
if not vfs.realpath:
raise Exception("unmapped vfs")
return vfs, vfs.canonical(rem)
def _ls(self, vpath: str, raddress: str, rport: int, force=False) -> Any:
@@ -267,7 +274,7 @@ class Tftpd(object):
dirs1 = [(v.st_mtime, v.st_size, k + "/") for k, v in vfs_ls if k in dnames]
fils1 = [(v.st_mtime, v.st_size, k) for k, v in vfs_ls if k not in dnames]
real1 = dirs1 + fils1
realt = [(datetime.fromtimestamp(mt), sz, fn) for mt, sz, fn in real1]
realt = [(datetime.fromtimestamp(mt, UTC), sz, fn) for mt, sz, fn in real1]
reals = [
(
"%04d-%02d-%02d %02d:%02d:%02d"
@@ -333,7 +340,21 @@ class Tftpd(object):
xbu = vfs.flags.get("xbu")
if xbu and not runhook(
self.nlog, xbu, ap, vpath, "", "", 0, 0, "8.3.8.7", 0, ""
self.nlog,
None,
self.hub.up2k,
"xbu.tftpd",
xbu,
ap,
vpath,
"",
"",
"",
0,
0,
"8.3.8.7",
time.time(),
"",
):
yeet("blocked by xbu server config: " + vpath)
@@ -341,7 +362,7 @@ class Tftpd(object):
return self._ls(vpath, "", 0, True)
if not a:
a = [self.args.iobuf]
a = (self.args.iobuf,)
return open(ap, mode, *a, **ka)
@@ -382,7 +403,7 @@ class Tftpd(object):
bos.stat(ap)
return True
except:
return False
return vpath == "/"
def _p_isdir(self, vpath: str) -> bool:
try:
@@ -390,7 +411,7 @@ class Tftpd(object):
ret = stat.S_ISDIR(st.st_mode)
return ret
except:
return False
return vpath == "/"
def _hook(self, *a: Any, **ka: Any) -> None:
src = inspect.currentframe().f_back.f_code.co_name

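The tftp listing code now formats mtimes with an explicit UTC timezone instead of the server's local time, which keeps the rendered timestamps deterministic across hosts. A small illustration using the standard library's timezone.utc (the project carries its own UTC shim for python2 compatibility):

from datetime import datetime, timezone

def fmt_mtime(ts):
    # timezone-aware: same output regardless of the host's local timezone
    dt = datetime.fromtimestamp(ts, timezone.utc)
    return "%04d-%02d-%02d %02d:%02d:%02d" % (
        dt.year, dt.month, dt.day, dt.hour, dt.minute, dt.second
    )

print(fmt_mtime(1725059482))  # 2024-08-30 23:11:22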

@@ -59,7 +59,8 @@ class ThumbCli(object):
want_opus = fmt in ("opus", "caf", "mp3")
is_au = ext in self.fmt_ffa
if is_au:
is_vau = want_opus and ext in self.fmt_ffv
if is_au or is_vau:
if want_opus:
if self.args.no_acode:
return None
@@ -107,7 +108,7 @@ class ThumbCli(object):
fmt = sfmt
elif fmt[:1] == "p" and not is_au:
elif fmt[:1] == "p" and not is_au and not is_vid:
t = "cannot thumbnail [%s]: png only allowed for waveforms"
self.log(t % (rem), 6)
return None

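The thumbcli change above also allows an audio output format (opus/caf/mp3) to be requested for video files, by consulting the video-extension set when want_opus is true. A condensed sketch of that decision, with made-up extension sets standing in for fmt_ffa and fmt_ffv:

AUDIO_EXTS = {"mp3", "flac", "ogg"}   # hypothetical stand-in for fmt_ffa
VIDEO_EXTS = {"mkv", "mp4", "webm"}   # hypothetical stand-in for fmt_ffv

def wants_audio_transcode(ext, fmt):
    want_opus = fmt in ("opus", "caf", "mp3")
    is_au = ext in AUDIO_EXTS
    is_vau = want_opus and ext in VIDEO_EXTS  # audio track of a video file
    return is_au or is_vau

print(wants_audio_transcode("mkv", "opus"))  # True: transcode the audio track
print(wants_audio_transcode("mkv", "webp"))  # False: regular thumbnail path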

@@ -12,7 +12,7 @@ import time
from queue import Queue
from .__init__ import ANYWIN, TYPE_CHECKING
from .__init__ import ANYWIN, PY2, TYPE_CHECKING
from .authsrv import VFS
from .bos import bos
from .mtag import HAVE_FFMPEG, HAVE_FFPROBE, au_unpk, ffprobe
@@ -38,6 +38,9 @@ if True: # pylint: disable=using-constant-test
if TYPE_CHECKING:
from .svchub import SvcHub
if PY2:
range = xrange # type: ignore
HAVE_PIL = False
HAVE_PILF = False
HAVE_HEIF = False
@@ -45,22 +48,34 @@ HAVE_AVIF = False
HAVE_WEBP = False
try:
if os.environ.get("PRTY_NO_PIL"):
raise Exception()
from PIL import ExifTags, Image, ImageFont, ImageOps
HAVE_PIL = True
try:
if os.environ.get("PRTY_NO_PILF"):
raise Exception()
ImageFont.load_default(size=16)
HAVE_PILF = True
except:
pass
try:
if os.environ.get("PRTY_NO_PIL_WEBP"):
raise Exception()
Image.new("RGB", (2, 2)).save(BytesIO(), format="webp")
HAVE_WEBP = True
except:
pass
try:
if os.environ.get("PRTY_NO_PIL_HEIF"):
raise Exception()
from pyheif_pillow_opener import register_heif_opener
register_heif_opener()
@@ -69,6 +84,9 @@ try:
pass
try:
if os.environ.get("PRTY_NO_PIL_AVIF"):
raise Exception()
import pillow_avif # noqa: F401 # pylint: disable=unused-import
HAVE_AVIF = True
@@ -80,6 +98,9 @@ except:
pass
try:
if os.environ.get("PRTY_NO_VIPS"):
raise Exception()
HAVE_VIPS = True
import pyvips
@@ -304,23 +325,31 @@ class ThumbSrv(object):
ap_unpk = abspath
if not bos.path.exists(tpath):
want_mp3 = tpath.endswith(".mp3")
want_opus = tpath.endswith(".opus") or tpath.endswith(".caf")
want_png = tpath.endswith(".png")
want_au = want_mp3 or want_opus
for lib in self.args.th_dec:
can_au = lib == "ff" and (
ext in self.fmt_ffa or ext in self.fmt_ffv
)
if lib == "pil" and ext in self.fmt_pil:
funs.append(self.conv_pil)
elif lib == "vips" and ext in self.fmt_vips:
funs.append(self.conv_vips)
elif lib == "ff" and ext in self.fmt_ffi or ext in self.fmt_ffv:
funs.append(self.conv_ffmpeg)
elif lib == "ff" and ext in self.fmt_ffa:
if tpath.endswith(".opus") or tpath.endswith(".caf"):
elif can_au and (want_png or want_au):
if want_opus:
funs.append(self.conv_opus)
elif tpath.endswith(".mp3"):
elif want_mp3:
funs.append(self.conv_mp3)
elif tpath.endswith(".png"):
elif want_png:
funs.append(self.conv_waves)
png_ok = True
else:
funs.append(self.conv_spec)
elif lib == "ff" and (ext in self.fmt_ffi or ext in self.fmt_ffv):
funs.append(self.conv_ffmpeg)
elif lib == "ff" and ext in self.fmt_ffa and not want_au:
funs.append(self.conv_spec)
tdir, tfn = os.path.split(tpath)
ttpath = os.path.join(tdir, "w", tfn)
@@ -599,13 +628,14 @@ class ThumbSrv(object):
b"pngquant",
b"--strip",
b"--nofs",
b"--output", fsenc(wtpath),
fsenc(tpath)
b"--output",
fsenc(wtpath),
fsenc(tpath),
]
ret = runcmd(cmd, timeout=vn.flags["convt"], nice=True, oom=400)[0]
if ret:
try:
wunlink(self.log, wtpath, vn.flags)
wunlink(self.log, wtpath, vn.flags)
except:
pass
else:
@@ -673,8 +703,8 @@ class ThumbSrv(object):
raise Exception("disabled in server config")
self.wait4ram(0.2, tpath)
ret, _ = ffprobe(abspath, int(vn.flags["convt"] / 2))
if "ac" not in ret:
tags, rawtags = ffprobe(abspath, int(vn.flags["convt"] / 2))
if "ac" not in tags:
raise Exception("not audio")
if quality.endswith("k"):
@@ -695,7 +725,7 @@ class ThumbSrv(object):
b"-v", b"error",
b"-hide_banner",
b"-i", fsenc(abspath),
b"-map_metadata", b"-1",
] + self.big_tags(rawtags) + [
b"-map", b"0:a:0",
b"-ar", b"44100",
b"-ac", b"2",
@@ -711,16 +741,16 @@ class ThumbSrv(object):
raise Exception("disabled in server config")
self.wait4ram(0.2, tpath)
ret, _ = ffprobe(abspath, int(vn.flags["convt"] / 2))
if "ac" not in ret:
tags, rawtags = ffprobe(abspath, int(vn.flags["convt"] / 2))
if "ac" not in tags:
raise Exception("not audio")
try:
dur = ret[".dur"][1]
dur = tags[".dur"][1]
except:
dur = 0
src_opus = abspath.lower().endswith(".opus") or ret["ac"][1] == "opus"
src_opus = abspath.lower().endswith(".opus") or tags["ac"][1] == "opus"
want_caf = tpath.endswith(".caf")
tmp_opus = tpath
if want_caf:
@@ -741,7 +771,7 @@ class ThumbSrv(object):
b"-v", b"error",
b"-hide_banner",
b"-i", fsenc(abspath),
b"-map_metadata", b"-1",
] + self.big_tags(rawtags) + [
b"-map", b"0:a:0",
b"-c:a", b"libopus",
b"-b:a", bq,
@@ -798,6 +828,16 @@ class ThumbSrv(object):
except:
pass
def big_tags(self, raw_tags: dict[str, list[str]]) -> list[bytes]:
ret = []
for k, vs in raw_tags.items():
for v in vs:
if len(str(v)) >= 1024:
bv = k.encode("utf-8", "replace")
ret += [b"-metadata", bv + b"="]
break
return ret
def poke(self, tdir: str) -> None:
if not self.poke_cd.poke(tdir):
return

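big_tags above blanks out any metadata field whose value is 1 KiB or larger by appending "-metadata key=" pairs to the ffmpeg command, rather than dropping every tag with "-map_metadata -1". A standalone sketch of how such a command could be assembled (the file path and tag dict are made up):

def big_tags(raw_tags):
    # one "-metadata k=" pair per oversized tag; an empty value clears it
    ret = []
    for k, vs in raw_tags.items():
        for v in vs:
            if len(str(v)) >= 1024:
                ret += [b"-metadata", k.encode("utf-8", "replace") + b"="]
                break
    return ret

raw_tags = {"title": ["ok"], "lyrics": ["x" * 5000]}  # hypothetical ffprobe output
cmd = [b"ffmpeg", b"-i", b"song.flac"] + big_tags(raw_tags) + [b"-c:a", b"libopus", b"out.opus"]
print(b" ".join(cmd).decode())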

@@ -8,7 +8,7 @@ import threading
import time
from operator import itemgetter
from .__init__ import ANYWIN, TYPE_CHECKING, unicode
from .__init__ import ANYWIN, PY2, TYPE_CHECKING, unicode
from .authsrv import LEELOO_DALLAS, VFS
from .bos import bos
from .up2k import up2k_wark_from_hashlist
@@ -38,6 +38,9 @@ if True: # pylint: disable=using-constant-test
if TYPE_CHECKING:
from .httpsrv import HttpSrv
if PY2:
range = xrange # type: ignore
class U2idx(object):
def __init__(self, hsrv: "HttpSrv") -> None:
@@ -56,6 +59,8 @@ class U2idx(object):
self.mem_cur = sqlite3.connect(":memory:", check_same_thread=False).cursor()
self.mem_cur.execute(r"create table a (b text)")
self.sh_cur: Optional["sqlite3.Cursor"] = None
self.p_end = 0.0
self.p_dur = 0.0
@@ -92,17 +97,31 @@ class U2idx(object):
except:
raise Pebkac(500, min_ex())
def get_cur(self, vn: VFS) -> Optional["sqlite3.Cursor"]:
if not HAVE_SQLITE3:
def get_shr(self) -> Optional["sqlite3.Cursor"]:
if self.sh_cur:
return self.sh_cur
if not HAVE_SQLITE3 or not self.args.shr:
return None
assert sqlite3 # type: ignore
db = sqlite3.connect(self.args.shr_db, timeout=2, check_same_thread=False)
cur = db.cursor()
cur.execute('pragma table_info("sh")').fetchall()
self.sh_cur = cur
return cur
def get_cur(self, vn: VFS) -> Optional["sqlite3.Cursor"]:
cur = self.cur.get(vn.realpath)
if cur:
return cur
if "e2d" not in vn.flags:
if not HAVE_SQLITE3 or "e2d" not in vn.flags:
return None
assert sqlite3 # type: ignore
ptop = vn.realpath
histpath = self.asrv.vfs.histtab.get(ptop)
if not histpath:

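get_shr above opens the shares database once and caches the cursor for reuse across requests. A minimal sketch of that lazy-open-and-cache pattern (db path and table are hypothetical):

import sqlite3

class ShareIndex(object):
    def __init__(self, db_path):
        self.db_path = db_path
        self.sh_cur = None  # cached cursor, opened on first use

    def get_shr(self):
        if self.sh_cur:
            return self.sh_cur
        db = sqlite3.connect(self.db_path, timeout=2, check_same_thread=False)
        cur = db.cursor()
        cur.execute("create table if not exists sh (k text, t1 int)")
        self.sh_cur = cur
        return cur

idx = ShareIndex(":memory:")
assert idx.get_shr() is idx.get_shr()  # second call reuses the cached cursor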

@@ -28,8 +28,8 @@ from .fsutil import Fstab
from .mtag import MParser, MTag
from .util import (
HAVE_SQLITE3,
VF_CAREFUL,
SYMTIME,
VF_CAREFUL,
Daemon,
MTHash,
Pebkac,
@@ -46,6 +46,7 @@ from .util import (
hidedir,
humansize,
min_ex,
pathmod,
quotep,
rand_name,
ren_open,
@@ -165,6 +166,7 @@ class Up2k(object):
self.xiu_ptn = re.compile(r"(?:^|,)i([0-9]+)")
self.xiu_busy = False # currently running hook
self.xiu_asleep = True # needs rescan_cond poke to schedule self
self.fx_backlog: list[tuple[str, dict[str, str], str]] = []
self.cur: dict[str, "sqlite3.Cursor"] = {}
self.mem_cur = None
@@ -234,6 +236,9 @@ class Up2k(object):
if not self.pp and self.args.exit == "idx":
return self.hub.sigterm()
if self.hub.is_dut:
return
Daemon(self._snapshot, "up2k-snapshot")
if have_e2d:
Daemon(self._hasher, "up2k-hasher")
@@ -430,7 +435,7 @@ class Up2k(object):
def _sched_rescan(self) -> None:
volage = {}
cooldown = timeout = time.time() + 3.0
while True:
while not self.stop:
now = time.time()
timeout = max(timeout, cooldown)
wait = timeout - time.time()
@@ -438,6 +443,9 @@ class Up2k(object):
with self.rescan_cond:
self.rescan_cond.wait(wait)
if self.stop:
return
now = time.time()
if now < cooldown:
# self.log("SR: cd - now = {:.2f}".format(cooldown - now), 5)
@@ -452,11 +460,18 @@ class Up2k(object):
cooldown = now + 3
# self.log("SR", 5)
if self.args.no_lifetime:
if self.args.no_lifetime and not self.args.shr:
timeout = now + 9001
else:
# important; not deferred by db_act
timeout = self._check_lifetimes()
try:
if self.args.shr:
timeout = min(self._check_shares(), timeout)
except Exception as ex:
timeout = min(timeout, now + 60)
t = "could not check for expiring shares: %r"
self.log(t % (ex,), 1)
try:
timeout = min(timeout, now + self._check_xiu())
@@ -545,7 +560,7 @@ class Up2k(object):
nrm += 1
if nrm:
self.log("{} files graduated in {}".format(nrm, vp))
self.log("%d files graduated in /%s" % (nrm, vp))
if timeout < 10:
continue
@@ -559,6 +574,60 @@ class Up2k(object):
return timeout
def _check_shares(self) -> float:
assert sqlite3 # type: ignore
now = time.time()
timeout = now + 9001
maxage = self.args.shr_rt * 60
low = now - maxage
vn = self.asrv.vfs.nodes.get(self.args.shr.strip("/"))
active = vn and vn.nodes
db = sqlite3.connect(self.args.shr_db, timeout=2)
cur = db.cursor()
q = "select k from sh where t1 and t1 <= ?"
rm = [x[0] for x in cur.execute(q, (now,))] if active else []
if rm:
assert vn and vn.nodes # type: ignore
# self.log("chk_shr: %d" % (len(rm),))
zss = set(rm)
rm = [zs for zs in vn.nodes if zs in zss]
reload = bool(rm)
if reload:
self.log("disabling expired shares %s" % (rm,))
rm = [x[0] for x in cur.execute(q, (low,))]
if rm:
self.log("forgetting expired shares %s" % (rm,))
cur.executemany("delete from sh where k=?", [(x,) for x in rm])
cur.executemany("delete from sf where k=?", [(x,) for x in rm])
db.commit()
if reload:
Daemon(self.hub._reload_blocking, "sharedrop", (False, False))
q = "select min(t1) from sh where t1 > ?"
(earliest,) = cur.execute(q, (1,)).fetchone()
if earliest:
# deadline for revoking regular access
timeout = min(timeout, earliest + maxage)
(earliest,) = cur.execute(q, (now - 2,)).fetchone()
if earliest:
# deadline for revival; drop entirely
timeout = min(timeout, earliest)
cur.close()
db.close()
if self.args.shr_v:
self.log("next shr_chk = %d (%d)" % (timeout, timeout - time.time()))
return timeout
def _check_xiu(self) -> float:
if self.xiu_busy:
return 2
@@ -654,7 +723,7 @@ class Up2k(object):
return False, flags
ret = {k: v for k, v in flags.items() if not k.startswith("e2t")}
if ret.keys() == flags.keys():
if len(ret) == len(flags):
return False, flags
return True, ret
@@ -680,6 +749,8 @@ class Up2k(object):
continue
self.pp = ProgressPrinter(self.log, self.args)
if not self.hub.is_dut:
self.pp.start()
break
@@ -709,9 +780,9 @@ class Up2k(object):
try:
bos.makedirs(vol.realpath) # gonna happen at snap anyways
dir_is_empty(self.log_func, not self.args.no_scandir, vol.realpath)
except:
except Exception as ex:
self.volstate[vol.vpath] = "OFFLINE (cannot access folder)"
self.log("cannot access " + vol.realpath, c=1)
self.log("cannot access %s: %r" % (vol.realpath, ex), c=1)
continue
if scan_vols and vol.vpath not in scan_vols:
@@ -1195,6 +1266,9 @@ class Up2k(object):
fat32 = True
cv = ""
th_cvd = self.args.th_coversd
th_cvds = self.args.th_coversd_set
assert self.pp and self.mem_cur
self.pp.msg = "a%d %s" % (self.pp.n, cdir)
@@ -1279,12 +1353,21 @@ class Up2k(object):
files.append((sz, lmod, iname))
liname = iname.lower()
if sz and (
iname in self.args.th_coversd
or (
if (
sz
and (
liname in th_cvds
or (
not cv
and liname.rsplit(".", 1)[-1] in CV_EXTS
and not iname.startswith(".")
)
)
and (
not cv
and liname.rsplit(".", 1)[-1] in CV_EXTS
and not iname.startswith(".")
or liname not in th_cvds
or cv.lower() not in th_cvds
or th_cvd.index(liname) < th_cvd.index(cv.lower())
)
):
cv = iname
@@ -1357,7 +1440,7 @@ class Up2k(object):
if dts == lmod and dsz == sz and (nohash or dw[0] != "#" or not sz):
continue
t = "reindex [{}] => [{}] ({}/{}) ({}/{})".format(
t = "reindex [{}] => [{}] mtime({}/{}) size({}/{})".format(
top, rp, dts, lmod, dsz, sz
)
self.log(t)
@@ -2272,7 +2355,9 @@ class Up2k(object):
def _open_db_wd(self, db_path: str) -> "sqlite3.Cursor":
ok: list[int] = []
Daemon(self._open_db_timeout, "opendb_watchdog", [db_path, ok])
if not self.hub.is_dut:
Daemon(self._open_db_timeout, "opendb_watchdog", [db_path, ok])
try:
return self._open_db(db_path)
finally:
@@ -2521,6 +2606,10 @@ class Up2k(object):
cur.connection.commit()
def wake_rescanner(self):
with self.rescan_cond:
self.rescan_cond.notify_all()
def handle_json(
self, cj: dict[str, Any], busy_aps: dict[str, int]
) -> dict[str, Any]:
@@ -2532,7 +2621,7 @@ class Up2k(object):
if self.mutex.acquire(timeout=10):
got_lock = True
with self.reg_mutex:
return self._handle_json(cj)
ret = self._handle_json(cj)
else:
t = "cannot receive uploads right now;\nserver busy with {}.\nPlease wait; the client will retry..."
raise Pebkac(503, t.format(self.blocked or "[unknown]"))
@@ -2540,12 +2629,20 @@ class Up2k(object):
if not PY2:
raise
with self.mutex, self.reg_mutex:
return self._handle_json(cj)
ret = self._handle_json(cj)
finally:
if got_lock:
self.mutex.release()
def _handle_json(self, cj: dict[str, Any]) -> dict[str, Any]:
if self.fx_backlog:
self.do_fx_backlog()
return ret
def _handle_json(self, cj: dict[str, Any], depth: int = 1) -> dict[str, Any]:
if depth > 16:
raise Pebkac(500, "too many xbu relocs, giving up")
ptop = cj["ptop"]
if not self.register_vpath(ptop, cj["vcfg"]):
if ptop not in self.registry:
@@ -2604,11 +2701,19 @@ class Up2k(object):
if stat.S_ISLNK(st.st_mode):
# broken symlink
raise Exception()
except:
if st.st_size != dsize:
t = "candidate ignored (db/fs desync): {}, size fs={} db={}, mtime fs={} db={}, file: {}"
t = t.format(
wark, st.st_size, dsize, st.st_mtime, dtime, dp_abs
)
self.log(t)
raise Exception("desync")
except Exception as ex:
if n4g:
st = os.stat_result((0, -1, -1, 0, 0, 0, 0, 0, 0, 0))
else:
lost.append((cur, dp_dir, dp_fn))
if str(ex) != "desync":
lost.append((cur, dp_dir, dp_fn))
continue
j = {
@@ -2666,13 +2771,16 @@ class Up2k(object):
ptop = None # use cj or job as appropriate
if not job and wark in reg:
# ensure the files haven't been deleted manually
# ensure the files haven't been edited or deleted
path = ""
st = None
rj = reg[wark]
names = [rj[x] for x in ["name", "tnam"] if x in rj]
for fn in names:
path = djoin(rj["ptop"], rj["prel"], fn)
try:
if bos.path.getsize(path) > 0 or not rj["need"]:
st = bos.stat(path)
if st.st_size > 0 or not rj["need"]:
# upload completed or both present
break
except:
@@ -2683,6 +2791,14 @@ class Up2k(object):
del reg[wark]
break
if st and not self.args.nw and not n4g and st.st_size != rj["size"]:
t = "will not dedup (fs index desync): {}, size fs={} db={}, mtime fs={} db={}, file: {}"
t = t.format(
wark, st.st_size, rj["size"], st.st_mtime, rj["lmod"], path
)
self.log(t)
del reg[wark]
if job or wark in reg:
job = job or reg[wark]
if (
@@ -2738,7 +2854,8 @@ class Up2k(object):
job = deepcopy(job)
job["wark"] = wark
job["at"] = cj.get("at") or time.time()
for k in "lmod ptop vtop prel host user addr".split():
zs = "lmod ptop vtop prel name host user addr poke"
for k in zs.split():
job[k] = cj.get(k) or ""
pdir = djoin(cj["ptop"], cj["prel"])
@@ -2746,27 +2863,50 @@ class Up2k(object):
job["name"] = rand_name(
pdir, cj["name"], vfs.flags["nrand"]
)
else:
job["name"] = self._untaken(pdir, cj, now)
dst = djoin(job["ptop"], job["prel"], job["name"])
xbu = vfs.flags.get("xbu")
if xbu and not runhook(
self.log,
xbu, # type: ignore
dst,
job["vtop"],
job["host"],
job["user"],
job["lmod"],
job["size"],
job["addr"],
job["at"],
"",
):
t = "upload blocked by xbu server config: {}".format(dst)
self.log(t, 1)
raise Pebkac(403, t)
if xbu:
vp = djoin(job["vtop"], job["prel"], job["name"])
hr = runhook(
self.log,
None,
self,
"xbu.up2k.dupe",
xbu, # type: ignore
dst,
vp,
job["host"],
job["user"],
self.asrv.vfs.get_perms(job["vtop"], job["user"]),
job["lmod"],
job["size"],
job["addr"],
job["at"],
"",
)
if not hr:
t = "upload blocked by xbu server config: %s" % (dst,)
self.log(t, 1)
raise Pebkac(403, t)
if hr.get("reloc"):
x = pathmod(self.asrv.vfs, dst, vp, hr["reloc"])
if x:
zvfs = vfs
pdir, _, job["name"], (vfs, rem) = x
dst = os.path.join(pdir, job["name"])
job["vcfg"] = vfs.flags
job["ptop"] = vfs.realpath
job["vtop"] = vfs.vpath
job["prel"] = rem
if zvfs.vpath != vfs.vpath:
# print(json.dumps(job, sort_keys=True, indent=4))
job["hash"] = cj["hash"]
self.log("xbu reloc1:%d..." % (depth,), 6)
return self._handle_json(job, depth + 1)
job["name"] = self._untaken(pdir, job, now)
dst = djoin(job["ptop"], job["prel"], job["name"])
if not self.args.nw:
dvf: dict[str, Any] = vfs.flags
@@ -2838,14 +2978,16 @@ class Up2k(object):
# one chunk may occur multiple times in a file;
# filter to unique values for the list of missing chunks
# (preserve order to reduce disk thrashing)
lut = {}
lut = set()
for k in cj["hash"]:
if k not in lut:
job["need"].append(k)
lut[k] = 1
lut.add(k)
try:
self._new_upload(job)
ret = self._new_upload(job, vfs, depth)
if ret:
return ret # xbu recursed
except:
self.registry[job["ptop"]].pop(job["wark"], None)
raise
@@ -3000,9 +3142,9 @@ class Up2k(object):
times = (int(time.time()), int(lmod))
bos.utime(dst, times, False)
def handle_chunk(
self, ptop: str, wark: str, chash: str
) -> tuple[int, list[int], str, float, bool]:
def handle_chunks(
self, ptop: str, wark: str, chashes: list[str]
) -> tuple[list[str], int, list[list[int]], str, float, bool]:
with self.mutex, self.reg_mutex:
self.db_act = self.vol_act[ptop] = time.time()
job = self.registry[ptop].get(wark)
@@ -3011,49 +3153,91 @@ class Up2k(object):
self.log("unknown wark [{}], known: {}".format(wark, known))
raise Pebkac(400, "unknown wark" + SSEELOG)
if chash not in job["need"]:
msg = "chash = {} , need:\n".format(chash)
msg += "\n".join(job["need"])
self.log(msg)
raise Pebkac(400, "already got that but thanks??")
if len(chashes) > 1 and len(chashes[1]) < 44:
# first hash is full-length; expand remaining ones
uniq = []
lut = set()
for chash in job["hash"]:
if chash not in lut:
uniq.append(chash)
lut.add(chash)
try:
nchunk = uniq.index(chashes[0])
except:
raise Pebkac(400, "unknown chunk0 [%s]" % (chashes[0]))
expanded = [chashes[0]]
for prefix in chashes[1:]:
nchunk += 1
chash = uniq[nchunk]
if not chash.startswith(prefix):
t = "next sibling chunk does not start with expected prefix [%s]: [%s]"
raise Pebkac(400, t % (prefix, chash))
expanded.append(chash)
chashes = expanded
nchunk = [n for n, v in enumerate(job["hash"]) if v == chash]
if not nchunk:
raise Pebkac(400, "unknown chunk")
for chash in chashes:
if chash not in job["need"]:
msg = "chash = {} , need:\n".format(chash)
msg += "\n".join(job["need"])
self.log(msg)
t = "already got that (%s) but thanks??"
if chash not in job["hash"]:
t = "unknown chunk wtf: %s"
raise Pebkac(400, t % (chash,))
if chash in job["busy"]:
nh = len(job["hash"])
idx = job["hash"].index(chash)
t = "that chunk is already being written to:\n {}\n {} {}/{}\n {}"
raise Pebkac(400, t.format(wark, chash, idx, nh, job["name"]))
if chash in job["busy"]:
nh = len(job["hash"])
idx = job["hash"].index(chash)
t = "that chunk is already being written to:\n {}\n {} {}/{}\n {}"
raise Pebkac(400, t.format(wark, chash, idx, nh, job["name"]))
assert chash # type: ignore
chunksize = up2k_chunksize(job["size"])
coffsets = []
nchunks = []
for chash in chashes:
nchunk = [n for n, v in enumerate(job["hash"]) if v == chash]
if not nchunk:
raise Pebkac(400, "unknown chunk %s" % (chash))
ofs = [chunksize * x for x in nchunk]
coffsets.append(ofs)
nchunks.append(nchunk)
for ofs1, ofs2 in zip(coffsets, coffsets[1:]):
gap = (ofs2[0] - ofs1[0]) - chunksize
if gap:
t = "only sibling chunks can be stitched; gap of %d bytes between offsets %d and %d in %s"
raise Pebkac(400, t % (gap, ofs1[0], ofs2[0], job["name"]))
path = djoin(job["ptop"], job["prel"], job["tnam"])
chunksize = up2k_chunksize(job["size"])
ofs = [chunksize * x for x in nchunk]
if not job["sprs"]:
cur_sz = bos.path.getsize(path)
if ofs[0] > cur_sz:
if coffsets[0][0] > cur_sz:
t = "please upload sequentially using one thread;\nserver filesystem does not support sparse files.\n file: {}\n chunk: {}\n cofs: {}\n flen: {}"
t = t.format(job["name"], nchunk[0], ofs[0], cur_sz)
t = t.format(job["name"], nchunks[0][0], coffsets[0][0], cur_sz)
raise Pebkac(400, t)
job["busy"][chash] = 1
job["poke"] = time.time()
return chunksize, ofs, path, job["lmod"], job["sprs"]
return chashes, chunksize, coffsets, path, job["lmod"], job["sprs"]
def release_chunk(self, ptop: str, wark: str, chash: str) -> bool:
def release_chunks(self, ptop: str, wark: str, chashes: list[str]) -> bool:
with self.reg_mutex:
job = self.registry[ptop].get(wark)
if job:
job["busy"].pop(chash, None)
for chash in chashes:
job["busy"].pop(chash, None)
return True
def confirm_chunk(self, ptop: str, wark: str, chash: str) -> tuple[int, str]:
def confirm_chunks(
self, ptop: str, wark: str, chashes: list[str]
) -> tuple[int, str]:
with self.mutex, self.reg_mutex:
self.db_act = self.vol_act[ptop] = time.time()
try:
@@ -3062,14 +3246,16 @@ class Up2k(object):
src = djoin(pdir, job["tnam"])
dst = djoin(pdir, job["name"])
except Exception as ex:
return "confirm_chunk, wark, " + repr(ex) # type: ignore
return "confirm_chunk, wark(%r)" % (ex,) # type: ignore
job["busy"].pop(chash, None)
for chash in chashes:
job["busy"].pop(chash, None)
try:
job["need"].remove(chash)
for chash in chashes:
job["need"].remove(chash)
except Exception as ex:
return "confirm_chunk, chash, " + repr(ex) # type: ignore
return "confirm_chunk, chash(%s) %r" % (chash, ex) # type: ignore
ret = len(job["need"])
if ret > 0:
@@ -3080,11 +3266,14 @@ class Up2k(object):
return ret, dst
def finish_upload(self, ptop: str, wark: str, busy_aps: set[str]) -> None:
def finish_upload(self, ptop: str, wark: str, busy_aps: dict[str, int]) -> None:
self.busy_aps = busy_aps
with self.mutex, self.reg_mutex:
self._finish_upload(ptop, wark)
if self.fx_backlog:
self.do_fx_backlog()
def _finish_upload(self, ptop: str, wark: str) -> None:
"""mutex(main,reg) me"""
try:
@@ -3278,24 +3467,30 @@ class Up2k(object):
xau = False if skip_xau else vflags.get("xau")
dst = djoin(ptop, rd, fn)
if xau and not runhook(
self.log,
xau,
dst,
djoin(vtop, rd, fn),
host,
usr,
int(ts),
sz,
ip,
at or time.time(),
"",
):
t = "upload blocked by xau server config"
self.log(t, 1)
wunlink(self.log, dst, vflags)
self.registry[ptop].pop(wark, None)
raise Pebkac(403, t)
if xau:
hr = runhook(
self.log,
None,
self,
"xau.up2k",
xau,
dst,
djoin(vtop, rd, fn),
host,
usr,
self.asrv.vfs.get_perms(djoin(vtop, rd, fn), usr),
ts,
sz,
ip,
at or time.time(),
"",
)
if not hr:
t = "upload blocked by xau server config"
self.log(t, 1)
wunlink(self.log, dst, vflags)
self.registry[ptop].pop(wark, None)
raise Pebkac(403, t)
xiu = vflags.get("xiu")
if xiu:
@@ -3319,15 +3514,29 @@ class Up2k(object):
with self.rescan_cond:
self.rescan_cond.notify_all()
if rd and sz and fn.lower() in self.args.th_coversd:
if rd and sz and fn.lower() in self.args.th_coversd_set:
# wasteful; db_add will re-index actual covers
# but that won't catch existing files
crd, cdn = rd.rsplit("/", 1) if "/" in rd else ("", rd)
try:
db.execute("delete from cv where rd=? and dn=?", (crd, cdn))
db.execute("insert into cv values (?,?,?)", (crd, cdn, fn))
q = "select fn from cv where rd=? and dn=?"
db_cv = db.execute(q, (crd, cdn)).fetchone()[0]
db_lcv = db_cv.lower()
if db_lcv in self.args.th_coversd_set:
idx_db = self.args.th_coversd.index(db_lcv)
idx_fn = self.args.th_coversd.index(fn.lower())
add_cv = idx_fn < idx_db
else:
add_cv = True
except:
pass
add_cv = True
if add_cv:
try:
db.execute("delete from cv where rd=? and dn=?", (crd, cdn))
db.execute("insert into cv values (?,?,?)", (crd, cdn, fn))
except:
pass
def handle_rm(
self,
@@ -3465,15 +3674,19 @@ class Up2k(object):
if xbd:
if not runhook(
self.log,
None,
self,
"xbd",
xbd,
abspath,
vpath,
"",
uname,
self.asrv.vfs.get_perms(vpath, uname),
stl.st_mtime,
st.st_size,
ip,
0,
time.time(),
"",
):
t = "delete blocked by xbd server config: {}"
@@ -3498,15 +3711,19 @@ class Up2k(object):
if xad:
runhook(
self.log,
None,
self,
"xad",
xad,
abspath,
vpath,
"",
uname,
self.asrv.vfs.get_perms(vpath, uname),
stl.st_mtime,
st.st_size,
ip,
0,
time.time(),
"",
)
@@ -3522,7 +3739,7 @@ class Up2k(object):
return n_files, ok + ok2, ng + ng2
def handle_mv(self, uname: str, svp: str, dvp: str) -> str:
def handle_mv(self, uname: str, ip: str, svp: str, dvp: str) -> str:
if svp == dvp or dvp.startswith(svp + "/"):
raise Pebkac(400, "mv: cannot move parent into subfolder")
@@ -3539,7 +3756,7 @@ class Up2k(object):
if stat.S_ISREG(st.st_mode) or stat.S_ISLNK(st.st_mode):
with self.mutex:
try:
ret = self._mv_file(uname, svp, dvp, curs)
ret = self._mv_file(uname, ip, svp, dvp, curs)
finally:
for v in curs:
v.connection.commit()
@@ -3572,7 +3789,7 @@ class Up2k(object):
raise Pebkac(500, "mv: bug at {}, top {}".format(svpf, svp))
dvpf = dvp + svpf[len(svp) :]
self._mv_file(uname, svpf, dvpf, curs)
self._mv_file(uname, ip, svpf, dvpf, curs)
finally:
for v in curs:
v.connection.commit()
@@ -3597,7 +3814,7 @@ class Up2k(object):
return "k"
def _mv_file(
self, uname: str, svp: str, dvp: str, curs: set["sqlite3.Cursor"]
self, uname: str, ip: str, svp: str, dvp: str, curs: set["sqlite3.Cursor"]
) -> str:
"""mutex(main) me; will mutex(reg)"""
svn, srem = self.asrv.vfs.get(svp, uname, True, False, True)
@@ -3631,11 +3848,28 @@ class Up2k(object):
except:
pass # broken symlink; keep as-is
ftime = stl.st_mtime
fsize = st.st_size
xbr = svn.flags.get("xbr")
xar = dvn.flags.get("xar")
if xbr:
if not runhook(
self.log, xbr, sabs, svp, "", uname, stl.st_mtime, st.st_size, "", 0, ""
self.log,
None,
self,
"xbr",
xbr,
sabs,
svp,
"",
uname,
self.asrv.vfs.get_perms(svp, uname),
ftime,
fsize,
ip,
time.time(),
"",
):
t = "move blocked by xbr server config: {}".format(svp)
self.log(t, 1)
@@ -3660,20 +3894,29 @@ class Up2k(object):
self.rescan_cond.notify_all()
if xar:
runhook(self.log, xar, dabs, dvp, "", uname, 0, 0, "", 0, "")
runhook(
self.log,
None,
self,
"xar.ln",
xar,
dabs,
dvp,
"",
uname,
self.asrv.vfs.get_perms(dvp, uname),
ftime,
fsize,
ip,
time.time(),
"",
)
return "k"
c1, w, ftime_, fsize_, ip, at = self._find_from_vpath(svn.realpath, srem)
c2 = self.cur.get(dvn.realpath)
if ftime_ is None:
ftime = stl.st_mtime
fsize = st.st_size
else:
ftime = ftime_
fsize = fsize_ or 0
has_dupes = False
if w:
assert c1
@@ -3681,7 +3924,9 @@ class Up2k(object):
self._copy_tags(c1, c2, w)
with self.reg_mutex:
has_dupes = self._forget_file(svn.realpath, srem, c1, w, is_xvol, fsize)
has_dupes = self._forget_file(
svn.realpath, srem, c1, w, is_xvol, fsize_ or fsize
)
if not is_xvol:
has_dupes = self._relink(w, svn.realpath, srem, dabs)
@@ -3751,7 +3996,7 @@ class Up2k(object):
if is_link:
try:
times = (int(time.time()), int(stl.st_mtime))
times = (int(time.time()), int(ftime))
bos.utime(dabs, times, False)
except:
pass
@@ -3759,7 +4004,23 @@ class Up2k(object):
wunlink(self.log, sabs, svn.flags)
if xar:
runhook(self.log, xar, dabs, dvp, "", uname, 0, 0, "", 0, "")
runhook(
self.log,
None,
self,
"xar.mv",
xar,
dabs,
dvp,
"",
uname,
self.asrv.vfs.get_perms(dvp, uname),
ftime,
fsize,
ip,
time.time(),
"",
)
return "k"
@@ -4022,41 +4283,57 @@ class Up2k(object):
return ret
def _new_upload(self, job: dict[str, Any]) -> None:
def _new_upload(self, job: dict[str, Any], vfs: VFS, depth: int) -> dict[str, str]:
pdir = djoin(job["ptop"], job["prel"])
if not job["size"]:
try:
inf = bos.stat(djoin(pdir, job["name"]))
if stat.S_ISREG(inf.st_mode):
job["lmod"] = inf.st_size
return
return {}
except:
pass
self.registry[job["ptop"]][job["wark"]] = job
job["name"] = self._untaken(pdir, job, job["t0"])
# if len(job["name"].split(".")) > 8:
# raise Exception("aaa")
xbu = self.flags[job["ptop"]].get("xbu")
ap_chk = djoin(pdir, job["name"])
vp_chk = djoin(job["vtop"], job["prel"], job["name"])
if xbu and not runhook(
self.log,
xbu,
ap_chk,
vp_chk,
job["host"],
job["user"],
int(job["lmod"]),
job["size"],
job["addr"],
int(job["t0"]),
"",
):
t = "upload blocked by xbu server config: {}".format(vp_chk)
self.log(t, 1)
raise Pebkac(403, t)
if xbu:
hr = runhook(
self.log,
None,
self,
"xbu.up2k",
xbu,
ap_chk,
vp_chk,
job["host"],
job["user"],
self.asrv.vfs.get_perms(vp_chk, job["user"]),
job["lmod"],
job["size"],
job["addr"],
job["t0"],
"",
)
if not hr:
t = "upload blocked by xbu server config: {}".format(vp_chk)
self.log(t, 1)
raise Pebkac(403, t)
if hr.get("reloc"):
x = pathmod(self.asrv.vfs, ap_chk, vp_chk, hr["reloc"])
if x:
zvfs = vfs
pdir, _, job["name"], (vfs, rem) = x
job["vcfg"] = vfs.flags
job["ptop"] = vfs.realpath
job["vtop"] = vfs.vpath
job["prel"] = rem
if zvfs.vpath != vfs.vpath:
self.log("xbu reloc2:%d..." % (depth,), 6)
return self._handle_json(job, depth + 1)
job["name"] = self._untaken(pdir, job, job["t0"])
self.registry[job["ptop"]][job["wark"]] = job
tnam = job["name"] + ".PARTIAL"
if self.args.dotpart:
@@ -4066,7 +4343,7 @@ class Up2k(object):
job["tnam"] = tnam
if not job["hash"]:
del self.registry[job["ptop"]][job["wark"]]
return
return {}
if self.args.plain_ip:
dip = job["addr"].replace(":", ".")
@@ -4126,6 +4403,8 @@ class Up2k(object):
if not job["hash"]:
self._finish_upload(job["ptop"], job["wark"])
return {}
def _snapshot(self) -> None:
slp = self.args.snap_wri
if not slp or self.args.no_snap:
@@ -4330,6 +4609,9 @@ class Up2k(object):
with self.rescan_cond:
self.rescan_cond.notify_all()
if self.fx_backlog:
self.do_fx_backlog()
return True
def hash_file(
@@ -4361,6 +4643,48 @@ class Up2k(object):
self.hashq.put(zt)
self.n_hashq += 1
def do_fx_backlog(self):
with self.mutex, self.reg_mutex:
todo = self.fx_backlog
self.fx_backlog = []
for act, hr, req_vp in todo:
self.hook_fx(act, hr, req_vp)
def hook_fx(self, act: str, hr: dict[str, str], req_vp: str) -> None:
bad = [k for k in hr if k != "vp"]
if bad:
t = "got unsupported key in %s from hook: %s"
raise Exception(t % (act, bad))
for fvp in hr.get("vp") or []:
# expect vpath including filename; either absolute
# or relative to the client's vpath (request url)
if fvp.startswith("/"):
fvp, fn = vsplit(fvp[1:])
fvp = "/" + fvp
else:
fvp, fn = vsplit(fvp)
x = pathmod(self.asrv.vfs, "", req_vp, {"vp": fvp, "fn": fn})
if not x:
t = "hook_fx(%s): failed to resolve %s based on %s"
self.log(t % (act, fvp, req_vp))
continue
ap, rd, fn, (vn, rem) = x
vp = vjoin(rd, fn)
if not vp:
raise Exception("hook_fx: blank vp from pathmod")
if act == "idx":
rd = rd[len(vn.vpath) :].strip("/")
self.hash_file(
vn.realpath, vn.vpath, vn.flags, rd, fn, "", time.time(), "", True
)
if act == "del":
self._handle_rm(LEELOO_DALLAS, "", vp, [], False, False)
def shutdown(self) -> None:
self.stop = True

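handle_chunks above accepts a batch of chunk hashes per request ("stitching"); trailing hashes may arrive as short prefixes that get expanded against the job's deduplicated hash list, and the resulting offsets must be consecutive siblings. A simplified sketch of the offset and gap check, assuming a fixed chunk size (the real size scales with the file size):

CHUNKSIZE = 16 * 1024 * 1024  # assumption for the sketch

def stitch_offsets(all_hashes, chashes):
    # map each requested hash to every offset it occupies in the file
    coffsets = []
    for chash in chashes:
        nchunk = [n for n, v in enumerate(all_hashes) if v == chash]
        if not nchunk:
            raise ValueError("unknown chunk %s" % (chash,))
        coffsets.append([CHUNKSIZE * n for n in nchunk])

    # stitched chunks must be adjacent; any gap means a bad batch
    for ofs1, ofs2 in zip(coffsets, coffsets[1:]):
        gap = (ofs2[0] - ofs1[0]) - CHUNKSIZE
        if gap:
            raise ValueError("only sibling chunks can be stitched; gap of %d bytes" % gap)
    return coffsets

hashes = ["aa", "bb", "cc", "dd"]
print(stitch_offsets(hashes, ["bb", "cc"]))  # [[16777216], [33554432]]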

@@ -26,7 +26,6 @@ import threading
import time
import traceback
from collections import Counter
from email.utils import formatdate
from ipaddress import IPv4Address, IPv4Network, IPv6Address, IPv6Network
from queue import Queue
@@ -60,6 +59,10 @@ except:
UTC = _UTC()
if PY2:
range = xrange # type: ignore
if sys.version_info >= (3, 7) or (
sys.version_info >= (3, 6) and platform.python_implementation() == "CPython"
):
@@ -99,6 +102,9 @@ except:
pass
try:
if os.environ.get("PRTY_NO_SQLITE"):
raise Exception()
HAVE_SQLITE3 = True
import sqlite3
@@ -107,6 +113,9 @@ except:
HAVE_SQLITE3 = False
try:
if os.environ.get("PRTY_NO_PSUTIL"):
raise Exception()
HAVE_PSUTIL = True
import psutil
except:
@@ -137,10 +146,15 @@ if TYPE_CHECKING:
import magic
from .authsrv import VFS
from .broker_util import BrokerCli
from .up2k import Up2k
FAKE_MP = False
try:
if os.environ.get("PRTY_NO_MP"):
raise ImportError()
import multiprocessing as mp
# import multiprocessing.dummy as mp
@@ -158,6 +172,21 @@ else:
from urllib import unquote # type: ignore # pylint: disable=no-name-in-module
try:
if os.environ.get("PRTY_NO_IPV6"):
raise Exception()
socket.inet_pton(socket.AF_INET6, "::1")
HAVE_IPV6 = True
except:
def inet_pton(fam, ip):
return socket.inet_aton(ip)
socket.inet_pton = inet_pton
HAVE_IPV6 = False
try:
struct.unpack(b">i", b"idgi")
spack = struct.pack # type: ignore
@@ -231,6 +260,7 @@ IMPLICATIONS = [
["e2vu", "e2v"],
["e2vp", "e2v"],
["e2v", "e2d"],
["tftpvv", "tftpv"],
["smbw", "smb"],
["smb1", "smb"],
["smbvvv", "smbvv"],
@@ -358,6 +388,18 @@ APPLESAN_TXT = r"/(__MACOS|Icon\r\r)|/\.(_|DS_Store|AppleDouble|LSOverride|Docum
APPLESAN_RE = re.compile(APPLESAN_TXT)
HUMANSIZE_UNITS = ("B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB")
UNHUMANIZE_UNITS = {
"b": 1,
"k": 1024,
"m": 1024 * 1024,
"g": 1024 * 1024 * 1024,
"t": 1024 * 1024 * 1024 * 1024,
"p": 1024 * 1024 * 1024 * 1024 * 1024,
"e": 1024 * 1024 * 1024 * 1024 * 1024 * 1024,
}
VF_CAREFUL = {"mv_re_t": 5, "rm_re_t": 5, "mv_re_r": 0.1, "rm_re_r": 0.1}
@@ -769,7 +811,7 @@ class CachedSet(object):
c = self.c = {k: v for k, v in self.c.items() if now - v < self.maxage}
try:
self.oldest = c[min(c, key=c.get)]
self.oldest = c[min(c, key=c.get)] # type: ignore
except:
self.oldest = now
@@ -873,7 +915,6 @@ class ProgressPrinter(threading.Thread):
self.msg = ""
self.end = False
self.n = -1
self.start()
def run(self) -> None:
sigblock()
@@ -1352,7 +1393,7 @@ def vol_san(vols: list["VFS"], txt: bytes) -> bytes:
def min_ex(max_lines: int = 8, reverse: bool = False) -> str:
et, ev, tb = sys.exc_info()
stb = traceback.extract_tb(tb) if tb else traceback.extract_stack()[:-1]
fmt = "%s @ %d <%s>: %s"
fmt = "%s:%d <%s>: %s"
ex = [fmt % (fp.split(os.sep)[-1], ln, fun, txt) for fp, ln, fun, txt in stb]
if et or ev or tb:
ex.append("[%s] %s" % (et.__name__ if et else "(anonymous)", ev))
@@ -1718,7 +1759,7 @@ def read_header(sr: Unrecv, t_idle: int, t_tot: int) -> list[str]:
ofs = ret.find(b"\r\n\r\n")
if ofs < 0:
if len(ret) > 1024 * 64:
if len(ret) > 1024 * 32:
raise Pebkac(400, "header 2big")
else:
continue
@@ -1796,10 +1837,21 @@ def gen_filekey_dbg(
return ret
WKDAYS = "Mon Tue Wed Thu Fri Sat Sun".split()
MONTHS = "Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec".split()
RFC2822 = "%s, %02d %s %04d %02d:%02d:%02d GMT"
def formatdate(ts: Optional[float] = None) -> str:
# gmtime ~= datetime.fromtimestamp(ts, UTC).timetuple()
y, mo, d, h, mi, s, wd, _, _ = time.gmtime(ts)
return RFC2822 % (WKDAYS[wd], d, MONTHS[mo - 1], y, h, mi, s)
def gencookie(k: str, v: str, r: str, tls: bool, dur: int = 0, txt: str = "") -> str:
v = v.replace("%", "%25").replace(";", "%3B")
if dur:
exp = formatdate(time.time() + dur, usegmt=True)
exp = formatdate(time.time() + dur)
else:
exp = "Fri, 15 Aug 1997 01:00:00 GMT"
@@ -1808,18 +1860,16 @@ def gencookie(k: str, v: str, r: str, tls: bool, dur: int = 0, txt: str = "") ->
def humansize(sz: float, terse: bool = False) -> str:
for unit in ["B", "KiB", "MiB", "GiB", "TiB"]:
for unit in HUMANSIZE_UNITS:
if sz < 1024:
break
sz /= 1024.0
ret = " ".join([str(sz)[:4].rstrip("."), unit])
if not terse:
return ret
return ret.replace("iB", "").replace(" ", "")
if terse:
return "%s%s" % (str(sz)[:4].rstrip("."), unit[:1])
else:
return "%s %s" % (str(sz)[:4].rstrip("."), unit)
def unhumanize(sz: str) -> int:
@@ -1829,12 +1879,7 @@ def unhumanize(sz: str) -> int:
pass
mc = sz[-1:].lower()
mi = {
"k": 1024,
"m": 1024 * 1024,
"g": 1024 * 1024 * 1024,
"t": 1024 * 1024 * 1024 * 1024,
}.get(mc, 1)
mi = UNHUMANIZE_UNITS.get(mc, 1)
return int(float(sz[:-1]) * mi)
@@ -1876,7 +1921,7 @@ def uncyg(path: str) -> str:
def undot(path: str) -> str:
ret: list[str] = []
for node in path.split("/"):
if node in ["", "."]:
if node == "." or not node:
continue
if node == "..":
@@ -2029,7 +2074,7 @@ def _quotep2(txt: str) -> str:
"""url quoter which deals with bytes correctly"""
btxt = w8enc(txt)
quot = quote(btxt, safe=b"/")
return w8dec(quot.replace(b" ", b"+"))
return w8dec(quot.replace(b" ", b"+")) # type: ignore
def _quotep3(txt: str) -> str:
@@ -2073,6 +2118,72 @@ def ujoin(rd: str, fn: str) -> str:
return rd or fn
def log_reloc(
log: "NamedLogger",
re: dict[str, str],
pm: tuple[str, str, str, tuple["VFS", str]],
ap: str,
vp: str,
fn: str,
vn: "VFS",
rem: str,
) -> None:
nap, nvp, nfn, (nvn, nrem) = pm
t = "reloc %s:\nold ap [%s]\nnew ap [%s\033[36m/%s\033[0m]\nold vp [%s]\nnew vp [%s\033[36m/%s\033[0m]\nold fn [%s]\nnew fn [%s]\nold vfs [%s]\nnew vfs [%s]\nold rem [%s]\nnew rem [%s]"
log(t % (re, ap, nap, nfn, vp, nvp, nfn, fn, nfn, vn.vpath, nvn.vpath, rem, nrem))
def pathmod(
vfs: "VFS", ap: str, vp: str, mod: dict[str, str]
) -> Optional[tuple[str, str, str, tuple["VFS", str]]]:
# vfs: authsrv.vfs
# ap: original abspath to a file
# vp: original urlpath to a file
# mod: modification (ap/vp/fn)
nvp = "\n" # new vpath
ap = os.path.dirname(ap)
vp, fn = vsplit(vp)
if mod.get("fn"):
fn = mod["fn"]
nvp = vp
for ref, k in ((ap, "ap"), (vp, "vp")):
if k not in mod:
continue
ms = mod[k].replace(os.sep, "/")
if ms.startswith("/"):
np = ms
elif k == "vp":
np = undot(vjoin(ref, ms))
else:
np = os.path.abspath(os.path.join(ref, ms))
if k == "vp":
nvp = np.lstrip("/")
continue
# try to map abspath to vpath
np = np.replace("/", os.sep)
for vn_ap, vn in vfs.all_aps:
if not np.startswith(vn_ap):
continue
zs = np[len(vn_ap) :].replace(os.sep, "/")
nvp = vjoin(vn.vpath, zs)
break
if nvp == "\n":
return None
vn, rem = vfs.get(nvp, "*", False, False)
if not vn.realpath:
raise Exception("unmapped vfs")
ap = vn.canonical(rem)
return ap, nvp, fn, (vn, rem)
def _w8dec2(txt: bytes) -> str:
"""decodes filesystem-bytes to wtf8"""
return surrogateescape.decodefilename(txt)
@@ -2451,6 +2562,9 @@ def build_netmap(csv: str):
csv += ", 127.0.0.0/8, ::1/128" # loopback
srcs = [x.strip() for x in csv.split(",") if x.strip()]
if not HAVE_IPV6:
srcs = [x for x in srcs if ":" not in x]
cidrs = []
for zs in srcs:
if not zs.endswith("."):
@@ -2488,7 +2602,7 @@ def yieldfile(fn: str, bufsz: int) -> Generator[bytes, None, None]:
def hashcopy(
fin: Generator[bytes, None, None],
fout: Union[typing.BinaryIO, typing.IO[Any]],
slp: int = 0,
slp: float = 0,
max_sz: int = 0,
) -> tuple[int, str, str]:
hashobj = hashlib.sha512()
@@ -2516,7 +2630,8 @@ def sendfile_py(
f: typing.BinaryIO,
s: socket.socket,
bufsz: int,
slp: int,
slp: float,
use_poll: bool,
) -> int:
remains = upper - lower
f.seek(lower)
@@ -2544,23 +2659,32 @@ def sendfile_kern(
f: typing.BinaryIO,
s: socket.socket,
bufsz: int,
slp: int,
slp: float,
use_poll: bool,
) -> int:
out_fd = s.fileno()
in_fd = f.fileno()
ofs = lower
stuck = 0.0
if use_poll:
poll = select.poll()
poll.register(out_fd, select.POLLOUT)
while ofs < upper:
stuck = stuck or time.time()
try:
req = min(2 ** 30, upper - ofs)
select.select([], [out_fd], [], 10)
if use_poll:
poll.poll(10000)
else:
select.select([], [out_fd], [], 10)
n = os.sendfile(out_fd, in_fd, ofs, req)
stuck = 0
except OSError as ex:
# client stopped reading; do another select
d = time.time() - stuck
if d < 3600 and ex.errno == errno.EWOULDBLOCK:
time.sleep(0.02)
continue
n = 0
@@ -2676,30 +2800,30 @@ def rmdirs_up(top: str, stop: str) -> tuple[list[str], list[str]]:
def unescape_cookie(orig: str) -> str:
# mw=idk; doot=qwe%2Crty%3Basd+fgh%2Bjkl%25zxc%26vbn # qwe,rty;asd fgh+jkl%zxc&vbn
ret = ""
ret = []
esc = ""
for ch in orig:
if ch == "%":
if len(esc) > 0:
ret += esc
if esc:
ret.append(esc)
esc = ch
elif len(esc) > 0:
elif esc:
esc += ch
if len(esc) == 3:
try:
ret += chr(int(esc[1:], 16))
ret.append(chr(int(esc[1:], 16)))
except:
ret += esc
ret.append(esc)
esc = ""
else:
ret += ch
ret.append(ch)
if len(esc) > 0:
ret += esc
if esc:
ret.append(esc)
return ret
return "".join(ret)
def guess_mime(url: str, fallback: str = "application/octet-stream") -> str:
@@ -2959,7 +3083,8 @@ def retchk(
def _parsehook(
log: Optional["NamedLogger"], cmd: str
) -> tuple[bool, bool, bool, float, dict[str, Any], str]:
) -> tuple[str, bool, bool, bool, float, dict[str, Any], list[str]]:
areq = ""
chk = False
fork = False
jtxt = False
@@ -2984,8 +3109,12 @@ def _parsehook(
cap = int(arg[1:]) # 0=none 1=stdout 2=stderr 3=both
elif arg.startswith("k"):
kill = arg[1:] # [t]ree [m]ain [n]one
elif arg.startswith("a"):
areq = arg[1:] # required perms
elif arg.startswith("i"):
pass
elif not arg:
break
else:
t = "hook: invalid flag {} in {}"
(log or print)(t.format(arg, ocmd))
@@ -3012,9 +3141,11 @@ def _parsehook(
"capture": cap,
}
cmd = os.path.expandvars(os.path.expanduser(cmd))
argv = cmd.split(",") if "," in cmd else [cmd]
return chk, fork, jtxt, wait, sp_ka, cmd
argv[0] = os.path.expandvars(os.path.expanduser(argv[0]))
return areq, chk, fork, jtxt, wait, sp_ka, argv
def runihook(
@@ -3023,10 +3154,9 @@ def runihook(
vol: "VFS",
ups: list[tuple[str, int, int, str, str, str, int]],
) -> bool:
ocmd = cmd
chk, fork, jtxt, wait, sp_ka, cmd = _parsehook(log, cmd)
bcmd = [sfsenc(cmd)]
if cmd.endswith(".py"):
_, chk, fork, jtxt, wait, sp_ka, acmd = _parsehook(log, cmd)
bcmd = [sfsenc(x) for x in acmd]
if acmd[0].endswith(".py"):
bcmd = [sfsenc(pybin)] + bcmd
vps = [vjoin(*list(s3dec(x[3], x[4]))) for x in ups]
@@ -3051,7 +3181,7 @@ def runihook(
t0 = time.time()
if fork:
Daemon(runcmd, ocmd, [bcmd], ka=sp_ka)
Daemon(runcmd, cmd, bcmd, ka=sp_ka)
else:
rc, v, err = runcmd(bcmd, **sp_ka) # type: ignore
if chk and rc:
@@ -3067,19 +3197,28 @@ def runihook(
def _runhook(
log: Optional["NamedLogger"],
src: str,
cmd: str,
ap: str,
vp: str,
host: str,
uname: str,
perms: str,
mt: float,
sz: int,
ip: str,
at: float,
txt: str,
) -> bool:
ocmd = cmd
chk, fork, jtxt, wait, sp_ka, cmd = _parsehook(log, cmd)
) -> dict[str, Any]:
ret = {"rc": 0}
areq, chk, fork, jtxt, wait, sp_ka, acmd = _parsehook(log, cmd)
if areq:
for ch in areq:
if ch not in perms:
t = "user %s not allowed to run hook %s; need perms %s, have %s"
if log:
log(t % (uname, cmd, areq, perms))
return ret # fallthrough to next hook
if jtxt:
ja = {
"ap": ap,
@@ -3090,59 +3229,101 @@ def _runhook(
"at": at or time.time(),
"host": host,
"user": uname,
"perms": perms,
"src": src,
"txt": txt,
}
arg = json.dumps(ja)
else:
arg = txt or ap
acmd = [cmd, arg]
if cmd.endswith(".py"):
acmd += [arg]
if acmd[0].endswith(".py"):
acmd = [pybin] + acmd
bcmd = [fsenc(x) if x == ap else sfsenc(x) for x in acmd]
t0 = time.time()
if fork:
Daemon(runcmd, ocmd, [bcmd], ka=sp_ka)
Daemon(runcmd, cmd, [bcmd], ka=sp_ka)
else:
rc, v, err = runcmd(bcmd, **sp_ka) # type: ignore
if chk and rc:
ret["rc"] = rc
retchk(rc, bcmd, err, log, 5)
return False
else:
try:
ret = json.loads(v)
except:
ret = {}
try:
if "stdout" not in ret:
ret["stdout"] = v
if "rc" not in ret:
ret["rc"] = rc
except:
ret = {"rc": rc, "stdout": v}
wait -= time.time() - t0
if wait > 0:
time.sleep(wait)
return True
return ret
def runhook(
log: Optional["NamedLogger"],
broker: Optional["BrokerCli"],
up2k: Optional["Up2k"],
src: str,
cmds: list[str],
ap: str,
vp: str,
host: str,
uname: str,
perms: str,
mt: float,
sz: int,
ip: str,
at: float,
txt: str,
) -> bool:
) -> dict[str, Any]:
assert broker or up2k
asrv = (broker or up2k).asrv
args = (broker or up2k).args
vp = vp.replace("\\", "/")
ret = {"rc": 0}
for cmd in cmds:
try:
if not _runhook(log, cmd, ap, vp, host, uname, mt, sz, ip, at, txt):
return False
hr = _runhook(
log, src, cmd, ap, vp, host, uname, perms, mt, sz, ip, at, txt
)
if log and args.hook_v:
log("hook(%s) [%s] => \033[32m%s" % (src, cmd, hr), 6)
if not hr:
return {}
for k, v in hr.items():
if k in ("idx", "del") and v:
if broker:
broker.say("up2k.hook_fx", k, v, vp)
else:
up2k.fx_backlog.append((k, v, vp))
elif k == "reloc" and v:
# idk, just take the last one ig
ret["reloc"] = v
elif k in ret:
if k == "rc" and v:
ret[k] = v
else:
ret[k] = v
except Exception as ex:
(log or print)("hook: {}".format(ex))
if ",c," in "," + cmd:
return False
return {}
break
return True
return ret
def loadpy(ap: str, hot: bool) -> Any:

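util.py replaces email.utils.formatdate with a local RFC 2822 formatter built on time.gmtime, used for cookie expiry dates. A worked example of the same construction:

import time

WKDAYS = "Mon Tue Wed Thu Fri Sat Sun".split()
MONTHS = "Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec".split()
RFC2822 = "%s, %02d %s %04d %02d:%02d:%02d GMT"

def formatdate(ts=None):
    # time.gmtime(None) falls back to the current time
    y, mo, d, h, mi, s, wd, _, _ = time.gmtime(ts)
    return RFC2822 % (WKDAYS[wd], d, MONTHS[mo - 1], y, h, mi, s)

# cookie expiring one hour after 2024-08-30 23:11:22 UTC:
print(formatdate(1725059482 + 3600))  # Sat, 31 Aug 2024 00:11:22 GMT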

@@ -29,6 +29,7 @@ window.baguetteBox = (function () {
isOverlayVisible = false,
touch = {}, // start-pos
touchFlag = false, // busy
scrollCSS = ['', ''],
scrollTimer = 0,
re_i = /^[^?]+\.(a?png|avif|bmp|gif|heif|jpe?g|jfif|svg|webp)(\?|$)/i,
re_v = /^[^?]+\.(webm|mkv|mp4)(\?|$)/i,
@@ -567,6 +568,12 @@ window.baguetteBox = (function () {
function showOverlay(chosenImageIndex) {
if (options.noScrollbars) {
var a = document.documentElement.style.overflowY,
b = document.body.style.overflowY;
if (a != 'hidden' || b != 'scroll')
scrollCSS = [a, b];
document.documentElement.style.overflowY = 'hidden';
document.body.style.overflowY = 'scroll';
}
@@ -615,8 +622,8 @@ window.baguetteBox = (function () {
playvid(false);
removeFromCache('#files');
if (options.noScrollbars) {
document.documentElement.style.overflowY = 'auto';
document.body.style.overflowY = 'auto';
document.documentElement.style.overflowY = scrollCSS[0];
document.body.style.overflowY = scrollCSS[1];
}
try {
@@ -743,6 +750,8 @@ window.baguetteBox = (function () {
image.volume = clamp(fcfg_get('vol', dvol / 100), 0, 1);
image.setAttribute('controls', 'controls');
image.onended = vidEnd;
image.onplay = function () { show_buttons(1); };
image.onpause = function () { show_buttons(); };
}
image.alt = thumbnailElement ? thumbnailElement.alt || '' : '';
if (options.titleTag && imageCaption)
@@ -988,6 +997,12 @@ window.baguetteBox = (function () {
}
}
function show_buttons(v) {
clmod(ebi('bbox-btns'), 'off', v);
clmod(btnPrev, 'off', v);
clmod(btnNext, 'off', v);
}
function bounceAnimation(direction) {
slider.className = options.animation == 'slideIn' ? 'bounce-from-' + direction : 'eog';
setTimeout(function () {
@@ -1051,9 +1066,7 @@ window.baguetteBox = (function () {
if (fx > 0.7)
return showNextImage();
clmod(ebi('bbox-btns'), 'off', 't');
clmod(btnPrev, 'off', 't');
clmod(btnNext, 'off', 't');
show_buttons('t');
if (Date.now() - ctime <= 500 && !IPHONE)
tglfull();


@@ -10,7 +10,6 @@
--fg2-max: #fff;
--fg-weak: #bbb;
--bg-u7: #555;
--bg-u6: #4c4c4c;
--bg-u5: #444;
--bg-u4: #383838;
@@ -43,8 +42,14 @@
--btn-h-bg: #805;
--btn-1-fg: #400;
--btn-1-bg: var(--a);
--btn-h-bs: var(--btn-bs);
--btn-h-bb: var(--btn-bb);
--btn-1-bs: var(--btn-bs);
--btn-1-bb: var(--btn-bb);
--btn-1h-fg: var(--btn-1-fg);
--btn-1h-bg: #fe8;
--btn-1h-bs: var(--btn-1-bs);
--btn-1h-bb: var(--btn-1-bb);
--chk-fg: var(--tab-alt);
--txt-sh: var(--bg-d2);
--txt-bg: var(--btn-bg);
@@ -212,22 +217,19 @@ html.y {
html.a {
--op-aa-sh: 0 0 .2em var(--bg-d3) inset;
--u2-o-bg: #603;
--u2-o-b1: #a16;
--u2-o-sh: #a00;
--u2-o-h-bg: var(--u2-o-bg);
--u2-o-h-b1: #fb0;
--u2-o-h-sh: #fb0;
--u2-o-1-bg: #6a1;
--u2-o-1-b1: #efa;
--u2-o-1-sh: #0c0;
--u2-o-1h-bg: var(--u2-o-1-bg);
--btn-bs: 0 0 .2em var(--bg-d3);
}
html.az {
--btn-1-bs: 0 0 .1em var(--fg) inset;
}
html.ay {
--op-aa-sh: 0 .1em .2em #ccc;
--op-aa-bg: var(--bg-max);
}
html.b {
--btn-bs: 0 .05em 0 var(--bg-d3) inset;
--btn-1-bs: 0 .05em 0 var(--btn-1h-bg) inset;
--tree-bg: var(--bg);
--g-bg: var(--bg);
@@ -244,17 +246,13 @@ html.b {
--u2-b1-bg: rgba(128,128,128,0.15);
--u2-b2-bg: var(--u2-b1-bg);
--u2-o-bg: var(--btn-bg);
--u2-o-h-bg: var(--btn-h-bg);
--u2-o-1-bg: var(--a);
--u2-o-1h-bg: var(--a-hil);
--f-sh1: 0.1;
--mp-b-bg: transparent;
}
html.bz {
--fg: #cce;
--fg-weak: #bbd;
--bg-u5: #3b3f58;
--bg-u4: #1e2130;
--bg-u3: #1e2130;
@@ -266,11 +264,14 @@ html.bz {
--row-alt: #181a27;
--a-b: #fb4;
--btn-bg: #202231;
--btn-h-bg: #2d2f45;
--btn-1-bg: #ba2959;
--btn-1-fg: #fff;
--btn-1-bg: #eb6;
--btn-1-fg: #000;
--btn-1h-fg: #000;
--btn-1h-bg: #ff9;
--txt-sh: a;
--u2-tab-b1: var(--bg-u5);
@@ -305,6 +306,7 @@ html.by {
}
html.c {
font-weight: bold;
--fg: #fff;
--fg-weak: #cef;
--bg-u5: #409;
@@ -325,16 +327,23 @@ html.c {
--chk-fg: #d90;
--op-aa-bg: #f9dd22;
--u2-o-1-bg: #4cf;
--srv-1: #ea0;
--mp-b-bg: transparent;
}
html.cz {
--bgg: var(--bg-u2);
--sel-bg: var(--bg-u5);
--sel-fg: var(--fg);
--btn-bb: .2em solid #709;
--btn-bs: 0 .1em .6em rgba(255,0,185,0.5);
--btn-1-bb: .2em solid #e90;
--btn-1-bs: 0 .1em .8em rgba(255,205,0,0.9);
--srv-3: #fff;
--u2-tab-b1: var(--bg-d3);
}
html.cy {
@@ -362,6 +371,7 @@ html.cy {
--btn-h-fg: #fff;
--btn-1-bg: #ff0;
--btn-1-fg: #000;
--btn-bs: 0 .25em 0 #f00;
--chk-fg: #fd0;
--srv-1: #f00;
@@ -370,8 +380,6 @@ html.cy {
--u2-b1-bg: #f00;
--u2-b2-bg: #f00;
--u2-o-bg: #ff0;
--u2-o-1-bg: #f00;
}
html.dz {
--fg: #4d4;
@@ -379,7 +387,6 @@ html.dz {
--fg2-max: #fff;
--fg-weak: #2a2;
--bg-u7: #020;
--bg-u6: #020;
--bg-u5: #050;
--bg-u4: #020;
@@ -412,6 +419,9 @@ html.dz {
--btn-1-bg: #4f4;
--btn-1h-fg: var(--btn-1-fg);
--btn-1h-bg: #3f3;
--btn-bs: 0 0 0 .1em #080 inset;
--btn-1-bs: a;
--chk-fg: var(--tab-alt);
--txt-sh: var(--bg-d2);
--txt-bg: var(--btn-bg);
@@ -433,12 +443,6 @@ html.dz {
--u2-b-fg: #fff;
--u2-b1-bg: #3a3;
--u2-b2-bg: #3a3;
--u2-o-bg: var(--btn-bg);
--u2-o-b1: var(--bg-u5);
--u2-o-h-bg: var(--fg-weak);
--u2-o-1-bg: var(--fg-weak);
--u2-o-1-b1: var(--a);
--u2-o-1h-bg: var(--a);
--u2-inf-bg: #07a;
--u2-inf-b1: #0be;
--u2-ok-bg: #380;
@@ -550,10 +554,6 @@ html.dy {
--u2-tab-1-bg: a;
--u2-b1-bg: #000;
--u2-b2-bg: #000;
--u2-o-h-bg: #999;
--u2-o-1h-bg: #999;
--u2-o-bg: #eee;
--u2-o-1-bg: #000;
--ud-b1: a;
@@ -626,6 +626,7 @@ pre, code, tt, #doc, #doc>code {
overflow: hidden;
width: 0;
height: 0;
color: var(--bg);
}
html .ayjump:focus {
z-index: 80386;
@@ -962,6 +963,8 @@ html.y #path a:hover {
#files tbody tr.play a:hover {
color: var(--btn-1h-fg);
background: var(--btn-1h-bg);
box-shadow: var(--btn-1h-bs);
border-bottom: var(--btn-1h-bb);
}
#ggrid {
margin: -.2em -.5em;
@@ -970,6 +973,7 @@ html.y #path a:hover {
overflow: hidden;
display: block;
display: -webkit-box;
line-clamp: var(--grid-ln);
-webkit-line-clamp: var(--grid-ln);
-webkit-box-orient: vertical;
padding-top: .3em;
@@ -1016,9 +1020,6 @@ html.y #path a:hover {
color: var(--g-dfg);
}
#ggrid>a.au:before {
content: '💾';
}
html.np_open #ggrid>a.au:before {
content: '▶';
}
#ggrid>a:before {
@@ -1147,6 +1148,7 @@ html.y #widget.open {
width: 100%;
height: 100%;
}
#fshr,
#wtgrid,
#wtico {
position: relative;
@@ -1333,6 +1335,7 @@ html.y #widget.open {
#widget.cmp #wtoggle {
font-size: 1.2em;
}
#widget.cmp #fshr,
#widget.cmp #wtgrid {
display: none;
}
@@ -1433,7 +1436,11 @@ input[type="checkbox"]+label {
input[type="radio"]:checked+label,
input[type="checkbox"]:checked+label {
color: #0e0;
color: var(--a);
color: var(--btn-1-bg);
}
input[type="checkbox"]:checked+label {
box-shadow: var(--btn-1-bs);
border-bottom: var(--btn-1-bb);
}
html.dz input {
font-family: 'scp', monospace, monospace;
@@ -1611,6 +1618,8 @@ html {
color: var(--btn-fg);
background: #eee;
background: var(--btn-bg);
box-shadow: var(--btn-bs);
border-bottom: var(--btn-bb);
border-radius: .3em;
padding: .2em .4em;
font-size: 1.2em;
@@ -1624,20 +1633,14 @@ html.c .btn,
html.a .btn {
border-radius: .2em;
}
html.cz .btn {
box-shadow: 0 .1em .6em rgba(255,0,185,0.5);
border-bottom: .2em solid #709;
}
html.dz .btn {
font-size: 1em;
box-shadow: 0 0 0 .1em #080 inset;
}
html.dz .tgl.btn.on {
box-shadow: 0 0 0 .1em var(--btn-1-bg) inset;
}
.btn:hover {
color: var(--btn-h-fg);
background: var(--btn-h-bg);
box-shadow: var(--btn-h-bs);
border-bottom: var(--btn-h-bb);
}
.tgl.btn.on {
background: #000;
@@ -1645,14 +1648,14 @@ html.dz .tgl.btn.on {
color: #fff;
color: var(--btn-1-fg);
text-shadow: none;
}
html.cz .tgl.btn.on {
box-shadow: 0 .1em .8em rgba(255,205,0,0.9);
border-bottom: .2em solid #e90;
box-shadow: var(--btn-1-bs);
border-bottom: var(--btn-1-bb);
}
.tgl.btn.on:hover {
background: var(--btn-1h-bg);
color: var(--btn-1h-fg);
background: var(--btn-1h-bg);
box-shadow: var(--btn-1h-bs);
border-bottom: var(--btn-1h-bb);
}
#detree {
padding: .3em .5em;
@@ -1857,6 +1860,7 @@ html.y #tree.nowrap .ntree a+a:hover {
#unpost td:nth-child(4) {
text-align: right;
}
#shui,
#rui {
background: #fff;
background: var(--bg);
@@ -1872,13 +1876,25 @@ html.y #tree.nowrap .ntree a+a:hover {
padding: 1em;
z-index: 765;
}
#shui div+div,
#rui div+div {
margin-top: 1em;
}
#shui table,
#rui table {
width: 100%;
border-collapse: collapse;
}
#shui button {
margin: 0 1em 0 0;
}
#shui .btn {
font-size: 1em;
}
#shui td {
padding: .8em 0;
}
#shui td+td,
#rui td+td {
padding: .2em 0 .2em .5em;
}
@@ -1886,10 +1902,15 @@ html.y #tree.nowrap .ntree a+a:hover {
font-family: 'scp', monospace, monospace;
font-family: var(--font-mono), 'scp', monospace, monospace;
}
#shui td+td,
#rui td+td,
#shui td input[type="text"],
#rui td input[type="text"] {
width: 100%;
}
#shui td.exs input[type="text"] {
width: 3em;
}
#rn_f.m td:first-child {
white-space: nowrap;
}
@@ -2684,23 +2705,25 @@ html.b #u2conf a.b:hover {
#u2conf input[type="checkbox"]:checked+label {
position: relative;
cursor: pointer;
background: var(--u2-o-bg);
border-bottom: .2em solid var(--u2-o-b1);
box-shadow: 0 .1em .3em var(--u2-o-sh) inset;
background: var(--btn-bg);
box-shadow: var(--btn-bs);
border-bottom: var(--btn-bb);
text-shadow: 1px 1px 1px #000, 1px -1px 1px #000, -1px -1px 1px #000, -1px 1px 1px #000;
}
#u2conf input[type="checkbox"]:checked+label {
background: var(--u2-o-1-bg);
border-bottom: .2em solid var(--u2-o-1-b1);
box-shadow: 0 .1em .5em var(--u2-o-1-sh);
background: var(--btn-1-bg);
box-shadow: var(--btn-1-bs);
border-bottom: var(--btn-1-bb);
}
#u2conf input[type="checkbox"]+label:hover {
box-shadow: 0 .1em .3em var(--u2-o-h-sh);
border-color: var(--u2-o-h-b1);
background: var(--u2-o-h-bg);
background: var(--btn-h-bg);
box-shadow: var(--btn-h-bs);
border-bottom: var(--btn-h-bb);
}
#u2conf input[type="checkbox"]:checked+label:hover {
background: var(--u2-o-1h-bg);
background: var(--btn-1h-bg);
box-shadow: var(--btn-1h-bs);
border-bottom: var(--btn-1h-bb);
}
#op_up2k.srch #u2conf td:nth-child(2)>*,
#op_up2k.srch #u2conf td:nth-child(3)>* {

File diff suppressed because it is too large


@@ -11,7 +11,6 @@
td{border:1px solid #999;border-width:1px 1px 0 0;padding:0 5px}
a{display:block}
</style>
{{ html_head }}
</head>
<body>


@@ -159,5 +159,8 @@ try { l.light = drk? 0:1; } catch (ex) { }
{%- if edit %}
<script src="{{ r }}/.cpr/md2.js?_={{ ts }}"></script>
{%- endif %}
{%- if js %}
<script src="{{ js }}_={{ ts }}"></script>
{%- endif %}
</body></html>


@@ -607,10 +607,10 @@ function md_newline() {
var s = linebounds(true),
ln = s.md.substring(s.n1, s.n2),
m1 = /^( *)([0-9]+)(\. +)/.exec(ln),
m2 = /^[ \t>+-]*(\* )?/.exec(ln),
m2 = /^[ \t]*[>+*-]{0,2}[ \t]/.exec(ln),
drop = dom_src.selectionEnd - dom_src.selectionStart;
var pre = m2[0];
var pre = m2 ? m2[0] : '';
if (m1 !== null)
pre = m1[1] + (parseInt(m1[2]) + 1) + m1[3];


@@ -53,5 +53,8 @@ try { l.light = drk? 0:1; } catch (ex) { }
<script src="{{ r }}/.cpr/deps/marked.js?_={{ ts }}"></script>
<script src="{{ r }}/.cpr/deps/easymde.js?_={{ ts }}"></script>
<script src="{{ r }}/.cpr/mde.js?_={{ ts }}"></script>
{%- if js %}
<script src="{{ js }}_={{ ts }}"></script>
{%- endif %}
</body></html>


@@ -46,6 +46,9 @@
}, 1000);
</script>
{%- endif %}
{%- if js %}
<script src="{{ js }}_={{ ts }}"></script>
{%- endif %}
</body>
</html>

copyparty/web/shares.css (new file, 82 lines)

@@ -0,0 +1,82 @@
html {
color: #333;
background: #f7f7f7;
font-family: sans-serif;
font-family: var(--font-main), sans-serif;
touch-action: manipulation;
}
#wrap {
margin: 2em auto;
padding: 0 1em 3em 1em;
line-height: 2.3em;
}
#wrap>span {
margin: 0 0 0 1em;
border-bottom: 1px solid #999;
}
li {
margin: 1em 0;
}
a {
color: #047;
background: #fff;
text-decoration: none;
white-space: nowrap;
border-bottom: 1px solid #8ab;
border-radius: .2em;
padding: .2em .6em;
margin: 0 .3em;
}
td a {
margin: 0;
}
#w {
color: #fff;
background: #940;
border-color: #b70;
}
#repl {
border: none;
background: none;
color: inherit;
padding: 0;
position: fixed;
bottom: .25em;
left: .2em;
}
table {
border-collapse: collapse;
position: relative;
}
th {
top: -1px;
position: sticky;
background: #f7f7f7;
}
td, th {
padding: .3em .6em;
text-align: left;
white-space: nowrap;
}
td+td+td+td+td+td+td+td {
font-family: var(--font-mono), monospace, monospace;
}
html.z {
background: #222;
color: #ccc;
}
html.z a {
color: #fff;
background: #057;
border-color: #37a;
}
html.z th {
background: #222;
}
html.bz {
color: #bbd;
background: #11121d;
}

copyparty/web/shares.html (new file, 76 lines)

@@ -0,0 +1,76 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>{{ s_doctitle }}</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=0.8">
<meta name="theme-color" content="#{{ tcolor }}">
<link rel="stylesheet" media="screen" href="{{ r }}/.cpr/shares.css?_={{ ts }}">
<link rel="stylesheet" media="screen" href="{{ r }}/.cpr/ui.css?_={{ ts }}">
{{ html_head }}
</head>
<body>
<div id="wrap">
<a id="a" href="{{ r }}/?shares" class="af">refresh</a>
<a id="a" href="{{ r }}/?h" class="af">control-panel</a>
<span>axs = perms (read,write,move,delet)</span>
<span>nf = numFiles (0=dir)</span>
<span>min/hrs = time left</span>
<table id="tab"><thead><tr>
<th>delete</th>
<th>sharekey</th>
<th>pw</th>
<th>source</th>
<th>axs</th>
<th>nf</th>
<th>user</th>
<th>created</th>
<th>expires</th>
<th>min</th>
<th>hrs</th>
<th>add time</th>
</tr></thead><tbody>
{% for k, pw, vp, pr, st, un, t0, t1 in rows %}
<tr>
<td><a href="#" k="{{ k }}">delete</a></td>
<td><a href="{{ r }}{{ shr }}{{ k }}">{{ k }}</a></td>
<td>{{ pw }}</td>
<td><a href="{{ r }}/{{ vp|e }}">{{ vp|e }}</a></td>
<td>{{ pr }}</td>
<td>{{ st }}</td>
<td>{{ un|e }}</td>
<td>{{ t0 }}</td>
<td>{{ t1 }}</td>
<td>{{ "inf" if not t1 else "dead" if t1 < now else ((t1 - now) / 60) | round(1) }}</td>
<td>{{ "inf" if not t1 else "dead" if t1 < now else ((t1 - now) / 3600) | round(1) }}</td>
<td></td>
</tr>
{% endfor %}
</tbody></table>
{% if not rows %}
(you don't have any active shares btw)
{% endif %}
<script>
var SR = {{ r|tojson }},
shr="{{ shr }}",
lang="{{ lang }}",
dfavico="{{ favico }}";
var STG = window.localStorage;
document.documentElement.className = (STG && STG.cpp_thm) || "{{ this.args.theme }}";
</script>
<script src="{{ r }}/.cpr/util.js?_={{ ts }}"></script>
<script src="{{ r }}/.cpr/shares.js?_={{ ts }}"></script>
{%- if js %}
<script src="{{ js }}_={{ ts }}"></script>
{%- endif %}
</body>
</html>

copyparty/web/shares.js (new file, 56 lines)

@@ -0,0 +1,56 @@
var t = QSA('a[k]');
for (var a = 0; a < t.length; a++)
t[a].onclick = rm;
function rm() {
var u = SR + shr + uricom_enc(this.getAttribute('k')) + '?eshare=rm',
xhr = new XHR();
xhr.open('POST', u, true);
xhr.onload = xhr.onerror = cb;
xhr.send();
}
function bump() {
var k = this.closest('tr').getElementsByTagName('a')[0].getAttribute('k'),
u = SR + shr + uricom_enc(k) + '?eshare=' + this.value,
xhr = new XHR();
xhr.open('POST', u, true);
xhr.onload = xhr.onerror = cb;
xhr.send();
}
function cb() {
if (this.status !== 200)
return modal.alert('<h6>server error</h6>' + esc(unpre(this.responseText)));
document.location = '?shares';
}
(function() {
var tab = ebi('tab').tBodies[0],
tr = Array.prototype.slice.call(tab.rows, 0);
var buf = [];
for (var a = 0; a < tr.length; a++)
for (var b = 7; b < 9; b++)
buf.push(parseInt(tr[a].cells[b].innerHTML));
var ibuf = 0;
for (var a = 0; a < tr.length; a++)
for (var b = 7; b < 9; b++) {
var v = buf[ibuf++];
tr[a].cells[b].innerHTML =
v ? unix2iso(v).replace(' ', ',&nbsp;') : 'never';
}
for (var a = 0; a < tr.length; a++)
tr[a].cells[11].innerHTML =
'<button value="1">1min</button> ' +
'<button value="60">1h</button>';
var btns = QSA('td button'), aa = btns.length;
for (var a = 0; a < aa; a++)
btns[a].onclick = bump;
})();


@@ -182,13 +182,18 @@ html.z a.g {
border-color: #af4;
box-shadow: 0 .3em 1em #7d0;
}
form {
line-height: 2.5em;
}
#x,
input {
color: #a50;
background: #fff;
border: 1px solid #a50;
border-radius: .5em;
padding: .5em .7em;
margin: 0 .5em 0 0;
border-radius: .3em;
padding: .25em .6em;
margin: 0 .3em 0 0;
font-size: 1em;
}
input::placeholder {
font-size: 1.2em;
@@ -197,6 +202,7 @@ input::placeholder {
opacity: 0.64;
color: #930;
}
#x,
html.z input {
color: #fff;
background: #626;


@@ -14,6 +14,7 @@
<body>
<div id="wrap">
{%- if not in_shr %}
<a id="a" href="{{ r }}/?h" class="af">refresh</a>
<a id="v" href="{{ r }}/?hc" class="af">connect</a>
@@ -21,7 +22,8 @@
<p id="b">howdy stranger &nbsp; <small>(you're not logged in)</small></p>
{%- else %}
<a id="c" href="{{ r }}/?pw=x" class="logout">logout</a>
<p><span id="m">welcome back,</span> <strong>{{ this.uname }}</strong></p>
<p><span id="m">welcome back,</span> <strong>{{ this.uname|e }}</strong></p>
{%- endif %}
{%- endif %}
{%- if msg %}
@@ -76,8 +78,43 @@
</ul>
{%- endif %}
<h1 id="cc">client config:</h1>
{%- if in_shr %}
<h1 id="z">unlock this share:</h1>
<div>
<form id="lf" method="post" enctype="multipart/form-data" action="{{ r }}/{{ qvpath }}">
<input type="hidden" id="la" name="act" value="login" />
<input type="password" id="lp" name="cppwd" placeholder=" password" />
<input type="hidden" name="uhash" id="uhash" value="x" />
<input type="submit" id="ls" value="Unlock" />
{% if ahttps %}
<a id="w" href="{{ ahttps }}">switch to https</a>
{% endif %}
</form>
</div>
{%- else %}
<h1 id="l">login for more:</h1>
<div>
<form id="lf" method="post" enctype="multipart/form-data" action="{{ r }}/{{ qvpath }}">
<input type="hidden" id="la" name="act" value="login" />
<input type="password" id="lp" name="cppwd" placeholder=" password" />
<input type="hidden" name="uhash" id="uhash" value="x" />
<input type="submit" id="ls" value="Login" />
{% if chpw %}
<a id="x" href="#">change password</a>
{% endif %}
{% if ahttps %}
<a id="w" href="{{ ahttps }}">switch to https</a>
{% endif %}
</form>
</div>
{%- endif %}
<h1 id="cc">other stuff:</h1>
<ul>
{%- if this.uname != '*' and this.args.shr %}
<li><a id="y" href="{{ r }}/?shares">edit shares</a></li>
{% endif %}
{% if k304 or k304vis %}
{% if k304 %}
<li><a id="h" href="{{ r }}/?k304=n">disable k304</a> (currently enabled)
@@ -90,18 +127,6 @@
<li><a id="k" href="{{ r }}/?reset" class="r" onclick="localStorage.clear();return true">reset client settings</a></li>
</ul>
<h1 id="l">login for more:</h1>
<div>
<form method="post" enctype="multipart/form-data" action="{{ r }}/{{ qvpath }}">
<input type="hidden" name="act" value="login" />
<input type="password" name="cppwd" placeholder=" password" />
<input type="hidden" name="uhash" id="uhash" value="x" />
<input type="submit" value="Login" />
{% if ahttps %}
<a id="w" href="{{ ahttps }}">switch to https</a>
{% endif %}
</form>
</div>
</div>
<a href="#" id="repl">π</a>
{%- if not this.args.nb %}
@@ -119,6 +144,9 @@ document.documentElement.className = (STG && STG.cpp_thm) || "{{ this.args.theme
</script>
<script src="{{ r }}/.cpr/util.js?_={{ ts }}"></script>
<script src="{{ r }}/.cpr/splash.js?_={{ ts }}"></script>
{%- if js %}
<script src="{{ js }}_={{ ts }}"></script>
{%- endif %}
</body>
</html>


@@ -9,7 +9,7 @@ var Ls = {
"e2": "leser inn konfigurasjonsfiler på nytt$N(kontoer, volumer, volumbrytere)$Nog kartlegger alle e2ds-volumer$N$Nmerk: endringer i globale parametere$Nkrever en full restart for å ta gjenge",
"f1": "du kan betrakte:",
"g1": "du kan laste opp til:",
"cc1": "klient-konfigurasjon",
"cc1": "brytere og sånt",
"h1": "skru av k304",
"i1": "skru på k304",
"j1": "k304 bryter tilkoplingen for hver HTTP 304. Dette hjelper mot visse mellomtjenere som kan sette seg fast / plutselig slutter å laste sider, men det reduserer også ytelsen betydelig",
@@ -17,9 +17,9 @@ var Ls = {
"l1": "logg inn:",
"m1": "velkommen tilbake,",
"n1": "404: filen finnes ikke &nbsp;┐( ´ -`)┌",
"o1": 'eller kanskje du ikke har tilgang? prøv å logge inn eller <a href="' + SR + '/?h">gå hjem</a>',
"o1": 'eller kanskje du ikke har tilgang? prøv et passord eller <a href="' + SR + '/?h">gå hjem</a>',
"p1": "403: tilgang nektet &nbsp;~┻━┻",
"q1": 'du må logge inn eller <a href="' + SR + '/?h">gå hjem</a>',
"q1": 'prøv et passord eller <a href="' + SR + '/?h">gå hjem</a>',
"r1": "gå hjem",
".s1": "kartlegg",
"t1": "handling",
@@ -27,21 +27,65 @@ var Ls = {
"v1": "koble til",
"v2": "bruk denne serveren som en lokal harddisk$N$NADVARSEL: kommer til å vise passordet ditt!",
"w1": "bytt til https",
"x1": "bytt passord",
"y1": "dine delinger",
"z1": "lås opp område",
"ta1": "du må skrive et nytt passord først",
"ta2": "gjenta for å bekrefte nytt passord:",
"ta3": "fant en skrivefeil; vennligst prøv igjen",
},
"eng": {
"d2": "shows the state of all active threads",
"e2": "reload config files (accounts/volumes/volflags),$Nand rescan all e2ds volumes$N$Nnote: any changes to global settings$Nrequire a full restart to take effect",
"u2": "time since the last server write$N( upload / rename / ... )$N$N17d = 17 days$N1h23 = 1 hour 23 minutes$N4m56 = 4 minutes 56 seconds",
"v2": "use this server as a local HDD$N$NWARNING: this will show your password!",
"ta1": "fill in your new password first",
"ta2": "repeat to confirm new password:",
"ta3": "found a typo; please try again",
},
"chi": {
"a1": "更新",
"b1": "你好 &nbsp; <small>(你尚未登录)</small>",
"c1": "登出",
"d1": "状态",
"d2": "显示所有活动线程的状态",
"e1": "重新加载配置",
"e2": "重新加载配置文件(账户/卷/卷标),$N并重新扫描所有 e2ds 卷$N$N注意任何全局设置的更改$N都需要完全重启才能生效",
"f1": "你可以查看:",
"g1": "你可以上传到:",
"cc1": "开关等",
"h1": "关闭 k304",
"i1": "开启 k304",
"j1": "k304 会在每个 HTTP 304 时断开连接。这有助于避免某些代理服务器卡住或突然停止加载页面,但也会显著降低性能。",
"k1": "重置设置",
"l1": "登录:",
"m1": "欢迎回来,",
"n1": "404: 文件不存在 &nbsp;┐( ´ -`)┌",
"o1": '或者你可能没有权限?尝试输入密码或 <a href="' + SR + '/?h">回家</a>',
"p1": "403: 访问被拒绝 &nbsp;~┻━┻",
"q1": '尝试输入密码或 <a href="' + SR + '/?h">回家</a>',
"r1": "回家",
".s1": "映射",
"t1": "操作",
"u2": "自上次服务器写入的时间$N( 上传 / 重命名 / ... )$N$N17d = 17 天$N1h23 = 1 小时 23 分钟$N4m56 = 4 分钟 56 秒",
"v1": "连接",
"v2": "将此服务器用作本地硬盘$N$N警告这将显示你的密码",
"w1": "切换到 https",
"x1": "更改密码",
"y1": "你的分享",
"z1": "解锁区域",
"ta1": "请先输入新密码",
"ta2": "重复以确认新密码:",
"ta3": "发现拼写错误;请重试",
}
};
var LANGS = ["eng", "nor"];
if (window.langmod)
langmod();
var d = Ls[sread("cpp_lang", LANGS) || lang] || Ls.eng || Ls.nor;
var d = Ls[sread("cpp_lang", Object.keys(Ls)) || lang] ||
Ls.eng || Ls.nor || Ls.chi;
for (var k in (d || {})) {
var f = k.slice(-1),
@@ -74,3 +118,42 @@ if (o && /[0-9]+$/.exec(o.innerHTML))
o.innerHTML = shumantime(o.innerHTML);
ebi('uhash').value = '' + location.hash;
(function() {
if (!ebi('x'))
return;
var pwi = ebi('lp');
function redo(msg) {
modal.alert(msg, function() {
pwi.value = '';
pwi.focus();
});
}
function mok(v) {
if (v !== pwi.value)
return redo(d.ta3);
pwi.setAttribute('name', 'pw');
ebi('la').value = 'chpw';
ebi('lf').submit();
}
function stars() {
var m = ebi('modali');
function enstars(n) {
setTimeout(function() { m.value = ''; }, n);
}
m.setAttribute('type', 'password');
enstars(17);
enstars(32);
enstars(69);
}
ebi('x').onclick = function (e) {
ev(e);
if (!pwi.value)
return redo(d.ta1);
modal.prompt(d.ta2, "y", mok, null, stars);
};
})();


@@ -64,16 +64,7 @@
</div>
<div class="os lin">
<pre>
yum install davfs2
{% if accs %}printf '%s\n' <b>{{ pw }}</b> k | {% endif %}mount -t davfs -ouid=1000 http{{ s }}://{{ ep }}/{{ rvp }} <b>mp</b>
</pre>
<p>make it automount on boot:</p>
<pre>
printf '%s\n' "http{{ s }}://{{ ep }}/{{ rvp }} <b>{{ pw }}</b> k" >> /etc/davfs2/secrets
printf '%s\n' "http{{ s }}://{{ ep }}/{{ rvp }} <b>mp</b> davfs rw,user,uid=1000,noauto 0 0" >> /etc/fstab
</pre>
<p>or you can use rclone instead, which is much slower but doesn't require root (plus it keeps lastmodified on upload):</p>
<p>rclone (v1.63 or later) is recommended:</p>
<pre>
rclone config create {{ aname }}-dav webdav url=http{{ s }}://{{ rip }}{{ hport }} vendor=owncloud pacer_min_sleep=0.01ms{% if accs %} user=k pass=<b>{{ pw }}</b>{% endif %}
rclone mount --vfs-cache-mode writes --dir-cache-time 5s {{ aname }}-dav:{{ rvp }} <b>mp</b>
@@ -85,6 +76,16 @@
<li>running <code>rclone mount</code> as root? add <code>--allow-other</code></li>
<li>old version of rclone? replace all <code>=</code> with <code>&nbsp;</code> (space)</li>
</ul>
<p>alternatively use davfs2 (requires root, is slower, forgets lastmodified-timestamp on upload):</p>
<pre>
yum install davfs2
{% if accs %}printf '%s\n' <b>{{ pw }}</b> k | {% endif %}mount -t davfs -ouid=1000 http{{ s }}://{{ ep }}/{{ rvp }} <b>mp</b>
</pre>
<p>make davfs2 automount on boot:</p>
<pre>
printf '%s\n' "http{{ s }}://{{ ep }}/{{ rvp }} <b>{{ pw }}</b> k" >> /etc/davfs2/secrets
printf '%s\n' "http{{ s }}://{{ ep }}/{{ rvp }} <b>mp</b> davfs rw,user,uid=1000,noauto 0 0" >> /etc/fstab
</pre>
<p>or the emergency alternative (gnome/gui-only):</p>
<!-- gnome-bug: ignores vp -->
<pre>
@@ -244,6 +245,9 @@ document.documentElement.className = (STG && STG.cpp_thm) || "{{ args.theme }}";
</script>
<script src="{{ r }}/.cpr/util.js?_={{ ts }}"></script>
<script src="{{ r }}/.cpr/svcs.js?_={{ ts }}"></script>
{%- if js %}
<script src="{{ js }}_={{ ts }}"></script>
{%- endif %}
</body>
</html>


@@ -184,6 +184,7 @@ html {
padding: 1.5em 2em;
border-width: .5em 0;
}
.logue code,
#modalc code,
#tt code {
color: #eee;
@@ -264,7 +265,11 @@ html.y #tth {
box-shadow: 0 .3em 3em rgba(0,0,0,0.5);
max-width: 50em;
max-height: 30em;
overflow: auto;
overflow-x: auto;
overflow-y: scroll;
}
#modalc.yk {
overflow-y: auto;
}
#modalc td {
text-align: unset;
@@ -380,6 +385,7 @@ html.y textarea:focus {
}
.mdo pre,
.mdo code,
.mdo code[class*="language-"],
.mdo tt {
font-family: 'scp', monospace, monospace;
font-family: var(--font-mono), 'scp', monospace, monospace;


@@ -658,7 +658,9 @@ function Donut(uc, st) {
}
function pos() {
return uc.fsearch ? Math.max(st.bytes.hashed, st.bytes.finished) : st.bytes.finished;
return uc.fsearch ?
Math.max(st.bytes.hashed, st.bytes.finished) :
st.bytes.inflight + st.bytes.finished;
}
r.on = function (ya) {
@@ -853,6 +855,7 @@ function up2k_init(subtle) {
setmsg(suggest_up2k, 'msg');
var parallel_uploads = ebi('nthread').value = icfg_get('nthread', u2j),
stitch_tgt = ebi('u2szg').value = icfg_get('u2sz', u2sz.split(',')[1]),
uc = {},
fdom_ctr = 0,
biggest_file = 0;
@@ -1207,7 +1210,7 @@ function up2k_init(subtle) {
match = false;
if (match) {
var msg = ['directory iterator got stuck on the following {0} items; good chance your browser is about to spinlock:<ul>'.format(missing.length)];
var msg = ['directory iterator got stuck trying to access the following {0} items; will skip:<ul>'.format(missing.length)];
for (var a = 0; a < Math.min(20, missing.length); a++)
msg.push('<li>' + esc(missing[a]) + '</li>');
@@ -1736,6 +1739,11 @@ function up2k_init(subtle) {
}
}
if (st.bytes.inflight && (st.bytes.inflight < 0 || !st.busy.upload.length)) {
console.log('insane inflight ' + st.bytes.inflight);
st.bytes.inflight = 0;
}
var mou_ikkai = false;
if (st.busy.handshake.length &&
@@ -2178,7 +2186,7 @@ function up2k_init(subtle) {
st.busy.head.push(t);
var xhr = new XMLHttpRequest();
xhr.onerror = function () {
xhr.onerror = xhr.ontimeout = function () {
console.log('head onerror, retrying', t.name, t);
if (!toast.visible)
toast.warn(9.98, L.u_enethd + "\n\nfile: " + t.name, t);
@@ -2222,6 +2230,7 @@ function up2k_init(subtle) {
try { orz(e); } catch (ex) { vis_exh(ex + '', 'up2k.js', '', '', ex); }
};
xhr.timeout = 34000;
xhr.open('HEAD', t.purl + uricom_enc(t.name), true);
xhr.send();
}
@@ -2247,7 +2256,7 @@ function up2k_init(subtle) {
console.log("sending keepalive handshake", t.name, t);
var xhr = new XMLHttpRequest();
xhr.onerror = function () {
xhr.onerror = xhr.ontimeout = function () {
if (t.t_busied != me) // t.done ok
return console.log('zombie handshake onerror', t.name, t);
@@ -2374,11 +2383,39 @@ function up2k_init(subtle) {
var arr = st.todo.upload,
sort = arr.length && arr[arr.length - 1].nfile > t.n;
for (var a = 0; a < t.postlist.length; a++)
if (!t.stitch_sz) {
// keep all connections busy
var bpc = (st.bytes.total - st.bytes.finished) / (parallel_uploads || 1),
ocs = 1024 * 1024,
stp = 1024 * 512,
ccs = ocs;
while (ccs < bpc) {
ocs = ccs;
ccs += stp; if (ccs < bpc) ocs = ccs;
ccs += stp; stp *= 2;
}
ocs = Math.floor(ocs / 1024 / 1024);
t.stitch_sz = Math.min(ocs, stitch_tgt);
}
for (var a = 0; a < t.postlist.length; a++) {
var nparts = [], tbytes = 0, stitch = t.stitch_sz;
if (t.nojoin && t.nojoin - t.postlist.length < 6)
stitch = 1;
--a;
for (var b = 0; b < stitch; b++) {
nparts.push(t.postlist[++a]);
tbytes += chunksize;
if (tbytes + chunksize > stitch * 1024 * 1024 || t.postlist[a + 1] - t.postlist[a] !== 1)
break;
}
arr.push({
'nfile': t.n,
'npart': t.postlist[a]
'nparts': nparts
});
}
t.nojoin = 0;
msg = null;
done = false;
@@ -2387,7 +2424,7 @@ function up2k_init(subtle) {
arr.sort(function (a, b) {
return a.nfile < b.nfile ? -1 :
/* */ a.nfile > b.nfile ? 1 :
a.npart < b.npart ? -1 : 1;
/* */ a.nparts[0] < b.nparts[0] ? -1 : 1;
});
}
@@ -2493,6 +2530,7 @@ function up2k_init(subtle) {
xhr.open('POST', t.purl, true);
xhr.responseType = 'text';
xhr.timeout = 42000;
xhr.send(JSON.stringify(req));
}
@@ -2534,7 +2572,10 @@ function up2k_init(subtle) {
function exec_upload() {
var upt = st.todo.upload.shift(),
t = st.files[upt.nfile],
npart = upt.npart,
nparts = upt.nparts,
pcar = nparts[0],
pcdr = nparts[nparts.length - 1],
snpart = pcar == pcdr ? pcar : ('' + pcar + '~' + pcdr),
tries = 0;
if (t.done)
@@ -2549,8 +2590,8 @@ function up2k_init(subtle) {
pvis.seth(t.n, 1, "🚀 send");
var chunksize = get_chunksize(t.size),
car = npart * chunksize,
cdr = car + chunksize;
car = pcar * chunksize,
cdr = (pcdr + 1) * chunksize;
if (cdr >= t.size)
cdr = t.size;
@@ -2560,14 +2601,19 @@ function up2k_init(subtle) {
var txt = unpre((xhr.response && xhr.response.err) || xhr.responseText);
if (txt.indexOf('upload blocked by x') + 1) {
apop(st.busy.upload, upt);
apop(t.postlist, npart);
for (var a = pcar; a <= pcdr; a++)
apop(t.postlist, a);
pvis.seth(t.n, 1, "ERROR");
pvis.seth(t.n, 2, txt.split(/\n/)[0]);
pvis.move(t.n, 'ng');
return;
}
if (xhr.status == 200) {
pvis.prog(t, npart, cdr - car);
var bdone = cdr - car;
for (var a = pcar; a <= pcdr; a++) {
pvis.prog(t, a, Math.min(bdone, chunksize));
bdone -= chunksize;
}
st.bytes.finished += cdr - car;
st.bytes.uploaded += cdr - car;
t.bytes_uploaded += cdr - car;
@@ -2576,18 +2622,21 @@ function up2k_init(subtle) {
}
else if (txt.indexOf('already got that') + 1 ||
txt.indexOf('already being written') + 1) {
console.log("ignoring dupe-segment error", t.name, t);
t.nojoin = t.nojoin || t.postlist.length;
console.log("ignoring dupe-segment with backoff", t.nojoin, t.name, t);
if (!toast.visible && st.todo.upload.length < 4)
toast.inf(10, L.u_cbusy);
}
else {
xhrchk(xhr, L.u_cuerr2.format(npart, Math.ceil(t.size / chunksize), t.name), "404, target folder not found (???)", "warn", t);
xhrchk(xhr, L.u_cuerr2.format(snpart, Math.ceil(t.size / chunksize), t.name), "404, target folder not found (???)", "warn", t);
chill(t);
}
orz2(xhr);
}
var orz2 = function (xhr) {
apop(st.busy.upload, upt);
apop(t.postlist, npart);
for (var a = pcar; a <= pcdr; a++)
apop(t.postlist, a);
if (!t.postlist.length) {
t.t_uploaded = Date.now();
pvis.seth(t.n, 1, 'verifying');
@@ -2601,28 +2650,48 @@ function up2k_init(subtle) {
btot = Math.floor(st.bytes.total / 1024 / 1024);
xhr.upload.onprogress = function (xev) {
var nb = xev.loaded;
st.bytes.inflight += nb - xhr.bsent;
var nb = xev.loaded,
db = nb - xhr.bsent;
if (!db)
return;
st.bytes.inflight += db;
xhr.bsent = nb;
pvis.prog(t, npart, nb);
xhr.timeout = 64000 + Date.now() - xhr.t0;
pvis.prog(t, pcar, nb);
};
xhr.onload = function (xev) {
try { orz(xhr); } catch (ex) { vis_exh(ex + '', 'up2k.js', '', '', ex); }
};
xhr.onerror = function (xev) {
xhr.onerror = xhr.ontimeout = function (xev) {
if (crashed)
return;
st.bytes.inflight -= (xhr.bsent || 0);
xhr.bsent = 0;
if (!toast.visible)
toast.warn(9.98, L.u_cuerr.format(npart, Math.ceil(t.size / chunksize), t.name), t);
toast.warn(9.98, L.u_cuerr.format(snpart, Math.ceil(t.size / chunksize), t.name), t);
t.nojoin = t.nojoin || t.postlist.length; // maybe rproxy postsize limit
console.log('chunkpit onerror,', ++tries, t.name, t);
orz2(xhr);
};
var chashes = [],
ctxt = t.hash[pcar],
plen = Math.floor(192 / nparts.length);
plen = plen > 9 ? 9 : plen < 2 ? 2 : plen;
for (var a = pcar + 1; a <= pcdr; a++)
chashes.push(t.hash[a].slice(0, plen));
if (chashes.length)
ctxt += ',' + plen + ',' + chashes.join('');
xhr.open('POST', t.purl, true);
xhr.setRequestHeader("X-Up2k-Hash", t.hash[npart]);
xhr.setRequestHeader("X-Up2k-Hash", ctxt);
xhr.setRequestHeader("X-Up2k-Wark", t.wark);
xhr.setRequestHeader("X-Up2k-Stat", "{0}/{1}/{2}/{3} {4}/{5} {6}".format(
pvis.ctr.ok, pvis.ctr.ng, pvis.ctr.bz, pvis.ctr.q, btot, btot - bfin,
@@ -2632,6 +2701,8 @@ function up2k_init(subtle) {
xhr.overrideMimeType('Content-Type', 'application/octet-stream');
xhr.bsent = 0;
xhr.t0 = Date.now();
xhr.timeout = 42000;
xhr.responseType = 'text';
xhr.send(t.fobj.slice(car, cdr));
}
@@ -2732,13 +2803,34 @@ function up2k_init(subtle) {
if (parallel_uploads > 16)
parallel_uploads = 16;
if (parallel_uploads > 7)
if (parallel_uploads > 6)
toast.warn(10, L.u_maxconn);
else if (toast.txt == L.u_maxconn)
toast.hide();
obj.value = parallel_uploads;
bumpthread({ "target": 1 });
}
var read_u2sz = function () {
var el = ebi('u2szg'), n = parseInt(el.value), dv = u2sz.split(',');
stitch_tgt = n = (
isNaN(n) ? dv[1] :
n < dv[0] ? dv[0] :
n > dv[2] ? dv[2] : n
);
if (n == dv[1]) sdrop('u2sz'); else swrite('u2sz', n);
if (el.value != n) el.value = n;
};
ebi('u2szg').addEventListener('blur', read_u2sz);
ebi('u2szg').onkeydown = function (e) {
if (anymod(e)) return;
var n = e.code == 'ArrowUp' ? 1 : e.code == 'ArrowDown' ? -1 : 0;
if (!n) return;
this.value = parseInt(this.value) + n;
read_u2sz();
}
function tgl_fsearch() {
set_fsearch(!uc.fsearch);
}


@@ -127,13 +127,13 @@ if ((document.location + '').indexOf(',rej,') + 1)
try {
console.hist = [];
var CMAXHIST = 1000;
var CMAXHIST = MOBILE ? 9000 : 44000;
var hook = function (t) {
var orig = console[t].bind(console),
cfun = function () {
console.hist.push(Date.now() + ' ' + t + ': ' + Array.from(arguments).join(', '));
if (console.hist.length > CMAXHIST)
console.hist = console.hist.slice(CMAXHIST / 2);
console.hist = console.hist.slice(CMAXHIST / 4);
orig.apply(console, arguments);
};
@@ -473,6 +473,24 @@ function crc32(str) {
}
function randstr(len) {
var ret = '';
try {
var ar = new Uint32Array(Math.floor((len + 3) / 4));
crypto.getRandomValues(ar);
for (var a = 0; a < ar.length; a++)
ret += ('000' + ar[a].toString(36)).slice(-4);
return ret.slice(0, len);
}
catch (ex) {
console.log('using unsafe randstr because ' + ex);
while (ret.length < len)
ret += ('000' + Math.floor(Math.random() * 1679616).toString(36)).slice(-4);
return ret.slice(0, len);
}
}
function clmod(el, cls, add) {
if (!el)
return false;
@@ -1396,10 +1414,10 @@ var tt = (function () {
o = ctr.querySelectorAll('*[tt]');
for (var a = o.length - 1; a >= 0; a--) {
o[a].onfocus = _cshow;
o[a].onblur = _hide;
o[a].onmouseenter = _dshow;
o[a].onmouseleave = _hide;
o[a].addEventListener('focus', _cshow);
o[a].addEventListener('blur', _hide);
o[a].addEventListener('mouseenter', _dshow);
o[a].addEventListener('mouseleave', _hide);
}
r.hide();
}
@@ -1536,6 +1554,7 @@ var modal = (function () {
var r = {},
q = [],
o = null,
scrolling = null,
cb_up = null,
cb_ok = null,
cb_ng = null,
@@ -1556,6 +1575,7 @@ var modal = (function () {
r.nofocus = 0;
r.show = function (html) {
tt.hide();
o = mknod('div', 'modal');
o.innerHTML = '<table><tr><td><div id="modalc">' + html + '</div></td></tr></table>';
document.body.appendChild(o);
@@ -1579,6 +1599,7 @@ var modal = (function () {
document.addEventListener('focus', onfocus);
document.addEventListener('selectionchange', onselch);
timer.add(scrollchk, 1);
timer.add(onfocus);
if (cb_up)
setTimeout(cb_up, 1);
@@ -1586,6 +1607,8 @@ var modal = (function () {
r.hide = function () {
timer.rm(onfocus);
timer.rm(scrollchk);
scrolling = null;
try {
ebi('modal-ok').removeEventListener('blur', onblur);
}
@@ -1604,13 +1627,28 @@ var modal = (function () {
r.hide();
if (cb_ok)
cb_ok(v);
}
};
var ng = function (e) {
ev(e);
r.hide();
if (cb_ng)
cb_ng(null);
}
};
var scrollchk = function () {
if (scrolling === true)
return;
var o = ebi('modalc'),
vis = o.offsetHeight,
all = o.scrollHeight,
nsc = 8 + vis < all;
if (scrolling !== nsc)
clmod(o, 'yk', !nsc);
scrolling = nsc;
};
var onselch = function () {
try {
@@ -2024,6 +2062,9 @@ function xhrchk(xhr, prefix, e404, lvl, tag) {
if (xhr.status == 404)
return toast.err(0, prefix + e404 + suf, tag);
if (!xhr.status && !errtxt)
return toast.err(0, prefix + L.xhr0);
if (is_cf && (xhr.status == 403 || xhr.status == 503)) {
var now = Date.now(), td = now - cf_cha_t;
if (td < 15000)


@@ -1,3 +1,376 @@
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0830-2311 `v1.14.3` important dedup fix
<img src="https://github.com/9001/copyparty/raw/hovudstraum/docs/logo.svg" width="250" align="right"/>
* read-only demo server at https://a.ocv.me/pub/demo/
* [docker image](https://github.com/9001/copyparty/tree/hovudstraum/scripts/docker) [similar software](https://github.com/9001/copyparty/blob/hovudstraum/docs/versus.md) [client testbed](https://cd.ocv.me/b/)
there is a [discord server](https://discord.gg/25J8CdTT6G) with an `@everyone` in case of future important updates, such as [vulnerabilities](https://github.com/9001/copyparty/security) (most recently 2023-07-23)
# important bugfix ☢️
this version fixes a file deduplication bug which was introduced in [v1.13.8](https://github.com/9001/copyparty/releases/tag/v1.13.8), released 2024-08-13
its worst-case outcome is **loss of data** in the following scenario:
* someone uploads a file into a folder where that filename is already taken, but the file contents are different, and the server already has a copy of that new file elsewhere under a different name
specific example:
* the server has two existing files, `logo.png` and `logo-v2.png`, in the same volume but not necessarily in the same folder, and those files contain different data
* you have a local copy of `logo-v2.png` on your laptop, but your local filename is `logo.png`
* you upload your local `logo.png` onto the server, into the same folder as the server's `logo.png`
* because the files contain different data, the server accidentally replaces the contents of `logo.png` with your version
if you have been using the database feature (globally with `-e2dsa` or volflag `e2ds`) and you suspect you may have hit this bug, it is a good idea to make a backup of the up2k databases for all your volumes (the files with names starting with `up2k.db`) before restarting copyparty and before you do anything else, especially if you do not have serverlogs going far back in time -- with the databases and/or the serverlogs it is possible to identify replaced files with some manual work
you can check if you hit the bug using one of the following two approaches:
* if your OS has the [gnu find](https://linux.die.net/man/1/find) command, do a search for empty files with `find -type f -size 0`
* using copyparty (any OS), do the following steps:
* make sure that reindex-on-startup is enabled; either globally with `-e2dsa` or volflag `e2ds`
* then install this new copyparty version
* click the search tab `[🔎]` and type the number `0` into the `maximum MiB` textbox
if you find any empty files with a filename that indicates it was autogenerated to avoid a name collision, for example `logo.png-1725040569.239207-kbt0xteO.png`, and the value of the number after `logo.png` is larger than `1723507200` (unixtime for 2024-08-13), then this indicates that `logo.png` may have been replaced by another upload
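if it helps, the two checks can be combined into a quick scan; the following is a rough, unofficial sketch (it assumes the collision-suffix format matches the example above), not a supported tool:
```python
# walk a volume and print zero-byte files whose names look like autogenerated
# collision names created after 2024-08-13 (unixtime 1723507200)
import os, re, sys

CUTOFF = 1723507200  # 2024-08-13
PAT = re.compile(r"^(?P<orig>.+)-(?P<ts>1[0-9]{9})\.[0-9]+-[0-9A-Za-z_-]+(\.[^.]+)?$")

root = sys.argv[1] if len(sys.argv) > 1 else "."
for dirpath, _, files in os.walk(root):
    for fn in files:
        m = PAT.match(fn)
        if not m or int(m.group("ts")) <= CUTOFF:
            continue
        path = os.path.join(dirpath, fn)
        try:
            if os.path.getsize(path) == 0:
                print("%s may have been replaced (found %s)" % (m.group("orig"), path))
        except OSError:
            pass
```
anything it prints is only a hint; verify against the serverlogs and/or the up2k databases before touching any files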
if you have the serverlogs from when the original upload of `logo.png` was made, then this can be used to identify the original contents of the file that was replaced, and to look for other copies. Please get in touch on the discord for assistance if necessary
----
## new features
* shares: add revival and expiration extension ad2371f8
* share-owners can revive expired shares for `--shr-rt` minutes (default 1 day)
* ...and extend expiration time by adding 1 minute or 1 hour to the timer
* [sfx customizer](https://github.com/9001/copyparty/blob/hovudstraum/scripts/make-sfx.sh) improvements 03b13e8a
* improved translations stripper
* add more examples
## bugfixes
* the dedup bug 3da62ec2
* tftp: support unmapped root 01233991
## other changes
* copyparty.exe: update to pyinstaller 6.10.0
* textviewer wordwrapping c4e2b0f9
* add logo 7037e736 ee359742
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0823-2307 `v1.14.2` bing chilling
## new features
* #94 @ultwcz translated the UI to Chinese (thx!) 92edea1d
* #84 improvements to [shares](https://github.com/9001/copyparty#shares): 8122dded
* if one or more files are selected for sharing, they are placed into a virtual folder
* more appropriate password UI for accessing protected shares
* human-readable timestamps in shares listing
* u2c (commandline uploader): support multiple exclusion patterns f356faa2
## bugfixes
* remove confusing logmessage when downloading a zerobyte file 9f034d9c
* shares: 7ff46966
* fix crash if the root volume is unmapped
* log-spam on config reload
* password coalescing
* add chrome support
## other changes
* #93 add html IDs to the tabstrip 461f3158
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0819-0014 `v1.14.1` one step forward
[if i turn back now, then this will always follow... one step forward, forward](https://youtu.be/xe3Wkzc0O3k?t=27)
* read-only demo server at https://a.ocv.me/pub/demo/
* [docker image](https://github.com/9001/copyparty/tree/hovudstraum/scripts/docker) [similar software](https://github.com/9001/copyparty/blob/hovudstraum/docs/versus.md) [client testbed](https://cd.ocv.me/b/)
there is a [discord server](https://discord.gg/25J8CdTT6G) with an `@everyone` in case of future important updates, such as [vulnerabilities](https://github.com/9001/copyparty/security) (most recently 2023-07-23)
## new features
* #92 users can change their own passwords 83fb569d 00da7440
* this feature is default-disabled; see [readme](https://github.com/9001/copyparty#user-changeable-passwords)
* #84 share files/folders by creating a temporary url 7c2beba5
* inspired by other file servers; click the share-button to create a link like `example.com/share/enkz8g374o8g`
* primary usecase is to sneak past authentication services (see issue description)
* the create-share UI has options to accept uploads into the share, and/or set expiration time
* this feature is default-disabled; see [readme](https://github.com/9001/copyparty#shares)
## bugfixes
* #93 fixes for vproxy / location-based / not-vhost-based reverse-proxying 0b46b1a6
* using `--rp-loc` to reverse-proxy from a subfolder made some UI stuff break
* listening on unix-sockets: 687df2fa
* fix `x-forwarded-for` support, and avoid a possible container-specific collision
* new syntax which allows setting unix-permissions and unix-group
* `-i unix:770:www:/tmp/party.sock` (see `--help-bind` for more examples)
* using relocation hooks (introduced in previous ver) could cause dedup issues c8f4aeae b0af4b37
* custom fonts using `@import` css statements 5a62cb48
* invert volume scrollwheel 7d8d9438
## other changes
* changed the button colors in theme 2 (pm-monokai) from red to yellow 5153db6b
* the red buttons look better, but are too confusing because usually red means off
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0813-0008 `v1.13.8` hook into place
## new features
* #86 intentional side-effects from hooks 6c94a63f
* use hooks (plugins) to conditionally move uploads into another folder depending on filename, extension, uploader ip/name, file contents, ...
* hooks can create additional files and tell copyparty to index them immediately, or delete an existing file based on some condition
* only one example so far though, [reloc-by-ext](https://github.com/9001/copyparty/tree/hovudstraum/bin/hooks#before-upload) which was a feature-request to dodge [sharex#3992](https://github.com/ShareX/ShareX/issues/3992)
* listen on unix-sockets ee9aad82
* `-i unix:/tmp/party.sock` stops listening on TCP ports entirely, and only listens on that unix-socket
* can be combined with regular sockets, `-i 127.0.0.1,unix:/tmp/a.sock`
* kinda buggy for now (need to `--xff-src=any` and doesn't let you set socket-perms yet), will be fixed in next ver
* makes it 10% faster, but more importantly offers tighter access control behind reverse-proxies
* inspired by https://www.oligo.security/blog/0-0-0-0-day-exploiting-localhost-apis-from-the-browser
* up2k stitching:
* more optimal stitch sizes for max throughput across connections c862ec1b
* improve fat32 compatibility 373194c3
* new option `--js-other` to load custom javascript dbd42bc6
* `--js-browser` affects the filebrowser page, `--js-other` does all the others
* endless possibilities, such as [adding a login-banner](https://github.com/9001/copyparty/blob/hovudstraum/contrib/plugins/banner.js) which [looks like this](https://github.com/user-attachments/assets/8ae8e087-b209-449c-b08d-74e040f0284b)
* list detected optional dependencies on startup 3db117d8
* hopefully reduces the guesswork / jank factor by a tiny bit
## bugfixes
* up2k stitching:
* put the request headers on a diet so they fit through more reverse-proxies 0da719f4
* fix deadlock on s390x (IBM mainframes) 250c8c56
## other changes
* add flags to disengage [features](https://github.com/9001/copyparty/tree/hovudstraum#feature-chickenbits) and [dependencies](https://github.com/9001/copyparty/tree/hovudstraum#dependency-chickenbits) in case they cause trouble 72361c99
* optimizations
* 6% faster on average d5c9c8eb
* docker: reduce ram usage 98ffaadf
* python2: reduce ram usage ebb19818
* docker: add [portainer howto](https://github.com/9001/copyparty/blob/hovudstraum/docs/examples/docker/portainer.md) e136231c
* update deps ca001c85
* pyftpdlib 1.5.10
* copyparty.exe: python 3.12.5
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0729-2028 `v1.13.6` not that big
## new features
* up2k.js: set clientside timeouts on http connections during upload 85e54980
* some reverse-proxy setups could cause uploads to hang indefinitely by eating requests; should recover nicely now
* audio-player shows statustext while loading 662541c6
* [bsod theme](https://github.com/9001/copyparty/tree/hovudstraum/contrib/themes) [(live demo)](https://cd.ocv.me/c/) 15ddcf53
## bugfixes
* fix bugs in the [long-distance upload optimizations](https://github.com/9001/copyparty/releases/tag/v1.13.5) in the previous version:
* up2k.js didn't necessarily use the expected chunksize when stitching 225bd80e
* u2c (commandline uploader): 8916bce3
* use the correct chunksize instead of overshooting like crazy
* could crash on exit if `-z` was enabled (so basically harmless)
* the "time spent uploading" statustext that was printed on exit could multiply by `-j` and exceed walltime
* misc ux 9bb6e0dc
* don't accept hotkeys until it's safe to do so
* improve messages regarding the [firefox crash](https://bugzilla.mozilla.org/show_bug.cgi?id=1790500)
* keep more console logs in memory (easier to debug)
* fix wordwrap in messageboxes on firefox a19a0fa9
## other changes
* changed the `xm` / "on message" [hook examples](https://github.com/9001/copyparty/tree/hovudstraum/bin/hooks#on-message) to reject users without write-access 99edba4f
* docker images were rebuilt on 2024-08-02, 23:30 UTC with new optimizations: 98ffaadf
* 😃 RAM usage decreased by `5-6 MiB` for most flavors; `10 MiB` for dj/iv
* 😕 image size grew by `4 MiB` (min), `6 MiB` (ac/im/iv), `9 MiB` (dj)
* 😃 startup time reduced to about half
* and avoids a deadlock on IBM mainframes
* updated comparison to other software 6b54972e
* `hfs2` is dead, `hfs3` and `filebrowser` improved
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0722-2323 `v1.13.5` american sized
## new features
* long-distance uploads are now **twice as fast** on average 132a8350
* boost tcp windowsize scaling by stitching together smaller chunks into bigger chonks so they fly better across the atlantic (a rough sizing sketch follows below this list)
* i'm not kidding, on the two routes we've tested this on we gained 1.6x / 160% (from US-West to Finland) and **2.6x / 260%** (Norway to US-East)
* files that are between 4 MiB and 256 MiB see the biggest improvement; 70% faster <= 768 MiB, 40% <= 1.5 GiB, 10% <= 6G
* if this turns out to be buggy, disable it serverside with `--u2sz 1,1,1` or clientside in the browser-ui: `[⚙️]` -> `up2k switches` -> change `64` to `1`
* u2c.py (CLI uploader): support stitching (☝️) + print a summary with hashing and upload speeds 987bce21
* video files can play as audio 53f1e3c9
* audio is extracted serverside to avoid wasting bandwidth
* extraction is lossy (converted to opus or mp3 depending on browser)
* togglebutton `🎧` in the gridview toolbar to enable/disable
* new hook: [into-the-cache-it-goes.py](https://github.com/9001/copyparty/tree/hovudstraum/bin/hooks#after-upload) d26a944d
* avoids a cloudflare bug (race condition?) where it will send truncated files to visitors on the very first load if several people simultaneously access a file that hasn't been viewed before
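for the curious, the stitch-size selection mentioned above (and visible in the up2k.js diff further up) boils down to the following; this is just an illustrative python port, not the actual client code:
```python
def stitch_size_mib(bytes_left, parallel_uploads, target_mib=64):
    # bytes each connection would get if the remaining queue was split evenly
    bpc = bytes_left / (parallel_uploads or 1)
    ocs = 1024 * 1024   # best candidate so far (bytes)
    stp = 1024 * 512    # step size; doubles every iteration
    ccs = ocs           # current candidate (bytes)
    while ccs < bpc:
        ocs = ccs
        ccs += stp
        if ccs < bpc:
            ocs = ccs
        ccs += stp
        stp *= 2
    # largest candidate that still keeps every connection busy, capped at the target
    return min(ocs // (1024 * 1024), target_mib)

for mib_left in (300, 48, 12):
    print(mib_left, "MiB left over 3 connections ->",
          stitch_size_mib(mib_left * 1024 * 1024, 3), "MiB per request")
```
in other words, stitches shrink toward the end of the queue so no connection sits idle, and never exceed the configured target (64 MiB by default, adjustable serverside with `--u2sz` or clientside in the up2k switches)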
## bugfixes
* inline markdown/logues rendered black-on-black in firefox 54 and some other browsers from 2017 and older eeef8091
* unintuitive folder thumbnail selection if folder contains both `Cover.jpg` and `cover.jpg` f955d2bd
* the gridview toolbar got undocked after viewing a pic/vid dc449bf8
## other changes
* #90 recommend rclone in favor of davfs2 ef0ecf87
* improved some error messages e565ad5f
* added helptext exporters to generate the online [html](https://ocv.me/copyparty/helptext.html) and [txt](https://ocv.me/copyparty/helptext.txt) editions 59533990
* mention that cloudflare is incompatible with uploading files larger than 383.9 GiB
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0716-0457 `v1.13.4` descript.ion
## new features
* "medialinks"; instead of the usual hotlink, the basic-uploader (as used by sharex and such) can return a link that opens the file in the media viewer c9281f89
* enable for all uploads with volflag `medialinks`, or just for one upload by adding `?media` to the post url
* thumbnails are now fully compatible with dirkeys/filekeys 52e06226
* `--th-covers` will respect filename order, selecting the first matching filename as the folder thumbnail 1cdb1702
* new hook: [bittorrent downloader](https://github.com/9001/copyparty/tree/hovudstraum/bin/hooks#on-message) bd3b3863 803e1565
* hooks: d749683d
* can be restricted to only run when user has specific permissions
* user permissions are also included in the json message to the hook
* new syntax to prepend args to the hook's command
* (all this will be better documented after some additional upcoming hook-related features, see `--help-hooks` for now)
* support `descript.ion` usenet metadata; will parse and render into directory listings when possible 927c3bce
* directory listings are now 2% slower, eh who's keeping count anyways
* tftp-server: 45259251
* improved support for buggy clients
* improved ipv6 support, especially on macos
* improved robustness on unreliable networks
* #85 new option `--gsel` to default-enable the client setting to select files by ctrl-clicking them in the grid 9a87ee2f
* music player: set audio volume by scrollwheel 36d6d29a
## bugfixes
* race-the-beam (downloading an unfinished upload) could get interrupted near the end, requiring a manual resume in the browser's download manager to finish f37187a0
* ftp-server: when accessing the root folder of servers without a root folder, it could mention inaccessible folders 84e8e1dd
* ftp-server: uploads will automatically replace existing files if user has delete perms 0a9f4c60
* windows 2000 expects this behavior, otherwise it'll freak out and delete stuff and then not actually upload it, nice
* new option `--ftp-no-ow` restores old default behavior of rejecting upload if target filename exists
* music player:
* stop trying to recover from a corrupted file if the user already fixed it manually 55a011b9
* support downloading the currently playing song regardless of current folder c06aa683
* music player preloader: db6059e1
* stop searching after 5 folders of nothing
* don't crash playback by walking into error-pages
* `--og` (rich discord embeds) was incompatible with viewing markdown docs d75a2c77
* `--cgen` (configfile generator) much less jank d5de3f2f
## other changes
* mention that HTTP/2 is still usually slower than HTTP/1.1 dfe7f1d9
* give up much sooner if a client is supposed to send a request body but isn't c549f367
* support running copyparty as a server on windows 2000 and winXP 8c73e0cb 2fd12a83
* updated deps 6e58514b
* copyparty.exe: python 3.12, pillow 10.4, pyinstaller 6.9
* dompurify 3.1.6
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0601-2324 `v1.13.3` 700+
## new features
* keep tags when transcoding music to opus/mp3 07ea629c
* useful for batch-downloading folders with [on-the-fly transcoding](https://github.com/9001/copyparty#zip-downloads)
* excessively large tags will be individually dropped (traktor beatmaps, cover-art, xmp)
## bugfixes
* optimization for large amounts (700+) of tcp connections / clients 07b2bf11
* `select()` was used for non-https downloads and mdns/ssdp initialization, which would start spinning at more than 1024 FDs, so now they `poll()` when possible (so not on windows); a simplified sketch follows below this list
* default max number of connections on windows was lowered to 486 since windows maxes out at 512 FDs
* the markdown editor autoindent would duplicate `<hr>` 692175f5
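for context, the select-vs-poll change above is roughly the following pattern (a simplified sketch, not the actual copyparty code); `select()` stops working once any file descriptor reaches FD_SETSIZE (usually 1024), while `poll()` has no such limit but is unavailable on windows:
```python
import select

def wait_readable(fds, timeout_s):
    # prefer poll() where the platform provides it (linux/macos, not windows)
    if hasattr(select, "poll"):
        p = select.poll()
        for fd in fds:
            p.register(fd, select.POLLIN)
        return [fd for fd, _ in p.poll(timeout_s * 1000)]  # poll() takes milliseconds
    r, _, _ = select.select(fds, [], [], timeout_s)  # select() takes seconds
    return r
```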
## other changes
* #83: more intuitive behavior for `--df` and the `df` volflag 5ad65450
* print helpful warning if OS restrictions make it impossible to persist config b629d18d
* censor filesystem paths in the download-as-zip error summary 5919607a
* `u2c.exe`: explain that https is disabled bef96176
* ux: 60c96f99
* hide lightbox buttons when a video is playing
* move audio seekbar text down a bit so it hides less of the waveform and minute-markers
* updated dompurify to 3.1.5 f00b9394
* updated docker images to alpine 3.20
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0510-1431 `v1.13.2` s3xmodit.zip
## new features
* play [compressed](https://a.ocv.me/pub/demo/music/chiptunes/compressed/#af-99f0c0e4) s3xmodit chiptunes/modules c0466279
* can now read gz/xz/zip-compressed s3m/xm/mod/it songs
* new filetypes supported: mdz, mdgz, mdxz, s3z, s3gz, s3xz, xmz, xmgz, xmxz, itz, itgz, itxz
* and if you need to fit even more tracks on the mixtape, [try mo3](https://a.ocv.me/pub/demo/music/chiptunes/compressed/#af-0bc9b877)
* option to batch-convert audio waveforms 38e4fdfe
* volflag to improve audio waveform compression with pngquant 82ce6862
* option to add or change mappings from file-extensions to mimetypes 560d7b66
* export and publish the `--help` text for online viewing 560d7b66
* now available [as html](https://ocv.me/copyparty/helptext.html) and as [plaintext](https://ocv.me/copyparty/helptext.txt), includes many features not documented in the readme
* another way to add your own UI translations 19d156ff
## bugfixes
* ensure OS signals are immediately received and processed 87c60a1e
* things like reload and shutdown signals from systemd could get lost/stuck
* fix mimetype detection for uppercase file extensions 565daee9
* when clicking a `.ts` file in the gridview, don't open it as text 925c7f0a
* ...as it's probably an mpeg transport-stream, not a typescript file
* be less aggressive in dropping volume caches e396c5c2
* very minor performance gain, only really relevant if you're doing something like burning a copyparty volume onto a CD
* previously, adding or removing any volume at all was enough to drop covers cache for all volumes; now this only happens if an intersecting volume is added/removed
## other changes
* updated dompurify to 3.1.2 566cbb65
* opengraph: add the full filename as url suffix 5c1e2390
* so discord picks a good filename when saving an image
----
# 💾 what to download?
| download link | is it good? | description |
| -- | -- | -- |
| **[copyparty-sfx.py](https://github.com/9001/copyparty/releases/latest/download/copyparty-sfx.py)** | ✅ the best 👍 | runs anywhere! only needs python |
| [a docker image](https://github.com/9001/copyparty/blob/hovudstraum/scripts/docker/README.md) | it's ok | good if you prefer docker 🐋 |
| [copyparty.exe](https://github.com/9001/copyparty/releases/latest/download/copyparty.exe) | ⚠️ [acceptable](https://github.com/9001/copyparty#copypartyexe) | for [win8](https://user-images.githubusercontent.com/241032/221445946-1e328e56-8c5b-44a9-8b9f-dee84d942535.png) or later; built-in thumbnailer |
| [u2c.exe](https://github.com/9001/copyparty/releases/download/v1.13.0/u2c.exe) | ⚠️ acceptable | [CLI uploader](https://github.com/9001/copyparty/blob/hovudstraum/bin/u2c.py) as a win7+ exe ([video](https://a.ocv.me/pub/demo/pics-vids/u2cli.webm)) |
| [copyparty.pyz](https://github.com/9001/copyparty/releases/latest/download/copyparty.pyz) | ⚠️ acceptable | similar to the regular sfx, [mostly worse](https://github.com/9001/copyparty#zipapp) |
| [copyparty32.exe](https://github.com/9001/copyparty/releases/latest/download/copyparty32.exe) | ⛔️ [dangerous](https://github.com/9001/copyparty#copypartyexe) | for [win7](https://user-images.githubusercontent.com/241032/221445944-ae85d1f4-d351-4837-b130-82cab57d6cca.png) -- never expose to the internet! |
| [cpp-winpe64.exe](https://github.com/9001/copyparty/releases/download/v1.10.1/copyparty-winpe64.exe) | ⛔️ dangerous | runs on [64bit WinPE](https://user-images.githubusercontent.com/241032/205454984-e6b550df-3c49-486d-9267-1614078dd0dd.png), otherwise useless |
* except for [u2c.exe](https://github.com/9001/copyparty/releases/download/v1.13.0/u2c.exe), all of the options above are mostly equivalent
* the zip and tar.gz files below are just source code
* python packages are available at [PyPI](https://pypi.org/project/copyparty/#files)
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0506-0029 `v1.13.1` ctrl-v
@@ -30,7 +403,7 @@
----
this release introduces **[copyparty.pyz](https://github.com/9001/copyparty/releases/latest/download/copyparty-sfx.py)**, yet another way to bring copyparty where it's needed -- very limited and with many drawbacks (see [readme](https://github.com/9001/copyparty#zipapp)) but may work when the others don't
this release introduces **[copyparty.pyz](https://github.com/9001/copyparty/releases/latest/download/copyparty.pyz)**, yet another way to bring copyparty where it's needed -- very limited and with many drawbacks (see [readme](https://github.com/9001/copyparty#zipapp)) but may work when the others don't
@@ -544,7 +917,7 @@ dnf install https://ocv.me/copyparty/fedora/39/python3-copyparty.fc39.noarch.rpm
## other changes
* improved [systemd example](https://github.com/9001/copyparty/tree/hovudstraum/contrib/systemd) with hardening and a better example config
* logfiles are flushed for every line written; can be disabled with `--no-logflush` for ~3% more performance best-case
* iphones probably won't broadcast cover-art to car stereos over bluetooth anymore since the thingamajig in iOS that's in charge of that doesn't have cookie-access, and strapping in the auth is too funky so let's stop doing that b7723ac245b8b3e38d6410891ef1aa92d4772114
* iphones probably won't broadcast cover-art to car stereos over bluetooth anymore since the thingamajig in iOS that's in charge of that doesn't have cookie-access, and strapping in the auth is too funky so let's stop doing that b7723ac2
* can be remedied by enabling filekeys and granting unauthenticated people access that way, but that's too much effort for anyone to bother with I'm sure
@@ -813,12 +1186,12 @@ okay, i swear this is the last version for weeks! probably
* [r0c is much better](https://github.com/9001/r0c) than this joke
## bugfixes
* 163e3fce46122d64bf824762b6733ff2c3551ba5 the `x-forwarded-for` header was ignored if the nearest reverse-proxy is not asking from 127.0.0.1, which broke client IPs in containerized deployments
* 163e3fce the `x-forwarded-for` header was ignored if the nearest reverse-proxy is not asking from 127.0.0.1, which broke client IPs in containerized deployments
* the serverlog will now explain how to trust the reverse-proxy to provide client IPs, but basically,
* `--xff-hdr` specifies which header to read the client's real ip from
* `--xff-src` is an allowlist of IP-addresses to trust that header from
* a62f744a187bc9f821b540e8bb4e0b9a67bd01c8 if copyparty was started while an external HDD was not connected, and that volume's index was stored elsewhere, then the index would get wiped (since all the files are gone)
* 3b8f66c0d5c27a68841814ec06f1758f146a5ff5 javascript could crash while uploading from a very unreliable internet connection
* a62f744a if copyparty was started while an external HDD was not connected, and that volume's index was stored elsewhere, then the index would get wiped (since all the files are gone)
* 3b8f66c0 javascript could crash while uploading from a very unreliable internet connection
## other changes
* copyparty.exe: updated pillow to 10.0.1 which fixes the webp cve
@@ -864,7 +1237,7 @@ hello! it's been a while, an entire day even...
* default compression levels are gz:3, bz2:2, xz:1; override with `?tar=gz:9`
# bugfixes
* c1efd227b7377144a5760bc6cff64f4e87b626d9 symlink-deduplicated files got indexed with the wrong last-modified timestamp
* c1efd227 symlink-deduplicated files got indexed with the wrong last-modified timestamp
* mostly inconsequential; would cause the dupe's uploader-ip to be forgotten on the next server restart since it would reindex to "fix" the timestamp
* when linking [a search query](https://a.ocv.me/pub/#q=tags%20like%20soundsho*) it loads the results faster
@@ -879,12 +1252,12 @@ hello! it's been a while, an entire day even...
## new features
* iPhones and iPads are now able to...
* 9986136dfb2364edb35aa9fbb87410641c6d6af3 play entire albums while the screen is off without the music randomly stopping
* 9986136d play entire albums while the screen is off without the music randomly stopping
* apple keeps breaking AudioContext in new and interesting ways; time to give up (no more equalizer)
* 1c0d978979a703edeb792e552b18d3b7695b2d90 perform search queries and execute js code
* 1c0d9789 perform search queries and execute js code
* by translating [smart-quotes](https://stackoverflow.com/questions/48678359/ios-11-safari-html-disable-smart-punctuation) into regular `'` and `"` characters
* python 3.12 support
* technically a bugfix since it was added [a year ago](https://github.com/9001/copyparty/commit/32e22dfe84d5e0b13914b4d0e15c1b8c9725a76d) way before the first py3.12 alpha was released but turns out i botched it, oh well
* technically a bugfix since it was added [a year ago](https://github.com/9001/copyparty/commit/32e22dfe) way before the first py3.12 alpha was released but turns out i botched it, oh well
* filter error messages so they never include the filesystem path where copyparty's python files reside
* print more context in server logs if someone hits an unexpected permission-denied
@@ -1105,8 +1478,8 @@ Thanks for flying copyparty! And especially if you decide to continue doing so :
```bash
(gzip -dc access.log.*.gz; cat access.log) | sed -r 's/" [0-9]+ .*//' | grep -E 'cpr/.*%2[^0]' | grep -vF data:image/svg
```
* 77f1e5144455eb946db7368792ea11c934f0f6da fixes an extremely unlikely race-condition (see the commit for details)
* 8f59afb1593a75b8ce8c91ceee304097a07aea6e fixes another race-condition which is a bit worse:
* 77f1e514 fixes an extremely unlikely race-condition (see the commit for details)
* 8f59afb1 fixes another race-condition which is a bit worse:
* the unpost feature could collide with other database activity, with the worst-case outcome being aborted batch operations, for example a directory move or a batch-rename which stops halfway
----
@@ -1280,7 +1653,7 @@ don't get excited! nothing new and revolutionary, but `xvol` and `xdev` changed
# 2023-0426-2300 `v1.6.15` unexpected boost
## new features
* 30% faster folder listings due to [the very last thing](https://github.com/9001/copyparty/commit/55c74ad164633a0a64dceb51f7f534da0422cbb5) i'd ever expect to be a bottleneck, [thx perf](https://docs.python.org/3.12/howto/perf_profiling.html)
* 30% faster folder listings due to [the very last thing](https://github.com/9001/copyparty/commit/55c74ad1) i'd ever expect to be a bottleneck, [thx perf](https://docs.python.org/3.12/howto/perf_profiling.html)
* option to see the lastmod timestamps of symlinks instead of the target files
* makes the turbo mode of [u2cli, the commandline uploader and folder-sync tool](https://github.com/9001/copyparty/blob/hovudstraum/bin/up2k.py) more turbo since copyparty dedupes uploads by symlinking to an existing copy and the symlink is stamped with the deduped file's lastmod
* **webdav:** enabled by default (because rclone will want this), can be disabled with arg `--dav-rt` or volflag `davrt`
@@ -1484,7 +1857,7 @@ don't get excited! nothing new and revolutionary, but `xvol` and `xdev` changed
the commandline up2k upload / filesearch client, now as a standalone windows exe
* based on python 3.7 so it runs on 32bit windows7 or anything newer
* *no https support* (saves space + the python3.7 openssl is getting old)
* built from b39ff92f34e3fca389c78109d20d5454af761f8e so it can do long filepaths and mojibake
* built from b39ff92f so it can do long filepaths and mojibake
----
@@ -1691,7 +2064,7 @@ but nothing is affected (that i know of):
* tar/zip-download of hidden folders
* unpost filtering was buggy for non-ascii characters
* moving a deduplicated file on a volume where deduplication was since disabled
* improved the [linux 6.0.16](https://utcc.utoronto.ca/~cks/space/blog/linux/KernelBindBugIn6016) kernel bug [workaround](https://github.com/9001/copyparty/commit/9065226c3d634a9fc15b14a768116158bc1761ad) because there is similar funk in 5.x
* improved the [linux 6.0.16](https://utcc.utoronto.ca/~cks/space/blog/linux/KernelBindBugIn6016) kernel bug [workaround](https://github.com/9001/copyparty/commit/9065226c) because there is similar funk in 5.x
* add custom text selection colors because chrome is currently broken on fedora
* blockdevs (`/dev/nvme0n1`) couldn't be downloaded as files
* misc fixes for location-based reverse-proxying
@@ -1720,7 +2093,7 @@ hello from warsaw airport (goodbye japan ;_;)
* browser ui didn't allow specifying number of threads for file search
* dont panic if a digit key is pressed while viewing an image
* workaround [linux kernel bug](https://utcc.utoronto.ca/~cks/space/blog/linux/KernelBindBugIn6016) causing log spam on dualstack
* ~~related issue (also mostly harmless) will be fixed next release 010770684db95bece206943768621f2c7c27bace~~
* ~~related issue (also mostly harmless) will be fixed next release 01077068~~
* they fixed it in linux 6.1 so these workarounds will be gone too
@@ -2686,7 +3059,7 @@ fixed another dumdum, sorry for the spam
* the ftp server is not compatible with python 3.12 (releasing october 2023)
* will be fixed in a [future version of pyftpdlib](https://github.com/giampaolo/pyftpdlib/issues/560)
the sfx was built from https://github.com/9001/copyparty/commit/39e7a7a2311ab8da43b2a9a18ae39d06202105e3
the sfx was built from https://github.com/9001/copyparty/commit/39e7a7a2
@@ -3467,7 +3840,7 @@ we did it reddit 👉😎👉
* latest gzip edition of the sfx: [v0.11.18](https://github.com/9001/copyparty/releases/tag/v0.11.18)
* if upgrading from v0.11.x or before, see [v0.12.4](https://github.com/9001/copyparty/releases/tag/v0.12.4)
note: `copyparty-sfx.py` is https://github.com/9001/copyparty/commit/5955940b82adddb7149125a60463aba22f1c8c31 which fixes upload eta
note: `copyparty-sfx.py` is https://github.com/9001/copyparty/commit/5955940b which fixes upload eta
## new features
* provide password using basic-authentication
@@ -4937,7 +5310,7 @@ nothing really important happened since [v0.11.6](https://github.com/9001/copypa
* this release fixes a missing permission check which could allow users to download write-only folders
* this bug was introduced 19 days ago, in `v0.10.17`
* the requirement to be affected is write-only folders mounted within readable folders
* and the worst part is there was a unit-test exactly for this, https://github.com/9001/copyparty/commit/273ca0c8da0d94f9d06ca16bd86c0301d9d06455 way overdue
* and the worst part is there was a unit-test exactly for this, https://github.com/9001/copyparty/commit/273ca0c8 way overdue
* also fixes minor bugs introduced in `v0.11.1`
* this version is the same as `v0.11.5` on pypi
@@ -5121,8 +5494,8 @@ in other news, minor ui tweaks:
* a few lightmode adjustments
* less cpu usage? should be
`copyparty-sfx.py` (latest) made from c5db7c1a0c8f6ab23138ad7ea7642a6260e7da9b (v0.10.15-15) fixes `-j` (multiprocessing/high-performance)
`copyparty-sfx-5a579db.py` (old) made from 5a579dba52e46c202b79c3d80c3b1c996c7b2e4a (v0.10.15-5) reduced the size
`copyparty-sfx.py` (latest) made from c5db7c1a (v0.10.15-15) fixes `-j` (multiprocessing/high-performance)
`copyparty-sfx-5a579db.py` (old) made from 5a579dba (v0.10.15-5) reduced the size
@@ -5435,7 +5808,7 @@ and i just realized i never added runtime tag scanning so copyparty will have to
use `-e2dsa` and `-e2ts` to enable the media tag features globally, or enable/disable them per-volume (see readme)
**NOTE:** older fuse clients (from before 5e3775c1afc9438f9930080a9b8542a063ba1765 / older than v0.8.0) must be upgraded for this copyparty release, however the new client still supports connecting to old servers
**NOTE:** older fuse clients (from before 5e3775c1 / older than v0.8.0) must be upgraded for this copyparty release, however the new client still supports connecting to old servers
other changes include
* support chunked PUT requests from curl
@@ -5655,7 +6028,7 @@ valvrave-stop.jpg
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2020-0818-1822 `v0.5.2` da setter vi punktum
full disclaimer: `copyparty-sfx.py` was built using `sfx.py` from ~~82e568d4c9f25bfdfd1bf5166f0ebedf058723ee~~ f550a8171d298992f4ef569d2fc99a6037a44ea8
full disclaimer: `copyparty-sfx.py` was built using `sfx.py` from ~~82e568d4~~ f550a817


@@ -1,7 +1,7 @@
## devnotes toc
* top
* [future plans](#future-plans) - some improvement ideas
* [future ideas](#future-ideas) - list of dreams which will probably never happen
* [design](#design)
* [up2k](#up2k) - quick outline of the up2k protocol
* [why not tus](#why-not-tus) - I didn't know about [tus](https://tus.io/)
@@ -12,6 +12,8 @@
* [write](#write)
* [admin](#admin)
* [general](#general)
* [event hooks](#event-hooks) - on writing your own [hooks](../README.md#event-hooks)
* [hook effects](#hook-effects) - hooks can cause intentional side-effects
* [assumptions](#assumptions)
* [mdns](#mdns)
* [sfx repack](#sfx-repack) - reduce the size of an sfx by removing features
@@ -25,9 +27,9 @@
* [discarded ideas](#discarded-ideas)
# future plans
# future ideas
some improvement ideas
list of dreams which will probably never happen
* the JS is a mess -- a ~~preact~~ rewrite would be nice
* preferably without build dependencies like webpack/babel/node.js, maybe a python thing to assemble js files into main.js
@@ -55,8 +57,8 @@ quick outline of the up2k protocol, see [uploading](https://github.com/9001/cop
* server creates the `wark`, an identifier for this upload
* `sha512( salt + filesize + chunk_hashes )`
* and a sparse file is created for the chunks to drop into
* client uploads each chunk
* header entries for the chunk-hash and wark
* client sends a series of POSTs, with one or more consecutive chunks in each
* header entries for the chunk-hashes (comma-separated) and wark
* server writes chunks into place based on the hash
* client does another handshake with the hashlist; server replies with OK or a list of chunks to reupload
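below is a minimal sketch (not copyparty's actual implementation) of how a client could derive the `wark` from the outline above; the chunk size, separators and hex encoding are assumptions:

```python
# sketch only: chunk size, separators and hex encoding are assumptions,
# not copyparty's exact serialization
import hashlib


def chunk_hashes(path, chunksize=16 * 1024 * 1024):
    """sha512 of each fixed-size chunk of the file"""
    hashes = []
    with open(path, "rb") as f:
        while True:
            buf = f.read(chunksize)
            if not buf:
                break
            hashes.append(hashlib.sha512(buf).hexdigest())
    return hashes


def wark(salt, filesize, hashes):
    """sha512( salt + filesize + chunk_hashes ) as in the outline above"""
    blob = salt + str(filesize) + "".join(hashes)
    return hashlib.sha512(blob.encode("utf-8")).hexdigest()
```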
@@ -137,10 +139,12 @@ authenticate using header `Cookie: cppwd=foo` or url param `&pw=foo`
| GET | `?tar&w` | pregenerate webp thumbnails |
| GET | `?tar&j` | pregenerate jpg thumbnails |
| GET | `?tar&p` | pregenerate audio waveforms |
| GET | `?shares` | list your shared files/folders |
| GET | `?ups` | show recent uploads from your IP |
| GET | `?ups&filter=f` | ...where URL contains `f` |
| GET | `?mime=foo` | specify return mimetype `foo` |
| GET | `?v` | render markdown file at URL |
| GET | `?v` | open image/video/audio in mediaplayer |
| GET | `?txt` | get file at URL as plaintext |
| GET | `?txt=iso-8859-1` | ...with specific charset |
| GET | `?th` | get image/video at URL as thumbnail |
@@ -169,8 +173,12 @@ authenticate using header `Cookie: cppwd=foo` or url param `&pw=foo`
| mPOST | | `f=FILE` | upload `FILE` into the folder at URL |
| mPOST | `?j` | `f=FILE` | ...and reply with json |
| mPOST | `?replace` | `f=FILE` | ...and overwrite existing files |
| mPOST | `?media` | `f=FILE` | ...and return medialink (not hotlink) |
| mPOST | | `act=mkdir`, `name=foo` | create directory `foo` at URL |
| POST | `?delete` | | delete URL recursively |
| POST | `?eshare=rm` | | stop sharing a file/folder |
| POST | `?eshare=3` | | set expiration to 3 minutes |
| jPOST | `?share` | (complicated) | create temp URL for file/folder |
| jPOST | `?delete` | `["/foo","/bar"]` | delete `/foo` and `/bar` recursively |
| uPOST | | `msg=foo` | send message `foo` into server log |
| mPOST | | `act=tput`, `body=TEXT` | overwrite markdown document at URL |
@@ -202,6 +210,32 @@ upload modifiers:
| GET | `?pw=x` | logout |
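for reference, a minimal sketch of calling a few of the endpoints above with python-requests; the server URL, folder, filenames and password are placeholders, and the query strings / form fields are taken from the tables:

```python
# sketch only; base URL and password are placeholders
import requests

base = "http://127.0.0.1:3923/inc/"   # some folder on a copyparty server
auth = {"Cookie": "cppwd=hunter2"}    # or append &pw=hunter2 to the URL

# GET ?txt : fetch the file at the URL as plaintext
txt = requests.get(base + "readme.md?txt", headers=auth).text

# mPOST act=mkdir : create directory "foo" at the URL
requests.post(base, headers=auth,
              files={"act": (None, "mkdir"), "name": (None, "foo")})

# mPOST ?j f=FILE : upload a file into the folder and get a json reply
with open("some.bin", "rb") as f:
    print(requests.post(base + "?j", headers=auth, files={"f": f}).json())
```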
# event hooks
on writing your own [hooks](../README.md#event-hooks)
## hook effects
hooks can cause intentional side-effects, such as redirecting an upload into another location, or creating+indexing additional files, or deleting existing files, by returning json on stdout
* `reloc` can redirect uploads before/after uploading has finished, based on filename, extension, file contents, uploader ip/name etc.
* `idx` informs copyparty about a new file to index as a consequence of this upload
* `del` tells copyparty to delete an unrelated file by vpath
for these to take effect, the hook must be defined with the `c1` flag; see example [reloc-by-ext](https://github.com/9001/copyparty/blob/hovudstraum/bin/hooks/reloc-by-ext.py)
a subset of effect types is available for a subset of hook types:
* most hook types (xbu/xau/xbr/xar/xbd/xad/xm) support `idx` and `del` for all http protocols (up2k / basic-uploader / webdav), but not ftp/tftp/smb
* most hook types will abort/reject the action if the hook returns nonzero, assuming flag `c` is given, see examples [reject-extension](https://github.com/9001/copyparty/blob/hovudstraum/bin/hooks/reject-extension.py) and [reject-mimetype](https://github.com/9001/copyparty/blob/hovudstraum/bin/hooks/reject-mimetype.py)
* `xbu` supports `reloc` for all http protocols (up2k / basic-uploader / webdav), but not ftp/tftp/smb
* `xau` supports `reloc` for basic-uploader / webdav only, not up2k or ftp/tftp/smb
* so clients like sharex are supported, but not dragdrop into browser
to trigger indexing of files `/foo/1.txt` and `/foo/bar/2.txt`, a hook can `print(json.dumps({"idx":{"vp":["/foo/1.txt","/foo/bar/2.txt"]}}))` (and replace "idx" with "del" to delete instead)
* note: paths starting with `/` are absolute URLs, but you can also do `../3.txt` relative to the destination folder of each uploaded file
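as a concrete sketch of the above (the argv layout and the sidecar filename are assumptions; the json shape is the one documented above):

```python
#!/usr/bin/env python3
# sketch of a hook body; assumes it is registered with the c1 flag and
# that the upload's virtual path arrives as the first argument
import json
import sys


def main():
    vp = sys.argv[1]  # e.g. /foo/song.flac (argv layout is an assumption)

    # ask copyparty to index a sidecar file created by this hook;
    # swap "idx" for "del" to delete by vpath instead (as noted above)
    print(json.dumps({"idx": {"vp": [vp + ".json"]}}))


if __name__ == "__main__":
    main()
```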
# assumptions
## mdns
@@ -325,10 +359,6 @@ can be reproduced with `--no-sendfile --s-wr-sz 8192 --s-wr-slp 0.3 --rsp-slp 6`
* remove brokers / multiprocessing stuff; https://github.com/9001/copyparty/tree/no-broker
* reduce the nesting / indirections in `HttpCli` / `httpcli.py`
* nearly zero benefit from stuff like replacing all the `self.conn.hsrv` with a local `hsrv` variable
* reduce up2k roundtrips
* start from a chunk index and just go
* terminate client on bad data
* not worth the effort, just throw enough connections at it
* single sha512 across all up2k chunks?
* crypto.subtle cannot into streaming, would have to use hashwasm, expensive
* separate sqlite table per tag


@@ -0,0 +1,45 @@
the following setup appears to work (copyparty starts, accepts uploads, is able to persist config)
tested on debian 12 using [portainer-ce](https://docs.portainer.io/start/install-ce/server/docker/linux) with [docker-ce](https://docs.docker.com/engine/install/debian/) as root (not rootless)
before making the container, first `mkdir /etc/copyparty /srv/pub` which will be bind-mounts into the container
> both `/etc/copyparty` and `/srv/pub` are examples; you can change them if you'd like
put your copyparty config files directly into `/etc/copyparty` and the files to share inside `/srv/pub`
on first startup, copyparty will create a subfolder inside `/etc/copyparty` called `copyparty` where it puts some runtime state; for example, replacing `/etc/copyparty/copyparty/cert.pem` with another TLS certificate is a quick and dirty way to get valid HTTPS (if you really want copyparty to handle that instead of a reverse-proxy)
## in portainer:
```
environments -> local -> containers -> add container:
name = copyparty-ac
registry = docker hub
image = copyparty/ac
always pull = no
manual network port publishing:
3923 to 3923 [TCP]
advanced -> command & logging:
console = interactive & tty
advanced -> volumes -> map additional volume:
container = /cfg [Bind]
host = /etc/copyparty [Writable]
advanced -> volumes -> map additional volume:
container = /w [Bind]
host = /srv/pub [Writable]
```
notes:
* `/cfg` is where copyparty expects to find its config files; `/etc/copyparty` is just an example mapping to that
* `/w` is where copyparty expects to find the folder to share; `/srv/pub` is just an example mapping to that
* the volumes must be bind-mounts to avoid permission issues (or so the theory goes)

docs/logo.svg Normal file

@@ -0,0 +1,210 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg
width="300mm"
height="207mm"
viewBox="0 0 300 207"
version="1.1"
id="svg1"
inkscape:version="1.3.2 (091e20ef0f, 2023-11-25)"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:dc="http://purl.org/dc/elements/1.1/">
<title
id="title1">copyparty_logo</title>
<defs
id="defs1">
<linearGradient
inkscape:collect="always"
id="linearGradient1">
<stop
style="stop-color:#ffcc55;stop-opacity:1"
offset="0"
id="stop1" />
<stop
style="stop-color:#ffcc00;stop-opacity:1"
offset="0.2"
id="stop2" />
<stop
style="stop-color:#ff8800;stop-opacity:1"
offset="1"
id="stop3" />
</linearGradient>
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient1"
id="linearGradient2"
x1="15"
y1="15"
x2="15"
y2="143"
gradientUnits="userSpaceOnUse" />
</defs>
<metadata
id="metadata5">
<rdf:RDF>
<cc:Work
rdf:about="">
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
<dc:title>copyparty_logo</dc:title>
<dc:source>github.com/9001/copyparty</dc:source>
</cc:Work>
</rdf:RDF>
</metadata>
<g
inkscape:groupmode="layer"
id="layer1"
inkscape:label="kassett">
<rect
style="fill:#333333"
id="rect1"
width="300"
height="205"
x="0"
y="0"
rx="12"
ry="12" />
<rect
style="fill:url(#linearGradient2)"
id="rect2"
width="270"
height="128"
x="15"
y="15"
rx="8"
ry="8" />
<rect
style="fill:#333333"
id="rect3"
width="172"
height="52"
x="64"
y="72"
rx="26"
ry="26" />
<circle
style="fill:#cccccc"
id="circle1"
cx="91"
cy="98"
r="18" />
<circle
style="fill:#cccccc"
id="circle2"
cx="209"
cy="98"
r="18" />
<path
style="fill:#737373;stroke-width:1px"
d="m 48,207 10,-39 c 1.79,-6.2 5.6,-7.8 12,-8 60,-1 100,-1 160,0 6.4,0.2 10,1.8 12,8 l 10,39 z"
id="path1"
sodipodi:nodetypes="ccccccc" />
</g>
<g
inkscape:groupmode="layer"
id="layer3"
inkscape:label="tekst"
style="display:none">
<text
xml:space="preserve"
style="font-size:38.8056px;line-height:1.25;font-family:Akbar;-inkscape-font-specification:Akbar;letter-spacing:3.70417px;word-spacing:0px;fill:#333333"
x="47.153069"
y="55.548954"
id="text1"><tspan
sodipodi:role="line"
id="tspan1"
x="47.153069"
y="55.548954"
style="-inkscape-font-specification:Akbar"
rotate="0 0">copyparty</tspan></text>
</g>
<g
inkscape:groupmode="layer"
id="layer4"
inkscape:label="stensatt">
<path
d="m 63.5,50.9 q -0.85,0.93 -4.73,2.3 -3.6,1.3 -4.4,1.3 -3.3,0 -5.1,-2.1 -1.75,-2 -1.75,-5.36 0,-4.6 3.76,-7.64 3.3,-2.7 7.3,-2.7 0.4,0 0.93,0.74 0.54,0.7 0.54,1.16 0,2.06 -2.2,2.7 -1.36,0.4 -4.04,1.16 -2.2,1.16 -2.2,4.4 0,3.2 2.9,3.2 0.85,0 0.85,0 0.54,0 1.44,-0.16 1.1,-0.23 2.9,-0.74 1.8,-0.54 2.13,-0.54 0.4,0 1.75,0.6 z"
style="fill:#333333"
id="path11" />
<path
d="m 87.6,45 q 0,4.2 -3.7,6.95 -3.2,2.3 -6.87,2.3 -3.4,0 -6,-2.6 -2.5,-2.6 -2.5,-6 0,-3.6 3.14,-6.64 3.2,-3 6.8,-3 3.5,0 6.3,2.76 2.83,2.76 2.83,6.25 z m -3.4,0.16 q 0,-2.25 -1.75,-3.7 -1.7,-1.5 -4,-1.5 -0.1,0 -1.6,1.6 -1.44,1.55 -2.44,1.55 -0.6,0 -0.8,-0.3 -1.16,2.3 -1.16,3 0,2.25 2.13,3.4 1.6,0.9 3.6,0.9 2,0 3.76,-1.1 2.25,-1.4 2.25,-3.84 z"
style="fill:#333333"
id="path12" />
<path
d="m 112.8,46.8 q 0,2.8 -1.9,4.4 -1.8,1.5 -4.7,1.5 -0.7,0 -2.7,-0.4 -1.9,-0.4 -2.6,-0.4 -2.1,0 -2.1,2.64 0,0.85 0.23,2.6 0.2,1.75 0.2,2.6 0,1.9 -0.77,2.83 -1.44,0 -3,-0.85 -1.46,-9.5 -1.46,-12 0,-3.65 1.75,-8.1 2.37,-6.05 6.45,-6.05 3.7,0 7.3,4.1 3.3,3.84 3.3,7.14 z m -3.8,0.2 q -0.6,-2.2 -2.6,-4.4 -2.3,-2.5 -4.3,-2.5 -1.3,0 -2.33,2.2 -0.9,1.8 -0.9,3.26 0,0.47 0.38,1.24 0.43,0.8 0.85,0.8 1.1,0 3.2,0.3 2.1,0.3 3.2,0.3 0.3,0 1.3,-0.4 1,-0.47 1.3,-0.74 z"
style="fill:#333333"
id="path13" />
<path
d="m 133,40 q -2.1,4.1 -3.2,7 -0.1,0.3 -1.6,4.5 -0.4,1.36 -1,4.2 -0.5,2.83 -1,4.2 -1,2.83 -2.3,2.64 -1.4,-0.2 -1.6,-1.6 0,-0.2 0,-0.5 0,-0.16 0.3,-1.5 1,-5.04 1,-6.44 0,-0.54 -0.1,-0.74 -1.4,-2.44 -4.1,-7.4 -2.7,-4.97 -2.4,-7.7 1.5,-1.36 2.1,-1.36 0.4,0 1.1,0.6 0.6,0.6 0.7,1.1 0.8,6.2 4.9,11.1 1,-1.8 1.8,-4.04 0.5,-1.4 1.6,-4.15 1.9,-4.46 3.4,-4.46 0.2,0 0.4,0.1 0.9,0.3 1.3,2.8 z"
style="fill:#333333"
id="path14" />
<path
d="m 157.5,48 q 0,2.8 -1.9,4.4 -1.8,1.5 -4.7,1.5 -0.7,0 -2.7,-0.4 -1.9,-0.4 -2.6,-0.4 -2,0 -2,2.64 0,0.85 0.2,2.6 0.2,1.75 0.2,2.6 0,1.9 -0.7,2.83 -1.5,0 -3,-0.85 -1.5,-9.5 -1.5,-11.95 0,-3.65 1.8,-8.1 2.3,-6.05 6.4,-6.05 3.7,0 7.2,4.1 3.3,3.84 3.3,7.14 z m -3.8,0.2 q -0.6,-2.2 -2.6,-4.4 -2.3,-2.5 -4.3,-2.5 -1.3,0 -2.3,2.2 -0.9,1.8 -0.9,3.26 0,0.47 0.4,1.24 0.4,0.8 0.8,0.8 1.1,0 3.2,0.3 2.1,0.3 3.2,0.3 0.3,0 1.3,-0.4 1,-0.47 1.3,-0.74 z"
style="fill:#333333"
id="path15" />
<path
d="m 182,53.3 q 0,0.9 -0.6,1.5 -0.6,0.6 -1.4,0.6 -1.6,0 -3,-0.9 -1.4,-0.93 -2.1,-2.3 -0.7,-0.1 -1.5,0.85 -0.9,1.16 -1.1,1.24 -1.2,0.54 -3.9,0.54 -2.2,0 -3.9,-2.44 -1.5,-2.13 -1.5,-4 0,-3.4 3.4,-6.4 3.2,-2.9 6.7,-2.9 0.9,0 1.7,0.6 0.8,0.6 0.8,1.44 0,0.54 -0.4,1.1 2.4,0.9 2.4,2.83 0,0.35 -0.1,1.05 -0.1,0.7 -0.1,1.05 0,0.4 0.1,0.6 0.5,1.3 2.5,3.4 1.9,1.9 1.9,2.2 z m -8.1,-10.1 q -0.4,0 -1.1,-0.1 -0.8,-0.16 -1.1,-0.16 -1.3,0 -3.2,1.94 -1.9,1.94 -1.9,3.3 0,0.8 0.7,1.8 0.9,1.3 2.2,1.3 2.6,0 3.5,-2.9 0.5,-2.6 1,-5.16 z"
style="fill:#333333"
id="path16" />
<path
d="m 203.8,42.4 q -0.4,0.4 -1.5,0.4 -0.9,0 -2.5,-0.3 -1.7,-0.3 -2.5,-0.3 -4.7,0 -5.5,6.9 -0.3,3.1 -0.4,3.3 -0.4,1 -1.7,2.3 h -1.1 q -0.7,-1.2 -1.3,-4.1 -0.6,-2.76 -0.6,-4.27 0,-1.16 0.1,-1.5 0.2,-0.54 1,-0.54 0.3,0 0.6,0.3 0.4,0.3 0.4,0.3 1.9,-3.53 3.1,-4.6 1.8,-1.7 5.1,-1.7 1.4,0 3.6,0.9 2.8,1.16 3.3,2.8 z"
style="fill:#333333"
id="path17" />
<path
d="m 229.5,37.16 q 0.3,0.8 0.3,1.44 0,1.86 -2.4,1.86 -1,0 -3.5,-0.5 -2.5,-0.54 -3.4,-0.54 -1.3,0 -1.5,0.1 -0.4,0.2 -0.4,1.2 0,2.2 0.6,6.9 0.7,5.86 1.6,6.13 -0.4,0.35 -0.4,1.1 -1.2,0.7 -2.6,0.7 -1.4,0 -2,-3.9 -0.2,-1.36 -0.5,-7.76 -0.2,-4.6 -0.8,-5.5 -0.3,-0.47 -4.3,-0.35 -1,0 -1.6,0.1 -0.5,0 -0.3,0 -0.8,0 -1.2,-0.7 -0.5,-1.3 -0.5,-1.4 0,-1.44 4.1,-2 1.6,-0.16 4.7,-0.5 0,-0.85 -0.1,-2.56 0,-1.75 0,-2.6 0,-4.35 2.1,-4.35 0.5,0 1.1,0.6 0.6,0.6 0.6,1.1 v 7.9 q 1.1,1.2 5,1.7 3.9,0.5 5.3,1.86 z"
style="fill:#333333"
id="path18" />
<path
d="m 251.2,40.2 q -2,4.1 -3.2,7 -0.1,0.3 -1.5,4.5 -0.5,1.36 -1,4.2 -0.5,2.83 -1,4.2 -1,2.83 -2.4,2.64 -1.4,-0.2 -1.5,-1.6 -0.1,-0.2 -0.1,-0.5 0,-0.16 0.3,-1.5 1.1,-5.04 1.1,-6.44 0,-0.54 -0.1,-0.74 -1.4,-2.44 -4.1,-7.4 -2.7,-4.97 -2.4,-7.7 1.4,-1.36 2.1,-1.36 0.4,0 1,0.6 0.6,0.6 0.7,1.1 0.9,6.2 4.9,11.1 1,-1.8 1.9,-4.04 0.5,-1.4 1.6,-4.15 1.8,-4.46 3.4,-4.46 0.2,0 0.4,0.1 0.8,0.3 1.2,2.8 z"
style="fill:#333333"
id="path19" />
</g>
<g
inkscape:groupmode="layer"
id="layer5"
inkscape:label="tagger">
<g
id="g1">
<path
id="path4"
style="fill:#333333"
d="m 111.4,83.335 -9.526,5.5 2.5,4.33 9.526,-5.5 z m -33.775,19.5 -9.526,5.5 2.5,4.33 9.526,-5.5 z"
sodipodi:nodetypes="cccccccccc" />
<path
id="path5"
style="fill:#333333"
d="M 88.5,73 V 84 h 5 V 73 Z m 0,39 v 11 h 5 V 112 Z"
sodipodi:nodetypes="cccccccccc" />
<path
id="path6"
style="fill:#333333"
d="m 68.1,87.665 9.526,5.5 2.5,-4.33 -9.526,-5.5 z m 33.775,19.5 9.527,5.5 2.5,-4.33 -9.527,-5.5 z"
sodipodi:nodetypes="cccccccccc" />
</g>
<g
id="g2"
transform="rotate(30,150,318.19)">
<path
id="path7"
style="fill:#333333"
d="m 111.4,83.335 -9.526,5.5 2.5,4.33 9.526,-5.5 z m -33.775,19.5 -9.526,5.5 2.5,4.33 9.526,-5.5 z"
sodipodi:nodetypes="cccccccccc" />
<path
id="path8"
style="fill:#333333"
d="M 88.5,73 V 84 h 5 V 73 Z m 0,39 v 11 h 5 V 112 Z"
sodipodi:nodetypes="cccccccccc" />
<path
id="path9"
style="fill:#333333"
d="m 68.1,87.665 9.526,5.5 2.5,-4.33 -9.526,-5.5 z m 33.775,19.5 9.527,5.5 2.5,-4.33 -9.527,-5.5 z"
sodipodi:nodetypes="cccccccccc" />
</g>
</g>
</svg>


@@ -141,6 +141,9 @@ find -maxdepth 1 -printf '%s %p\n' | sort -n | awk '!/-([0-9a-zA-Z_-]{11})\.(mkv
# unique stacks in a stackdump
f=a; rm -rf stacks; mkdir stacks; grep -E '^#' $f | while IFS= read -r n; do awk -v n="$n" '!$0{o=0} o; $0==n{o=1}' <$f >stacks/f; h=$(sha1sum <stacks/f | cut -c-16); mv stacks/f stacks/$h-"$n"; done ; find stacks/ | sort | uniq -cw24
# find unused css variables
cat browser.css | sed -r 's/(var\()/\n\1/g' | awk '{sub(/:/," ")} $1~/^--/{d[$1]=1} /var\(/{sub(/.*var\(/,"");sub(/\).*/,"");u[$1]=1} END{for (x in u) delete d[x]; for (x in d) print x}' | tr '\n' '|'
##
## sqlite3 stuff


@@ -20,6 +20,7 @@ currently up to date with [awesome-selfhosted](https://github.com/awesome-selfho
* 💾 = what copyparty offers as an alternative
* 🔵 = similarities
* ⚠️ = disadvantages (something copyparty does "better")
* 🔥 = hazards
## toc
@@ -37,7 +38,7 @@ currently up to date with [awesome-selfhosted](https://github.com/awesome-selfho
* [another matrix](#another-matrix)
* [reviews](#reviews)
* [copyparty](#copyparty)
* [hfs2](#hfs2)
* [hfs2](#hfs2) 🔥
* [hfs3](#hfs3)
* [nextcloud](#nextcloud)
* [seafile](#seafile)
@@ -83,8 +84,8 @@ the table headers in the matrixes below are the different softwares, with a quic
the softwares,
* `a` = [copyparty](https://github.com/9001/copyparty)
* `b` = [hfs2](https://rejetto.com/hfs/)
* `c` = [hfs3](https://github.com/rejetto/hfs)
* `b` = [hfs2](https://github.com/rejetto/hfs2/) 🔥
* `c` = [hfs3](https://rejetto.com/hfs/)
* `d` = [nextcloud](https://github.com/nextcloud/server)
* `e` = [seafile](https://github.com/haiwen/seafile)
* `f` = [rclone](https://github.com/rclone/rclone), specifically `rclone serve webdav .`
@@ -152,19 +153,20 @@ symbol legend,
| feature / software | a | b | c | d | e | f | g | h | i | j | k | l | m |
| ----------------------- | - | - | - | - | - | - | - | - | - | - | - | - | - |
| download folder as zip | █ | █ | █ | █ | | | █ | | █ | █ | | █ | |
| download folder as tar | █ | | | | | | | | | | | | |
| upload | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ | | █ | █ |
| parallel uploads | █ | | | █ | █ | | • | | █ | | █ | | █ |
| resumable uploads | █ | | | | | | | | █ | | █ | | |
| upload segmenting | █ | | | | | | | █ | █ | | █ | | █ |
| download folder as tar | █ | | | | | | | | | | | | |
| upload | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ | | █ | █ |
| parallel uploads | █ | | | █ | █ | | • | | █ | | █ | | █ |
| resumable uploads | █ | | | | | | | | █ | | █ | | |
| upload segmenting | █ | | | | | | | █ | █ | | █ | | █ |
| upload acceleration | █ | | | | | | | | █ | | █ | | |
| upload verification | █ | | | █ | █ | | | | █ | | | | |
| upload deduplication | █ | | | | █ | | | | █ | | | | |
| upload a 999 TiB file | █ | | | | █ | █ | • | | █ | | █ | | |
| race the beam ("p2p") | █ | | | | | | | | | • | | | |
| CTRL-V from device | █ | | | █ | | | | | | | | | |
| race the beam ("p2p") | █ | | | | | | | | | | | | |
| keep last-modified time | █ | | | █ | █ | █ | | | | | | █ | |
| upload rules | | | | | | | | | | | | | |
| ┗ max disk usage | █ | █ | | | █ | | | | █ | | | █ | █ |
| ┗ max disk usage | █ | █ | | | █ | | | | █ | | | █ | █ |
| ┗ max filesize | █ | | | | | | | █ | | | █ | █ | █ |
| ┗ max items in folder | █ | | | | | | | | | | | | |
| ┗ max file age | █ | | | | | | | | █ | | | | |
@@ -173,6 +175,7 @@ symbol legend,
| ┗ randomize filename | █ | | | | | | | █ | █ | | | | |
| ┗ mimetype reject-list | | | | | | | | | • | | | | • |
| ┗ extension reject-list | | | | | | | | █ | • | | | | • |
| ┗ upload routing | █ | | | | | | | | | | | | |
| checksums provided | | | | █ | █ | | | | █ | | | | |
| cloud storage backend | | | | █ | █ | █ | | | | | █ | █ | |
@@ -182,8 +185,13 @@ symbol legend,
* `upload verification` = uploads are checksummed or otherwise confirmed to have been transferred correctly
* `CTRL-V from device` = press CTRL-C in Windows Explorer (or whatever) and paste into the webbrowser to upload it
* `race the beam` = files can be downloaded while they're still uploading; downloaders are slowed down such that the uploader is always ahead
* `upload routing` = depending on filetype / contents / uploader etc., the file can be redirected to another location or otherwise transformed; mitigates limitations such as [sharex#3992](https://github.com/ShareX/ShareX/issues/3992)
* copyparty example: [reloc-by-ext](https://github.com/9001/copyparty/tree/hovudstraum/bin/hooks#before-upload)
* `checksums provided` = when downloading a file from the server, the file's checksum is provided for verification client-side
* `cloud storage backend` = able to serve files from (and write to) s3 or similar cloud services; `` means the software can do this with some help from `rclone mount` as a bridge
@@ -213,7 +221,7 @@ symbol legend,
| serve sftp (ssh) | | | | | | █ | | | | | | █ | █ |
| serve smb/cifs | | | | | | █ | | | | | | | |
| serve dlna | | | | | | █ | | | | | | | |
| listen on unix-socket | | | | █ | █ | | █ | █ | █ | | █ | █ | |
| listen on unix-socket | | | | █ | █ | | █ | █ | █ | | █ | █ | |
| zeroconf | █ | | | | | | | | | | | | █ |
| supports netscape 4 | | | | | | █ | | | | | • | | |
| ...internet explorer 6 | | █ | | █ | | █ | | | | | • | | |
@@ -243,7 +251,7 @@ symbol legend,
| listen multiple ports | █ | | | | | | | | | | | █ | |
| virtual file system | █ | █ | █ | | | | █ | | | | | █ | |
| reverse-proxy ok | █ | | █ | █ | █ | █ | █ | █ | • | • | • | █ | |
| folder-rproxy ok | █ | | | | █ | █ | | • | • | | • | | • |
| folder-rproxy ok | █ | | | | █ | █ | | • | • | | • | | • |
* `folder-rproxy` = reverse-proxying without dedicating an entire (sub)domain, using a subfolder instead
* `l`/sftpgo:
@@ -266,9 +274,9 @@ symbol legend,
| per-folder permissions | | | | █ | █ | | █ | | █ | █ | | █ | █ |
| per-file permissions | | | | █ | █ | | █ | | █ | | | | █ |
| per-file passwords | █ | | | █ | █ | | █ | | █ | | | | █ |
| unmap subfolders | █ | | | | | | █ | | | █ | | • | |
| unmap subfolders | █ | | | | | | █ | | | █ | | • | |
| index.html blocks list | | | | | | | █ | | | • | | | |
| write-only folders | █ | | | | | | | | | | █ | █ | |
| write-only folders | █ | | | | | | | | | | █ | █ | |
| files stored as-is | █ | █ | █ | █ | | █ | █ | | | █ | █ | █ | █ |
| file versioning | | | | █ | █ | | | | | | | | |
| file encryption | | | | █ | █ | █ | | | | | | █ | |
@@ -298,6 +306,7 @@ symbol legend,
* `file action event hooks` = run script before/after upload, move, rename, ...
* `one-way folder sync` = like rsync, optionally deleting unexpected files at target
* `full sync` = stateful, dropbox-like sync
* `speed throttle` = rate limiting (per ip, per user, per connection, anything like that)
* `curl-friendly ls` = returns a [sortable plaintext folder listing](https://user-images.githubusercontent.com/241032/215322619-ea5fd606-3654-40ad-94ee-2bc058647bb2.png) when curled
* `curl-friendly upload` = uploading with curl is just `curl -T some.bin http://.../`
* `a`/copyparty remarks:
@@ -323,14 +332,14 @@ symbol legend,
| feature / software | a | b | c | d | e | f | g | h | i | j | k | l | m |
| ---------------------- | - | - | - | - | - | - | - | - | - | - | - | - | - |
| single-page app | █ | | █ | █ | █ | | | █ | █ | █ | █ | | █ |
| themes | █ | █ | | █ | | | | | █ | | | | |
| themes | █ | █ | | █ | | | | | █ | | | | |
| directory tree nav | █ | | | | █ | | | | █ | | | | |
| multi-column sorting | █ | | | | | | | | | | | | |
| thumbnails | █ | | | | | | | █ | █ | | | | █ |
| ┗ image thumbnails | █ | | | █ | █ | | | █ | █ | █ | | | █ |
| ┗ video thumbnails | █ | | | █ | █ | | | | █ | | | | █ |
| ┗ audio spectrograms | █ | | | | | | | | | | | | |
| audio player | █ | | | █ | █ | | | | █ | | | | █ |
| audio player | █ | | | █ | █ | | | | █ | | | | █ |
| ┗ gapless playback | █ | | | | | | | | • | | | | |
| ┗ audio equalizer | █ | | | | | | | | | | | | |
| ┗ waveform seekbar | █ | | | | | | | | | | | | |
@@ -348,16 +357,16 @@ symbol legend,
| search by custom parser | █ | | | | | | | | | | | | |
| find local file | █ | | | | | | | | | | | | |
| undo recent uploads | █ | | | | | | | | | | | | |
| create directories | █ | | | █ | █ | | █ | █ | █ | █ | █ | █ | █ |
| image viewer | █ | | | █ | █ | | | | █ | █ | █ | | █ |
| create directories | █ | | | █ | █ | | █ | █ | █ | █ | █ | █ | █ |
| image viewer | █ | | | █ | █ | | | | █ | █ | █ | | █ |
| markdown viewer | █ | | | | █ | | | | █ | | | | █ |
| markdown editor | █ | | | | █ | | | | █ | | | | █ |
| readme.md in listing | █ | | | █ | | | | | | | | | |
| rename files | █ | █ | █ | █ | █ | | █ | | █ | █ | █ | █ | █ |
| batch rename | █ | | | | | | | | █ | | | | |
| cut / paste files | █ | █ | | █ | █ | | | | █ | | | | █ |
| move files | █ | █ | | █ | █ | | █ | | █ | █ | █ | | █ |
| delete files | █ | █ | | █ | █ | | █ | █ | █ | █ | █ | █ | █ |
| move files | █ | █ | | █ | █ | | █ | | █ | █ | █ | | █ |
| delete files | █ | █ | | █ | █ | | █ | █ | █ | █ | █ | █ | █ |
| copy files | | | | | █ | | | | █ | █ | █ | | █ |
* `single-page app` = multitasking; possible to continue navigating while uploading
@@ -367,8 +376,12 @@ symbol legend,
* `undo recent uploads` = accounts without delete permissions have a time window where they can undo their own uploads
* `a`/copyparty has teeny-tiny skips playing gapless albums depending on audio codec (opus best)
* `b`/hfs2 has a very basic directory tree view, not showing sibling folders
* `c`/hfs3 remarks:
* audio playback does not continue into next song
* `f`/rclone can do some file management (mkdir, rename, delete) when hosting through webdav
* `j`/filebrowser has a plaintext viewer/editor
* `j`/filebrowser remarks:
* audio playback does not continue into next song
* plaintext viewer/editor
* `k`/filegator directory tree is a modal window
@@ -424,6 +437,7 @@ symbol legend,
* 💾 are what copyparty offers as an alternative
* 🔵 are similarities
* ⚠️ are disadvantages (something copyparty does "better")
* 🔥 are hazards
## [copyparty](https://github.com/9001/copyparty)
* resumable uploads which are verified server-side
@@ -431,8 +445,9 @@ symbol legend,
* both of the above are surprisingly uncommon features
* very cross-platform (python, no dependencies)
## [hfs2](https://rejetto.com/hfs/)
* the OG, the legend
## [hfs2](https://github.com/rejetto/hfs2/)
* the OG, the legend (now replaced by [hfs3](#hfs3))
* 🔥 hfs2 is dead and dangerous! unfixed RCE: [info](https://github.com/rejetto/hfs2/issues/44), [info](https://github.com/drapid/hfs/issues/3), [info](https://asec.ahnlab.com/en/67650/)
* ⚠️ uploads not resumable / accelerated / integrity-checked
* ⚠️ on cloudflare: max upload size 100 MiB
* ⚠️ windows-only
@@ -440,10 +455,19 @@ symbol legend,
* vfs with gui config, per-volume permissions
* starting to show its age, hence the rewrite:
## [hfs3](https://github.com/rejetto/hfs)
## [hfs3](https://rejetto.com/hfs/)
* nodejs; cross-platform
* vfs with gui config, per-volume permissions
* still early development, let's revisit later
* 🔵 uploads are resumable
* ⚠️ uploads are not segmented; max upload size 100 MiB on cloudflare
* ⚠️ uploads are not accelerated (copyparty is 3x faster across the atlantic)
* ⚠️ uploads are not integrity-checked
* ⚠️ copies the file after upload; need twice filesize free disk space
* ⚠️ doesn't support crazy filenames
* ✅ config GUI
* ✅ download counter
* ✅ watch active connections
* ✅ plugins
## [nextcloud](https://github.com/nextcloud/server)
* php, mariadb
@@ -497,6 +521,7 @@ symbol legend,
* rust; cross-platform (windows, linux, macos)
* ⚠️ uploads not resumable / accelerated / integrity-checked
* ⚠️ on cloudflare: max upload size 100 MiB
* ⚠️ across the atlantic, copyparty is 3x faster
* ⚠️ doesn't support crazy filenames
* ✅ per-url access control (copyparty is per-volume)
* 🔵 basic but really snappy ui
@@ -539,8 +564,10 @@ symbol legend,
## [filebrowser](https://github.com/filebrowser/filebrowser)
* go; cross-platform (windows, linux, mac)
* ⚠️ uploads not resumable / accelerated / integrity-checked
* ⚠️ on cloudflare: max upload size 100 MiB
* 🔵 uploads are resumable and segmented
* 🔵 multiple files are uploaded in parallel, but...
* ⚠️ big files are not accelerated (copyparty is 5x faster across the atlantic)
* ⚠️ uploads are not integrity-checked
* ⚠️ http only; no webdav / ftp / zeroconf
* ⚠️ doesn't support crazy filenames
* ⚠️ no directory tree nav
@@ -550,12 +577,14 @@ symbol legend,
* ⚠️ but no directory tree for navigation
* ✅ user signup
* ✅ command runner / remote shell
* 🔵 supposed to have write-only folders but couldn't get it to work
* ✅ more efficient; can handle around twice as much simultaneous traffic
## [filegator](https://github.com/filegator/filegator)
* go; cross-platform (windows, linux, mac)
* php; cross-platform (windows, linux, mac)
* 🔵 *it has upload segmenting and acceleration*
* ⚠️ but uploads are still not integrity-checked
* ⚠️ on copyparty, uploads are 40x faster
* compared to the official filegator docker example which might be bad
* ⚠️ http only; no webdav / ftp / zeroconf
* ⚠️ does not support symlinks
* ⚠️ expensive download-as-zip feature
@@ -566,6 +595,7 @@ symbol legend,
* go; cross-platform (windows, linux, mac)
* ⚠️ http uploads not resumable / accelerated / integrity-checked
* ⚠️ on cloudflare: max upload size 100 MiB
* ⚠️ across the atlantic, copyparty is 2.5x faster
* 🔵 sftp uploads are resumable
* ⚠️ web UI is very minimal + a bit slow
* ⚠️ no thumbnails / image viewer / audio player
@@ -573,6 +603,7 @@ symbol legend,
* ⚠️ no filesystem indexing / search
* ⚠️ doesn't run on phones, tablets
* ⚠️ no zeroconf (mdns/ssdp)
* ⚠️ impractical directory URLs
* ⚠️ AGPL licensed
* 🔵 ftp, ftps, webdav
* ✅ sftp server
@@ -589,11 +620,13 @@ symbol legend,
## [arozos](https://github.com/tobychui/arozos)
* big suite of applications similar to [kodbox](#kodbox); copyparty is better at downloading/uploading/music/indexing, but arozos has other advantages
* go; primarily linux (limited support for windows)
* ⚠️ needs root
* ⚠️ uploads not resumable / integrity-checked
* ⚠️ uploading small files to copyparty is 2.7x faster
* ⚠️ uploading large files to copyparty is at least 10% faster
* arozos is websocket-based, 512 KiB chunks; writes each chunk to separate files and then merges
* copyparty splices directly into the final file; faster and better for the HDD and filesystem
* ⚠️ across the atlantic, uploading to copyparty is 6x faster
* ⚠️ no directory tree navpane; not as easy to navigate
* ⚠️ download-as-zip is not streaming; creates a temp.file on the server
* ⚠️ not self-contained (pulls from jsdelivr)


@@ -49,7 +49,7 @@ thumbnails2 = ["pyvips"]
audiotags = ["mutagen"]
ftpd = ["pyftpdlib"]
ftps = ["pyftpdlib", "pyopenssl"]
tftpd = ["partftpy>=0.3.1"]
tftpd = ["partftpy>=0.4.0"]
pwhash = ["argon2-cffi"]
[project.scripts]


@@ -3,7 +3,7 @@ WORKDIR /z
ENV ver_asmcrypto=c72492f4a66e17a0e5dd8ad7874de354f3ccdaa5 \
ver_hashwasm=4.10.0 \
ver_marked=4.3.0 \
ver_dompf=3.1.2 \
ver_dompf=3.1.6 \
ver_mde=2.18.0 \
ver_codemirror=5.65.16 \
ver_fontawesome=5.13.0 \


@@ -5,19 +5,16 @@ LABEL org.opencontainers.image.url="https://github.com/9001/copyparty" \
org.opencontainers.image.licenses="MIT" \
org.opencontainers.image.title="copyparty-ac" \
org.opencontainers.image.description="copyparty with Pillow and FFmpeg (image/audio/video thumbnails, audio transcoding, media tags)"
ENV PYTHONPYCACHEPREFIX=/tmp/pyc \
XDG_CONFIG_HOME=/cfg
ENV XDG_CONFIG_HOME=/cfg
RUN apk --no-cache add !pyc \
wget \
py3-argon2-cffi py3-pillow \
ffmpeg \
&& rm -rf /tmp/pyc \
&& mkdir /cfg /w \
&& chmod 777 /cfg /w \
&& echo % /cfg > initcfg
py3-jinja2 py3-argon2-cffi py3-pillow \
ffmpeg
COPY i/dist/copyparty-sfx.py innvikler.sh ./
RUN ash innvikler.sh && rm innvikler.sh
COPY i/dist/copyparty-sfx.py ./
WORKDIR /w
EXPOSE 3923
ENTRYPOINT ["python3", "/z/copyparty-sfx.py", "--no-crt", "-c", "/z/initcfg"]
ENTRYPOINT ["python3", "-m", "copyparty", "--no-crt", "-c", "/z/initcfg"]


@@ -5,15 +5,14 @@ LABEL org.opencontainers.image.url="https://github.com/9001/copyparty" \
org.opencontainers.image.licenses="MIT" \
org.opencontainers.image.title="copyparty-dj" \
org.opencontainers.image.description="copyparty with all optional dependencies, including musical key / bpm detection"
ENV PYTHONPYCACHEPREFIX=/tmp/pyc \
XDG_CONFIG_HOME=/cfg
ENV XDG_CONFIG_HOME=/cfg
COPY i/bin/mtag/install-deps.sh ./
COPY i/bin/mtag/audio-bpm.py /mtag/
COPY i/bin/mtag/audio-key.py /mtag/
RUN apk add -U !pyc \
wget \
py3-argon2-cffi py3-pillow py3-pip py3-cffi \
py3-jinja2 py3-argon2-cffi py3-pillow py3-pip py3-cffi \
ffmpeg \
vips-jxl vips-heif vips-poppler vips-magick \
py3-numpy fftw libsndfile \
@@ -27,18 +26,12 @@ RUN apk add -U !pyc \
&& python3 -m pip install pyvips \
&& bash install-deps.sh \
&& apk del py3-pip .bd \
&& rm -rf /var/cache/apk/* /tmp/pyc \
&& chmod 777 /root \
&& ln -s /root/vamp /root/.local / \
&& mkdir /cfg /w \
&& chmod 777 /cfg /w \
&& echo % /cfg > initcfg
&& ln -s /root/vamp /root/.local /
COPY i/dist/copyparty-sfx.py innvikler.sh ./
RUN ash innvikler.sh && rm innvikler.sh
COPY i/dist/copyparty-sfx.py ./
WORKDIR /w
EXPOSE 3923
ENTRYPOINT ["python3", "/z/copyparty-sfx.py", "--no-crt", "-c", "/z/initcfg"]
# size: 286 MB
# bpm/key: 529 sec
# idx-bench: 2352 MB/s
ENTRYPOINT ["python3", "-m", "copyparty", "--no-crt", "-c", "/z/initcfg"]


@@ -5,18 +5,15 @@ LABEL org.opencontainers.image.url="https://github.com/9001/copyparty" \
org.opencontainers.image.licenses="MIT" \
org.opencontainers.image.title="copyparty-im" \
org.opencontainers.image.description="copyparty with Pillow and Mutagen (image thumbnails, media tags)"
ENV PYTHONPYCACHEPREFIX=/tmp/pyc \
XDG_CONFIG_HOME=/cfg
ENV XDG_CONFIG_HOME=/cfg
RUN apk --no-cache add !pyc \
wget \
py3-argon2-cffi py3-pillow py3-mutagen \
&& rm -rf /tmp/pyc \
&& mkdir /cfg /w \
&& chmod 777 /cfg /w \
&& echo % /cfg > initcfg
py3-jinja2 py3-argon2-cffi py3-pillow py3-mutagen
COPY i/dist/copyparty-sfx.py innvikler.sh ./
RUN ash innvikler.sh && rm innvikler.sh
COPY i/dist/copyparty-sfx.py ./
WORKDIR /w
EXPOSE 3923
ENTRYPOINT ["python3", "/z/copyparty-sfx.py", "--no-crt", "-c", "/z/initcfg"]
ENTRYPOINT ["python3", "-m", "copyparty", "--no-crt", "-c", "/z/initcfg"]


@@ -5,12 +5,11 @@ LABEL org.opencontainers.image.url="https://github.com/9001/copyparty" \
org.opencontainers.image.licenses="MIT" \
org.opencontainers.image.title="copyparty-iv" \
org.opencontainers.image.description="copyparty with Pillow, FFmpeg, libvips (image/audio/video thumbnails, audio transcoding, media tags)"
ENV PYTHONPYCACHEPREFIX=/tmp/pyc \
XDG_CONFIG_HOME=/cfg
ENV XDG_CONFIG_HOME=/cfg
RUN apk add -U !pyc \
wget \
py3-argon2-cffi py3-pillow py3-pip py3-cffi \
py3-jinja2 py3-argon2-cffi py3-pillow py3-pip py3-cffi \
ffmpeg \
vips-jxl vips-heif vips-poppler vips-magick \
&& apk add -t .bd \
@@ -18,13 +17,11 @@ RUN apk add -U !pyc \
python3-dev py3-wheel \
&& rm -f /usr/lib/python3*/EXTERNALLY-MANAGED \
&& python3 -m pip install pyvips \
&& apk del py3-pip .bd \
&& rm -rf /var/cache/apk/* /tmp/pyc \
&& mkdir /cfg /w \
&& chmod 777 /cfg /w \
&& echo % /cfg > initcfg
&& apk del py3-pip .bd
COPY i/dist/copyparty-sfx.py innvikler.sh ./
RUN ash innvikler.sh && rm innvikler.sh
COPY i/dist/copyparty-sfx.py ./
WORKDIR /w
EXPOSE 3923
ENTRYPOINT ["python3", "/z/copyparty-sfx.py", "--no-crt", "-c", "/z/initcfg"]
ENTRYPOINT ["python3", "-m", "copyparty", "--no-crt", "-c", "/z/initcfg"]


@@ -5,17 +5,14 @@ LABEL org.opencontainers.image.url="https://github.com/9001/copyparty" \
org.opencontainers.image.licenses="MIT" \
org.opencontainers.image.title="copyparty-min" \
org.opencontainers.image.description="just copyparty, no thumbnails / media tags / audio transcoding"
ENV PYTHONPYCACHEPREFIX=/tmp/pyc \
XDG_CONFIG_HOME=/cfg
ENV XDG_CONFIG_HOME=/cfg
RUN apk --no-cache add !pyc \
python3 \
&& rm -rf /tmp/pyc \
&& mkdir /cfg /w \
&& chmod 777 /cfg /w \
&& echo % /cfg > initcfg
py3-jinja2
COPY i/dist/copyparty-sfx.py innvikler.sh ./
RUN ash innvikler.sh && rm innvikler.sh
COPY i/dist/copyparty-sfx.py ./
WORKDIR /w
EXPOSE 3923
ENTRYPOINT ["python3", "/z/copyparty-sfx.py", "--no-crt", "--no-thumb", "-c", "/z/initcfg"]
ENTRYPOINT ["python3", "-m", "copyparty", "--no-crt", "--no-thumb", "-c", "/z/initcfg"]


@@ -22,6 +22,11 @@ this example is also available as a podman-compatible [docker-compose yaml](http
i'm not very familiar with containers, so let me know if this section could be better 🙏
## portainer
* there is a [portainer howto](https://github.com/9001/copyparty/blob/hovudstraum/docs/examples/docker/portainer.md) which is mostly untested
## configuration
> this section basically explains how the [docker-compose yaml](https://github.com/9001/copyparty/blob/hovudstraum/docs/examples/docker/basic-docker-compose) works, so you may look there instead


@@ -17,3 +17,19 @@ but I don't really know what i'm doing here 💩
`podman login docker.io`
`podman login ghcr.io -u 9001`
[about gchq](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry) (takes a classic token as password)
## building on alpine
```bash
apk add podman{,-docker}
rc-update add cgroups
service cgroups start
vim /etc/containers/storage.conf # driver = "btrfs"
modprobe tun
echo ed:100000:65536 >/etc/subuid
echo ed:100000:65536 >/etc/subgid
apk add qemu-openrc qemu-tools qemu-{arm,armeb,aarch64,s390x,ppc64le}
rc-update add qemu-binfmt
service qemu-binfmt start
```


@@ -0,0 +1,46 @@
#!/bin/ash
set -ex
# cleanup for flavors with python build steps (dj/iv)
rm -rf /var/cache/apk/* /root/.cache
# initial config; common for all flavors
mkdir /cfg /w
chmod 777 /cfg /w
echo % /cfg > initcfg
# unpack sfx and dive in
python3 copyparty-sfx.py --version
cd /tmp/pe-copyparty.0
# steal the stuff we need
mv copyparty partftpy ftp/* /usr/lib/python3.*/site-packages/
# golf
cd /usr/lib/python3.*/
rm -rf \
/tmp/pe-* /z/copyparty-sfx.py \
ensurepip pydoc_data turtle.py turtledemo lib2to3
# drop bytecode
find / -xdev -name __pycache__ -print0 | xargs -0 rm -rf
# build the stuff we want
python3 -m compileall -qj4 site-packages sqlite3 xml
# drop the stuff we dont
find -name __pycache__ |
grep -E 'ty/web/|/pycpar' |
tr '\n' '\0' | xargs -0 rm -rf
# two-for-one:
# 1) smoketest copyparty even starts
# 2) build any bytecode we missed
# this tends to race other builders (all good things come in threes)
cd /z
python3 -m copyparty \
--ign-ebind -p$((1024+RANDOM)),$((1024+RANDOM)),$((1024+RANDOM)) \
--no-crt -qi127.1 --exit=idx -e2dsa -e2ts
# output from -e2d
rm -rf .hist


@@ -6,6 +6,8 @@ set -e
exit 1
}
suf=-b1
suf=
sarchs="386 amd64 arm/v7 arm64/v8 ppc64le s390x"
archs="amd64 arm s390x 386 arm64 ppc64le"
imgs="dj iv min im ac"
@@ -103,11 +105,12 @@ filt=
# --pull=never does nothing at all btw
(set -x
$nice podman build \
--squash \
--pull=never \
--from localhost/alpine-$a \
-t copyparty-$i-$a \
-t copyparty-$i-$a$suf \
-f Dockerfile.$i . ||
(echo $? $i-$a >> err)
(echo $? $i-$a >> err; printf '%096d\n' $(seq 1 42))
rm -f .blk
) 2> >(tee $a.err | sed "s/^/$aa:/" >&2) > >(tee $a.out | sed "s/^/$aa:/") &
done
@@ -134,9 +137,10 @@ filt=
variants=
for a in $archs; do
[[ " ${ngs[*]} " =~ " $i-$a " ]] && continue
variants="$variants containers-storage:localhost/copyparty-$i-$a"
variants="$variants containers-storage:localhost/copyparty-$i-$a$suf"
done
podman manifest create copyparty-$i $variants
podman manifest rm copyparty-$i$suf || echo "(that's fine btw)"
podman manifest create copyparty-$i$suf $variants
done
}

scripts/help2html.py Executable file

@@ -0,0 +1,81 @@
#!/usr/bin/env python3
import re
import subprocess as sp
# to convert the copyparty --help to html, run this in xfce4-terminal @ 140x43:
_ = r""""
echo; for a in '' -accounts -flags -handlers -hooks -urlform -exp -ls -dbd -pwhash -zm; do
./copyparty-sfx.py --help$a 2>/dev/null; printf '\n\n\n%0139d\n\n\n'; done # xfce4-terminal @ 140x43
"""
# click [edit] => [select all]
# click [edit] => [copy as html]
# and then run this script
def readclip():
cmds = [
"xsel -ob",
"xclip -selection CLIPBOARD -o",
"pbpaste",
]
for cmd in cmds:
try:
return sp.check_output(cmd.split()).decode("utf-8")
except:
pass
def cnv(src):
yield '<html style="background:#222;color:#fff"><body>'
skip_sfx = False
in_sfx = 0
in_salt = 0
while True:
ln = next(src)
if "<font" in ln:
if not ln.startswith("<pre>"):
ln = "<pre>" + ln
yield ln
break
for ln in src:
ln = ln.rstrip()
if re.search(r"^<font[^>]+>copyparty v[0-9]", ln):
in_sfx = 3
if in_sfx:
in_sfx -= 1
if not skip_sfx:
yield ln
continue
if '">uuid:' in ln:
ln = re.sub(r">uuid:[0-9a-f-]{36}<", ">autogenerated<", ln)
if "-salt SALT" in ln:
in_salt = 3
if in_salt:
in_salt -= 1
t = ln
ln = re.sub(r">[0-9a-zA-Z/+]{24}<", ">24-character-autogenerated<", ln)
ln = re.sub(r">[0-9a-zA-Z/+]{40}<", ">40-character-autogenerated<", ln)
if t != ln:
in_salt = 0
ln = ln.replace(">/home/ed/", ">~/")
if ln.startswith("0" * 20):
skip_sfx = True
yield ln
yield "</pre>eof</body></html>"
def main():
src = readclip()
src = re.split("0{100,200}", src[::-1], 1)[1][::-1]
with open("helptext.html", "wb") as fo:
for ln in cnv(iter(src.split("\n")[:-3])):
fo.write(ln.encode("utf-8") + b"\r\n")
if __name__ == "__main__":
main()

scripts/help2txt.sh Executable file

@@ -0,0 +1,26 @@
#!/bin/bash
set -e
( xsel -ob | sed -r '
s`/home/ed/`~/`;
s/uuid:[0-9a-f-]{36}/autogenerated/;
s/(-salt SALT.*default: )[0-9a-zA-Z/+]{24}\)/\124-character-autogenerated)/;
s/(-salt SALT.*default: )[0-9a-zA-Z/+]{40}\)/\140-character-autogenerated)/;
' | awk '
/^copyparty/{a=1} !a{next}
/^0{20}/{b=1} b&&/^copyparty v[0-9]+\./{s=3}
s{s-=1;next} 1' |
head -n-6; echo eof ) >helptext.txt
exit 0
# =====================================================================
# end of script; below is the explanation of how to use this:
# first open an infinitely wide console (this is why you own an ultrawide) and copypaste this into it:
for a in '' -accounts -flags -handlers -hooks -urlform -exp -ls -dbd -pwhash -zm; do
./copyparty-sfx.py --help$a 2>/dev/null; printf '\n\n\n%0255d\n\n\n'; done
# then copypaste all of the output by pressing ctrl-shift-a, ctrl-shift-c
# and finally actually run this script which should produce helptext.txt


@@ -3,6 +3,7 @@ set -e
echo
berr() { p=$(head -c 72 </dev/zero | tr '\0' =); printf '\n%s\n\n' $p; cat; printf '\n%s\n\n' $p; }
aerr() { printf '%s\n' "$*" | berr; }
help() { exec cat <<'EOF'
@@ -28,9 +29,11 @@ help() { exec cat <<'EOF'
#
# `no-tfp` saves ~10k by removing the tftp server, disabling --tftp
#
# `no-zm` saves ~7k by removing the zeroconf mDNS server
#
# `no-smb` saves ~3.5k by removing the smb / cifs server
#
# `no-zm` saves ~k by removing the zeroconf mDNS server
# `no-pf` saves ~2.8k by removing the option to download partyfuse
#
# _____________________________________________________________________
# web features:
@@ -52,10 +55,15 @@ help() { exec cat <<'EOF'
#
# `ign-wd` allows building an sfx without webdeps
#
# ---------------------------------------------------------------------
#
# _____________________________________________________________________
# if you are on windows, you can use msys2:
# PATH=/c/Users/$USER/AppData/Local/Programs/Python/Python310:"$PATH" ./make-sfx.sh fast
#
# _____________________________________________________________________
# some usage examples:
# ./scripts/make-sfx.sh lang eng no-cm no-hl no-dd no-fnt no-smb no-pf
# ./scripts/rls.sh sfx lang eng no-cm no-hl no-dd no-fnt no-smb no-pf
# (reduces v1.14.2 from 700k to 495k)
EOF
}
@@ -112,6 +120,7 @@ while [ ! -z "$1" ]; do
no-tfp) no_tfp=1 ; ;;
no-smb) no_smb=1 ; ;;
no-zm) no_zm=1 ; ;;
no-pf) no_pf=1 ; ;;
no-fnt) no_fnt=1 ; ;;
no-hl) no_hl=1 ; ;;
no-dd) no_dd=1 ; ;;
@@ -119,7 +128,6 @@ while [ ! -z "$1" ]; do
dl-wd) dl_wd=1 ; ;;
ign-wd) ign_wd=1 ; ;;
fast) zopf= ; ;;
ultra) ultra=1 ; ;;
lang) shift;langs="$1"; ;;
*) help ; ;;
esac
@@ -202,14 +210,14 @@ necho() {
mv {markupsafe,jinja2} j2/
necho collecting pyftpdlib
f="../build/pyftpdlib-1.5.9.tar.gz"
f="../build/pyftpdlib-1.5.10.tar.gz"
[ -e "$f" ] ||
(url=https://github.com/giampaolo/pyftpdlib/archive/refs/tags/release-1.5.9.tar.gz;
(url=https://files.pythonhosted.org/packages/cf/31/8d910cf40317dd0db74ba0b8558d0dee23c8b002468c14d3a5dec0e6e9fd/pyftpdlib-1.5.10.tar.gz;
wget -O$f "$url" || curl -L "$url" >$f)
tar -zxf $f
mv pyftpdlib-release-*/pyftpdlib .
rm -rf pyftpdlib-release-* pyftpdlib/test
mv pyftpdlib-*/pyftpdlib .
rm -rf pyftpdlib-* pyftpdlib/test
for f in pyftpdlib/_async{hat,ore}.py; do
[ -e "$f" ] || continue;
iawk 'NR<4||NR>27||!/^#/;NR==4{print"# license: https://opensource.org/licenses/ISC\n"}' $f
@@ -219,9 +227,9 @@ necho() {
mv pyftpdlib ftp/
necho collecting partftpy
f="../build/partftpy-0.3.1.tar.gz"
f="../build/partftpy-0.4.0.tar.gz"
[ -e "$f" ] ||
(url=https://files.pythonhosted.org/packages/37/79/1a1de1d3fdf27ddc9c2d55fec6552e7b8ed115258fedac6120679898b83d/partftpy-0.3.1.tar.gz;
(url=https://files.pythonhosted.org/packages/8c/96/642bb3ddcb07a2c6764eb29aa562d1cf56877ad6c330c3c8921a5f05606d/partftpy-0.4.0.tar.gz;
wget -O$f "$url" || curl -L "$url" >$f)
tar -zxf $f
@@ -413,7 +421,7 @@ rm have
ised /fork_process/d ftp/pyftpdlib/servers.py
iawk '/^class _Base/{s=1}!s' ftp/pyftpdlib/authorizers.py
iawk '/^ {0,4}[^ ]/{s=0}/^ {4}def (serve_forever|_loop)/{s=1}!s' ftp/pyftpdlib/servers.py
iawk '/^ {0,4}[a-zA-Z]/{s=0}/^ {4}def (serve_forever|_loop)/{s=1}!s' ftp/pyftpdlib/servers.py
rm -f ftp/pyftpdlib/{__main__,prefork}.py
[ $no_ftp ] &&
@@ -428,6 +436,9 @@ rm -f ftp/pyftpdlib/{__main__,prefork}.py
[ $no_zm ] &&
rm -rf copyparty/mdns.py copyparty/stolen/dnslib
[ $no_pf ] &&
rm -rf copyparty/web/a/partyfuse.py
[ $no_cm ] && {
rm -rf copyparty/web/mde.* copyparty/web/deps/easymde*
echo h > copyparty/web/mde.html
@@ -451,11 +462,16 @@ rm -f ftp/pyftpdlib/{__main__,prefork}.py
ised 's/(cursor: ?)url\([^)]+\), ?(pointer)/\1\2/; s/[0-9]+% \{cursor:[^}]+\}//; s/animation: ?cursor[^};]+//' $f
}
[ $langs ] &&
[ $langs ] && {
echo $langs | grep -q eng || {
langs="eng|$langs"
aerr "ERROR: removing english is not supported; will do this instead: $langs"
}
for f in copyparty/web/{browser.js,splash.js}; do
gzip -d "$f.gz" || true
iawk '/^\}/{l=0} !l; /^var Ls =/{l=1;next} o; /^\t["}]/{o=0} /^\t"'"$langs"'"/{o=1;print}' $f
iawk '/^\}/{l=0} !l; /^var Ls =/{l=1;next} !l{next} o; /^\t["}]/{o=0} /^\t"'"$langs"'"/{o=1;print}' $f
done
}
[ ! $repack ] && {
# uncomment
@@ -490,6 +506,11 @@ while IFS= read -r f; do
tmv "$f"
done
grep -rlE '^class [^(]+:' |
while IFS= read -r f; do
ised 's/(^class [^(:]+):/\1(object):/' "$f"
done
# up2k goes from 28k to 22k laff
awk 'BEGIN{gensub(//,"",1)}' </dev/null 2>/dev/null &&
echo entabbening &&


@@ -16,7 +16,7 @@ uname -s | grep WOW64 && m=64 || m=32
uname -s | grep NT-10 && w10=1 || w7=1
[ $w7 ] && [ -e up2k.sh ] && [ ! "$1" ] && ./up2k.sh
[ $w7 ] && pyv=37 || pyv=311
[ $w7 ] && pyv=37 || pyv=312
esuf=
[ $w7 ] && [ $m = 32 ] && esuf=32
[ $w7 ] && [ $m = 64 ] && esuf=-winpe64
@@ -127,3 +127,6 @@ grep -q $csum uplod.log && echo upload OK || {
echo UPLOAD FAILED
exit 1
}
echo; read -u1 -n1 -p 'shutdown? y/n: '
[ "$REPLY" = y ] && shutdown -s -t 1


@@ -1,33 +1,39 @@
f117016b1e6a7d7e745db30d3e67f1acf7957c443a0dd301b6c5e10b8368f2aa4db6be9782d2d3f84beadd139bfeef4982e40f21ca5d9065cb794eeb0e473e82 altgraph-0.17.4-py2.py3-none-any.whl
e0d2e6183437af321a36944f04a501e85181243e5fa2da3254254305dd8119161f62048bc56bff8849b49f546ff175b02b4c999401f1c404f6b88e6f46a9c96e Git-2.44.0-32-bit.exe
9d2c31701a4d3fef553928c00528a48f9e1854ab5333528b50e358a214eba90029d687f039bcda5760b6fdf9f2de3bcf3784ae21a6374cf2a97a845d33b636c6 packaging-24.0-py3-none-any.whl
6a624018f30da375581d5751eca0080edbbe37f102f643f856279fcfded3a4379fd1b6fb0661cdb2e72bbbbc726ca714a1f5990cc348df365db62bc53e4c4503 Git-2.45.2-32-bit.exe
17ce52ba50692a9d964f57a23ac163fb74c77fdeb2ca988a6d439ae1fe91955ff43730c073af97a7b3223093ffea3479a996b9b50ee7fba0869247a56f74baa6 pefile-2023.2.7-py3-none-any.whl
126ca016c00256f4ff13c88707ead21b3b98f3c665ae57a5bcbb80c8be3004bff36d9c7f9a1cc9d20551019708f2b195154f302d80a1e5a2026d6d0fe9f3d5f4 pyinstaller_hooks_contrib-2024.3-py2.py3-none-any.whl
749a473646c6d4c7939989649733d4c7699fd1c359c27046bf5bc9c070d1a4b8b986bbc65f60d7da725baf16dbfdd75a4c2f5bb8335f2cb5685073f5fee5c2d1 pywin32_ctypes-0.2.2-py3-none-any.whl
6e0d854040baff861e1647d2bece7d090bc793b2bd9819c56105b94090df54881a6a9b43ebd82578cd7c76d47181571b671e60672afd9def389d03c9dae84fcf setuptools-68.2.2-py3-none-any.whl
085d39ef4426aa5f097fbc484595becc16e61ca23fc7da4d2a8bba540a3b82e789e390b176c7151bdc67d01735cce22b1562cdb2e31273225a2d3e275851a4ad setuptools-70.3.0-py3-none-any.whl
360a141928f4a7ec18a994602cbb28bbf8b5cc7c077a06ac76b54b12fa769ed95ca0333a5cf728923a8e0baeb5cc4d5e73e5b3de2666beb05eb477d8ae719093 upx-4.2.4-win32.zip
# u2c (win7)
7a3bd4849f95e1715fe2e99613df70a0fedd944a9bfde71a0fadb837fe62c3431c30da4f0b75c74de6f1a459f1fdf7cb62eaf404fdbe45e2d121e0b1021f1580 certifi-2024.2.2-py3-none-any.whl
9cc8acc5e269e6421bc32bb89261101da29d6ca337d39d60b9106de9ed7904e188716e4a48d78a2c4329026443fcab7acab013d2fe43778e30d6c4e4506a1b91 charset_normalizer-3.3.2-cp37-cp37m-win32.whl
0ec1ae5c928b4a0001a254c8598b746049406e1eed720bfafa94d4474078eff76bf6e032124e2d4df4619052836523af36162443c6d746487b387d2e3476e691 idna-3.6-py3-none-any.whl
b795abb26ba2f04f1afcfb196f21f638014b26c8186f8f488f1c2d91e8e0220962fbd259dbc9c3875222eb47fc95c73fc0606aaa6602b9ebc524809c9ba3501f requests-2.31.0-py3-none-any.whl
61ed4500b6361632030f05229705c5c5a52cb47e31c0e6b55151c8f3beed631cd752ca6c3d6393d56a2acf6a453cfcf801e877116123c550922249c3a976e0f4 urllib3-1.26.18-py2.py3-none-any.whl
cc08d0d87d184401872a2f82266d589253979b4cd02f23b51290fbb2a20082848fc72acbed8aacb74ac4af068d575ef96e66196c5068bc38fb0bcafdc7626869 requests-2.29.0-py3-none-any.whl
fe5fee6cb8a2c68800b32353a0015e5d2e1ad1cb6e0c9e6acf86e48e5cdb5606ad465dc4485ea5fbc8701d8716a8a7f7148c57724ef9da26b0c0a76f6dbbd698 urllib3-1.26.19-py2.py3-none-any.whl
# win7
3253e86471e6f9fa85bfdb7684cd2f964ed6e35c6a4db87f81cca157c049bef43e66dfcae1e037b2fb904567b1e028aaeefe8983ba3255105df787406d2aa71e en_windows_7_professional_with_sp1_x86_dvd_u_677056.iso
ab0db0283f61a5bbe44797d74546786bf41685175764a448d2e3bd629f292f1e7d829757b26be346b5044d78c9c1891736d93237cee4b1b6f5996a902c86d15f en_windows_7_professional_with_sp1_x64_dvd_u_676939.iso
d130bfa136bd171b9972b5c281c578545f2a84a909fdf18a6d2d71dd12fb3d512a7a1fa5cf7300433adece1d306eb2f22d7278f4c90e744e04dc67ba627a82c0 future-1.0.0-py3-none-any.whl
0b4d07434bf8d314f42893d90bce005545b44a509e7353a73cad26dc9360b44e2824218a1a74f8174d02eba87fba91baffa82c8901279a32ebc6b8386b1b4275 importlib_metadata-6.7.0-py3-none-any.whl
9d2c31701a4d3fef553928c00528a48f9e1854ab5333528b50e358a214eba90029d687f039bcda5760b6fdf9f2de3bcf3784ae21a6374cf2a97a845d33b636c6 packaging-24.0-py3-none-any.whl
5d7462a584105bccaa9cf376f5a8c5827ead099c813c8af7392d478a4398f373d9e8cac7bbad2db51b335411ab966b21e119b1b1234c9a7ab70c6ddfc9306da6 pip-24.0-py3-none-any.whl
f298e34356b5590dde7477d7b3a88ad39c622a2bcf3fcd7c53870ce8384dd510f690af81b8f42e121a22d3968a767d2e07595036b2ed7049c8ef4d112bcf3a61 pyinstaller-5.13.2-py3-none-win32.whl
ea73aa54cc6d5db20dfb127e54562dabf890e4cd6171a91b10a51af2bcfc76e1d64cbdce4546df2dcfe42b624724c85b1cd05934be2413425b1f880222727b4f pyinstaller-5.13.2-py3-none-win_amd64.whl
2f4e3927a38cf7757bc9a1c06370d79209669a285a80f1b09cf9917137825c7022a50a56b351807e6e687e2c3a7bd7b2c5cc6daeb4d90e11920284c1a04a1cc3 pyinstaller_hooks_contrib-2023.8-py2.py3-none-any.whl
6bb73cc2db795c59c92f2115727f5c173cacc9465af7710db9ff2f2aec2d73130d0992d0f16dcb3fac222dc15c0916562d0813b2337401022020673a4461df3d python-3.7.9-amd64.exe
500747651c87f59f2436c5ab91207b5b657856e43d10083f3ce27efb196a2580fadd199a4209519b409920c562aaaa7dcbdfb83ed2072a43eaccae6e2d056f31 python-3.7.9.exe
03e50aecc85914567c114e38a1777e32628ee098756f37177bc23220eab33ac7d3ff591fd162db3b4d4e34d55cee93ef0dc67af68a69c38bb1435e0768dee57e typing_extensions-4.7.1-py3-none-any.whl
2e04acff170ca3bbceeeb18489c687126c951ec0bfd53cccfb389ba8d29a4576c1a9e8f2e5ea26c84dd21bfa2912f4e71fa72c1e2653b71e34afc0e65f1722d4 upx-4.2.2-win32.zip
68e1b618d988be56aaae4e2eb92bc0093627a00441c1074ebe680c41aa98a6161e52733ad0c59888c643a33fe56884e4f935178b2557fbbdd105e92e0d993df6 windows6.1-kb2533623-x64.msu
479a63e14586ab2f2228208116fc149ed8ee7b1e4ff360754f5bda4bf765c61af2e04b5ef123976623d04df4976b7886e0445647269da81436bd0a7b5671d361 windows6.1-kb2533623-x86.msu
ac96786e5d35882e0c5b724794329c9125c2b86ae7847f17acfc49f0d294312c6afc1c3f248655de3f0ccb4ca426d7957d02ba702f4a15e9fcd7e2c314e72c19 zipp-3.15.0-py3-none-any.whl
# win10
0a2cd4cadf0395f0374974cd2bc2407e5cc65c111275acdffb6ecc5a2026eee9e1bb3da528b35c7f0ff4b64563a74857d5c2149051e281cc09ebd0d1968be9aa en-us_windows_10_enterprise_ltsc_2021_x64_dvd_d289cf96.iso
16cc0c58b5df6c7040893089f3eb29c074aed61d76dae6cd628d8a89a05f6223ac5d7f3f709a12417c147594a87a94cc808d1e04a6f1e407cc41f7c9f47790d1 virtio-win-0.1.248.iso
d1420c8417fad7888766dd26b9706a87c63e8f33dceeb8e26d0056d5127b0b3ed9272e44b4b761132d4b3320327252eab1696520488ca64c25958896b41f547b jinja2-3.1.4-py3-none-any.whl
e21495f1d473d855103fb4a243095b498ec90eb68776b0f9b48e994990534f7286c0292448e129c507e5d70409f8a05cca58b98d59ce2a815993d0a873dfc480 MarkupSafe-2.1.5-cp311-cp311-win_amd64.whl
8e6847bcde75a2736be0214731f834bc1b5854238d703351e68bc4e74d38404b212b8568565ae22c844189e466d3fbe6024836351cb69ffb1824131387644fef MarkupSafe-2.1.5-cp312-cp312-win_amd64.whl
8a6e2b13a2ec4ef914a5d62aad3db6464d45e525a82e07f6051ed10474eae959069e165dba011aefb8207cdfd55391d73d6f06362c7eb247b08763106709526e mutagen-1.47.0-py3-none-any.whl
1dfe6f66bef5c9d62c9028a964196b902772ec9e19db215f3f41acb8d2d563586988d81b455fa6b895b434e9e1e9d57e4d271d1b1214483bdb3eadffcbba6a33 pillow-10.3.0-cp311-cp311-win_amd64.whl
8760eab271e79256ae3bfb4af8ccc59010cb5d2eccdd74b325d1a533ae25eb127d51c2ec28ff90d449afed32dd7d6af62934fe9caaf1ae1f4d4831e948e912da pyinstaller-6.5.0-py3-none-win_amd64.whl
897a14d5ee5cbc6781a0f48beffc27807a4f789d58c4329d899233f615d168a5dcceddf7f8f2d5bb52212ddcf3eba4664590d9f1fdb25bb5201f44899e03b2f7 python-3.11.9-amd64.exe
729dc52f1a02bc6274d012ce33f534102975a828cba11f6029600ea40e2d23aefeb07bf4ae19f9621d0565dd03eb2635bbb97d45fb692c1f756315e8c86c5255 upx-4.2.2-win64.zip
0203ec2551c4836696cfab0b2c9fff603352f03fa36e7476e2e1ca7ec57a3a0c24bd791fcd92f342bf817f0887854d9f072e0271c643de4b313d8c9569ba8813 packaging-24.1-py3-none-any.whl
2be320b4191f208cdd6af183c77ba2cf460ea52164ee45ac3ff17d6dfa57acd9deff016636c2dd42a21f4f6af977d5f72df7dacf599bebcf41757272354d14c1 pillow-10.4.0-cp312-cp312-win_amd64.whl
896ddddbd4b85e86e0600cb65eb4c07fbc7f3802d47e7f660411e20b5500831469b97ed4770f25820f4e75cbfac40308da624fd86d4f62e578149d5c276a9cde pyinstaller-6.10.0-py3-none-win_amd64.whl
873781decaeef07f6a79b0ed8b9f35f3fa534a1ea0d866991e40278a10818fa5b60c70b0d5828971b045364f1099694cd1e5d5d60d480acb93fcfbfbced4a09e pyinstaller_hooks_contrib-2024.8-py3-none-any.whl
0572c6345f6a4f7f3e5c2ff858e3ca7ca54ae4478f3d59d8e18cb0f596e61dcf12aef579db229e83d63b30f15d6684ee6bb3feaea9413e5e636a503933057678 python-3.12.5-amd64.exe


@@ -34,6 +34,7 @@ https://support.microsoft.com/en-us/topic/microsoft-security-advisory-insecure-l
see web.archive.org links below
# direct links to version-frozen deps
https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.248-1/virtio-win-0.1.248.iso
https://www.python.org/ftp/python/3.7.9/python-3.7.9-amd64.exe
https://www.python.org/ftp/python/3.7.9/python-3.7.9.exe
https://web.archive.org/web/20200412130846if_/https://download.microsoft.com/download/2/D/7/2D78D0DD-2802-41F5-88D6-DC1D559F206D/Windows6.1-KB2533623-x86.msu


@@ -1,14 +1,26 @@
after performing all the initial setup in this file,
run ./build.sh in git-bash to build + upload the exe
## ============================================================
## first-time setup on a stock win7x32sp1 and/or win10x64 vm:
##
to obtain the files referenced below, see ./deps.txt
download + install git (32bit OK on 64):
http://192.168.123.1:3923/ro/pyi/Git-2.44.0-32-bit.exe
and if you don't yet have a windows vm to build in, then the
first step will be "creating the windows VM templates" below
commands to start the VMs after initial setup:
qemu-system-x86_64 -m 4096 -enable-kvm --machine q35 -cpu host -smp 4 -usb -device usb-tablet -net bridge,br=virhost0 -net nic,model=e1000e -drive file=win7-x32.qcow2,discard=unmap,detect-zeroes=unmap
qemu-system-x86_64 -m 4096 -enable-kvm --machine q35 -cpu host -smp 4 -usb -device usb-tablet -net bridge,br=virhost0 -net nic,model=virtio -drive file=win10-e2021.qcow2,if=virtio,discard=unmap
## ============================================================
## first-time setup in a stock win7x32sp1 and/or win10x64 vm:
##
grab & install from ftp2host: Git-2.45.2-32-bit.exe
...and do this on the host so you can grab these notes too:
unix2dos <~/dev/copyparty/scripts/pyinstaller/notes.txt >~/dev/pyi/notes.txt
===[ copy-paste into git-bash ]================================
uname -s | grep NT-10 && w10=1 || {
@@ -16,39 +28,40 @@ uname -s | grep NT-10 && w10=1 || {
}
fns=(
altgraph-0.17.4-py2.py3-none-any.whl
packaging-24.0-py3-none-any.whl
pefile-2023.2.7-py3-none-any.whl
pyinstaller_hooks_contrib-2024.3-py2.py3-none-any.whl
pywin32_ctypes-0.2.2-py3-none-any.whl
setuptools-68.2.2-py3-none-any.whl
setuptools-70.3.0-py3-none-any.whl
upx-4.2.4-win32.zip
)
[ $w10 ] && fns+=(
pyinstaller-6.5.0-py3-none-win_amd64.whl
Jinja2-3.1.4-py3-none-any.whl
MarkupSafe-2.1.5-cp311-cp311-win_amd64.whl
jinja2-3.1.4-py3-none-any.whl
MarkupSafe-2.1.5-cp312-cp312-win_amd64.whl
mutagen-1.47.0-py3-none-any.whl
pillow-10.3.0-cp311-cp311-win_amd64.whl
python-3.11.9-amd64.exe
upx-4.2.2-win64.zip
packaging-24.1-py3-none-any.whl
pillow-10.4.0-cp312-cp312-win_amd64.whl
pyinstaller-6.10.0-py3-none-win_amd64.whl
pyinstaller_hooks_contrib-2024.8-py3-none-any.whl
python-3.12.5-amd64.exe
)
[ $w7 ] && fns+=(
pyinstaller-5.13.2-py3-none-win32.whl
[ $w7 ] && fns+=( # u2c stuff
certifi-2024.2.2-py3-none-any.whl
charset_normalizer-3.3.2-cp37-cp37m-win32.whl
idna-3.6-py3-none-any.whl
requests-2.31.0-py3-none-any.whl
urllib3-1.26.18-py2.py3-none-any.whl
upx-4.2.2-win32.zip
requests-2.29.0-py3-none-any.whl
urllib3-1.26.19-py2.py3-none-any.whl
)
[ $w7 ] && fns+=(
future-1.0.0-py3-none-any.whl
importlib_metadata-6.7.0-py3-none-any.whl
packaging-24.0-py3-none-any.whl
pip-24.0-py3-none-any.whl
pyinstaller_hooks_contrib-2023.8-py2.py3-none-any.whl
typing_extensions-4.7.1-py3-none-any.whl
zipp-3.15.0-py3-none-any.whl
)
[ $w7x64 ] && fns+=(
windows6.1-kb2533623-x64.msu
pyinstaller-5.13.2-py3-none-win64.whl
python-3.7.9-amd64.exe
)
[ $w7x32 ] && fns+=(
@@ -57,20 +70,24 @@ fns=(
python-3.7.9.exe
)
dl() { curl -fkLOC- "$1" && return 0; echo "$1"; return 1; }
cd ~/Downloads &&
cd ~/Downloads && rm -f Git-*.exe &&
for fn in "${fns[@]}"; do
dl "https://192.168.123.1:3923/ro/pyi/$fn" || {
echo ERROR; ok=; break
}
done
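optionally (not part of the original notes), the downloaded wheels can be checked against the sha512 sums in deps.txt; a sketch, assuming deps.txt was fetched into the same folder and the git-bash coreutils are new enough for --ignore-missing:
grep -v '^#' deps.txt | sha512sum --ignore-missing -c -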
manually install:
windows6.1-kb2533623 + reboot
python-3.7.9
WIN7-ONLY: manually install windows6.1-kb2533623 and reboot
manually install python-3.99.99.exe and then delete it
close and reopen git-bash so python is in PATH
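a quick sanity check (not in the original notes) that the fresh shell actually picked it up:
python -V && which python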
===[ copy-paste into git-bash ]================================
uname -s | grep NT-10 && w10=1 || w7=1
[ $w7 ] && pyv=37 || pyv=311
[ $w7 ] && pyv=37 || pyv=312
appd=$(cygpath.exe "$APPDATA")
cd ~/Downloads &&
yes | unzip upx-*-win32.zip &&
@@ -78,7 +95,7 @@ mv upx-*/upx.exe . &&
python -m ensurepip &&
{ [ $w10 ] || python -m pip install --user -U pip-*.whl; } &&
python -m pip install --user -U packaging-*.whl &&
{ [ $w7 ] || python -m pip install --user -U {setuptools,mutagen,Pillow,Jinja2,MarkupSafe}-*.whl; } &&
{ [ $w7 ] || python -m pip install --user -U {setuptools,mutagen,pillow,jinja2,MarkupSafe}-*.whl; } &&
{ [ $w10 ] || python -m pip install --user -U {requests,urllib3,charset_normalizer,certifi,idna}-*.whl; } &&
{ [ $w10 ] || python -m pip install --user -U future-*.whl importlib_metadata-*.whl typing_extensions-*.whl zipp-*.whl; } &&
python -m pip install --user -U pyinstaller-*.whl pefile-*.whl pywin32_ctypes-*.whl pyinstaller_hooks_contrib-*.whl altgraph-*.whl &&
@@ -90,9 +107,65 @@ rm -f build.sh &&
curl -fkLO https://192.168.123.1:3923/cpp/scripts/pyinstaller/build.sh &&
{ [ $w10 ] || curl -fkLO https://192.168.123.1:3923/cpp/scripts/pyinstaller/up2k.sh; } &&
echo ok
# python -m pip install --user -U Pillow-9.2.0-cp37-cp37m-win32.whl
# sed -ri 's/, bestopt, /]+bestopt+[/' $APPDATA/Python/Python37/site-packages/pyinstaller/building/utils.py
# sed -ri 's/(^\s+bestopt = ).*/\1["--best","--lzma","--ultra-brute"]/' $APPDATA/Python/Python37/site-packages/pyinstaller/building/utils.py
now is an excellent time to take another snapshot, but:
* on win7: first do the 4g.nul thing again
* on win10: first do a reboot so fstrim kicks in
then shutdown and: vmsnap the.qcow2 snap2
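concretely, with the image names used further down in these notes, that would be something like (a sketch; vmsnap is the helper defined in the VM-template section below):
vmsnap win7-x32.qcow2 snap2
vmsnap win10-e2021.qcow2 snap2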
## ============================================================
## creating the windows VM templates
##
bash ~/dev/asm/doc/setup-virhost.sh # github:9001/asm
truncate -s 4G ~/dev/pyi/4g.nul # win7 "trim"
note: if you keep accidentally killing the vm with alt-f4 then remove "-device usb-tablet" in the qemu commands below
# win7: don't bother with virtio stuff since win7 doesn't fstrim properly anyways (4g.nul takes care of that)
rm -f win7-x32.qcow2
qemu-img create -f qcow2 win7-x32.qcow2 64g
qemu-system-x86_64 -m 4096 -enable-kvm --machine q35 -cpu host -smp 4 -usb -device usb-tablet -net bridge,br=virhost0 -net nic,model=e1000e -drive file=win7-x32.qcow2,discard=unmap,detect-zeroes=unmap \
-cdrom ~/iso/win7-X17-59183-english-32bit-professional.iso
# win10: use virtio hdd and net (viostor+netkvm), but do NOT use qxl graphics (kills mouse cursor)
rm -f win10-e2021.qcow2
qemu-img create -f qcow2 win10-e2021.qcow2 64g
qemu-system-x86_64 -m 4096 -enable-kvm --machine q35 -cpu host -smp 4 -usb -device usb-tablet -net bridge,br=virhost0 -net nic,model=virtio -drive file=win10-e2021.qcow2,if=virtio,discard=unmap \
-drive file=$HOME/iso/virtio-win-0.1.248.iso,media=cdrom -cdrom $HOME/iso/en-us_windows_10_enterprise_ltsc_2021_x64_dvd_d289cf96.iso
tweak stuff to your preference, but also do these steps in order:
* press ctrl-alt-g so you don't accidentally alt-f4 the vm
* startmenu, type "sysdm.cpl" and hit Enter,
* system protection -> configure -> disable
* advanced > performance > advanced > virtual memory > no paging file
* startmenu, type "cmd" and hit Ctrl-Shift-Enter, run command: powercfg /h off
* reboot
* make screen resolution something comfy (1440x900 is always a winner)
* cmd.exe window-width 176 (assuming 1440x900) and buffer-height 8191
* fix explorer settings (show hidden files and file extensions)
* WIN10-ONLY: startmenu, device manager, install netkvm driver for ethernet
* create ftp2host.bat on desktop with the following contents:
start explorer ftp://wark:k@192.168.123.1:3921/ro/pyi/
* WIN7-ONLY: connect to ftp, download 4g.nul to desktop, then delete it (poor man's fstrim...)
and finally take snapshots of the VMs by copypasting this stuff into your shell:
vmsnap() { zstd --long=31 -vT0 -19 <$1 >$1.$2; };
vmsnap win7-x32.qcow2 snap1
vmsnap win10-e2021.qcow2 snap1
note: vmsnap could have defragged the qcow2 as well, but
that makes it hard to do xdelta3 memes so it's not worth it --
but you can add this before "zstd" if you still want to:
qemu-img convert -f qcow2 -O qcow2 $1 a.qcow2 && mv a.qcow2 $1 &&
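for reference, a sketch of the helper with that optional defrag step folded in (just the two commands above chained, nothing new):
vmsnap() {
    # compact the qcow2 first, then compress the snapshot
    qemu-img convert -f qcow2 -O qcow2 $1 a.qcow2 && mv a.qcow2 $1 &&
    zstd --long=31 -vT0 -19 <$1 >$1.$2;
}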
## ============================================================


@@ -37,6 +37,8 @@ grep -E '^from .ssl_ import' $APPDATA/python/python37/site-packages/urllib3/util
echo golfed
}
sed -ri 's/(add_argument."-t[de]",.*help=")[^"]+/\1not applicable; HTTPS is disabled in this exe/; s/for some reason/in this exe for safety reasons/' u2c.py
read a b _ < <(awk -F\" '/^S_VERSION =/{$0=$2;sub(/\./," ");print}' < u2c.py)
sed -r 's/1,2,3,0/'$a,$b,0,0'/;s/1\.2\.3/'$a.$b.0/ <up2k.rc >up2k.rc2


@@ -1,6 +1,8 @@
#!/bin/bash
set -e
# if specified, keep the following sfx flags last: gz gzz fast
parallel=1
[ -e make-sfx.sh ] || cd scripts
@@ -35,6 +37,14 @@ f=../dist/copyparty-sfx
$f$s.py --version >/dev/null
while [ "$1" ]; do
case "$1" in
gz*) break;;
fast) break;;
esac
shift
done
[ $parallel -gt 1 ] && {
printf '\033[%s' s 2r H "0;1;37;44mbruteforcing sfx size -- press enter to terminate" K u "7m $* " K $'27m\n'
trap "rm -f .sfx-run; printf '\033[%s' s r u" INT TERM EXIT


@@ -103,6 +103,9 @@ copyparty/web/mde.html,
copyparty/web/mde.js,
copyparty/web/msg.css,
copyparty/web/msg.html,
copyparty/web/shares.css,
copyparty/web/shares.html,
copyparty/web/shares.js,
copyparty/web/splash.css,
copyparty/web/splash.html,
copyparty/web/splash.js,


@@ -47,6 +47,14 @@ def uh(top):
def uh1(fp):
try:
uh2(fp)
except:
print("failed to process", fp)
raise
def uh2(fp):
pr(".")
cs = strip_file_to_string(fp, no_ast=True, to_empty=True)


@@ -3,34 +3,39 @@ set -ex
# PYTHONPATH=.:~/dev/partftpy/ taskset -c 0 python3 -m copyparty -v srv::r -v srv/junk:junk:A --tftp 3969
get_src=~/dev/copyparty/srv/palette.flac
get_fn=${get_src##*/}
get_src=~/dev/copyparty/srv/ro/palette.flac
get_fp=ro/${get_src##*/} # server url
get_fn=${get_fp##*/} # just filename
put_src=~/Downloads/102.zip
put_dst=~/dev/copyparty/srv/junk/102.zip
export PATH="$PATH:$HOME/src/atftp-0.8.0"
cd /dev/shm
echo curl get 1428 v4; curl --tftp-blksize 1428 tftp://127.0.0.1:3969/$get_fn | cmp $get_src || exit 1
echo curl get 1428 v6; curl --tftp-blksize 1428 tftp://[::1]:3969/$get_fn | cmp $get_src || exit 1
echo curl get 1428 v4; curl --tftp-blksize 1428 tftp://127.0.0.1:3969/$get_fp | cmp $get_src || exit 1
echo curl get 1428 v6; curl --tftp-blksize 1428 tftp://[::1]:3969/$get_fp | cmp $get_src || exit 1
echo curl put 1428 v4; rm -f $put_dst && curl --tftp-blksize 1428 -T $put_src tftp://127.0.0.1:3969/junk/ && cmp $put_src $put_dst || exit 1
echo curl put 1428 v6; rm -f $put_dst && curl --tftp-blksize 1428 -T $put_src tftp://[::1]:3969/junk/ && cmp $put_src $put_dst || exit 1
echo atftp get 1428; rm -f $get_fn && ~/src/atftp/atftp --option "blksize 1428" -g -r $get_fn 127.0.0.1 3969 && cmp $get_fn $get_src || exit 1
echo atftp get 1428; rm -f $get_fn && atftp --option "blksize 1428" -g -r $get_fp -l $get_fn 127.0.0.1 3969 && cmp $get_fn $get_src || exit 1
echo atftp put 1428; rm -f $put_dst && ~/src/atftp/atftp --option "blksize 1428" 127.0.0.1 3969 -p -l $put_src -r junk/102.zip && cmp $put_src $put_dst || exit 1
echo atftp put 1428; rm -f $put_dst && atftp --option "blksize 1428" 127.0.0.1 3969 -p -l $put_src -r junk/102.zip && cmp $put_src $put_dst || exit 1
echo tftp-hpa get; rm -f $put_dst && tftp -v -m binary 127.0.0.1 3969 -c get $get_fn && cmp $get_src $get_fn || exit 1
echo tftp-hpa get; rm -f $get_fn && tftp -v -m binary 127.0.0.1 3969 -c get $get_fp && cmp $get_src $get_fn || exit 1
echo tftp-hpa put; rm -f $put_dst && tftp -v -m binary 127.0.0.1 3969 -c put $put_src junk/102.zip && cmp $put_src $put_dst || exit 1
echo curl get 512; curl tftp://127.0.0.1:3969/$get_fn | cmp $get_src || exit 1
echo curl get 512; curl tftp://127.0.0.1:3969/$get_fp | cmp $get_src || exit 1
echo curl put 512; rm -f $put_dst && curl -T $put_src tftp://127.0.0.1:3969/junk/ && cmp $put_src $put_dst || exit 1
echo atftp get 512; rm -f $get_fn && ~/src/atftp/atftp -g -r $get_fn 127.0.0.1 3969 && cmp $get_fn $get_src || exit 1
echo atftp get 512; rm -f $get_fn && atftp -g -r $get_fp -l $get_fn 127.0.0.1 3969 && cmp $get_fn $get_src || exit 1
echo atftp put 512; rm -f $put_dst && ~/src/atftp/atftp 127.0.0.1 3969 -p -l $put_src -r junk/102.zip && cmp $put_src $put_dst || exit 1
echo atftp put 512; rm -f $put_dst && atftp 127.0.0.1 3969 -p -l $put_src -r junk/102.zip && cmp $put_src $put_dst || exit 1
echo nice
rm -f $get_fn

scripts/tlcheck.sh Executable file

@@ -0,0 +1,50 @@
#!/bin/bash
set -e
# usage: ./scripts/tlcheck.sh eng chi copyparty/web/browser.js
awk <"$3" -v lang1=\"$1\": -v lang2=\"$2\": '
/^\t\}/{fa=0;fb=0}
!/":/{next}
$0~lang1{fa=1}
$0~lang2{fb=1}
fa{a[ia++]=$0}
fb{b[ib++]=$0}
END{for (i=0;i<ia;i++) printf "%s\n%s\n\n",a[i],b[i]}
' |
awk -v apos=\' -v quot=\" '
# count special chars and prefix to line
function c(ch) {
m=$0;
gsub(ch,"",m);
t=t sprintf("%s%d ", ch, length($0)-length(m))
}
!$0 && t!=tp {
print "\n\033[1;37;41m====DIFF===="
}
!$0 { print; next; }
{
tp=t; t="";
c(quot);
c(apos);
c("<");
c(">");
c("{");
c("}");
c("&");
c("\\\$");
c("\\\\");
print t $0;
}
' |
sed -r $'
s/\\\\/\033[1;37;41m\\\\\033[0m/g;
s/\$N/\033[1;37;45m$N\033[0m/g;
s/([{}])/\033[34m\\1\033[0m/g;
s/"/\033[44m"\033[0m/g;
s/\'/\033[45m\'\033[0m/g;
s/&/\033[1;43;30m&\033[0m/g;
s/([<>])/\033[30;47m\\1\033[0m/g
' |
sed -r 's/\t+//' |
less -R
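To audit more than one script at a time, a hypothetical wrapper (not part of the repo) could reuse the same invocation:
# check every web script that contains an "eng": translation block
for f in copyparty/web/*.js; do
    grep -q '"eng":' "$f" && ./scripts/tlcheck.sh eng chi "$f"
done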


@@ -8,6 +8,12 @@ import sys
import tokenize
try:
FSTRING_MIDDLE = tokenize.FSTRING_MIDDLE
except:
FSTRING_MIDDLE = -9001
def uncomment(fpath):
"""modified https://stackoverflow.com/a/62074206"""
@@ -31,7 +37,7 @@ def uncomment(fpath):
if start_line > last_lineno:
last_col = 0
if start_col > last_col:
if start_col > last_col and prev_toktype != FSTRING_MIDDLE:
out += " " * (start_col - last_col)
is_legalese = (
@@ -48,6 +54,10 @@ def uncomment(fpath):
out += token_string
else:
out += '"a"'
elif token_type == FSTRING_MIDDLE:
out += token_string.replace(r"{", r"{{").replace(r"}", r"}}")
if not code and token_string.strip():
code = True
elif token_type != tokenize.COMMENT:
out += token_string
if not code and token_string.strip():


@@ -141,7 +141,7 @@ args = {
"audiotags": ["mutagen"],
"ftpd": ["pyftpdlib"],
"ftps": ["pyftpdlib", "pyopenssl"],
"tftpd": ["partftpy>=0.3.1"],
"tftpd": ["partftpy>=0.4.0"],
"pwhash": ["argon2-cffi"],
},
"entry_points": {"console_scripts": ["copyparty = copyparty.__main__:main"]},

tests/test_dedup.py Normal file

@@ -0,0 +1,143 @@
#!/usr/bin/env python3
# coding: utf-8

from __future__ import print_function, unicode_literals

import json
import os
import shutil
import tempfile
import unittest
from itertools import product

from copyparty.authsrv import AuthSrv
from copyparty.httpcli import HttpCli

from tests import util as tu
from tests.util import Cfg


class TestDedup(unittest.TestCase):
    def setUp(self):
        self.td = tu.get_ramdisk()

    def tearDown(self):
        os.chdir(tempfile.gettempdir())
        shutil.rmtree(self.td)

    def reset(self):
        td = os.path.join(self.td, "vfs")
        if os.path.exists(td):
            shutil.rmtree(td)
        os.mkdir(td)
        os.chdir(td)
        return td

    def test(self):
        quick = True  # sufficient for regular smoketests
        # quick = False

        dirnames = ["d1", "d2"]
        filenames = ["f1", "f2"]
        files = [
            (
                "one",
                "BfcDQQeKz2oG1CPSFyD5ZD1flTYm2IoCY23DqeeVgq6w",
                "XMbpLRqVdtGmgggqjUI6uSoNMTqZVX4K6zr74XA1BRKc",
            ),
            (
                "two",
                "ko1Q0eJNq3zKYs_oT83Pn8aVFgonj5G1wK8itwnYL4qj",
                "fxvihWlnQIbVbUPr--TxyV41913kPLhXPD1ngXYxDfou",
            ),
        ]
        # (data, chash, wark)

        # 3072 uploads in total
        self.ctr = 3072

        self.conn = None
        fstab = None
        for e2d in [True, False]:
            self.args = Cfg(v=[".::A"], a=[], e2d=e2d)
            for dn1, fn1, f1 in product(dirnames, filenames, files):
                for dn2, fn2, f2 in product(dirnames, filenames, files):
                    for dn3, fn3, f3 in product(dirnames, filenames, files):
                        self.reset()
                        if self.conn:
                            fstab = self.conn.hsrv.hub.up2k.fstab
                            self.conn.hsrv.hub.up2k.shutdown()
                        self.asrv = AuthSrv(self.args, self.log)
                        self.conn = tu.VHttpConn(
                            self.args, self.asrv, self.log, b"", True
                        )
                        if fstab:
                            self.conn.hsrv.hub.up2k.fstab = fstab

                        self.do_post(dn1, fn1, f1, True)
                        self.do_post(dn2, fn2, f2, False)
                        self.do_post(dn3, fn3, f3, False)

                        if quick:
                            break

    def do_post(self, dn, fn, fi, first):
        print("\n\n# do_post", self.ctr, repr((dn, fn, fi, first)))
        self.ctr -= 1

        data, chash, wark = fi
        hs = self.handshake(dn, fn, fi)
        self.assertEqual(hs["wark"], wark)

        sfn = hs["name"]
        if sfn == fn:
            print("using original name " + fn)
        else:
            print(fn + " got renamed to " + sfn)
            if first:
                raise Exception("wait what")

        if hs["hash"]:
            self.assertEqual(hs["hash"][0], chash)
            self.put_chunk(dn, wark, chash, data)
        elif first:
            raise Exception("found first; %r, %r" % ((dn, fn, fi), hs))

        h, b = self.curl("%s/%s" % (dn, sfn))
        self.assertEqual(b, data)

    def handshake(self, dn, fn, fi):
        hdr = "POST /%s/ HTTP/1.1\r\nConnection: close\r\nContent-Type: text/plain\r\nContent-Length: %d\r\n\r\n"
        msg = {"name": fn, "size": 3, "lmod": 1234567890, "life": 0, "hash": [fi[1]]}
        buf = json.dumps(msg).encode("utf-8")
        buf = (hdr % (dn, len(buf))).encode("utf-8") + buf
        print("HS -->", buf)
        HttpCli(self.conn.setbuf(buf)).run()
        ret = self.conn.s._reply.decode("utf-8").split("\r\n\r\n", 1)
        print("HS <--", ret)
        return json.loads(ret[1])

    def put_chunk(self, dn, wark, chash, data):
        msg = [
            "POST /%s/ HTTP/1.1" % (dn,),
            "Connection: close",
            "Content-Type: application/octet-stream",
            "Content-Length: 3",
            "X-Up2k-Hash: " + chash,
            "X-Up2k-Wark: " + wark,
            "",
            data,
        ]
        buf = "\r\n".join(msg).encode("utf-8")
        print("PUT -->", buf)
        HttpCli(self.conn.setbuf(buf)).run()
        ret = self.conn.s._reply.decode("utf-8").split("\r\n\r\n", 1)
        self.assertEqual(ret[1], "thank")

    def curl(self, url, binary=False):
        h = "GET /%s HTTP/1.1\r\nConnection: close\r\n\r\n"
        HttpCli(self.conn.setbuf((h % (url,)).encode("utf-8"))).run()
        if binary:
            h, b = self.conn.s._reply.split(b"\r\n\r\n", 1)
            return [h.decode("utf-8"), b]
        return self.conn.s._reply.decode("utf-8").split("\r\n\r\n", 1)

    def log(self, src, msg, c=0):
        print(msg)
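To run just this file locally, something like the following should work from the repository root (a sketch; stdlib test runner only, assuming a normal source checkout):
python3 -m unittest -v tests.test_dedup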

Some files were not shown because too many files have changed in this diff.