Compare commits


64 Commits

Author SHA1 Message Date
ed
a0da0122b9 v1.10.0 2024-02-15 00:00:41 +00:00
ed
879e83e24f ignore easymde errors
it randomly throws when clicking inside the preview pane
2024-02-14 23:26:06 +00:00
ed
64ad585318 ie11: file selection hotkeys 2024-02-14 23:08:32 +00:00
ed
f262aee800 change folders to preload music when necessary:
on phones especially, hitting the end of a folder while playing music
could permanently stop audio playback, because the browser will
revoke playback privileges unless we have a song ready to go...
there's no time to navigate through folders looking for the next file

the preloader will now start jumping through folders ahead of time
2024-02-14 22:44:33 +00:00
ed
d4da386172 add watchdog for sqlite deadlock on db init:
some cifs servers cause sqlite to fail in interesting ways; any attempt
to create a table can instantly throw an exception, which results in a
zerobyte database being created. During the next startup, the db would
be determined to be corrupted, and up2k would invoke _backup_db before
deleting and recreating it -- except that sqlite's connection.backup()
will hang indefinitely and deadlock up2k

add a watchdog which fires if it takes longer than 1 minute to open the
database, printing a big warning that the filesystem probably does not
support locking or is otherwise sqlite-incompatible, then writing a
stacktrace of all threads to a textfile in the config directory
(in case this deadlock is due to something completely different),
before finally crashing spectacularly

additionally, delete the database if the creation fails, which should
prevent the deadlock on the next startup, combined with a
message hinting at the filesystem incompatibility

the 1-minute limit may sound excessively gracious, but considering what
some of the copyparty instances out there are running on, it really isn't

this was reported when connecting to a cifs server running alpine

thx to abex on discord for the detailed bug report!
2024-02-14 20:18:36 +00:00
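the watchdog idea above can be sketched in python; a minimal illustration under assumptions -- `open_with_watchdog` and `dump_all_stacks` are made-up names for this sketch, not copyparty's actual code:

```python
import os
import sys
import threading
import traceback

def dump_all_stacks(path):
    # write a stacktrace of every thread to a textfile,
    # in case the hang is due to something completely different
    with open(path, "w") as f:
        for tid, frame in sys._current_frames().items():
            f.write("thread %d:\n" % (tid,))
            f.write("".join(traceback.format_stack(frame)) + "\n")

def open_with_watchdog(opener, timeout=60, trace_path="deadlock.txt"):
    # run opener(); if it takes longer than `timeout` seconds,
    # print a big warning, dump all stacks, and crash spectacularly
    done = threading.Event()

    def watchdog():
        if not done.wait(timeout):
            print("WARNING: db open timed out; the filesystem probably "
                  "does not support locking or is sqlite-incompatible")
            dump_all_stacks(trace_path)
            os._exit(1)

    threading.Thread(target=watchdog, daemon=True).start()
    try:
        return opener()
    finally:
        done.set()  # disarm the watchdog on success or failure
```

the actual implementation also deletes the zerobyte database on failure; this sketch only covers the timeout/crash path.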
ed
5d92f4df49 mention why -j0 can be a bad idea to enable,
and that `--hist` can also help for loading thumbnails faster
2024-02-13 19:47:42 +00:00
ed
6f8a588c4d up2k: fix a mostly-harmless race
as each chunk is written to the file, httpcli calls
up2k.confirm_chunk to register the chunk as completed, and the reply
indicates whether that was the final outstanding chunk, in which case
httpcli closes the file descriptors since there's nothing more to write

the issue is that the final chunk is registered as completed before the
file descriptors are closed, meaning there could be writes that haven't
finished flushing to disk yet

if the client decides to issue another handshake during this window,
up2k sees that all chunks are complete and calls up2k.finish_upload
even as some threads might still be flushing the final writes to disk

so the conditions to hit this bug were as follows (all must be true):
* multiprocessing is disabled
* there is a reverse-proxy
* a client has several idle connections and reuses one of those
* the server's filesystem is EXTREMELY slow, to the point where
   closing a file takes over 30 seconds

the fix is to stop handshakes from being processed while a file is
being closed, which is unfortunately a small bottleneck in that it
prohibits initiating another upload while one is being finalized, but
the required complexity to handle this better is probably not worth it
(a separate mutex for each upload session or something like that)

this issue is mostly harmless, partially because it is super tricky to
hit (only aware of it happening synthetically), and because there are
usually no harmful consequences; the worst-case is if this were to
happen exactly as the server OS decides to crash, which would make the
file appear to be fully uploaded even though it's missing some data
(all extremely unlikely, but not impossible)

there is no performance impact; if anything it should now accept
new tcp connections slightly faster thanks to more granular locking
2024-02-13 19:24:06 +00:00
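the fix described above -- serializing handshakes and file-close on the same lock, so a handshake can never observe "all chunks done" while the final writes are still flushing -- can be sketched like this; `UploadSession` and its methods are illustrative names, not copyparty's actual classes:

```python
import threading

class UploadSession:
    # one coarse lock covers both chunk-confirmation (which closes the
    # file descriptors on the final chunk) and handshake processing,
    # so finish_upload cannot run while a close is still in progress
    def __init__(self, nchunks):
        self.mutex = threading.Lock()
        self.pending = set(range(nchunks))
        self.fds = []  # open file objects for this upload

    def confirm_chunk(self, n):
        with self.mutex:
            self.pending.discard(n)
            if not self.pending:
                # flush+close before any handshake can see completion
                for fd in self.fds:
                    fd.close()
                self.fds = []
                return True  # this was the final outstanding chunk
            return False

    def handshake_sees_complete(self):
        # blocks while confirm_chunk is closing the descriptors
        with self.mutex:
            return not self.pending and not self.fds
```

the downside named in the commit follows directly: a handshake for a new upload also waits on `mutex` while another upload is being finalized.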
ed
7c8e368721 lol markdown 2024-02-12 06:01:09 +01:00
ed
f7a43a8e46 fix grid layout on first toggle from listview 2024-02-12 05:40:18 +01:00
ed
02879713a2 tftp: update readme + small py2 fix 2024-02-12 05:39:54 +01:00
ed
acbb8267e1 tftp: add directory listing 2024-02-10 23:50:17 +00:00
ed
8796c09f56 add --tftp-pr to specify portrange instead of ephemerals 2024-02-10 21:45:57 +00:00
ed
d636316a19 add tftp server 2024-02-10 18:37:21 +00:00
ed
ed524d84bb /np: exclude uploader ip and trim dot-prefix 2024-02-07 23:02:47 +00:00
ed
f0cdd9f25d upgrade copyparty.exe to python 3.11.8 2024-02-07 20:39:51 +00:00
ed
4e797a7156 docker: mention debian issue from discord 2024-02-05 20:11:04 +00:00
ed
136c0fdc2b detect reverse-proxies stripping URL params:
if a reverseproxy decides to strip away URL parameters, show an
appropriate error-toast instead of silently entering a bad state

someone on discord ended up in an infinite page-reload loop
since the js would try to recover by fully navigating to the
requested dir if `?ls` failed, which wouldn't do any good anyways
if the dir in question is the initial dir to display
2024-02-05 19:17:36 +00:00
ed
cab999978e update pkgs to 1.9.31 2024-02-03 16:02:59 +00:00
ed
fabeebd96b v1.9.31 2024-02-03 15:33:11 +00:00
ed
b1cf588452 add lore 2024-02-03 15:05:27 +00:00
ed
c354a38b4c up2k: warn about browser cap on num connections 2024-02-02 23:46:00 +00:00
ed
a17c267d87 bbox: unload pics/vids from DOM; closes #71
videos unloaded correctly when switching between files, but not when
closing the lightbox while playing a video and then clicking another

now, only media within the preload window (+/- 2 from current file)
is kept loaded into DOM, everything else gets ejected, both on
navigation and when closing the lightbox
2024-02-02 23:16:50 +00:00
ed
c1180d6f9c up2k: include inflight bytes in eta calculation;
much more accurate total-ETA when uploading with many connections
and/or uploading huge files to really slow servers

the titlebar % still only does actually confirmed bytes,
partially because that makes sense, partially because
that's what happened by accident
2024-02-02 22:46:24 +00:00
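the eta arithmetic above amounts to counting inflight bytes toward progress for the rate estimate, while the displayed percentage sticks to confirmed bytes; a hypothetical standalone version:

```python
def upload_eta(total_bytes, confirmed, inflight, elapsed_s):
    # include bytes currently in flight when estimating the rate;
    # with many connections or huge files, confirmed-only lags badly
    done = confirmed + inflight
    if not done or not elapsed_s:
        return None  # nothing to extrapolate from yet
    rate = done / elapsed_s             # bytes per second
    return (total_bytes - done) / rate  # seconds remaining
```

the titlebar percentage would still be `confirmed / total_bytes`, as the commit notes.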
ed
d3db6d296f disable mkdir and new-doc buttons if no name is provided
also fixes toast.hide() unintentionally stopping events from bubbling
2024-02-01 21:41:48 +00:00
ed
eefa0518db change FFmpeg from BtbN to gyan/codex;
deps are more up-to-date and slightly better codec selection
2024-01-28 22:04:01 +00:00
ed
945170e271 fix umod/touching zerobyte files 2024-01-27 20:26:27 +00:00
ed
6c2c6090dc notes: hardlink/symlink conversion + phone cam sync 2024-01-27 18:52:08 +00:00
ed
b2e233403d u2c: apply exclude-filter to deletion too
if a file gets synced and you later add an exclude-filter for it,
delete the file from the server as if it doesn't exist locally
2024-01-27 18:49:25 +00:00
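the behavior above -- treating locally-excluded files as nonexistent so they get deleted server-side too -- can be sketched as follows; `names_to_delete` is a made-up helper, not u2c's actual function:

```python
import re

def names_to_delete(server_names, local_names, exclude=None):
    # apply the exclude-filter to the local listing first, so a file
    # that was synced earlier and later excluded is deleted from the
    # server as if it doesn't exist locally
    if exclude:
        ptn = re.compile(exclude)
        local_names = [x for x in local_names if not ptn.match(x)]
    present = set(local_names)
    return [x for x in server_names if x not in present]
```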
ed
e397ec2e48 update pkgs to 1.9.30 2024-01-25 23:18:21 +00:00
ed
fade751a3e v1.9.30 2024-01-25 22:52:42 +00:00
ed
0f386c4b08 also sanitize histpaths in client error messages;
previously it only did volume abspaths
2024-01-25 21:40:41 +00:00
ed
14bccbe45f backports from IdP branch:
* allow mounting `/` (the entire filesystem) as a volume
  * not that you should (really, you shouldn't)
* improve `-v` helptext
* change IdP group symbol to @ because % is used for file inclusion
  * not technically necessary but is less confusing in docs
2024-01-25 21:39:30 +00:00
ed
55eb692134 up2k: add option to touch existing files to match local 2024-01-24 20:36:41 +00:00
ed
b32d65207b fix js-error on older chromes in incognito mode;
window.localStorage was null, so trying to read would fail

seen on falkon 23.08.4 with qtwebengine 5.15.12 (fedora39)

might as well be paranoid about the other failure modes too
(sudden exceptions on reads and/or writes)
2024-01-24 02:24:27 +00:00
ed
64cac003d8 add missing historic changelog entries 2024-01-24 01:28:29 +00:00
ed
6dbfcddcda don't print indexing progress to stdout if -q 2024-01-20 17:26:52 +00:00
ed
b4e0a34193 ensure windows-safe filenames during batch rename
also handle ctrl-click in the navpane float
2024-01-19 21:41:56 +00:00
ed
01c82b54a7 audio player: add shuffle 2024-01-18 22:59:47 +00:00
ed
4ef3106009 more old-browser support:
* polyfill Set() for gridview (ie9, ie10)
* navpane: do full-page nav if history api is ng (ie9)
* show markdown as plaintext if rendering fails (ie*)
* text-editor: hide preview pane if it doesn't work (ie*)
* explicitly hide toasts on close (ie9, ff10)
2024-01-18 22:56:39 +00:00
ed
aa3a971961 windows: safeguard against parallel deletes
st_ino is valid for NTFS on python3, good enough
2024-01-17 23:32:37 +00:00
ed
b9d0c8536b avoid sendfile bugs on 32bit machines:
https://github.com/python/cpython/issues/114077
2024-01-17 20:56:44 +00:00
ed
3313503ea5 retry deleting busy files on windows:
some clients (clonezilla-webdav) rapidly create and delete files;
this fails if copyparty is still hashing the file (usually the case)

and the same thing can probably happen due to antivirus etc

add global-option --rm-retry (volflag rm_retry) specifying
for how long (and how quickly) to keep retrying the deletion

default: retry for 5sec on windows, 0sec (disabled) on everything else
because this is only a problem on windows
2024-01-17 20:27:53 +00:00
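the retry loop behind `--rm-retry` could look roughly like this; a sketch with assumed names, not the actual implementation:

```python
import os
import time

def rm_retry(path, max_s=5.0, interval=0.1):
    # keep retrying while another thread/process (the hasher,
    # antivirus, ...) still holds the file open; on windows an open
    # handle makes unlink fail with a permission error
    t0 = time.time()
    while True:
        try:
            os.unlink(path)
            return True
        except PermissionError:
            if time.time() - t0 >= max_s:
                raise
            time.sleep(interval)
```

with `max_s=0` the first failure propagates immediately, matching the non-windows default.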
ed
d999d3a921 update pkgs to 1.9.29 2024-01-14 07:03:47 +00:00
ed
e7d00bae39 v1.9.29 2024-01-14 06:29:31 +00:00
ed
650e41c717 update deps:
* web: hashwasm 4.9 -> 4.10
* web: dompurify 3.0.5 -> 3.0.8
* web: codemirror 5.65.12 -> 5.65.16
* win10exe: pillow 10.1 -> 10.2
2024-01-14 05:57:28 +00:00
ed
140f6e0389 add contextlet + igloo irc config + upd changelog 2024-01-14 04:58:24 +00:00
ed
5e111ba5ee only show the unpost hint if unpost is available (-e2d) 2024-01-14 04:24:32 +00:00
ed
95a599961e add RAM usage tracking to thumbnailer;
prevents server OOM from high RAM usage by FFmpeg when generating
spectrograms and waveforms: https://trac.ffmpeg.org/ticket/10797
2024-01-14 04:15:09 +00:00
ed
a55e0d6eb8 add button to bust music player cache,
useful on phones when the server was OOM'ing and
butchering the responses (foreshadowing...)
2024-01-13 04:08:40 +00:00
ed
2fd2c6b948 ie11 fixes (2024? haha no way dude it's like 2004 right)
* fix crash on keyboard input in modals
* text editor works again (but without markdown preview)
* keyboard hotkeys for the few features that actually work
2024-01-13 02:31:50 +00:00
ed
7a936ea01e js: be careful with allocations in crash handler 2024-01-13 01:22:20 +00:00
ed
226c7c3045 fix confusing behavior when reindexing files:
when a file was reindexed (due to a change in size or last-modified
timestamp) the uploader-IP would get removed, but the upload timestamp
was ported over. This was intentional so there was probably a reason...

new behavior is to keep both uploader-IP and upload timestamp if the
file contents are unchanged (determined by comparing warks), and to
discard both uploader-IP and upload timestamp if that is not the case
2024-01-13 00:18:46 +00:00
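the new behavior reduces to comparing warks (content hashes) during reindexing; a minimal sketch with an invented helper name and field layout:

```python
def reindexed_meta(old, new_wark):
    # keep uploader-ip and upload timestamp only if the file contents
    # are unchanged; a size/mtime change alone does not prove the
    # file was replaced
    if old["wark"] == new_wark:
        return {"ip": old["ip"], "at": old["at"], "wark": new_wark}
    return {"ip": "", "at": 0, "wark": new_wark}
```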
ed
a4239a466b immediately perform search if a checkbox is toggled 2024-01-12 00:20:38 +01:00
ed
d0eb014c38 improve applefilters + add missing newline in curl 404
* webdav: extend applesan regex with more stuff to exclude
* on macos, set applesan as default `--no-idx` to avoid indexing them
   (they didn't show up in search since they're dotfiles, but still)
2024-01-12 00:13:35 +01:00
ed
e01ba8552a warn if a user doesn't have privileges anywhere
(since the account system isn't super-intuitive and at least
 one dude figured that -a would default to giving admin rights)
2024-01-11 00:24:34 +00:00
ed
024303592a improved logging when a client dies mid-POST;
igloo irc has an absolute time limit of 2 minutes before it just
disconnects mid-upload and that kinda looked like it had a buggy
multipart generator instead of just being funny

anticipating similar events in the future, also log the
client-selected boundary value to eyeball its yoloness
2024-01-10 23:59:43 +00:00
ed
86419b8f47 suboptimizations and some future safeguards 2024-01-10 23:20:42 +01:00
ed
f1358dbaba use scandir for volume smoketests during up2k init;
gives much faster startup on filesystems that are extremely slow
(TLNote: android sdcardfs)
2024-01-09 21:47:02 +01:00
ed
e8a653ca0c don't block non-up2k uploads during indexing;
due to all upload APIs invoking up2k.hash_file to index uploads,
the uploads could block during a rescan for a crazy long time
(past most gateway timeouts); now this is mostly fire-and-forget

"mostly" because this also adds a conditional slowdown to
help the hasher churn through if the queue gets too big

worst case, if the server is restarted before it catches up, this
would rely on filesystem reindexing to eventually index the files
after a restart or on a schedule, meaning uploader info would be
lost on shutdown, but this is usually fine anyways (and this was
also the case until now)
2024-01-08 22:10:16 +00:00
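the "mostly fire-and-forget" queueing with a conditional slowdown can be sketched as; `enqueue_upload` and the cap/delay values are assumptions for illustration:

```python
import queue
import time

def enqueue_upload(q, item, soft_cap=64, delay=0.05):
    # never block the upload API on indexing; but once the hasher's
    # backlog grows past a soft cap, sleep briefly so it can churn
    # through instead of the queue growing without bound
    if q.qsize() > soft_cap:
        time.sleep(delay)
    q.put_nowait(item)
```

if the server restarts before the queue drains, filesystem reindexing eventually picks the files up, at the cost of uploader info -- exactly the tradeoff the commit describes.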
ed
9bc09ce949 accept file POSTs without specifying the act field;
primarily to support uploading from Igloo IRC but also generally useful
(not actually tested with Igloo IRC yet because it's a paid feature
so just gonna wait for spiky to wake up and tell me it didn't work)
2024-01-08 19:09:53 +00:00
ed
dc8e621d7c increase OOM kill-score for FFmpeg and mtp's;
discourage Linux from killing innocent processes
when FFmpeg decides to allocate 1 TiB of RAM
2024-01-07 17:52:10 +00:00
ed
dee0950f74 misc;
* scripts: add log repacker
* bench/filehash: msys support + add more stats
2024-01-06 01:15:43 +00:00
ed
143f72fe36 bench/filehash: fix locale + add more stats 2024-01-03 02:41:18 +01:00
ed
a7889fb6a2 update pkgs to 1.9.28 2023-12-31 19:44:24 +00:00
61 changed files with 2670 additions and 411 deletions

.vscode/launch.json vendored

@@ -19,8 +19,7 @@
"-emp",
"-e2dsa",
"-e2ts",
"-mtp",
".bpm=f,bin/mtag/audio-bpm.py",
"-mtp=.bpm=f,bin/mtag/audio-bpm.py",
"-aed:wark",
"-vsrv::r:rw,ed:c,dupe",
"-vdist:dist:r"


@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2019 ed
Copyright (c) 2019 ed <oss@ocv.me>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal


@@ -3,7 +3,7 @@
turn almost any device into a file server with resumable uploads/downloads using [*any*](#browser-support) web browser
* server only needs Python (2 or 3), all dependencies optional
* 🔌 protocols: [http](#the-browser) // [ftp](#ftp-server) // [webdav](#webdav-server) // [smb/cifs](#smb-server)
* 🔌 protocols: [http](#the-browser) // [webdav](#webdav-server) // [ftp](#ftp-server) // [tftp](#tftp-server) // [smb/cifs](#smb-server)
* 📱 [android app](#android-app) // [iPhone shortcuts](#ios-shortcuts)
👉 **[Get started](#quickstart)!** or visit the **[read-only demo server](https://a.ocv.me/pub/demo/)** 👀 running from a basement in finland
@@ -53,6 +53,7 @@ turn almost any device into a file server with resumable uploads/downloads using
* [ftp server](#ftp-server) - an FTP server can be started using `--ftp 3921`
* [webdav server](#webdav-server) - with read-write support
* [connecting to webdav from windows](#connecting-to-webdav-from-windows) - using the GUI
* [tftp server](#tftp-server) - a TFTP server (read/write) can be started using `--tftp 3969`
* [smb server](#smb-server) - unsafe, slow, not recommended for wan
* [browser ux](#browser-ux) - tweaking the ui
* [file indexing](#file-indexing) - enables dedup and music search ++
@@ -157,11 +158,11 @@ you may also want these, especially on servers:
and remember to open the ports you want; here's a complete example including every feature copyparty has to offer:
```
firewall-cmd --permanent --add-port={80,443,3921,3923,3945,3990}/tcp # --zone=libvirt
firewall-cmd --permanent --add-port=12000-12099/tcp --permanent # --zone=libvirt
firewall-cmd --permanent --add-port={1900,5353}/udp # --zone=libvirt
firewall-cmd --permanent --add-port=12000-12099/tcp # --zone=libvirt
firewall-cmd --permanent --add-port={69,1900,3969,5353}/udp # --zone=libvirt
firewall-cmd --reload
```
(1900:ssdp, 3921:ftp, 3923:http/https, 3945:smb, 3990:ftps, 5353:mdns, 12000:passive-ftp)
(69:tftp, 1900:ssdp, 3921:ftp, 3923:http/https, 3945:smb, 3969:tftp, 3990:ftps, 5353:mdns, 12000:passive-ftp)
## features
@@ -172,6 +173,7 @@ firewall-cmd --reload
* ☑ volumes (mountpoints)
* ☑ [accounts](#accounts-and-volumes)
* ☑ [ftp server](#ftp-server)
* ☑ [tftp server](#tftp-server)
* ☑ [webdav server](#webdav-server)
* ☑ [smb/cifs server](#smb-server)
* ☑ [qr-code](#qr-code) for quick access
@@ -739,7 +741,8 @@ some hilights:
click the `play` link next to an audio file, or copy the link target to [share it](https://a.ocv.me/pub/demo/music/Ubiktune%20-%20SOUNDSHOCK%202%20-%20FM%20FUNK%20TERRROR!!/#af-1fbfba61&t=18) (optionally with a timestamp to start playing from, like that example does)
open the `[🎺]` media-player-settings tab to configure it,
* switches:
* "switches":
* `[🔀]` shuffles the files inside each folder
* `[preload]` starts loading the next track when it's about to end, reduces the silence between songs
* `[full]` does a full preload by downloading the entire next file; good for unreliable connections, bad for slow connections
* `[~s]` toggles the seekbar waveform display
@@ -749,10 +752,12 @@ open the `[🎺]` media-player-settings tab to configure it,
* `[art]` shows album art on the lockscreen
* `[🎯]` keeps the playing song scrolled into view (good when using the player as a taskbar dock)
* `[⟎]` shrinks the playback controls
* playback mode:
* "buttons":
* `[uncache]` may fix songs that won't play correctly due to bad files in browser cache
* "at end of folder":
* `[loop]` keeps looping the folder
* `[next]` plays into the next folder
* transcode:
* "transcode":
* `[flac]` converts `flac` and `wav` files into opus
* `[aac]` converts `aac` and `m4a` files into opus
* `[oth]` converts all other known formats into opus
@@ -940,6 +945,28 @@ known client bugs:
* latin-1 is fine, hiragana is not (not even as shift-jis on japanese xp)
## tftp server
a TFTP server (read/write) can be started using `--tftp 3969` (you probably want [ftp](#ftp-server) instead unless you are *actually* communicating with hardware from the 90s (in which case we should definitely hang some time))
> that makes this the first RTX DECT Base that has been updated using copyparty 🎉
* based on [partftpy](https://github.com/9001/partftpy)
* no accounts; read from world-readable folders, write to world-writable, overwrite in world-deletable
* needs a dedicated port (cannot share with the HTTP/HTTPS API)
* run as root to use the spec-recommended port `69` (nice)
* can reply from a predefined portrange (good for firewalls)
* only supports the binary/octet/image transfer mode (no netascii)
* [RFC 7440](https://datatracker.ietf.org/doc/html/rfc7440) is **not** supported, so will be extremely slow over WAN
* expect 1100 KiB/s over 1000BASE-T, 400-500 KiB/s over wifi, 200 on bad wifi
some recommended TFTP clients:
* windows: `tftp.exe` (you probably already have it)
* linux: `tftp-hpa`, `atftp`
* `tftp 127.0.0.1 3969 -v -m binary -c put firmware.bin`
* `curl tftp://127.0.0.1:3969/firmware.bin` (read-only)
## smb server
unsafe, slow, not recommended for wan, enable with `--smb` for read-only or `--smbw` for read-write
@@ -1028,6 +1055,8 @@ to save some time, you can provide a regex pattern for filepaths to only index
similarly, you can fully ignore files/folders using `--no-idx [...]` and `:c,noidx=\.iso$`
* when running on macos, all the usual apple metadata files are excluded by default
if you set `--no-hash [...]` globally, you can enable hashing for specific volumes using flag `:c,nohash=`
### filesystem guards
@@ -1535,8 +1564,8 @@ TLDR: yes
| navpane | - | yep | yep | yep | yep | yep | yep | yep |
| image viewer | - | yep | yep | yep | yep | yep | yep | yep |
| video player | - | yep | yep | yep | yep | yep | yep | yep |
| markdown editor | - | - | yep | yep | yep | yep | yep | yep |
| markdown viewer | - | yep | yep | yep | yep | yep | yep | yep |
| markdown editor | - | - | `*2` | `*2` | yep | yep | yep | yep |
| markdown viewer | - | `*2` | `*2` | `*2` | yep | yep | yep | yep |
| play mp3/m4a | - | yep | yep | yep | yep | yep | yep | yep |
| play ogg/opus | - | - | - | - | yep | yep | `*3` | yep |
| **= feature =** | ie6 | ie9 | ie10 | ie11 | ff 52 | c 49 | iOS | Andr |
@@ -1544,6 +1573,7 @@ TLDR: yes
* internet explorer 6 through 8 behave the same
* firefox 52 and chrome 49 are the final winxp versions
* `*1` yes, but extremely slow (ie10: `1 MiB/s`, ie11: `270 KiB/s`)
* `*2` only able to do plaintext documents (no markdown rendering)
* `*3` iOS 11 and newer, opus only, and requires FFmpeg on the server
quick summary of more eccentric web-browsers trying to view a directory index:
@@ -1569,10 +1599,12 @@ interact with copyparty using non-browser clients
* `var xhr = new XMLHttpRequest(); xhr.open('POST', '//127.0.0.1:3923/msgs?raw'); xhr.send('foo');`
* curl/wget: upload some files (post=file, chunk=stdin)
* `post(){ curl -F act=bput -F f=@"$1" http://127.0.0.1:3923/?pw=wark;}`
`post movie.mkv`
* `post(){ curl -F f=@"$1" http://127.0.0.1:3923/?pw=wark;}`
`post movie.mkv` (gives HTML in return)
* `post(){ curl -F f=@"$1" 'http://127.0.0.1:3923/?want=url&pw=wark';}`
`post movie.mkv` (gives hotlink in return)
* `post(){ curl -H pw:wark -H rand:8 -T "$1" http://127.0.0.1:3923/;}`
`post movie.mkv`
`post movie.mkv` (randomized filename)
* `post(){ wget --header='pw: wark' --post-file="$1" -O- http://127.0.0.1:3923/?raw;}`
`post movie.mkv`
* `chunk(){ curl -H pw:wark -T- http://127.0.0.1:3923/;}`
@@ -1594,6 +1626,10 @@ interact with copyparty using non-browser clients
* sharex (screenshot utility): see [./contrib/sharex.sxcu](contrib/#sharexsxcu)
* contextlet (web browser integration); see [contrib contextlet](contrib/#send-to-cppcontextletjson)
* [igloo irc](https://iglooirc.com/): Method: `post` Host: `https://you.com/up/?want=url&pw=hunter2` Multipart: `yes` File parameter: `f`
copyparty returns a truncated sha512sum of your PUT/POST as base64; you can generate the same checksum locally to verify uploads:
b512(){ printf "$((sha512sum||shasum -a512)|sed -E 's/ .*//;s/(..)/\\x\1/g')"|base64|tr '+/' '-_'|head -c44;}
@@ -1612,7 +1648,7 @@ the commandline uploader [u2c.py](https://github.com/9001/copyparty/tree/hovudst
alternatively there is [rclone](./docs/rclone.md) which allows for bidirectional sync and is *way* more flexible (stream files straight from sftp/s3/gcs to copyparty, ...), although there is no integrity check and it won't work with files over 100 MiB if copyparty is behind cloudflare
* starting from rclone v1.63 (currently [in beta](https://beta.rclone.org/?filter=latest)), rclone will also be faster than u2c.py
* starting from rclone v1.63, rclone is faster than u2c.py on low-latency connections
## mount as drive
@@ -1621,7 +1657,7 @@ a remote copyparty server as a local filesystem; go to the control-panel and cl
alternatively, some alternatives roughly sorted by speed (unreproducible benchmark), best first:
* [rclone-webdav](./docs/rclone.md) (25s), read/WRITE ([v1.63-beta](https://beta.rclone.org/?filter=latest))
* [rclone-webdav](./docs/rclone.md) (25s), read/WRITE (rclone v1.63 or later)
* [rclone-http](./docs/rclone.md) (26s), read-only
* [partyfuse.py](./bin/#partyfusepy) (35s), read-only
* [rclone-ftp](./docs/rclone.md) (47s), read/WRITE
@@ -1661,6 +1697,7 @@ below are some tweaks roughly ordered by usefulness:
* `-q` disables logging and can help a bunch, even when combined with `-lo` to redirect logs to file
* `--hist` pointing to a fast location (ssd) will make directory listings and searches faster when `-e2d` or `-e2t` is set
* and also makes thumbnails load faster, regardless of e2d/e2t
* `--no-hash .` when indexing a network-disk if you don't care about the actual filehashes and only want the names/tags searchable
* `--no-htp --hash-mt=0 --mtag-mt=1 --th-mt=1` minimizes the number of threads; can help in some eccentric environments (like the vscode debugger)
* `-j0` enables multiprocessing (actual multithreading), can reduce latency to `20+80/numCores` percent and generally improve performance in cpu-intensive workloads, for example:
@@ -1668,7 +1705,7 @@ below are some tweaks roughly ordered by usefulness:
* simultaneous downloads and uploads saturating a 20gbps connection
* if `-e2d` is enabled, `-j2` gives 4x performance for directory listings; `-j4` gives 16x
...however it adds an overhead to internal communication so it might be a net loss, see if it works 4 u
...however it also increases the server/filesystem/HDD load during uploads, and adds an overhead to internal communication, so it is usually a better idea not to
* using [pypy](https://www.pypy.org/) instead of [cpython](https://www.python.org/) *can* be 70% faster for some workloads, but slower for many others
* and pypy can sometimes crash on startup with `-j0` (TODO make issue)
@@ -1677,7 +1714,7 @@ below are some tweaks roughly ordered by usefulness:
when uploading files,
* chrome is recommended, at least compared to firefox:
* chrome is recommended (unfortunately), at least compared to firefox:
* up to 90% faster when hashing, especially on SSDs
* up to 40% faster when uploading over extremely fast internets
* but [u2c.py](https://github.com/9001/copyparty/blob/hovudstraum/bin/u2c.py) can be 40% faster than chrome again
@@ -1879,7 +1916,7 @@ can be convenient on machines where installing python is problematic, however is
meanwhile [copyparty-sfx.py](https://github.com/9001/copyparty/releases/latest/download/copyparty-sfx.py) instead relies on your system python which gives better performance and will stay safe as long as you keep your python install up-to-date
then again, if you are already into downloading shady binaries from the internet, you may also want my [minimal builds](./scripts/pyinstaller#ffmpeg) of [ffmpeg](https://ocv.me/stuff/bin/ffmpeg.exe) and [ffprobe](https://ocv.me/stuff/bin/ffprobe.exe) which enables copyparty to extract multimedia-info, do audio-transcoding, and thumbnails/spectrograms/waveforms, however it's much better to instead grab a [recent official build](https://github.com/BtbN/FFmpeg-Builds/releases/download/latest/ffmpeg-master-latest-win64-gpl.zip) every once in a while if you can afford the size
then again, if you are already into downloading shady binaries from the internet, you may also want my [minimal builds](./scripts/pyinstaller#ffmpeg) of [ffmpeg](https://ocv.me/stuff/bin/ffmpeg.exe) and [ffprobe](https://ocv.me/stuff/bin/ffprobe.exe) which enables copyparty to extract multimedia-info, do audio-transcoding, and thumbnails/spectrograms/waveforms, however it's much better to instead grab a [recent official build](https://www.gyan.dev/ffmpeg/builds/ffmpeg-git-full.7z) every once in a while if you can afford the size
# install on android


@@ -1,8 +1,8 @@
#!/usr/bin/env python3
from __future__ import print_function, unicode_literals
S_VERSION = "1.12"
S_BUILD_DT = "2023-12-08"
S_VERSION = "1.14"
S_BUILD_DT = "2024-01-27"
"""
u2c.py: upload to copyparty
@@ -560,8 +560,11 @@ def handshake(ar, file, search):
}
if search:
req["srch"] = 1
elif ar.dr:
req["replace"] = True
else:
if ar.touch:
req["umod"] = True
if ar.dr:
req["replace"] = True
headers = {"Content-Type": "text/plain"} # <=1.5.1 compat
if pw:
@@ -874,6 +877,8 @@ class Ctl(object):
self.st_hash = [file, ofs]
def hasher(self):
ptn = re.compile(self.ar.x.encode("utf-8"), re.I) if self.ar.x else None
sep = "{0}".format(os.sep).encode("ascii")
prd = None
ls = {}
for top, rel, inf in self.filegen:
@@ -906,7 +911,12 @@ class Ctl(object):
if self.ar.drd:
dp = os.path.join(top, rd)
lnodes = set(os.listdir(dp))
bnames = [x for x in ls if x not in lnodes]
if ptn:
zs = dp.replace(sep, b"/").rstrip(b"/") + b"/"
zls = [zs + x for x in lnodes]
zls = [x for x in zls if not ptn.match(x)]
lnodes = [x.split(b"/")[-1] for x in zls]
bnames = [x for x in ls if x not in lnodes and x != b".hist"]
vpath = self.ar.url.split("://")[-1].split("/", 1)[-1]
names = [x.decode("utf-8", "replace") for x in bnames]
locs = [vpath + srd + "/" + x for x in names]
@@ -1057,14 +1067,13 @@ class Ctl(object):
self.uploader_busy += 1
self.t0_up = self.t0_up or time.time()
zs = "{0}/{1}/{2}/{3} {4}/{5} {6}"
stats = zs.format(
stats = "%d/%d/%d/%d %d/%d %s" % (
self.up_f,
len(self.recheck),
self.uploader_busy,
self.nfiles - self.up_f,
int(self.nbytes / (1024 * 1024)),
int((self.nbytes - self.up_b) / (1024 * 1024)),
self.nbytes // (1024 * 1024),
(self.nbytes - self.up_b) // (1024 * 1024),
self.eta,
)
@@ -1130,6 +1139,7 @@ source file/folder selection uses rsync syntax, meaning that:
ap.add_argument("-s", action="store_true", help="file-search (disables upload)")
ap.add_argument("-x", type=unicode, metavar="REGEX", default="", help="skip file if filesystem-abspath matches REGEX, example: '.*/\\.hist/.*'")
ap.add_argument("--ok", action="store_true", help="continue even if some local files are inaccessible")
ap.add_argument("--touch", action="store_true", help="if last-modified timestamps differ, push local to server (need write+delete perms)")
ap.add_argument("--version", action="store_true", help="show version and exit")
ap = app.add_argument_group("compatibility")


@@ -22,6 +22,11 @@ however if your copyparty is behind a reverse-proxy, you may want to use [`share
* `URL`: full URL to the root folder (with trailing slash) followed by `$regex:1|1$`
* `pw`: password (remove `Parameters` if anon-write)
### [`send-to-cpp.contextlet.json`](send-to-cpp.contextlet.json)
* browser integration, kind of? custom rightclick actions and stuff
* rightclick a pic and send it to copyparty straight from your browser
* for the [contextlet](https://addons.mozilla.org/en-US/firefox/addon/contextlets/) firefox extension
### [`media-osd-bgone.ps1`](media-osd-bgone.ps1)
* disables the [windows OSD popup](https://user-images.githubusercontent.com/241032/122821375-0e08df80-d2dd-11eb-9fd9-184e8aacf1d0.png) (the thing on the left) which appears every time you hit media hotkeys to adjust volume or change song while playing music with the copyparty web-ui, or most other audio players really


@@ -1,8 +1,8 @@
# Maintainer: icxes <dev.null@need.moe>
pkgname=copyparty
pkgver="1.9.27"
pkgver="1.9.31"
pkgrel=1
pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, zeroconf, media indexer, thumbnails++"
pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
arch=("any")
url="https://github.com/9001/${pkgname}"
license=('MIT')
@@ -21,7 +21,7 @@ optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tag
)
source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
backup=("etc/${pkgname}.d/init" )
sha256sums=("0bcf9362bc1bd9c85c228312c8341fcb9a37e383af6d8bee123e3a84e66394be")
sha256sums=("a8ec1faf8cb224515355226882fdb2d1ab1de42d96ff78e148b930318867a71e")
build() {
cd "${srcdir}/${pkgname}-${pkgver}"


@@ -1,5 +1,5 @@
{
"url": "https://github.com/9001/copyparty/releases/download/v1.9.27/copyparty-sfx.py",
"version": "1.9.27",
"hash": "sha256-o06o/gUIkQhDw9a9SLjuAyQUoLMfFCqkIeonUFzezds="
"url": "https://github.com/9001/copyparty/releases/download/v1.9.31/copyparty-sfx.py",
"version": "1.9.31",
"hash": "sha256-yp7qoiW5yzm2M7qVmYY7R+SyhZXlqL+JxsXV22aS+MM="
}


@@ -0,0 +1,11 @@
{
"code": "// https://addons.mozilla.org/en-US/firefox/addon/contextlets/\n// https://github.com/davidmhammond/contextlets\n\nvar url = 'http://partybox.local:3923/';\nvar pw = 'wark';\n\nvar xhr = new XMLHttpRequest();\nxhr.msg = this.info.linkUrl || this.info.srcUrl;\nxhr.open('POST', url, true);\nxhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded;charset=UTF-8');\nxhr.setRequestHeader('PW', pw);\nxhr.send('msg=' + xhr.msg);\n",
"contexts": [
"link"
],
"icons": null,
"patterns": "",
"scope": "background",
"title": "send to cpp",
"type": "normal"
}
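The `code` field above is a minified Firefox contextlets snippet that POSTs the right-clicked link to copyparty as a `msg=` form field, with the account password in a `PW` request header. A rough Python equivalent, assuming the same placeholder server URL and password from the config (the function name is ours):

```python
import urllib.parse
import urllib.request

def build_msg_request(msg, url="http://partybox.local:3923/", pw="wark"):
    """build the same POST the contextlets snippet sends:
    urlencoded msg= body, password in the PW header"""
    data = urllib.parse.urlencode({"msg": msg}).encode("utf-8")
    return urllib.request.Request(url, data=data, headers={
        "Content-Type": "application/x-www-form-urlencoded;charset=UTF-8",
        "PW": pw,
    })

# urllib.request.urlopen(build_msg_request("https://example.com/x")) would send it
```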

View File

@@ -20,17 +20,31 @@ import time
import traceback
import uuid
from .__init__ import ANYWIN, CORES, EXE, PY2, VT100, WINDOWS, E, EnvParams, unicode
from .__init__ import (
ANYWIN,
CORES,
EXE,
MACOS,
PY2,
VT100,
WINDOWS,
E,
EnvParams,
unicode,
)
from .__version__ import CODENAME, S_BUILD_DT, S_VERSION
from .authsrv import expand_config_file, split_cfg_ln, upgrade_cfg_fmt
from .cfg import flagcats, onedash
from .svchub import SvcHub
from .util import (
APPLESAN_TXT,
DEF_EXP,
DEF_MTE,
DEF_MTH,
IMPLICATIONS,
JINJA_VER,
PARTFTPY_VER,
PY_DESC,
PYFTPD_VER,
SQLITE_VER,
UNPLICATIONS,
@@ -38,7 +52,6 @@ from .util import (
ansi_re,
dedent,
min_ex,
py_desc,
pybin,
termsize,
wrap,
@@ -826,7 +839,7 @@ def add_general(ap, nc, srvname):
ap2.add_argument("-nc", metavar="NUM", type=int, default=nc, help="max num clients")
ap2.add_argument("-j", metavar="CORES", type=int, default=1, help="max num cpu cores, 0=all")
ap2.add_argument("-a", metavar="ACCT", type=u, action="append", help="add account, \033[33mUSER\033[0m:\033[33mPASS\033[0m; example [\033[32med:wark\033[0m]")
ap2.add_argument("-v", metavar="VOL", type=u, action="append", help="add volume, \033[33mSRC\033[0m:\033[33mDST\033[0m:\033[33mFLAG\033[0m; examples [\033[32m.::r\033[0m], [\033[32m/mnt/nas/music:/music:r:aed\033[0m]")
ap2.add_argument("-v", metavar="VOL", type=u, action="append", help="add volume, \033[33mSRC\033[0m:\033[33mDST\033[0m:\033[33mFLAG\033[0m; examples [\033[32m.::r\033[0m], [\033[32m/mnt/nas/music:/music:r:aed\033[0m], see --help-accounts")
ap2.add_argument("-ed", action="store_true", help="enable the ?dots url parameter / client option which allows clients to see dotfiles / hidden files (volflag=dots)")
ap2.add_argument("--urlform", metavar="MODE", type=u, default="print,get", help="how to handle url-form POSTs; see \033[33m--help-urlform\033[0m")
ap2.add_argument("--wintitle", metavar="TXT", type=u, default="cpp @ $pub", help="server terminal title, for example [\033[32m$ip-10.1.2.\033[0m] or [\033[32m$ip-\033[0m]")
@@ -847,6 +860,12 @@ def add_qr(ap, tty):
ap2.add_argument("--qrz", metavar="N", type=int, default=0, help="[\033[32m1\033[0m]=1x, [\033[32m2\033[0m]=2x, [\033[32m0\033[0m]=auto (try [\033[32m2\033[0m] on broken fonts)")
def add_fs(ap):
ap2 = ap.add_argument_group("filesystem options")
rm_re_def = "5/0.1" if ANYWIN else "0/0"
ap2.add_argument("--rm-retry", metavar="T/R", type=u, default=rm_re_def, help="if a file cannot be deleted because it is busy, continue trying for \033[33mT\033[0m seconds, retry every \033[33mR\033[0m seconds; disable with 0/0 (volflag=rm_retry)")
def add_upload(ap):
ap2 = ap.add_argument_group('upload options')
ap2.add_argument("--dotpart", action="store_true", help="dotfile incomplete uploads, hiding them from clients unless \033[33m-ed\033[0m")
@@ -870,7 +889,7 @@ def add_upload(ap):
ap2.add_argument("--df", metavar="GiB", type=float, default=0, help="ensure \033[33mGiB\033[0m free disk space by rejecting upload requests")
ap2.add_argument("--sparse", metavar="MiB", type=int, default=4, help="windows-only: minimum size of incoming uploads through up2k before they are made into sparse files")
ap2.add_argument("--turbo", metavar="LVL", type=int, default=0, help="configure turbo-mode in up2k client; [\033[32m-1\033[0m] = forbidden/always-off, [\033[32m0\033[0m] = default-off and warn if enabled, [\033[32m1\033[0m] = default-off, [\033[32m2\033[0m] = on, [\033[32m3\033[0m] = on and disable datecheck")
ap2.add_argument("--u2j", metavar="JOBS", type=int, default=2, help="web-client: number of file chunks to upload in parallel; 1 or 2 is good for low-latency (same-country) connections, 4-8 for android clients, 16-32 for cross-atlantic (max=64)")
ap2.add_argument("--u2j", metavar="JOBS", type=int, default=2, help="web-client: number of file chunks to upload in parallel; 1 or 2 is good for low-latency (same-country) connections, 4-8 for android clients, 16 for cross-atlantic (max=64)")
ap2.add_argument("--u2sort", metavar="TXT", type=u, default="s", help="upload order; [\033[32ms\033[0m]=smallest-first, [\033[32mn\033[0m]=alphabetical, [\033[32mfs\033[0m]=force-s, [\033[32mfn\033[0m]=force-n -- alphabetical is a bit slower on fiber/LAN but makes it easier to eyeball if everything went fine")
ap2.add_argument("--write-uplog", action="store_true", help="write POST reports to textfiles in working-directory")
@@ -975,7 +994,7 @@ def add_zc_ssdp(ap):
def add_ftp(ap):
ap2 = ap.add_argument_group('FTP options')
ap2 = ap.add_argument_group('FTP options (TCP only)')
ap2.add_argument("--ftp", metavar="PORT", type=int, help="enable FTP server on \033[33mPORT\033[0m, for example \033[32m3921")
ap2.add_argument("--ftps", metavar="PORT", type=int, help="enable FTPS server on \033[33mPORT\033[0m, for example \033[32m3990")
ap2.add_argument("--ftpv", action="store_true", help="verbose")
@@ -995,6 +1014,17 @@ def add_webdav(ap):
ap2.add_argument("--dav-auth", action="store_true", help="force auth for all folders (required by davfs2 when only some folders are world-readable) (volflag=davauth)")
def add_tftp(ap):
ap2 = ap.add_argument_group('TFTP options (UDP only)')
ap2.add_argument("--tftp", metavar="PORT", type=int, help="enable TFTP server on \033[33mPORT\033[0m, for example \033[32m69 \033[0mor \033[32m3969")
ap2.add_argument("--tftpv", action="store_true", help="verbose")
ap2.add_argument("--tftpvv", action="store_true", help="verboser")
ap2.add_argument("--tftp-lsf", metavar="PTN", type=u, default="\\.?(dir|ls)(\\.txt)?", help="return a directory listing if a file with this name is requested and it does not exist; the default matches .ls, dir, .dir.txt, ls.txt, ...")
ap2.add_argument("--tftp-nols", action="store_true", help="if someone tries to download a directory, return an error instead of showing its directory listing")
ap2.add_argument("--tftp-ipa", metavar="PFX", type=u, default="", help="only accept connections from IP-addresses starting with \033[33mPFX\033[0m; specify [\033[32many\033[0m] to disable inheriting \033[33m--ipa\033[0m. Example: [\033[32m127., 10.89., 192.168.\033[0m]")
ap2.add_argument("--tftp-pr", metavar="P-P", type=u, help="the range of UDP ports to use for data transfer, for example \033[32m12000-13000")
def add_smb(ap):
ap2 = ap.add_argument_group('SMB/CIFS options')
ap2.add_argument("--smb", action="store_true", help="enable smb (read-only) -- this requires running copyparty as root on linux and macos unless \033[33m--smb-port\033[0m is set above 1024 and your OS does port-forwarding from 445 to that.\n\033[1;31mWARNING:\033[0m this protocol is DANGEROUS and buggy! Never expose to the internet!")
@@ -1138,6 +1168,7 @@ def add_thumbnail(ap):
ap2.add_argument("--th-size", metavar="WxH", default="320x256", help="thumbnail res (volflag=thsize)")
ap2.add_argument("--th-mt", metavar="CORES", type=int, default=CORES, help="num cpu cores to use for generating thumbnails")
ap2.add_argument("--th-convt", metavar="SEC", type=float, default=60, help="conversion timeout in seconds (volflag=convt)")
ap2.add_argument("--th-ram-max", metavar="GB", type=float, default=6, help="max memory usage (GiB) permitted by thumbnailer; not very accurate")
ap2.add_argument("--th-no-crop", action="store_true", help="dynamic height; show full image by default (client can override in UI) (volflag=nocrop)")
ap2.add_argument("--th-dec", metavar="LIBS", default="vips,pil,ff", help="image decoders, in order of preference")
ap2.add_argument("--th-no-jpg", action="store_true", help="disable jpg output")
@@ -1166,6 +1197,7 @@ def add_transcoding(ap):
def add_db_general(ap, hcores):
noidx = APPLESAN_TXT if MACOS else ""
ap2 = ap.add_argument_group('general db options')
ap2.add_argument("-e2d", action="store_true", help="enable up2k database, making files searchable + enables upload deduplication")
ap2.add_argument("-e2ds", action="store_true", help="scan writable folders for new files on startup; sets \033[33m-e2d\033[0m")
@@ -1175,7 +1207,7 @@ def add_db_general(ap, hcores):
ap2.add_argument("-e2vp", action="store_true", help="on hash mismatch: panic and quit copyparty")
ap2.add_argument("--hist", metavar="PATH", type=u, help="where to store volume data (db, thumbs) (volflag=hist)")
ap2.add_argument("--no-hash", metavar="PTN", type=u, help="regex: disable hashing of matching absolute-filesystem-paths during e2ds folder scans (volflag=nohash)")
ap2.add_argument("--no-idx", metavar="PTN", type=u, help="regex: disable indexing of matching absolute-filesystem-paths during e2ds folder scans (volflag=noidx)")
ap2.add_argument("--no-idx", metavar="PTN", type=u, default=noidx, help="regex: disable indexing of matching absolute-filesystem-paths during e2ds folder scans (volflag=noidx)")
ap2.add_argument("--no-dhash", action="store_true", help="disable rescan acceleration; do full database integrity check -- makes the db ~5%% smaller and bootup/rescans 3~10x slower")
ap2.add_argument("--re-dhash", action="store_true", help="rebuild the cache if it gets out of sync (for example crash on startup during metadata scanning)")
ap2.add_argument("--no-forget", action="store_true", help="never forget indexed files, even when deleted from disk -- makes it impossible to ever upload the same file twice -- only useful for offloading uploads to a cloud service or something (volflag=noforget)")
@@ -1294,6 +1326,7 @@ def run_argparse(
add_zeroconf(ap)
add_zc_mdns(ap)
add_zc_ssdp(ap)
add_fs(ap)
add_upload(ap)
add_db_general(ap, hcores)
add_db_metadata(ap)
@@ -1301,6 +1334,7 @@ def run_argparse(
add_transcoding(ap)
add_ftp(ap)
add_webdav(ap)
add_tftp(ap)
add_smb(ap)
add_safety(ap)
add_salt(ap, fk_salt, ah_salt)
@@ -1354,15 +1388,16 @@ def main(argv: Optional[list[str]] = None) -> None:
if argv is None:
argv = sys.argv
f = '\033[36mcopyparty v{} "\033[35m{}\033[36m" ({})\n{}\033[0;36m\n sqlite v{} | jinja2 v{} | pyftpd v{}\n\033[0m'
f = '\033[36mcopyparty v{} "\033[35m{}\033[36m" ({})\n{}\033[0;36m\n sqlite {} | jinja {} | pyftpd {} | tftp {}\n\033[0m'
f = f.format(
S_VERSION,
CODENAME,
S_BUILD_DT,
py_desc().replace("[", "\033[90m["),
PY_DESC.replace("[", "\033[90m["),
SQLITE_VER,
JINJA_VER,
PYFTPD_VER,
PARTFTPY_VER,
)
lprint(f)
@@ -1432,7 +1467,7 @@ def main(argv: Optional[list[str]] = None) -> None:
_, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
if hard > 0: # -1 == infinite
nc = min(nc, hard // 4)
nc = min(nc, int(hard / 4))
except:
nc = 512
@@ -1524,6 +1559,9 @@ def main(argv: Optional[list[str]] = None) -> None:
if sys.version_info < (3, 6):
al.no_scandir = True
if not hasattr(os, "sendfile"):
al.no_sendfile = True
# signal.signal(signal.SIGINT, sighandler)
SvcHub(al, dal, argv, "".join(printed)).run()

View File

@@ -1,8 +1,8 @@
# coding: utf-8
VERSION = (1, 9, 28)
CODENAME = "prometheable"
BUILD_DT = (2023, 12, 31)
VERSION = (1, 10, 0)
CODENAME = "tftp"
BUILD_DT = (2024, 2, 15)
S_VERSION = ".".join(map(str, VERSION))
S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)

View File

@@ -381,7 +381,7 @@ class VFS(object):
def add(self, src: str, dst: str) -> "VFS":
"""get existing, or add new path to the vfs"""
assert not src.endswith("/") # nosec
assert src == "/" or not src.endswith("/") # nosec
assert not dst.endswith("/") # nosec
if "/" in dst:
@@ -779,7 +779,6 @@ class AuthSrv(object):
self.warn_anonwrite = warn_anonwrite
self.line_ctr = 0
self.indent = ""
self.desc = []
self.mutex = threading.Lock()
self.reload()
@@ -862,7 +861,6 @@ class AuthSrv(object):
mflags: dict[str, dict[str, Any]],
mount: dict[str, str],
) -> None:
self.desc = []
self.line_ctr = 0
expand_config_file(cfg_lines, fp, "")
@@ -1009,6 +1007,7 @@ class AuthSrv(object):
raise Exception("invalid config value (volume or volflag): %s" % (t,))
if lvl == "c":
# here, 'uname' is not a username; it is a volflag name... sorry
cval: Union[bool, str] = True
try:
# volflag with arguments, possibly with a preceding list of bools
@@ -1235,6 +1234,7 @@ class AuthSrv(object):
all_users = {}
missing_users = {}
associated_users = {}
for axs in daxs.values():
for d in [
axs.uread,
@@ -1251,6 +1251,8 @@ class AuthSrv(object):
all_users[usr] = 1
if usr != "*" and usr not in acct:
missing_users[usr] = 1
if "*" not in d:
associated_users[usr] = 1
if missing_users:
self.log(
@@ -1271,6 +1273,16 @@ class AuthSrv(object):
raise Exception(BAD_CFG)
seenpwds[pwd] = usr
for usr in acct:
if usr not in associated_users:
if len(vfs.all_vols) > 1:
# the user is probably familiar enough that the verbose message is not necessary
t = "account [%s] is not mentioned in any volume definitions; see --help-accounts"
self.log(t % (usr,), 1)
else:
t = "WARNING: the account [%s] is not mentioned in any volume definitions and thus has the same access-level and privileges that guests have; please see --help-accounts for details. For example, if you intended to give that user full access to the current directory, you could do this: -v .::A,%s"
self.log(t % (usr, usr), 1)
promote = []
demote = []
for vol in vfs.all_vols.values():
@@ -1481,6 +1493,14 @@ class AuthSrv(object):
if k in vol.flags:
vol.flags[k] = float(vol.flags[k])
try:
zs1, zs2 = vol.flags["rm_retry"].split("/")
vol.flags["rm_re_t"] = float(zs1)
vol.flags["rm_re_r"] = float(zs2)
except:
t = 'volume "/%s" has invalid rm_retry [%s]'
raise Exception(t % (vol.vpath, vol.flags.get("rm_retry")))
for k1, k2 in IMPLICATIONS:
if k1 in vol.flags:
vol.flags[k2] = True
@@ -1492,8 +1512,8 @@ class AuthSrv(object):
dbds = "acid|swal|wal|yolo"
vol.flags["dbd"] = dbd = vol.flags.get("dbd") or self.args.dbd
if dbd not in dbds.split("|"):
t = "invalid dbd [{}]; must be one of [{}]"
raise Exception(t.format(dbd, dbds))
t = 'volume "/%s" has invalid dbd [%s]; must be one of [%s]'
raise Exception(t % (vol.vpath, dbd, dbds))
# default tag cfgs if unset
for k in ("mte", "mth", "exp_md", "exp_lg"):
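The new `rm_retry` handling above (and the matching `--rm-retry` parse in svchub further down) splits the value on a slash into (total seconds, retry interval). A standalone sketch of that parse, mirroring the error behavior; the function name is ours:

```python
def parse_rm_retry(spec):
    """parse a 'T/R' retry spec, e.g. '5/0.1' = keep trying for
    5 seconds, retrying every 0.1 sec; '0/0' disables retrying"""
    try:
        zs1, zs2 = spec.split("/")
        return float(zs1), float(zs2)
    except:
        raise ValueError("invalid rm_retry [%s]" % (spec,))
```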

View File

@@ -76,7 +76,7 @@ class MpWorker(BrokerCli):
pass
def logw(self, msg: str, c: Union[int, str] = 0) -> None:
self.log("mp{}".format(self.n), msg, c)
self.log("mp%d" % (self.n,), msg, c)
def main(self) -> None:
while True:

View File

@@ -62,6 +62,7 @@ def vf_vmap() -> dict[str, str]:
"lg_sbf",
"md_sbf",
"nrand",
"rm_retry",
"sort",
"unlist",
"u2ts",
@@ -208,6 +209,7 @@ flagcats = {
"dots": "allow all users with read-access to\nenable the option to show dotfiles in listings",
"fk=8": 'generates per-file accesskeys,\nwhich are then required at the "g" permission;\nkeys are invalidated if filesize or inode changes',
"fka=8": 'generates slightly weaker per-file accesskeys,\nwhich are then required at the "g" permission;\nnot affected by filesize or inode numbers',
"rm_retry": "ms-windows: timeout for deleting busy files",
"davauth": "ask webdav clients to login for all folders",
"davrt": "show lastmod time of symlink destination, not the link itself\n(note: this option is always enabled for recursive listings)",
},

View File

@@ -37,6 +37,8 @@ from .star import StreamTar
from .sutil import StreamArc, gfilter
from .szip import StreamZip
from .util import (
APPLESAN_RE,
BITNESS,
HTTPCODE,
META_NOBOTS,
UTC,
@@ -45,6 +47,7 @@ from .util import (
ODict,
Pebkac,
UnrecvEOF,
WrongPostKey,
absreal,
alltrace,
atomic_move,
@@ -86,6 +89,7 @@ from .util import (
vjoin,
vol_san,
vsplit,
wunlink,
yieldfile,
)
@@ -111,7 +115,7 @@ class HttpCli(object):
self.t0 = time.time()
self.conn = conn
self.mutex = conn.mutex # mypy404
self.u2mutex = conn.u2mutex # mypy404
self.s = conn.s
self.sr = conn.sr
self.ip = conn.addr[0]
@@ -189,7 +193,7 @@ class HttpCli(object):
def unpwd(self, m: Match[str]) -> str:
a, b, c = m.groups()
return "{}\033[7m {} \033[27m{}".format(a, self.asrv.iacct[b], c)
return "%s\033[7m %s \033[27m%s" % (a, self.asrv.iacct[b], c)
def _check_nonfatal(self, ex: Pebkac, post: bool) -> bool:
if post:
@@ -556,16 +560,16 @@ class HttpCli(object):
self.keepalive = False
em = str(ex)
msg = em if pex == ex else min_ex()
msg = em if pex is ex else min_ex()
if pex.code != 404 or self.do_log:
self.log(
"{}\033[0m, {}".format(msg, self.vpath),
"%s\033[0m, %s" % (msg, self.vpath),
6 if em.startswith("client d/c ") else 3,
)
msg = "{}\r\nURL: {}\r\n".format(em, self.vpath)
msg = "%s\r\nURL: %s\r\n" % (em, self.vpath)
if self.hint:
msg += "hint: {}\r\n".format(self.hint)
msg += "hint: %s\r\n" % (self.hint,)
if "database is locked" in em:
self.conn.hsrv.broker.say("log_stacks")
@@ -808,7 +812,7 @@ class HttpCli(object):
if k in skip:
continue
t = "{}={}".format(quotep(k), quotep(v))
t = "%s=%s" % (quotep(k), quotep(v))
ret.append(t.replace(" ", "+").rstrip("="))
if not ret:
@@ -856,7 +860,8 @@ class HttpCli(object):
oh = self.out_headers
origin = origin.lower()
good_origins = self.args.acao + [
"{}://{}".format(
"%s://%s"
% (
"https" if self.is_https else "http",
self.host.lower().split(":")[0],
)
@@ -1053,7 +1058,7 @@ class HttpCli(object):
self.can_read = self.can_write = self.can_get = False
if not self.can_read and not self.can_write and not self.can_get:
self.log("inaccessible: [{}]".format(self.vpath))
self.log("inaccessible: [%s]" % (self.vpath,))
raise Pebkac(401, "authenticate")
from .dxml import parse_xml
@@ -1382,8 +1387,7 @@ class HttpCli(object):
return False
vp = "/" + self.vpath
ptn = r"/\.(_|DS_Store|Spotlight-|fseventsd|Trashes|AppleDouble)|/__MACOS"
if re.search(ptn, vp):
if re.search(APPLESAN_RE, vp):
zt = '<?xml version="1.0" encoding="utf-8"?>\n<D:error xmlns:D="DAV:"><D:lock-token-submitted><D:href>{}</D:href></D:lock-token-submitted></D:error>'
zb = zt.format(vp).encode("utf-8", "replace")
self.reply(zb, 423, "text/xml; charset=utf-8")
@@ -1403,7 +1407,7 @@ class HttpCli(object):
if txt and len(txt) == orig_len:
raise Pebkac(500, "chunk slicing failed")
buf = "{:x}\r\n".format(len(buf)).encode(enc) + buf
buf = ("%x\r\n" % (len(buf),)).encode(enc) + buf
self.s.sendall(buf + b"\r\n")
return txt
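The rewritten line above builds the chunk prefix as the payload length in hexadecimal followed by CRLF, which is the HTTP/1.1 chunked transfer-coding wire format; a minimal sketch that also folds in the trailing CRLF the caller appends via `sendall` (helper name is ours):

```python
def frame_chunk(buf):
    """wrap a bytes payload in HTTP/1.1 chunked transfer-coding framing:
    hex length, CRLF, payload, CRLF (a zero-length chunk terminates the body)"""
    return ("%x\r\n" % (len(buf),)).encode("ascii") + buf + b"\r\n"
```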
@@ -1689,7 +1693,7 @@ class HttpCli(object):
and bos.path.getmtime(path) >= time.time() - self.args.blank_wt
):
# small toctou, but better than clobbering a hardlink
bos.unlink(path)
wunlink(self.log, path, vfs.flags)
with ren_open(fn, *open_a, **params) as zfw:
f, fn = zfw["orz"]
@@ -1703,7 +1707,7 @@ class HttpCli(object):
lim.chk_sz(post_sz)
lim.chk_vsz(self.conn.hsrv.broker, vfs.realpath, post_sz)
except:
bos.unlink(path)
wunlink(self.log, path, vfs.flags)
raise
if self.args.nw:
@@ -1756,7 +1760,7 @@ class HttpCli(object):
):
t = "upload blocked by xau server config"
self.log(t, 1)
os.unlink(path)
wunlink(self.log, path, vfs.flags)
raise Pebkac(403, t)
vfs, rem = vfs.get_dbv(rem)
@@ -1862,7 +1866,16 @@ class HttpCli(object):
self.parser = MultipartParser(self.log, self.sr, self.headers)
self.parser.parse()
act = self.parser.require("act", 64)
file0: list[tuple[str, Optional[str], Generator[bytes, None, None]]] = []
try:
act = self.parser.require("act", 64)
except WrongPostKey as ex:
if ex.got == "f" and ex.fname:
self.log("missing 'act', but looks like an upload so assuming that")
file0 = [(ex.got, ex.fname, ex.datagen)]
act = "bput"
else:
raise
if act == "login":
return self.handle_login()
@@ -1875,7 +1888,7 @@ class HttpCli(object):
return self.handle_new_md()
if act == "bput":
return self.handle_plain_upload()
return self.handle_plain_upload(file0)
if act == "tput":
return self.handle_text_upload()
@@ -1975,8 +1988,11 @@ class HttpCli(object):
except:
raise Pebkac(500, min_ex())
x = self.conn.hsrv.broker.ask("up2k.handle_json", body, self.u2fh.aps)
ret = x.get()
# not to protect u2fh, but to prevent handshakes while files are closing
with self.u2mutex:
x = self.conn.hsrv.broker.ask("up2k.handle_json", body, self.u2fh.aps)
ret = x.get()
if self.is_vproxied:
if "purl" in ret:
ret["purl"] = self.args.SR + ret["purl"]
@@ -2081,7 +2097,7 @@ class HttpCli(object):
f = None
fpool = not self.args.no_fpool and sprs
if fpool:
with self.mutex:
with self.u2mutex:
try:
f = self.u2fh.pop(path)
except:
@@ -2124,7 +2140,7 @@ class HttpCli(object):
if not fpool:
f.close()
else:
with self.mutex:
with self.u2mutex:
self.u2fh.put(path, f)
except:
# maybe busted handle (eg. disk went full)
@@ -2143,7 +2159,7 @@ class HttpCli(object):
return False
if not num_left and fpool:
with self.mutex:
with self.u2mutex:
self.u2fh.close(path)
if not num_left and not self.args.nw:
@@ -2314,7 +2330,9 @@ class HttpCli(object):
vfs.flags.get("xau") or [],
)
def handle_plain_upload(self) -> bool:
def handle_plain_upload(
self, file0: list[tuple[str, Optional[str], Generator[bytes, None, None]]]
) -> bool:
assert self.parser
nullwrite = self.args.nw
vfs, rem = self.asrv.vfs.get(self.vpath, self.uname, False, True)
@@ -2336,11 +2354,13 @@ class HttpCli(object):
files: list[tuple[int, str, str, str, str, str]] = []
# sz, sha_hex, sha_b64, p_file, fname, abspath
errmsg = ""
tabspath = ""
dip = self.dip()
t0 = time.time()
try:
assert self.parser.gen
for nfile, (p_field, p_file, p_data) in enumerate(self.parser.gen):
gens = itertools.chain(file0, self.parser.gen)
for nfile, (p_field, p_file, p_data) in enumerate(gens):
if not p_file:
self.log("discarding incoming file without filename")
# fallthrough
@@ -2424,14 +2444,16 @@ class HttpCli(object):
lim.chk_nup(self.ip)
except:
if not nullwrite:
bos.unlink(tabspath)
bos.unlink(abspath)
wunlink(self.log, tabspath, vfs.flags)
wunlink(self.log, abspath, vfs.flags)
fname = os.devnull
raise
if not nullwrite:
atomic_move(tabspath, abspath)
tabspath = ""
files.append(
(sz, sha_hex, sha_b64, p_file or "(discarded)", fname, abspath)
)
@@ -2451,7 +2473,7 @@ class HttpCli(object):
):
t = "upload blocked by xau server config"
self.log(t, 1)
os.unlink(abspath)
wunlink(self.log, abspath, vfs.flags)
raise Pebkac(403, t)
dbv, vrem = vfs.get_dbv(rem)
@@ -2477,6 +2499,12 @@ class HttpCli(object):
errmsg = vol_san(
list(self.asrv.vfs.all_vols.values()), unicode(ex).encode("utf-8")
).decode("utf-8")
try:
got = bos.path.getsize(tabspath)
t = "connection lost after receiving %s of the file"
self.log(t % (humansize(got),), 3)
except:
pass
td = max(0.1, time.time() - t0)
sz_total = sum(x[0] for x in files)
@@ -2689,7 +2717,7 @@ class HttpCli(object):
raise Pebkac(403, t)
if bos.path.exists(fp):
bos.unlink(fp)
wunlink(self.log, fp, vfs.flags)
with open(fsenc(fp), "wb", 512 * 1024) as f:
sz, sha512, _ = hashcopy(p_data, f, self.args.s_wr_slp)
@@ -2701,7 +2729,7 @@ class HttpCli(object):
lim.chk_sz(sz)
lim.chk_vsz(self.conn.hsrv.broker, vfs.realpath, sz)
except:
bos.unlink(fp)
wunlink(self.log, fp, vfs.flags)
raise
new_lastmod = bos.stat(fp).st_mtime
@@ -2724,7 +2752,7 @@ class HttpCli(object):
):
t = "save blocked by xau server config"
self.log(t, 1)
os.unlink(fp)
wunlink(self.log, fp, vfs.flags)
raise Pebkac(403, t)
vfs, rem = vfs.get_dbv(rem)
@@ -2936,9 +2964,11 @@ class HttpCli(object):
# 512 kB is optimal for huge files, use 64k
open_args = [fsenc(fs_path), "rb", 64 * 1024]
use_sendfile = (
not self.tls #
# fmt: off
not self.tls
and not self.args.no_sendfile
and hasattr(os, "sendfile")
and (BITNESS > 32 or file_sz < 0x7fffFFFF)
# fmt: on
)
#
@@ -3373,7 +3403,7 @@ class HttpCli(object):
pt = "404 not found ┐( ´ -`)┌"
if self.ua.startswith("curl/") or self.ua.startswith("fetch"):
pt = "# acct: %s\n%s" % (self.uname, pt)
pt = "# acct: %s\n%s\n" % (self.uname, pt)
self.reply(pt.encode("utf-8"), status=rc)
return True
@@ -4218,7 +4248,7 @@ class HttpCli(object):
if icur:
lmte = list(mte)
if self.can_admin:
lmte += ["up_ip", ".up_at"]
lmte.extend(("up_ip", ".up_at"))
taglist = [k for k in lmte if k in tagset]
for fe in dirs:

View File

@@ -50,7 +50,7 @@ class HttpConn(object):
self.addr = addr
self.hsrv = hsrv
self.mutex: threading.Lock = hsrv.mutex # mypy404
self.u2mutex: threading.Lock = hsrv.u2mutex # mypy404
self.args: argparse.Namespace = hsrv.args # mypy404
self.E: EnvParams = self.args.E
self.asrv: AuthSrv = hsrv.asrv # mypy404
@@ -93,7 +93,7 @@ class HttpConn(object):
self.rproxy = ip
self.ip = ip
self.log_src = "{} \033[{}m{}".format(ip, color, self.addr[1]).ljust(26)
self.log_src = ("%s \033[%dm%d" % (ip, color, self.addr[1])).ljust(26)
return self.log_src
def respath(self, res_name: str) -> str:
@@ -176,7 +176,7 @@ class HttpConn(object):
self.s = ctx.wrap_socket(self.s, server_side=True)
msg = [
"\033[1;3{:d}m{}".format(c, s)
"\033[1;3%dm%s" % (c, s)
for c, s in zip([0, 5, 0], self.s.cipher()) # type: ignore
]
self.log(" ".join(msg) + "\033[0m")

View File

@@ -117,6 +117,7 @@ class HttpSrv(object):
self.bound: set[tuple[str, int]] = set()
self.name = "hsrv" + nsuf
self.mutex = threading.Lock()
self.u2mutex = threading.Lock()
self.stopping = False
self.tp_nthr = 0 # actual
@@ -220,7 +221,7 @@ class HttpSrv(object):
def periodic(self) -> None:
while True:
time.sleep(2 if self.tp_ncli or self.ncli else 10)
with self.mutex:
with self.u2mutex, self.mutex:
self.u2fh.clean()
if self.tp_q:
self.tp_ncli = max(self.ncli, self.tp_ncli - 2)
@@ -366,7 +367,7 @@ class HttpSrv(object):
if not self.t_periodic:
name = "hsrv-pt"
if self.nid:
name += "-{}".format(self.nid)
name += "-%d" % (self.nid,)
self.t_periodic = Daemon(self.periodic, name)
@@ -385,7 +386,7 @@ class HttpSrv(object):
Daemon(
self.thr_client,
"httpconn-{}-{}".format(addr[0].split(".", 2)[-1][-6:], addr[1]),
"httpconn-%s-%d" % (addr[0].split(".", 2)[-1][-6:], addr[1]),
(sck, addr),
)
@@ -402,9 +403,7 @@ class HttpSrv(object):
try:
sck, addr = task
me = threading.current_thread()
me.name = "httpconn-{}-{}".format(
addr[0].split(".", 2)[-1][-6:], addr[1]
)
me.name = "httpconn-%s-%d" % (addr[0].split(".", 2)[-1][-6:], addr[1])
self.thr_client(sck, addr)
me.name = self.name + "-poolw"
except Exception as ex:

View File

@@ -27,13 +27,13 @@ class Ico(object):
c1 = colorsys.hsv_to_rgb(zb[0] / 256.0, 1, 0.3)
c2 = colorsys.hsv_to_rgb(zb[0] / 256.0, 0.8 if HAVE_PILF else 1, 1)
ci = [int(x * 255) for x in list(c1) + list(c2)]
c = "".join(["{:02x}".format(x) for x in ci])
c = "".join(["%02x" % (x,) for x in ci])
w = 100
h = 30
if not self.args.th_no_crop and as_thumb:
sw, sh = self.args.th_size.split("x")
h = int(100 / (float(sw) / float(sh)))
h = int(100.0 / (float(sw) / float(sh)))
w = 100
if chrome:
@@ -47,12 +47,12 @@ class Ico(object):
# [.lt] are hard to see lowercase / unspaced
ext2 = re.sub("(.)", "\\1 ", ext).upper()
h = int(128 * h / w)
h = int(128.0 * h / w)
w = 128
img = Image.new("RGB", (w, h), "#" + c[:6])
pb = ImageDraw.Draw(img)
_, _, tw, th = pb.textbbox((0, 0), ext2, font_size=16)
xy = ((w - tw) // 2, (h - th) // 2)
xy = (int((w - tw) / 2), int((h - th) / 2))
pb.text(xy, ext2, fill="#" + c[6:], font_size=16)
img = img.resize((w * 2, h * 2), Image.NEAREST)
@@ -68,7 +68,7 @@ class Ico(object):
# svg: 3s, cache: 6s, this: 8s
from PIL import Image, ImageDraw
h = int(64 * h / w)
h = int(64.0 * h / w)
w = 64
img = Image.new("RGB", (w, h), "#" + c[:6])
pb = ImageDraw.Draw(img)
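The color math above maps a hash byte to a hue with `colorsys.hsv_to_rgb` and hex-formats each channel with `%02x`; isolated as a small helper (name and defaults are ours, mirroring the `c1` computation):

```python
import colorsys

def hue_to_hex(b, sat=1.0, val=0.3):
    """map a 0-255 byte to a 6-char rgb hex string,
    like Ico does for the icon background color"""
    r, g, b2 = colorsys.hsv_to_rgb(b / 256.0, sat, val)
    return "".join("%02x" % (int(x * 255),) for x in (r, g, b2))
```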

View File

@@ -118,7 +118,7 @@ def ffprobe(
b"--",
fsenc(abspath),
]
rc, so, se = runcmd(cmd, timeout=timeout, nice=True)
rc, so, se = runcmd(cmd, timeout=timeout, nice=True, oom=200)
retchk(rc, cmd, se)
return parse_ffprobe(so)
@@ -240,7 +240,7 @@ def parse_ffprobe(txt: str) -> tuple[dict[str, tuple[int, Any]], dict[str, list[
if "/" in fps:
fa, fb = fps.split("/")
try:
fps = int(fa) * 1.0 / int(fb)
fps = float(fa) / float(fb)
except:
fps = 9001
@@ -564,6 +564,7 @@ class MTag(object):
args = {
"env": env,
"nice": True,
"oom": 300,
"timeout": parser.timeout,
"kill": parser.kill,
"capture": parser.capture,

View File

@@ -65,21 +65,21 @@ class StreamTar(StreamArc):
cmp = re.sub(r"[^a-z0-9]*pax[^a-z0-9]*", "", cmp)
try:
cmp, lv = cmp.replace(":", ",").split(",")
lv = int(lv)
cmp, zs = cmp.replace(":", ",").split(",")
lv = int(zs)
except:
lv = None
lv = -1
arg = {"name": None, "fileobj": self.qfile, "mode": "w", "format": fmt}
if cmp == "gz":
fun = tarfile.TarFile.gzopen
arg["compresslevel"] = lv if lv is not None else 3
arg["compresslevel"] = lv if lv >= 0 else 3
elif cmp == "bz2":
fun = tarfile.TarFile.bz2open
arg["compresslevel"] = lv if lv is not None else 2
arg["compresslevel"] = lv if lv >= 0 else 2
elif cmp == "xz":
fun = tarfile.TarFile.xzopen
arg["preset"] = lv if lv is not None else 1
arg["preset"] = lv if lv >= 0 else 1
else:
fun = tarfile.open
arg["mode"] = "w|"
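The compression spec handled above arrives as e.g. `gz:9` or `xz,1` (after any `pax` token is stripped), with `-1` now standing in for "no level given" instead of the old `None`; a sketch of just the split (function name is ours):

```python
def parse_tar_cmp(cmp):
    """split 'gz:9' / 'bz2,1' into (codec, level); level -1 = use the codec default"""
    try:
        cmp, zs = cmp.replace(":", ",").split(",")
        lv = int(zs)
    except:
        lv = -1
    return cmp, lv
```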

View File

@@ -133,7 +133,7 @@ class SvcHub(object):
if not self._process_config():
raise Exception(BAD_CFG)
# for non-http clients (ftp)
# for non-http clients (ftp, tftp)
self.bans: dict[str, int] = {}
self.gpwd = Garda(self.args.ban_pw)
self.g404 = Garda(self.args.ban_404)
@@ -268,6 +268,12 @@ class SvcHub(object):
Daemon(self.start_ftpd, "start_ftpd")
zms += "f" if args.ftp else "F"
if args.tftp:
from .tftpd import Tftpd
self.tftpd: Optional[Tftpd] = None
Daemon(self.start_ftpd, "start_tftpd")
if args.smb:
# impacket.dcerpc is noisy about listen timeouts
sto = socket.getdefaulttimeout()
@@ -297,10 +303,12 @@ class SvcHub(object):
def start_ftpd(self) -> None:
time.sleep(30)
if self.ftpd:
return
self.restart_ftpd()
if hasattr(self, "ftpd") and not self.ftpd:
self.restart_ftpd()
if hasattr(self, "tftpd") and not self.tftpd:
self.restart_tftpd()
def restart_ftpd(self) -> None:
if not hasattr(self, "ftpd"):
@@ -317,6 +325,17 @@ class SvcHub(object):
self.ftpd = Ftpd(self)
self.log("root", "started FTPd")
def restart_tftpd(self) -> None:
if not hasattr(self, "tftpd"):
return
from .tftpd import Tftpd
if self.tftpd:
return # todo
self.tftpd = Tftpd(self)
def thr_httpsrv_up(self) -> None:
time.sleep(1 if self.args.ign_ebind_all else 5)
expected = self.broker.num_workers * self.tcpsrv.nsrv
@@ -432,6 +451,13 @@ class SvcHub(object):
else:
setattr(al, k, re.compile(vs))
for k in "tftp_lsf".split(" "):
vs = getattr(al, k)
if not vs or vs == "no":
setattr(al, k, None)
else:
setattr(al, k, re.compile("^" + vs + "$"))
if not al.sus_urls:
al.ban_url = "no"
elif al.ban_url == "no":
@@ -444,6 +470,7 @@ class SvcHub(object):
al.xff_re = self._ipa2re(al.xff_src)
al.ipa_re = self._ipa2re(al.ipa)
al.ftp_ipa_re = self._ipa2re(al.ftp_ipa or al.ipa)
al.tftp_ipa_re = self._ipa2re(al.tftp_ipa or al.ipa)
mte = ODict.fromkeys(DEF_MTE.split(","), True)
al.mte = odfusion(mte, al.mte)
@@ -460,6 +487,13 @@ class SvcHub(object):
if ptn:
setattr(self.args, k, re.compile(ptn))
try:
zf1, zf2 = self.args.rm_retry.split("/")
self.args.rm_re_t = float(zf1)
self.args.rm_re_r = float(zf2)
except:
raise Exception("invalid --rm-retry [%s]" % (self.args.rm_retry,))
return True
def _ipa2re(self, txt) -> Optional[re.Pattern]:
@@ -474,7 +508,7 @@ class SvcHub(object):
import resource
soft, hard = [
x if x > 0 else 1024 * 1024
int(x) if x > 0 else 1024 * 1024
for x in list(resource.getrlimit(resource.RLIMIT_NOFILE))
]
except:
@@ -791,7 +825,7 @@ class SvcHub(object):
self.logf.flush()
now = time.time()
if now >= self.next_day:
if int(now) >= self.next_day:
self._set_next_day()
def _set_next_day(self) -> None:
@@ -819,7 +853,7 @@ class SvcHub(object):
"""handles logging from all components"""
with self.log_mutex:
now = time.time()
if now >= self.next_day:
if int(now) >= self.next_day:
dt = datetime.fromtimestamp(now, UTC)
zs = "{}\n" if self.no_ansi else "\033[36m{}\033[0m\n"
zs = zs.format(dt.strftime("%Y-%m-%d"))
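For context on the `--tftp-lsf` handling added above: svchub compiles the pattern with `^` and `$` anchors, so only exact filename matches trigger the directory-listing fallback. Checking the default pattern against a few names:

```python
import re

# default --tftp-lsf pattern, anchored the same way svchub compiles it
ptn = re.compile("^" + r"\.?(dir|ls)(\.txt)?" + "$")

hits = [s for s in ("ls", ".ls", "dir", ".dir.txt", "ls.txt", "files.txt")
        if ptn.match(s)]
```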

View File

@@ -309,6 +309,7 @@ class TcpSrv(object):
self.hub.start_zeroconf()
gencert(self.log, self.args, self.netdevs)
self.hub.restart_ftpd()
self.hub.restart_tftpd()
def shutdown(self) -> None:
self.stopping = True

copyparty/tftpd.py Normal file (309 lines)
View File

@@ -0,0 +1,309 @@
# coding: utf-8
from __future__ import print_function, unicode_literals
try:
from types import SimpleNamespace
except:
class SimpleNamespace(object):
def __init__(self, **attr):
self.__dict__.update(attr)
import inspect
import logging
import os
import stat
from datetime import datetime
from partftpy import TftpContexts, TftpServer, TftpStates
from partftpy.TftpShared import TftpException
from .__init__ import PY2, TYPE_CHECKING
from .authsrv import VFS
from .bos import bos
from .util import BytesIO, Daemon, exclude_dotfiles, runhook, undot
if True: # pylint: disable=using-constant-test
from typing import Any, Union
if TYPE_CHECKING:
from .svchub import SvcHub
lg = logging.getLogger("tftp")
debug, info, warning, error = (lg.debug, lg.info, lg.warning, lg.error)
def _serverInitial(self, pkt: Any, raddress: str, rport: int) -> bool:
info("connection from %s:%s", raddress, rport)
ret = _orig_serverInitial(self, pkt, raddress, rport)
ptn = _hub[0].args.tftp_ipa_re
if ptn and not ptn.match(raddress):
yeet("client rejected (--tftp-ipa): %s" % (raddress,))
return ret
# patch ipa-check into partftpd
_hub: list["SvcHub"] = []
_orig_serverInitial = TftpStates.TftpServerState.serverInitial
TftpStates.TftpServerState.serverInitial = _serverInitial
class Tftpd(object):
def __init__(self, hub: "SvcHub") -> None:
self.hub = hub
self.args = hub.args
self.asrv = hub.asrv
self.log = hub.log
_hub[:] = []
_hub.append(hub)
lg.setLevel(logging.DEBUG if self.args.tftpv else logging.INFO)
for x in ["partftpy", "partftpy.TftpStates", "partftpy.TftpServer"]:
lgr = logging.getLogger(x)
lgr.setLevel(logging.DEBUG if self.args.tftpv else logging.INFO)
# patch vfs into partftpy
TftpContexts.open = self._open
TftpStates.open = self._open
fos = SimpleNamespace()
for k in os.__dict__:
try:
setattr(fos, k, getattr(os, k))
except:
pass
fos.access = self._access
fos.mkdir = self._mkdir
fos.unlink = self._unlink
fos.sep = "/"
TftpContexts.os = fos
TftpServer.os = fos
TftpStates.os = fos
fop = SimpleNamespace()
for k in os.path.__dict__:
try:
setattr(fop, k, getattr(os.path, k))
except:
pass
fop.abspath = self._p_abspath
fop.exists = self._p_exists
fop.isdir = self._p_isdir
fop.normpath = self._p_normpath
fos.path = fop
self._disarm(fos)
ip = next((x for x in self.args.i if ":" not in x), None)
if not ip:
self.log("tftp", "IPv6 not supported for tftp; listening on 0.0.0.0", 3)
ip = "0.0.0.0"
self.ip = ip
self.port = int(self.args.tftp)
self.srv = TftpServer.TftpServer("/", self._ls)
self.stop = self.srv.stop
ports = []
if self.args.tftp_pr:
p1, p2 = [int(x) for x in self.args.tftp_pr.split("-")]
ports = list(range(p1, p2 + 1))
Daemon(self.srv.listen, "tftp", [self.ip, self.port], ka={"ports": ports})
def nlog(self, msg: str, c: Union[int, str] = 0) -> None:
self.log("tftp", msg, c)
def _v2a(self, caller: str, vpath: str, perms: list, *a: Any) -> tuple[VFS, str]:
vpath = vpath.replace("\\", "/").lstrip("/")
if not perms:
perms = [True, True]
debug('%s("%s", %s) %s\033[K\033[0m', caller, vpath, str(a), perms)
vfs, rem = self.asrv.vfs.get(vpath, "*", *perms)
return vfs, vfs.canonical(rem)
def _ls(self, vpath: str, raddress: str, rport: int, force=False) -> Any:
# generate file listing if vpath is dir.txt and return as file object
if not force:
vpath, fn = os.path.split(vpath.replace("\\", "/"))
ptn = self.args.tftp_lsf
if not ptn or not ptn.match(fn.lower()):
return None
vn, rem = self.asrv.vfs.get(vpath, "*", True, False)
fsroot, vfs_ls, vfs_virt = vn.ls(
rem,
"*",
not self.args.no_scandir,
[[True, False]],
)
dnames = set([x[0] for x in vfs_ls if stat.S_ISDIR(x[1].st_mode)])
dirs1 = [(v.st_mtime, v.st_size, k + "/") for k, v in vfs_ls if k in dnames]
fils1 = [(v.st_mtime, v.st_size, k) for k, v in vfs_ls if k not in dnames]
real1 = dirs1 + fils1
realt = [(datetime.fromtimestamp(mt), sz, fn) for mt, sz, fn in real1]
reals = [
(
"%04d-%02d-%02d %02d:%02d:%02d"
% (
zd.year,
zd.month,
zd.day,
zd.hour,
zd.minute,
zd.second,
),
sz,
fn,
)
for zd, sz, fn in realt
]
virs = [("????-??-?? ??:??:??", 0, k + "/") for k in vfs_virt.keys()]
ls = virs + reals
if "*" not in vn.axs.udot:
names = set(exclude_dotfiles([x[2] for x in ls]))
ls = [x for x in ls if x[2] in names]
try:
biggest = max([x[1] for x in ls])
except:
biggest = 0
perms = []
if "*" in vn.axs.uread:
perms.append("read")
if "*" in vn.axs.udot:
perms.append("hidden")
if "*" in vn.axs.uwrite:
if "*" in vn.axs.udel:
perms.append("overwrite")
else:
perms.append("write")
fmt = "{{}} {{:{},}} {{}}"
fmt = fmt.format(len("{:,}".format(biggest)))
retl = ["# permissions: %s" % (", ".join(perms),)]
retl += [fmt.format(*x) for x in ls]
ret = "\n".join(retl).encode("utf-8", "replace")
return BytesIO(ret)
def _open(self, vpath: str, mode: str, *a: Any, **ka: Any) -> Any:
rd = wr = False
if mode == "rb":
rd = True
elif mode == "wb":
wr = True
else:
raise Exception("bad mode %s" % (mode,))
vfs, ap = self._v2a("open", vpath, [rd, wr])
if wr:
if "*" not in vfs.axs.uwrite:
yeet("blocked write; folder not world-writable: /%s" % (vpath,))
if bos.path.exists(ap) and "*" not in vfs.axs.udel:
yeet("blocked write; folder not world-deletable: /%s" % (vpath,))
xbu = vfs.flags.get("xbu")
if xbu and not runhook(
self.nlog, xbu, ap, vpath, "", "", 0, 0, "8.3.8.7", 0, ""
):
yeet("blocked by xbu server config: " + vpath)
if not self.args.tftp_nols and bos.path.isdir(ap):
return self._ls(vpath, "", 0, True)
return open(ap, mode, *a, **ka)
def _mkdir(self, vpath: str, *a) -> None:
vfs, ap = self._v2a("mkdir", vpath, [])
if "*" not in vfs.axs.uwrite:
yeet("blocked mkdir; folder not world-writable: /%s" % (vpath,))
return bos.mkdir(ap)
def _unlink(self, vpath: str) -> None:
# return bos.unlink(self._v2a("stat", vpath, *a)[1])
vfs, ap = self._v2a("delete", vpath, [True, False, False, True])
try:
inf = bos.stat(ap)
except:
return
if not stat.S_ISREG(inf.st_mode) or inf.st_size:
yeet("attempted delete of non-empty file")
vpath = vpath.replace("\\", "/").lstrip("/")
self.hub.up2k.handle_rm("*", "8.3.8.7", [vpath], [], False)
def _access(self, *a: Any) -> bool:
return True
def _p_abspath(self, vpath: str) -> str:
return "/" + undot(vpath)
def _p_normpath(self, *a: Any) -> str:
return ""
def _p_exists(self, vpath: str) -> bool:
try:
ap = self._v2a("p.exists", vpath, [False, False])[1]
bos.stat(ap)
return True
except:
return False
def _p_isdir(self, vpath: str) -> bool:
try:
st = bos.stat(self._v2a("p.isdir", vpath, [False, False])[1])
ret = stat.S_ISDIR(st.st_mode)
return ret
except:
return False
def _hook(self, *a: Any, **ka: Any) -> None:
src = inspect.currentframe().f_back.f_code.co_name
error("\033[31m%s:hook(%s)\033[0m", src, a)
raise Exception("nope")
def _disarm(self, fos: SimpleNamespace) -> None:
fos.chmod = self._hook
fos.chown = self._hook
fos.close = self._hook
fos.ftruncate = self._hook
fos.lchown = self._hook
fos.link = self._hook
fos.listdir = self._hook
fos.lstat = self._hook
fos.open = self._hook
fos.remove = self._hook
fos.rename = self._hook
fos.replace = self._hook
fos.scandir = self._hook
fos.stat = self._hook
fos.symlink = self._hook
fos.truncate = self._hook
fos.utime = self._hook
fos.walk = self._hook
fos.path.expanduser = self._hook
fos.path.expandvars = self._hook
fos.path.getatime = self._hook
fos.path.getctime = self._hook
fos.path.getmtime = self._hook
fos.path.getsize = self._hook
fos.path.isabs = self._hook
fos.path.isfile = self._hook
fos.path.islink = self._hook
fos.path.realpath = self._hook
def yeet(msg: str) -> None:
warning(msg)
raise TftpException(msg)
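The `_disarm` pattern above — cloning `os` into a `SimpleNamespace` and replacing every destructive call with a trap before handing the namespace to partftpy — can be sketched in isolation. A minimal version (function and parameter names here are illustrative, not copyparty's):

```python
import os
from types import SimpleNamespace


def make_sandboxed_os(trap):
    """Clone the os module into a plain namespace, then overwrite the
    destructive filesystem calls with `trap`; library code patched to use
    this namespace instead of os can no longer bypass the VFS layer."""
    fos = SimpleNamespace()
    for k in dir(os):
        try:
            setattr(fos, k, getattr(os, k))
        except Exception:
            pass  # some attributes are not copyable on every platform
    for k in ("chmod", "chown", "link", "remove", "rename",
              "symlink", "truncate", "unlink", "utime"):
        setattr(fos, k, trap)
    return fos
```

The harmless attributes (`sep`, `getcwd`, ...) stay intact, so the library keeps working for everything except the calls that were deliberately replaced.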


@@ -28,6 +28,7 @@ from .util import (
runcmd,
statdir,
vsplit,
wunlink,
)
if True: # pylint: disable=using-constant-test
@@ -102,7 +103,7 @@ def thumb_path(histpath: str, rem: str, mtime: float, fmt: str, ffa: set[str]) -
rd += "\n" + fmt
h = hashlib.sha512(afsenc(rd)).digest()
b64 = base64.urlsafe_b64encode(h).decode("ascii")[:24]
rd = "{}/{}/".format(b64[:2], b64[2:4]).lower() + b64
rd = ("%s/%s/" % (b64[:2], b64[2:4])).lower() + b64
# could keep original filenames but this is safer re pathlen
h = hashlib.sha512(afsenc(fn)).digest()
@@ -115,7 +116,7 @@ def thumb_path(histpath: str, rem: str, mtime: float, fmt: str, ffa: set[str]) -
fmt = "webp" if fc == "w" else "png" if fc == "p" else "jpg"
cat = "th"
return "{}/{}/{}/{}.{:x}.{}".format(histpath, cat, rd, fn, int(mtime), fmt)
return "%s/%s/%s/%s.%x.%s" % (histpath, cat, rd, fn, int(mtime), fmt)
class ThumbSrv(object):
@@ -129,6 +130,8 @@ class ThumbSrv(object):
self.mutex = threading.Lock()
self.busy: dict[str, list[threading.Condition]] = {}
self.ram: dict[str, float] = {}
self.memcond = threading.Condition(self.mutex)
self.stopping = False
self.nthr = max(1, self.args.th_mt)
@@ -214,7 +217,7 @@ class ThumbSrv(object):
with self.mutex:
try:
self.busy[tpath].append(cond)
self.log("wait {}".format(tpath))
self.log("joined waiting room for %s" % (tpath,))
except:
thdir = os.path.dirname(tpath)
bos.makedirs(os.path.join(thdir, "w"))
@@ -265,6 +268,23 @@ class ThumbSrv(object):
"ffa": self.fmt_ffa,
}
def wait4ram(self, need: float, ttpath: str) -> None:
ram = self.args.th_ram_max
if need > ram * 0.99:
t = "file too big; need %.2f GiB RAM, but --th-ram-max is only %.1f"
raise Exception(t % (need, ram))
while True:
with self.mutex:
used = sum([v for k, v in self.ram.items() if k != ttpath]) + need
if used < ram:
# self.log("XXX self.ram: %s" % (self.ram,), 5)
self.ram[ttpath] = need
return
with self.memcond:
# self.log("at RAM limit; used %.2f GiB, need %.2f more" % (used-need, need), 1)
self.memcond.wait(3)
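The `wait4ram` scheme above is a bounded-resource gate: each job reserves an estimated amount of RAM under a mutex, and jobs that would push the total past the cap sleep on a `Condition` (with a 3-second recheck, matching `memcond.wait(3)`) until a finishing job calls `notify_all`. A minimal standalone sketch, with illustrative class and field names:

```python
import threading


class RamGate:
    """Admit jobs only while the sum of their declared RAM reservations
    stays under a cap; blocked jobs wait on a Condition and recheck when
    released memory is announced via notify_all."""

    def __init__(self, cap_gib):
        self.cap = cap_gib
        self.used = {}  # job-id -> GiB reserved
        self.mutex = threading.Lock()
        self.cond = threading.Condition(self.mutex)

    def acquire(self, job, need):
        if need > self.cap:
            raise Exception("need %.2f GiB but cap is %.2f" % (need, self.cap))
        with self.cond:
            while sum(self.used.values()) + need > self.cap:
                self.cond.wait(3)  # periodic recheck, like wait4ram
            self.used[job] = need

    def release(self, job):
        with self.cond:
            self.used.pop(job, None)
            self.cond.notify_all()
```

`release` corresponds to the `self.ram.pop(ttpath, None)` / `memcond.notify_all()` pair in the worker's cleanup path.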
def worker(self) -> None:
while not self.stopping:
task = self.q.get()
@@ -298,7 +318,7 @@ class ThumbSrv(object):
tdir, tfn = os.path.split(tpath)
ttpath = os.path.join(tdir, "w", tfn)
try:
bos.unlink(ttpath)
wunlink(self.log, ttpath, vn.flags)
except:
pass
@@ -318,7 +338,7 @@ class ThumbSrv(object):
else:
# ffmpeg may spawn empty files on windows
try:
os.unlink(ttpath)
wunlink(self.log, ttpath, vn.flags)
except:
pass
@@ -330,11 +350,15 @@ class ThumbSrv(object):
with self.mutex:
subs = self.busy[tpath]
del self.busy[tpath]
self.ram.pop(ttpath, None)
for x in subs:
with x:
x.notify_all()
with self.memcond:
self.memcond.notify_all()
with self.mutex:
self.nthr -= 1
@@ -366,6 +390,7 @@ class ThumbSrv(object):
return im
def conv_pil(self, abspath: str, tpath: str, fmt: str, vn: VFS) -> None:
self.wait4ram(0.2, tpath)
with Image.open(fsenc(abspath)) as im:
try:
im = self.fancy_pillow(im, fmt, vn)
@@ -382,7 +407,7 @@ class ThumbSrv(object):
# method 0 = pillow-default, fast
# method 4 = ffmpeg-default
# method 6 = max, slow
fmts += ["RGBA", "LA"]
fmts.extend(("RGBA", "LA"))
args["method"] = 6
else:
# default q = 75
@@ -395,6 +420,7 @@ class ThumbSrv(object):
im.save(tpath, **args)
def conv_vips(self, abspath: str, tpath: str, fmt: str, vn: VFS) -> None:
self.wait4ram(0.2, tpath)
crops = ["centre", "none"]
if fmt.endswith("f"):
crops = ["none"]
@@ -415,6 +441,7 @@ class ThumbSrv(object):
img.write_to_file(tpath, Q=40)
def conv_ffmpeg(self, abspath: str, tpath: str, fmt: str, vn: VFS) -> None:
self.wait4ram(0.2, tpath)
ret, _ = ffprobe(abspath, int(vn.flags["convt"] / 2))
if not ret:
return
@@ -467,9 +494,9 @@ class ThumbSrv(object):
cmd += [fsenc(tpath)]
self._run_ff(cmd, vn)
def _run_ff(self, cmd: list[bytes], vn: VFS) -> None:
def _run_ff(self, cmd: list[bytes], vn: VFS, oom: int = 400) -> None:
# self.log((b" ".join(cmd)).decode("utf-8"))
ret, _, serr = runcmd(cmd, timeout=vn.flags["convt"], nice=True)
ret, _, serr = runcmd(cmd, timeout=vn.flags["convt"], nice=True, oom=oom)
if not ret:
return
@@ -517,8 +544,21 @@ class ThumbSrv(object):
if "ac" not in ret:
raise Exception("not audio")
flt = (
b"[0:a:0]"
# jt_versi.xm: 405M/839s
dur = ret[".dur"][1] if ".dur" in ret else 300
need = 0.2 + dur / 3000
speedup = b""
if need > self.args.th_ram_max * 0.7:
self.log("waves too big (need %.2f GiB); trying to optimize" % (need,))
need = 0.2 + dur / 4200 # only helps about this much...
speedup = b"aresample=8000,"
if need > self.args.th_ram_max * 0.96:
raise Exception("file too big; cannot waves")
self.wait4ram(need, tpath)
flt = b"[0:a:0]" + speedup
flt += (
b"compand=.3|.3:1|1:-90/-60|-60/-40|-40/-30|-20/-20:6:0:-90:0.2"
b",volume=2"
b",showwavespic=s=2048x64:colors=white"
@@ -545,6 +585,15 @@ class ThumbSrv(object):
if "ac" not in ret:
raise Exception("not audio")
# https://trac.ffmpeg.org/ticket/10797
# expect 1 GiB every 600 seconds when duration is tricky;
# simple filetypes are generally safer so let's special-case those
safe = ("flac", "wav", "aif", "aiff", "opus")
coeff = 1800 if abspath.split(".")[-1].lower() in safe else 600
dur = ret[".dur"][1] if ".dur" in ret else 300
need = 0.2 + dur / coeff
self.wait4ram(need, tpath)
fc = "[0:a:0]aresample=48000{},showspectrumpic=s=640x512,crop=780:544:70:50[o]"
if self.args.th_ff_swr:
@@ -587,6 +636,7 @@ class ThumbSrv(object):
if self.args.no_acode:
raise Exception("disabled in server config")
self.wait4ram(0.2, tpath)
ret, _ = ffprobe(abspath, int(vn.flags["convt"] / 2))
if "ac" not in ret:
raise Exception("not audio")
@@ -602,7 +652,7 @@ class ThumbSrv(object):
if want_caf:
tmp_opus = tpath + ".opus"
try:
bos.unlink(tmp_opus)
wunlink(self.log, tmp_opus, vn.flags)
except:
pass
@@ -623,7 +673,7 @@ class ThumbSrv(object):
fsenc(tmp_opus)
]
# fmt: on
self._run_ff(cmd, vn)
self._run_ff(cmd, vn, oom=300)
# iOS fails to play some "insufficiently complex" files
# (average file shorter than 8 seconds), so of course we
@@ -647,7 +697,7 @@ class ThumbSrv(object):
fsenc(tpath)
]
# fmt: on
self._run_ff(cmd, vn)
self._run_ff(cmd, vn, oom=300)
elif want_caf:
# simple remux should be safe
@@ -665,11 +715,11 @@ class ThumbSrv(object):
fsenc(tpath)
]
# fmt: on
self._run_ff(cmd, vn)
self._run_ff(cmd, vn, oom=300)
if tmp_opus != tpath:
try:
bos.unlink(tmp_opus)
wunlink(self.log, tmp_opus, vn.flags)
except:
pass
@@ -696,7 +746,10 @@ class ThumbSrv(object):
else:
self.log("\033[Jcln {} ({})/\033[A".format(histpath, vol))
ndirs += self.clean(histpath)
try:
ndirs += self.clean(histpath)
except Exception as ex:
self.log("\033[Jcln err in %s: %r" % (histpath, ex), 3)
self.log("\033[Jcln ok; rm {} dirs".format(ndirs))


@@ -21,7 +21,7 @@ from copy import deepcopy
from queue import Queue
from .__init__ import ANYWIN, PY2, TYPE_CHECKING, WINDOWS
from .__init__ import ANYWIN, PY2, TYPE_CHECKING, WINDOWS, E
from .authsrv import LEELOO_DALLAS, SSEELOG, VFS, AuthSrv
from .bos import bos
from .cfg import vf_bmap, vf_cmap, vf_vmap
@@ -35,8 +35,10 @@ from .util import (
Pebkac,
ProgressPrinter,
absreal,
alltrace,
atomic_move,
db_ex_chk,
dir_is_empty,
djoin,
fsenc,
gen_filekey,
@@ -63,6 +65,7 @@ from .util import (
vsplit,
w8b64dec,
w8b64enc,
wunlink,
)
try:
@@ -85,6 +88,9 @@ zsg = "avif,avifs,bmp,gif,heic,heics,heif,heifs,ico,j2p,j2k,jp2,jpeg,jpg,jpx,png
CV_EXTS = set(zsg.split(","))
HINT_HISTPATH = "you could try moving the database to another location (preferably an SSD or NVME drive) using either the --hist argument (global option for all volumes), or the hist volflag (just for this volume)"
class Dbw(object):
def __init__(self, c: "sqlite3.Cursor", n: int, t: float) -> None:
self.c = c
@@ -145,9 +151,12 @@ class Up2k(object):
self.entags: dict[str, set[str]] = {}
self.mtp_parsers: dict[str, dict[str, MParser]] = {}
self.pending_tags: list[tuple[set[str], str, str, dict[str, Any]]] = []
self.hashq: Queue[tuple[str, str, str, str, str, float, str, bool]] = Queue()
self.hashq: Queue[
tuple[str, str, dict[str, Any], str, str, str, float, str, bool]
] = Queue()
self.tagq: Queue[tuple[str, str, str, str, str, float]] = Queue()
self.tag_event = threading.Condition()
self.hashq_mutex = threading.Lock()
self.n_hashq = 0
self.n_tagq = 0
self.mpool_used = False
@@ -578,7 +587,7 @@ class Up2k(object):
if gid:
self.log("reload #{} running".format(self.gid))
self.pp = ProgressPrinter()
self.pp = ProgressPrinter(self.log, self.args)
vols = list(all_vols.values())
t0 = time.time()
have_e2d = False
@@ -601,7 +610,7 @@ class Up2k(object):
for vol in vols:
try:
bos.makedirs(vol.realpath) # gonna happen at snap anyways
bos.listdir(vol.realpath)
dir_is_empty(self.log_func, not self.args.no_scandir, vol.realpath)
except:
self.volstate[vol.vpath] = "OFFLINE (cannot access folder)"
self.log("cannot access " + vol.realpath, c=1)
@@ -804,7 +813,7 @@ class Up2k(object):
ft = "\033[0;32m{}{:.0}"
ff = "\033[0;35m{}{:.0}"
fv = "\033[0;36m{}:\033[90m{}"
fx = set(("html_head",))
fx = set(("html_head", "rm_re_t", "rm_re_r"))
fd = vf_bmap()
fd.update(vf_cmap())
fd.update(vf_vmap())
@@ -887,7 +896,7 @@ class Up2k(object):
return None
try:
cur = self._open_db(db_path)
cur = self._open_db_wd(db_path)
# speeds measured uploading 520 small files on a WD20SPZX (SMR 2.5" 5400rpm 4kb)
dbd = flags["dbd"]
@@ -930,8 +939,8 @@ class Up2k(object):
return cur, db_path
except:
msg = "cannot use database at [{}]:\n{}"
self.log(msg.format(ptop, traceback.format_exc()))
msg = "ERROR: cannot use database at [%s]:\n%s\n\033[33mhint: %s\n"
self.log(msg % (db_path, traceback.format_exc(), HINT_HISTPATH), 1)
return None
@@ -983,12 +992,12 @@ class Up2k(object):
excl = [x.replace("/", "\\") for x in excl]
else:
# ~/.wine/dosdevices/z:/ and such
excl += ["/dev", "/proc", "/run", "/sys"]
excl.extend(("/dev", "/proc", "/run", "/sys"))
rtop = absreal(top)
n_add = n_rm = 0
try:
if not bos.listdir(rtop):
if dir_is_empty(self.log_func, not self.args.no_scandir, rtop):
t = "volume /%s at [%s] is empty; will not be indexed as this could be due to an offline filesystem"
self.log(t % (vol.vpath, rtop), 6)
return True, False
@@ -1085,7 +1094,7 @@ class Up2k(object):
cv = ""
assert self.pp and self.mem_cur
self.pp.msg = "a{} {}".format(self.pp.n, cdir)
self.pp.msg = "a%d %s" % (self.pp.n, cdir)
rd = cdir[len(top) :].strip("/")
if WINDOWS:
@@ -1160,8 +1169,8 @@ class Up2k(object):
continue
if not sz and (
"{}.PARTIAL".format(iname) in partials
or ".{}.PARTIAL".format(iname) in partials
"%s.PARTIAL" % (iname,) in partials
or ".%s.PARTIAL" % (iname,) in partials
):
# placeholder for unfinished upload
continue
@@ -1224,7 +1233,7 @@ class Up2k(object):
abspath = os.path.join(cdir, fn)
nohash = reh.search(abspath) if reh else False
sql = "select w, mt, sz, at from up where rd = ? and fn = ?"
sql = "select w, mt, sz, ip, at from up where rd = ? and fn = ?"
try:
c = db.c.execute(sql, (rd, fn))
except:
@@ -1233,7 +1242,7 @@ class Up2k(object):
in_db = list(c.fetchall())
if in_db:
self.pp.n -= 1
dw, dts, dsz, at = in_db[0]
dw, dts, dsz, ip, at = in_db[0]
if len(in_db) > 1:
t = "WARN: multiple entries: [{}] => [{}] |{}|\n{}"
rep_db = "\n".join([repr(x) for x in in_db])
@@ -1255,9 +1264,11 @@ class Up2k(object):
db.n += 1
in_db = []
else:
dw = ""
ip = ""
at = 0
self.pp.msg = "a{} {}".format(self.pp.n, abspath)
self.pp.msg = "a%d %s" % (self.pp.n, abspath)
if nohash or not sz:
wark = up2k_wark_from_metadata(self.salt, sz, lmod, rd, fn)
@@ -1278,8 +1289,12 @@ class Up2k(object):
wark = up2k_wark_from_hashlist(self.salt, sz, hashes)
if dw and dw != wark:
ip = ""
at = 0
# skip upload hooks by not providing vflags
self.db_add(db.c, {}, rd, fn, lmod, sz, "", "", wark, "", "", "", at)
self.db_add(db.c, {}, rd, fn, lmod, sz, "", "", wark, "", "", ip, at)
db.n += 1
ret += 1
td = time.time() - db.t
@@ -1357,7 +1372,7 @@ class Up2k(object):
rd = drd
abspath = djoin(top, rd)
self.pp.msg = "b{} {}".format(ndirs - nchecked, abspath)
self.pp.msg = "b%d %s" % (ndirs - nchecked, abspath)
try:
if os.path.isdir(abspath):
continue
@@ -1709,7 +1724,7 @@ class Up2k(object):
cur.execute(q, (w[:16],))
abspath = djoin(ptop, rd, fn)
self.pp.msg = "c{} {}".format(nq, abspath)
self.pp.msg = "c%d %s" % (nq, abspath)
if not mpool:
n_tags = self._tagscan_file(cur, entags, w, abspath, ip, at)
else:
@@ -1766,7 +1781,7 @@ class Up2k(object):
if c2.execute(q, (row[0][:16],)).fetchone():
continue
gf.write("{}\n".format("\x00".join(row)).encode("utf-8"))
gf.write(("%s\n" % ("\x00".join(row),)).encode("utf-8"))
n += 1
c2.close()
@@ -2144,6 +2159,46 @@ class Up2k(object):
def _trace(self, msg: str) -> None:
self.log("ST: {}".format(msg))
def _open_db_wd(self, db_path: str) -> "sqlite3.Cursor":
ok: list[int] = []
Daemon(self._open_db_timeout, "opendb_watchdog", [db_path, ok])
try:
return self._open_db(db_path)
finally:
ok.append(1)
def _open_db_timeout(self, db_path, ok: list[int]) -> None:
# give it plenty of time due to the count statement (and wisdom from byte's box)
for _ in range(60):
time.sleep(1)
if ok:
return
t = "WARNING:\n\n initializing an up2k database is taking longer than one minute; something has probably gone wrong:\n\n"
self._log_sqlite_incompat(db_path, t)
def _log_sqlite_incompat(self, db_path, t0) -> None:
txt = t0 or ""
digest = hashlib.sha512(db_path.encode("utf-8", "replace")).digest()
stackname = base64.urlsafe_b64encode(digest[:9]).decode("utf-8")
stackpath = os.path.join(E.cfg, "stack-%s.txt" % (stackname,))
t = " the filesystem at %s may not support locking, or is otherwise incompatible with sqlite\n\n %s\n\n"
t += " PS: if you think this is a bug and wish to report it, please include your configuration + the following file: %s\n"
txt += t % (db_path, HINT_HISTPATH, stackpath)
self.log(txt, 3)
try:
stk = alltrace()
with open(stackpath, "wb") as f:
f.write(stk.encode("utf-8", "replace"))
except Exception as ex:
self.log("warning: failed to write %s: %s" % (stackpath, ex), 3)
if self.args.q:
t = "-" * 72
raise Exception("%s\n%s\n%s" % (t, txt, t))
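`_open_db_wd` implements a lock-free watchdog: a shared list acts as the "completed" flag (list `append` is atomic under the GIL), and a daemon thread polls it once per second, so a `sqlite3.connect` that hangs on a broken filesystem still produces a diagnostic while the caller stays blocked. The shape, reduced to its essentials with illustrative names:

```python
import threading
import time


def call_with_watchdog(fn, timeout_s, on_stuck):
    """Invoke fn(); if it has not returned after timeout_s seconds,
    fire on_stuck() from a background thread (the caller keeps blocking,
    mirroring how copyparty keeps waiting on the stuck db open)."""
    done = []  # appended to exactly once, when fn() returns or raises

    def watchdog():
        for _ in range(int(timeout_s)):
            time.sleep(1)
            if done:
                return  # disarmed; fn finished in time
        on_stuck()

    threading.Thread(target=watchdog, daemon=True).start()
    try:
        return fn()
    finally:
        done.append(1)
```

In the real code, `on_stuck` is `_log_sqlite_incompat`, which also dumps an `alltrace()` of every thread to the config directory before (optionally) crashing.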
def _orz(self, db_path: str) -> "sqlite3.Cursor":
c = sqlite3.connect(
db_path, timeout=self.timeout, check_same_thread=False
@@ -2156,7 +2211,7 @@ class Up2k(object):
cur = self._orz(db_path)
ver = self._read_ver(cur)
if not existed and ver is None:
return self._create_db(db_path, cur)
return self._try_create_db(db_path, cur)
if ver == 4:
try:
@@ -2194,8 +2249,16 @@ class Up2k(object):
db = cur.connection
cur.close()
db.close()
bos.unlink(db_path)
return self._create_db(db_path, None)
self._delete_db(db_path)
return self._try_create_db(db_path, None)
def _delete_db(self, db_path: str):
for suf in ("", "-shm", "-wal", "-journal"):
try:
bos.unlink(db_path + suf)
except:
if not suf:
raise
def _backup_db(
self, db_path: str, cur: "sqlite3.Cursor", ver: Optional[int], msg: str
@@ -2232,6 +2295,18 @@ class Up2k(object):
return int(rows[0][0])
return None
def _try_create_db(
self, db_path: str, cur: Optional["sqlite3.Cursor"]
) -> "sqlite3.Cursor":
try:
return self._create_db(db_path, cur)
except:
try:
self._delete_db(db_path)
except:
pass
raise
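Together, `_delete_db` and `_try_create_db` close the deadlock loop described in the commit message: a failed table creation removes the zero-byte database (plus sqlite's `-shm`/`-wal`/`-journal` sidecars) so the next startup never finds a "corrupted" file to back up. A standalone sketch of the two helpers (names are illustrative):

```python
import os


def delete_db(db_path):
    """Remove a database and its sqlite sidecar files; only a failure to
    delete the main file is an error (sidecars may simply not exist)."""
    for suf in ("", "-shm", "-wal", "-journal"):
        try:
            os.unlink(db_path + suf)
        except OSError:
            if not suf:
                raise


def try_create_db(db_path, create):
    """Run create(); on failure, clean up the half-made database before
    re-raising so the next attempt starts from a blank slate."""
    try:
        return create(db_path)
    except Exception:
        try:
            delete_db(db_path)
        except OSError:
            pass
        raise
```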
def _create_db(
self, db_path: str, cur: Optional["sqlite3.Cursor"]
) -> "sqlite3.Cursor":
@@ -2351,7 +2426,8 @@ class Up2k(object):
t = "cannot receive uploads right now;\nserver busy with {}.\nPlease wait; the client will retry..."
raise Pebkac(503, t.format(self.blocked or "[unknown]"))
except TypeError:
# py2
if not PY2:
raise
with self.mutex:
self._job_volchk(cj)
@@ -2574,12 +2650,13 @@ class Up2k(object):
raise Pebkac(403, t)
if not self.args.nw:
dvf: dict[str, Any] = vfs.flags
try:
dvf = self.flags[job["ptop"]]
self._symlink(src, dst, dvf, lmod=cj["lmod"], rm=True)
except:
if bos.path.exists(dst):
bos.unlink(dst)
wunlink(self.log, dst, dvf)
if not n4g:
raise
@@ -2676,6 +2753,28 @@ class Up2k(object):
fk = self.gen_fk(alg, self.args.fk_salt, ap, job["size"], ino)
ret["fk"] = fk[: vfs.flags["fk"]]
if (
not ret["hash"]
and cur
and cj.get("umod")
and int(cj["lmod"]) != int(job["lmod"])
and not self.args.nw
and cj["user"] in vfs.axs.uwrite
and cj["user"] in vfs.axs.udel
):
sql = "update up set mt=? where substr(w,1,16)=? and +rd=? and +fn=?"
try:
cur.execute(sql, (cj["lmod"], wark[:16], job["prel"], job["name"]))
cur.connection.commit()
ap = djoin(job["ptop"], job["prel"], job["name"])
times = (int(time.time()), int(cj["lmod"]))
bos.utime(ap, times, False)
self.log("touched %s from %d to %d" % (ap, job["lmod"], cj["lmod"]))
except Exception as ex:
self.log("umod failed, %r" % (ex,), 3)
return ret
def _untaken(self, fdir: str, job: dict[str, Any], ts: float) -> str:
@@ -2688,14 +2787,14 @@ class Up2k(object):
fp = djoin(fdir, fname)
if job.get("replace") and bos.path.exists(fp):
self.log("replacing existing file at {}".format(fp))
bos.unlink(fp)
wunlink(self.log, fp, self.flags.get(job["ptop"]) or {})
if self.args.plain_ip:
dip = ip.replace(":", ".")
else:
dip = self.hub.iphash.s(ip)
suffix = "-{:.6f}-{}".format(ts, dip)
suffix = "-%.6f-%s" % (ts, dip)
with ren_open(fname, "wb", fdir=fdir, suffix=suffix) as zfw:
return zfw["orz"][1]
@@ -2746,7 +2845,7 @@ class Up2k(object):
ldst = ldst.replace("/", "\\")
if rm and bos.path.exists(dst):
bos.unlink(dst)
wunlink(self.log, dst, flags)
try:
if "hardlink" in flags:
@@ -2762,7 +2861,7 @@ class Up2k(object):
Path(ldst).symlink_to(lsrc)
if not bos.path.exists(dst):
try:
bos.unlink(dst)
wunlink(self.log, dst, flags)
except:
pass
t = "the created symlink [%s] did not resolve to [%s]"
@@ -2910,7 +3009,7 @@ class Up2k(object):
except:
pass
z2 += [upt]
z2.append(upt)
if self.idx_wark(vflags, *z2):
del self.registry[ptop][wark]
else:
@@ -3065,7 +3164,7 @@ class Up2k(object):
):
t = "upload blocked by xau server config"
self.log(t, 1)
bos.unlink(dst)
wunlink(self.log, dst, vflags)
self.registry[ptop].pop(wark, None)
raise Pebkac(403, t)
@@ -3203,9 +3302,9 @@ class Up2k(object):
except:
pass
volpath = "{}/{}".format(vrem, fn).strip("/")
vpath = "{}/{}".format(dbv.vpath, volpath).strip("/")
self.log("rm {}\n {}".format(vpath, abspath))
volpath = ("%s/%s" % (vrem, fn)).strip("/")
vpath = ("%s/%s" % (dbv.vpath, volpath)).strip("/")
self.log("rm %s\n %s" % (vpath, abspath))
_ = dbv.get(volpath, uname, *permsets[0])
if xbd:
if not runhook(
@@ -3236,7 +3335,7 @@ class Up2k(object):
if cur:
cur.connection.commit()
bos.unlink(abspath)
wunlink(self.log, abspath, dbv.flags)
if xad:
runhook(
self.log,
@@ -3391,7 +3490,7 @@ class Up2k(object):
t = "moving symlink from [{}] to [{}], target [{}]"
self.log(t.format(sabs, dabs, dlabs))
mt = bos.path.getmtime(sabs, False)
bos.unlink(sabs)
wunlink(self.log, sabs, svn.flags)
self._symlink(dlabs, dabs, dvn.flags, False, lmod=mt)
# folders are too scary, schedule rescan of both vols
@@ -3458,7 +3557,7 @@ class Up2k(object):
dlink = os.path.join(os.path.dirname(sabs), dlink)
dlink = bos.path.abspath(dlink)
self._symlink(dlink, dabs, dvn.flags, lmod=ftime)
bos.unlink(sabs)
wunlink(self.log, sabs, svn.flags)
else:
atomic_move(sabs, dabs)
@@ -3473,7 +3572,7 @@ class Up2k(object):
shutil.copy2(b1, b2)
except:
try:
os.unlink(b2)
wunlink(self.log, dabs, dvn.flags)
except:
pass
@@ -3485,7 +3584,7 @@ class Up2k(object):
zb = os.readlink(b1)
os.symlink(zb, b2)
except:
os.unlink(b2)
wunlink(self.log, dabs, dvn.flags)
raise
if is_link:
@@ -3495,7 +3594,7 @@ class Up2k(object):
except:
pass
os.unlink(b1)
wunlink(self.log, sabs, svn.flags)
if xar:
runhook(self.log, xar, dabs, dvp, "", uname, 0, 0, "", 0, "")
@@ -3635,10 +3734,11 @@ class Up2k(object):
ptop, rem = links.pop(slabs)
self.log("linkswap [{}] and [{}]".format(sabs, slabs))
mt = bos.path.getmtime(slabs, False)
bos.unlink(slabs)
flags = self.flags.get(ptop) or {}
wunlink(self.log, slabs, flags)
bos.rename(sabs, slabs)
bos.utime(slabs, (int(time.time()), int(mt)), False)
self._symlink(slabs, sabs, self.flags.get(ptop) or {}, False)
self._symlink(slabs, sabs, flags, False)
full[slabs] = (ptop, rem)
sabs = slabs
@@ -3684,13 +3784,13 @@ class Up2k(object):
self.log(t % (ex, ex), 3)
self.log("relinking [%s] to [%s]" % (alink, dabs))
flags = self.flags.get(parts[0]) or {}
try:
lmod = bos.path.getmtime(alink, False)
bos.unlink(alink)
wunlink(self.log, alink, flags)
except:
pass
flags = self.flags.get(parts[0]) or {}
self._symlink(dabs, alink, flags, False, lmod=lmod or 0)
return len(full) + len(links)
@@ -3737,7 +3837,7 @@ class Up2k(object):
return []
if self.pp:
mb = int(fsz / 1024 / 1024)
mb = fsz // (1024 * 1024)
self.pp.msg = prefix + str(mb) + suffix
hashobj = hashlib.sha512()
@@ -3759,8 +3859,14 @@ class Up2k(object):
def _new_upload(self, job: dict[str, Any]) -> None:
pdir = djoin(job["ptop"], job["prel"])
if not job["size"] and bos.path.isfile(djoin(pdir, job["name"])):
return
if not job["size"]:
try:
inf = bos.stat(djoin(pdir, job["name"]))
if stat.S_ISREG(inf.st_mode):
job["lmod"] = inf.st_mtime
return
except:
pass
self.registry[job["ptop"]][job["wark"]] = job
job["name"] = self._untaken(pdir, job, job["t0"])
@@ -3802,7 +3908,7 @@ class Up2k(object):
else:
dip = self.hub.iphash.s(job["addr"])
suffix = "-{:.6f}-{}".format(job["t0"], dip)
suffix = "-%.6f-%s" % (job["t0"], dip)
with ren_open(tnam, "wb", fdir=pdir, suffix=suffix) as zfw:
f, job["tnam"] = zfw["orz"]
abspath = djoin(pdir, job["tnam"])
@@ -3991,16 +4097,16 @@ class Up2k(object):
self.log("tagged {} ({}+{})".format(abspath, ntags1, len(tags) - ntags1))
def _hasher(self) -> None:
with self.mutex:
with self.hashq_mutex:
self.n_hashq += 1
while True:
with self.mutex:
with self.hashq_mutex:
self.n_hashq -= 1
# self.log("hashq {}".format(self.n_hashq))
task = self.hashq.get()
if len(task) != 8:
if len(task) != 9:
raise Exception("invalid hash task")
try:
@@ -4009,11 +4115,14 @@ class Up2k(object):
except Exception as ex:
self.log("failed to hash %s: %s" % (task, ex), 1)
def _hash_t(self, task: tuple[str, str, str, str, str, float, str, bool]) -> bool:
ptop, vtop, rd, fn, ip, at, usr, skip_xau = task
def _hash_t(
self, task: tuple[str, str, dict[str, Any], str, str, str, float, str, bool]
) -> bool:
ptop, vtop, flags, rd, fn, ip, at, usr, skip_xau = task
# self.log("hashq {} pop {}/{}/{}".format(self.n_hashq, ptop, rd, fn))
if "e2d" not in self.flags[ptop]:
return True
with self.mutex:
if not self.register_vpath(ptop, flags):
return True
abspath = djoin(ptop, rd, fn)
self.log("hashing " + abspath)
@@ -4064,11 +4173,22 @@ class Up2k(object):
usr: str,
skip_xau: bool = False,
) -> None:
with self.mutex:
self.register_vpath(ptop, flags)
self.hashq.put((ptop, vtop, rd, fn, ip, at, usr, skip_xau))
if "e2d" not in flags:
return
if self.n_hashq > 1024:
t = "%d files in hashq; taking a nap"
self.log(t % (self.n_hashq,), 6)
for _ in range(self.n_hashq // 1024):
time.sleep(0.1)
if self.n_hashq < 1024:
break
zt = (ptop, vtop, flags, rd, fn, ip, at, usr, skip_xau)
with self.hashq_mutex:
self.hashq.put(zt)
self.n_hashq += 1
# self.log("hashq {} push {}/{}/{}".format(self.n_hashq, ptop, rd, fn))
def shutdown(self) -> None:
self.stop = True
@@ -4119,6 +4239,6 @@ def up2k_wark_from_hashlist(salt: str, filesize: int, hashes: list[str]) -> str:
def up2k_wark_from_metadata(salt: str, sz: int, lastmod: int, rd: str, fn: str) -> str:
ret = sfsenc("{}\n{}\n{}\n{}\n{}".format(salt, lastmod, sz, rd, fn))
ret = sfsenc("%s\n%d\n%d\n%s\n%s" % (salt, lastmod, sz, rd, fn))
ret = base64.urlsafe_b64encode(hashlib.sha512(ret).digest())
return "#{}".format(ret.decode("ascii"))[:44]
return ("#%s" % (ret.decode("ascii"),))[:44]
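For reference, the metadata wark above is deterministic and fixed-width: salt, mtime, size, and path are newline-joined, sha512-hashed, base64url-encoded, prefixed with `#` (marking it as content-less), and truncated to 44 characters. A self-contained version mirroring the diffed function:

```python
import base64
import hashlib


def wark_from_metadata(salt, sz, lastmod, rd, fn):
    """Identifier for a file derived from its metadata instead of its
    contents (the nohash / zero-size fallback seen in the scanner)."""
    zb = ("%s\n%d\n%d\n%s\n%s" % (salt, lastmod, sz, rd, fn)).encode("utf-8")
    zb = base64.urlsafe_b64encode(hashlib.sha512(zb).digest())
    return ("#%s" % (zb.decode("ascii"),))[:44]
```

Any change to size, mtime, or path produces a different wark, which is what lets the scanner detect modified files without rehashing their contents.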


@@ -1,6 +1,7 @@
# coding: utf-8
from __future__ import print_function, unicode_literals
import argparse
import base64
import contextlib
import errno
@@ -167,6 +168,12 @@ except:
return struct.unpack(fmt.decode("ascii"), a)
try:
BITNESS = struct.calcsize(b"P") * 8
except:
BITNESS = struct.calcsize("P") * 8
ansi_re = re.compile("\033\\[[^mK]*[mK]")
@@ -343,6 +350,11 @@ CMD_EXEB = set(_exestr.encode("utf-8").split())
CMD_EXES = set(_exestr.split())
# mostly from https://github.com/github/gitignore/blob/main/Global/macOS.gitignore
APPLESAN_TXT = r"/(__MACOS|Icon\r\r)|/\.(_|DS_Store|AppleDouble|LSOverride|DocumentRevisions-|fseventsd|Spotlight-|TemporaryItems|Trashes|VolumeIcon\.icns|com\.apple\.timemachine\.donotpresent|AppleDB|AppleDesktop|apdisk)"
APPLESAN_RE = re.compile(APPLESAN_TXT)
pybin = sys.executable or ""
if EXE:
pybin = ""
@@ -366,11 +378,6 @@ def py_desc() -> str:
if ofs > 0:
py_ver = py_ver[:ofs]
try:
bitness = struct.calcsize(b"P") * 8
except:
bitness = struct.calcsize("P") * 8
host_os = platform.system()
compiler = platform.python_compiler().split("http")[0]
@@ -378,7 +385,7 @@ def py_desc() -> str:
os_ver = m.group(1) if m else ""
return "{:>9} v{} on {}{} {} [{}]".format(
interp, py_ver, host_os, bitness, os_ver, compiler
interp, py_ver, host_os, BITNESS, os_ver, compiler
)
@@ -416,14 +423,32 @@ try:
except:
PYFTPD_VER = "(None)"
try:
from partftpy.__init__ import __version__ as PARTFTPY_VER
except:
PARTFTPY_VER = "(None)"
VERSIONS = "copyparty v{} ({})\n{}\n sqlite v{} | jinja v{} | pyftpd v{}".format(
S_VERSION, S_BUILD_DT, py_desc(), SQLITE_VER, JINJA_VER, PYFTPD_VER
PY_DESC = py_desc()
VERSIONS = (
"copyparty v{} ({})\n{}\n sqlite {} | jinja {} | pyftpd {} | tftp {}".format(
S_VERSION, S_BUILD_DT, PY_DESC, SQLITE_VER, JINJA_VER, PYFTPD_VER, PARTFTPY_VER
)
)
_: Any = (mp, BytesIO, quote, unquote, SQLITE_VER, JINJA_VER, PYFTPD_VER)
__all__ = ["mp", "BytesIO", "quote", "unquote", "SQLITE_VER", "JINJA_VER", "PYFTPD_VER"]
_: Any = (mp, BytesIO, quote, unquote, SQLITE_VER, JINJA_VER, PYFTPD_VER, PARTFTPY_VER)
__all__ = [
"mp",
"BytesIO",
"quote",
"unquote",
"SQLITE_VER",
"JINJA_VER",
"PYFTPD_VER",
"PARTFTPY_VER",
]
class Daemon(threading.Thread):
@@ -527,6 +552,8 @@ class HLog(logging.Handler):
elif record.name.startswith("impacket"):
if self.ptn_smb_ign.match(msg):
return
elif record.name.startswith("partftpy."):
record.name = record.name[9:]
self.log_func(record.name[-21:], msg, c)
@@ -623,9 +650,14 @@ class _Unrecv(object):
while nbytes > len(ret):
ret += self.recv(nbytes - len(ret))
except OSError:
t = "client only sent {} of {} expected bytes".format(len(ret), nbytes)
if len(ret) <= 16:
t += "; got {!r}".format(ret)
t = "client stopped sending data; expected at least %d more bytes"
if not ret:
t = t % (nbytes,)
else:
t += ", only got %d"
t = t % (nbytes, len(ret))
if len(ret) <= 16:
t += "; %r" % (ret,)
if raise_on_trunc:
raise UnrecvEOF(5, t)
@@ -775,16 +807,20 @@ class ProgressPrinter(threading.Thread):
periodically print progress info without linefeeds
"""
def __init__(self) -> None:
def __init__(self, log: "NamedLogger", args: argparse.Namespace) -> None:
threading.Thread.__init__(self, name="pp")
self.daemon = True
self.log = log
self.args = args
self.msg = ""
self.end = False
self.n = -1
self.start()
def run(self) -> None:
tp = 0
msg = None
no_stdout = self.args.q
fmt = " {}\033[K\r" if VT100 else " {} $\r"
while not self.end:
time.sleep(0.1)
@@ -792,10 +828,21 @@ class ProgressPrinter(threading.Thread):
continue
msg = self.msg
now = time.time()
if msg and now - tp > 10:
tp = now
self.log("progress: %s" % (msg,), 6)
if no_stdout:
continue
uprint(fmt.format(msg))
if PY2:
sys.stdout.flush()
if no_stdout:
return
if VT100:
print("\033[K", end="")
elif msg:
@@ -849,7 +896,7 @@ class MTHash(object):
ex = ex or str(qe)
if pp:
mb = int((fsz - nch * chunksz) / 1024 / 1024)
mb = (fsz - nch * chunksz) // (1024 * 1024)
pp.msg = prefix + str(mb) + suffix
if ex:
@@ -1071,7 +1118,7 @@ def uprint(msg: str) -> None:
def nuprint(msg: str) -> None:
uprint("{}\n".format(msg))
uprint("%s\n" % (msg,))
def dedent(txt: str) -> str:
@@ -1094,10 +1141,10 @@ def rice_tid() -> str:
def trace(*args: Any, **kwargs: Any) -> None:
t = time.time()
stack = "".join(
"\033[36m{}\033[33m{}".format(x[0].split(os.sep)[-1][:-3], x[1])
"\033[36m%s\033[33m%s" % (x[0].split(os.sep)[-1][:-3], x[1])
for x in traceback.extract_stack()[3:-1]
)
parts = ["{:.6f}".format(t), rice_tid(), stack]
parts = ["%.6f" % (t,), rice_tid(), stack]
if args:
parts.append(repr(args))
@@ -1114,17 +1161,17 @@ def alltrace() -> str:
threads: dict[str, types.FrameType] = {}
names = dict([(t.ident, t.name) for t in threading.enumerate()])
for tid, stack in sys._current_frames().items():
name = "{} ({:x})".format(names.get(tid), tid)
name = "%s (%x)" % (names.get(tid), tid)
threads[name] = stack
rret: list[str] = []
bret: list[str] = []
for name, stack in sorted(threads.items()):
ret = ["\n\n# {}".format(name)]
ret = ["\n\n# %s" % (name,)]
pad = None
for fn, lno, name, line in traceback.extract_stack(stack):
fn = os.sep.join(fn.split(os.sep)[-3:])
ret.append('File: "{}", line {}, in {}'.format(fn, lno, name))
ret.append('File: "%s", line %d, in %s' % (fn, lno, name))
if line:
ret.append(" " + str(line.strip()))
if "self.not_empty.wait()" in line:
@@ -1133,7 +1180,7 @@ def alltrace() -> str:
if pad:
bret += [ret[0]] + [pad + x for x in ret[1:]]
else:
rret += ret
rret.extend(ret)
return "\n".join(rret + bret) + "\n"
@@ -1216,12 +1263,20 @@ def log_thrs(log: Callable[[str, str, int], None], ival: float, name: str) -> No
def vol_san(vols: list["VFS"], txt: bytes) -> bytes:
txt0 = txt
for vol in vols:
txt = txt.replace(vol.realpath.encode("utf-8"), vol.vpath.encode("utf-8"))
txt = txt.replace(
vol.realpath.encode("utf-8").replace(b"\\", b"\\\\"),
vol.vpath.encode("utf-8"),
)
bap = vol.realpath.encode("utf-8")
bhp = vol.histpath.encode("utf-8")
bvp = vol.vpath.encode("utf-8")
bvph = b"$hist(/" + bvp + b")"
txt = txt.replace(bap, bvp)
txt = txt.replace(bhp, bvph)
txt = txt.replace(bap.replace(b"\\", b"\\\\"), bvp)
txt = txt.replace(bhp.replace(b"\\", b"\\\\"), bvph)
if txt != txt0:
txt += b"\r\nNOTE: filepaths sanitized; see serverlog for correct values"
return txt
@@ -1229,9 +1284,9 @@ def vol_san(vols: list["VFS"], txt: bytes) -> bytes:
def min_ex(max_lines: int = 8, reverse: bool = False) -> str:
et, ev, tb = sys.exc_info()
stb = traceback.extract_tb(tb)
fmt = "{} @ {} <{}>: {}"
ex = [fmt.format(fp.split(os.sep)[-1], ln, fun, txt) for fp, ln, fun, txt in stb]
ex.append("[{}] {}".format(et.__name__ if et else "(anonymous)", ev))
fmt = "%s @ %d <%s>: %s"
ex = [fmt % (fp.split(os.sep)[-1], ln, fun, txt) for fp, ln, fun, txt in stb]
ex.append("[%s] %s" % (et.__name__ if et else "(anonymous)", ev))
return "\n".join(ex[-max_lines:][:: -1 if reverse else 1])
@@ -1282,7 +1337,7 @@ def ren_open(
with fun(fsenc(fpath), *args, **kwargs) as f:
if b64:
assert fdir
fp2 = "fn-trunc.{}.txt".format(b64)
fp2 = "fn-trunc.%s.txt" % (b64,)
fp2 = os.path.join(fdir, fp2)
with open(fsenc(fp2), "wb") as f2:
f2.write(orig_name.encode("utf-8"))
@@ -1311,7 +1366,7 @@ def ren_open(
raise
if not b64:
zs = "{}\n{}".format(orig_name, suffix).encode("utf-8", "replace")
zs = ("%s\n%s" % (orig_name, suffix)).encode("utf-8", "replace")
zs = hashlib.sha512(zs).digest()[:12]
b64 = base64.urlsafe_b64encode(zs).decode("utf-8")
@@ -1331,7 +1386,7 @@ def ren_open(
# okay do the first letter then
ext = "." + ext[2:]
fname = "{}~{}{}".format(bname, b64, ext)
fname = "%s~%s%s" % (bname, b64, ext)
class MultipartParser(object):
@@ -1517,15 +1572,20 @@ class MultipartParser(object):
return ret
def parse(self) -> None:
boundary = get_boundary(self.headers)
self.log("boundary=%r" % (boundary,))
# spec says there might be junk before the first boundary,
# can't have the leading \r\n if that's not the case
self.boundary = b"--" + get_boundary(self.headers).encode("utf-8")
self.boundary = b"--" + boundary.encode("utf-8")
# discard junk before the first boundary
for junk in self._read_data():
self.log(
"discarding preamble: [{}]".format(junk.decode("utf-8", "replace"))
)
if not junk:
continue
jtxt = junk.decode("utf-8", "replace")
self.log("discarding preamble |%d| %r" % (len(junk), jtxt))
# nice, now make it fast
self.boundary = b"\r\n" + self.boundary
@@ -1537,11 +1597,9 @@ class MultipartParser(object):
raises if the field name is not as expected
"""
assert self.gen
p_field, _, p_data = next(self.gen)
p_field, p_fname, p_data = next(self.gen)
if p_field != field_name:
raise Pebkac(
422, 'expected field "{}", got "{}"'.format(field_name, p_field)
)
raise WrongPostKey(field_name, p_field, p_fname, p_data)
return self._read_value(p_data, max_len).decode("utf-8", "surrogateescape")
@@ -1610,7 +1668,7 @@ def rand_name(fdir: str, fn: str, rnd: int) -> str:
break
nc = rnd + extra
nb = int((6 + 6 * nc) / 8)
nb = (6 + 6 * nc) // 8
zb = os.urandom(nb)
zb = base64.urlsafe_b64encode(zb)
fn = zb[:nc].decode("utf-8") + ext
@@ -1713,7 +1771,7 @@ def get_spd(nbyte: int, t0: float, t: Optional[float] = None) -> str:
bps = nbyte / ((t - t0) + 0.001)
s1 = humansize(nbyte).replace(" ", "\033[33m").replace("iB", "")
s2 = humansize(bps).replace(" ", "\033[35m").replace("iB", "")
return "{} \033[0m{}/s\033[0m".format(s1, s2)
return "%s \033[0m%s/s\033[0m" % (s1, s2)
def s2hms(s: float, optional_h: bool = False) -> str:
@@ -1721,9 +1779,9 @@ def s2hms(s: float, optional_h: bool = False) -> str:
h, s = divmod(s, 3600)
m, s = divmod(s, 60)
if not h and optional_h:
return "{}:{:02}".format(m, s)
return "%d:%02d" % (m, s)
return "{}:{:02}:{:02}".format(h, m, s)
return "%d:%02d:%02d" % (h, m, s)
def djoin(*paths: str) -> str:
@@ -2065,6 +2123,47 @@ def atomic_move(usrc: str, udst: str) -> None:
os.rename(src, dst)
def wunlink(log: "NamedLogger", abspath: str, flags: dict[str, Any]) -> bool:
maxtime = flags.get("rm_re_t", 0.0)
bpath = fsenc(abspath)
if not maxtime:
os.unlink(bpath)
return True
chill = flags.get("rm_re_r", 0.0)
if chill < 0.001:
chill = 0.1
ino = 0
t0 = now = time.time()
for attempt in range(90210):
try:
if ino and os.stat(bpath).st_ino != ino:
log("inode changed; aborting delete")
return False
os.unlink(bpath)
if attempt:
now = time.time()
t = "deleted in %.2f sec, attempt %d"
log(t % (now - t0, attempt + 1))
return True
except OSError as ex:
now = time.time()
if ex.errno == errno.ENOENT:
return False
if now - t0 > maxtime or attempt == 90209:
raise
if not attempt:
if not PY2:
ino = os.stat(bpath).st_ino
t = "delete failed (err.%d); retrying for %d sec: %s"
log(t % (ex.errno, maxtime + 0.99, abspath))
time.sleep(chill)
return False # makes pylance happy
def get_df(abspath: str) -> tuple[Optional[int], Optional[int]]:
try:
# some fuses misbehave
@@ -2193,7 +2292,7 @@ def read_socket_chunked(
raise Pebkac(400, t.format(x))
if log:
log("receiving {} byte chunk".format(chunklen))
log("receiving %d byte chunk" % (chunklen,))
for chunk in read_socket(sr, chunklen):
yield chunk
@@ -2365,6 +2464,12 @@ def statdir(
print(t)
def dir_is_empty(logger: "RootLogger", scandir: bool, top: str):
for _ in statdir(logger, scandir, False, top):
return False
return True
def rmdirs(
logger: "RootLogger", scandir: bool, lstat: bool, top: str, depth: int
) -> tuple[list[str], list[str]]:
@@ -2557,6 +2662,7 @@ def runcmd(
argv: Union[list[bytes], list[str]], timeout: Optional[float] = None, **ka: Any
) -> tuple[int, str, str]:
isbytes = isinstance(argv[0], (bytes, bytearray))
oom = ka.pop("oom", 0) # 0..1000
kill = ka.pop("kill", "t") # [t]ree [m]ain [n]one
capture = ka.pop("capture", 3) # 0=none 1=stdout 2=stderr 3=both
@@ -2587,6 +2693,14 @@ def runcmd(
argv = [NICES] + argv
p = sp.Popen(argv, stdout=cout, stderr=cerr, **ka)
if oom and not ANYWIN and not MACOS:
try:
with open("/proc/%d/oom_score_adj" % (p.pid,), "wb") as f:
f.write(("%d\n" % (oom,)).encode("utf-8"))
except:
pass
if not timeout or PY2:
bout, berr = p.communicate(sin)
else:
@@ -2734,6 +2848,7 @@ def _parsehook(
sp_ka = {
"env": env,
"nice": True,
"oom": 300,
"timeout": tout,
"kill": kill,
"capture": cap,
@@ -2950,7 +3065,7 @@ def visual_length(txt: str) -> int:
pend = None
else:
if ch == "\033":
pend = "{0}".format(ch)
pend = "%s" % (ch,)
else:
co = ord(ch)
# the safe parts of latin1 and cp437 (no greek stuff)
@@ -3048,3 +3163,20 @@ class Pebkac(Exception):
def __repr__(self) -> str:
return "Pebkac({}, {})".format(self.code, repr(self.args))
class WrongPostKey(Pebkac):
def __init__(
self,
expected: str,
got: str,
fname: Optional[str],
datagen: Generator[bytes, None, None],
) -> None:
msg = 'expected field "{}", got "{}"'.format(expected, got)
super(WrongPostKey, self).__init__(422, msg)
self.expected = expected
self.got = got
self.fname = fname
self.datagen = datagen
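The new `wunlink` above retries a delete with a bounded deadline, bailing out early if the file is already gone. A minimal standalone sketch of the same retry pattern (function name and parameters here are illustrative, not part of copyparty):

```python
import errno
import os
import time


def retry_unlink(path, maxtime=2.0, chill=0.1):
    """delete path, retrying transient OSErrors until
    maxtime seconds have passed (sketch of the wunlink
    retry loop; maxtime/chill defaults are assumptions)"""
    t0 = time.time()
    while True:
        try:
            os.unlink(path)
            return True
        except OSError as ex:
            if ex.errno == errno.ENOENT:
                return False  # already gone; nothing to do
            if time.time() - t0 > maxtime:
                raise  # deadline exceeded; give up loudly
            time.sleep(chill)  # brief backoff, then retry
```

The real implementation additionally stats the file between attempts and aborts if the inode changed, so a replacement file uploaded mid-retry is not deleted by mistake.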


@@ -255,19 +255,19 @@ window.baguetteBox = (function () {
if (anymod(e, true))
return;
var k = e.code + '', v = vid(), pos = -1;
var k = (e.code || e.key) + '', v = vid(), pos = -1;
if (k == "BracketLeft")
setloop(1);
else if (k == "BracketRight")
setloop(2);
else if (e.shiftKey && k != 'KeyR')
else if (e.shiftKey && k != "KeyR" && k != "R")
return;
else if (k == "ArrowLeft" || k == "KeyJ")
else if (k == "ArrowLeft" || k == "KeyJ" || k == "Left" || k == "j")
showPreviousImage();
else if (k == "ArrowRight" || k == "KeyL")
else if (k == "ArrowRight" || k == "KeyL" || k == "Right" || k == "l")
showNextImage();
else if (k == "Escape")
else if (k == "Escape" || k == "Esc")
hideOverlay();
else if (k == "Home")
showFirstImage(e);
@@ -295,9 +295,9 @@ window.baguetteBox = (function () {
}
else if (k == "KeyF")
tglfull();
else if (k == "KeyS")
else if (k == "KeyS" || k == "s")
tglsel();
else if (k == "KeyR")
else if (k == "KeyR" || k == "r" || k == "R")
rotn(e.shiftKey ? -1 : 1);
else if (k == "KeyY")
dlpic();
@@ -615,7 +615,43 @@ window.baguetteBox = (function () {
documentLastFocus && documentLastFocus.focus();
isOverlayVisible = false;
}, 500);
unvid();
unfig();
}, 250);
}
function unvid(keep) {
var vids = QSA('#bbox-overlay video');
for (var a = vids.length - 1; a >= 0; a--) {
var v = vids[a];
if (v == keep)
continue;
v.src = '';
v.load();
var p = v.parentNode;
p.removeChild(v);
p.parentNode.removeChild(p);
}
}
function unfig(keep) {
var figs = QSA('#bbox-overlay figure'),
npre = options.preload || 0,
k = [];
if (keep === undefined)
keep = -9;
for (var a = keep - npre; a <= keep + npre; a++)
k.push('bbox-figure-' + a);
for (var a = figs.length - 1; a >= 0; a--) {
var f = figs[a];
if (!has(k, f.getAttribute('id')))
f.parentNode.removeChild(f);
}
}
function loadImage(index, callback) {
@@ -708,6 +744,7 @@ window.baguetteBox = (function () {
}
function show(index, gallery) {
gallery = gallery || currentGallery;
if (!isOverlayVisible && index >= 0 && index < gallery.length) {
prepareOverlay(gallery, options);
showOverlay(index);
@@ -720,12 +757,10 @@ window.baguetteBox = (function () {
if (index >= imagesElements.length)
return bounceAnimation('right');
var v = vid();
if (v) {
v.src = '';
v.load();
v.parentNode.removeChild(v);
try {
vid().pause();
}
catch (ex) { }
currentIndex = index;
loadImage(currentIndex, function () {
@@ -734,6 +769,15 @@ window.baguetteBox = (function () {
});
updateOffset();
if (options.animation == 'none')
unvid(vid());
else
setTimeout(function () {
unvid(vid());
}, 100);
unfig(index);
if (options.onChange)
options.onChange(currentIndex, imagesElements.length);


@@ -818,6 +818,10 @@ html.y #path a:hover {
.logue:empty {
display: none;
}
.logue.raw {
white-space: pre;
font-family: 'scp', 'consolas', monospace;
}
#doc>iframe,
.logue>iframe {
background: var(--bgg);
@@ -1653,7 +1657,9 @@ html.cz .tgl.btn.on {
color: var(--fg-max);
}
#tree ul a.hl {
color: #fff;
color: var(--btn-1-fg);
background: #000;
background: var(--btn-1-bg);
text-shadow: none;
}
@@ -1769,6 +1775,7 @@ html.y #tree.nowrap .ntree a+a:hover {
padding: 0;
}
#thumbs,
#au_prescan,
#au_fullpre,
#au_os_seek,
#au_osd_cv,
@@ -1776,7 +1783,8 @@ html.y #tree.nowrap .ntree a+a:hover {
opacity: .3;
}
#griden.on+#thumbs,
#au_preload.on+#au_fullpre,
#au_preload.on+#au_prescan,
#au_preload.on+#au_prescan+#au_fullpre,
#au_os_ctl.on+#au_os_seek,
#au_os_ctl.on+#au_os_seek+#au_osd_cv,
#u2turbo.on+#u2tdate {
@@ -2174,6 +2182,7 @@ html.y #bbox-overlay figcaption a {
}
#bbox-halp {
color: var(--fg-max);
background: #fff;
background: var(--bg);
position: absolute;
top: 0;


@@ -148,7 +148,8 @@
logues = {{ logues|tojson if sb_lg else "[]" }},
ls0 = {{ ls0|tojson }};
document.documentElement.className = localStorage.cpp_thm || dtheme;
var STG = window.localStorage;
document.documentElement.className = (STG && STG.cpp_thm) || dtheme;
</script>
<script src="{{ r }}/.cpr/util.js?_={{ ts }}"></script>
<script src="{{ r }}/.cpr/baguettebox.js?_={{ ts }}"></script>


@@ -200,6 +200,8 @@ var Ls = {
"ct_idxh": "show index.html instead of folder listing",
"ct_sbars": "show scrollbars",
"cut_umod": "if a file already exists on the server, update the server's last-modified timestamp to match your local file (requires write+delete permissions)",
"cut_turbo": "the yolo button, you probably DO NOT want to enable this:$N$Nuse this if you were uploading a huge amount of files and had to restart for some reason, and want to continue the upload ASAP$N$Nthis replaces the hash-check with a simple <em>&quot;does this have the same filesize on the server?&quot;</em> so if the file contents are different it will NOT be uploaded$N$Nyou should turn this off when the upload is done, and then &quot;upload&quot; the same files again to let the client verify them",
"cut_datechk": "has no effect unless the turbo button is enabled$N$Nreduces the yolo factor by a tiny amount; checks whether the file timestamps on the server matches yours$N$Nshould <em>theoretically</em> catch most unfinished / corrupted uploads, but is not a substitute for doing a verification pass with turbo disabled afterwards",
@@ -229,13 +231,16 @@ var Ls = {
"tt_wrap": "word wrap",
"tt_hover": "reveal overflowing lines on hover$N( breaks scrolling unless mouse $N&nbsp; cursor is in the left gutter )",
"ml_pmode": "playback mode",
"ml_pmode": "at end of folder...",
"ml_btns": "cmds",
"ml_tcode": "transcode",
"ml_tint": "tint",
"ml_eq": "audio equalizer",
"ml_drc": "dynamic range compressor",
"mt_shuf": "shuffle the songs in each folder\">🔀",
"mt_preload": "start loading the next song near the end for gapless playback\">preload",
"mt_prescan": "go to the next folder before the last song$Nends, keeping the webbrowser happy$Nso it doesn't stop the playback\">nav",
"mt_fullpre": "try to preload the entire song;$N✅ enable on <b>unreliable</b> connections,$N❌ <b>disable</b> on slow connections probably\">full",
"mt_waves": "waveform seekbar:$Nshow audio amplitude in the scrubber\">~s",
"mt_npclip": "show buttons for clipboarding the currently playing song\">/np",
@@ -244,6 +249,7 @@ var Ls = {
"mt_oscv": "show album cover in osd\">art",
"mt_follow": "keep the playing track scrolled into view\">🎯",
"mt_compact": "compact controls\">⟎",
"mt_uncache": "clear cache &nbsp;(try this if your browser cached$Na broken copy of a song so it refuses to play)\">uncache",
"mt_mloop": "loop the open folder\">🔁 loop",
"mt_mnext": "load the next folder and continue\">📂 next",
"mt_cflac": "convert flac / wav to opus\">flac",
@@ -267,6 +273,9 @@ var Ls = {
"mm_e403": "Could not play audio; error 403: Access denied.\n\nTry pressing F5 to reload, maybe you got logged out",
"mm_e5xx": "Could not play audio; server error ",
"mm_nof": "not finding any more audio files nearby",
"mm_prescan": "Looking for music to play next...",
"mm_scank": "Found the next song:",
"mm_uncache": "cache cleared; all songs will redownload on next playback",
"mm_pwrsv": "<p>it looks like playback is being interrupted by your phone's power-saving settings!</p>" + '<p>please go to <a target="_blank" href="https://user-images.githubusercontent.com/241032/235262121-2ffc51ae-7821-4310-a322-c3b7a507890c.png">the app settings of your browser</a> and then <a target="_blank" href="https://user-images.githubusercontent.com/241032/235262123-c328cca9-3930-4948-bd18-3949b9fd3fcf.png">allow unrestricted battery usage</a> to fix it.</p><p><em>however,</em> it could also be due to the browser\'s autoplay settings;</p><p>Firefox: tap the icon on the left side of the address bar, then select "autoplay" and "allow audio"</p><p>Chrome: the problem will gradually dissipate as you play more music on this site</p>',
"mm_iosblk": "<p>your web browser thinks the audio playback is unwanted, and it decided to block playback until you start another track manually... unfortunately we are both powerless in telling it otherwise</p><p>supposedly this will get better as you continue playing music on this site, but I'm unfamiliar with apple devices so idk if that's true</p><p>you could try another browser, maybe firefox or chrome?</p>",
"mm_hnf": "that song no longer exists",
@@ -295,6 +304,8 @@ var Ls = {
"frb_apply": "APPLY RENAME",
"fr_adv": "batch / metadata / pattern renaming\">advanced",
"fr_case": "case-sensitive regex\">case",
"fr_win": "windows-safe names; replace <code>&lt;&gt;:&quot;\\|?*</code> with japanese fullwidth characters\">win",
"fr_slash": "replace <code>/</code> with a character that doesn't cause new folders to be created\">no /",
"fr_pdel": "delete",
"fr_pnew": "save as",
"fr_pname": "provide a name for your new preset",
@@ -304,6 +315,7 @@ var Ls = {
"fr_tags": "tags for the selected files (read-only, just for reference):",
"fr_busy": "renaming {0} items...\n\n{1}",
"fr_efail": "rename failed:\n",
"fr_nchg": "{0} of the new names were altered due to <code>win</code> and/or <code>ikke /</code>\n\nOK to continue with these altered new names?",
"fd_ok": "delete OK",
"fd_err": "delete failed:\n",
@@ -323,6 +335,8 @@ var Ls = {
"fp_confirm": "move these {0} items here?",
"fp_etab": 'failed to read clipboard from other browser tab',
"mk_noname": "type a name into the text field on the left before you do that :p",
"tv_load": "Loading text document:\n\n{0}\n\n{1}% ({2} of {3} MiB loaded)",
"tv_xe1": "could not load textfile:\n\nerror ",
"tv_xe2": "404, file not found",
@@ -369,7 +383,7 @@ var Ls = {
"s_t1": "tags contains &nbsp; (^=start, end=$)",
"s_a1": "specific metadata properties",
"md_eshow": "cannot show ",
"md_eshow": "cannot render ",
"md_off": "[📜<em>readme</em>] disabled in [⚙️] -- document hidden",
"xhr403": "403: Access denied\n\ntry pressing F5, maybe you got logged out",
@@ -454,6 +468,7 @@ var Ls = {
"u_emtleakf": 'try the following:\n<ul><li>hit <code>F5</code> to refresh the page</li><li>then enable <code>🥔</code> (potato) in the upload UI<li>and try that upload again</li></ul>\nPS: firefox <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1790500">will hopefully have a bugfix</a> at some point',
"u_s404": "not found on server",
"u_expl": "explain",
"u_maxconn": "most browsers limit this to 6, but firefox lets you raise it with <code>connections-per-server</code> in <code>about:config</code>",
"u_tu": '<p class="warn">WARNING: turbo enabled, <span>&nbsp;client may not detect and resume incomplete uploads; see turbo-button tooltip</span></p>',
"u_ts": '<p class="warn">WARNING: turbo enabled, <span>&nbsp;search results can be incorrect; see turbo-button tooltip</span></p>',
"u_turbo_c": "turbo is disabled in server config",
@@ -680,6 +695,8 @@ var Ls = {
"ct_idxh": "vis index.html istedenfor fil-liste",
"ct_sbars": "vis rullgardiner / skrollefelt",
"cut_umod": "i tilfelle en fil du laster opp allerede finnes på serveren, så skal serverens tidsstempel oppdateres slik at det stemmer overens med din lokale fil (krever rettighetene write+delete)",
"cut_turbo": "forenklet befaring ved opplastning; bør sannsynlig <em>ikke</em> skrus på:$N$Nnyttig dersom du var midt i en svær opplastning som måtte restartes av en eller annen grunn, og du vil komme igang igjen så raskt som overhodet mulig.$N$Nnår denne er skrudd på så forenkles befaringen kraftig; istedenfor å utføre en trygg sjekk på om filene finnes på serveren i god stand, så sjekkes kun om <em>filstørrelsen</em> stemmer. Så dersom en korrupt fil skulle befinne seg på serveren allerede, på samme sted med samme størrelse og navn, så blir det <em>ikke oppdaget</em>.$N$Ndet anbefales å kun benytte denne funksjonen for å komme seg raskt igjennom selve opplastningen, for så å skru den av, og til slutt &quot;laste opp&quot; de samme filene én gang til -- slik at integriteten kan verifiseres",
"cut_datechk": "har ingen effekt dersom turbo er avslått$N$Ngjør turbo bittelitt tryggere ved å sjekke datostemplingen på filene (i tillegg til filstørrelse)$N$N<em>burde</em> oppdage og gjenoppta de fleste ufullstendige opplastninger, men er <em>ikke</em> en fullverdig erstatning for å deaktivere turbo og gjøre en skikkelig sjekk",
@@ -709,13 +726,16 @@ var Ls = {
"tt_wrap": "linjebryting",
"tt_hover": "vis hele mappenavnet når musepekeren treffer mappen$N( gjør dessverre at scrollhjulet fusker dersom musepekeren ikke befinner seg i grøfta )",
"ml_pmode": "spillemodus",
"ml_pmode": "ved enden av mappen",
"ml_btns": "knapper",
"ml_tcode": "konvertering",
"ml_tint": "tint",
"ml_eq": "audio equalizer (tonejustering)",
"ml_drc": "compressor (volum-utjevning)",
"mt_shuf": "sangene i hver mappe$Nspilles i tilfeldig rekkefølge\">🔀",
"mt_preload": "hent ned litt av neste sang i forkant,$Nslik at pausen i overgangen blir mindre\">forles",
"mt_prescan": "ved behov, bla til neste mappe$Nslik at nettleseren lar oss$Nfortsette å spille musikk\">bla",
"mt_fullpre": "hent ned hele neste sang, ikke bare litt:$N✅ skru på hvis nettet ditt er <b>ustabilt</b>,$N❌ skru av hvis nettet ditt er <b>tregt</b>\">full",
"mt_waves": "waveform seekbar:$Nvis volumkurve i avspillingsfeltet\">~s",
"mt_npclip": "vis knapper for å kopiere info om sangen du hører på\">/np",
@@ -724,6 +744,7 @@ var Ls = {
"mt_oscv": "vis album-cover på infoskjermen\">bilde",
"mt_follow": "bla slik at sangen som spilles alltid er synlig\">🎯",
"mt_compact": "tettpakket avspillerpanel\">⟎",
"mt_uncache": "prøv denne hvis en sang ikke spiller riktig\">uncache",
"mt_mloop": "repeter hele mappen\">🔁 gjenta",
"mt_mnext": "hopp til neste mappe og fortsett\">📂 neste",
"mt_cflac": "konverter flac / wav-filer til opus\">flac",
@@ -747,6 +768,9 @@ var Ls = {
"mm_e403": "Avspilling feilet: Tilgang nektet.\n\nKanskje du ble logget ut?\nPrøv å trykk F5 for å laste siden på nytt.",
"mm_e5xx": "Avspilling feilet: ",
"mm_nof": "finner ikke flere sanger i nærheten",
"mm_prescan": "Leter etter neste sang...",
"mm_scank": "Fant neste sang:",
"mm_uncache": "alle sanger vil lastes på nytt ved neste avspilling",
"mm_pwrsv": "<p>det ser ut som musikken ble avbrutt av telefonen sine strømsparings-innstillinger!</p>" + '<p>ta en tur innom <a target="_blank" href="https://user-images.githubusercontent.com/241032/235262121-2ffc51ae-7821-4310-a322-c3b7a507890c.png">app-innstillingene til nettleseren din</a> og så <a target="_blank" href="https://user-images.githubusercontent.com/241032/235262123-c328cca9-3930-4948-bd18-3949b9fd3fcf.png">tillat ubegrenset batteriforbruk</a></p><p>NB: det kan også være pga. autoplay-innstillingene, så prøv dette:</p><p>Firefox: klikk på ikonet i venstre side av addressefeltet, velg "autoplay" og "tillat lyd"</p><p>Chrome: problemet vil minske gradvis jo mer musikk du spiller på denne siden</p>',
"mm_iosblk": "<p>nettleseren din tror at musikken er uønsket, og den bestemte seg for å stoppe avspillingen slik at du manuelt må velge en ny sang... dessverre er både du og jeg maktesløse når den har bestemt seg.</p><p>det ryktes at problemet vil minske jo mer musikk du spiller på denne siden, men jeg er ikke godt kjent med apple-dingser så jeg er ikke sikker.</p><p>kanskje firefox eller chrome fungerer bedre?</p>",
"mm_hnf": "sangen finnes ikke lenger",
@@ -775,6 +799,8 @@ var Ls = {
"frb_apply": "IVERKSETT",
"fr_adv": "automasjon basert på metadata<br>og / eller mønster (regulære uttrykk)\">avansert",
"fr_case": "versalfølsomme uttrykk\">Aa",
"fr_win": "bytt ut bokstavene <code>&lt;&gt;:&quot;\\|?*</code> med$Ntilsvarende som windows ikke får panikk av\">win",
"fr_slash": "bytt ut bokstaven <code>/</code> slik at den ikke forårsaker at nye mapper opprettes\">ikke /",
"fr_pdel": "slett",
"fr_pnew": "lagre som",
"fr_pname": "gi innstillingene dine et navn",
@@ -784,6 +810,7 @@ var Ls = {
"fr_tags": "metadata for de valgte filene (kun for referanse):",
"fr_busy": "endrer navn på {0} filer...\n\n{1}",
"fr_efail": "endring av navn feilet:\n",
"fr_nchg": "{0} av navnene ble justert pga. <code>win</code> og/eller <code>ikke /</code>\n\nvil du fortsette med de nye navnene som ble valgt?",
"fd_ok": "sletting OK",
"fd_err": "sletting feilet:\n",
@@ -803,6 +830,8 @@ var Ls = {
"fp_confirm": "flytt disse {0} filene hit?",
"fp_etab": 'kunne ikke lese listen med filer ifra den andre nettleserfanen',
"mk_noname": "skriv inn et navn i tekstboksen til venstre først :p",
"tv_load": "Laster inn tekstfil:\n\n{0}\n\n{1}% ({2} av {3} MiB lastet ned)",
"tv_xe1": "kunne ikke laste tekstfil:\n\nfeil ",
"tv_xe2": "404, Fil ikke funnet",
@@ -849,7 +878,7 @@ var Ls = {
"s_t1": "sang-info inneholder",
"s_a1": "konkrete egenskaper",
"md_eshow": "kan ikke vise ",
"md_eshow": "viser forenklet ",
"md_off": "[📜<em>readme</em>] er avskrudd i [⚙️] -- dokument skjult",
"xhr403": "403: Tilgang nektet\n\nkanskje du ble logget ut? prøv å trykk F5",
@@ -934,6 +963,7 @@ var Ls = {
"u_emtleakf": 'prøver følgende:\n<ul><li>trykk F5 for å laste siden på nytt</li><li>så skru på <code>🥔</code> ("enkelt UI") i opplasteren</li><li>og forsøk den samme opplastningen igjen</li></ul>\nPS: Firefox <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1790500">fikser forhåpentligvis feilen</a> en eller annen gang',
"u_s404": "ikke funnet på serveren",
"u_expl": "forklar",
"u_maxconn": "de fleste nettlesere tillater ikke mer enn 6, men firefox lar deg øke grensen med <code>connections-per-server</code> in <code>about:config</code>",
"u_tu": '<p class="warn">ADVARSEL: turbo er på, <span>&nbsp;avbrutte opplastninger vil muligens ikke oppdages og gjenopptas; hold musepekeren over turbo-knappen for mer info</span></p>',
"u_ts": '<p class="warn">ADVARSEL: turbo er på, <span>&nbsp;søkeresultater kan være feil; hold musepekeren over turbo-knappen for mer info</span></p>',
"u_turbo_c": "turbo er deaktivert i serverkonfigurasjonen",
@@ -1177,6 +1207,7 @@ ebi('op_cfg').innerHTML = (
' <h3>' + L.cl_uopts + '</h3>\n' +
' <div>\n' +
' <a id="ask_up" class="tgl btn" href="#" tt="' + L.ut_ask + '">💭</a>\n' +
' <a id="umod" class="tgl btn" href="#" tt="' + L.cut_umod + '">re📅</a>\n' +
' <a id="hashw" class="tgl btn" href="#" tt="' + L.cut_mt + '">mt</a>\n' +
' <a id="u2turbo" class="tgl btn ttb" href="#" tt="' + L.cut_turbo + '">turbo</a>\n' +
' <a id="u2tdate" class="tgl btn ttb" href="#" tt="' + L.cut_datechk + '">date-chk</a>\n' +
@@ -1353,6 +1384,7 @@ function set_files_html(html) {
// actx breaks background album playback on ios
var ACtx = !IPHONE && (window.AudioContext || window.webkitAudioContext),
ACB = sread('au_cbv') || 1,
noih = /[?&]v\b/.exec('' + location),
hash0 = location.hash,
mp;
@@ -1363,7 +1395,9 @@ var mpl = (function () {
ebi('op_player').innerHTML = (
'<div><h3>' + L.cl_opts + '</h3><div>' +
'<a href="#" class="tgl btn" id="au_shuf" tt="' + L.mt_shuf + '</a>' +
'<a href="#" class="tgl btn" id="au_preload" tt="' + L.mt_preload + '</a>' +
'<a href="#" class="tgl btn" id="au_prescan" tt="' + L.mt_prescan + '</a>' +
'<a href="#" class="tgl btn" id="au_fullpre" tt="' + L.mt_fullpre + '</a>' +
'<a href="#" class="tgl btn" id="au_waves" tt="' + L.mt_waves + '</a>' +
'<a href="#" class="tgl btn" id="au_npclip" tt="' + L.mt_npclip + '</a>' +
@@ -1374,6 +1408,10 @@ var mpl = (function () {
'<a href="#" class="tgl btn" id="au_compact" tt="' + L.mt_compact + '</a>' +
'</div></div>' +
'<div><h3>' + L.ml_btns + '</h3><div>' +
'<a href="#" class="btn" id="au_uncache" tt="' + L.mt_uncache + '</a>' +
'</div></div>' +
'<div><h3>' + L.ml_pmode + '</h3><div id="pb_mode">' +
'<a href="#" class="tgl btn" m="loop" tt="' + L.mt_mloop + '</a>' +
'<a href="#" class="tgl btn" m="next" tt="' + L.mt_mnext + '</a>' +
@@ -1400,7 +1438,11 @@ var mpl = (function () {
"os_ctl": bcfg_get('au_os_ctl', have_mctl) && have_mctl,
'traversals': 0,
};
bcfg_bind(r, 'shuf', 'au_shuf', false, function () {
mp.read_order(); // don't bind
});
bcfg_bind(r, 'preload', 'au_preload', true);
bcfg_bind(r, 'prescan', 'au_prescan', true);
bcfg_bind(r, 'fullpre', 'au_fullpre', false);
bcfg_bind(r, 'waves', 'au_waves', true, function (v) {
if (!v) pbar.unwave();
@@ -1422,6 +1464,14 @@ var mpl = (function () {
r.fullpre = false;
}
ebi('au_uncache').onclick = function (e) {
ev(e);
ACB = (Date.now() % 46656).toString(36);
swrite('au_cbv', ACB);
reload_mp();
toast.inf(5, L.mm_uncache);
};
ebi('au_os_ctl').onclick = function (e) {
ev(e);
r.os_ctl = !r.os_ctl && have_mctl;
@@ -1538,8 +1588,10 @@ var mpl = (function () {
ebi('np_title').textContent = np.title || '';
ebi('np_dur').textContent = np['.dur'] || '';
ebi('np_url').textContent = get_vpath() + np.file.split('?')[0];
if (!MOBILE)
ebi('np_img').setAttribute('src', cover || '');
if (!MOBILE && cover)
ebi('np_img').setAttribute('src', cover);
else
ebi('np_img').removeAttribute('src');
navigator.mediaSession.metadata = new MediaMetadata(tags);
navigator.mediaSession.setActionHandler('play', mplay);
@@ -1644,6 +1696,20 @@ function MPlayer() {
r.au.volume = r.expvol(r.vol);
};
r.shuffle = function () {
if (!mpl.shuf)
return;
// durstenfeld
for (var a = r.order.length - 1; a > 0; a--) {
var b = Math.floor(Math.random() * (a + 1)),
c = r.order[a];
r.order[a] = r.order[b];
r.order[b] = c;
}
};
r.shuffle();
r.read_order = function () {
var order = [],
links = QSA('#files>tbody>tr>td:nth-child(1)>a');
@@ -1656,6 +1722,7 @@ function MPlayer() {
order.push(tid.slice(1));
}
r.order = order;
r.shuffle();
};
r.fdir = 0;
@@ -1711,10 +1778,12 @@ function MPlayer() {
}
r.preload = function (url, full) {
var t0 = Date.now(),
fname = uricom_dec(url.split('/').pop());
url = mpl.acode(url);
url += (url.indexOf('?') < 0 ? '?' : '&') + 'cache=987';
url += (url.indexOf('?') < 0 ? '?' : '&') + 'cache=987&_=' + ACB;
mpl.preload_url = full ? url : null;
var t0 = Date.now();
if (mpl.waves)
fetch(url.replace(/\bth=opus&/, '') + '&th=p').then(function (x) {
@@ -1748,6 +1817,12 @@ function MPlayer() {
r.au2.onloadeddata = r.au2.onloadedmetadata = r.nopause;
r.au2.preload = "auto";
r.au2.src = r.au2.rsrc = url;
if (mpl.prescan_evp) {
mpl.prescan_evp = null;
toast.ok(7, L.mm_scank + "\n" + esc(fname));
}
console.log("preloading " + fname);
};
r.nopause = function () {
@@ -1756,19 +1831,21 @@ function MPlayer() {
}
function ft2dict(tr) {
function ft2dict(tr, skip) {
var th = ebi('files').tHead.rows[0].cells,
rv = [],
rh = [],
ra = [],
rt = {};
skip = skip || {};
for (var a = 1, aa = th.length; a < aa; a++) {
var tv = tr.cells[a].textContent,
tk = a == 1 ? 'file' : th[a].getAttribute('name').split('/').pop().toLowerCase(),
vis = th[a].className.indexOf('min') === -1;
if (!tv)
if (!tv || skip[tk])
continue;
(vis ? rv : rh).push(tk);
@@ -1781,7 +1858,7 @@ function ft2dict(tr) {
function get_np() {
var tr = QS('#files tr.play');
return ft2dict(tr);
return ft2dict(tr, { 'up_ip': 1 });
};
@@ -1842,7 +1919,7 @@ var widget = (function () {
np = npr[0];
for (var a = 0; a < npk.length; a++)
m += (npk[a] == 'file' ? '' : npk[a]) + '(' + cv + np[npk[a]] + ck + ') // ';
m += (npk[a] == 'file' ? '' : npk[a]).replace(/^\./, '') + '(' + cv + np[npk[a]] + ck + ') // ';
m += '[' + cv + s2ms(mp.au.currentTime) + ck + '/' + cv + s2ms(mp.au.duration) + ck + ']';
@@ -2303,7 +2380,7 @@ function dl_song() {
}
var url = mp.tracks[mp.au.tid];
url += (url.indexOf('?') < 0 ? '?' : '&') + 'cache=987';
url += (url.indexOf('?') < 0 ? '?' : '&') + 'cache=987&_=' + ACB;
dl_file(url);
}
@@ -2392,6 +2469,10 @@ var mpui = (function () {
timer.add(updater_impl, true);
};
function repreload() {
preloaded = fpreloaded = null;
}
function updater_impl() {
if (!mp.au) {
widget.paused(true);
@@ -2453,7 +2534,26 @@ var mpui = (function () {
if (full !== null)
try {
mp.preload(mp.tracks[mp.order[mp.order.indexOf(mp.au.tid) + 1]], full);
var oi = mp.order.indexOf(mp.au.tid) + 1,
evp = get_evpath();
if (mpl.pb_mode == 'loop' || mp.au.evp != evp)
oi = 0;
if (oi >= mp.order.length) {
if (!mpl.prescan)
throw "prescan disabled";
if (mpl.prescan_evp == evp)
throw "evp match";
mpl.prescan_evp = evp;
toast.inf(10, L.mm_prescan);
treectl.ls_cb = repreload;
tree_neigh(1);
}
else
mp.preload(mp.tracks[mp.order[oi]], full);
}
catch (ex) {
console.log("preload failed", ex);
@@ -2953,7 +3053,7 @@ function play(tid, is_ev, seek) {
}
var url = mpl.acode(mp.tracks[tid]);
url += (url.indexOf('?') < 0 ? '?' : '&') + 'cache=987';
url += (url.indexOf('?') < 0 ? '?' : '&') + 'cache=987&_=' + ACB;
if (mp.au.rsrc == url)
mp.au.currentTime = 0;
@@ -2980,6 +3080,7 @@ function play(tid, is_ev, seek) {
}, 500);
mp.au.tid = tid;
mp.au.evp = get_evpath();
mp.au.volume = mp.expvol(mp.vol);
var trs = QSA('#files tr.play');
for (var a = 0, aa = trs.length; a < aa; a++)
@@ -3359,8 +3460,8 @@ function sortfiles(nodes) {
}
catch (ex) {
console.log("failed to apply sort config: " + ex);
console.log("resetting fsort " + sread('fsort'))
localStorage.removeItem('fsort');
console.log("resetting fsort " + sread('fsort'));
sdrop('fsort');
}
return nodes;
}
@@ -3605,6 +3706,8 @@ var fileman = (function () {
'<button id="rn_apply">✅ ' + L.frb_apply + '</button>',
'<a id="rn_adv" class="tgl btn" href="#" tt="' + L.fr_adv + '</a>',
'<a id="rn_case" class="tgl btn" href="#" tt="' + L.fr_case + '</a>',
'<a id="rn_win" class="tgl btn" href="#" tt="' + L.fr_win + '</a>',
'<a id="rn_slash" class="tgl btn" href="#" tt="' + L.fr_slash + '</a>',
'</div>',
'<div id="rn_vadv"><table>',
'<tr><td>regex</td><td><input type="text" id="rn_re" ' + NOAC + ' tt="regex search pattern to apply to original filenames; capturing groups can be referenced in the format field below like &lt;code&gt;(1)&lt;/code&gt; and &lt;code&gt;(2)&lt;/code&gt; and so on" placeholder="^[0-9]+[\\. ]+(.*) - (.*)" /></td></tr>',
@@ -3656,7 +3759,8 @@ var fileman = (function () {
(function (a) {
f[a].inew.onkeydown = function (e) {
rn_ok(a, true);
if (e.key.endsWith('Enter'))
var kc = (e.code || e.key) + '';
if (kc.endsWith('Enter'))
return rn_apply();
};
QS('.rn_dec' + k).onclick = function (e) {
@@ -3677,6 +3781,8 @@ var fileman = (function () {
}
bcfg_bind(r, 'adv', 'rn_adv', false, sadv);
bcfg_bind(r, 'cs', 'rn_case', false);
bcfg_bind(r, 'win', 'rn_win', true);
bcfg_bind(r, 'slash', 'rn_slash', true);
sadv();
function rn_ok(n, ok) {
@@ -3748,10 +3854,12 @@ var fileman = (function () {
spresets();
ire.onkeydown = ifmt.onkeydown = function (e) {
if (e.key == 'Escape')
var k = (e.code || e.key) + '';
if (k == 'Escape' || k == 'Esc')
return rn_cancel();
if (e.key.endsWith('Enter'))
if (k.endsWith('Enter'))
return rn_apply();
};
@@ -3792,6 +3900,24 @@ var fileman = (function () {
function rn_apply(e) {
ev(e);
if (r.win || r.slash) {
var changed = 0;
for (var a = 0; a < f.length; a++) {
var ov = f[a].inew.value,
nv = namesan(ov, r.win, r.slash);
if (ov != nv) {
f[a].inew.value = nv;
changed++;
}
}
if (changed)
return modal.confirm(L.fr_nchg.format(changed), rn_apply_loop, null);
}
rn_apply_loop();
}
function rn_apply_loop() {
while (f.length && (!f[0].ok || f[0].ofn == f[0].inew.value))
f.shift();
@@ -3812,7 +3938,7 @@ var fileman = (function () {
}
f.shift().inew.value = '( OK )';
return rn_apply();
return rn_apply_loop();
}
var xhr = new XHR();
@@ -4703,7 +4829,7 @@ var thegrid = (function () {
}
ihref = SR + '/.cpr/ico/' + ext;
}
ihref += (ihref.indexOf('?') > 0 ? '&' : '?') + 'cache=i';
ihref += (ihref.indexOf('?') > 0 ? '&' : '?') + 'cache=i&_=' + ACB;
html.push('<a href="' + ohref + '" ref="' + ref +
'"' + ac + ' ttt="' + esc(name) + '"><img style="height:' +
@@ -4728,6 +4854,7 @@ var thegrid = (function () {
r.dirty = false;
r.bagit('#ggrid');
r.loadsel();
aligngriditems();
setTimeout(r.tippen, 20);
}
@@ -4926,14 +5053,14 @@ document.onkeydown = function (e) {
if (QS('#bbox-overlay.visible') || modal.busy)
return;
var k = e.code + '', pos = -1, n,
var k = (e.code || e.key) + '', pos = -1, n,
ae = document.activeElement,
aet = ae && ae != document.body ? ae.nodeName.toLowerCase() : '';
if (e.key == '?')
return hkhelp();
if (k == 'Escape') {
if (k == 'Escape' || k == 'Esc') {
ae && ae.blur();
tt.hide();
@@ -4962,23 +5089,24 @@ document.onkeydown = function (e) {
return ebi('griden').click();
}
if (aet == 'tr' && ae.closest('#files')) {
if ((aet == 'tr' || aet == 'td') && ae.closest('#files')) {
var d = '', rem = 0;
if (k == 'ArrowUp') d = 'previous';
if (k == 'ArrowDown') d = 'next';
if (aet == 'td') ae = ae.closest('tr'); //ie11
if (k == 'ArrowUp' || k == 'Up') d = 'previous';
if (k == 'ArrowDown' || k == 'Down') d = 'next';
if (k == 'PageUp') { d = 'previous'; rem = 0.6; }
if (k == 'PageDown') { d = 'next'; rem = 0.6; }
if (d) {
fselfunw(e, ae, d, rem);
return ev(e);
}
if (k == 'Space') {
if (k == 'Space' || k == 'Spacebar') {
clmod(ae, 'sel', 't');
msel.origin_tr(ae);
msel.selui();
return ev(e);
}
if (k == 'KeyA' && ctrl(e)) {
if ((k == 'KeyA' || k == 'a') && ctrl(e)) {
var sel = msel.getsel(),
all = msel.getall();
@@ -4989,7 +5117,7 @@ document.onkeydown = function (e) {
}
if (ae && ae.closest('pre')) {
if (k == 'KeyA' && ctrl(e)) {
if ((k == 'KeyA' || k == 'a') && ctrl(e)) {
var sel = document.getSelection(),
ran = document.createRange();
@@ -5003,23 +5131,23 @@ document.onkeydown = function (e) {
if (k.endsWith('Enter') && ae && (ae.onclick || ae.hasAttribute('tabIndex')))
return ev(e) && ae.click() || true;
if (aet && aet != 'a' && aet != 'tr' && aet != 'pre')
if (aet && aet != 'a' && aet != 'tr' && aet != 'td' && aet != 'div' && aet != 'pre')
return;
if (ctrl(e)) {
if (k == 'KeyX')
if (k == 'KeyX' || k == 'x')
return fileman.cut();
if (k == 'KeyV')
if (k == 'KeyV' || k == 'v')
return fileman.paste();
if (k == 'KeyK')
if (k == 'KeyK' || k == 'k')
return fileman.delete();
return;
}
if (e.shiftKey && k != 'KeyA' && k != 'KeyD')
if (e.shiftKey && k != 'KeyA' && k != 'KeyD' && k != 'A' && k != 'D')
return;
if (k.indexOf('Digit') === 0)
@@ -5028,16 +5156,17 @@ document.onkeydown = function (e) {
if (pos !== -1)
return seek_au_mul(pos) || true;
if (k == 'KeyJ')
if (k == 'KeyJ' || k == 'j')
return prev_song() || true;
if (k == 'KeyL')
if (k == 'KeyL' || k == 'l')
return next_song() || true;
if (k == 'KeyP')
if (k == 'KeyP' || k == 'p')
return playpause() || true;
n = k == 'KeyU' ? -10 : k == 'KeyO' ? 10 : 0;
n = (k == 'KeyU' || k == 'u') ? -10 :
(k == 'KeyO' || k == 'o') ? 10 : 0;
if (n !== 0)
return seek_au_rel(n) || true;
@@ -5046,51 +5175,52 @@ document.onkeydown = function (e) {
showfile.active() ? ebi('dldoc').click() :
dl_song();
n = k == 'KeyI' ? -1 : k == 'KeyK' ? 1 : 0;
n = (k == 'KeyI' || k == 'i') ? -1 :
(k == 'KeyK' || k == 'k') ? 1 : 0;
if (n !== 0)
return tree_neigh(n);
if (k == 'KeyM')
if (k == 'KeyM' || k == 'm')
return tree_up();
if (k == 'KeyB')
if (k == 'KeyB' || k == 'b')
return treectl.hidden ? treectl.entree() : treectl.detree();
if (k == 'KeyG')
if (k == 'KeyG' || k == 'g')
return ebi('griden').click();
if (k == 'KeyT')
if (k == 'KeyT' || k == 't')
return ebi('thumbs').click();
if (k == 'KeyV')
if (k == 'KeyV' || k == 'v')
return ebi('filetree').click();
if (k == 'F2')
return fileman.rename();
if (!treectl.hidden && (!e.shiftKey || !thegrid.en)) {
if (k == 'KeyA')
if (k == 'KeyA' || k == 'a')
return QS('#twig').click();
if (k == 'KeyD')
if (k == 'KeyD' || k == 'd')
return QS('#twobytwo').click();
}
if (showfile.active()) {
if (k == 'KeyS')
if (k == 'KeyS' || k == 's')
showfile.tglsel();
if (k == 'KeyE' && ebi('editdoc').style.display != 'none')
if ((k == 'KeyE' || k == 'e') && ebi('editdoc').style.display != 'none')
ebi('editdoc').click();
}
if (thegrid.en) {
if (k == 'KeyS')
if (k == 'KeyS' || k == 's')
return ebi('gridsel').click();
if (k == 'KeyA')
if (k == 'KeyA' || k == 'a')
return QSA('#ghead a[z]')[0].click();
if (k == 'KeyD')
if (k == 'KeyD' || k == 'd')
return QSA('#ghead a[z]')[1].click();
}
};
@@ -5179,9 +5309,11 @@ document.onkeydown = function (e) {
function ev_search_input() {
var v = unsmart(this.value),
id = this.getAttribute('id');
id = this.getAttribute('id'),
is_txt = id.slice(-1) == 'v',
is_chk = id.slice(-1) == 'c';
if (id.slice(-1) == 'v') {
if (is_txt) {
var chk = ebi(id.slice(0, -1) + 'c');
chk.checked = ((v + '').length > 0);
}
@@ -5193,12 +5325,15 @@ document.onkeydown = function (e) {
cap = 125;
clearTimeout(defer_timeout);
if (is_chk)
return do_search();
defer_timeout = setTimeout(try_search, 2000);
try_search(v);
}
function ev_search_keydown(e) {
if (e.key.endsWith('Enter'))
if ((e.key + '').endsWith('Enter'))
do_search();
}
@@ -5741,7 +5876,7 @@ var treectl = (function () {
var res = JSON.parse(this.responseText);
}
catch (ex) {
return;
return toast.err(30, "bad <code>?tree</code> reply;\nexpected json, got this:\n\n" + esc(this.responseText + ''));
}
rendertree(res, this.ts, this.top, this.dst, this.rst);
}
@@ -5872,6 +6007,9 @@ var treectl = (function () {
}
function bad_proxy(e) {
if (ctrl(e))
return true;
ev(e);
var dst = this.getAttribute('dst'),
k = dst ? 'dst' : 'href',
@@ -5903,11 +6041,15 @@ var treectl = (function () {
thegrid.setvis(true);
}
r.reqls = function (url, hpush, back) {
r.reqls = function (url, hpush, back, hydrate) {
if (IE && !history.pushState)
return window.location = url;
var xhr = new XHR();
xhr.top = url.split('?')[0];
xhr.back = back;
xhr.hpush = hpush;
xhr.hydrate = hydrate;
xhr.ts = Date.now();
xhr.open('GET', xhr.top + '?ls' + (r.dots ? '&dots' : ''), true);
xhr.onload = xhr.onerror = recvls;
@@ -5954,8 +6096,10 @@ var treectl = (function () {
var res = JSON.parse(this.responseText);
}
catch (ex) {
location = this.top;
return;
if (!this.hydrate)
location = this.top;
return toast.err(30, "bad <code>?ls</code> reply;\nexpected json, got this:\n\n" + esc(this.responseText + ''));
}
if (r.chk_index_html(this.top, res))
@@ -6177,7 +6321,7 @@ var treectl = (function () {
xhr.send();
r.ls_cb = showfile.addlinks;
return r.reqls(get_evpath(), false);
return r.reqls(get_evpath(), false, undefined, true);
}
var top = get_evpath();
@@ -7178,6 +7322,28 @@ var msel = (function () {
})();
(function () {
if (!window.FormData)
return;
var form = QS('#op_new_md>form'),
tb = QS('#op_new_md input[name="name"]');
form.onsubmit = function (e) {
if (tb.value) {
if (toast.tag == L.mk_noname)
toast.hide();
return true;
}
ev(e);
toast.err(10, L.mk_noname, L.mk_noname);
return false;
};
})();
(function () {
if (!window.FormData)
return;
@@ -7191,8 +7357,16 @@ var msel = (function () {
form.onsubmit = function (e) {
ev(e);
clmod(sf, 'vis', 1);
var dn = tb.value;
if (!dn) {
toast.err(10, L.mk_noname, L.mk_noname);
return false;
}
if (toast.tag == L.mk_noname || toast.tag == L.fd_xe1)
toast.hide();
clmod(sf, 'vis', 1);
sf.textContent = 'creating "' + dn + '"...';
var fd = new FormData();
@@ -7349,8 +7523,11 @@ function show_md(md, name, div, url, depth) {
wfp_debounce.hide();
if (!marked) {
if (depth)
if (depth) {
clmod(div, 'raw', 1);
div.textContent = "--[ " + name + " ]---------\r\n" + md;
return toast.warn(10, errmsg + (window.WebAssembly ? 'failed to load marked.js' : 'your browser is too old'));
}
wfp_debounce.n--;
return import_js(SR + '/.cpr/deps/marked.js', function () {


@@ -139,16 +139,15 @@ var md_opt = {
};
(function () {
var l = localStorage,
drk = l.light != 1,
var l = window.localStorage,
drk = (l && l.light) != 1,
btn = document.getElementById("lightswitch"),
f = function (e) {
if (e) { e.preventDefault(); drk = !drk; }
document.documentElement.className = drk? "z":"y";
btn.innerHTML = "go " + (drk ? "light":"dark");
l.light = drk? 0:1;
try { l.light = drk? 0:1; } catch (ex) { }
};
btn.onclick = f;
f();
})();


@@ -216,6 +216,11 @@ function convert_markdown(md_text, dest_dom) {
md_html = DOMPurify.sanitize(md_html);
}
catch (ex) {
if (IE) {
dest_dom.innerHTML = 'IE cannot into markdown ;_;';
return;
}
if (ext)
md_plug_err(ex, ext[1]);


@@ -163,7 +163,7 @@ redraw = (function () {
dom_sbs.onclick = setsbs;
dom_nsbs.onclick = modetoggle;
onresize();
(IE ? modetoggle : onresize)();
return onresize;
})();
@@ -933,7 +933,7 @@ var set_lno = (function () {
var keydown = function (ev) {
if (!ev && window.event) {
ev = window.event;
if (localStorage.dev_fbw == 1) {
if (dev_fbw == 1) {
toast.warn(10, 'hello from fallback code ;_;\ncheck console trace');
console.error('using window.event');
}
@@ -1009,7 +1009,7 @@ var set_lno = (function () {
md_home(ev.shiftKey);
return false;
}
if (!ev.shiftKey && (ev.code.endsWith("Enter") || kc == 13)) {
if (!ev.shiftKey && ((ev.code + '').endsWith("Enter") || kc == 13)) {
return md_newline();
}
if (!ev.shiftKey && kc == 8) {


@@ -37,12 +37,12 @@ var md_opt = {
};
var lightswitch = (function () {
var l = localStorage,
drk = l.light != 1,
var l = window.localStorage,
drk = (l && l.light) != 1,
f = function (e) {
if (e) drk = !drk;
document.documentElement.className = drk? "z":"y";
l.light = drk? 0:1;
try { l.light = drk? 0:1; } catch (ex) { }
};
f();
return f;


@@ -110,7 +110,8 @@ var SR = {{ r|tojson }},
lang="{{ lang }}",
dfavico="{{ favico }}";
document.documentElement.className=localStorage.cpp_thm||"{{ this.args.theme }}";
var STG = window.localStorage;
document.documentElement.className = (STG && STG.cpp_thm) || "{{ this.args.theme }}";
</script>
<script src="{{ r }}/.cpr/util.js?_={{ ts }}"></script>


@@ -238,7 +238,8 @@ var SR = {{ r|tojson }},
lang="{{ lang }}",
dfavico="{{ favico }}";
document.documentElement.className=localStorage.cpp_thm||"{{ args.theme }}";
var STG = window.localStorage;
document.documentElement.className = (STG && STG.cpp_thm) || "{{ args.theme }}";
</script>
<script src="{{ r }}/.cpr/util.js?_={{ ts }}"></script>


@@ -105,6 +105,9 @@ html {
#toast pre {
margin: 0;
}
#toast.hide {
display: none;
}
#toast.vis {
right: 1.3em;
transform: inherit;
@@ -144,6 +147,10 @@ html {
#toast.err #toastc {
background: #d06;
}
#toast code {
padding: 0 .2em;
background: rgba(0,0,0,0.2);
}
#tth {
color: #fff;
background: #111;


@@ -431,7 +431,7 @@ function U2pvis(act, btns, uc, st) {
if (sread('potato') === null) {
btn.click();
toast.inf(30, L.u_gotpot);
localStorage.removeItem('potato');
sdrop('potato');
}
u2f.appendChild(ode);
@@ -861,6 +861,7 @@ function up2k_init(subtle) {
bcfg_bind(uc, 'multitask', 'multitask', true, null, false);
bcfg_bind(uc, 'potato', 'potato', false, set_potato, false);
bcfg_bind(uc, 'ask_up', 'ask_up', true, null, false);
bcfg_bind(uc, 'umod', 'umod', false, null, false);
bcfg_bind(uc, 'u2ts', 'u2ts', !u2ts.endsWith('u'), set_u2ts, false);
bcfg_bind(uc, 'fsearch', 'fsearch', false, set_fsearch, false);
@@ -894,6 +895,7 @@ function up2k_init(subtle) {
"bytes": {
"total": 0,
"hashed": 0,
"inflight": 0,
"uploaded": 0,
"finished": 0
},
@@ -1332,7 +1334,8 @@ function up2k_init(subtle) {
return modal.confirm(msg.join('') + '</ul>', function () {
start_actx();
up_them(good_files);
toast.inf(15, L.u_unpt, L.u_unpt);
if (have_up2k_idx)
toast.inf(15, L.u_unpt, L.u_unpt);
}, null);
up_them(good_files);
@@ -1391,6 +1394,8 @@ function up2k_init(subtle) {
entry.rand = true;
entry.name = 'a\n' + entry.name;
}
else if (uc.umod)
entry.umod = true;
if (biggest_file < entry.size)
biggest_file = entry.size;
@@ -1539,17 +1544,21 @@ function up2k_init(subtle) {
if (uc.fsearch)
t.push(['u2etat', st.bytes.hashed, st.bytes.hashed, st.time.hashing]);
}
var b_up = st.bytes.inflight + st.bytes.uploaded,
b_fin = st.bytes.inflight + st.bytes.finished;
if (nsend) {
st.time.uploading += td;
t.push(['u2etau', st.bytes.uploaded, st.bytes.finished, st.time.uploading]);
t.push(['u2etau', b_up, b_fin, st.time.uploading]);
}
if ((nhash || nsend) && !uc.fsearch) {
if (!st.bytes.finished) {
if (!b_fin) {
ebi('u2etat').innerHTML = L.u_etaprep;
}
else {
st.time.busy += td;
t.push(['u2etat', st.bytes.finished, st.bytes.finished, st.time.busy]);
t.push(['u2etat', b_fin, b_fin, st.time.busy]);
}
}
for (var a = 0; a < t.length; a++) {
@@ -2467,6 +2476,8 @@ function up2k_init(subtle) {
req.srch = 1;
else if (t.rand)
req.rand = true;
else if (t.umod)
req.umod = true;
xhr.open('POST', t.purl, true);
xhr.responseType = 'text';
@@ -2533,6 +2544,7 @@ function up2k_init(subtle) {
cdr = t.size;
var orz = function (xhr) {
st.bytes.inflight -= xhr.bsent;
var txt = unpre((xhr.response && xhr.response.err) || xhr.responseText);
if (txt.indexOf('upload blocked by x') + 1) {
apop(st.busy.upload, upt);
@@ -2577,7 +2589,10 @@ function up2k_init(subtle) {
btot = Math.floor(st.bytes.total / 1024 / 1024);
xhr.upload.onprogress = function (xev) {
pvis.prog(t, npart, xev.loaded);
var nb = xev.loaded;
st.bytes.inflight += nb - xhr.bsent;
xhr.bsent = nb;
pvis.prog(t, npart, nb);
};
xhr.onload = function (xev) {
try { orz(xhr); } catch (ex) { vis_exh(ex + '', 'up2k.js', '', '', ex); }
@@ -2586,6 +2601,8 @@ function up2k_init(subtle) {
if (crashed)
return;
st.bytes.inflight -= (xhr.bsent || 0);
if (!toast.visible)
toast.warn(9.98, L.u_cuerr.format(npart, Math.ceil(t.size / chunksize), t.name), t);
@@ -2602,6 +2619,7 @@ function up2k_init(subtle) {
if (xhr.overrideMimeType)
xhr.overrideMimeType('Content-Type', 'application/octet-stream');
xhr.bsent = 0;
xhr.responseType = 'text';
xhr.send(t.fobj.slice(car, cdr));
}
@@ -2686,7 +2704,7 @@ function up2k_init(subtle) {
parallel_uploads = v;
if (v == u2j)
localStorage.removeItem('nthread');
sdrop('nthread');
else
swrite('nthread', v);
@@ -2702,6 +2720,9 @@ function up2k_init(subtle) {
if (parallel_uploads > 16)
parallel_uploads = 16;
if (parallel_uploads > 7)
toast.warn(10, L.u_maxconn);
obj.value = parallel_uploads;
bumpthread({ "target": 1 });
}


@@ -12,9 +12,11 @@ if (window.CGV)
var wah = '',
STG = null,
NOAC = 'autocorrect="off" autocapitalize="off"',
L, tt, treectl, thegrid, up2k, asmCrypto, hashwasm, vbar, marked,
CB = '?_=' + Date.now(),
T0 = Date.now(),
CB = '?_=' + Math.floor(T0 / 1000).toString(36),
R = SR.slice(1),
RS = R ? "/" + R : "",
HALFMAX = 8192 * 8192 * 8192 * 8192,
@@ -39,6 +41,16 @@ if (!window.Notification || !Notification.permission)
if (!window.FormData)
window.FormData = false;
try {
STG = window.localStorage;
STG.STG;
}
catch (ex) {
STG = null;
if ((ex + '').indexOf('sandbox') < 0)
console.log('no localStorage: ' + ex);
}
try {
CB = '?' + document.currentScript.src.split('?').pop();
@@ -145,6 +157,10 @@ catch (ex) {
}
var crashed = false, ignexd = {}, evalex_fatal = false;
function vis_exh(msg, url, lineNo, columnNo, error) {
var ekey = url + '\n' + lineNo + '\n' + msg;
if (ignexd[ekey] || crashed)
return;
msg = String(msg);
url = String(url);
@@ -160,10 +176,12 @@ function vis_exh(msg, url, lineNo, columnNo, error) {
if (url.indexOf(' > eval') + 1 && !evalex_fatal)
return; // md timer
var ekey = url + '\n' + lineNo + '\n' + msg;
if (ignexd[ekey] || crashed)
if (IE && url.indexOf('prism.js') + 1)
return;
if (url.indexOf('easymde.js') + 1)
return; // clicking the preview pane
if (url.indexOf('deps/marked.js') + 1 && !window.WebAssembly)
return; // ff<52
@@ -202,19 +220,24 @@ function vis_exh(msg, url, lineNo, columnNo, error) {
}
ignexd[ekey] = true;
var ls = jcp(localStorage);
if (ls.fman_clip)
ls.fman_clip = ls.fman_clip.length + ' items';
var ls = {},
lsk = Object.keys(localStorage),
nka = lsk.length,
nk = Math.min(200, nka);
var lsk = Object.keys(ls);
lsk.sort();
html.push('<p class="b">');
for (var a = 0; a < lsk.length; a++) {
if (ls[lsk[a]].length > 9000)
continue;
for (var a = 0; a < nk; a++) {
var k = lsk[a],
v = localStorage.getItem(k);
html.push(' <b>' + esc(lsk[a]) + '</b> <code>' + esc(ls[lsk[a]]) + '</code> ');
ls[k] = v.length > 256 ? v.slice(0, 32) + '[...' + v.length + 'b]' : v;
}
lsk = Object.keys(ls);
lsk.sort();
html.push('<p class="b"><b>' + nka + ':&nbsp;</b>');
for (var a = 0; a < nk; a++)
html.push(' <b>' + esc(lsk[a]) + '</b> <code>' + esc(ls[lsk[a]]) + '</code> ');
html.push('</p>');
}
catch (e) { }
@@ -276,10 +299,11 @@ function anymod(e, shift_ok) {
}
var dev_fbw = sread('dev_fbw');
function ev(e) {
if (!e && window.event) {
e = window.event;
if (localStorage.dev_fbw == 1) {
if (dev_fbw == 1) {
toast.warn(10, 'hello from fallback code ;_;\ncheck console trace');
console.error('using window.event');
}
@@ -370,6 +394,22 @@ catch (ex) {
}
}
if (!window.Set)
window.Set = function () {
var r = this;
r.size = 0;
r.d = {};
r.add = function (k) {
if (!r.d[k]) {
r.d[k] = 1;
r.size++;
}
};
r.has = function (k) {
return r.d[k];
};
};
// https://stackoverflow.com/a/950146
function import_js(url, cb, ecb) {
var head = document.head || document.getElementsByTagName('head')[0];
@@ -395,6 +435,25 @@ function unsmart(txt) {
}
function namesan(txt, win, fslash) {
if (win)
txt = (txt.
replace(/</g, "").
replace(/>/g, "").
replace(/:/g, "").
replace(/"/g, "").
replace(/\\/g, "").
replace(/\|/g, "").
replace(/\?/g, "").
replace(/\*/g, ""));
if (fslash)
txt = txt.replace(/\//g, "");
return txt;
}
var crctab = (function () {
var c, tab = [];
for (var n = 0; n < 256; n++) {
@@ -881,9 +940,16 @@ function jcp(obj) {
}
function sdrop(key) {
try {
STG.removeItem(key);
}
catch (ex) { }
}
function sread(key, al) {
try {
var ret = localStorage.getItem(key);
var ret = STG.getItem(key);
return (!al || has(al, ret)) ? ret : null;
}
catch (e) {
@@ -894,9 +960,9 @@ function sread(key, al) {
function swrite(key, val) {
try {
if (val === undefined || val === null)
localStorage.removeItem(key);
STG.removeItem(key);
else
localStorage.setItem(key, val);
STG.setItem(key, val);
}
catch (e) { }
}
@@ -1057,7 +1123,7 @@ function dl_file(url) {
function cliptxt(txt, ok) {
var fb = function () {
console.log('fb');
console.log('clip-fb');
var o = mknod('input');
o.value = txt;
document.body.appendChild(o);
@@ -1392,15 +1458,23 @@ var toast = (function () {
}
r.hide = function (e) {
ev(e);
if (this === ebi('toastc'))
ev(e);
unscroll();
clearTimeout(te);
clmod(obj, 'vis');
r.visible = false;
r.tag = obj;
if (!window.WebAssembly)
te = setTimeout(function () {
obj.className = 'hide';
}, 500);
};
r.show = function (cl, sec, txt, tag) {
txt = (txt + '').slice(0, 16384);
var same = r.visible && txt == r.p_txt && r.p_sec == sec,
delta = Date.now() - r.p_t;
@@ -1558,7 +1632,7 @@ var modal = (function () {
};
var onkey = function (e) {
var k = e.code,
var k = (e.code || e.key) + '',
eok = ebi('modal-ok'),
eng = ebi('modal-ng'),
ae = document.activeElement;
@@ -1573,10 +1647,10 @@ var modal = (function () {
return ok(e);
}
if ((k == 'ArrowLeft' || k == 'ArrowRight') && eng && (ae == eok || ae == eng))
if ((k == 'ArrowLeft' || k == 'ArrowRight' || k == 'Left' || k == 'Right') && eng && (ae == eok || ae == eng))
return (ae == eok ? eng : eok).focus() || ev(e);
if (k == 'Escape')
if (k == 'Escape' || k == 'Esc')
return ng(e);
}
@@ -1854,21 +1928,17 @@ var favico = (function () {
var b64;
try {
b64 = btoa(svg ? svg_decl + svg : gx(r.txt));
//console.log('f1');
}
catch (e1) {
try {
b64 = btoa(gx(encodeURIComponent(r.txt).replace(/%([0-9A-F]{2})/g,
function x(m, v) { return String.fromCharCode('0x' + v); })));
//console.log('f2');
}
catch (e2) {
try {
b64 = btoa(gx(unescape(encodeURIComponent(r.txt))));
//console.log('f3');
}
catch (e3) {
//console.log('fe');
return;
}
}
@@ -1922,6 +1992,9 @@ function xhrchk(xhr, prefix, e404, lvl, tag) {
if (xhr.status < 400 && xhr.status >= 200)
return true;
if (tag === undefined)
tag = prefix;
var errtxt = (xhr.response && xhr.response.err) || xhr.responseText,
fun = toast[lvl || 'err'],
is_cf = /[Cc]loud[f]lare|>Just a mo[m]ent|#cf-b[u]bbles|Chec[k]ing your br[o]wser|\/chall[e]nge-platform|"chall[e]nge-error|nable Ja[v]aScript and cook/.test(errtxt);


@@ -1,3 +1,122 @@
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0203-1533 `v1.9.31` eject
## new features
* disable mkdir / new-doc buttons until a name is provided d3db6d29
* warning about browsers limiting the number of connections c354a38b
## bugfixes
* #71 stop videos from buffering in the background a17c267d
* improve up2k ETA on slow networks / many connections c1180d6f
* u2c: exclude-filter didn't apply to file deletions b2e23340
* `--touch` / `re📅` didn't apply to zerobyte files 945170e2
## other changes
* notes on [hardlink/symlink conversion](https://github.com/9001/copyparty/blob/6c2c6090/docs/notes.sh#L35-L46) 6c2c6090
* [lore](https://github.com/9001/copyparty/blob/hovudstraum/docs/notes.md#trivia--lore) b1cf5884
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0125-2252 `v1.9.30` retime
probably last release before v1.10 (IdP), please watch warmly
## new features
* option to replace serverside last-modified timestamps to match uploader's local files 55eb6921
* requires uploader to have write+delete permissions because it tampers with existing files
* in the browser-UI, enable with the `re📅` button in the settings tab `⚙️`
* u2c (commandline uploader): `--touch`
* media player can shuffle songs now 01c82b54
* click `🔀` in the media-player settings tab `🎺` to enable
* windows: retry deleting busy files 3313503e aa3a9719
* to support webdav-clients that upload and then immediately delete files (clonezilla)
* options in batch-rename UI to ensure filenames are windows-safe b4e0a341
* more support for older browsers 4ef31060
* ie9: gridview, navpane, text-viewer, text-editor
* ie9, firefox10: make sure toasts are properly closed
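
the shuffle (01c82b54) is a plain in-place Durstenfeld (Fisher-Yates) pass over the playback order, same as the loop in the diff above; a standalone sketch:

```javascript
// shuffles the array in place by swapping each element with a
// random earlier (or same) position, back to front -- every
// permutation is equally likely, in O(n) time
function shuffle(order) {
    for (var a = order.length - 1; a > 0; a--) {
        var b = Math.floor(Math.random() * (a + 1)),
            c = order[a];
        order[a] = order[b];
        order[b] = c;
    }
    return order;
}
```

in the player this runs on the list of track-ids whenever the folder listing is (re)read, so toggling `🔀` just reshuffles the existing queue.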
## bugfixes
* older chromes (and current iPhones) could randomly panic in incognito mode b32d6520
* errormessage filepath sanitizer didn't catch histpaths in non-default locations 0f386c4b
* now possible to mount the entire filesystem as a volume (please don't) 14bccbe4
* on 32bit machines, disable sendfile when necessary to avoid python bug b9d0c853
* `-q` would still print filesystem-indexing progress to STDOUT 6dbfcddc
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0114-0629 `v1.9.29` RAM friendly
## new features
* try to keep track of RAM usage in the thumbnailer 95a59996
* very inaccurate, just wild guessing really, but probably good enough:
* an attempt to stop FFmpeg from eating all the RAM when generating spectrograms
* `--th-ram-max` specifies how much RAM it's allowed to use (default 6 GB), crank it up if thumbnailing is too slow now
* much faster startup on devices with slow filesystems and lots of files in the volume root (especially android phones) f1358dba
* `uncache` button (in mediaplayer settings) a55e0d6e
* rotates all audio URLs, in case the browser has a cached copy of a broken mp3 or whatnot
* now possible to POST files without having to set the `act: bput` multipart field 9bc09ce9
* mainly to support [igloo irc](https://github.com/9001/copyparty#client-examples) and other simplistic upload clients
* try to point the linux oom-killer at FFmpeg so it doesn't kill innocent processes instead dc8e621d
* only works if copyparty has access to /proc, so not in prisonparty, and maybe not in docker (todo)
* UX:
* do another search immediately if a search-filter gets unchecked a4239a46
* several ie11 fixes (keyboard hotkeys and a working text editor) 2fd2c6b9
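
the `uncache` rotation (a55e0d6e) boils down to appending a short base36 token to every media URL; regenerating the token makes each URL unique again, so the browser bypasses whatever broken copy it had cached. a minimal sketch of the scheme, matching the `ACB` logic in the diff:

```javascript
// 46656 = 36^3, so the token is at most three base36 digits;
// clicking "uncache" regenerates it (and persists it, so the
// rotation survives page reloads)
var ACB = (Date.now() % 46656).toString(36);

// append the token to a media URL, honoring any existing query string
function bust(url) {
    return url + (url.indexOf('?') < 0 ? '?' : '&') + '_=' + ACB;
}
```

since the token only changes on demand, normal browsing still gets full cache hits; only a manual uncache invalidates everything at once.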
## bugfixes
* POSTing files could block for a really long time if the database is busy (filesystem reindexing), now it schedules the indexing for later instead e8a653ca
* less confusing behavior when reindexing a file (keep uploader-ip/time if file contents turn out to be unmodified, and drop both otherwise) 226c7c30
## other changes
* better log messages when clients decide to disconnect in the middle of a POST 02430359
* add a warning if copyparty is started with an account definition (`-a`) which isn't used in any volumes e01ba855
* when running on macos, don't index apple metadata files (`.DS_Store` and such) d0eb014c
* they are still downloadable by anyone with read-access, and still appear in directory listings for users with access to see dotfiles
* added a [log repacker](https://github.com/9001/copyparty/blob/hovudstraum/scripts/logpack.sh) to shrink/optimize old logs dee0950f
* and a [contextlet](https://github.com/9001/copyparty/blob/hovudstraum/contrib/README.md#send-to-cppcontextletjson) example
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2023-1231-1849 `v1.9.28` eo2023
was hoping to finish the IdP stuff during 37c3 but that fell through, so here's all the other recent fixes instead -- happy newyears
## new features
* #66 new permission `.` to grant access to see dotfiles (hidden files) to specific users
* and new volflag `dots` to grant access to all users with `r`ead
* `-ed` still behaves like before (anyone with `r` can see dotfiles in all volumes)
* #70 new permission `A` (alias of `rwmda.`) grants read/write/move/delete/admin/dotfiles
* #67 folder thumbnails can be dotfiles (`.cover.jpg`, `.folder.png`) if the database is enabled (`-e2dsa`)
* new option `--u2j` to specify default number of parallel file uploads in the up2k browser client
* default (2) is good on average; 16 can be good when most uploaders are overseas
* curl gets plaintext 404/403 messages
## bugfixes
* cors-checking is disabled if the `PW` header is provided, just like the [readme](https://github.com/9001/copyparty#cors) always claimed
* server would return `200 OK` while trying to return a file that is unreadable due to filesystem permissions
* `--xdev` still doesn't work on windows, but at least now it doesn't entirely break filesystem indexing
* fix tiny resource leak due to funky dualstack on macos
## other changes
* logfiles are padded to align messages when `-q` is specified, similar to current/previous behavior without `-q`
* `--hdr-au-usr` was renamed to `--idp-h-usr` in preparation for other `--idp` things
* any mentions of `--hdr-au-usr` are translated to the new name on startup
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2023-1208-2133 `v1.9.27` another dedup bug
@@ -4469,3 +4588,832 @@ nothing really important happened since [v0.11.6](https://github.com/9001/copypa
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0601-0625 `v0.11.6` vtec
### things to know when upgrading:
* see release-notes for [v0.11.0](https://github.com/9001/copyparty/releases/tag/v0.11.0) and [v0.11.1](https://github.com/9001/copyparty/releases/tag/v0.11.1) as they introduced new features you may wish to disable
### new features:
* searching for audio tags is now literally 1000x faster
(almost as fast as the version numbers recently)
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0601-0155 `v0.11.4` please upgrade
## important news:
* this release fixes a missing permission check which could allow users to download write-only folders
* this bug was introduced 19 days ago, in `v0.10.17`
* you are only affected if you have write-only folders mounted inside readable folders
* and the worst part is there was a unit-test exactly for this, https://github.com/9001/copyparty/commit/273ca0c8da0d94f9d06ca16bd86c0301d9d06455 way overdue
* also fixes minor bugs introduced in `v0.11.1`
* this version is the same as `v0.11.5` on pypi
----
### things to know when upgrading:
* see [v0.11.0](https://github.com/9001/copyparty/releases/tag/v0.11.0) and [v0.11.1](https://github.com/9001/copyparty/releases/tag/v0.11.1) as they introduce new features you may wish to disable
* especially the `dbtool` part if your database is huge
### new features:
* filesearch now powered by a boolean query syntax
* the regular search interface generates example queries
* `size >= 2048 and ( name like *.mp4 or name like *.mkv )`
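as an illustration of those semantics only (this is not the server's parser, just a sketch), a single term can be evaluated against a file record like so, with `like` treated as a glob:

```python
import fnmatch

def match_term(rec, field, op, value):
    """Evaluate one term like `size >= 2048` or `name like *.mp4` against a record."""
    v = rec[field]
    if op == "like":
        return fnmatch.fnmatch(str(v).lower(), value.lower())
    ops = {">=": lambda a, b: a >= b, "<=": lambda a, b: a <= b,
           ">": lambda a, b: a > b, "<": lambda a, b: a < b,
           "=": lambda a, b: a == b}
    return ops[op](v, int(value))

rec = {"name": "talk.mp4", "size": 4096}
hit = (match_term(rec, "size", ">=", "2048")
       and (match_term(rec, "name", "like", "*.mp4")
            or match_term(rec, "name", "like", "*.mkv")))
print(hit)  # True
```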
### bugfixes:
* scan files on upload (broke in 0.11.1)
* restore the loud "folder does not exist" warning (another 0.11.1)
* fix thumbnails in search results (never worked)
#### really minor stuff:
* increased default thumbnail clean interval from 30min to 12h
* admin panel also links to the volumes
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0529-2139 `v0.11.1` do it live
no important bugfixes, just new features
### things to know when upgrading:
* `--no-rescan` disables `?scan`, a new feature which lets users initiate a recursive scan for new files to hash and read tags from
* this is enabled per-volume for users with read+write access
* `--no-stack` disables `?stack`, a new feature which shows a dump of all the stacks
* this is enabled if a user has read+write on at least one folder
* if you wish to wipe the DB and rebuild it to get the new metadata collected as of v0.11.0, and you have expensive `-mtp` parsers (bpm/key) and a huge database (or a slow server), consider https://github.com/9001/copyparty/tree/master/bin#dbtoolpy
### new features:
* **live rescan!** no more rebooting if you add/move files outside of copyparty and want to update the database, just hit the rescan button in the new...
* **admin panel!** access `/?h` (the old control-panel link) to see it
* **fast startup!** added 40TB of music? no need to wait for the initial scan, it runs in the background now
* when this turns out to be buggy you can `--no-fastboot`
* uploading is not possible until the initial file hashing has finished and it has started doing tags
* you can follow the progress in the new admin panel
### bugfixes:
* windows: avoid drifting into subvolumes and doublehashing files
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0529-1303 `v0.11.0` welcome to the grid
no important bugfixes, just new features
### things to know when upgrading:
* `Pillow` and `FFmpeg` are now used to generate thumbnails
* `--no-thumb` disables both
* `--no-vthumb` disables just `FFmpeg`
* new optional dependencies:
* `Pillow` to enable thumbnails
* `pyheif-pillow-opener` to enable reading HEIF images
* `pillow-avif-plugin` to enable reading AVIF images
* `ffmpeg` and `ffprobe` to enable video thumbnails
* if you wish to wipe the DB and rebuild it to get the new metadata collected as of this version, and you have expensive `-mtp` parsers (bpm/key) and a huge database (or a slow server), consider https://github.com/9001/copyparty/tree/master/bin#dbtoolpy
### new features:
* thumbnails! of both static images and video files
* served as webp or jpg depending on browser support
* new hotkeys: G, T, S, A/D
* additional metadata collection with `-e2ts`
* audio/video codecs, video/image resolution, fps, ...
* if you wanna reindex, do a single run with `-e2tsr` to wipe the DB
* mtp can collect multiple tags at once
* expects json like `{ "tag": "value" }`, see end of https://github.com/9001/copyparty/blob/master/bin/mtag/exe.py
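a multi-tag `-mtp` parser is just a program that prints one json object on stdout; a toy sketch (the tag names and values here are made up -- see the linked exe.py for a real one):

```python
import json
import sys

def scan(path):
    # a real parser would probe the file (ffprobe, exif, ...);
    # these tag names and values are hypothetical
    return {"res": "1920x1080", "vc": "h264"}

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else ""
    # copyparty captures stdout and indexes the key/value pairs
    print(json.dumps(scan(path)))
```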
### bugfixes:
* when sorting by name, show folders first
* mimetypes for webp and opus on GET
* mojibake support
* up2k into mb folder
* indexing files in mb folders
* editing markdown in mb folders
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0518-0210 `v0.10.22` this is a no drift zone
* browser: fix off-by-one which made the page slowly shrink back down when navigating away from a large folder
* browser/mediaplayer: handle unsupported audio codecs better in some (older?) browsers
* readme/requirements: firefox 34 and chrome 41 were the first browsers with native sha512 / full speed in up2k
* and the feature nobody asked for:
![2021-0518-041625-hexchat-fs8](https://user-images.githubusercontent.com/241032/118581146-67876700-b791-11eb-99c0-f1f5ace50797.png)
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0516-1822 `v0.10.21` fix tagger crash
a
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0516-0551 `v0.10.20` inspect
nothing important this time, just new bling and some fixes to support old browsers
(well except for the basic-uploader summary autoclosing immediately on completion, that was kinda user-confusing)
* add `ad`/`an` flags to `-mtp`; collect and display metadata from any file, not just audio-files
* up2k speedboost on older iPhones (native hashing on safari 7 through 10)
* add `--lf-url`, URL regex to exclude from log, defaults to `^/\.cpr/` (static files)
* add `--ihead` to print specific request headers, `*` prints all
* ux fixes
* include links to the uploaded files in bup summaries
* ...also make the bup summary not auto-close
* don't link to bup from up2k if read-only access
* toggle-switch for tooltips also affects the up2k ui
* stop flipping back to up2k on older browsers
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0513-2200 `v0.10.19` imagine running servers on windows
* fix: uploads when running copyparty on windows (broke in 0.10.18)
* fix: bup uploads would not get PARTIAL-suffixed if the filename length hits filesystem-max and the client disconnects mid-upload
* add `--dotpart` which hides uploads as dotfiles until completed
* very careful styling of the basic-browser
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0513-1542 `v0.10.18` just 302 it my dude
* stop trying to be smart, do full redirects instead
* allow switching to basic-browser using cookie `b=u`
* fix mode-toggling (upload/search) depending on folder permissions
* persist/clear the password cookie with expiration
* slight optimizations for rclone clients
* other minor ui tweaks
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0512-2139 `v0.10.17` denoise
* allow navigating to write-only folders using the tree sidebar
* show logues (prologue/epilogue) in write-only folders as well
* rename `.prologue.html` / `.epilogue.html` when uploaded so people can't embed javascript
* support pyinstaller
* hide more of the UI while in write-only folders
* hide [even more](https://a.ocv.me/pub/g/nerd-stuff/cpp/2021-0513-ui-mod.png) using [lovely hacks](https://github.com/9001/copyparty/blob/master/docs/minimal-up2k.html)
* add a notice in bup that up2k is generally better
alternative title: [Petit Up2k's - No Gui!](https://www.youtube.com/watch?v=IreeUoI6Kqc)
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0502-0718 `v0.10.16` somebody used -c
* cfg-file: fix shorthand for assigning permissions to anonymous users
* sfx: `-j` works on python3 (pickle did not enjoy the binary comments)
* sfx: higher cooldown before it starts deleting tempfiles from old instances
* sfx: should be a bit smaller (put compressed blobs at the end of the tar)
* misc minor ui tweaks, mostly the bright-mode theme
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0424-0205 `v0.10.15` write-only volumes are write-only
good thing it was so obviously broken and/or that nobody ever tried to use it
* regression test added to keep it fixed
* can now make a hidden/inaccessible folder (optionally inside a public folder) like `-v /mnt/nas/music:/music:r -v /mnt/nas/music/inc:/music/inc:w`
in other news, minor ui tweaks:
* clickdrag in the media player sliders doesn't select text any more
* a few lightmode adjustments
* less cpu usage? should be
`copyparty-sfx.py` (latest) made from c5db7c1a0c8f6ab23138ad7ea7642a6260e7da9b (v0.10.15-15) fixes `-j` (multiprocessing/high-performance)
`copyparty-sfx-5a579db.py` (old) made from 5a579dba52e46c202b79c3d80c3b1c996c7b2e4a (v0.10.15-5) reduced the size
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0421-2004 `v0.10.14` sparse4win
# great stuff
* firefox no longer leaking memory like crazy during large uploads
* not fixed intentionally (the firefox bug still exists i think)
* one of the v0.10.x changes is accidentally avoiding it
# good stuff
* up2k-cli: conditional readahead based on filereader latency (firefox was not happy)
* up2k-srv: make sparse files on windows if larger than `--sparse` MiB
* files will unsparse when upload completes if win10 or newer
* performance gain starts around 32 and up but default is 4 to save the SSDs
* up2k-cli: fix high cpu usage after returning to idle
* up2k-cli: ui tweaks
* browser: give 404 instead of redirecting home when folder is 404 or 403
* md-srv: stream documents rather than load into memory
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0420-2319 `v0.10.13` moon
600 MiB/s for both hashing and uploading on a ryzen 3700
* up2k: hashing 2x faster than before
* except on android-chrome where it is now slightly slower because the android file api is a meme
* ...but android-firefox gained 4x and is now 3x faster than chrome, google pls
this concludes the optimization arc
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0419-1958 `v0.10.12` rocket
up2k [way faster on large files](https://a.ocv.me/pub/g/nerd-stuff/cpp/2021-0419-up2k.webm) this time
* js: removed a cpu bottleneck in the up2k client
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0419-1517 `v0.10.10` blinded by the light
* up2k: fix progress bars
* up2k: more specific error messages (for example when trying to up a rangelocked file)
* browser: link to timestamps in media files (media fragment urls)
* fix crash when trying to -e2ts without the necessary dependencies available
* since there weren't enough pointless features that nobody will ever use already: added lightmode
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0416-2329 `v0.10.9` fasten your seatbelts
* up2k: [way faster](https://a.ocv.me/pub/g/nerd-stuff/cpp/2021-0416-up2k.webm) when uploading a large number of files
* 2x faster at 500 files, 3x faster at 1000, **8x at 3000**
* up2k: show ETA and upload/hashing speeds in realtime
* browser: hide search tab when database disabled
* avoid crash on startup when mounting the root of a restricted smb share on windows, [cpython bug](https://bugs.python.org/issue43847)
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0411-1926 `v0.10.8` misc
nothing massive, just a bunch of small things
* browser: fix zip download on iphone/android
* sfx: prevent StorageSense from deleting copyparty while it's running
* browser: less tree jitter when scrolling
* browser: only capture hotkeys without modifiers
* up2k: add some missing presentational uridecodes
* browser: add `?b` for an extremely minimal browser
* `?b=u` includes the uploader
* browser: somewhat support `?pw=hunter2` in addition to the cppwd cookie
* make-sfx: optional argument `gz` to build non-bz2 sfx
* stop crashing argparse on pythons <= june 2018
* support http/1.0
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0402-2235 `v0.10.7` thx exci
up2k-client fixes:
* uploads getting stuck if more than 128 MiB was rejected as dupes
* displayed links on rejected uploads
* displayed upload speed was way off
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0402-0111 `v0.10.6` enterprise ready
minimal-effort support for really old browsers
* internet explorer 6 can browse, upload files, mkdir
* internet explorer 9 can also play mp3s, zip selected files
* internet explorer 10 and newer has near-full support
* the final version of chrome and firefox on xp have full support
* netscape 4.5 works well enough, text is yellow on white
* [netscape 4.0 segfaults](https://a.ocv.me/pub/g/nerd-stuff/cpp/2021-0402-netscape.png) (rip)
on a more serious note,
* fix multiselect zip diving into unselected subfolders
* decode urlform messages to plaintext
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0330-2328 `v0.10.5` search fix
* fix audio playback in search results (broke in v0.9.9)
* sort search results according to user-defined order
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0329-1853 `v0.10.4` stablesort
running out of things to fix so here are nitpicks
* stable sort when sorting multiple columns
* default to filenames with directories first (column 2 + 1)
* remove some console spam
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0329-0247 `v0.10.3` not slow as tar
nothing too big this time,
* tar 6x faster (does 1.8 GiB/s now)
* fix selective archiving of subfolders
* mute the loadbalancer when `-q`
* don't show 0:00 as duration for non-audio files
known inconvenience since 0.9.13 that won't ever be fixed:
if you use the subfolder hiding thing (`-v :foo/bar:cd2d`), it creates intermediate volumes between the actual volume and the hidden subfolder, which kinda messes with existing indexes (it will reindex stuff inside the intermediate volumes) -- but everything still works, so it's just a pain
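tarballs lend themselves to this kind of speed because the format can be produced as a pure forward stream, no seeking and no temp files; a stdlib sketch of the general idea behind `?tar` (not copyparty's actual code):

```python
import io
import tarfile

def stream_tar(files):
    """Yield chunks of a tar archive built from (name, bytes) pairs, no temp files."""
    buf = io.BytesIO()
    # mode "w|" is tarfile's streaming mode: it never seeks backwards,
    # so each chunk can be sent to the client as soon as it is produced
    with tarfile.open(fileobj=buf, mode="w|") as tf:
        for name, data in files:
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tf.addfile(info, io.BytesIO(data))
            yield buf.getvalue()
            buf.seek(0)
            buf.truncate()
    yield buf.getvalue()  # end-of-archive padding written on close

out = b"".join(stream_tar([("a.txt", b"hi"), ("b.txt", b"yo")]))
names = tarfile.open(fileobj=io.BytesIO(out)).getnames()
print(names)  # ['a.txt', 'b.txt']
```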
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0327-1703 `v0.10.2` do i have to think of a name
* select multiple files/folders to download as tar/zip
* recover from read-errors when zipping things, adding a textfile in the zip explaining what went wrong
* fix permissions in zip files for linux/macos unpacking
* make the first browser column sortable
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0327-0144 `v0.10.1` zip it
* download folders as .zip or .tar files
* upload entire folders by dropping them in
* 4x faster response on the first request on each new connection
forgot to explain the zip formats
| name | url-suffix | description |
|--|--|--|
| `tar` | `?tar` | a plain gnutar, works great with `curl \| tar -xv` |
| `zip` | `?zip=utf8` | works everywhere, glitchy filenames on win7 and older |
| `zip_dos` | `?zip` | traditional cp437 (no unicode) to fix glitchy filenames |
| `zip_crc` | `?zip=crc` | cp437 with crc32 computed early for truly ancient software |
`zip_crc` will take longer to download since the server has to read each file twice, please let me know if you find a program old enough to actually need it btw, curious
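the reason `zip_crc` needs a second pass: a zip local file header stores the crc-32 ahead of the file data, so a streaming server must checksum the whole file before sending it (newer unzippers accept a trailing data descriptor instead, which is why only the ancient-software mode pays this cost). The checksum pass itself is just incremental crc32, sketched here:

```python
import zlib

def crc32_of(chunks):
    """First pass: compute crc32 incrementally, as a server would before streaming."""
    crc = 0
    for chunk in chunks:
        crc = zlib.crc32(chunk, crc)
    return crc & 0xFFFFFFFF

data = b"hello world" * 1000
chunks = [data[i:i + 64] for i in range(0, len(data), 64)]
assert crc32_of(chunks) == zlib.crc32(data) & 0xFFFFFFFF
```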
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0323-0113 `v0.9.13` micromanage
you can skip this version unless your volume setup is crazy advanced
* support hiding specific subfolders with `-v :/foo/bar:cd2d`
* properly disable db/tags/etc when `cd2d` or `cd2t` volflags are set
* volume info on startup is prettier
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0321-2105 `v0.9.10` nurupo
not so strong anymore
* fixes a nullpointer when sorting a folder that contains markdown revisions
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0321-1615 `v0.9.9` the strongest
## big ones
* add support for external analysis tools to provide arbitrary tags for the index
* add example tools for detecting bpm and melodic key
* https://github.com/9001/copyparty/tree/master/bin/mtag
* add range-search (duration/bpm/key/... between min/max values)
* hotkeys for changing songs + skipping
* `0..9`=jump, `J/L`=file, `U/O`=10sec, `K/I`=folder, `P`=parent
## the rest
* add search timeouts and rate-control on both server/client-side
* add time markers in the audio player
* remember the file browser sort order
* the initial html retains server order, so use the tree to navigate
* fix a race in the tag parser when using the multithreaded FFprobe backend
* fix minor stuff related to volume flags and tag-display options
* repacker should no longer break the bundled jinja2
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0315-0013 `v0.9.8` the strongest for a while
nothing more to add or fix for now (barely avoided adding bpm/tempo detection using keyfinder and vamp+qm since that's just too ridiculous)
* browser: correct music playback order after sorting
* browser: no more glitching on resize in non-tree-mode
* fuse-client: read password from `some.txt` with `-a $some.txt`
* sfx: reduce startup time by 20% or so (import rather than shell out)
* sfx: support pypy, jython, and ironpy
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0308-0251 `v0.9.7` the strongest hotfix 2nd season
* actually fix it so it doesn't truncate in the first place
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0307-2044 `v0.9.6` the strongest hotfix
* don't crash the file browser on truncated table rows
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0307-1825 `v0.9.5` the strongest potions
* better support for mojibake filenames
* separate scrollbar for the directory tree
* stop persisting page data in the browser, reload on each navigation
* firefox disapproves of storing >= 4 MB of json in sessionStorage
* normalization of musical keys collected from tags
* recover from dying tag parsers
* be nice to rhelics
* add support for the 2013 edition of sqlite3 in rhel 7
* and fix some py2 issues with `-e2d`, again thx to ^
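the key normalization mentioned above is about collapsing the many spellings taggers emit ("A minor", "Amin", "a") into one canonical form; this is a guess at the general idea, not copyparty's actual mapping:

```python
import re

def norm_key(raw):
    # collapse "A minor" / "Amin" / "Am" / bare lowercase "a" into "Am",
    # "F# major" into "F#"; anything unrecognized passes through untouched
    s = raw.strip().replace("\u266f", "#").replace("\u266d", "b")
    m = re.match(r"^([A-Ga-g][#b]?)\s*(.*)$", s)
    if not m:
        return raw
    note, qual = m.group(1), m.group(2).lower().rstrip(".")
    minor = qual in ("m", "min", "minor") or (not qual and note[0].islower())
    note = note[0].upper() + note[1:]
    return note + ("m" if minor else "")

print(norm_key("A minor"), norm_key("f#"), norm_key("F# major"))  # Am F#m F#
```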
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0305-0106 `v0.9.4` the strongest orz
markdown editor works
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0304-2300 `v0.9.3` the strongest performance
gotta go fast
| | windows | linux | macos |
|--|--|--|--|
| file browser / directory listing | *15 times* faster | 2% slower sorry | 15% faster |
| startup / `-e2ds` verification | 10% faster | even | 10% faster |
| reading tags with ffprobe | 5 times faster | 4 times faster | 2 times faster |
## new features
* async scan incoming files for tags (from up2k, basic-upper, PUT)
* resizable file browser tree
## bugfixes
* floor mtime so `-e2ds` doesn't keep rescanning
* use localStorage for pushState data since firefox couldn't handle big folders
* minor directory rescan semantics
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0303-0028 `v0.9.1` the strongest bugs
imagine downloading a .0
* fix file search / search by contents
* stop spamming responses with `{"tags":["x"]}`
* recover from missing writable volumes during startup
* redo search when filter-checkboxes are toggled
* 1.5x faster client-side sorting
* 1.02x faster server-side
and i just realized i never added runtime tag scanning so copyparty will have to be restarted to see tags of new uploads, TODO for next ver
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0301-2312 `v0.9.0` the strongest music server
* grab tags from music files and make them searchable
* and show the tags in the file browser
* make all the browser columns minimizable
* shrink the media player widget thing on big screens
use `-e2dsa` and `-e2ts` to enable the media tag features globally, or enable/disable them per-volume (see readme)
**NOTE:** older fuse clients (from before 5e3775c1afc9438f9930080a9b8542a063ba1765 / older than v0.8.0) must be upgraded for this copyparty release, however the new client still supports connecting to old servers
other changes include
* support chunked PUT requests from curl
* fix a pypy memleak which broke sqlite3
* fix directory tree sidebar breaking when nothing is mounted on `/`
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0222-2058 `v0.8.3` discovery
forgot to update the release name for 0.8 (which introduced searching and directory trees), good opportunity to name it after a dope album with some absolute bangers
aside from the release name this version is entirely unrevolutionary
* fixed debug prints on xp / win7 / win8 / early win10 versions
* load prologues/epilogues when switching between folders
* fix up2k modeswitching between read/write folders
* additional minor ux tweaks
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0222-0254 `v0.8.1` the ux update
* search by name/path/size/date
* search by file contents
* directory tree sidebar thing
* navigate between folders while uploading
NOTE: this will upgrade your `up2k.db` to `v2` but it will leave a backup of the old version in case you need to downgrade or whatever
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0214-0113 `v0.7.7` trafikklys
* new checkbox in up2k which coordinates uploading from multiple tabs
* if one tab is uploading, others will wait
* fix up2k handshakes so uploads complete faster
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0212-1953 `v0.7.6` nothing big
* up2k: resume hashing when <= 128 MiB left to upload
* stop showing `up2k.db/snap` in the file list
* fix `--ciphers help`
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0212-0706 `v0.7.5` you can https if you want to
* fix https on python3 after breaking it in v0.6.3
* workaround for older versions: `--no-sendfile`
* don't use the native https anyways (pls reverse-proxy)
* that said, added a bunch of ssl/tls/https options
* choice to only accept http or https
* specify ssl/tls versions and ciphers to allow
* log master-secrets to file
* print cipher overlap on connect
* up2k indexer flushes to disk every minute
* up2k indexer mentions the filepath on errors
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0204-0001 `v0.7.4` a
* sfx: save 43kB by replacing all docstrings with "a"
* sfx: upgrade the bundled jinja2 and markupsafe
* zero dependencies on python3 as well now
* do something useful with url-encoded POSTs
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0202-2357 `v0.7.3` Hey! Listen!
* bind multiple IP's / port ranges
* dim the connection tracking messages a bit
* stop gz/br unpacker from being too helpful
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0128-2352 `v0.7.2` QUALITY
* make up2k confirmations optional
* let pending uploads stay for 6 hours
* fix the 0.7.1 regression we won't talk about
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0123-1855 `v0.7.1` checking it twice
* up2k-client shows an OK/Cancel box before upload starts
* up2k-client hashes at most one pending file ahead
* previously, all pending uploads were announced immediately
* fix edgecase when the registry snapshot contained deleted files
* delete all related files after 1h if an up2k upload was initiated but never started
* previously, the `.PARTIAL` (upload data) was kept, even when blank
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0110-1649 `v0.7.0` keeping track
## remember all uploads using `-e2d` to avoid duplicates
* `-e2d` stores the up2k registry in a per-volume sqlite3 database at `$VOL/.hist/up2k.db`
* unfinished uploads are indexed in `$VOL/.hist/up2k.snap` every 30 seconds
* unfinished uploads which are idle for over 1 hour are forgotten
* duplicate uploads will be symlinked to the new name (by default) or rejected
## build an index of all existing files at startup using `-e2s`
* ...so copyparty also knows about files from older versions / other sources
* this detects deleted/renamed files and updates the database
## reject duplicate uploads instead of symlinking
* this is a per-volume config option, see the `cnodupe` example in `-h`
* the uploader gets an error message with the path to the existing file
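both the symlink and the reject behavior boil down to one lookup: hash the finished upload and check the registry. A standalone sketch of the symlink flavor (not the actual up2k code):

```python
import hashlib
import os
import tempfile

registry = {}  # file hash -> absolute path of the first copy seen

def finalize_upload(vol, name, data):
    """Write an upload; if identical content already exists, symlink to it instead."""
    h = hashlib.sha512(data).hexdigest()
    dst = os.path.join(vol, name)
    if h in registry and registry[h] != dst:
        os.symlink(registry[h], dst)  # dupe: point at the existing file
    else:
        with open(dst, "wb") as f:
            f.write(data)
        registry[h] = dst
    return dst

vol = tempfile.mkdtemp()
first = finalize_upload(vol, "song.flac", b"fake audio data")
dupe = finalize_upload(vol, "song (copy).flac", b"fake audio data")
print(os.path.islink(dupe))  # True
```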
## other changes
* uploads temporarily have the extension `.PARTIAL` until the upload is completed
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2021-0107-0009 `v0.6.3` no nagles beyond this point
* reduce latency of final packet by ~0.2 sec
* use sendfile(2) when possible (linux and macos)
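sendfile(2) lets the kernel shuttle file bytes to the destination without copying them through userspace buffers; python exposes it as `os.sendfile`. A rough file-to-file sketch (works on linux; macos restricts the output fd to sockets, hence the fallback):

```python
import os
import tempfile

def copy_fd(dst_fd, src_fd, size):
    """Copy size bytes kernel-side with sendfile(2), falling back to read/write."""
    sent = 0
    try:
        while sent < size:
            n = os.sendfile(dst_fd, src_fd, sent, size - sent)
            if n == 0:
                break
            sent += n
    except (OSError, AttributeError):
        # sendfile unsupported here (e.g. file-to-file on macos); plain copy
        os.lseek(src_fd, sent, os.SEEK_SET)
        while sent < size:
            buf = os.read(src_fd, size - sent)
            os.write(dst_fd, buf)
            sent += len(buf)
    return sent

src = tempfile.NamedTemporaryFile()
src.write(b"x" * 65536)
src.flush()
dst = tempfile.NamedTemporaryFile()
n = copy_fd(dst.fileno(), src.fileno(), 65536)
print(n)  # 65536
```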
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2020-1214-0328 `v0.6.2` happy end of 2020
* support uploads with massive filenames
* list world-readable volumes when logged in
* up2k-client: ignore rejected dupe uploads
* sfx-repack: support wget
* dodge python-bug #7980
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2020-1201-0158 `v0.6.0` CHRISTMAAAAAS
https://www.youtube.com/watch?v=rWc9XuqwoLI
* md cleanup/fixes (thx eslint)
* fix the sfx repacker
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2020-1130-0201 `v0.5.7` multiuser notepad
not in the etherpad sense but rather
* md: poll for changes every `-mcr` sec and warn if doc changed
* md: prevent closing the tab on unsaved changes
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2020-1129-1849 `v0.5.6` the extra mile
* use git tag/commit as version when creating sfx
* md: table prettyprinter compacting properly
* md/plug: add error handling to the plugins
* md/plug: new feature to modify the final dom tree
* md/plug: actually replace the plugin instances rather than keep adding new ones tehe
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2020-1127-0225 `v0.5.5` far beyond
valvrave-stop.jpg
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2020-1117-2258 `v0.5.4` edovprim
(get it? because reverse proxy haha)
* reverse-proxy support
* filetype column in the browser
* md-edit: table formatter more chill
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2020-1113-0231 `v0.5.3` improved
* show per-connection and per-transfer speeds
* restore macos support in sfx.sh
* http correctness fixes
* SameSite=Lax
* support multiple cookies in parser
* `+` no longer decodes to ` `, goodbye netscape 3.04
* fuse stuff
* python client: mojibake support on windows
* python client: https and password support
* support rclone as client (windows/linux)
* new markdown-editor features
* table formatter
* mojibake/unicode hunter
* more predictable behavior
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2020-0818-1822 `v0.5.2` da setter vi punktum
full disclaimer: `copyparty-sfx.py` was built using `sfx.py` from ~~82e568d4c9f25bfdfd1bf5166f0ebedf058723ee~~ f550a8171d298992f4ef569d2fc99a6037a44ea8
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2020-0817-2155 `v0.5.1` insert soho joke
* add info-banner with hostname and disk-free
* make older firefox versions cache less aggressively
* expect less correctness from COTS NAS boxes
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2020-0816-2304 `v0.5.0` fuse jelly
* change default port from `1234` to `3923`
* fuse 10x faster + add windows support
* minimal CORS support added
* PUT stuff from a browser-console or wherever
* markdown editor improvements again
* paragraph-jump with ctrl-cursors
* fix firefox not showing the latest ver on F5
* fix systemd killing the sfx binaries (ノ ゚ヮ゚)ノ ~┻━┻
* not actually related to the tegra exploit
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2020-0517-1446 `v0.4.3` 🌲🪓🎉
* print your documents! kill the trees!
* drop support for opus/vorbis audio playback on iOS 11 *and older*
* chrome's now twice as fast in the markdown editor
* firefox still wins
* upgrade to marked.js v1.1.0
* minor fuse + ux fixes
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2020-0514-2302 `v0.4.2` still not quite emacs (the editor is too good)
* better editor cursor behavior
* better editor autoindent
* less broken fuse client
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2020-0513-2308 `v0.4.1` Further improvements to overall system stability and other minor adjustments have been made to enhance the user experience
* better editor performance in massive documents
* better undo/redo cursor positioning
* better ux on safari
* better ux on phones
* better
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2020-0512-2244 `v0.4.0` NIH
* new "basic" markdown editor
* textarea-based, way less buggy on phones
* better autoindent + undo/redo
* smaller sfx (~170k)
* osx fixes
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2020-0506-2220 `v0.3.1` v0.3.1
* indicate version history for files in the browser
* (also move old versions into .hist subfolders)
* handle uploads with illegal filenames on windows
* sortable file list
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2020-0505-2302 `v0.3.0` docuparty
"why does a file server have a markdown editor"
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2020-0119-1512 `v0.2.3` hello world


@@ -162,8 +162,8 @@ authenticate using header `Cookie: cppwd=foo` or url param `&pw=foo`
 | PUT | | (binary data) | upload into file at URL |
 | PUT | `?gz` | (binary data) | compress with gzip and write into file at URL |
 | PUT | `?xz` | (binary data) | compress with xz and write into file at URL |
-| mPOST | | `act=bput`, `f=FILE` | upload `FILE` into the folder at URL |
-| mPOST | `?j` | `act=bput`, `f=FILE` | ...and reply with json |
+| mPOST | | `f=FILE` | upload `FILE` into the folder at URL |
+| mPOST | `?j` | `f=FILE` | ...and reply with json |
 | mPOST | | `act=mkdir`, `name=foo` | create directory `foo` at URL |
 | POST | `?delete` | | delete URL recursively |
 | jPOST | `?delete` | `["/foo","/bar"]` | delete `/foo` and `/bar` recursively |
@@ -242,6 +242,7 @@ python3 -m venv .venv
 pip install jinja2 strip_hints # MANDATORY
 pip install mutagen # audio metadata
 pip install pyftpdlib # ftp server
+pip install partftpy # tftp server
 pip install impacket # smb server -- disable Windows Defender if you REALLY need this on windows
 pip install Pillow pyheif-pillow-opener pillow-avif-plugin # thumbnails
 pip install pyvips # faster thumbnails


@@ -31,33 +31,33 @@
 /w # share /w (the docker data volume)
 accs:
 rw: * # everyone gets read-access, but
-rwmda: %su # the group "su" gets read-write-move-delete-admin
+rwmda: @su # the group "su" gets read-write-move-delete-admin
 [/u/${u}] # each user gets their own home-folder at /u/username
 /w/u/${u} # which will be "u/username" in the docker data volume
 accs:
 r: * # read-access for anyone, and
-rwmda: ${u}, %su # read-write-move-delete-admin for that username + the "su" group
+rwmda: ${u}, @su # read-write-move-delete-admin for that username + the "su" group
 [/u/${u}/priv] # each user also gets a private area at /u/username/priv
 /w/u/${u}/priv # stored at DATAVOLUME/u/username/priv
 accs:
-rwmda: ${u}, %su # read-write-move-delete-admin for that username + the "su" group
+rwmda: ${u}, @su # read-write-move-delete-admin for that username + the "su" group
 [/lounge/${g}] # each group gets their own shared volume
 /w/lounge/${g} # stored at DATAVOLUME/lounge/groupname
 accs:
 r: * # read-access for anyone, and
-rwmda: %${g}, %su # read-write-move-delete-admin for that group + the "su" group
+rwmda: @${g}, @su # read-write-move-delete-admin for that group + the "su" group
 [/lounge/${g}/priv] # and a private area for each group too
 /w/lounge/${g}/priv # stored at DATAVOLUME/lounge/groupname/priv
 accs:
-rwmda: %${g}, %su # read-write-move-delete-admin for that group + the "su" group
+rwmda: @${g}, @su # read-write-move-delete-admin for that group + the "su" group
 # and create some strategic volumes to prevent anyone from gaining
@@ -65,8 +65,8 @@
 [/u]
 /w/u
 accs:
-rwmda: %su
+rwmda: @su
 [/lounge]
 /w/lounge
 accs:
-rwmda: %su
+rwmda: @su


@@ -6,7 +6,7 @@ you will definitely need either [copyparty.exe](https://github.com/9001/copypart
* if you decided to grab `copyparty-sfx.py` instead of the exe you will also need to install the ["Latest Python 3 Release"](https://www.python.org/downloads/windows/)
then you probably want to download [FFmpeg](https://github.com/BtbN/FFmpeg-Builds/releases/download/latest/ffmpeg-master-latest-win64-gpl.zip) and put `ffmpeg.exe` and `ffprobe.exe` in your PATH (so for example `C:\Windows\System32\`) -- this enables thumbnails, audio transcoding, and making music metadata searchable
then you probably want to download [FFmpeg](https://www.gyan.dev/ffmpeg/builds/ffmpeg-git-full.7z) and put `ffmpeg.exe` and `ffprobe.exe` in your PATH (so for example `C:\Windows\System32\`) -- this enables thumbnails, audio transcoding, and making music metadata searchable
## the config file


@@ -24,6 +24,10 @@ https://github.com/giampaolo/pyftpdlib/
C: 2007 Giampaolo Rodola
L: MIT
https://github.com/9001/partftpy
C: 2010-2021 Michael P. Soulier
L: MIT
https://github.com/nayuki/QR-Code-generator/
C: Project Nayuki
L: MIT


@@ -1,3 +1,19 @@
this file accidentally got committed at some point, so let's put it to use
# trivia / lore
copyparty started as [three separate php projects](https://a.ocv.me/pub/stuff/old-php-projects/): an nginx custom directory listing (which became a php script), a php music/picture viewer, and a php project for resumable uploads:
* findex -- directory browser / gallery with thumbnails and a music player which sometime back in 2009 had a canvas visualizer grabbing fft data from a flash audio player
* findex.mini -- plain-listing fork of findex with streaming zip-download of folders (the js and design should look familiar)
* upper and up2k -- up2k being the star of the show and where copyparty's chunked resumable uploads came from
the first link has screenshots, but if that doesn't work there's also a [tar here](https://ocv.me/dev/old-php-projects.tgz)
----
below this point are misc useless scribbles
# up2k.js
## potato detection


@@ -24,6 +24,27 @@ gzip -d < .hist/up2k.snap | jq -r '.[].name' | while IFS= read -r f; do wc -c --
echo; find -type f | while IFS= read -r x; do printf '\033[A\033[36m%s\033[K\033[0m\n' "$x"; tail -c$((1024*1024)) <"$x" | xxd -a | awk 'NR==1&&/^[0: ]+.{16}$/{next} NR==2&&/^\*$/{next} NR==3&&/^[0f]+: [0 ]+65 +.{16}$/{next} {e=1} END {exit e}' || continue; printf '\033[A\033[31msus:\033[33m %s \033[0m\n\n' "$x"; done
##
## sync pics/vids from phone
## (takes all files named (IMG|PXL|PANORAMA|Screenshot)_20231224_*)
cd /storage/emulated/0/DCIM/Camera
find -mindepth 1 -maxdepth 1 | sort | cut -c3- > ls
awk -F_ '!/^[A-Z][A-Za-z]{1,16}_[0-9]{8}[_-]/{next} {d=substr($2,1,6)} !t[d]++{print d}' ls | while read d; do url=https://192.168.1.3:3923/rw/pics/Camera/$d/; grep -E "^[A-Z][A-Za-z]{1,16}_$d" ls | tr '\n' '\0' | xargs -0 python3 ~/dev/copyparty/bin/u2c.py -td $url --; done
##
## convert symlinks to hardlinks (probably safe, no guarantees)
find -type l | while IFS= read -r lnk; do [ -h "$lnk" ] || { printf 'nonlink: %s\n' "$lnk"; continue; }; dst="$(readlink -f -- "$lnk")"; [ -e "$dst" ] || { printf '???\n%s\n%s\n' "$lnk" "$dst"; continue; }; printf 'relinking:\n %s\n %s\n' "$lnk" "$dst"; rm -- "$lnk"; ln -- "$dst" "$lnk"; done
##
## convert hardlinks to symlinks (maybe not as safe? use with caution)
e=; p=; find -printf '%i %p\n' | awk '{i=$1;sub(/[^ ]+ /,"")} !n[i]++{p[i]=$0;next} {printf "real %s\nlink %s\n",p[i],$0}' | while read cls p; do [ -e "$p" ] || e=1; p="$(realpath -- "$p")" || e=1; [ -e "$p" ] || e=1; [ $cls = real ] && { real="$p"; continue; }; [ $cls = link ] || e=1; [ "$p" ] || e=1; [ $e ] && { echo "ERROR $p"; break; }; printf '\033[36m%s \033[0m -> \033[35m%s\033[0m\n' "$p" "$real"; rm "$p"; ln -s "$real" "$p" || { echo LINK FAILED; break; }; done
##
## create a test payload


@@ -200,9 +200,10 @@ symbol legend,
| ----------------------- | - | - | - | - | - | - | - | - | - | - | - | - |
| serve https | █ | | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ |
| serve webdav | █ | | | █ | █ | █ | █ | | █ | | | █ |
| serve ftp | █ | | | | | █ | | | | | | █ |
| serve ftps | █ | | | | | █ | | | | | | █ |
| serve sftp | | | | | | █ | | | | | | |
| serve ftp (tcp) | █ | | | | | █ | | | | | | █ |
| serve ftps (tls) | █ | | | | | █ | | | | | | █ |
| serve tftp (udp) | █ | | | | | | | | | | | |
| serve sftp (ssh) | | | | | | █ | | | | | | █ |
| serve smb/cifs | | | | | | █ | | | | | | |
| serve dlna | | | | | | █ | | | | | | |
| listen on unix-socket | | | | █ | █ | | █ | █ | █ | | █ | █ |


@@ -28,6 +28,7 @@ classifiers = [
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: Jython",
"Programming Language :: Python :: Implementation :: PyPy",
"Operating System :: OS Independent",
"Environment :: Console",
"Environment :: No Input/Output (Daemon)",
"Intended Audience :: End Users/Desktop",
@@ -48,6 +49,7 @@ thumbnails2 = ["pyvips"]
audiotags = ["mutagen"]
ftpd = ["pyftpdlib"]
ftps = ["pyftpdlib", "pyopenssl"]
tftpd = ["partftpy>=0.2.0"]
pwhash = ["argon2-cffi"]
[project.scripts]


@@ -12,6 +12,11 @@ set -euo pipefail
#
# can be adjusted with --hash-mt (but alpine caps out at 5)
fsize=256
nfiles=128
pybin=$(command -v python3 || command -v python)
#pybin=~/.pyenv/versions/nogil-3.9.10-2/bin/python3
[ $# -ge 1 ] || {
echo 'need arg 1: path to copyparty-sfx.py'
echo ' (remaining args will be passed on to copyparty,'
@@ -22,6 +27,8 @@ sfx="$1"
shift
sfx="$(realpath "$sfx" || readlink -e "$sfx" || echo "$sfx")"
awk=$(command -v gawk || command -v awk)
uname -s | grep -E MSYS && win=1 || win=
totalsize=$((fsize*nfiles))
# try to use /dev/shm to avoid hitting filesystems at all,
# otherwise fallback to mktemp which probably uses /tmp
@@ -30,20 +37,24 @@ mkdir $td || td=$(mktemp -d)
trap "rm -rf $td" INT TERM EXIT
cd $td
echo creating 256 MiB testfile in $td
head -c $((1024*1024*256)) /dev/urandom > 1
echo creating $fsize MiB testfile in $td
sz=$((1024*1024*fsize))
head -c $sz /dev/zero | openssl enc -aes-256-ctr -iter 1 -pass pass:k -nosalt 2>/dev/null >1 || true
wc -c 1 | awk '$1=='$sz'{r=1}END{exit 1-r}' || head -c $sz /dev/urandom >1
echo creating 127 symlinks to it
for n in $(seq 2 128); do ln -s 1 $n; done
echo creating $((nfiles-1)) symlinks to it
for n in $(seq 2 $nfiles); do MSYS=winsymlinks:nativestrict ln -s 1 $n; done
echo warming up cache
cat 1 >/dev/null
echo ok lets go
python3 "$sfx" -p39204 -e2dsa --dbd=yolo --exit=idx -lo=t -q "$@"
$pybin "$sfx" -p39204 -e2dsa --dbd=yolo --exit=idx -lo=t -q "$@" && err= || err=$?
[ $win ] && [ $err = 15 ] && err= # sigterm doesn't hook on windows, ah whatever
[ $err ] && echo ERROR $err && exit $err
echo and the results are...
$awk '/1 volumes in / {printf "%s MiB/s\n", 256*128/$(NF-1)}' <t
LC_ALL=C $awk '/1 volumes in / {s=$(NF-1); printf "speed: %.1f MiB/s (time=%.2fs)\n", '$totalsize'/s, s}' <t
echo deleting $td and exiting
@@ -52,16 +63,30 @@ echo deleting $td and exiting
# MiB/s @ cpu or device (copyparty, pythonver, distro/os) // comment
# 3887 @ Ryzen 5 4500U (cpp 1.9.5, nogil 3.9, fedora 39) // --hash-mt=6; laptop
# 3732 @ Ryzen 5 4500U (cpp 1.9.5, py 3.12.1, fedora 39) // --hash-mt=6; laptop
# 3608 @ Ryzen 5 4500U (cpp 1.9.5, py 3.11.5, fedora 38) // --hash-mt=6; laptop
# 2726 @ Ryzen 5 4500U (cpp 1.9.5, py 3.11.5, fedora 38) // --hash-mt=4 (old-default)
# 2202 @ Ryzen 5 4500U (cpp 1.9.5, py 3.11.5, docker-alpine 3.18.3) // ??? alpine slow
# 2719 @ Ryzen 5 4500U (cpp 1.9.5, py 3.11.2, docker-debian 12.1)
# 7746 @ mbp 2023 m3pro (cpp 1.9.5, py 3.11.7, macos 14.1) // --hash-mt=6
# 6687 @ mbp 2023 m3pro (cpp 1.9.5, py 3.11.7, macos 14.1) // --hash-mt=5 (default)
# 5544 @ Intel i5-12500 (cpp 1.9.5, py 3.11.2, debian 12.0) // --hash-mt=12; desktop
# 5197 @ Ryzen 7 3700X (cpp 1.9.5, py 3.9.18, freebsd 13.2) // --hash-mt=8; 2u server
# 4551 @ mbp 2020 m1 (cpp 1.9.5, py 3.11.7, macos 14.2.1)
# 4190 @ Ryzen 7 5800X (cpp 1.9.5, py 3.11.6, fedora 37) // --hash-mt=8 (vbox-VM on win10-17763.4974)
# 3028 @ Ryzen 7 5800X (cpp 1.9.5, py 3.11.6, fedora 37) // --hash-mt=5 (vbox-VM on win10-17763.4974)
# 2629 @ Ryzen 7 5800X (cpp 1.9.5, py 3.11.7, win10-ltsc-1809-17763.4974) // --hash-mt=5 (default)
# 2576 @ Ryzen 7 5800X (cpp 1.9.5, py 3.11.7, win10-ltsc-1809-17763.4974) // --hash-mt=8 (hello??)
# 2606 @ Ryzen 7 3700X (cpp 1.9.5, py 3.9.18, freebsd 13.2) // --hash-mt=4 (old-default)
# 1436 @ Ryzen 5 5500U (cpp 1.9.5, py 3.11.4, alpine 3.18.3) // nuc
# 1065 @ Pixel 7 (cpp 1.9.5, py 3.11.5, termux 2023-09)
# 945 @ Pi 5B v1.0 (cpp 1.9.5, py 3.11.6, alpine 3.19.0)
# 548 @ Pi 4B v1.5 (cpp 1.9.5, py 3.11.6, debian 11)
# 435 @ Pi 4B v1.5 (cpp 1.9.5, py 3.11.6, alpine 3.19.0)
# 212 @ Pi Zero2W v1.0 (cpp 1.9.5, py 3.11.6, alpine 3.19.0)
# 10.0 @ Pi Zero W v1.1 (cpp 1.9.5, py 3.11.6, alpine 3.19.0)
# notes,
# podman run --rm -it --shm-size 512m --entrypoint /bin/ash localhost/copyparty-min


@@ -1,11 +1,11 @@
FROM alpine:3.18
WORKDIR /z
ENV ver_asmcrypto=c72492f4a66e17a0e5dd8ad7874de354f3ccdaa5 \
ver_hashwasm=4.9.0 \
ver_hashwasm=4.10.0 \
ver_marked=4.3.0 \
ver_dompf=3.0.5 \
ver_dompf=3.0.8 \
ver_mde=2.18.0 \
ver_codemirror=5.65.12 \
ver_codemirror=5.65.16 \
ver_fontawesome=5.13.0 \
ver_prism=1.29.0 \
ver_zopfli=1.0.3
@@ -80,7 +80,7 @@ RUN cd asmcrypto.js-$ver_asmcrypto \
# build hash-wasm
RUN cd hash-wasm \
RUN cd hash-wasm/dist \
&& mv sha512.umd.min.js /z/dist/sha512.hw.js


@@ -1,6 +1,6 @@
all: $(addsuffix .gz, $(wildcard *.js *.css))
%.gz: %
pigz -11 -I 573 $<
pigz -11 -I 2048 $<
# pigz -11 -J 34 -I 100 -F < $< > $@.first


@@ -79,6 +79,15 @@ or using commandline arguments,
```
# faq
the following advice is best-effort and not guaranteed to be entirely correct
* q: starting a rootless container on debian 12 fails with `failed to register layer: lsetxattr user.overlay.impure /etc: operation not supported`
* a: docker's default rootless configuration on debian is to use the overlay2 storage driver; this does not work. Your options are to replace docker with podman (good choice), or to configure docker to use the `fuse-overlayfs` storage driver
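A minimal sketch of the `fuse-overlayfs` option, assuming the `fuse-overlayfs` package is installed and the rootless config lives at `~/.config/docker/daemon.json` (the restart step is left as a comment since it depends on how your daemon is managed):

```shell
# write a rootless-docker daemon.json selecting the fuse-overlayfs storage driver;
# the config path and restart method are assumptions -- adjust for your setup
mkdir -p ~/.config/docker
cat > ~/.config/docker/daemon.json <<'EOF'
{ "storage-driver": "fuse-overlayfs" }
EOF
# then: systemctl --user restart docker
```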
# build the images yourself
basically `./make.sh hclean pull img push` but see [devnotes.md](./devnotes.md)

scripts/logpack.sh Executable file

@@ -0,0 +1,73 @@
#!/bin/bash
set -e
# recompress logs so they decompress faster + save some space;
# * will not recurse into subfolders
# * each file in current folder gets recompressed to zstd; input file is DELETED
# * any xz-compressed logfiles are decompressed before converting to zstd
# * SHOULD ignore and skip files which are currently open; SHOULD be safe to run while copyparty is running
# for files larger than $cutoff, compress with `zstd -T0`
# (otherwise do several files in parallel (scales better))
cutoff=400M
# osx support:
# port install findutils gsed coreutils
command -v gfind >/dev/null &&
command -v gsed >/dev/null &&
command -v gsort >/dev/null && {
find() { gfind "$@"; }
sed() { gsed "$@"; }
sort() { gsort "$@"; }
}
packfun() {
local jobs=$1 fn="$2"
printf '%s\n' "$fn" | grep -qF .zst && return
local of="$(printf '%s\n' "$fn" | sed -r 's/\.(xz|txt)/.zst/')"
[ "$fn" = "$of" ] &&
of="$of.zst"
[ -e "$of" ] &&
echo "SKIP: output file exists: $of" &&
return
lsof -- "$fn" 2>/dev/null | grep -E .. &&
printf "SKIP: file in use: %s\n\n" "$fn" &&
return
# determine by header; old copyparty versions would produce xz output without .xz names
head -c3 "$fn" | grep -qF 7z &&
cmd="xz -dkc" || cmd="cat"
printf '<%s> T%d: %s\n' "$cmd" $jobs "$of"
$cmd <"$fn" >/dev/null || {
echo "ERROR: uncompress failed: $fn"
return
}
$cmd <"$fn" | zstd --long -19 -T$jobs >"$of"
touch -r "$fn" -- "$of"
cmp <($cmd <"$fn") <(zstd -d <"$of") || {
echo "ERROR: data mismatch: $of"
mv "$of"{,.BAD}
return
}
rm -- "$fn"
}
# do small files in parallel first (in descending size);
# each file can use 4 threads in case the cutoff is poor
export -f packfun
export -f sed 2>/dev/null || true
find -maxdepth 1 -type f -size -$cutoff -printf '%s %p\n' |
sort -nr | sed -r 's`[^ ]+ ``; s`^\./``' | tr '\n' '\0' |
xargs "$@" -0i -P$(nproc) bash -c 'packfun 4 "$@"' _ {}
# then the big ones, letting each file use the whole cpu
for f in *; do packfun 0 "$f"; done


@@ -77,13 +77,14 @@ function have() {
}
function load_env() {
. buildenv/bin/activate
have setuptools
have wheel
have build
have twine
have jinja2
have strip_hints
. buildenv/bin/activate || return 1
have setuptools &&
have wheel &&
have build &&
have twine &&
have jinja2 &&
have strip_hints &&
return 0 || return 1
}
load_env || {


@@ -26,8 +26,9 @@ help() { exec cat <<'EOF'
# _____________________________________________________________________
# core features:
#
# `no-ftp` saves ~33k by removing the ftp server and filetype detector,
# disabling --ftpd and --magic
# `no-ftp` saves ~30k by removing the ftp server, disabling --ftp
#
# `no-tfp` saves ~10k by removing the tftp server, disabling --tftp
#
# `no-smb` saves ~3.5k by removing the smb / cifs server
#
@@ -114,6 +115,7 @@ while [ ! -z "$1" ]; do
gz) use_gz=1 ; ;;
gzz) shift;use_gzz=$1;use_gz=1; ;;
no-ftp) no_ftp=1 ; ;;
no-tfp) no_tfp=1 ; ;;
no-smb) no_smb=1 ; ;;
no-zm) no_zm=1 ; ;;
no-fnt) no_fnt=1 ; ;;
@@ -165,7 +167,8 @@ necho() {
[ $repack ] && {
old="$tmpdir/pe-copyparty.$(id -u)"
echo "repack of files in $old"
cp -pR "$old/"*{py2,py37,j2,copyparty} .
cp -pR "$old/"*{py2,py37,magic,j2,copyparty} .
cp -pR "$old/"*partftpy . || true
cp -pR "$old/"*ftp . || true
}
@@ -221,6 +224,16 @@ necho() {
mkdir ftp/
mv pyftpdlib ftp/
necho collecting partftpy
f="../build/partftpy-0.2.0.tar.gz"
[ -e "$f" ] ||
(url=https://files.pythonhosted.org/packages/64/4a/360dde1e7277758a4ccb0d6434ec661042d9d745aa6c3baa9ec0699df3e9/partftpy-0.2.0.tar.gz;
wget -O$f "$url" || curl -L "$url" >$f)
tar -zxf $f
mv partftpy-*/partftpy .
rm -rf partftpy-* partftpy/bin
necho collecting python-magic
v=0.4.27
f="../build/python-magic-$v.tar.gz"
@@ -234,7 +247,6 @@ necho() {
rm -rf python-magic-*
rm magic/compat.py
iawk '/^def _add_compat/{o=1} !o; /^_add_compat/{o=0}' magic/__init__.py
mv magic ftp/ # doesn't provide a version label anyways
# enable this to dynamically remove type hints at startup,
# in case a future python version can use them for performance
@@ -409,8 +421,10 @@ iawk '/^ {0,4}[^ ]/{s=0}/^ {4}def (serve_forever|_loop)/{s=1}!s' ftp/pyftpdlib/s
rm -f ftp/pyftpdlib/{__main__,prefork}.py
[ $no_ftp ] &&
rm -rf copyparty/ftpd.py ftp &&
sed -ri '/\.ftp/d' copyparty/svchub.py
rm -rf copyparty/ftpd.py ftp
[ $no_tfp ] &&
rm -rf copyparty/tftpd.py partftpy
[ $no_smb ] &&
rm -f copyparty/smbd.py
@@ -584,7 +598,7 @@ nf=$(ls -1 "$zdir"/arc.* 2>/dev/null | wc -l)
echo gen tarlist
for d in copyparty j2 py2 py37 ftp; do find $d -type f; done | # strip_hints
for d in copyparty partftpy magic j2 py2 py37 ftp; do find $d -type f || true; done | # strip_hints
sed -r 's/(.*)\.(.*)/\2 \1/' | LC_ALL=C sort |
sed -r 's/([^ ]*) (.*)/\2.\1/' | grep -vE '/list1?$' > list1


@@ -28,5 +28,5 @@ ba91ab0518c61eff13e5612d9e6b532940813f6b56e6ed81ea6c7c4d45acee4d98136a383a250675
7f8f4daa4f4f2dbf24cdd534b2952ee3fba6334eb42b37465ccda3aa1cccc3d6204aa6bfffb8a83bf42ec59c702b5b5247d4c8ee0d4df906334ae53072ef8c4c MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl
8a6e2b13a2ec4ef914a5d62aad3db6464d45e525a82e07f6051ed10474eae959069e165dba011aefb8207cdfd55391d73d6f06362c7eb247b08763106709526e mutagen-1.47.0-py3-none-any.whl
656015f5cc2c04aa0653ee5609c39a7e5f0b6a58c84fe26b20bd070c52d20b4effb810132f7fb771168483e9fd975cc3302837dd7a1a687ee058b0460c857cc4 packaging-23.2-py3-none-any.whl
6401616fdfdd720d1aaa9a0ed1398d00664b28b6d84517dff8d1f9c416452610c6afa64cfb012a78e61d1cf4f6d0784eca6e7610957859e511f15bc6f3b3bd53 Pillow-10.1.0-cp311-cp311-win_amd64.whl
2e6a57bab45b5a825a2073780c73980cbf5aafd99dc3b28660ea3f5f658f04668cd0f01c7de0bb79e362ff4e3b8f01dd4f671d3a2e054d3071baefdcf0b0e4ba python-3.11.7-amd64.exe
424e20dc7263a31d524307bc39ed755a9dd82f538086fff68d98dd97e236c9b00777a8ac2e3853081b532b0e93cef44983e74d0ab274877440e8b7341b19358a pillow-10.2.0-cp311-cp311-win_amd64.whl
e6bdbae1affd161e62fc87407c912462dfe875f535ba9f344d0c4ade13715c947cd3ae832eff60f1bad4161938311d06ac8bc9b52ef203f7b0d9de1409f052a5 python-3.11.8-amd64.exe


@@ -28,8 +28,8 @@ fns=(
MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl
mutagen-1.47.0-py3-none-any.whl
packaging-23.2-py3-none-any.whl
Pillow-10.1.0-cp311-cp311-win_amd64.whl
python-3.11.7-amd64.exe
pillow-10.2.0-cp311-cp311-win_amd64.whl
python-3.11.8-amd64.exe
)
[ $w7 ] && fns+=(
pyinstaller-5.13.2-py3-none-win32.whl


@@ -54,6 +54,7 @@ copyparty/sutil.py,
copyparty/svchub.py,
copyparty/szip.py,
copyparty/tcpsrv.py,
copyparty/tftpd.py,
copyparty/th_cli.py,
copyparty/th_srv.py,
copyparty/u2idx.py,


@@ -262,7 +262,7 @@ def unpack():
final = opj(top, name)
san = opj(final, "copyparty/up2k.py")
for suf in range(0, 9001):
withpid = "{}.{}.{}".format(name, os.getpid(), suf)
withpid = "%s.%d.%s" % (name, os.getpid(), suf)
mine = opj(top, withpid)
if not ofe(mine):
break
@@ -285,8 +285,8 @@ def unpack():
ck = hashfile(tar)
if ck != CKSUM:
t = "\n\nexpected {} ({} byte)\nobtained {} ({} byte)\nsfx corrupt"
raise Exception(t.format(CKSUM, SIZE, ck, sz))
t = "\n\nexpected %s (%d byte)\nobtained %s (%d byte)\nsfx corrupt"
raise Exception(t % (CKSUM, SIZE, ck, sz))
with tarfile.open(tar, "r:bz2") as tf:
# this is safe against traversal


@@ -84,7 +84,7 @@ args = {
"version": about["__version__"],
"description": (
"Portable file server with accelerated resumable uploads, "
+ "deduplication, WebDAV, FTP, zeroconf, media indexer, "
+ "deduplication, WebDAV, FTP, TFTP, zeroconf, media indexer, "
+ "video thumbnails, audio transcoding, and write-only folders"
),
"long_description": long_description,
@@ -111,6 +111,7 @@ args = {
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: Jython",
"Programming Language :: Python :: Implementation :: PyPy",
"Operating System :: OS Independent",
"Environment :: Console",
"Environment :: No Input/Output (Daemon)",
"Intended Audience :: End Users/Desktop",
@@ -140,6 +141,7 @@ args = {
"audiotags": ["mutagen"],
"ftpd": ["pyftpdlib"],
"ftps": ["pyftpdlib", "pyopenssl"],
"tftpd": ["partftpy>=0.2.0"],
"pwhash": ["argon2-cffi"],
},
"entry_points": {"console_scripts": ["copyparty = copyparty.__main__:main"]},


@@ -11,8 +11,8 @@ import unittest
from copyparty.authsrv import AuthSrv
from copyparty.httpcli import HttpCli
from copyparty.up2k import Up2k
from copyparty.u2idx import U2idx
from copyparty.up2k import Up2k
from tests import util as tu
from tests.util import Cfg


@@ -43,8 +43,8 @@ if MACOS:
from copyparty.__init__ import E
from copyparty.__main__ import init_E
from copyparty.util import FHC, Garda, Unrecv
from copyparty.u2idx import U2idx
from copyparty.util import FHC, Garda, Unrecv
init_E(E)
@@ -110,7 +110,7 @@ class Cfg(Namespace):
def __init__(self, a=None, v=None, c=None, **ka0):
ka = {}
ex = "daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp ed emp exp force_js getmod grid hardlink ih ihead magic never_symlink nid nih no_acode no_athumb no_dav no_dedup no_del no_dupe no_lifetime no_logues no_mv no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw rand smb srch_dbg stats th_no_crop vague_403 vc ver xdev xlink xvol"
ex = "daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp ed emp exp force_js getmod grid hardlink ih ihead magic never_symlink nid nih no_acode no_athumb no_dav no_dedup no_del no_dupe no_lifetime no_logues no_mv no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw q rand smb srch_dbg stats th_no_crop vague_403 vc ver xdev xlink xvol"
ka.update(**{k: False for k in ex.split()})
ex = "dotpart dotsrch no_dhash no_fastboot no_rescan no_sendfile no_voldump re_dhash plain_ip"
@@ -152,6 +152,7 @@ class Cfg(Namespace):
mte={"a": True},
mth={},
mtp=[],
rm_retry="0/0",
s_wr_sz=512 * 1024,
sort="href",
srch_hits=99999,
@@ -242,6 +243,7 @@ class VHttpConn(object):
self.log_func = log
self.log_src = "a"
self.mutex = threading.Lock()
self.u2mutex = threading.Lock()
self.nbyte = 0
self.nid = None
self.nreq = -1