Compare commits


25 Commits

Author SHA1 Message Date
ed b3cecabca3 v1.13.6 2024-07-29 20:28:51 +00:00
ed 662541c64c audio-player: show status while loading 2024-07-29 20:14:39 +00:00
ed 225bd80ea8 up2k.js: fix overshoot in chunk stitcher 2024-07-29 19:19:22 +00:00
ed 85e54980cc up2k.js: set timeouts for uploads
in the event that an upload chunk gets stuck, the js would
never stop waiting for a response, requiring a page reload

improves reliability when running behind a reverse-proxy
which is configured to never timeout requests (can make
sense when combined with other services on the same box)
2024-07-29 19:17:03 +00:00
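(a minimal sketch of the same pattern in python; the actual fix lives in up2k.js, but copyparty's bundled u2c.py client uses the `requests` library like this -- the function name and timeout values below are illustrative, not copyparty's:)

    import requests

    def post_chunk(url, headers, body, tries=3):
        # connect-timeout 10s, read-timeout 180s: if a chunk gets stuck
        # (e.g. behind a reverse-proxy that never times out requests),
        # abort and retry instead of waiting forever
        for attempt in range(tries):
            try:
                r = requests.post(url, headers=headers, data=body, timeout=(10, 180))
                r.raise_for_status()
                return r
            except requests.exceptions.RequestException as ex:
                if attempt == tries - 1:
                    raise
                print("chunk failed (%r), retrying" % (ex,))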
ed a19a0fa9f3 fix modal wordwrap in firefox;
with overflow:auto, firefox picks the div-width before estimating
the height, causing it to undershoot by the scrollbar width
and then messing up the text alignment

fix: conditionally set overflow-y:scroll using js
2024-07-29 18:04:35 +00:00
ed 9bb6e0dc62 misc ux:
* wait until page (au) has loaded to register hotkeys
* hotkey `m` would grow sidebar if tree was minimized
* more exact warning about num.parallel uploads
* keep more console logs in memory
* message phrasing
2024-07-29 17:59:34 +00:00
ed 15ddcf53e7 add bsod theme 2024-07-26 22:09:59 +00:00
ed 6b54972ec0 update comparison vs similar software:
* general changes:
  * upload speed comparisons considering v1.13.5

* hfs2:
  * dead project with unfixed vulnerabilities

* hfs3:
  * has replaced hfs2
  * uploads are now resumable
  * add new functionality:
    * write-only folders
    * unmap subfolders
    * move and delete files
    * folder-rproxy
    * themes
    * basic audio player, image viewer

* filebrowser:
  * uploads are now parallelized, resumable, segmented
    * but single large files are not accelerated
  * can listen on unix sockets
  * folder-rproxy is supported
  * more cpu efficient than copyparty
2024-07-26 19:46:03 +00:00
ed 0219eada23 cleanup: strip trailing whitespace 2024-07-26 19:33:56 +00:00
ed 8916bce306 u2c fixes:
* `--sz` was num.chunks, not the intended MiB
* crash on exit with `-z` and no modified files
* summary upload elapsed-time could exceed wallclock
2024-07-26 19:28:47 +00:00
ed 99edba4fd9 change xm examples to reject users without write-access; #68 2024-07-25 19:23:08 +00:00
ed 64de3e01e8 update pkgs to 1.13.5 2024-07-22 23:48:24 +00:00
ed 8222ccc40b v1.13.5 2024-07-22 23:23:53 +00:00
ed dc449bf8b0 fix grid toolbar undocking after viewing a pic/vid 2024-07-22 23:09:25 +00:00
ed ef0ecf878b recommend rclone over davfs2; closes #90 2024-07-22 22:46:24 +00:00
ed 53f1e3c91d ui option to play video as audio
audio extraction happens serverside to opus or mp3
depending on browser support

remuxing (extracting audio without transcoding)
is currently not supported, and is not planned
2024-07-22 22:30:21 +00:00
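(the extraction happens with ffmpeg on the server; a hedged sketch of the idea in python -- the flags approximate an opus transcode and are not copyparty's exact arguments:)

    import subprocess

    def video_to_opus(src, dst):
        # -vn drops the video stream; the audio is fully transcoded to opus
        # (remuxing with -c:a copy would be cheaper, but per the commit
        #  message that is intentionally not supported)
        cmd = [
            "ffmpeg", "-nostdin", "-v", "error",
            "-i", src,
            "-vn", "-map", "a:0",
            "-c:a", "libopus", "-b:a", "128k",
            dst,
        ]
        subprocess.check_call(cmd)

    video_to_opus("talk.mkv", "talk.opus")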
ed eeef80919f css-fix for firefox52 (centos6) 2024-07-22 20:59:05 +00:00
ed 987bce2182 u2c fixes:
* don't stitch across deduplicated blocks
* print speed/time for hash/upload
* more compact json in handshakes
2024-07-22 20:55:32 +00:00
ed b511d686f0 up2k fixes:
* progress donuts should include inflight bytes
* changes to stitch-size in settings didn't apply until next refresh
* serverlog was too verbose; truncate chunk hashes
* mention absolute cloudflare limit in readme
2024-07-22 19:06:01 +00:00
ed 132a83501e add chunk stitching; twice as fast long-distance uploads:
rather than sending each file chunk as a separate HTTP request,
sibling chunks will now be fused together into larger HTTP POSTs
which results in unreasonably huge speed boosts on some routes
( `2.6x` from Norway to US-East,  `1.6x` from US-West to Finland )

the `x-up2k-hash` request header now takes a comma-separated list
of chunk hashes, which must all be sibling chunks, resulting in
one large consecutive range of file data as the post body

a new global-option `--u2sz`, default `1,64,96`, sets the target
request size as 64 MiB, allowing the settings ui to specify any
value between 1 and 96 MiB, which is cloudflare's max value

this does not cause any issues for resumable uploads; thanks to the
streaming HTTP POST parser, each chunk will be verified and written
to disk as they arrive, meaning only the untransmitted chunks will
have to be resent in the event of a connection drop -- of course
assuming there are no misconfigured WAFs or caching-proxies

the previous up2k approach of uploading each chunk in a separate HTTP
POST was inefficient in many real-world scenarios, mainly due to TCP
window-scaling behaving erratically in some IXPs / along some routes

a particular link from Norway to Virginia,US is unusably slow for
the first 4 MiB, only reaching optimal speeds after 100 MiB, and
then immediately resets the scale when the request has been sent;
connection reuse does not help in this case

on this route, the basic-uploader was somehow faster than up2k
with 6 parallel uploads; only time i've seen this
2024-07-21 23:35:37 +00:00
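(a minimal sketch of the client side of this protocol, simplified from the u2c.py changes below; error handling and the fallback to unstitched uploads are omitted, and the helper name is made up:)

    import requests

    def upload_stitched(url, wark, chunks, fobj, max_post=64 * 1024 * 1024):
        # chunks = ordered [(hash, offset, size), ...] still needed by the
        # server; fuse consecutive siblings until the target POST size is
        # reached, then send them as one contiguous range of file data
        batch = [chunks[0]]
        for c in chunks[1:]:
            prev = batch[-1]
            if c[1] != prev[1] + prev[2]:  # not a sibling; stop stitching
                break
            if sum(x[2] for x in batch) + c[2] > max_post:
                break
            batch.append(c)

        fobj.seek(batch[0][1])
        body = fobj.read(sum(x[2] for x in batch))
        headers = {
            "X-Up2k-Hash": ",".join(x[0] for x in batch),  # comma-separated siblings
            "X-Up2k-Wark": wark,
            "Content-Type": "application/octet-stream",
        }
        requests.post(url, headers=headers, data=body).raise_for_status()
        return len(batch)  # chunks consumed; caller loops over the rest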
ed e565ad5f55 better errors through broker 2024-07-21 20:36:50 +00:00
ed f955d2bd58 dangit 2024-07-20 22:28:40 +00:00
ed 5953399090 add helptext exporters (html, txt) 2024-07-17 23:06:01 +00:00
ed d26a944d95 hooks: add cache-warmer 2024-07-17 21:00:59 +00:00
ed 50dac15568 update pkgs to 1.13.4 2024-07-16 05:48:45 +00:00
33 changed files with 1006 additions and 247 deletions

View File

@@ -209,7 +209,7 @@ also see [comparison to similar software](./docs/versus.md)
 * upload
   * ☑ basic: plain multipart, ie6 support
   * ☑ [up2k](#uploading): js, resumable, multithreaded
-    * unaffected by cloudflare's max-upload-size (100 MiB)
+    * **no filesize limit!** ...unless you use Cloudflare, then it's 383.9 GiB
   * ☑ stash: simple PUT filedropper
   * ☑ filename randomizer
   * ☑ write-only folders
@@ -225,6 +225,7 @@ also see [comparison to similar software](./docs/versus.md)
 * ☑ [navpane](#navpane) (directory tree sidebar)
 * ☑ file manager (cut/paste, delete, [batch-rename](#batch-rename))
 * ☑ audio player (with [OS media controls](https://user-images.githubusercontent.com/241032/215347492-b4250797-6c90-4e09-9a4c-721edf2fb15c.png) and opus/mp3 transcoding)
+* ☑ play video files as audio (converted on server)
 * ☑ image gallery with webm player
 * ☑ textfile browser with syntax hilighting
 * ☑ [thumbnails](#thumbnails)
@@ -646,6 +647,7 @@ up2k has several advantages:
 * uploads resume if you reboot your browser or pc, just upload the same files again
 * server detects any corruption; the client reuploads affected chunks
 * the client doesn't upload anything that already exists on the server
+* no filesize limit unless imposed by a proxy, for example Cloudflare, which blocks uploads over 383.9 GiB
 * much higher speeds than ftp/scp/tarpipe on some internet connections (mainly american ones) thanks to parallel connections
 * the last-modified timestamp of the file is preserved
@@ -800,6 +802,7 @@ some hilights:
 * OS integration; control playback from your phone's lockscreen ([windows](https://user-images.githubusercontent.com/241032/233213022-298a98ba-721a-4cf1-a3d4-f62634bc53d5.png) // [iOS](https://user-images.githubusercontent.com/241032/142711926-0700be6c-3e31-47b3-9928-53722221f722.png) // [android](https://user-images.githubusercontent.com/241032/233212311-a7368590-08c7-4f9f-a1af-48ccf3f36fad.png))
 * shows the audio waveform in the seekbar
 * not perfectly gapless but can get really close (see settings + eq below); good enough to enjoy gapless albums as intended
+* videos can be played as audio, without wasting bandwidth on the video

 click the `play` link next to an audio file, or copy the link target to [share it](https://a.ocv.me/pub/demo/music/Ubiktune%20-%20SOUNDSHOCK%202%20-%20FM%20FUNK%20TERRROR!!/#af-1fbfba61&t=18) (optionally with a timestamp to start playing from, like that example does)
@@ -987,7 +990,7 @@ some recommended FTP / FTPS clients; `wark` = example password:
 ## webdav server

-with read-write support, supports winXP and later, macos, nautilus/gvfs
+with read-write support, supports winXP and later, macos, nautilus/gvfs ... a great way to [access copyparty straight from the file explorer in your OS](#mount-as-drive)

 click the [connect](http://127.0.0.1:3923/?hc) button in the control-panel to see connection instructions for windows, linux, macos
@@ -1798,7 +1801,7 @@ alternatively, some alternatives roughly sorted by speed (unreproducible benchma
 * [rclone-http](./docs/rclone.md) (26s), read-only
 * [partyfuse.py](./bin/#partyfusepy) (35s), read-only
 * [rclone-ftp](./docs/rclone.md) (47s), read/WRITE
-* davfs2 (103s), read/WRITE, *very fast* on small files
+* davfs2 (103s), read/WRITE
 * [win10-webdav](#webdav-server) (138s), read/WRITE
 * [win10-smb2](#smb-server) (387s), read/WRITE

View File

@@ -2,7 +2,7 @@ standalone programs which are executed by copyparty when an event happens (uploa

 these programs either take zero arguments, or a filepath (the affected file), or a json message with filepath + additional info

-run copyparty with `--help-hooks` for usage details / hook type explanations (xbu/xau/xiu/xbr/xar/xbd/xad)
+run copyparty with `--help-hooks` for usage details / hook type explanations (xm/xbu/xau/xiu/xbr/xar/xbd/xad/xban)

 > **note:** in addition to event hooks (the stuff described here), copyparty has another api to run your programs/scripts while providing way more information such as audio tags / video codecs / etc and optionally daisychaining data between scripts in a processing pipeline; if that's what you want then see [mtp plugins](../mtag/) instead

@@ -13,6 +13,7 @@ run copyparty with `--help-hooks` for usage details / hook type explanations (xb
 * [image-noexif.py](image-noexif.py) removes image exif by overwriting / directly editing the uploaded file
 * [discord-announce.py](discord-announce.py) announces new uploads on discord using webhooks ([example](https://user-images.githubusercontent.com/241032/215304439-1c1cb3c8-ec6f-4c17-9f27-81f969b1811a.png))
 * [reject-mimetype.py](reject-mimetype.py) rejects uploads unless the mimetype is acceptable
+* [into-the-cache-it-goes.py](into-the-cache-it-goes.py) avoids bugs in caching proxies by immediately downloading each file that is uploaded

 # upload batches

View File

@@ -0,0 +1,140 @@
#!/usr/bin/env python3

import sys
import json
import shutil
import platform
import subprocess as sp
from urllib.parse import quote


_ = r"""
try to avoid race conditions in caching proxies
(primarily cloudflare, but probably others too)
by means of the most obvious solution possible:

just as each file has finished uploading, use
the server's external URL to download the file
so that it ends up in the cache, warm and snug

this intentionally delays the upload response
as it waits for the file to finish downloading
before copyparty is allowed to return the URL

NOTE: you must edit this script before use,
 replacing https://example.com with your URL

NOTE: if the files are only accessible with a
 password and/or filekey, you must also add
 a cromulent password in the PASSWORD field

NOTE: needs either wget, curl, or "requests":
 python3 -m pip install --user -U requests

example usage as global config:
    --xau j,t10,bin/hooks/into-the-cache-it-goes.py

parameters explained,
    xau = execute after upload
    j   = this hook needs upload information as json (not just the filename)
    t10 = abort download and continue if it takes longer than 10sec

example usage as a volflag (per-volume config):
    -v srv/inc:inc:r:rw,ed:xau=j,t10,bin/hooks/into-the-cache-it-goes.py
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    (share filesystem-path srv/inc as volume /inc,
     readable by everyone, read-write for user 'ed',
     running this plugin on all uploads with params explained above)

example usage as a volflag in a copyparty config file:
    [/inc]
      srv/inc
      accs:
        r: *
        rw: ed
      flags:
        xau: j,t10,bin/hooks/into-the-cache-it-goes.py
"""

# replace this with your site's external URL
# (including the :portnumber if necessary)
SITE_URL = "https://example.com"

# if downloading is protected by passwords or filekeys,
# specify a valid password between the quotes below:
PASSWORD = ""

# if file is larger than this, skip download
MAX_MEGABYTES = 8

# =============== END OF CONFIG ===============


WINDOWS = platform.system() == "Windows"


def main():
    fun = download_with_python
    if shutil.which("curl"):
        fun = download_with_curl
    elif shutil.which("wget"):
        fun = download_with_wget

    inf = json.loads(sys.argv[1])
    if inf["sz"] > 1024 * 1024 * MAX_MEGABYTES:
        print("[into-the-cache] file is too large; will not download")
        return

    file_url = "/"
    if inf["vp"]:
        file_url += inf["vp"] + "/"
    file_url += inf["ap"].replace("\\", "/").split("/")[-1]
    file_url = SITE_URL.rstrip("/") + quote(file_url, safe=b"/")

    print("[into-the-cache] %s(%s)" % (fun.__name__, file_url))
    fun(file_url, PASSWORD.strip())

    print("[into-the-cache] Download OK")


def download_with_curl(url, pw):
    cmd = ["curl"]
    if pw:
        cmd += ["-HPW:%s" % (pw,)]

    nah = sp.DEVNULL
    sp.check_call(cmd + [url], stdout=nah, stderr=nah)


def download_with_wget(url, pw):
    cmd = ["wget", "-O"]
    cmd += ["nul" if WINDOWS else "/dev/null"]
    if pw:
        cmd += ["--header=PW:%s" % (pw,)]

    nah = sp.DEVNULL
    sp.check_call(cmd + [url], stdout=nah, stderr=nah)


def download_with_python(url, pw):
    import requests

    headers = {}
    if pw:
        headers["PW"] = pw

    with requests.get(url, headers=headers, stream=True) as r:
        r.raise_for_status()
        for _ in r.iter_content(chunk_size=1024 * 256):
            pass


if __name__ == "__main__":
    main()

View File

@@ -23,17 +23,18 @@ because the keyword "anime" is in the DESTS config below

 needs python3

 example usage as global config (not a good idea):
-    python copyparty-sfx.py --xm f,j,t60,bin/hooks/qbittorrent-magnet.py
+    python copyparty-sfx.py --xm aw,f,j,t60,bin/hooks/qbittorrent-magnet.py

 parameters explained,
     xm = execute on message (📟)
+    aw = only users with write-access can use this
     f = fork; don't delay other hooks while this is running
     j = provide message information as json (not just the text)
     t60 = abort if qbittorrent has to think about it for more than 1 min

 example usage as a volflag (per-volume config, much better):
-    -v srv/qb:qb:A,ed:c,xm=f,j,t60,bin/hooks/qbittorrent-magnet.py
-                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+    -v srv/qb:qb:A,ed:c,xm=aw,f,j,t60,bin/hooks/qbittorrent-magnet.py
+                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
     (share filesystem-path srv/qb as volume /qb with Admin for user 'ed',
      running this plugin on all messages with the params explained above)
@@ -44,7 +45,7 @@ example usage as a volflag in a copyparty config file:
     accs:
       A: ed
     flags:
-      xm: f,j,t60,bin/hooks/qbittorrent-magnet.py
+      xm: aw,f,j,t60,bin/hooks/qbittorrent-magnet.py

 the volflag examples only kicks in if you send the torrent magnet
 while you're in the /qb folder (or any folder below there)

View File

@@ -12,18 +12,19 @@ application/x-www-form-urlencoded (for example using the
 📟 message-to-server-log in the web-ui)

 example usage as global config:
-    --xm f,j,t3600,bin/hooks/wget.py
+    --xm aw,f,j,t3600,bin/hooks/wget.py

 parameters explained,
     xm = execute on message-to-server-log
+    aw = only users with write-access can use this
     f = fork; don't delay other hooks while this is running
     j = provide message information as json (not just the text)
     c3 = mute all output
     t3600 = timeout and abort download after 1 hour

 example usage as a volflag (per-volume config):
-    -v srv/inc:inc:r:rw,ed:c,xm=f,j,t3600,bin/hooks/wget.py
-                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+    -v srv/inc:inc:r:rw,ed:c,xm=aw,f,j,t3600,bin/hooks/wget.py
+                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
     (share filesystem-path srv/inc as volume /inc,
      readable by everyone, read-write for user 'ed',
@@ -36,7 +37,7 @@ example usage as a volflag in a copyparty config file:
       r: *
       rw: ed
     flags:
-      xm: f,j,t3600,bin/hooks/wget.py
+      xm: aw,f,j,t3600,bin/hooks/wget.py

 the volflag examples only kicks in if you send the message
 while you're in the /inc folder (or any folder below there)

View File

@@ -1,8 +1,8 @@
 #!/usr/bin/env python3
 from __future__ import print_function, unicode_literals

-S_VERSION = "1.18"
-S_BUILD_DT = "2024-06-01"
+S_VERSION = "1.21"
+S_BUILD_DT = "2024-07-26"

 """
 u2c.py: upload to copyparty
@@ -20,6 +20,7 @@ import sys
 import stat
 import math
 import time
+import json
 import atexit
 import signal
 import socket
@@ -79,7 +80,7 @@ req_ses = requests.Session()


 class Daemon(threading.Thread):
-    def __init__(self, target, name = None, a = None):
+    def __init__(self, target, name=None, a=None):
         threading.Thread.__init__(self, name=name)
         self.a = a or ()
         self.fun = target
@@ -110,18 +111,22 @@ class File(object):
         # set by get_hashlist
         self.cids = []  # type: list[tuple[str, int, int]]  # [ hash, ofs, sz ]
         self.kchunks = {}  # type: dict[str, tuple[int, int]]  # hash: [ ofs, sz ]
+        self.t_hash = 0.0  # type: float

         # set by handshake
         self.recheck = False  # duplicate; redo handshake after all files done
         self.ucids = []  # type: list[str]  # chunks which need to be uploaded
         self.wark = ""  # type: str
         self.url = ""  # type: str
-        self.nhs = 0
+        self.nhs = 0  # type: int

         # set by upload
+        self.t0_up = 0.0  # type: float
+        self.t1_up = 0.0  # type: float
+        self.nojoin = 0  # type: int
         self.up_b = 0  # type: int
         self.up_c = 0  # type: int
-        self.cd = 0
+        self.cd = 0  # type: int

         # t = "size({}) lmod({}) top({}) rel({}) abs({}) name({})\n"
         # eprint(t.format(self.size, self.lmod, self.top, self.rel, self.abs, self.name))
@@ -130,10 +135,20 @@ class File(object):
 class FileSlice(object):
     """file-like object providing a fixed window into a file"""

-    def __init__(self, file, cid):
+    def __init__(self, file, cids):
         # type: (File, str) -> None

-        self.car, self.len = file.kchunks[cid]
+        self.file = file
+        self.cids = cids
+
+        self.car, tlen = file.kchunks[cids[0]]
+        for cid in cids[1:]:
+            ofs, clen = file.kchunks[cid]
+            if ofs != self.car + tlen:
+                raise Exception(9)
+            tlen += clen
+        self.len = tlen
+
         self.cdr = self.car + self.len
         self.ofs = 0  # type: int
         self.f = open(file.abs, "rb", 512 * 1024)
@@ -357,7 +372,7 @@ def undns(url):
     usp = urlsplit(url)
     hn = usp.hostname
     gai = None
-    eprint("resolving host [{0}] ...".format(hn), end="")
+    eprint("resolving host [%s] ..." % (hn,))
     try:
         gai = socket.getaddrinfo(hn, None)
         hn = gai[0][4][0]
@@ -375,7 +390,7 @@ def undns(url):
     usp = usp._replace(netloc=hn)
     url = urlunsplit(usp)
-    eprint(" {0}".format(url))
+    eprint(" %s\n" % (url,))
     return url
@@ -518,6 +533,8 @@ def get_hashlist(file, pcb, mth):
     file_ofs = 0
     ret = []
     with open(file.abs, "rb", 512 * 1024) as f:
+        t0 = time.time()
+
         if mth and file.size >= 1024 * 512:
             ret = mth.hash(f, file.size, chunk_sz, pcb, file)
             file_rem = 0
@@ -544,10 +561,12 @@ def get_hashlist(file, pcb, mth):
         if pcb:
             pcb(file, file_ofs)

+    file.t_hash = time.time() - t0
     file.cids = ret
     file.kchunks = {}
     for k, v1, v2 in ret:
-        file.kchunks[k] = [v1, v2]
+        if k not in file.kchunks:
+            file.kchunks[k] = [v1, v2]


 def handshake(ar, file, search):
@@ -589,7 +608,8 @@ def handshake(ar, file, search):
     sc = 600
     txt = ""
     try:
-        r = req_ses.post(url, headers=headers, json=req)
+        zs = json.dumps(req, separators=(",\n", ": "))
+        r = req_ses.post(url, headers=headers, data=zs)
         sc = r.status_code
         txt = r.text
         if sc < 400:
@@ -636,13 +656,13 @@ def handshake(ar, file, search):
     return r["hash"], r["sprs"]


-def upload(file, cid, pw, stats):
-    # type: (File, str, str, str) -> None
-    """upload one specific chunk, `cid` (a chunk-hash)"""
+def upload(fsl, pw, stats):
+    # type: (FileSlice, str, str) -> None
+    """upload a range of file data, defined by one or more `cid` (chunk-hash)"""

     headers = {
-        "X-Up2k-Hash": cid,
-        "X-Up2k-Wark": file.wark,
+        "X-Up2k-Hash": ",".join(fsl.cids),
+        "X-Up2k-Wark": fsl.file.wark,
         "Content-Type": "application/octet-stream",
     }
@@ -652,15 +672,24 @@ def upload(file, cid, pw, stats):
     if pw:
         headers["Cookie"] = "=".join(["cppwd", pw])

-    f = FileSlice(file, cid)
     try:
-        r = req_ses.post(file.url, headers=headers, data=f)
+        r = req_ses.post(fsl.file.url, headers=headers, data=fsl)
+
+        if r.status_code == 400:
+            txt = r.text
+            if (
+                "already being written" in txt
+                or "already got that" in txt
+                or "only sibling chunks" in txt
+            ):
+                fsl.file.nojoin = 1
+
         if not r:
             raise Exception(repr(r))

         _ = r.content
     finally:
-        f.f.close()
+        fsl.f.close()


 class Ctl(object):
@@ -724,6 +753,9 @@ class Ctl(object):
         if ar.safe:
             self._safe()
         else:
+            self.at_hash = 0.0
+            self.at_up = 0.0
+            self.at_upr = 0.0
             self.hash_f = 0
             self.hash_c = 0
             self.hash_b = 0
@@ -743,7 +775,7 @@ class Ctl(object):
         self.mutex = threading.Lock()
         self.q_handshake = Queue()  # type: Queue[File]
-        self.q_upload = Queue()  # type: Queue[tuple[File, str]]
+        self.q_upload = Queue()  # type: Queue[FileSlice]

         self.st_hash = [None, "(idle, starting...)"]  # type: tuple[File, int]
         self.st_up = [None, "(idle, starting...)"]  # type: tuple[File, int]
@@ -788,7 +820,8 @@ class Ctl(object):
             for nc, cid in enumerate(hs):
                 print("  {0} up {1}".format(ncs - nc, cid))
                 stats = "{0}/0/0/{1}".format(nf, self.nfiles - nf)
-                upload(file, cid, self.ar.a, stats)
+                fslice = FileSlice(file, [cid])
+                upload(fslice, self.ar.a, stats)

             print("  ok!")
             if file.recheck:
@@ -797,7 +830,7 @@ class Ctl(object):
         if not self.recheck:
             return

-        eprint("finalizing {0} duplicate files".format(len(self.recheck)))
+        eprint("finalizing %d duplicate files\n" % (len(self.recheck),))
         for file in self.recheck:
             handshake(self.ar, file, search)
@@ -871,10 +904,17 @@ class Ctl(object):
             t = "{0} eta @ {1}/s, {2}, {3}# left".format(self.eta, spd, sleft, nleft)
             eprint(txt + "\033]0;{0}\033\\\r{0}{1}".format(t, tail))

+        if self.hash_b and self.at_hash:
+            spd = humansize(self.hash_b / self.at_hash)
+            eprint("\nhasher: %.2f sec, %s/s\n" % (self.at_hash, spd))
+        if self.up_b and self.at_up:
+            spd = humansize(self.up_b / self.at_up)
+            eprint("upload: %.2f sec, %s/s\n" % (self.at_up, spd))
+
         if not self.recheck:
             return

-        eprint("finalizing {0} duplicate files".format(len(self.recheck)))
+        eprint("finalizing %d duplicate files\n" % (len(self.recheck),))
         for file in self.recheck:
             handshake(self.ar, file, False)
@@ -1060,21 +1100,62 @@ class Ctl(object):
                 self.handshaker_busy -= 1

             if not hs:
-                kw = "uploaded" if file.up_b else "   found"
-                print("{0} {1}".format(kw, upath))
-            for cid in hs:
-                self.q_upload.put([file, cid])
+                self.at_hash += file.t_hash
+
+                if self.ar.spd:
+                    if VT100:
+                        c1 = "\033[36m"
+                        c2 = "\033[0m"
+                    else:
+                        c1 = c2 = ""
+
+                    spd_h = humansize(file.size / file.t_hash, True)
+                    if file.up_b:
+                        t_up = file.t1_up - file.t0_up
+                        spd_u = humansize(file.size / t_up, True)
+
+                        t = "uploaded %s %s(h:%.2fs,%s/s,up:%.2fs,%s/s)%s"
+                        print(t % (upath, c1, file.t_hash, spd_h, t_up, spd_u, c2))
+                    else:
+                        t = "   found %s %s(%.2fs,%s/s)%s"
+                        print(t % (upath, c1, file.t_hash, spd_h, c2))
+                else:
+                    kw = "uploaded" if file.up_b else "   found"
+                    print("{0} {1}".format(kw, upath))
+
+            chunksz = up2k_chunksize(file.size)
+            njoin = (self.ar.sz * 1024 * 1024) // chunksz
+            cs = hs[:]
+            while cs:
+                fsl = FileSlice(file, cs[:1])
+                try:
+                    if file.nojoin:
+                        raise Exception()
+                    for n in range(2, min(len(cs), njoin + 1)):
+                        fsl = FileSlice(file, cs[:n])
+                except:
+                    pass
+                cs = cs[len(fsl.cids) :]
+                self.q_upload.put(fsl)

     def uploader(self):
         while True:
-            task = self.q_upload.get()
-            if not task:
+            fsl = self.q_upload.get()
+            if not fsl:
                 self.st_up = [None, "(finished)"]
                 break

+            file = fsl.file
+            cids = fsl.cids
+
             with self.mutex:
+                if not self.uploader_busy:
+                    self.at_upr = time.time()
                 self.uploader_busy += 1
-                self.t0_up = self.t0_up or time.time()
+                if not file.t0_up:
+                    file.t0_up = time.time()
+                    if not self.t0_up:
+                        self.t0_up = file.t0_up

             stats = "%d/%d/%d/%d %d/%d %s" % (
                 self.up_f,
@@ -1086,28 +1167,30 @@ class Ctl(object):
                 self.eta,
             )

-            file, cid = task
             try:
-                upload(file, cid, self.ar.a, stats)
+                upload(fsl, self.ar.a, stats)
             except Exception as ex:
-                t = "upload failed, retrying: {0} #{1} ({2})\n"
-                eprint(t.format(file.name, cid[:8], ex))
+                t = "upload failed, retrying: %s #%s+%d (%s)\n"
+                eprint(t % (file.name, cids[0][:8], len(cids) - 1, ex))
                 file.cd = time.time() + self.ar.cd
                 # handshake will fix it

             with self.mutex:
-                sz = file.kchunks[cid][1]
-                file.ucids = [x for x in file.ucids if x != cid]
+                sz = fsl.len
+                file.ucids = [x for x in file.ucids if x not in cids]
                 if not file.ucids:
+                    file.t1_up = time.time()
                     self.q_handshake.put(file)

-                self.st_up = [file, cid]
+                self.st_up = [file, cids[0]]
                 file.up_b += sz
                 self.up_b += sz
                 self.up_br += sz
                 file.up_c += 1
                 self.up_c += 1
                 self.uploader_busy -= 1
+                if not self.uploader_busy:
+                    self.at_up += time.time() - self.at_upr

     def up_done(self, file):
         if self.ar.dl:
@@ -1150,6 +1233,7 @@ source file/folder selection uses rsync syntax, meaning that:
     ap.add_argument("--ok", action="store_true", help="continue even if some local files are inaccessible")
     ap.add_argument("--touch", action="store_true", help="if last-modified timestamps differ, push local to server (need write+delete perms)")
     ap.add_argument("--ow", action="store_true", help="overwrite existing files instead of autorenaming")
+    ap.add_argument("--spd", action="store_true", help="print speeds for each file")
     ap.add_argument("--version", action="store_true", help="show version and exit")

     ap = app.add_argument_group("compatibility")
@@ -1164,6 +1248,7 @@ source file/folder selection uses rsync syntax, meaning that:
     ap = app.add_argument_group("performance tweaks")
     ap.add_argument("-j", type=int, metavar="CONNS", default=2, help="parallel connections")
     ap.add_argument("-J", type=int, metavar="CORES", default=hcores, help="num cpu-cores to use for hashing; set 0 or 1 for single-core hashing")
+    ap.add_argument("--sz", type=int, metavar="MiB", default=64, help="try to make each POST this big")
    ap.add_argument("-nh", action="store_true", help="disable hashing while uploading")
    ap.add_argument("-ns", action="store_true", help="no status panel (for slow consoles and macos)")
    ap.add_argument("--cd", type=float, metavar="SEC", default=5, help="delay before reattempting a failed handshake/upload")

View File

@@ -1,6 +1,6 @@
 # Maintainer: icxes <dev.null@need.moe>
 pkgname=copyparty
-pkgver="1.13.3"
+pkgver="1.13.5"
 pkgrel=1
 pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
 arch=("any")
@@ -21,7 +21,7 @@ optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tag
 )
 source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
 backup=("etc/${pkgname}.d/init" )
-sha256sums=("35845d6335fba4a13d153d7062f365dad529202bc865b93267d899e19a0a6da3")
+sha256sums=("83bf52ac03256ee6fe405a912e2767578692760f9554f821dfcab0700dd58082")

 build() {
     cd "${srcdir}/${pkgname}-${pkgver}"

View File

@@ -1,5 +1,5 @@
 {
-  "url": "https://github.com/9001/copyparty/releases/download/v1.13.3/copyparty-sfx.py",
-  "version": "1.13.3",
-  "hash": "sha256-LtbdioAYtWGC4+5frzUjXwm0thubkyMhc86YU/rXIuo="
+  "url": "https://github.com/9001/copyparty/releases/download/v1.13.5/copyparty-sfx.py",
+  "version": "1.13.5",
+  "hash": "sha256-I+dqsiScYPcX6JpLgwVoLs7l0FlbXabc/Ofqye9RQI0="
 }

contrib/themes/bsod.css (new file, +118 lines)
View File

@@ -0,0 +1,118 @@
/* copy bsod.* into a folder named ".themes" in your webroot and then
   --themes=10 --theme=9 --css-browser=/.themes/bsod.css
*/

html.ey {
  --w2: #3d7bbc;
  --w3: #5fcbec;
  --fg: #fff;
  --fg-max: #fff;
  --fg-weak: var(--w3);
  --bg: #2067b2;
  --bg-d3: var(--bg);
  --bg-d2: var(--w2);
  --bg-d1: var(--fg-weak);
  --bg-u2: var(--bg);
  --bg-u3: var(--bg);
  --bg-u5: var(--w2);
  --tab-alt: var(--fg-weak);
  --row-alt: var(--w2);
  --scroll: var(--w3);
  --a: #fff;
  --a-b: #fff;
  --a-hil: #fff;
  --a-h-bg: var(--fg-weak);
  --a-dark: var(--a);
  --a-gray: var(--fg-weak);
  --btn-fg: var(--a);
  --btn-bg: var(--w2);
  --btn-h-fg: var(--w2);
  --btn-1-fg: var(--bg);
  --btn-1-bg: var(--a);
  --txt-sh: a;
  --txt-bg: var(--w2);
  --u2-b1-bg: var(--w2);
  --u2-b2-bg: var(--w2);
  --u2-o-bg: var(--w2);
  --u2-o-1-bg: var(--a);
  --u2-txt-bg: var(--w2);
  --u2-tab-bg: a;
  --u2-tab-1-bg: var(--w2);
  --sort-1: var(--a);
  --sort-1: var(--fg-weak);
  --tree-bg: var(--bg);
  --g-b1: a;
  --g-b2: a;
  --g-f-bg: var(--w2);
  --f-sh1: 0.1;
  --f-sh2: 0.02;
  --f-sh3: 0.1;
  --f-h-b1: a;
  --srv-1: var(--a);
  --srv-3: var(--a);
  --mp-sh: a;
}
html.ey {
  background: url('bsod.png') top 5em right 4.5em no-repeat fixed var(--bg);
}
html.ey body#b {
  background: var(--bg); /*sandbox*/
}
html.ey #ops {
  margin: 1.7em 1.5em 0 1.5em;
  border-radius: .3em;
  border-width: 1px 0;
}
html.ey #ops a {
  text-shadow: 1px 1px 0 rgba(0,0,0,0.5);
}
html.ey .opbox {
  margin: 1.5em 0 0 0;
}
html.ey #tree {
  box-shadow: none;
}
html.ey #tt {
  border-color: var(--w2);
  background: var(--w2);
}
html.ey .mdo a {
  background: none;
  text-decoration: underline;
}
html.ey .mdo pre,
html.ey .mdo code {
  color: #fff;
  background: var(--w2);
  border: none;
}
html.ey .mdo h1,
html.ey .mdo h2 {
  background: none;
  border-color: var(--w2);
}
html.ey .mdo ul ul,
html.ey .mdo ul ol,
html.ey .mdo ol ul,
html.ey .mdo ol ol {
  border-color: var(--w2);
}
html.ey .mdo p>em,
html.ey .mdo li>em,
html.ey .mdo td>em {
  color: #fd0;
}

contrib/themes/bsod.png (new binary file, 1.2 KiB; not shown)

View File

@@ -491,11 +491,17 @@ def disable_quickedit() -> None:
 def sfx_tpoke(top: str):
-    files = [os.path.join(dp, p) for dp, dd, df in os.walk(top) for p in dd + df]
+    files = [top] + [
+        os.path.join(dp, p) for dp, dd, df in os.walk(top) for p in dd + df
+    ]
     while True:
         t = int(time.time())
-        for f in [top] + files:
-            os.utime(f, (t, t))
+        for f in list(files):
+            try:
+                os.utime(f, (t, t))
+            except Exception as ex:
+                lprint("<TPOKE> [%s] %r" % (f, ex))
+                files.remove(f)

         time.sleep(78123)
@@ -942,6 +948,7 @@ def add_upload(ap):
     ap2.add_argument("--sparse", metavar="MiB", type=int, default=4, help="windows-only: minimum size of incoming uploads through up2k before they are made into sparse files")
     ap2.add_argument("--turbo", metavar="LVL", type=int, default=0, help="configure turbo-mode in up2k client; [\033[32m-1\033[0m] = forbidden/always-off, [\033[32m0\033[0m] = default-off and warn if enabled, [\033[32m1\033[0m] = default-off, [\033[32m2\033[0m] = on, [\033[32m3\033[0m] = on and disable datecheck")
     ap2.add_argument("--u2j", metavar="JOBS", type=int, default=2, help="web-client: number of file chunks to upload in parallel; 1 or 2 is good for low-latency (same-country) connections, 4-8 for android clients, 16 for cross-atlantic (max=64)")
+    ap2.add_argument("--u2sz", metavar="N,N,N", type=u, default="1,64,96", help="web-client: default upload chunksize (MiB); sets \033[33mmin,default,max\033[0m in the settings gui. Each HTTP POST will aim for this size. Cloudflare max is 96. Big values are good for cross-atlantic but may increase HDD fragmentation on some FS. Disable this optimization with [\033[32m1,1,1\033[0m]")
     ap2.add_argument("--u2sort", metavar="TXT", type=u, default="s", help="upload order; [\033[32ms\033[0m]=smallest-first, [\033[32mn\033[0m]=alphabetical, [\033[32mfs\033[0m]=force-s, [\033[32mfn\033[0m]=force-n -- alphabetical is a bit slower on fiber/LAN but makes it easier to eyeball if everything went fine")
     ap2.add_argument("--write-uplog", action="store_true", help="write POST reports to textfiles in working-directory")

View File

@@ -1,8 +1,8 @@
 # coding: utf-8

-VERSION = (1, 13, 4)
+VERSION = (1, 13, 6)
 CODENAME = "race the beam"
-BUILD_DT = (2024, 7, 16)
+BUILD_DT = (2024, 7, 29)

 S_VERSION = ".".join(map(str, VERSION))
 S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)

View File

@@ -28,7 +28,7 @@ class ExceptionalQueue(Queue, object):
             if rv[1] == "pebkac":
                 raise Pebkac(*rv[2:])
             else:
-                raise Exception(rv[2])
+                raise rv[2]

         return rv
@@ -65,8 +65,8 @@ def try_exec(want_retval: Union[bool, int], func: Any, *args: list[Any]) -> Any:
         return ["exception", "pebkac", ex.code, str(ex)]

-    except:
+    except Exception as ex:
         if not want_retval:
             raise

-        return ["exception", "stack", traceback.format_exc()]
+        return ["exception", "stack", ex]

View File

@@ -646,11 +646,8 @@
             if not self._check_nonfatal(pex, post):
                 self.keepalive = False

-            if pex is ex:
-                em = msg = str(ex)
-            else:
-                em = repr(ex)
-                msg = min_ex()
+            em = str(ex)
+            msg = em if pex is ex else min_ex()

             if pex.code != 404 or self.do_log:
                 self.log(
@@ -2202,33 +2199,39 @@
     def handle_post_binary(self) -> bool:
         try:
-            remains = int(self.headers["content-length"])
+            postsize = remains = int(self.headers["content-length"])
         except:
             raise Pebkac(400, "you must supply a content-length for binary POST")

         try:
-            chash = self.headers["x-up2k-hash"]
+            chashes = self.headers["x-up2k-hash"].split(",")
             wark = self.headers["x-up2k-wark"]
         except KeyError:
             raise Pebkac(400, "need hash and wark headers for binary POST")

+        chashes = [x.strip() for x in chashes]
+
         vfs, _ = self.asrv.vfs.get(self.vpath, self.uname, False, True)
         ptop = (vfs.dbv or vfs).realpath

-        x = self.conn.hsrv.broker.ask("up2k.handle_chunk", ptop, wark, chash)
+        x = self.conn.hsrv.broker.ask("up2k.handle_chunks", ptop, wark, chashes)
         response = x.get()
-        chunksize, cstart, path, lastmod, sprs = response
+        chunksize, cstarts, path, lastmod, sprs = response
+        maxsize = chunksize * len(chashes)
+        cstart0 = cstarts[0]

         try:
             if self.args.nw:
                 path = os.devnull

-            if remains > chunksize:
-                raise Pebkac(400, "your chunk is too big to fit")
+            if remains > maxsize:
+                t = "your client is sending %d bytes which is too much (server expected %d bytes at most)"
+                raise Pebkac(400, t % (remains, maxsize))

-            self.log("writing {} #{} @{} len {}".format(path, chash, cstart, remains))
-
-            reader = read_socket(self.sr, self.args.s_rd_sz, remains)
+            t = "writing %s %s+%d #%d+%d %s"
+            chunkno = cstart0[0] // chunksize
+            zs = " ".join([chashes[0][:15]] + [x[:9] for x in chashes[1:]])
+            self.log(t % (path, cstart0, remains, chunkno, len(chashes), zs))

             f = None
             fpool = not self.args.no_fpool and sprs
@@ -2242,37 +2245,43 @@
             f = f or open(fsenc(path), "rb+", self.args.iobuf)

             try:
-                f.seek(cstart[0])
-                post_sz, _, sha_b64 = hashcopy(reader, f, self.args.s_wr_slp)
-
-                if sha_b64 != chash:
-                    try:
-                        self.bakflip(f, cstart[0], post_sz, sha_b64, vfs.flags)
-                    except:
-                        self.log("bakflip failed: " + min_ex())
-
-                    t = "your chunk got corrupted somehow (received {} bytes); expected vs received hash:\n{}\n{}"
-                    raise Pebkac(400, t.format(post_sz, chash, sha_b64))
-
-                if len(cstart) > 1 and path != os.devnull:
-                    self.log(
-                        "clone {} to {}".format(
-                            cstart[0], " & ".join(unicode(x) for x in cstart[1:])
-                        )
-                    )
-                    ofs = 0
-                    while ofs < chunksize:
-                        bufsz = max(4 * 1024 * 1024, self.args.iobuf)
-                        bufsz = min(chunksize - ofs, bufsz)
-                        f.seek(cstart[0] + ofs)
-                        buf = f.read(bufsz)
-
-                        for wofs in cstart[1:]:
-                            f.seek(wofs + ofs)
-                            f.write(buf)
-
-                        ofs += len(buf)
-
-                    self.log("clone {} done".format(cstart[0]))
+                for chash, cstart in zip(chashes, cstarts):
+                    f.seek(cstart[0])
+                    reader = read_socket(
+                        self.sr, self.args.s_rd_sz, min(remains, chunksize)
+                    )
+                    post_sz, _, sha_b64 = hashcopy(reader, f, self.args.s_wr_slp)
+
+                    if sha_b64 != chash:
+                        try:
+                            self.bakflip(f, cstart[0], post_sz, sha_b64, vfs.flags)
+                        except:
+                            self.log("bakflip failed: " + min_ex())
+
+                        t = "your chunk got corrupted somehow (received {} bytes); expected vs received hash:\n{}\n{}"
+                        raise Pebkac(400, t.format(post_sz, chash, sha_b64))
+
+                    remains -= chunksize
+
+                    if len(cstart) > 1 and path != os.devnull:
+                        self.log(
+                            "clone {} to {}".format(
+                                cstart[0], " & ".join(unicode(x) for x in cstart[1:])
+                            )
+                        )
+                        ofs = 0
+                        while ofs < chunksize:
+                            bufsz = max(4 * 1024 * 1024, self.args.iobuf)
+                            bufsz = min(chunksize - ofs, bufsz)
+                            f.seek(cstart[0] + ofs)
+                            buf = f.read(bufsz)
+
+                            for wofs in cstart[1:]:
+                                f.seek(wofs + ofs)
+                                f.write(buf)
+
+                            ofs += len(buf)
+
+                        self.log("clone {} done".format(cstart[0]))

             if not fpool:
                 f.close()
@@ -2284,10 +2293,10 @@
                     f.close()
                 raise
         finally:
-            x = self.conn.hsrv.broker.ask("up2k.release_chunk", ptop, wark, chash)
+            x = self.conn.hsrv.broker.ask("up2k.release_chunks", ptop, wark, chashes)
             x.get()  # block client until released

-        x = self.conn.hsrv.broker.ask("up2k.confirm_chunk", ptop, wark, chash)
+        x = self.conn.hsrv.broker.ask("up2k.confirm_chunks", ptop, wark, chashes)
         ztis = x.get()

         try:
             num_left, fin_path = ztis
@@ -2306,7 +2315,7 @@
         cinf = self.headers.get("x-up2k-stat", "")

-        spd = self._spd(post_sz)
+        spd = self._spd(postsize)
         self.log("{:70} thank {}".format(spd, cinf))
         self.reply(b"thank")
         return True
@@ -4503,6 +4512,7 @@
             "themes": self.args.themes,
             "turbolvl": self.args.turbo,
             "u2j": self.args.u2j,
+            "u2sz": self.args.u2sz,
             "idxh": int(self.args.ih),
             "u2sort": self.args.u2sort,
         }

View File

@@ -59,7 +59,8 @@ class ThumbCli(object):
         want_opus = fmt in ("opus", "caf", "mp3")
         is_au = ext in self.fmt_ffa
-        if is_au:
+        is_vau = want_opus and ext in self.fmt_ffv
+        if is_au or is_vau:
             if want_opus:
                 if self.args.no_acode:
                     return None
@@ -107,7 +108,7 @@ class ThumbCli(object):
             fmt = sfmt

-        elif fmt[:1] == "p" and not is_au:
+        elif fmt[:1] == "p" and not is_au and not is_vid:
             t = "cannot thumbnail [%s]: png only allowed for waveforms"
             self.log(t % (rem), 6)
             return None

View File

@@ -304,23 +304,31 @@
             ap_unpk = abspath

         if not bos.path.exists(tpath):
+            want_mp3 = tpath.endswith(".mp3")
+            want_opus = tpath.endswith(".opus") or tpath.endswith(".caf")
+            want_png = tpath.endswith(".png")
+            want_au = want_mp3 or want_opus
+
             for lib in self.args.th_dec:
+                can_au = lib == "ff" and (
+                    ext in self.fmt_ffa or ext in self.fmt_ffv
+                )
+
                 if lib == "pil" and ext in self.fmt_pil:
                     funs.append(self.conv_pil)
                 elif lib == "vips" and ext in self.fmt_vips:
                     funs.append(self.conv_vips)
-                elif lib == "ff" and ext in self.fmt_ffi or ext in self.fmt_ffv:
-                    funs.append(self.conv_ffmpeg)
-                elif lib == "ff" and ext in self.fmt_ffa:
-                    if tpath.endswith(".opus") or tpath.endswith(".caf"):
+                elif can_au and (want_png or want_au):
+                    if want_opus:
                         funs.append(self.conv_opus)
-                    elif tpath.endswith(".mp3"):
+                    elif want_mp3:
                         funs.append(self.conv_mp3)
-                    elif tpath.endswith(".png"):
+                    elif want_png:
                         funs.append(self.conv_waves)
                         png_ok = True
-                    else:
-                        funs.append(self.conv_spec)
+                elif lib == "ff" and (ext in self.fmt_ffi or ext in self.fmt_ffv):
+                    funs.append(self.conv_ffmpeg)
+                elif lib == "ff" and ext in self.fmt_ffa and not want_au:
+                    funs.append(self.conv_spec)

         tdir, tfn = os.path.split(tpath)
         ttpath = os.path.join(tdir, "w", tfn)

View File

@@ -545,7 +545,7 @@
                 nrm += 1

             if nrm:
-                self.log("{} files graduated in {}".format(nrm, vp))
+                self.log("%d files graduated in /%s" % (nrm, vp))

             if timeout < 10:
                 continue
@@ -1296,7 +1296,7 @@
                     not cv
                     or liname not in th_cvds
                     or cv.lower() not in th_cvds
-                    or th_cvd.index(iname) < th_cvd.index(cv)
+                    or th_cvd.index(liname) < th_cvd.index(cv.lower())
                 )
             ):
                 cv = iname
@@ -3013,9 +3013,9 @@
             times = (int(time.time()), int(lmod))
             bos.utime(dst, times, False)

-    def handle_chunk(
-        self, ptop: str, wark: str, chash: str
-    ) -> tuple[int, list[int], str, float, bool]:
+    def handle_chunks(
+        self, ptop: str, wark: str, chashes: list[str]
+    ) -> tuple[list[int], list[list[int]], str, float, bool]:
         with self.mutex, self.reg_mutex:
             self.db_act = self.vol_act[ptop] = time.time()
             job = self.registry[ptop].get(wark)
@@ -3024,26 +3024,37 @@
                 self.log("unknown wark [{}], known: {}".format(wark, known))
                 raise Pebkac(400, "unknown wark" + SSEELOG)

-            if chash not in job["need"]:
-                msg = "chash = {} , need:\n".format(chash)
-                msg += "\n".join(job["need"])
-                self.log(msg)
-                raise Pebkac(400, "already got that but thanks??")
-
-            nchunk = [n for n, v in enumerate(job["hash"]) if v == chash]
-            if not nchunk:
-                raise Pebkac(400, "unknown chunk")
-
-            if chash in job["busy"]:
-                nh = len(job["hash"])
-                idx = job["hash"].index(chash)
-                t = "that chunk is already being written to:\n  {}\n  {} {}/{}\n  {}"
-                raise Pebkac(400, t.format(wark, chash, idx, nh, job["name"]))
-
-            path = djoin(job["ptop"], job["prel"], job["tnam"])
+            for chash in chashes:
+                if chash not in job["need"]:
+                    msg = "chash = {} , need:\n".format(chash)
+                    msg += "\n".join(job["need"])
+                    self.log(msg)
+                    raise Pebkac(400, "already got that (%s) but thanks??" % (chash,))
+
+                if chash in job["busy"]:
+                    nh = len(job["hash"])
+                    idx = job["hash"].index(chash)
+                    t = "that chunk is already being written to:\n  {}\n  {} {}/{}\n  {}"
+                    raise Pebkac(400, t.format(wark, chash, idx, nh, job["name"]))

             chunksize = up2k_chunksize(job["size"])
-            ofs = [chunksize * x for x in nchunk]
+
+            coffsets = []
+            for chash in chashes:
+                nchunk = [n for n, v in enumerate(job["hash"]) if v == chash]
+                if not nchunk:
+                    raise Pebkac(400, "unknown chunk %s" % (chash))
+
+                ofs = [chunksize * x for x in nchunk]
+                coffsets.append(ofs)
+
+            for ofs1, ofs2 in zip(coffsets, coffsets[1:]):
+                gap = (ofs2[0] - ofs1[0]) - chunksize
+                if gap:
+                    t = "only sibling chunks can be stitched; gap of %d bytes between offsets %d and %d in %s"
+                    raise Pebkac(400, t % (gap, ofs1[0], ofs2[0], job["name"]))
+
+            path = djoin(job["ptop"], job["prel"], job["tnam"])

             if not job["sprs"]:
                 cur_sz = bos.path.getsize(path)
@@ -3056,17 +3067,20 @@
             job["poke"] = time.time()

-            return chunksize, ofs, path, job["lmod"], job["sprs"]
+            return chunksize, coffsets, path, job["lmod"], job["sprs"]

-    def release_chunk(self, ptop: str, wark: str, chash: str) -> bool:
+    def release_chunks(self, ptop: str, wark: str, chashes: list[str]) -> bool:
         with self.reg_mutex:
             job = self.registry[ptop].get(wark)
             if job:
-                job["busy"].pop(chash, None)
+                for chash in chashes:
+                    job["busy"].pop(chash, None)

         return True

-    def confirm_chunk(self, ptop: str, wark: str, chash: str) -> tuple[int, str]:
+    def confirm_chunks(
+        self, ptop: str, wark: str, chashes: list[str]
+    ) -> tuple[int, str]:
         with self.mutex, self.reg_mutex:
             self.db_act = self.vol_act[ptop] = time.time()
             try:
@@ -3075,14 +3089,16 @@
                 src = djoin(pdir, job["tnam"])
                 dst = djoin(pdir, job["name"])
             except Exception as ex:
-                return "confirm_chunk, wark, " + repr(ex)  # type: ignore
+                return "confirm_chunk, wark(%r)" % (ex,)  # type: ignore

-            job["busy"].pop(chash, None)
+            for chash in chashes:
+                job["busy"].pop(chash, None)

             try:
-                job["need"].remove(chash)
+                for chash in chashes:
+                    job["need"].remove(chash)
             except Exception as ex:
-                return "confirm_chunk, chash, " + repr(ex)  # type: ignore
+                return "confirm_chunk, chash(%s) %r" % (chash, ex)  # type: ignore

             ret = len(job["need"])
             if ret > 0:

View File

@@ -1377,7 +1377,7 @@ def vol_san(vols: list["VFS"], txt: bytes) -> bytes:
 def min_ex(max_lines: int = 8, reverse: bool = False) -> str:
     et, ev, tb = sys.exc_info()
     stb = traceback.extract_tb(tb) if tb else traceback.extract_stack()[:-1]
-    fmt = "%s @ %d <%s>: %s"
+    fmt = "%s:%d <%s>: %s"
     ex = [fmt % (fp.split(os.sep)[-1], ln, fun, txt) for fp, ln, fun, txt in stb]
     if et or ev or tb:
         ex.append("[%s] %s" % (et.__name__ if et else "(anonymous)", ev))

View File

@@ -29,6 +29,7 @@ window.baguetteBox = (function () {
         isOverlayVisible = false,
         touch = {}, // start-pos
         touchFlag = false, // busy
+        scrollCSS = ['', ''],
         scrollTimer = 0,
         re_i = /^[^?]+\.(a?png|avif|bmp|gif|heif|jpe?g|jfif|svg|webp)(\?|$)/i,
         re_v = /^[^?]+\.(webm|mkv|mp4)(\?|$)/i,
@@ -567,6 +568,12 @@ window.baguetteBox = (function () {

     function showOverlay(chosenImageIndex) {
         if (options.noScrollbars) {
+            var a = document.documentElement.style.overflowY,
+                b = document.body.style.overflowY;
+            if (a != 'hidden' || b != 'scroll')
+                scrollCSS = [a, b];
+
             document.documentElement.style.overflowY = 'hidden';
             document.body.style.overflowY = 'scroll';
         }
@@ -615,8 +622,8 @@ window.baguetteBox = (function () {
         playvid(false);
         removeFromCache('#files');
         if (options.noScrollbars) {
-            document.documentElement.style.overflowY = 'auto';
-            document.body.style.overflowY = 'auto';
+            document.documentElement.style.overflowY = scrollCSS[0];
+            document.body.style.overflowY = scrollCSS[1];
         }
         try {

View File

@@ -210,6 +210,8 @@ var Ls = {
         "cut_datechk": "has no effect unless the turbo button is enabled$N$Nreduces the yolo factor by a tiny amount; checks whether the file timestamps on the server matches yours$N$Nshould <em>theoretically</em> catch most unfinished / corrupted uploads, but is not a substitute for doing a verification pass with turbo disabled afterwards\">date-chk",
+        "cut_u2sz": "size (in MiB) of each upload chunk; big values fly better across the atlantic. Try low values on very unreliable connections",
+
         "cut_flag": "ensure only one tab is uploading at a time $N -- other tabs must have this enabled too $N -- only affects tabs on the same domain",
         "cut_az": "upload files in alphabetical order, rather than smallest-file-first$N$Nalphabetical order can make it easier to eyeball if something went wrong on the server, but it makes uploading slightly slower on fiber / LAN",
@@ -268,6 +270,8 @@ var Ls = {
         "mb_play": "play",
         "mm_hashplay": "play this audio file?",
         "mp_breq": "need firefox 82+ or chrome 73+ or iOS 15+",
+        "mm_bload": "now loading...",
+        "mm_bconv": "converting to {0}, please wait...",
         "mm_opusen": "your browser cannot play aac / m4a files;\ntranscoding to opus is now enabled",
         "mm_playerr": "playback failed: ",
         "mm_eabrt": "The playback attempt was cancelled",
@@ -358,6 +362,7 @@ var Ls = {
         "tvt_sel": "select file &nbsp; ( for cut / delete / ... )$NHotkey: S\">sel",
         "tvt_edit": "open file in text editor$NHotkey: E\">✏️ edit",
+        "gt_vau": "don't show videos, just play the audio\">🎧",
         "gt_msel": "enable file selection; ctrl-click a file to override$N$N&lt;em&gt;when active: doubleclick a file / folder to open it&lt;/em&gt;$N$NHotkey: S\">multiselect",
         "gt_crop": "center-crop thumbnails\">crop",
         "gt_3x": "hi-res thumbnails\">3x",
@@ -461,7 +466,7 @@ var Ls = {
         "u_badf": 'These {0} files (of {1} total) were skipped, possibly due to filesystem permissions:\n\n',
         "u_blankf": 'These {0} files (of {1} total) are blank / empty; upload them anyways?\n\n',
         "u_just1": '\nMaybe it works better if you select just one file',
-        "u_ff_many": "This amount of files <em>may</em> cause Firefox to skip some files, or crash.\nPlease try again with fewer files (or use Chrome) if that happens.",
+        "u_ff_many": "if you're using <b>Linux / MacOS / Android,</b> then this amount of files <a href=\"https://bugzilla.mozilla.org/show_bug.cgi?id=1790500\" target=\"_blank\"><em>may</em> crash Firefox!</a>\nif that happens, please try again (or use Chrome).",
         "u_up_life": "This upload will be deleted from the server\n{0} after it completes",
"u_asku": 'upload these {0} files to <code>{1}</code>', "u_asku": 'upload these {0} files to <code>{1}</code>',
"u_unpt": "you can undo / delete this upload using the top-left 🧯", "u_unpt": "you can undo / delete this upload using the top-left 🧯",
@@ -478,12 +483,13 @@ var Ls = {
"u_ehsinit": "server rejected the request to initiate upload; retrying...", "u_ehsinit": "server rejected the request to initiate upload; retrying...",
"u_eneths": "network error while performing upload handshake; retrying...", "u_eneths": "network error while performing upload handshake; retrying...",
"u_enethd": "network error while testing target existence; retrying...", "u_enethd": "network error while testing target existence; retrying...",
"u_cbusy": "waiting for server to trust us again after a network glitch...",
"u_ehsdf": "server ran out of disk space!\n\nwill keep retrying, in case someone\nfrees up enough space to continue", "u_ehsdf": "server ran out of disk space!\n\nwill keep retrying, in case someone\nfrees up enough space to continue",
"u_emtleak1": "it looks like your webbrowser may have a memory leak;\nplease", "u_emtleak1": "it looks like your webbrowser may have a memory leak;\nplease",
"u_emtleak2": ' <a href="{0}">switch to https (recommended)</a> or ', "u_emtleak2": ' <a href="{0}">switch to https (recommended)</a> or ',
"u_emtleak3": ' ', "u_emtleak3": ' ',
"u_emtleakc": 'try the following:\n<ul><li>hit <code>F5</code> to refresh the page</li><li>then disable the &nbsp;<code>mt</code>&nbsp; button in the &nbsp;<code>⚙️ settings</code></li><li>and try that upload again</li></ul>Uploads will be a bit slower, but oh well.\nSorry for the trouble !\n\nPS: chrome v107 <a href="https://bugs.chromium.org/p/chromium/issues/detail?id=1354816">has a bugfix</a> for this', "u_emtleakc": 'try the following:\n<ul><li>hit <code>F5</code> to refresh the page</li><li>then disable the &nbsp;<code>mt</code>&nbsp; button in the &nbsp;<code>⚙️ settings</code></li><li>and try that upload again</li></ul>Uploads will be a bit slower, but oh well.\nSorry for the trouble !\n\nPS: chrome v107 <a href="https://bugs.chromium.org/p/chromium/issues/detail?id=1354816" target="_blank">has a bugfix</a> for this',
"u_emtleakf": 'try the following:\n<ul><li>hit <code>F5</code> to refresh the page</li><li>then enable <code>🥔</code> (potato) in the upload UI<li>and try that upload again</li></ul>\nPS: firefox <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1790500">will hopefully have a bugfix</a> at some point', "u_emtleakf": 'try the following:\n<ul><li>hit <code>F5</code> to refresh the page</li><li>then enable <code>🥔</code> (potato) in the upload UI<li>and try that upload again</li></ul>\nPS: firefox <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1790500" target="_blank">will hopefully have a bugfix</a> at some point',
"u_s404": "not found on server", "u_s404": "not found on server",
"u_expl": "explain", "u_expl": "explain",
"u_maxconn": "most browsers limit this to 6, but firefox lets you raise it with <code>connections-per-server</code> in <code>about:config</code>", "u_maxconn": "most browsers limit this to 6, but firefox lets you raise it with <code>connections-per-server</code> in <code>about:config</code>",
@@ -721,6 +727,8 @@ var Ls = {
"cut_datechk": "har ingen effekt dersom turbo er avslått$N$Ngjør turbo bittelitt tryggere ved å sjekke datostemplingen på filene (i tillegg til filstørrelse)$N$N<em>burde</em> oppdage og gjenoppta de fleste ufullstendige opplastninger, men er <em>ikke</em> en fullverdig erstatning for å deaktivere turbo og gjøre en skikkelig sjekk\">date-chk", "cut_datechk": "har ingen effekt dersom turbo er avslått$N$Ngjør turbo bittelitt tryggere ved å sjekke datostemplingen på filene (i tillegg til filstørrelse)$N$N<em>burde</em> oppdage og gjenoppta de fleste ufullstendige opplastninger, men er <em>ikke</em> en fullverdig erstatning for å deaktivere turbo og gjøre en skikkelig sjekk\">date-chk",
"cut_u2sz": "størrelse i megabyte for hvert bruddstykke for opplastning. Store verdier flyr bedre over atlanteren. Små verdier kan være bedre på særdeles ustabile forbindelser",
"cut_flag": "samkjører nettleserfaner slik at bare én $N kan holde på med befaring / opplastning $N -- andre faner må også ha denne skrudd på $N -- fungerer kun innenfor samme domene", "cut_flag": "samkjører nettleserfaner slik at bare én $N kan holde på med befaring / opplastning $N -- andre faner må også ha denne skrudd på $N -- fungerer kun innenfor samme domene",
"cut_az": "last opp filer i alfabetisk rekkefølge, istedenfor minste-fil-først$N$Nalfabetisk kan gjøre det lettere å anslå om alt gikk bra, men er bittelitt tregere på fiber / LAN", "cut_az": "last opp filer i alfabetisk rekkefølge, istedenfor minste-fil-først$N$Nalfabetisk kan gjøre det lettere å anslå om alt gikk bra, men er bittelitt tregere på fiber / LAN",
@@ -779,6 +787,8 @@ var Ls = {
"mb_play": "lytt", "mb_play": "lytt",
"mm_hashplay": "spill denne sangen?", "mm_hashplay": "spill denne sangen?",
"mp_breq": "krever firefox 82+, chrome 73+, eller iOS 15+", "mp_breq": "krever firefox 82+, chrome 73+, eller iOS 15+",
"mm_bload": "laster inn...",
"mm_bconv": "konverterer til {0}, vent litt...",
"mm_opusen": "nettleseren din forstår ikke aac / m4a;\nkonvertering til opus er nå aktivert", "mm_opusen": "nettleseren din forstår ikke aac / m4a;\nkonvertering til opus er nå aktivert",
"mm_playerr": "avspilling feilet: ", "mm_playerr": "avspilling feilet: ",
"mm_eabrt": "Avspillingsforespørselen ble avbrutt", "mm_eabrt": "Avspillingsforespørselen ble avbrutt",
@@ -869,6 +879,7 @@ var Ls = {
"tvt_sel": "markér filen &nbsp; ( for utklipp / sletting / ... )$NSnarvei: S\">merk", "tvt_sel": "markér filen &nbsp; ( for utklipp / sletting / ... )$NSnarvei: S\">merk",
"tvt_edit": "redigér filen$NSnarvei: E\">✏️ endre", "tvt_edit": "redigér filen$NSnarvei: E\">✏️ endre",
"gt_vau": "ikke vis videofiler, bare spill lyden\">🎧",
"gt_msel": "markér filer istedenfor å åpne dem; ctrl-klikk filer for å overstyre$N$N&lt;em&gt;når aktiv: dobbelklikk en fil / mappe for å åpne&lt;/em&gt;$N$NSnarvei: S\">markering", "gt_msel": "markér filer istedenfor å åpne dem; ctrl-klikk filer for å overstyre$N$N&lt;em&gt;når aktiv: dobbelklikk en fil / mappe for å åpne&lt;/em&gt;$N$NSnarvei: S\">markering",
"gt_crop": "beskjær ikonene så de passer bedre\">✂", "gt_crop": "beskjær ikonene så de passer bedre\">✂",
"gt_3x": "høyere oppløsning på ikoner\">3x", "gt_3x": "høyere oppløsning på ikoner\">3x",
@@ -972,7 +983,7 @@ var Ls = {
"u_badf": 'Disse {0} filene (av totalt {1}) kan ikke leses, kanskje pga rettighetsproblemer i filsystemet på datamaskinen din:\n\n', "u_badf": 'Disse {0} filene (av totalt {1}) kan ikke leses, kanskje pga rettighetsproblemer i filsystemet på datamaskinen din:\n\n',
"u_blankf": 'Disse {0} filene (av totalt {1}) er blanke / uten innhold; ønsker du å laste dem opp uansett?\n\n', "u_blankf": 'Disse {0} filene (av totalt {1}) er blanke / uten innhold; ønsker du å laste dem opp uansett?\n\n',
"u_just1": '\nFunker kanskje bedre hvis du bare tar én fil om gangen', "u_just1": '\nFunker kanskje bedre hvis du bare tar én fil om gangen',
"u_ff_many": "Det var mange filer! Mulig at Firefox kommer til å krasje, eller\nhoppe over et par av dem. Smart å ha Chrome på lur i tilfelle.", "u_ff_many": 'Hvis du bruker <b>Linux / MacOS / Android,</b> så kan dette antallet filer<br /><a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1790500" target="_blank"><em>kanskje</em> krasje Firefox!</a> Hvis det skjer, så prøv igjen (eller bruk Chrome).',
"u_up_life": "Filene slettes fra serveren {0}\netter at opplastningen er fullført", "u_up_life": "Filene slettes fra serveren {0}\netter at opplastningen er fullført",
"u_asku": 'Laste opp disse {0} filene til <code>{1}</code>', "u_asku": 'Laste opp disse {0} filene til <code>{1}</code>',
"u_unpt": "Du kan angre / slette opplastningen med 🧯 oppe til venstre", "u_unpt": "Du kan angre / slette opplastningen med 🧯 oppe til venstre",
@@ -989,12 +1000,13 @@ var Ls = {
"u_ehsinit": "server nektet forespørselen om å begynne en ny opplastning; prøver igjen...", "u_ehsinit": "server nektet forespørselen om å begynne en ny opplastning; prøver igjen...",
"u_eneths": "et problem med nettverket gjorde at avtale om opplastning ikke kunne inngås; prøver igjen...", "u_eneths": "et problem med nettverket gjorde at avtale om opplastning ikke kunne inngås; prøver igjen...",
"u_enethd": "et problem med nettverket gjorde at filsjekk ikke kunne utføres; prøver igjen...", "u_enethd": "et problem med nettverket gjorde at filsjekk ikke kunne utføres; prøver igjen...",
"u_cbusy": "venter på klarering ifra server etter et lite nettverksglipp...",
"u_ehsdf": "serveren er full!\n\nprøver igjen regelmessig,\ni tilfelle noen rydder litt...", "u_ehsdf": "serveren er full!\n\nprøver igjen regelmessig,\ni tilfelle noen rydder litt...",
"u_emtleak1": "uff, det er mulig at nettleseren din har en minnelekkasje...\nForeslår", "u_emtleak1": "uff, det er mulig at nettleseren din har en minnelekkasje...\nForeslår",
"u_emtleak2": ' helst at du <a href="{0}">bytter til https</a>, eller ', "u_emtleak2": ' helst at du <a href="{0}">bytter til https</a>, eller ',
"u_emtleak3": ' at du ', "u_emtleak3": ' at du ',
"u_emtleakc": 'prøver følgende:\n<ul><li>trykk F5 for å laste siden på nytt</li><li>så skru av &nbsp;<code>mt</code>&nbsp; bryteren under &nbsp;<code>⚙️ innstillinger</code></li><li>og forsøk den samme opplastningen igjen</li></ul>Opplastning vil gå litt tregere, men det får så være.\nBeklager bryderiet !\n\nPS: feilen <a href="<a href="https://bugs.chromium.org/p/chromium/issues/detail?id=1354816">skal være fikset</a> i chrome v107', "u_emtleakc": 'prøver følgende:\n<ul><li>trykk F5 for å laste siden på nytt</li><li>så skru av &nbsp;<code>mt</code>&nbsp; bryteren under &nbsp;<code>⚙️ innstillinger</code></li><li>og forsøk den samme opplastningen igjen</li></ul>Opplastning vil gå litt tregere, men det får så være.\nBeklager bryderiet !\n\nPS: feilen <a href="https://bugs.chromium.org/p/chromium/issues/detail?id=1354816" target="_blank">skal være fikset</a> i chrome v107',
"u_emtleakf": 'prøver følgende:\n<ul><li>trykk F5 for å laste siden på nytt</li><li>så skru på <code>🥔</code> ("enkelt UI") i opplasteren</li><li>og forsøk den samme opplastningen igjen</li></ul>\nPS: Firefox <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1790500">fikser forhåpentligvis feilen</a> en eller annen gang', "u_emtleakf": 'prøver følgende:\n<ul><li>trykk F5 for å laste siden på nytt</li><li>så skru på <code>🥔</code> ("enkelt UI") i opplasteren</li><li>og forsøk den samme opplastningen igjen</li></ul>\nPS: Firefox <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1790500" target="_blank">fikser forhåpentligvis feilen</a> en eller annen gang',
"u_s404": "ikke funnet på serveren", "u_s404": "ikke funnet på serveren",
"u_expl": "forklar", "u_expl": "forklar",
"u_maxconn": "de fleste nettlesere tillater ikke mer enn 6, men firefox lar deg øke grensen med <code>connections-per-server</code> i <code>about:config</code>", "u_maxconn": "de fleste nettlesere tillater ikke mer enn 6, men firefox lar deg øke grensen med <code>connections-per-server</code> i <code>about:config</code>",
@@ -1251,6 +1263,7 @@ ebi('op_cfg').innerHTML = (
' <a id="hashw" class="tgl btn" href="#" tt="' + L.cut_mt + '</a>\n' + ' <a id="hashw" class="tgl btn" href="#" tt="' + L.cut_mt + '</a>\n' +
' <a id="u2turbo" class="tgl btn ttb" href="#" tt="' + L.cut_turbo + '</a>\n' + ' <a id="u2turbo" class="tgl btn ttb" href="#" tt="' + L.cut_turbo + '</a>\n' +
' <a id="u2tdate" class="tgl btn ttb" href="#" tt="' + L.cut_datechk + '</a>\n' + ' <a id="u2tdate" class="tgl btn ttb" href="#" tt="' + L.cut_datechk + '</a>\n' +
' <input type="text" id="u2szg" value="" ' + NOAC + ' style="width:3em" tt="' + L.cut_u2sz + '" />' +
' <a id="flag_en" class="tgl btn" href="#" tt="' + L.cut_flag + '">💤</a>\n' + ' <a id="flag_en" class="tgl btn" href="#" tt="' + L.cut_flag + '">💤</a>\n' +
' <a id="u2sort" class="tgl btn" href="#" tt="' + L.cut_az + '">az</a>\n' + ' <a id="u2sort" class="tgl btn" href="#" tt="' + L.cut_az + '">az</a>\n' +
' <a id="upnag" class="tgl btn" href="#" tt="' + L.cut_nag + '">🔔</a>\n' + ' <a id="upnag" class="tgl btn" href="#" tt="' + L.cut_nag + '">🔔</a>\n' +
@@ -1593,6 +1606,8 @@ var mpl = (function () {
r.pp = function () {
var adur, apos, playing = mp.au && !mp.au.paused;
+clearTimeout(mpl.t_eplay);
clmod(ebi('np_inf'), 'playing', playing);
if (mp.au && isNum(adur = mp.au.duration) && isNum(apos = mp.au.currentTime) && apos >= 0)
@@ -1702,7 +1717,7 @@ catch (ex) { }
var re_au_native = (can_ogg || have_acode) ? /\.(aac|flac|m4a|mp3|ogg|opus|wav)$/i : /\.(aac|flac|m4a|mp3|wav)$/i,
-re_au_all = /\.(aac|ac3|aif|aiff|alac|alaw|amr|ape|au|dfpwm|dts|flac|gsm|it|itgz|itxz|itz|m4a|mdgz|mdxz|mdz|mo3|mod|mp2|mp3|mpc|mptm|mt2|mulaw|ogg|okt|opus|ra|s3m|s3gz|s3xz|s3z|tak|tta|ulaw|wav|wma|wv|xm|xmgz|xmxz|xmz|xpk)$/i;
+re_au_all = /\.(aac|ac3|aif|aiff|alac|alaw|amr|ape|au|dfpwm|dts|flac|gsm|it|itgz|itxz|itz|m4a|mdgz|mdxz|mdz|mo3|mod|mp2|mp3|mpc|mptm|mt2|mulaw|ogg|okt|opus|ra|s3m|s3gz|s3xz|s3z|tak|tta|ulaw|wav|wma|wv|xm|xmgz|xmxz|xmz|xpk|3gp|asf|avi|flv|m4v|mkv|mov|mp4|mpeg|mpeg2|mpegts|mpg|mpg2|nut|ogm|ogv|rm|ts|vob|webm|wmv)$/i;
// extract songs + add play column
@@ -2178,8 +2193,21 @@ var pbar = (function () {
}
pctx.clearRect(0, 0, pc.w, pc.h);
-if (!mp || !mp.au || !isNum(adur = mp.au.duration) || !isNum(apos = mp.au.currentTime) || apos < 0 || adur < apos)
-return; // not-init || unsupp-codec
+if (!mp || !mp.au)
+return; // not-init
+if (!isNum(adur = mp.au.duration) || !isNum(apos = mp.au.currentTime) || apos < 0 || adur < apos) {
+if (Date.now() - mp.au.pt0 < 500)
+return;
+pctx.fillStyle = light ? 'rgba(0,0,0,0.5)' : 'rgba(255,255,255,0.5)';
+var m = /[?&]th=(opus|caf|mp3)/.exec('' + mp.au.rsrc),
+txt = mp.au.ded ? L.mm_playerr.replace(':', ' ;_;') :
+m ? L.mm_bconv.format(m[1]) : L.mm_bload;
+pctx.fillText(txt, 16, pc.h / 1.5);
+return; // not-init || unsupp-codec
+}
if (bau != mp.au)
r.drawbuf();
@@ -3145,7 +3173,9 @@ function play(tid, is_ev, seek) {
mpl.unbuffer(url);
}, 500);
+mp.au.ded = 0;
mp.au.tid = tid;
+mp.au.pt0 = Date.now();
mp.au.evp = get_evpath();
mp.au.volume = mp.expvol(mp.vol);
var trs = QSA('#files tr.play');
@@ -3213,6 +3243,8 @@ function evau_error(e) {
var err = '',
eplaya = (e && e.target) || (window.event && window.event.srcElement);
+eplaya.ded = 1;
switch (eplaya.error.code) {
case eplaya.error.MEDIA_ERR_ABORTED:
err = L.mm_eabrt;
@@ -3327,6 +3359,7 @@ function scan_hash(v) {
function eval_hash() {
+document.onkeydown = ahotkeys;
window.onpopstate = treectl.onpopfun;
if (hash0 && window.og_fn) {
@@ -4403,7 +4436,7 @@ var showfile = (function () {
var td = ebi(link.id).closest('tr').getElementsByTagName('td')[0];
-if (lang == 'md' && td.textContent != '-')
+if (lang == 'ts' || (lang == 'md' && td.textContent != '-'))
continue;
td.innerHTML = '<a href="#" id="t' +
@@ -4670,6 +4703,7 @@ var thegrid = (function () {
gfiles.style.display = 'none';
gfiles.innerHTML = (
'<div id="ghead" class="ghead">' +
+'<a href="#" class="tgl btn" id="gridvau" tt="' + L.gt_vau + '</a> ' +
'<a href="#" class="tgl btn" id="gridsel" tt="' + L.gt_msel + '</a> ' +
'<a href="#" class="tgl btn" id="gridcrop" tt="' + L.gt_crop + '</a> ' +
'<a href="#" class="tgl btn" id="grid3x" tt="' + L.gt_3x + '</a> ' +
@@ -4831,7 +4865,7 @@ var thegrid = (function () {
else if (oth.hasAttribute('download'))
oth.click();
-else if (widget.is_open && aplay)
+else if (aplay && (r.vau || !is_img))
aplay.click();
else if (is_dir && !have_sel)
@@ -5122,6 +5156,7 @@ var thegrid = (function () {
bcfg_bind(r, 'thumbs', 'thumbs', true, r.setdirty);
bcfg_bind(r, 'ihop', 'ihop', true);
+bcfg_bind(r, 'vau', 'gridvau', false);
bcfg_bind(r, 'crop', 'gridcrop', !dcrop.endsWith('n'), r.set_crop);
bcfg_bind(r, 'x3', 'grid3x', dth3x.endsWith('y'), r.set_x3);
bcfg_bind(r, 'sel', 'gridsel', false, r.loadsel);
@@ -5221,7 +5256,9 @@ function tree_up(justgo) {
if (!justgo)
return;
}
-act.parentNode.parentNode.parentNode.getElementsByTagName('a')[1].click();
+var a = act.parentNode.parentNode.parentNode.getElementsByTagName('a')[1];
+if (a.parentNode.tagName == 'LI')
+a.click();
}
@@ -5284,7 +5321,7 @@ function fselfunw(e, ae, d, rem) {
}
selfun();
}
-document.onkeydown = function (e) {
+var ahotkeys = function (e) {
if (e.altKey || e.isComposing)
return;
@@ -7992,7 +8029,8 @@ function sandbox(tgt, rules, cls, html) {
env = js.split(/\blogues *=/)[0] + 'a;';
}
-html = '<html class="iframe ' + document.documentElement.className + '"><head><style>' + globalcss() +
+html = '<html class="iframe ' + document.documentElement.className +
+'"><head><style>html{background:#eee;color:#000}\n' + globalcss() +
'</style><base target="_parent"></head><body id="b" class="logue ' + cls + '">' + html +
'<script>' + env + '</script>' + sandboxjs() +
'<script>var d=document.documentElement,TS="' + TS + '",' +


@@ -64,16 +64,7 @@
</div>
<div class="os lin">
-<pre>
-yum install davfs2
-{% if accs %}printf '%s\n' <b>{{ pw }}</b> k | {% endif %}mount -t davfs -ouid=1000 http{{ s }}://{{ ep }}/{{ rvp }} <b>mp</b>
-</pre>
-<p>make it automount on boot:</p>
-<pre>
-printf '%s\n' "http{{ s }}://{{ ep }}/{{ rvp }} <b>{{ pw }}</b> k" >> /etc/davfs2/secrets
-printf '%s\n' "http{{ s }}://{{ ep }}/{{ rvp }} <b>mp</b> davfs rw,user,uid=1000,noauto 0 0" >> /etc/fstab
-</pre>
-<p>or you can use rclone instead, which is much slower but doesn't require root (plus it keeps lastmodified on upload):</p>
+<p>rclone (v1.63 or later) is recommended:</p>
<pre>
rclone config create {{ aname }}-dav webdav url=http{{ s }}://{{ rip }}{{ hport }} vendor=owncloud pacer_min_sleep=0.01ms{% if accs %} user=k pass=<b>{{ pw }}</b>{% endif %}
rclone mount --vfs-cache-mode writes --dir-cache-time 5s {{ aname }}-dav:{{ rvp }} <b>mp</b>
@@ -85,6 +76,16 @@
<li>running <code>rclone mount</code> as root? add <code>--allow-other</code></li>
<li>old version of rclone? replace all <code>=</code> with <code>&nbsp;</code> (space)</li>
</ul>
+<p>alternatively use davfs2 (requires root, is slower, forgets lastmodified-timestamp on upload):</p>
+<pre>
+yum install davfs2
+{% if accs %}printf '%s\n' <b>{{ pw }}</b> k | {% endif %}mount -t davfs -ouid=1000 http{{ s }}://{{ ep }}/{{ rvp }} <b>mp</b>
+</pre>
+<p>make davfs2 automount on boot:</p>
+<pre>
+printf '%s\n' "http{{ s }}://{{ ep }}/{{ rvp }} <b>{{ pw }}</b> k" >> /etc/davfs2/secrets
+printf '%s\n' "http{{ s }}://{{ ep }}/{{ rvp }} <b>mp</b> davfs rw,user,uid=1000,noauto 0 0" >> /etc/fstab
+</pre>
<p>or the emergency alternative (gnome/gui-only):</p>
<!-- gnome-bug: ignores vp -->
<pre>


@@ -265,7 +265,11 @@ html.y #tth {
box-shadow: 0 .3em 3em rgba(0,0,0,0.5);
max-width: 50em;
max-height: 30em;
-overflow: auto;
+overflow-x: auto;
+overflow-y: scroll;
+}
+#modalc.yk {
+overflow-y: auto;
}
#modalc td {
text-align: unset;


@@ -658,7 +658,9 @@ function Donut(uc, st) {
}
function pos() {
-return uc.fsearch ? Math.max(st.bytes.hashed, st.bytes.finished) : st.bytes.finished;
+return uc.fsearch ?
+Math.max(st.bytes.hashed, st.bytes.finished) :
+st.bytes.inflight + st.bytes.finished;
}
r.on = function (ya) {
@@ -853,6 +855,7 @@ function up2k_init(subtle) {
setmsg(suggest_up2k, 'msg');
var parallel_uploads = ebi('nthread').value = icfg_get('nthread', u2j),
+stitch_tgt = ebi('u2szg').value = icfg_get('u2sz', u2sz.split(',')[1]),
uc = {},
fdom_ctr = 0,
biggest_file = 0;
@@ -1207,7 +1210,7 @@ function up2k_init(subtle) {
match = false;
if (match) {
-var msg = ['directory iterator got stuck on the following {0} items; good chance your browser is about to spinlock:<ul>'.format(missing.length)];
+var msg = ['directory iterator got stuck trying to access the following {0} items; will skip:<ul>'.format(missing.length)];
for (var a = 0; a < Math.min(20, missing.length); a++)
msg.push('<li>' + esc(missing[a]) + '</li>');
@@ -1736,6 +1739,11 @@ function up2k_init(subtle) {
}
}
+if (st.bytes.inflight && (st.bytes.inflight < 0 || !st.busy.upload.length)) {
+console.log('insane inflight ' + st.bytes.inflight);
+st.bytes.inflight = 0;
+}
var mou_ikkai = false;
if (st.busy.handshake.length &&
@@ -2178,7 +2186,7 @@ function up2k_init(subtle) {
st.busy.head.push(t);
var xhr = new XMLHttpRequest();
-xhr.onerror = function () {
+xhr.onerror = xhr.ontimeout = function () {
console.log('head onerror, retrying', t.name, t);
if (!toast.visible)
toast.warn(9.98, L.u_enethd + "\n\nfile: " + t.name, t);
@@ -2222,6 +2230,7 @@ function up2k_init(subtle) {
try { orz(e); } catch (ex) { vis_exh(ex + '', 'up2k.js', '', '', ex); }
};
+xhr.timeout = 34000;
xhr.open('HEAD', t.purl + uricom_enc(t.name), true);
xhr.send();
}
@@ -2247,7 +2256,7 @@ function up2k_init(subtle) {
console.log("sending keepalive handshake", t.name, t); console.log("sending keepalive handshake", t.name, t);
var xhr = new XMLHttpRequest(); var xhr = new XMLHttpRequest();
xhr.onerror = function () { xhr.onerror = xhr.ontimeout = function () {
if (t.t_busied != me) // t.done ok if (t.t_busied != me) // t.done ok
return console.log('zombie handshake onerror', t.name, t); return console.log('zombie handshake onerror', t.name, t);
@@ -2374,11 +2383,23 @@ function up2k_init(subtle) {
var arr = st.todo.upload,
sort = arr.length && arr[arr.length - 1].nfile > t.n;
-for (var a = 0; a < t.postlist.length; a++)
+for (var a = 0; a < t.postlist.length; a++) {
+var nparts = [], tbytes = 0, stitch = stitch_tgt;
+if (t.nojoin && t.nojoin - t.postlist.length < 6)
+stitch = 1;
+--a;
+for (var b = 0; b < stitch; b++) {
+nparts.push(t.postlist[++a]);
+tbytes += chunksize;
+if (tbytes + chunksize > stitch * 1024 * 1024 || t.postlist[a + 1] - t.postlist[a] !== 1)
+break;
+}
arr.push({
'nfile': t.n,
-'npart': t.postlist[a]
+'nparts': nparts
});
+}
msg = null;
done = false;
@@ -2387,7 +2408,7 @@ function up2k_init(subtle) {
arr.sort(function (a, b) {
return a.nfile < b.nfile ? -1 :
/*        */ a.nfile > b.nfile ? 1 :
-a.npart < b.npart ? -1 : 1;
+a.nparts[0] < b.nparts[0] ? -1 : 1;
});
}
@@ -2493,6 +2514,7 @@ function up2k_init(subtle) {
xhr.open('POST', t.purl, true);
xhr.responseType = 'text';
+xhr.timeout = 42000;
xhr.send(JSON.stringify(req));
}
@@ -2534,7 +2556,10 @@ function up2k_init(subtle) {
function exec_upload() {
var upt = st.todo.upload.shift(),
t = st.files[upt.nfile],
-npart = upt.npart,
+nparts = upt.nparts,
+pcar = nparts[0],
+pcdr = nparts[nparts.length - 1],
+snpart = pcar == pcdr ? pcar : ('' + pcar + '~' + pcdr),
tries = 0;
if (t.done)
@@ -2549,8 +2574,8 @@ function up2k_init(subtle) {
pvis.seth(t.n, 1, "🚀 send");
var chunksize = get_chunksize(t.size),
-car = npart * chunksize,
-cdr = car + chunksize;
+car = pcar * chunksize,
+cdr = (pcdr + 1) * chunksize;
if (cdr >= t.size)
cdr = t.size;
@@ -2560,14 +2585,19 @@ function up2k_init(subtle) {
var txt = unpre((xhr.response && xhr.response.err) || xhr.responseText);
if (txt.indexOf('upload blocked by x') + 1) {
apop(st.busy.upload, upt);
-apop(t.postlist, npart);
+for (var a = pcar; a <= pcdr; a++)
+apop(t.postlist, a);
pvis.seth(t.n, 1, "ERROR");
pvis.seth(t.n, 2, txt.split(/\n/)[0]);
pvis.move(t.n, 'ng');
return;
}
if (xhr.status == 200) {
-pvis.prog(t, npart, cdr - car);
+var bdone = cdr - car;
+for (var a = pcar; a <= pcdr; a++) {
+pvis.prog(t, a, Math.min(bdone, chunksize));
+bdone -= chunksize;
+}
st.bytes.finished += cdr - car;
st.bytes.uploaded += cdr - car;
t.bytes_uploaded += cdr - car;
@@ -2576,18 +2606,21 @@ function up2k_init(subtle) {
}
else if (txt.indexOf('already got that') + 1 ||
txt.indexOf('already being written') + 1) {
-console.log("ignoring dupe-segment error", t.name, t);
+t.nojoin = t.postlist.length;
+console.log("ignoring dupe-segment with backoff", t.nojoin, t.name, t);
+if (!toast.visible && st.todo.upload.length < 4)
+toast.msg(10, L.u_cbusy);
}
else {
-xhrchk(xhr, L.u_cuerr2.format(npart, Math.ceil(t.size / chunksize), t.name), "404, target folder not found (???)", "warn", t);
+xhrchk(xhr, L.u_cuerr2.format(snpart, Math.ceil(t.size / chunksize), t.name), "404, target folder not found (???)", "warn", t);
chill(t);
}
orz2(xhr);
}
var orz2 = function (xhr) {
apop(st.busy.upload, upt);
-apop(t.postlist, npart);
+for (var a = pcar; a <= pcdr; a++)
+apop(t.postlist, a);
if (!t.postlist.length) {
t.t_uploaded = Date.now();
pvis.seth(t.n, 1, 'verifying');
@@ -2601,28 +2634,38 @@ function up2k_init(subtle) {
btot = Math.floor(st.bytes.total / 1024 / 1024);
xhr.upload.onprogress = function (xev) {
-var nb = xev.loaded;
-st.bytes.inflight += nb - xhr.bsent;
+var nb = xev.loaded,
+db = nb - xhr.bsent;
+if (!db)
+return;
+st.bytes.inflight += db;
xhr.bsent = nb;
-pvis.prog(t, npart, nb);
+xhr.timeout = 64000 + Date.now() - xhr.t0;
+pvis.prog(t, pcar, nb);
};
xhr.onload = function (xev) {
try { orz(xhr); } catch (ex) { vis_exh(ex + '', 'up2k.js', '', '', ex); }
};
-xhr.onerror = function (xev) {
+xhr.onerror = xhr.ontimeout = function (xev) {
if (crashed)
return;
st.bytes.inflight -= (xhr.bsent || 0);
if (!toast.visible)
-toast.warn(9.98, L.u_cuerr.format(npart, Math.ceil(t.size / chunksize), t.name), t);
+toast.warn(9.98, L.u_cuerr.format(snpart, Math.ceil(t.size / chunksize), t.name), t);
console.log('chunkpit onerror,', ++tries, t.name, t);
orz2(xhr);
};
+var chashes = [];
+for (var a = pcar; a <= pcdr; a++)
+chashes.push(t.hash[a]);
xhr.open('POST', t.purl, true);
-xhr.setRequestHeader("X-Up2k-Hash", t.hash[npart]);
+xhr.setRequestHeader("X-Up2k-Hash", chashes.join(","));
xhr.setRequestHeader("X-Up2k-Wark", t.wark);
xhr.setRequestHeader("X-Up2k-Stat", "{0}/{1}/{2}/{3} {4}/{5} {6}".format(
pvis.ctr.ok, pvis.ctr.ng, pvis.ctr.bz, pvis.ctr.q, btot, btot - bfin,
@@ -2632,6 +2675,8 @@ function up2k_init(subtle) {
xhr.overrideMimeType('Content-Type', 'application/octet-stream');
xhr.bsent = 0;
+xhr.t0 = Date.now();
+xhr.timeout = 42000;
xhr.responseType = 'text';
xhr.send(t.fobj.slice(car, cdr));
}
@@ -2732,13 +2777,34 @@ function up2k_init(subtle) {
if (parallel_uploads > 16)
parallel_uploads = 16;
-if (parallel_uploads > 7)
+if (parallel_uploads > 6)
toast.warn(10, L.u_maxconn);
+else if (toast.txt == L.u_maxconn)
+toast.hide();
obj.value = parallel_uploads;
bumpthread({ "target": 1 });
}
+var read_u2sz = function () {
+var el = ebi('u2szg'), n = parseInt(el.value), dv = u2sz.split(',');
+stitch_tgt = n = (
+isNaN(n) ? dv[1] :
+n < dv[0] ? dv[0] :
+n > dv[2] ? dv[2] : n
+);
+if (n == dv[1]) sdrop('u2sz'); else swrite('u2sz', n);
+if (el.value != n) el.value = n;
+};
+ebi('u2szg').addEventListener('blur', read_u2sz);
+ebi('u2szg').onkeydown = function (e) {
+if (anymod(e)) return;
+var n = e.code == 'ArrowUp' ? 1 : e.code == 'ArrowDown' ? -1 : 0;
+if (!n) return;
+this.value = parseInt(this.value) + n;
+read_u2sz();
+}
function tgl_fsearch() {
set_fsearch(!uc.fsearch);
}
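for reference, `u2sz` is a `min,default,max` triple; the blur-handler above clamps whatever the user typed into that range and falls back to the default on garbage input. a Python sketch of the same clamp (the `1,64,96` triple here is only an illustrative assumption, not the shipped defaults):

```py
def read_u2sz(value, u2sz="1,64,96"):
    """clamp a user-entered stitch size (MiB) into [min, max],
    falling back to the default on non-numeric input"""
    lo, dflt, hi = (int(x) for x in u2sz.split(","))
    try:
        n = int(value)
    except ValueError:
        return dflt
    return min(max(n, lo), hi)

print(read_u2sz("256"))  # 96 (clamped to max)
print(read_u2sz("lol"))  # 64 (default)
print(read_u2sz("0"))    # 1  (clamped to min)
```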


@@ -127,13 +127,13 @@ if ((document.location + '').indexOf(',rej,') + 1)
try {
console.hist = [];
-var CMAXHIST = 1000;
+var CMAXHIST = MOBILE ? 9000 : 44000;
var hook = function (t) {
var orig = console[t].bind(console),
cfun = function () {
console.hist.push(Date.now() + ' ' + t + ': ' + Array.from(arguments).join(', '));
if (console.hist.length > CMAXHIST)
-console.hist = console.hist.slice(CMAXHIST / 2);
+console.hist = console.hist.slice(CMAXHIST / 4);
orig.apply(console, arguments);
};
@@ -1396,10 +1396,10 @@ var tt = (function () {
o = ctr.querySelectorAll('*[tt]');
for (var a = o.length - 1; a >= 0; a--) {
-o[a].onfocus = _cshow;
-o[a].onblur = _hide;
-o[a].onmouseenter = _dshow;
-o[a].onmouseleave = _hide;
+o[a].addEventListener('focus', _cshow);
+o[a].addEventListener('blur', _hide);
+o[a].addEventListener('mouseenter', _dshow);
+o[a].addEventListener('mouseleave', _hide);
}
r.hide();
}
@@ -1536,6 +1536,7 @@ var modal = (function () {
var r = {},
q = [],
o = null,
+scrolling = null,
cb_up = null,
cb_ok = null,
cb_ng = null,
@@ -1556,6 +1557,7 @@ var modal = (function () {
r.nofocus = 0;
r.show = function (html) {
+tt.hide();
o = mknod('div', 'modal');
o.innerHTML = '<table><tr><td><div id="modalc">' + html + '</div></td></tr></table>';
document.body.appendChild(o);
@@ -1579,6 +1581,7 @@ var modal = (function () {
document.addEventListener('focus', onfocus);
document.addEventListener('selectionchange', onselch);
+timer.add(scrollchk, 1);
timer.add(onfocus);
if (cb_up)
setTimeout(cb_up, 1);
@@ -1586,6 +1589,8 @@ var modal = (function () {
r.hide = function () {
timer.rm(onfocus);
+timer.rm(scrollchk);
+scrolling = null;
try {
ebi('modal-ok').removeEventListener('blur', onblur);
}
@@ -1604,13 +1609,28 @@ var modal = (function () {
r.hide();
if (cb_ok)
cb_ok(v);
-}
+};
var ng = function (e) {
ev(e);
r.hide();
if (cb_ng)
cb_ng(null);
-}
+};
+var scrollchk = function () {
+if (scrolling === true)
+return;
+var o = ebi('modalc'),
+vis = o.offsetHeight,
+all = o.scrollHeight,
+nsc = 8 + vis < all;
+if (scrolling !== nsc)
+clmod(o, 'yk', !nsc);
+scrolling = nsc;
+};
var onselch = function () {
try {


@@ -1,3 +1,87 @@
+▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
+# 2024-0722-2323 `v1.13.5` american sized
+## new features
+* long-distance uploads are now **twice as fast** on average 132a8350
+* boost tcp windowsize scaling by stitching together smaller chunks into bigger chonks so they fly better across the atlantic
+* i'm not kidding, on the two routes we've tested this on we gained 1.6x / 160% (from US-West to Finland) and **2.6x / 260%** (Norway to US-East)
+* files that are between 4 MiB and 256 MiB see the biggest improvement; 70% faster <= 768 MiB, 40% <= 1.5 GiB, 10% <= 6G
+* if this turns out to be buggy, disable it serverside with `--u2sz 1,1,1` or clientside in the browser-ui: `[⚙️]` -> `up2k switches` -> change `64` to `1`
+* u2c.py (CLI uploader): support stitching (☝️) + print a summary with hashing and upload speeds 987bce21
+* video files can play as audio 53f1e3c9
+* audio is extracted serverside to avoid wasting bandwidth
+* extraction is lossy (converted to opus or mp3 depending on browser)
+* togglebutton `🎧` in the gridview toolbar to enable/disable
+* new hook: [into-the-cache-it-goes.py](https://github.com/9001/copyparty/tree/hovudstraum/bin/hooks#after-upload) d26a944d
+* avoids a cloudflare bug (race condition?) where it will send truncated files to visitors on the very first load if several people simultaneously access a file that hasn't been viewed before
+## bugfixes
+* inline markdown/logues rendered black-on-black in firefox 54 and some other browsers from 2017 and older eeef8091
+* unintuitive folder thumbnail selection if folder contains both `Cover.jpg` and `cover.jpg` f955d2bd
+* the gridview toolbar got undocked after viewing a pic/vid dc449bf8
+## other changes
+* #90 recommend rclone in favor of davfs2 ef0ecf87
+* improved some error messages e565ad5f
+* added helptext exporters to generate the online [html](https://ocv.me/copyparty/helptext.html) and [txt](https://ocv.me/copyparty/helptext.txt) editions 59533990
+* mention that cloudflare is incompatible with uploading files larger than 383.9 GiB
+▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
+# 2024-0716-0457 `v1.13.4` descript.ion
+## new features
+* "medialinks"; instead of the usual hotlink, the basic-uploader (as used by sharex and such) can return a link that opens the file in the media viewer c9281f89
+* enable for all uploads with volflag `medialinks`, or just for one upload by adding `?media` to the post url
+* thumbnails are now fully compatible with dirkeys/filekeys 52e06226
+* `--th-covers` will respect filename order, selecting the first matching filename as the folder thumbnail 1cdb1702
+* new hook: [bittorrent downloader](https://github.com/9001/copyparty/tree/hovudstraum/bin/hooks#on-message) bd3b3863 803e1565
+* hooks: d749683d
+* can be restricted to only run when user has specific permissions
+* user permissions are also included in the json message to the hook
+* new syntax to prepend args to the hook's command
+* (all this will be better documented after some additional upcoming hook-related features, see `--help-hooks` for now)
+* support `descript.ion` usenet metadata; will parse and render into directory listings when possible 927c3bce
+* directory listings are now 2% slower, eh who's keeping count anyways
+* tftp-server: 45259251
+* improved support for buggy clients
+* improved ipv6 support, especially on macos
+* improved robustness on unreliable networks
+* #85 new option `--gsel` to default-enable the client setting to select files by ctrl-clicking them in the grid 9a87ee2f
+* music player: set audio volume by scrollwheel 36d6d29a
+## bugfixes
+* race-the-beam (downloading an unfinished upload) could get interrupted near the end, requiring a manual resume in the browser's download manager to finish f37187a0
+* ftp-server: when accessing the root folder of servers without a root folder, it could mention inaccessible folders 84e8e1dd
+* ftp-server: uploads will automatically replace existing files if user has delete perms 0a9f4c60
+* windows 2000 expects this behavior, otherwise it'll freak out and delete stuff and then not actually upload it, nice
+* new option `--ftp-no-ow` restores old default behavior of rejecting upload if target filename exists
+* music player:
+* stop trying to recover from a corrupted file if the user already fixed it manually 55a011b9
+* support downloading the currently playing song regardless of current folder c06aa683
+* music player preloader: db6059e1
+* stop searching after 5 folders of nothing
+* don't crash playback by walking into error-pages
+* `--og` (rich discord embeds) was incompatible with viewing markdown docs d75a2c77
+* `--cgen` (configfile generator) much less jank d5de3f2f
+## other changes
+* mention that HTTP/2 is still usually slower than HTTP/1.1 dfe7f1d9
+* give up much sooner if a client is supposed to send a request body but isn't c549f367
+* support running copyparty as a server on windows 2000 and winXP 8c73e0cb 2fd12a83
+* updated deps 6e58514b
+* copyparty.exe: python 3.12, pillow 10.4, pyinstaller 6.9
+* dompurify 3.1.6
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0601-2324 `v1.13.3` 700+


@@ -55,8 +55,8 @@ quick outline of the up2k protocol, see [uploading](https://github.com/9001/cop
* server creates the `wark`, an identifier for this upload
* `sha512( salt + filesize + chunk_hashes )`
* and a sparse file is created for the chunks to drop into
-* client uploads each chunk
-* header entries for the chunk-hash and wark
+* client sends a series of POSTs, with one or more consecutive chunks in each
+* header entries for the chunk-hashes (comma-separated) and wark
* server writes chunks into place based on the hash
* client does another handshake with the hashlist; server replies with OK or a list of chunks to reupload
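a toy sketch of the client side of this outline; the wark formula follows the line above, and the header layout follows the up2k.js diff, but the real protocol's salt handling, digest truncation and encoding are glossed over here:

```py
import hashlib

def wark(salt: bytes, filesize: int, chunk_hashes: list) -> str:
    # sha512( salt + filesize + chunk_hashes ), per the outline above;
    # the actual serialization in copyparty may differ in detail
    h = hashlib.sha512()
    h.update(salt)
    h.update(str(filesize).encode())
    for ch in chunk_hashes:
        h.update(ch.encode())
    return h.hexdigest()

# one stitched POST then carries several consecutive chunks; roughly:
#   X-Up2k-Hash: <hash7>,<hash8>,<hash9>   (comma-separated)
#   X-Up2k-Wark: <wark>
# with the concatenated chunk bytes as the request body
```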
@@ -327,10 +327,6 @@ can be reproduced with `--no-sendfile --s-wr-sz 8192 --s-wr-slp 0.3 --rsp-slp 6`
* remove brokers / multiprocessing stuff; https://github.com/9001/copyparty/tree/no-broker
* reduce the nesting / indirections in `HttpCli` / `httpcli.py`
* nearly zero benefit from stuff like replacing all the `self.conn.hsrv` with a local `hsrv` variable
-* reduce up2k roundtrips
-* start from a chunk index and just go
-* terminate client on bad data
-* not worth the effort, just throw enough conncetions at it
* single sha512 across all up2k chunks?
* crypto.subtle cannot into streaming, would have to use hashwasm, expensive
* separate sqlite table per tag


@@ -20,6 +20,7 @@ currently up to date with [awesome-selfhosted](https://github.com/awesome-selfho
* 💾 = what copyparty offers as an alternative
* 🔵 = similarities
* ⚠️ = disadvantages (something copyparty does "better")
+* 🔥 = hazards
## toc ## toc
@@ -37,7 +38,7 @@ currently up to date with [awesome-selfhosted](https://github.com/awesome-selfho
* [another matrix](#another-matrix) * [another matrix](#another-matrix)
* [reviews](#reviews) * [reviews](#reviews)
* [copyparty](#copyparty) * [copyparty](#copyparty)
* [hfs2](#hfs2) * [hfs2](#hfs2) 🔥
* [hfs3](#hfs3) * [hfs3](#hfs3)
* [nextcloud](#nextcloud) * [nextcloud](#nextcloud)
* [seafile](#seafile) * [seafile](#seafile)
@@ -83,8 +84,8 @@ the table headers in the matrixes below are the different softwares, with a quic
the softwares,
* `a` = [copyparty](https://github.com/9001/copyparty)
-* `b` = [hfs2](https://rejetto.com/hfs/)
-* `c` = [hfs3](https://github.com/rejetto/hfs)
+* `b` = [hfs2](https://github.com/rejetto/hfs2/) 🔥
+* `c` = [hfs3](https://rejetto.com/hfs/)
* `d` = [nextcloud](https://github.com/nextcloud/server)
* `e` = [seafile](https://github.com/haiwen/seafile)
* `f` = [rclone](https://github.com/rclone/rclone), specifically `rclone serve webdav .`
@@ -152,19 +153,20 @@ symbol legend,
| feature / software | a | b | c | d | e | f | g | h | i | j | k | l | m |
| ----------------------- | - | - | - | - | - | - | - | - | - | - | - | - | - |
| download folder as zip | █ | █ | █ | █ | | | █ | | █ | █ | | █ | |
| download folder as tar | █ | | | | | | | | | | | | |
| upload | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ | | █ | █ |
| parallel uploads | █ | | | █ | █ | | • | | █ | | █ | | █ |
| resumable uploads | █ | | | | | | | | █ | | █ | | |
| upload segmenting | █ | | | | | | | █ | █ | | █ | | █ |
| upload acceleration | █ | | | | | | | | █ | | █ | | |
| upload verification | █ | | | █ | █ | | | | █ | | | | |
| upload deduplication | █ | | | | █ | | | | █ | | | | |
| upload a 999 TiB file | █ | | | | █ | █ | • | | █ | | █ | | |
-| race the beam ("p2p") | █ | | | | | | | | | • | | | |
+| CTRL-V from device | █ | | | █ | | | | | | | | | |
+| race the beam ("p2p") | █ | | | | | | | | | | | | |
| keep last-modified time | █ | | | █ | █ | █ | | | | | | █ | |
| upload rules | | | | | | | | | | | | | |
| ┗ max disk usage | █ | █ | | | █ | | | | █ | | | █ | █ |
| ┗ max filesize | █ | | | | | | | █ | | | █ | █ | █ |
| ┗ max items in folder | █ | | | | | | | | | | | | |
| ┗ max file age | █ | | | | | | | | █ | | | | |
@@ -182,6 +184,8 @@ symbol legend,
* `upload verification` = uploads are checksummed or otherwise confirmed to have been transferred correctly
+* `CTRL-V from device` = press CTRL-C in Windows Explorer (or whatever) and paste into the webbrowser to upload it
* `race the beam` = files can be downloaded while they're still uploading; downloaders are slowed down such that the uploader is always ahead
* `checksums provided` = when downloading a file from the server, the file's checksum is provided for verification client-side
@@ -213,7 +217,7 @@ symbol legend,
| serve sftp (ssh) | | | | | | █ | | | | | | █ | █ |
| serve smb/cifs | | | | | | █ | | | | | | | |
| serve dlna | | | | | | █ | | | | | | | |
| listen on unix-socket | | | | █ | █ | | █ | █ | █ | | █ | █ | |
| zeroconf | █ | | | | | | | | | | | | █ |
| supports netscape 4 | | | | | | █ | | | | | • | | |
| ...internet explorer 6 | | █ | | █ | | █ | | | | | • | | |
@@ -243,7 +247,7 @@ symbol legend,
| listen multiple ports | █ | | | | | | | | | | | █ | |
| virtual file system | █ | █ | █ | | | | █ | | | | | █ | |
| reverse-proxy ok | █ | | █ | █ | █ | █ | █ | █ | • | • | • | █ | |
| folder-rproxy ok | █ | | | | █ | █ | | • | • | | • | | • |
* `folder-rproxy` = reverse-proxying without dedicating an entire (sub)domain, using a subfolder instead
* `l`/sftpgo:
@@ -266,9 +270,9 @@ symbol legend,
| per-folder permissions | | | | █ | █ | | █ | | █ | █ | | █ | █ |
| per-file permissions | | | | █ | █ | | █ | | █ | | | | █ |
| per-file passwords | █ | | | █ | █ | | █ | | █ | | | | █ |
| unmap subfolders | █ | | | | | | █ | | | █ | | • | |
| index.html blocks list | | | | | | | █ | | | • | | | |
| write-only folders | █ | | | | | | | | | | █ | █ | |
| files stored as-is | █ | █ | █ | █ | | █ | █ | | | █ | █ | █ | █ |
| file versioning | | | | █ | █ | | | | | | | | |
| file encryption | | | | █ | █ | █ | | | | | | █ | |
@@ -298,6 +302,7 @@ symbol legend,
* `file action event hooks` = run script before/after upload, move, rename, ...
* `one-way folder sync` = like rsync, optionally deleting unexpected files at target
* `full sync` = stateful, dropbox-like sync
+* `speed throttle` = rate limiting (per ip, per user, per connection, anything like that)
* `curl-friendly ls` = returns a [sortable plaintext folder listing](https://user-images.githubusercontent.com/241032/215322619-ea5fd606-3654-40ad-94ee-2bc058647bb2.png) when curled
* `curl-friendly upload` = uploading with curl is just `curl -T some.bin http://.../`
* `a`/copyparty remarks:
@@ -323,14 +328,14 @@ symbol legend,
| feature / software | a | b | c | d | e | f | g | h | i | j | k | l | m |
| ---------------------- | - | - | - | - | - | - | - | - | - | - | - | - | - |
| single-page app | █ | | █ | █ | █ | | | █ | █ | █ | █ | | █ |
| themes | █ | █ | | █ | | | | | █ | | | | |
| directory tree nav | █ | | | | █ | | | | █ | | | | |
| multi-column sorting | █ | | | | | | | | | | | | |
| thumbnails | █ | | | | | | | █ | █ | | | | █ |
| ┗ image thumbnails | █ | | | █ | █ | | | █ | █ | █ | | | █ |
| ┗ video thumbnails | █ | | | █ | █ | | | | █ | | | | █ |
| ┗ audio spectrograms | █ | | | | | | | | | | | | |
| audio player | █ | | | █ | █ | | | | █ | | | | █ |
| ┗ gapless playback | █ | | | | | | | | • | | | | |
| ┗ audio equalizer | █ | | | | | | | | | | | | |
| ┗ waveform seekbar | █ | | | | | | | | | | | | |
@@ -348,16 +353,16 @@ symbol legend,
| search by custom parser | █ | | | | | | | | | | | | |
| find local file | █ | | | | | | | | | | | | |
| undo recent uploads | █ | | | | | | | | | | | | |
| create directories | █ | | | █ | █ | | █ | █ | █ | █ | █ | █ | █ |
| image viewer | █ | | | █ | █ | | | | █ | █ | █ | | █ |
| markdown viewer | █ | | | | █ | | | | █ | | | | █ |
| markdown editor | █ | | | | █ | | | | █ | | | | █ |
| readme.md in listing | █ | | | █ | | | | | | | | | |
| rename files | █ | █ | █ | █ | █ | | █ | | █ | █ | █ | █ | █ |
| batch rename | █ | | | | | | | | █ | | | | |
| cut / paste files | █ | █ | | █ | █ | | | | █ | | | | █ |
| move files | █ | █ | | █ | █ | | █ | | █ | █ | █ | | █ |
| delete files | █ | █ | | █ | █ | | █ | █ | █ | █ | █ | █ | █ |
| copy files | | | | | █ | | | | █ | █ | █ | | █ |
* `single-page app` = multitasking; possible to continue navigating while uploading
@@ -367,8 +372,12 @@ symbol legend,
 * `undo recent uploads` = accounts without delete permissions have a time window where they can undo their own uploads
 * `a`/copyparty has teeny-tiny skips playing gapless albums depending on audio codec (opus best)
 * `b`/hfs2 has a very basic directory tree view, not showing sibling folders
+* `c`/hfs3 remarks:
+  * audio playback does not continue into next song
 * `f`/rclone can do some file management (mkdir, rename, delete) when hosting through webdav
-* `j`/filebrowser has a plaintext viewer/editor
+* `j`/filebrowser remarks:
+  * audio playback does not continue into next song
+  * plaintext viewer/editor
 * `k`/filegator directory tree is a modal window
@@ -424,6 +433,7 @@ symbol legend,
 * 💾 are what copyparty offers as an alternative
 * 🔵 are similarities
 * ⚠️ are disadvantages (something copyparty does "better")
+* 🔥 are hazards
 ## [copyparty](https://github.com/9001/copyparty)
 * resumable uploads which are verified server-side
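a rough sketch of what "verified server-side" means for the uploads mentioned above: the server re-hashes every chunk it receives and only accepts it if the digest matches what the client claimed. the digest choice and function shape here are illustrative, not copyparty's actual wire protocol:

```python
import hashlib

def accept_chunk(buf: bytes, claimed_hex: str) -> bool:
    # recompute the digest server-side; a mismatch means the chunk
    # got corrupted in transit and the client must resend it
    return hashlib.sha512(buf).hexdigest() == claimed_hex
```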
@@ -431,8 +441,9 @@ symbol legend,
 * both of the above are surprisingly uncommon features
 * very cross-platform (python, no dependencies)
-## [hfs2](https://rejetto.com/hfs/)
-* the OG, the legend
+## [hfs2](https://github.com/rejetto/hfs2/)
+* the OG, the legend (now replaced by [hfs3](#hfs3))
+* 🔥 hfs2 is dead and dangerous! unfixed RCE: [info](https://github.com/rejetto/hfs2/issues/44), [info](https://github.com/drapid/hfs/issues/3), [info](https://asec.ahnlab.com/en/67650/)
 * ⚠️ uploads not resumable / accelerated / integrity-checked
 * ⚠️ on cloudflare: max upload size 100 MiB
 * ⚠️ windows-only
@@ -440,10 +451,19 @@ symbol legend,
 * vfs with gui config, per-volume permissions
 * starting to show its age, hence the rewrite:
-## [hfs3](https://github.com/rejetto/hfs)
+## [hfs3](https://rejetto.com/hfs/)
 * nodejs; cross-platform
 * vfs with gui config, per-volume permissions
-* still early development, let's revisit later
+* 🔵 uploads are resumable
+* ⚠️ uploads are not segmented; max upload size 100 MiB on cloudflare
+* ⚠️ uploads are not accelerated (copyparty is 3x faster across the atlantic)
+* ⚠️ uploads are not integrity-checked
+* ⚠️ copies the file after upload; needs twice the filesize in free disk space
+* ⚠️ doesn't support crazy filenames
+* ✅ config GUI
+* ✅ download counter
+* ✅ watch active connections
+* ✅ plugins
 ## [nextcloud](https://github.com/nextcloud/server)
 * php, mariadb
@@ -497,6 +517,7 @@ symbol legend,
 * rust; cross-platform (windows, linux, macos)
 * ⚠️ uploads not resumable / accelerated / integrity-checked
 * ⚠️ on cloudflare: max upload size 100 MiB
+* ⚠️ across the atlantic, copyparty is 3x faster
 * ⚠️ doesn't support crazy filenames
 * ✅ per-url access control (copyparty is per-volume)
 * 🔵 basic but really snappy ui
@@ -539,8 +560,10 @@ symbol legend,
 ## [filebrowser](https://github.com/filebrowser/filebrowser)
 * go; cross-platform (windows, linux, mac)
-* ⚠️ uploads not resumable / accelerated / integrity-checked
-* ⚠️ on cloudflare: max upload size 100 MiB
+* 🔵 uploads are resumable and segmented
+* 🔵 multiple files are uploaded in parallel, but...
+* ⚠️ big files are not accelerated (copyparty is 5x faster across the atlantic)
+* ⚠️ uploads are not integrity-checked
 * ⚠️ http only; no webdav / ftp / zeroconf
 * ⚠️ doesn't support crazy filenames
 * ⚠️ no directory tree nav
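the distinction drawn in the hunk above (parallel *files* vs accelerated *single* files) is whether one big file gets split into chunks that upload concurrently. a minimal sketch of per-file chunking, assuming a hypothetical endpoint that takes an offset header; none of these names come from filebrowser or copyparty:

```python
import os
import urllib.request
from concurrent.futures import ThreadPoolExecutor

CHUNK = 16 * 1024 * 1024  # 16 MiB per request

def send_chunk(path: str, url: str, ofs: int) -> None:
    with open(path, "rb") as f:
        f.seek(ofs)
        buf = f.read(CHUNK)
    req = urllib.request.Request(
        url, data=buf, method="POST",
        headers={"X-Chunk-Offset": str(ofs)},  # hypothetical header
    )
    urllib.request.urlopen(req).read()

def upload(path: str, url: str, jobs: int = 4) -> None:
    # several chunks of the SAME file in flight at once; this is the
    # "acceleration" that pays off on high-latency (transatlantic) links
    offsets = range(0, os.path.getsize(path), CHUNK)
    with ThreadPoolExecutor(jobs) as ex:
        list(ex.map(lambda o: send_chunk(path, url, o), offsets))
```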
@@ -550,12 +573,14 @@ symbol legend,
 * ⚠️ but no directory tree for navigation
 * ✅ user signup
 * ✅ command runner / remote shell
-* 🔵 supposed to have write-only folders but couldn't get it to work
+* ✅ more efficient; can handle around twice as much simultaneous traffic
 ## [filegator](https://github.com/filegator/filegator)
-* go; cross-platform (windows, linux, mac)
+* php; cross-platform (windows, linux, mac)
 * 🔵 *it has upload segmenting and acceleration*
 * ⚠️ but uploads are still not integrity-checked
+* ⚠️ on copyparty, uploads are 40x faster
+  * compared to the official filegator docker example which might be bad
 * ⚠️ http only; no webdav / ftp / zeroconf
 * ⚠️ does not support symlinks
 * ⚠️ expensive download-as-zip feature
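on the "expensive download-as-zip" point: the cheap alternative is to stream archive members straight to the client instead of building the whole zip first. a minimal sketch, assuming `wfile` is a writable client-facing stream; this is not taken from any of the projects compared here:

```python
import shutil
import zipfile

def stream_zip(wfile, paths):
    # ZIP_STORED = no compression, so bytes flow out immediately
    # and no server-side temp file is ever created
    with zipfile.ZipFile(wfile, "w", zipfile.ZIP_STORED) as z:
        for p in paths:
            with open(p, "rb") as f, z.open(zipfile.ZipInfo(p), "w") as w:
                shutil.copyfileobj(f, w)
```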
@@ -566,6 +591,7 @@ symbol legend,
 * go; cross-platform (windows, linux, mac)
 * ⚠️ http uploads not resumable / accelerated / integrity-checked
 * ⚠️ on cloudflare: max upload size 100 MiB
+* ⚠️ across the atlantic, copyparty is 2.5x faster
 * 🔵 sftp uploads are resumable
 * ⚠️ web UI is very minimal + a bit slow
 * ⚠️ no thumbnails / image viewer / audio player
@@ -573,6 +599,7 @@ symbol legend,
 * ⚠️ no filesystem indexing / search
 * ⚠️ doesn't run on phones, tablets
 * ⚠️ no zeroconf (mdns/ssdp)
+* ⚠️ impractical directory URLs
 * ⚠️ AGPL licensed
 * 🔵 ftp, ftps, webdav
 * ✅ sftp server
@@ -589,11 +616,13 @@ symbol legend,
 ## [arozos](https://github.com/tobychui/arozos)
 * big suite of applications similar to [kodbox](#kodbox), copyparty is better at downloading/uploading/music/indexing but arozos has other advantages
 * go; primarily linux (limited support for windows)
+* ⚠️ needs root
 * ⚠️ uploads not resumable / integrity-checked
 * ⚠️ uploading small files to copyparty is 2.7x faster
 * ⚠️ uploading large files to copyparty is at least 10% faster
   * arozos is websocket-based, 512 KiB chunks; writes each chunk to a separate file and then merges
   * copyparty splices directly into the final file; faster and better for the HDD and filesystem
+* ⚠️ across the atlantic, uploading to copyparty is 6x faster
 * ⚠️ no directory tree navpane; not as easy to navigate
 * ⚠️ download-as-zip is not streaming; creates a temp.file on the server
 * ⚠️ not self-contained (pulls from jsdelivr)
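the "splices directly into the final file" approach from the list above, as a toy sketch: preallocate the file at its final size once, then write each chunk at its own offset, so out-of-order chunks need no merge pass afterwards (this shows the general technique, not either project's actual code):

```python
def prealloc(path: str, size: int) -> None:
    # create a (sparse) file at its final size up front
    with open(path, "wb") as f:
        f.truncate(size)

def splice_chunk(path: str, ofs: int, buf: bytes) -> None:
    # chunks may arrive in any order; each lands at its own offset
    with open(path, "r+b") as f:
        f.seek(ofs)
        f.write(buf)
```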


@@ -17,3 +17,19 @@ but I don't really know what i'm doing here 💩
 `podman login docker.io`
 `podman login ghcr.io -u 9001`
 [about gchq](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry) (takes a classic token as password)
+## building on alpine
+```bash
+apk add podman{,-docker}           # podman plus the docker-compatible cli
+rc-update add cgroups              # podman needs cgroups under openrc
+service cgroups start
+vim /etc/containers/storage.conf   # driver = "btrfs"
+modprobe tun                       # rootless networking wants /dev/net/tun
+echo ed:100000:65536 >/etc/subuid  # subordinate uid/gid ranges for rootless user "ed"
+echo ed:100000:65536 >/etc/subgid
+apk add qemu-openrc qemu-tools qemu-{arm,armeb,aarch64,s390x,ppc64le}
+rc-update add qemu-binfmt          # register the emulators for cross-arch builds
+service qemu-binfmt start
+```
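with the qemu binfmt handlers registered, cross-arch image builds should then work on the same box; presumably something like `podman build --arch=arm64 .` (the exact paths/tags depend on this repo's dockerfiles, so treat that invocation as a sketch)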

scripts/help2html.py Executable file

@@ -0,0 +1,81 @@
#!/usr/bin/env python3

import re
import subprocess as sp


# to convert the copyparty --help to html, run this in xfce4-terminal @ 140x43:
_ = r""""
echo; for a in '' -accounts -flags -handlers -hooks -urlform -exp -ls -dbd -pwhash -zm; do
./copyparty-sfx.py --help$a 2>/dev/null; printf '\n\n\n%0139d\n\n\n'; done  # xfce4-terminal @ 140x43
"""
# click [edit] => [select all]
# click [edit] => [copy as html]
# and then run this script


def readclip():
    # try the usual clipboard readers (x11 first, then macos)
    cmds = [
        "xsel -ob",
        "xclip -selection CLIPBOARD -o",
        "pbpaste",
    ]
    for cmd in cmds:
        try:
            return sp.check_output(cmd.split()).decode("utf-8")
        except:
            pass


def cnv(src):
    yield '<html style="background:#222;color:#fff"><body>'
    skip_sfx = False
    in_sfx = 0
    in_salt = 0

    # skip the shell prompt; the exported text starts at the first <font> tag
    while True:
        ln = next(src)
        if "<font" in ln:
            if not ln.startswith("<pre>"):
                ln = "<pre>" + ln
            yield ln
            break

    for ln in src:
        ln = ln.rstrip()
        if re.search(r"^<font[^>]+>copyparty v[0-9]", ln):
            in_sfx = 3
        if in_sfx:
            in_sfx -= 1
            if not skip_sfx:
                yield ln
            continue

        if '">uuid:' in ln:
            # scrub host-specific values from the help text
            ln = re.sub(r">uuid:[0-9a-f-]{36}<", ">autogenerated<", ln)

        if "-salt SALT" in ln:
            in_salt = 3
        if in_salt:
            in_salt -= 1
            t = ln
            ln = re.sub(r">[0-9a-zA-Z/+]{24}<", ">24-character-autogenerated<", ln)
            ln = re.sub(r">[0-9a-zA-Z/+]{40}<", ">40-character-autogenerated<", ln)
            if t != ln:
                in_salt = 0

        ln = ln.replace(">/home/ed/", ">~/")
        if ln.startswith("0" * 20):
            skip_sfx = True
        yield ln

    yield "</pre>eof</body></html>"


def main():
    src = readclip()
    # drop everything past the final run-of-zeros separator
    src = re.split("0{100,200}", src[::-1], 1)[1][::-1]
    with open("helptext.html", "wb") as fo:
        for ln in cnv(iter(src.split("\n")[:-3])):
            fo.write(ln.encode("utf-8") + b"\r\n")


if __name__ == "__main__":
    main()
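a quick way to sanity-check the uuid-scrubbing pattern above, with a made-up sample line shaped like the terminal's html export:

```python
import re

ln = '<font color="#ffffff">uuid:12345678-1234-1234-1234-123456789abc</font>'
print(re.sub(r">uuid:[0-9a-f-]{36}<", ">autogenerated<", ln))
# -> <font color="#ffffff">autogenerated</font>
```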

scripts/help2txt.sh Executable file

@@ -0,0 +1,26 @@
#!/bin/bash
set -e
( xsel -ob | sed -r '
s`/home/ed/`~/`;
s/uuid:[0-9a-f-]{36}/autogenerated/;
s/(-salt SALT.*default: )[0-9a-zA-Z/+]{24}\)/\124-character-autogenerated)/;
s/(-salt SALT.*default: )[0-9a-zA-Z/+]{40}\)/\140-character-autogenerated)/;
' | awk '
/^copyparty/{a=1} !a{next}
/^0{20}/{b=1} b&&/^copyparty v[0-9]+\./{s=3}
s{s-=1;next} 1' |
head -n-6; echo eof ) >helptext.txt
exit 0
# =====================================================================
# end of script; below is the explanation of how to use this:
# first open an infinitely wide console (this is why you own an ultrawide) and copypaste this into it:
for a in '' -accounts -flags -handlers -hooks -urlform -exp -ls -dbd -pwhash -zm; do
./copyparty-sfx.py --help$a 2>/dev/null; printf '\n\n\n%0255d\n\n\n'; done
# then copypaste all of the output by pressing ctrl-shift-a, ctrl-shift-c
# and finally actually run this script which should produce helptext.txt


@@ -120,7 +120,7 @@ class Cfg(Namespace):
         ex = "ah_cli ah_gen css_browser hist js_browser mime mimes no_forget no_hash no_idx nonsus_urls og_tpl og_ua"
         ka.update(**{k: None for k in ex.split()})

-        ex = "hash_mt srch_time u2abort u2j"
+        ex = "hash_mt srch_time u2abort u2j u2sz"
         ka.update(**{k: 1 for k in ex.split()})

         ex = "au_vol mtab_age reg_cap s_thead s_tbody th_convt"