Compare commits


29 Commits

Author SHA1 Message Date
ed cecef88d6b v1.15.9 2024-10-18 23:42:20 +00:00
ed 7ffd805a03 add RSS feed output; closes #109 2024-10-18 23:24:12 +00:00
ed a7e2a0c981 up2k: fix chinese-specific js crash; closes #108
the client-side ETA, included as metadata in POSTs,
would crash the js when it was still the initial "Starting..." text
2024-10-18 19:04:22 +00:00
ed 2a570bb4ca fix --df for webdav; closes #107
PUT uploads, as used by webdav, would stat the absolute
path of the file to be created, which would throw ENOENT

strip components until the path is an existing directory

and also try to enforce disk space / volume size limits
even when the incoming file is of unknown size
2024-10-18 18:14:35 +00:00
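A minimal sketch of the path-pruning idea (the function name is illustrative; the actual logic is the `prune` loop added to `get_df` in `util.py` further down in this diff):

```python
import os

def nearest_existing_dir(abspath):
    # the PUT target does not exist yet, so walk up the path until we hit
    # a directory that does; statvfs can then measure free space there
    # (mirrors the prune loop in get_df)
    ap = abspath
    while not os.path.isdir(ap) and os.sep in ap:
        ap = ap.rsplit(os.sep, 1)[0]
    return ap
```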
ed 5ca8f0706d up2k.js: detect broken webworkers;
the first time a file is to be hashed after a website refresh,
a set of webworkers is launched for efficient parallelization

in the unlikely event of a network outage at exactly this point,
the workers will fail to start, and hashing will never begin

add a ping/pong sequence to smoketest the workers, and
fallback to hashing on the main-thread when necessary
2024-10-18 16:50:15 +00:00
ed a9b4436cdc up2k: improve upload retry/timeout
* `js:` make handshake retries more aggressive
* `u2c:` reduce chunks timeout + ^
* `main:` reduce tcp timeout to 128sec (js is 42s)
* `httpcli:` less confusing log messages
2024-10-18 16:24:31 +00:00
ed 5f91999512 update pkgs to 1.15.8 2024-10-16 22:22:29 +00:00
ed 9f000beeaf v1.15.8 2024-10-16 21:53:23 +00:00
ed ff0a71f212 gallery: play m4v videos 2024-10-16 21:36:11 +00:00
ed 22dfc6ec24 ui-toast: hide countdown if infinite 2024-10-16 21:32:47 +00:00
ed 48147c079e subchunks: fix eta, cfg-ui 2024-10-16 21:17:00 +00:00
ed d715479ef6 add chickenbit to force hashwasm 2024-10-16 20:23:02 +00:00
ed fc8298c468 up2k: avoid cloudflare upload size-limit
previously, the biggest file that could be uploaded through
cloudflare was 383 GiB, due to max num chunks being 4096

`--u2sz`, which takes three ints (min-size, target, max-size),
can now be used to enforce a max chunksize; chunks larger
than max-size get split into smaller subchunks / chunklets

subchunks cannot be stitched/joined, and subchunks of the
same chunk must be uploaded sequentially, one at a time

if a subchunk fails due to bitflips or connection-loss,
then the entire chunk must (and will) be reuploaded
2024-10-16 19:29:08 +00:00
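A rough sketch of the subchunk arithmetic, with an illustrative helper (the real client logic is `FileSlice.subchunk` in `u2c.py` further down in this diff; the `X-Up2k-Subc` header name is taken from the same diff):

```python
def iter_subchunks(chunk_ofs, chunk_sz, maxsz):
    # yield (absolute-offset, length, subchunk-offset) for one logical chunk;
    # subchunks must be POSTed sequentially, and X-Up2k-Subc carries the
    # offset of the subchunk within its parent chunk
    sub_ofs = 0
    while sub_ofs < chunk_sz:
        n = min(maxsz, chunk_sz - sub_ofs)
        yield chunk_ofs + sub_ofs, n, sub_ofs
        sub_ofs += n

# example: a 128 MiB chunk with a 96 MiB max becomes two POSTs,
# X-Up2k-Subc: 0 (96 MiB) followed by X-Up2k-Subc: 100663296 (32 MiB)
```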
ed e94ca5dc91 up2k: improve logging 2024-10-16 15:41:19 +00:00
ed 114b71b751 up2k: fix filesystem toctou
previously and currently, as an upload completes, its "done" flag
is not set until all the data has been flushed to disk

however, the list of missing chunks becomes empty before the flush,
and that list was incorrectly used to determine completion state
in some dedup-related logic

as a result, duplicate uploads could initially fail, and would
succeed after the client automatically retried a handful of times
2024-10-16 15:32:58 +00:00
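A minimal sketch of the corrected completion check (the job dict is illustrative; the real change is the `"done" in rj` tests added to `up2k.py` further down in this diff):

```python
def upload_is_complete(job):
    # old check: `not job["need"]` -- but the list of missing chunks is
    # emptied before the final flush, so dedup could see a file that is
    # not fully written to disk yet
    # new check: the explicit "done" flag, which is set only after the flush
    return "done" in job
```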
ed b2770a2087 u2c: support more crazy filenames
newlines, invalid utf8, and worst of all... %20 (whitespace)

due to up2k protocol limitations,
filenames are normalized when they hit the server,
but folders get to keep their intended jank
2024-10-15 23:01:07 +00:00
ed cba1878bb2 u2c: don't get stuck at fifos and such 2024-10-15 22:53:55 +00:00
ed a2e037d6af u2c: fix chunksize calculation
files which were exactly 128 GiB large would fail
(you can't make this shit up)
2024-10-15 22:39:48 +00:00
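A quick sketch of the boundary, using only the values visible in the `up2k_chunksize` hunk further down in this diff:

```python
# a file of exactly 128 GiB divides into exactly 4096 chunks of 32 MiB
filesize = 128 * 1024 ** 3
chunksize = 32 * 1024 ** 2
nchunks = -(-filesize // chunksize)  # ceil-division
print(nchunks)  # 4096
# old client test: chunksize >= 32 MiB and nchunks <  4096  -> rejects 32 MiB here
# new client test: chunksize >= 32 MiB and nchunks <= 4096  -> accepts it
```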
ed 65a2b6a223 u2c: fix excessive FDs
it would open separate FDs for all chunks to be uploaded...

open and close files as they are needed during upload instead
2024-10-15 22:30:15 +00:00
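A minimal sketch of the deferred-open pattern (the class name is illustrative; the actual implementation is the `_seek0`/`_read0`/`_open` methods added to `FileSlice` further down in this diff):

```python
class LazyFile:
    """open the underlying file on first use instead of at queue time,
    so pending chunks don't each pin an open file descriptor"""

    def __init__(self, path):
        self.path = path
        self.f = None

    def _ensure_open(self):
        if self.f is None:
            self.f = open(self.path, "rb")

    def read(self, n):
        self._ensure_open()
        return self.f.read(n)

    def close(self):
        if self.f:
            self.f.close()
            self.f = None
```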
ed 9ed799e803 update pkgs to 1.15.7 2024-10-13 23:07:31 +00:00
ed c1c0ecca13 v1.15.7 2024-10-13 22:44:57 +00:00
ed ee62836383 bitflip logging 2024-10-13 22:37:35 +00:00
ed 705f598b1a up2k non-e2d fixes:
* respect noforget when loading snaps
* ...but actually forget deleted files otherwise
* insert empty need/hash as necessary
2024-10-13 22:10:27 +00:00
ed 414de88925 u2c v2.2 2024-10-13 22:07:41 +00:00
ed 53ffd245dd u2c: fix progress indicator for resumed uploads 2024-10-13 22:07:07 +00:00
ed cf1b756206 u2c: option to list chunk hashes 2024-10-13 22:06:02 +00:00
ed 22b58e31ef unpost: authed users can see anon on same ip 2024-10-13 22:00:15 +00:00
ed b7f9bf5a28 cidr-based autologin 2024-10-13 21:56:26 +00:00
ed aba680b6c2 update pkgs to 1.15.6 2024-10-11 23:16:24 +00:00
25 changed files with 778 additions and 164 deletions

View File

@@ -47,6 +47,7 @@ turn almost any device into a file server with resumable uploads/downloads using
* [file manager](#file-manager) - cut/paste, rename, and delete files/folders (if you have permission)
* [shares](#shares) - share a file or folder by creating a temporary link
* [batch rename](#batch-rename) - select some files and press `F2` to bring up the rename UI
* [rss feeds](#rss-feeds) - monitor a folder with your RSS reader
* [media player](#media-player) - plays almost every audio format there is
* [audio equalizer](#audio-equalizer) - and [dynamic range compressor](https://en.wikipedia.org/wiki/Dynamic_range_compression)
* [fix unreliable playback on android](#fix-unreliable-playback-on-android) - due to phone / app settings
@@ -80,6 +81,7 @@ turn almost any device into a file server with resumable uploads/downloads using
* [event hooks](#event-hooks) - trigger a program on uploads, renames etc ([examples](./bin/hooks/))
* [upload events](#upload-events) - the older, more powerful approach ([examples](./bin/mtag/))
* [handlers](#handlers) - redefine behavior with plugins ([examples](./bin/handlers/))
* [ip auth](#ip-auth) - autologin based on IP range (CIDR)
* [identity providers](#identity-providers) - replace copyparty passwords with oauth and such
* [user-changeable passwords](#user-changeable-passwords) - if permitted, users can change their own passwords
* [using the cloud as storage](#using-the-cloud-as-storage) - connecting to an aws s3 bucket and similar
@@ -218,7 +220,7 @@ also see [comparison to similar software](./docs/versus.md)
* upload
* ☑ basic: plain multipart, ie6 support
* ☑ [up2k](#uploading): js, resumable, multithreaded
* **no filesize limit!** ...unless you use Cloudflare, then it's 383.9 GiB
* **no filesize limit!** even on Cloudflare
* ☑ stash: simple PUT filedropper
* ☑ filename randomizer
* ☑ write-only folders
@@ -653,7 +655,7 @@ up2k has several advantages:
* uploads resume if you reboot your browser or pc, just upload the same files again
* server detects any corruption; the client reuploads affected chunks
* the client doesn't upload anything that already exists on the server
* no filesize limit unless imposed by a proxy, for example Cloudflare, which blocks uploads over 383.9 GiB
* no filesize limit, even when a proxy limits the request size (for example Cloudflare)
* much higher speeds than ftp/scp/tarpipe on some internet connections (mainly american ones) thanks to parallel connections
* the last-modified timestamp of the file is preserved
@@ -689,6 +691,8 @@ note that since up2k has to read each file twice, `[🎈] bup` can *theoreticall
if you are resuming a massive upload and want to skip hashing the files which already finished, you can enable `turbo` in the `[⚙️] config` tab, but please read the tooltip on that button
if the server is behind a proxy which imposes a request-size limit, you can configure up2k to sneak below the limit with server-option `--u2sz` (the default is 96 MiB to support Cloudflare)
### file-search
@@ -842,6 +846,30 @@ or a mix of both:
the metadata keys you can use in the format field are the ones in the file-browser table header (whatever is collected with `-mte` and `-mtp`)
## rss feeds
monitor a folder with your RSS reader, optionally recursive
must be enabled per-volume with volflag `rss` or globally with `--rss`
the feed includes itunes metadata for use with podcast readers such as [AntennaPod](https://antennapod.org/)
a feed example: https://cd.ocv.me/a/d2/d22/?rss&fext=mp3
url parameters:
* `pw=hunter2` for password auth
* `recursive` to also include subfolders
* `title=foo` changes the feed title (default: folder name)
* `fext=mp3,opus` only include mp3 and opus files (default: all)
* `nf=30` only show the first 30 results (default: 250)
* `sort=m` sort by mtime (file last-modified), newest first (default)
* `u` = upload-time; NOTE: non-uploaded files have upload-time `0`
* `n` = filename
* `a` = filesize
* uppercase = reverse-sort; `M` = oldest file first
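As an example of combining several of these parameters, here is a small sketch for sanity-checking a feed from a script (the URL is the example feed above, and the query parameters are the ones listed in this section):

```python
import urllib.request
import xml.etree.ElementTree as ET

# recursive listing, mp3/opus only, 50 newest by upload-time
url = "https://cd.ocv.me/a/d2/d22/?rss&recursive&fext=mp3,opus&nf=50&sort=u"

with urllib.request.urlopen(url) as r:
    feed = ET.fromstring(r.read())

for item in feed.iter("item"):
    print(item.findtext("pubDate"), item.findtext("title"))
```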
## media player
plays almost every audio format there is (if the server has FFmpeg installed for on-demand transcoding)
@@ -1432,6 +1460,22 @@ redefine behavior with plugins ([examples](./bin/handlers/))
replace 404 and 403 errors with something completely different (that's it for now)
## ip auth
autologin based on IP range (CIDR), using the global-option `--ipu`
for example, if everyone with an IP that starts with `192.168.123` should automatically log in as the user `spartacus`, then you can either specify `--ipu=192.168.123.0/24=spartacus` as a commandline option, or put this in a config file:
```yaml
[global]
ipu: 192.168.123.0/24=spartacus
```
repeat the option to map additional subnets
**be careful with this one!** if you have a reverseproxy, then you definitely want to make sure you have [real-ip](#real-ip) configured correctly, and it's probably a good idea to nullmap the reverseproxy's IP just in case; so if your reverseproxy is sending requests from `172.24.27.9` then that would be `--ipu=172.24.27.9/32=`
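To illustrate the semantics (not the actual implementation, which is `load_ipu`/`NetMap` in `copyparty/util.py`, shown later in this diff, and which does proper longest-prefix matching), a stdlib-only sketch of how the two mappings above would resolve:

```python
import ipaddress

rules = {
    "192.168.123.0/24": "spartacus",  # subnet from the example above
    "172.24.27.9/32": "",             # nullmap for the reverseproxy
}

def ipu_lookup(ip):
    for cidr, uname in rules.items():
        if ipaddress.ip_address(ip) in ipaddress.ip_network(cidr, strict=False):
            return uname or "*"  # empty username = no autologin (stay anonymous)
    return "*"

print(ipu_lookup("192.168.123.7"))  # spartacus
print(ipu_lookup("172.24.27.9"))    # *
```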
## identity providers
replace copyparty passwords with oauth and such

View File

@@ -1,8 +1,8 @@
#!/usr/bin/env python3
from __future__ import print_function, unicode_literals
S_VERSION = "2.1"
S_BUILD_DT = "2024-09-23"
S_VERSION = "2.5"
S_BUILD_DT = "2024-10-18"
"""
u2c.py: upload to copyparty
@@ -62,6 +62,9 @@ else:
unicode = str
WTF8 = "replace" if PY2 else "surrogateescape"
VT100 = platform.system() != "Windows"
@@ -151,6 +154,7 @@ class HCli(object):
self.tls = tls
self.verify = ar.te or not ar.td
self.conns = []
self.hconns = []
if tls:
import ssl
@@ -170,7 +174,7 @@ class HCli(object):
"User-Agent": "u2c/%s" % (S_VERSION,),
}
def _connect(self):
def _connect(self, timeout):
args = {}
if PY37:
args["blocksize"] = 1048576
@@ -182,7 +186,7 @@ class HCli(object):
if self.ctx:
args = {"context": self.ctx}
return C(self.addr, self.port, timeout=999, **args)
return C(self.addr, self.port, timeout=timeout, **args)
def req(self, meth, vpath, hdrs, body=None, ctype=None):
hdrs.update(self.base_hdrs)
@@ -195,7 +199,9 @@ class HCli(object):
0 if not body else body.len if hasattr(body, "len") else len(body)
)
c = self.conns.pop() if self.conns else self._connect()
# large timeout for handshakes (safededup)
conns = self.hconns if ctype == MJ else self.conns
c = conns.pop() if conns else self._connect(999 if ctype == MJ else 128)
try:
c.request(meth, vpath, body, hdrs)
if PY27:
@@ -204,7 +210,7 @@ class HCli(object):
rsp = c.getresponse()
data = rsp.read()
self.conns.append(c)
conns.append(c)
return rsp.status, data.decode("utf-8")
except:
c.close()
@@ -228,7 +234,7 @@ class File(object):
self.lmod = lmod # type: float
self.abs = os.path.join(top, rel) # type: bytes
self.name = self.rel.split(b"/")[-1].decode("utf-8", "replace") # type: str
self.name = self.rel.split(b"/")[-1].decode("utf-8", WTF8) # type: str
# set by get_hashlist
self.cids = [] # type: list[tuple[str, int, int]] # [ hash, ofs, sz ]
@@ -267,10 +273,41 @@ class FileSlice(object):
raise Exception(9)
tlen += clen
self.len = tlen
self.len = self.tlen = tlen
self.cdr = self.car + self.len
self.ofs = 0 # type: int
self.f = open(file.abs, "rb", 512 * 1024)
self.f = None
self.seek = self._seek0
self.read = self._read0
def subchunk(self, maxsz, nth):
if self.tlen <= maxsz:
return -1
if not nth:
self.car0 = self.car
self.cdr0 = self.cdr
self.car = self.car0 + maxsz * nth
if self.car >= self.cdr0:
return -2
self.cdr = self.car + min(self.cdr0 - self.car, maxsz)
self.len = self.cdr - self.car
self.seek(0)
return nth
def unsub(self):
self.car = self.car0
self.cdr = self.cdr0
self.len = self.tlen
def _open(self):
self.seek = self._seek
self.read = self._read
self.f = open(self.file.abs, "rb", 512 * 1024)
self.f.seek(self.car)
# https://stackoverflow.com/questions/4359495/what-is-exactly-a-file-like-object-in-python
@@ -282,10 +319,15 @@ class FileSlice(object):
except:
pass # py27 probably
def close(self, *a, **ka):
return # until _open
def tell(self):
return self.ofs
def seek(self, ofs, wh=0):
def _seek(self, ofs, wh=0):
assert self.f # !rm
if wh == 1:
ofs = self.ofs + ofs
elif wh == 2:
@@ -299,12 +341,22 @@ class FileSlice(object):
self.ofs = ofs
self.f.seek(self.car + ofs)
def read(self, sz):
def _read(self, sz):
assert self.f # !rm
sz = min(sz, self.len - self.ofs)
ret = self.f.read(sz)
self.ofs += len(ret)
return ret
def _seek0(self, ofs, wh=0):
self._open()
return self.seek(ofs, wh)
def _read0(self, sz):
self._open()
return self.read(sz)
class MTHash(object):
def __init__(self, cores):
@@ -557,13 +609,17 @@ def walkdir(err, top, excl, seen):
for ap, inf in sorted(statdir(err, top)):
if excl.match(ap):
continue
yield ap, inf
if stat.S_ISDIR(inf.st_mode):
yield ap, inf
try:
for x in walkdir(err, ap, excl, seen):
yield x
except Exception as ex:
err.append((ap, str(ex)))
elif stat.S_ISREG(inf.st_mode):
yield ap, inf
else:
err.append((ap, "irregular filetype 0%o" % (inf.st_mode,)))
def walkdirs(err, tops, excl):
@@ -609,11 +665,12 @@ def walkdirs(err, tops, excl):
# mostly from copyparty/util.py
def quotep(btxt):
# type: (bytes) -> bytes
quot1 = quote(btxt, safe=b"/")
if not PY2:
quot1 = quot1.encode("ascii")
return quot1.replace(b" ", b"+") # type: ignore
return quot1.replace(b" ", b"%20") # type: ignore
# from copyparty/util.py
@@ -641,7 +698,7 @@ def up2k_chunksize(filesize):
while True:
for mul in [1, 2]:
nchunks = math.ceil(filesize * 1.0 / chunksize)
if nchunks <= 256 or (chunksize >= 32 * 1024 * 1024 and nchunks < 4096):
if nchunks <= 256 or (chunksize >= 32 * 1024 * 1024 and nchunks <= 4096):
return chunksize
chunksize += stepsize
@@ -720,7 +777,7 @@ def handshake(ar, file, search):
url = file.url
else:
if b"/" in file.rel:
url = quotep(file.rel.rsplit(b"/", 1)[0]).decode("utf-8", "replace")
url = quotep(file.rel.rsplit(b"/", 1)[0]).decode("utf-8")
else:
url = ""
url = ar.vtop + url
@@ -728,6 +785,7 @@ def handshake(ar, file, search):
while True:
sc = 600
txt = ""
t0 = time.time()
try:
zs = json.dumps(req, separators=(",\n", ": "))
sc, txt = web.req("POST", url, {}, zs.encode("utf-8"), MJ)
@@ -752,7 +810,9 @@ def handshake(ar, file, search):
print("\nERROR: login required, or wrong password:\n%s" % (txt,))
raise BadAuth()
eprint("handshake failed, retrying: %s\n %s\n\n" % (file.name, em))
t = "handshake failed, retrying: %s\n t0=%.3f t1=%.3f td=%.3f\n %s\n\n"
now = time.time()
eprint(t % (file.name, t0, now, now - t0, em))
time.sleep(ar.cd)
try:
@@ -763,15 +823,15 @@ def handshake(ar, file, search):
if search:
return r["hits"], False
file.url = r["purl"]
file.url = quotep(r["purl"].encode("utf-8", WTF8)).decode("utf-8")
file.name = r["name"]
file.wark = r["wark"]
return r["hash"], r["sprs"]
def upload(fsl, stats):
# type: (FileSlice, str) -> None
def upload(fsl, stats, maxsz):
# type: (FileSlice, str, int) -> None
"""upload a range of file data, defined by one or more `cid` (chunk-hash)"""
ctxt = fsl.cids[0]
@@ -789,21 +849,34 @@ def upload(fsl, stats):
if stats:
headers["X-Up2k-Stat"] = stats
nsub = 0
try:
sc, txt = web.req("POST", fsl.file.url, headers, fsl, MO)
while nsub != -1:
nsub = fsl.subchunk(maxsz, nsub)
if nsub == -2:
return
if nsub >= 0:
headers["X-Up2k-Subc"] = str(maxsz * nsub)
headers.pop(CLEN, None)
nsub += 1
if sc == 400:
if (
"already being written" in txt
or "already got that" in txt
or "only sibling chunks" in txt
):
fsl.file.nojoin = 1
sc, txt = web.req("POST", fsl.file.url, headers, fsl, MO)
if sc >= 400:
raise Exception("http %s: %s" % (sc, txt))
if sc == 400:
if (
"already being written" in txt
or "already got that" in txt
or "only sibling chunks" in txt
):
fsl.file.nojoin = 1
if sc >= 400:
raise Exception("http %s: %s" % (sc, txt))
finally:
fsl.f.close()
if fsl.f:
fsl.f.close()
if nsub != -1:
fsl.unsub()
class Ctl(object):
@@ -869,8 +942,8 @@ class Ctl(object):
self.hash_b = 0
self.up_f = 0
self.up_c = 0
self.up_b = 0
self.up_br = 0
self.up_b = 0 # num bytes handled
self.up_br = 0 # num bytes actually transferred
self.uploader_busy = 0
self.serialized = False
@@ -935,7 +1008,7 @@ class Ctl(object):
print(" %d up %s" % (ncs - nc, cid))
stats = "%d/0/0/%d" % (nf, self.nfiles - nf)
fslice = FileSlice(file, [cid])
upload(fslice, stats)
upload(fslice, stats, self.ar.szm)
print(" ok!")
if file.recheck:
@@ -1013,11 +1086,14 @@ class Ctl(object):
t = "%s eta @ %s/s, %s, %d# left\033[K" % (self.eta, spd, sleft, nleft)
eprint(txt + "\033]0;{0}\033\\\r{0}{1}".format(t, tail))
if self.ar.wlist:
self.at_hash = time.time() - self.t0
if self.hash_b and self.at_hash:
spd = humansize(self.hash_b / self.at_hash)
eprint("\nhasher: %.2f sec, %s/s\n" % (self.at_hash, spd))
if self.up_b and self.at_up:
spd = humansize(self.up_b / self.at_up)
if self.up_br and self.at_up:
spd = humansize(self.up_br / self.at_up)
eprint("upload: %.2f sec, %s/s\n" % (self.at_up, spd))
if not self.recheck:
@@ -1051,7 +1127,7 @@ class Ctl(object):
print(" ls ~{0}".format(srd))
zt = (
self.ar.vtop,
quotep(rd.replace(b"\\", b"/")).decode("utf-8", "replace"),
quotep(rd.replace(b"\\", b"/")).decode("utf-8"),
)
sc, txt = web.req("GET", "%s%s?ls&lt&dots" % zt, {})
if sc >= 400:
@@ -1060,7 +1136,7 @@ class Ctl(object):
j = json.loads(txt)
for f in j["dirs"] + j["files"]:
rfn = f["href"].split("?")[0].rstrip("/")
ls[unquote(rfn.encode("utf-8", "replace"))] = f
ls[unquote(rfn.encode("utf-8", WTF8))] = f
except Exception as ex:
print(" mkdir ~{0} ({1})".format(srd, ex))
@@ -1074,7 +1150,7 @@ class Ctl(object):
lnodes = [x.split(b"/")[-1] for x in zls]
bnames = [x for x in ls if x not in lnodes and x != b".hist"]
vpath = self.ar.url.split("://")[-1].split("/", 1)[-1]
names = [x.decode("utf-8", "replace") for x in bnames]
names = [x.decode("utf-8", WTF8) for x in bnames]
locs = [vpath + srd + "/" + x for x in names]
while locs:
req = locs
@@ -1136,10 +1212,16 @@ class Ctl(object):
self.up_b = self.hash_b
if self.ar.wlist:
vp = file.rel.decode("utf-8")
if self.ar.chs:
zsl = [
"%s %d %d" % (zsii[0], n, zsii[1])
for n, zsii in enumerate(file.cids)
]
print("chs: %s\n%s" % (vp, "\n".join(zsl)))
zsl = [self.ar.wsalt, str(file.size)] + [x[0] for x in file.kchunks]
zb = hashlib.sha512("\n".join(zsl).encode("utf-8")).digest()[:33]
wark = ub64enc(zb).decode("utf-8")
vp = file.rel.decode("utf-8")
if self.ar.jw:
print("%s %s" % (wark, vp))
else:
@@ -1177,6 +1259,7 @@ class Ctl(object):
self.q_upload.put(None)
return
chunksz = up2k_chunksize(file.size)
upath = file.abs.decode("utf-8", "replace")
if not VT100:
upath = upath.lstrip("\\?")
@@ -1236,9 +1319,14 @@ class Ctl(object):
file.up_c -= len(hs)
for cid in hs:
sz = file.kchunks[cid][1]
self.up_br -= sz
self.up_b -= sz
file.up_b -= sz
if hs and not file.up_b:
# first hs of this file; is this an upload resume?
file.up_b = chunksz * max(0, len(file.kchunks) - len(hs))
file.ucids = hs
if not hs:
@@ -1252,7 +1340,7 @@ class Ctl(object):
c1 = c2 = ""
spd_h = humansize(file.size / file.t_hash, True)
if file.up_b:
if file.up_c:
t_up = file.t1_up - file.t0_up
spd_u = humansize(file.size / t_up, True)
@@ -1262,14 +1350,13 @@ class Ctl(object):
t = " found %s %s(%.2fs,%s/s)%s"
print(t % (upath, c1, file.t_hash, spd_h, c2))
else:
kw = "uploaded" if file.up_b else " found"
kw = "uploaded" if file.up_c else " found"
print("{0} {1}".format(kw, upath))
self._check_if_done()
continue
chunksz = up2k_chunksize(file.size)
njoin = (self.ar.sz * 1024 * 1024) // chunksz
njoin = self.ar.sz // chunksz
cs = hs[:]
while cs:
fsl = FileSlice(file, cs[:1])
@@ -1321,7 +1408,7 @@ class Ctl(object):
)
try:
upload(fsl, stats)
upload(fsl, stats, self.ar.szm)
except Exception as ex:
t = "upload failed, retrying: %s #%s+%d (%s)\n"
eprint(t % (file.name, cids[0][:8], len(cids) - 1, ex))
@@ -1365,7 +1452,7 @@ def main():
cores = (os.cpu_count() if hasattr(os, "cpu_count") else 0) or 2
hcores = min(cores, 3) # 4% faster than 4+ on py3.9 @ r5-4500U
ver = "{0} v{1} https://youtu.be/BIcOO6TLKaY".format(S_BUILD_DT, S_VERSION)
ver = "{0}, v{1}".format(S_BUILD_DT, S_VERSION)
if "--version" in sys.argv:
print(ver)
return
@@ -1403,12 +1490,14 @@ source file/folder selection uses rsync syntax, meaning that:
ap = app.add_argument_group("file-ID calculator; enable with url '-' to list warks (file identifiers) instead of upload/search")
ap.add_argument("--wsalt", type=unicode, metavar="S", default="hunter2", help="salt to use when creating warks; must match server config")
ap.add_argument("--chs", action="store_true", help="verbose (print the hash/offset of each chunk in each file)")
ap.add_argument("--jw", action="store_true", help="just identifier+filepath, not mtime/size too")
ap = app.add_argument_group("performance tweaks")
ap.add_argument("-j", type=int, metavar="CONNS", default=2, help="parallel connections")
ap.add_argument("-J", type=int, metavar="CORES", default=hcores, help="num cpu-cores to use for hashing; set 0 or 1 for single-core hashing")
ap.add_argument("--sz", type=int, metavar="MiB", default=64, help="try to make each POST this big")
ap.add_argument("--szm", type=int, metavar="MiB", default=96, help="max size of each POST (default is cloudflare max)")
ap.add_argument("-nh", action="store_true", help="disable hashing while uploading")
ap.add_argument("-ns", action="store_true", help="no status panel (for slow consoles and macos)")
ap.add_argument("--cd", type=float, metavar="SEC", default=5, help="delay before reattempting a failed handshake/upload")
@@ -1436,6 +1525,9 @@ source file/folder selection uses rsync syntax, meaning that:
if ar.dr:
ar.ow = True
ar.sz *= 1024 * 1024
ar.szm *= 1024 * 1024
ar.x = "|".join(ar.x or [])
setattr(ar, "wlist", ar.url == "-")

View File

@@ -1,6 +1,6 @@
# Maintainer: icxes <dev.null@need.moe>
pkgname=copyparty
pkgver="1.15.5"
pkgver="1.15.8"
pkgrel=1
pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
arch=("any")
@@ -21,7 +21,7 @@ optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tag
)
source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
backup=("etc/${pkgname}.d/init" )
sha256sums=("c380ad1d20787d80077123ced583d45bc26467386bbceac35436662f435a6b8c")
sha256sums=("ce2870eca76c554d36392d649e9371ec77b85b84e4899926889e7428922c83ab")
build() {
cd "${srcdir}/${pkgname}-${pkgver}"

View File

@@ -1,5 +1,5 @@
{
"url": "https://github.com/9001/copyparty/releases/download/v1.15.5/copyparty-sfx.py",
"version": "1.15.5",
"hash": "sha256-2JcXSbtyEn+EtpyQTcE9U4XuckVKvAowVGqBZ110Jt4="
"url": "https://github.com/9001/copyparty/releases/download/v1.15.8/copyparty-sfx.py",
"version": "1.15.8",
"hash": "sha256-J+E6W4q0lsPlsO8S0nglWwGBeu98SE9w/zgIHSNg6Ic="
}

View File

@@ -1017,7 +1017,7 @@ def add_upload(ap):
ap2.add_argument("--sparse", metavar="MiB", type=int, default=4, help="windows-only: minimum size of incoming uploads through up2k before they are made into sparse files")
ap2.add_argument("--turbo", metavar="LVL", type=int, default=0, help="configure turbo-mode in up2k client; [\033[32m-1\033[0m] = forbidden/always-off, [\033[32m0\033[0m] = default-off and warn if enabled, [\033[32m1\033[0m] = default-off, [\033[32m2\033[0m] = on, [\033[32m3\033[0m] = on and disable datecheck")
ap2.add_argument("--u2j", metavar="JOBS", type=int, default=2, help="web-client: number of file chunks to upload in parallel; 1 or 2 is good for low-latency (same-country) connections, 4-8 for android clients, 16 for cross-atlantic (max=64)")
ap2.add_argument("--u2sz", metavar="N,N,N", type=u, default="1,64,96", help="web-client: default upload chunksize (MiB); sets \033[33mmin,default,max\033[0m in the settings gui. Each HTTP POST will aim for this size. Cloudflare max is 96. Big values are good for cross-atlantic but may increase HDD fragmentation on some FS. Disable this optimization with [\033[32m1,1,1\033[0m]")
ap2.add_argument("--u2sz", metavar="N,N,N", type=u, default="1,64,96", help="web-client: default upload chunksize (MiB); sets \033[33mmin,default,max\033[0m in the settings gui. Each HTTP POST will aim for \033[33mdefault\033[0m, and never exceed \033[33mmax\033[0m. Cloudflare max is 96. Big values are good for cross-atlantic but may increase HDD fragmentation on some FS. Disable this optimization with [\033[32m1,1,1\033[0m]")
ap2.add_argument("--u2sort", metavar="TXT", type=u, default="s", help="upload order; [\033[32ms\033[0m]=smallest-first, [\033[32mn\033[0m]=alphabetical, [\033[32mfs\033[0m]=force-s, [\033[32mfn\033[0m]=force-n -- alphabetical is a bit slower on fiber/LAN but makes it easier to eyeball if everything went fine")
ap2.add_argument("--write-uplog", action="store_true", help="write POST reports to textfiles in working-directory")
@@ -1037,7 +1037,7 @@ def add_network(ap):
else:
ap2.add_argument("--freebind", action="store_true", help="allow listening on IPs which do not yet exist, for example if the network interfaces haven't finished going up. Only makes sense for IPs other than '0.0.0.0', '127.0.0.1', '::', and '::1'. May require running as root (unless net.ipv6.ip_nonlocal_bind)")
ap2.add_argument("--s-thead", metavar="SEC", type=int, default=120, help="socket timeout (read request header)")
ap2.add_argument("--s-tbody", metavar="SEC", type=float, default=186.0, help="socket timeout (read/write request/response bodies). Use 60 on fast servers (default is extremely safe). Disable with 0 if reverse-proxied for a 2%% speed boost")
ap2.add_argument("--s-tbody", metavar="SEC", type=float, default=128.0, help="socket timeout (read/write request/response bodies). Use 60 on fast servers (default is extremely safe). Disable with 0 if reverse-proxied for a 2%% speed boost")
ap2.add_argument("--s-rd-sz", metavar="B", type=int, default=256*1024, help="socket read size in bytes (indirectly affects filesystem writes; recommendation: keep equal-to or lower-than \033[33m--iobuf\033[0m)")
ap2.add_argument("--s-wr-sz", metavar="B", type=int, default=256*1024, help="socket write size in bytes")
ap2.add_argument("--s-wr-slp", metavar="SEC", type=float, default=0.0, help="debug: socket write delay in seconds")
@@ -1087,6 +1087,7 @@ def add_auth(ap):
ap2.add_argument("--ses-db", metavar="PATH", type=u, default=ses_db, help="where to store the sessions database (if you run multiple copyparty instances, make sure they use different DBs)")
ap2.add_argument("--ses-len", metavar="CHARS", type=int, default=20, help="session key length; default is 120 bits ((20//4)*4*6)")
ap2.add_argument("--no-ses", action="store_true", help="disable sessions; use plaintext passwords in cookies")
ap2.add_argument("--ipu", metavar="CIDR=USR", type=u, action="append", help="users with IP matching \033[33mCIDR\033[0m are auto-authenticated as username \033[33mUSR\033[0m; example: [\033[32m172.16.24.0/24=dave]")
def add_chpw(ap):
@@ -1356,6 +1357,14 @@ def add_transcoding(ap):
ap2.add_argument("--ac-maxage", metavar="SEC", type=int, default=86400, help="delete cached transcode output after \033[33mSEC\033[0m seconds")
def add_rss(ap):
ap2 = ap.add_argument_group('RSS options')
ap2.add_argument("--rss", action="store_true", help="enable RSS output (experimental)")
ap2.add_argument("--rss-nf", metavar="HITS", type=int, default=250, help="default number of files to return (url-param 'nf')")
ap2.add_argument("--rss-fext", metavar="E,E", type=u, default="", help="default list of file extensions to include (url-param 'fext'); blank=all")
ap2.add_argument("--rss-sort", metavar="ORD", type=u, default="m", help="default sort order (url-param 'sort'); [\033[32mm\033[0m]=last-modified [\033[32mu\033[0m]=upload-time [\033[32mn\033[0m]=filename [\033[32ms\033[0m]=filesize; Uppercase=oldest-first. Note that upload-time is 0 for non-uploaded files")
def add_db_general(ap, hcores):
noidx = APPLESAN_TXT if MACOS else ""
ap2 = ap.add_argument_group('general db options')
@@ -1477,6 +1486,7 @@ def add_debug(ap):
ap2.add_argument("--bak-flips", action="store_true", help="[up2k] if a client uploads a bitflipped/corrupted chunk, store a copy according to \033[33m--bf-nc\033[0m and \033[33m--bf-dir\033[0m")
ap2.add_argument("--bf-nc", metavar="NUM", type=int, default=200, help="bak-flips: stop if there's more than \033[33mNUM\033[0m files at \033[33m--kf-dir\033[0m already; default: 6.3 GiB max (200*32M)")
ap2.add_argument("--bf-dir", metavar="PATH", type=u, default="bf", help="bak-flips: store corrupted chunks at \033[33mPATH\033[0m; default: folder named 'bf' wherever copyparty was started")
ap2.add_argument("--bf-log", metavar="PATH", type=u, default="", help="bak-flips: log corruption info to a textfile at \033[33mPATH\033[0m")
# fmt: on
@@ -1524,6 +1534,7 @@ def run_argparse(
add_db_metadata(ap)
add_thumbnail(ap)
add_transcoding(ap)
add_rss(ap)
add_ftp(ap)
add_webdav(ap)
add_tftp(ap)

View File

@@ -1,8 +1,8 @@
# coding: utf-8
VERSION = (1, 15, 6)
VERSION = (1, 15, 9)
CODENAME = "fill the drives"
BUILD_DT = (2024, 10, 12)
BUILD_DT = (2024, 10, 18)
S_VERSION = ".".join(map(str, VERSION))
S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)

View File

@@ -66,6 +66,7 @@ if PY2:
LEELOO_DALLAS = "leeloo_dallas"
SEE_LOG = "see log for details"
SEESLOG = " (see serverlog for details)"
SSEELOG = " ({})".format(SEE_LOG)
BAD_CFG = "invalid config; {}".format(SEE_LOG)
SBADCFG = " ({})".format(BAD_CFG)
@@ -164,8 +165,11 @@ class Lim(object):
self.chk_rem(rem)
if sz != -1:
self.chk_sz(sz)
self.chk_vsz(broker, ptop, sz, volgetter)
self.chk_df(abspath, sz) # side effects; keep last-ish
else:
sz = 0
self.chk_vsz(broker, ptop, sz, volgetter)
self.chk_df(abspath, sz) # side effects; keep last-ish
ap2, vp2 = self.rot(abspath)
if abspath == ap2:
@@ -205,7 +209,15 @@ class Lim(object):
if self.dft < time.time():
self.dft = int(time.time()) + 300
self.dfv = get_df(abspath)[0] or 0
df, du, err = get_df(abspath, True)
if err:
t = "failed to read disk space usage for [%s]: %s"
self.log(t % (abspath, err), 3)
self.dfv = 0xAAAAAAAAA # 42.6 GiB
else:
self.dfv = df or 0
for j in list(self.reg.values()) if self.reg else []:
self.dfv -= int(j["size"] / (len(j["hash"]) or 999) * len(j["need"]))

View File

@@ -46,6 +46,7 @@ def vf_bmap() -> dict[str, str]:
"og_no_head",
"og_s_title",
"rand",
"rss",
"xdev",
"xlink",
"xvol",

View File

@@ -76,6 +76,7 @@ class FtpAuth(DummyAuthorizer):
else:
raise AuthenticationFailed("banned")
args = self.hub.args
asrv = self.hub.asrv
uname = "*"
if username != "anonymous":
@@ -86,6 +87,9 @@ class FtpAuth(DummyAuthorizer):
uname = zs
break
if args.ipu and uname == "*":
uname = args.ipu_iu[args.ipu_nm.map(ip)]
if not uname or not (asrv.vfs.aread.get(uname) or asrv.vfs.awrite.get(uname)):
g = self.hub.gpwd
if g.lim:

View File

@@ -131,6 +131,8 @@ LOGUES = [[0, ".prologue.html"], [1, ".epilogue.html"]]
READMES = [[0, ["preadme.md", "PREADME.md"]], [1, ["readme.md", "README.md"]]]
RSS_SORT = {"m": "mt", "u": "at", "n": "fn", "s": "sz"}
class HttpCli(object):
"""
@@ -589,6 +591,9 @@ class HttpCli(object):
or "*"
)
if self.args.ipu and self.uname == "*":
self.uname = self.conn.ipu_iu[self.conn.ipu_nm.map(self.ip)]
self.rvol = self.asrv.vfs.aread[self.uname]
self.wvol = self.asrv.vfs.awrite[self.uname]
self.avol = self.asrv.vfs.aadmin[self.uname]
@@ -1198,8 +1203,146 @@ class HttpCli(object):
if "h" in self.uparam:
return self.tx_mounts()
if "rss" in self.uparam:
return self.tx_rss()
return self.tx_browser()
def tx_rss(self) -> bool:
if self.do_log:
self.log("RSS %s @%s" % (self.req, self.uname))
if not self.can_read:
return self.tx_404()
vn = self.vn
if not vn.flags.get("rss"):
raise Pebkac(405, "RSS is disabled in server config")
rem = self.rem
idx = self.conn.get_u2idx()
if not idx or not hasattr(idx, "p_end"):
if not HAVE_SQLITE3:
raise Pebkac(500, "sqlite3 not found on server; rss is disabled")
raise Pebkac(500, "server busy, cannot generate rss; please retry in a bit")
uv = [rem]
if "recursive" in self.uparam:
uq = "up.rd like ?||'%'"
else:
uq = "up.rd == ?"
zs = str(self.uparam.get("fext", self.args.rss_fext))
if zs in ("True", "False"):
zs = ""
if zs:
zsl = []
for ext in zs.split(","):
zsl.append("+up.fn like '%.'||?")
uv.append(ext)
uq += " and ( %s )" % (" or ".join(zsl),)
zs1 = self.uparam.get("sort", self.args.rss_sort)
zs2 = zs1.lower()
zs = RSS_SORT.get(zs2)
if not zs:
raise Pebkac(400, "invalid sort key; must be m/u/n/s")
uq += " order by up." + zs
if zs1 == zs2:
uq += " desc"
nmax = int(self.uparam.get("nf") or self.args.rss_nf)
hits = idx.run_query(self.uname, [self.vn], uq, uv, False, False, nmax)[0]
pw = self.ouparam.get("pw")
if pw:
q_pw = "?pw=%s" % (pw,)
a_pw = "&pw=%s" % (pw,)
for i in hits:
i["rp"] += a_pw if "?" in i["rp"] else q_pw
else:
q_pw = a_pw = ""
title = self.uparam.get("title") or self.vpath.split("/")[-1]
etitle = html_escape(title, True, True)
baseurl = "%s://%s%s" % (
"https" if self.is_https else "http",
self.host,
self.args.SRS,
)
feed = "%s%s" % (baseurl, self.req[1:])
efeed = html_escape(feed, True, True)
edirlink = efeed.split("?")[0] + q_pw
ret = [
"""\
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:content="http://purl.org/rss/1.0/modules/content/">
\t<channel>
\t\t<atom:link href="%s" rel="self" type="application/rss+xml" />
\t\t<title>%s</title>
\t\t<description></description>
\t\t<link>%s</link>
\t\t<generator>copyparty-1</generator>
"""
% (efeed, etitle, edirlink)
]
q = "select fn from cv where rd=? and dn=?"
crd, cdn = rem.rsplit("/", 1) if "/" in rem else ("", rem)
try:
cfn = idx.cur[self.vn.realpath].execute(q, (crd, cdn)).fetchone()[0]
bos.stat(os.path.join(vn.canonical(rem), cfn))
cv_url = "%s%s?th=jf%s" % (baseurl, vjoin(self.vpath, cfn), a_pw)
cv_url = html_escape(cv_url, True, True)
zs = """\
\t\t<image>
\t\t\t<url>%s</url>
\t\t\t<title>%s</title>
\t\t\t<link>%s</link>
\t\t</image>
"""
ret.append(zs % (cv_url, etitle, edirlink))
except:
pass
for i in hits:
iurl = html_escape("%s%s" % (baseurl, i["rp"]), True, True)
title = unquotep(i["rp"].split("?")[0].split("/")[-1])
title = html_escape(title, True, True)
tag_t = str(i["tags"].get("title") or "")
tag_a = str(i["tags"].get("artist") or "")
desc = "%s - %s" % (tag_a, tag_t) if tag_t and tag_a else (tag_t or tag_a)
desc = html_escape(desc, True, True) if desc else title
mime = html_escape(guess_mime(title))
lmod = formatdate(i["ts"])
zsa = (iurl, iurl, title, desc, lmod, iurl, mime, i["sz"])
zs = (
"""\
\t\t<item>
\t\t\t<guid>%s</guid>
\t\t\t<link>%s</link>
\t\t\t<title>%s</title>
\t\t\t<description>%s</description>
\t\t\t<pubDate>%s</pubDate>
\t\t\t<enclosure url="%s" type="%s" length="%d"/>
"""
% zsa
)
dur = i["tags"].get(".dur")
if dur:
zs += "\t\t\t<itunes:duration>%d</itunes:duration>\n" % (dur,)
ret.append(zs + "\t\t</item>\n")
ret.append("\t</channel>\n</rss>\n")
bret = "".join(ret).encode("utf-8", "replace")
self.reply(bret, 200, "text/xml; charset=utf-8")
self.log("rss: %d hits, %d bytes" % (len(hits), len(bret)))
return True
def handle_propfind(self) -> bool:
if self.do_log:
self.log("PFIND %s @%s" % (self.req, self.uname))
@@ -1881,7 +2024,7 @@ class HttpCli(object):
f, fn = ren_open(fn, *open_a, **params)
try:
path = os.path.join(fdir, fn)
post_sz, sha_hex, sha_b64 = hashcopy(reader, f, self.args.s_wr_slp)
post_sz, sha_hex, sha_b64 = hashcopy(reader, f, None, 0, self.args.s_wr_slp)
finally:
f.close()
@@ -2033,13 +2176,32 @@ class HttpCli(object):
return True
def bakflip(
self, f: typing.BinaryIO, ofs: int, sz: int, sha: str, flags: dict[str, Any]
self,
f: typing.BinaryIO,
ap: str,
ofs: int,
sz: int,
good_sha: str,
bad_sha: str,
flags: dict[str, Any],
) -> None:
now = time.time()
t = "bad-chunk: %.3f %s %s %d %s %s %s"
t = t % (now, bad_sha, good_sha, ofs, self.ip, self.uname, ap)
self.log(t, 5)
if self.args.bf_log:
try:
with open(self.args.bf_log, "ab+") as f2:
f2.write((t + "\n").encode("utf-8", "replace"))
except Exception as ex:
self.log("append %s failed: %r" % (self.args.bf_log, ex))
if not self.args.bak_flips or self.args.nw:
return
sdir = self.args.bf_dir
fp = os.path.join(sdir, sha)
fp = os.path.join(sdir, bad_sha)
if bos.path.exists(fp):
return self.log("no bakflip; have it", 6)
@@ -2326,7 +2488,7 @@ class HttpCli(object):
broker = self.conn.hsrv.broker
x = broker.ask("up2k.handle_chunks", ptop, wark, chashes)
response = x.get()
chashes, chunksize, cstarts, path, lastmod, sprs = response
chashes, chunksize, cstarts, path, lastmod, fsize, sprs = response
maxsize = chunksize * len(chashes)
cstart0 = cstarts[0]
locked = chashes # remaining chunks to be received in this request
@@ -2334,6 +2496,50 @@ class HttpCli(object):
num_left = -1 # num chunks left according to most recent up2k release
treport = time.time() # ratelimit up2k reporting to reduce overhead
if "x-up2k-subc" in self.headers:
sc_ofs = int(self.headers["x-up2k-subc"])
chash = chashes[0]
u2sc = self.conn.hsrv.u2sc
try:
sc_pofs, hasher = u2sc[chash]
if not sc_ofs:
t = "client restarted the chunk; forgetting subchunk offset %d"
self.log(t % (sc_pofs,))
raise Exception()
except:
sc_pofs = 0
hasher = hashlib.sha512()
et = "subchunk protocol error; resetting chunk "
if sc_pofs != sc_ofs:
u2sc.pop(chash, None)
t = "%s[%s]: the expected resume-point was %d, not %d"
raise Pebkac(400, t % (et, chash, sc_pofs, sc_ofs))
if len(cstarts) > 1:
u2sc.pop(chash, None)
t = "%s[%s]: only a single subchunk can be uploaded in one request; you are sending %d chunks"
raise Pebkac(400, t % (et, chash, len(cstarts)))
csize = min(chunksize, fsize - cstart0[0])
cstart0[0] += sc_ofs # also sets cstarts[0][0]
sc_next_ofs = sc_ofs + postsize
if sc_next_ofs > csize:
u2sc.pop(chash, None)
t = "%s[%s]: subchunk offset (%d) plus postsize (%d) exceeds chunksize (%d)"
raise Pebkac(400, t % (et, chash, sc_ofs, postsize, csize))
else:
final_subchunk = sc_next_ofs == csize
t = "subchunk %s %d:%d/%d %s"
zs = "END" if final_subchunk else ""
self.log(t % (chash[:15], sc_ofs, sc_next_ofs, csize, zs), 6)
if final_subchunk:
u2sc.pop(chash, None)
else:
u2sc[chash] = (sc_next_ofs, hasher)
else:
hasher = None
final_subchunk = True
try:
if self.args.nw:
path = os.devnull
@@ -2364,11 +2570,15 @@ class HttpCli(object):
reader = read_socket(
self.sr, self.args.s_rd_sz, min(remains, chunksize)
)
post_sz, _, sha_b64 = hashcopy(reader, f, self.args.s_wr_slp)
post_sz, _, sha_b64 = hashcopy(
reader, f, hasher, 0, self.args.s_wr_slp
)
if sha_b64 != chash:
if sha_b64 != chash and final_subchunk:
try:
self.bakflip(f, cstart[0], post_sz, sha_b64, vfs.flags)
self.bakflip(
f, path, cstart[0], post_sz, chash, sha_b64, vfs.flags
)
except:
self.log("bakflip failed: " + min_ex())
@@ -2396,7 +2606,8 @@ class HttpCli(object):
# be quick to keep the tcp winsize scale;
# if we can't confirm rn then that's fine
written.append(chash)
if final_subchunk:
written.append(chash)
now = time.time()
if now - treport < 1:
continue
@@ -2421,6 +2632,7 @@ class HttpCli(object):
except:
# maybe busted handle (eg. disk went full)
f.close()
chashes = [] # exception flag
raise
finally:
if locked:
@@ -2429,9 +2641,11 @@ class HttpCli(object):
num_left, t = x.get()
if num_left < 0:
self.loud_reply(t, status=500)
return False
t = "got %d more chunks, %d left"
self.log(t % (len(written), num_left), 6)
if chashes: # kills exception bubbling otherwise
return False
else:
t = "got %d more chunks, %d left"
self.log(t % (len(written), num_left), 6)
if num_left < 0:
raise Pebkac(500, "unconfirmed; see serverlog")
@@ -2789,7 +3003,7 @@ class HttpCli(object):
tabspath = os.path.join(fdir, tnam)
self.log("writing to {}".format(tabspath))
sz, sha_hex, sha_b64 = hashcopy(
p_data, f, self.args.s_wr_slp, max_sz
p_data, f, None, max_sz, self.args.s_wr_slp
)
if sz == 0:
raise Pebkac(400, "empty files in post")
@@ -3121,7 +3335,7 @@ class HttpCli(object):
wunlink(self.log, fp, vfs.flags)
with open(fsenc(fp), "wb", self.args.iobuf) as f:
sz, sha512, _ = hashcopy(p_data, f, self.args.s_wr_slp)
sz, sha512, _ = hashcopy(p_data, f, None, 0, self.args.s_wr_slp)
if lim:
lim.nup(self.ip)
@@ -5031,7 +5245,7 @@ class HttpCli(object):
self.log("#wow #whoa")
if not self.args.nid:
free, total = get_df(abspath)
free, total, _ = get_df(abspath, False)
if total is not None:
h1 = humansize(free or 0)
h2 = humansize(total)

View File

@@ -59,6 +59,8 @@ class HttpConn(object):
self.asrv: AuthSrv = hsrv.asrv # mypy404
self.u2fh: Util.FHC = hsrv.u2fh # mypy404
self.pipes: Util.CachedDict = hsrv.pipes # mypy404
self.ipu_iu: Optional[dict[str, str]] = hsrv.ipu_iu
self.ipu_nm: Optional[NetMap] = hsrv.ipu_nm
self.ipa_nm: Optional[NetMap] = hsrv.ipa_nm
self.xff_nm: Optional[NetMap] = hsrv.xff_nm
self.xff_lan: NetMap = hsrv.xff_lan # type: ignore

View File

@@ -1,6 +1,7 @@
# coding: utf-8
from __future__ import print_function, unicode_literals
import hashlib
import math
import os
import re
@@ -69,6 +70,7 @@ from .util import (
build_netmap,
has_resource,
ipnorm,
load_ipu,
load_resource,
min_ex,
shut_socket,
@@ -143,6 +145,7 @@ class HttpSrv(object):
self.t_periodic: Optional[threading.Thread] = None
self.u2fh = FHC()
self.u2sc: dict[str, tuple[int, "hashlib._Hash"]] = {}
self.pipes = CachedDict(0.2)
self.metrics = Metrics(self)
self.nreq = 0
@@ -175,6 +178,11 @@ class HttpSrv(object):
self.j2 = {x: env.get_template(x + ".html") for x in jn}
self.prism = has_resource(self.E, "web/deps/prism.js.gz")
if self.args.ipu:
self.ipu_iu, self.ipu_nm = load_ipu(self.log, self.args.ipu)
else:
self.ipu_iu = self.ipu_nm = None
self.ipa_nm = build_netmap(self.args.ipa)
self.xff_nm = build_netmap(self.args.xff_src)
self.xff_lan = build_netmap("lan")

View File

@@ -128,7 +128,7 @@ class Metrics(object):
addbh("cpp_disk_size_bytes", "total HDD size of volume")
addbh("cpp_disk_free_bytes", "free HDD space in volume")
for vpath, vol in allvols:
free, total = get_df(vol.realpath)
free, total, _ = get_df(vol.realpath, False)
if free is None or total is None:
continue

View File

@@ -60,6 +60,7 @@ from .util import (
alltrace,
ansi_re,
build_netmap,
load_ipu,
min_ex,
mp,
odfusion,
@@ -221,6 +222,11 @@ class SvcHub(object):
noch.update([x for x in zsl if x])
args.chpw_no = noch
if args.ipu:
iu, nm = load_ipu(self.log, args.ipu)
setattr(args, "ipu_iu", iu)
setattr(args, "ipu_nm", nm)
if not self.args.no_ses:
self.setup_session_db()

View File

@@ -95,7 +95,7 @@ class U2idx(object):
uv: list[Union[str, int]] = [wark[:16], wark]
try:
return self.run_query(uname, vols, uq, uv, False, 99999)[0]
return self.run_query(uname, vols, uq, uv, False, True, 99999)[0]
except:
raise Pebkac(500, min_ex())
@@ -301,7 +301,7 @@ class U2idx(object):
q += " lower({}) {} ? ) ".format(field, oper)
try:
return self.run_query(uname, vols, q, va, have_mt, lim)
return self.run_query(uname, vols, q, va, have_mt, True, lim)
except Exception as ex:
raise Pebkac(500, repr(ex))
@@ -312,6 +312,7 @@ class U2idx(object):
uq: str,
uv: list[Union[str, int]],
have_mt: bool,
sort: bool,
lim: int,
) -> tuple[list[dict[str, Any]], list[str], bool]:
if self.args.srch_dbg:
@@ -458,7 +459,8 @@ class U2idx(object):
done_flag.append(True)
self.active_id = ""
ret.sort(key=itemgetter("rp"))
if sort:
ret.sort(key=itemgetter("rp"))
return ret, list(taglist.keys()), lim < 0 and not clamped

View File

@@ -20,7 +20,7 @@ from copy import deepcopy
from queue import Queue
from .__init__ import ANYWIN, PY2, TYPE_CHECKING, WINDOWS, E
from .authsrv import LEELOO_DALLAS, SSEELOG, VFS, AuthSrv
from .authsrv import LEELOO_DALLAS, SEESLOG, VFS, AuthSrv
from .bos import bos
from .cfg import vf_bmap, vf_cmap, vf_vmap
from .fsutil import Fstab
@@ -357,17 +357,18 @@ class Up2k(object):
return '[{"timeout":1}]'
ret: list[tuple[int, str, int, int, int]] = []
userset = set([(uname or "\n"), "*"])
try:
for ptop, tab2 in self.registry.items():
cfg = self.flags.get(ptop, {}).get("u2abort", 1)
if not cfg:
continue
addr = (ip or "\n") if cfg in (1, 2) else ""
user = (uname or "\n") if cfg in (1, 3) else ""
user = userset if cfg in (1, 3) else None
for job in tab2.values():
if (
"done" in job
or (user and user != job["user"])
or (user and job["user"] not in user)
or (addr and addr != job["addr"])
):
continue
@@ -1015,6 +1016,7 @@ class Up2k(object):
vpath = k
_, flags = self._expr_idx_filter(flags)
n4g = bool(flags.get("noforget"))
ft = "\033[0;32m{}{:.0}"
ff = "\033[0;35m{}{:.0}"
@@ -1072,21 +1074,35 @@ class Up2k(object):
for job in reg2.values():
job["dwrk"] = job["wark"]
rm = []
for k, job in reg2.items():
job["ptop"] = ptop
if "done" in job:
job["need"] = job["hash"] = emptylist
else:
if "need" not in job:
job["need"] = []
if "hash" not in job:
job["hash"] = []
fp = djoin(ptop, job["prel"], job["name"])
if bos.path.exists(fp):
reg[k] = job
if "done" in job:
job["need"] = job["hash"] = emptylist
continue
job["poke"] = time.time()
job["busy"] = {}
else:
self.log("ign deleted file in snap: [{}]".format(fp))
if not n4g:
rm.append(k)
continue
for x in rm:
del reg2[x]
if drp is None:
drp = [k for k, v in reg.items() if not v.get("need", [])]
drp = [k for k, v in reg.items() if not v["need"]]
else:
drp = [x for x in drp if x in reg]
@@ -2875,9 +2891,6 @@ class Up2k(object):
"user": cj["user"],
"addr": ip,
"at": at,
"hash": [],
"need": [],
"busy": {},
}
for k in ["life"]:
if k in cj:
@@ -2911,17 +2924,20 @@ class Up2k(object):
hashes2, st = self._hashlist_from_file(orig_ap)
wark2 = up2k_wark_from_hashlist(self.salt, st.st_size, hashes2)
if dwark != wark2:
t = "will not dedup (fs index desync): fs=%s, db=%s, file: %s"
self.log(t % (wark2, dwark, orig_ap))
t = "will not dedup (fs index desync): fs=%s, db=%s, file: %s\n%s"
self.log(t % (wark2, dwark, orig_ap, rj))
lost.append(dupe[3:])
continue
data_ok = True
job = rj
break
if job and wark in reg:
# self.log("pop " + wark + " " + job["name"] + " handle_json db", 4)
del reg[wark]
if job:
if wark in reg:
del reg[wark]
job["hash"] = job["need"] = []
job["done"] = True
job["busy"] = {}
if lost:
c2 = None
@@ -2950,7 +2966,7 @@ class Up2k(object):
path = djoin(rj["ptop"], rj["prel"], fn)
try:
st = bos.stat(path)
if st.st_size > 0 or not rj["need"]:
if st.st_size > 0 or "done" in rj:
# upload completed or both present
break
except:
@@ -2964,13 +2980,13 @@ class Up2k(object):
inc_ap = djoin(cj["ptop"], cj["prel"], cj["name"])
orig_ap = djoin(rj["ptop"], rj["prel"], rj["name"])
if self.args.nw or n4g or not st:
if self.args.nw or n4g or not st or "done" not in rj:
pass
elif st.st_size != rj["size"]:
t = "will not dedup (fs index desync): {}, size fs={} db={}, mtime fs={} db={}, file: {}"
t = "will not dedup (fs index desync): {}, size fs={} db={}, mtime fs={} db={}, file: {}\n{}"
t = t.format(
wark, st.st_size, rj["size"], st.st_mtime, rj["lmod"], path
wark, st.st_size, rj["size"], st.st_mtime, rj["lmod"], path, rj
)
self.log(t)
del reg[wark]
@@ -2980,8 +2996,8 @@ class Up2k(object):
hashes2, _ = self._hashlist_from_file(orig_ap)
wark2 = up2k_wark_from_hashlist(self.salt, st.st_size, hashes2)
if wark != wark2:
t = "will not dedup (fs index desync): fs=%s, idx=%s, file: %s"
self.log(t % (wark2, wark, orig_ap))
t = "will not dedup (fs index desync): fs=%s, idx=%s, file: %s\n%s"
self.log(t % (wark2, wark, orig_ap, rj))
del reg[wark]
if job or wark in reg:
@@ -2996,7 +3012,7 @@ class Up2k(object):
dst = djoin(cj["ptop"], cj["prel"], cj["name"])
vsrc = djoin(job["vtop"], job["prel"], job["name"])
vsrc = vsrc.replace("\\", "/") # just for prints anyways
if job["need"]:
if "done" not in job:
self.log("unfinished:\n {0}\n {1}".format(src, dst))
err = "partial upload exists at a different location; please resume uploading here instead:\n"
err += "/" + quotep(vsrc) + " "
@@ -3357,14 +3373,14 @@ class Up2k(object):
def handle_chunks(
self, ptop: str, wark: str, chashes: list[str]
) -> tuple[list[str], int, list[list[int]], str, float, bool]:
) -> tuple[list[str], int, list[list[int]], str, float, int, bool]:
with self.mutex, self.reg_mutex:
self.db_act = self.vol_act[ptop] = time.time()
job = self.registry[ptop].get(wark)
if not job:
known = " ".join([x for x in self.registry[ptop].keys()])
self.log("unknown wark [{}], known: {}".format(wark, known))
raise Pebkac(400, "unknown wark" + SSEELOG)
raise Pebkac(400, "unknown wark" + SEESLOG)
if "t0c" not in job:
job["t0c"] = time.time()
@@ -3380,7 +3396,7 @@ class Up2k(object):
try:
nchunk = uniq.index(chashes[0])
except:
raise Pebkac(400, "unknown chunk0 [%s]" % (chashes[0]))
raise Pebkac(400, "unknown chunk0 [%s]" % (chashes[0],))
expanded = [chashes[0]]
for prefix in chashes[1:]:
nchunk += 1
@@ -3415,7 +3431,7 @@ class Up2k(object):
for chash in chashes:
nchunk = [n for n, v in enumerate(job["hash"]) if v == chash]
if not nchunk:
raise Pebkac(400, "unknown chunk %s" % (chash))
raise Pebkac(400, "unknown chunk %s" % (chash,))
ofs = [chunksize * x for x in nchunk]
coffsets.append(ofs)
@@ -3440,7 +3456,7 @@ class Up2k(object):
job["poke"] = time.time()
return chashes, chunksize, coffsets, path, job["lmod"], job["sprs"]
return chashes, chunksize, coffsets, path, job["lmod"], job["size"], job["sprs"]
def fast_confirm_chunks(
self, ptop: str, wark: str, chashes: list[str]
@@ -3482,6 +3498,7 @@ class Up2k(object):
for chash in written:
job["need"].remove(chash)
except Exception as ex:
# dead tcp connections can get here by timeout (OK)
return -2, "confirm_chunk, chash(%s) %r" % (chash, ex) # type: ignore
ret = len(job["need"])
@@ -3509,11 +3526,13 @@ class Up2k(object):
src = djoin(pdir, job["tnam"])
dst = djoin(pdir, job["name"])
except Exception as ex:
raise Pebkac(500, "finish_upload, wark, " + repr(ex))
self.log(min_ex(), 1)
raise Pebkac(500, "finish_upload, wark, %r%s" % (ex, SEESLOG))
if job["need"]:
t = "finish_upload {} with remaining chunks {}"
raise Pebkac(500, t.format(wark, job["need"]))
self.log(min_ex(), 1)
t = "finish_upload %s with remaining chunks %s%s"
raise Pebkac(500, t % (wark, job["need"], SEESLOG))
upt = job.get("at") or time.time()
vflags = self.flags[ptop]
@@ -3833,10 +3852,12 @@ class Up2k(object):
with self.mutex, self.reg_mutex:
abrt_cfg = self.flags.get(ptop, {}).get("u2abort", 1)
addr = (ip or "\n") if abrt_cfg in (1, 2) else ""
user = (uname or "\n") if abrt_cfg in (1, 3) else ""
user = ((uname or "\n"), "*") if abrt_cfg in (1, 3) else None
reg = self.registry.get(ptop, {}) if abrt_cfg else {}
for wark, job in reg.items():
if (user and user != job["user"]) or (addr and addr != job["addr"]):
if (addr and addr != job["addr"]) or (
user and job["user"] not in user
):
continue
jrem = djoin(job["prel"], job["name"])
if ANYWIN:
@@ -4036,7 +4057,9 @@ class Up2k(object):
self.db_act = self.vol_act[dbv.realpath] = time.time()
svpf = "/".join(x for x in [dbv.vpath, vrem, fn[0]] if x)
if not svpf.startswith(svp + "/"): # assert
raise Pebkac(500, "mv: bug at {}, top {}".format(svpf, svp))
self.log(min_ex(), 1)
t = "mv: bug at %s, top %s%s"
raise Pebkac(500, t % (svpf, svp, SEESLOG))
dvpf = dvp + svpf[len(svp) :]
self._mv_file(uname, ip, svpf, dvpf, curs)
@@ -4051,7 +4074,9 @@ class Up2k(object):
for zsl in (rm_ok, rm_ng):
for ap in reversed(zsl):
if not ap.startswith(sabs):
raise Pebkac(500, "mv_d: bug at {}, top {}".format(ap, sabs))
self.log(min_ex(), 1)
t = "mv_d: bug at %s, top %s%s"
raise Pebkac(500, t % (ap, sabs, SEESLOG))
rem = ap[len(sabs) :].replace(os.sep, "/").lstrip("/")
vp = vjoin(dvp, rem)

View File

@@ -213,6 +213,9 @@ except:
ansi_re = re.compile("\033\\[[^mK]*[mK]")
BOS_SEP = ("%s" % (os.sep,)).encode("ascii")
surrogateescape.register_surrogateescape()
if WINDOWS and PY2:
FS_ENCODING = "utf-8"
@@ -665,11 +668,15 @@ class HLog(logging.Handler):
class NetMap(object):
def __init__(self, ips: list[str], cidrs: list[str], keep_lo=False) -> None:
def __init__(
self, ips: list[str], cidrs: list[str], keep_lo=False, strict_cidr=False
) -> None:
"""
ips: list of plain ipv4/ipv6 IPs, not cidr
cidrs: list of cidr-notation IPs (ip/prefix)
"""
self.mutex = threading.Lock()
if "::" in ips:
ips = [x for x in ips if x != "::"] + list(
[x.split("/")[0] for x in cidrs if ":" in x]
@@ -696,7 +703,7 @@ class NetMap(object):
bip = socket.inet_pton(fam, ip.split("/")[0])
self.bip.append(bip)
self.b2sip[bip] = ip.split("/")[0]
self.b2net[bip] = (IPv6Network if v6 else IPv4Network)(ip, False)
self.b2net[bip] = (IPv6Network if v6 else IPv4Network)(ip, strict_cidr)
self.bip.sort(reverse=True)
@@ -707,8 +714,10 @@ class NetMap(object):
try:
return self.cache[ip]
except:
pass
with self.mutex:
return self._map(ip)
def _map(self, ip: str) -> str:
v6 = ":" in ip
ci = IPv6Address(ip) if v6 else IPv4Address(ip)
bip = next((x for x in self.bip if ci in self.b2net[x]), None)
@@ -2486,23 +2495,28 @@ def wunlink(log: "NamedLogger", abspath: str, flags: dict[str, Any]) -> bool:
return _fs_mvrm(log, abspath, "", False, flags)
def get_df(abspath: str) -> tuple[Optional[int], Optional[int]]:
def get_df(abspath: str, prune: bool) -> tuple[Optional[int], Optional[int], str]:
try:
# some fuses misbehave
assert ctypes # type: ignore # !rm
ap = fsenc(abspath)
while prune and not os.path.isdir(ap) and BOS_SEP in ap:
# strip leafs until it hits an existing folder
ap = ap.rsplit(BOS_SEP, 1)[0]
if ANYWIN:
assert ctypes # type: ignore # !rm
abspath = fsdec(ap)
bfree = ctypes.c_ulonglong(0)
ctypes.windll.kernel32.GetDiskFreeSpaceExW( # type: ignore
ctypes.c_wchar_p(abspath), None, None, ctypes.pointer(bfree)
)
return (bfree.value, None)
return (bfree.value, None, "")
else:
sv = os.statvfs(fsenc(abspath))
sv = os.statvfs(ap)
free = sv.f_frsize * sv.f_bfree
total = sv.f_frsize * sv.f_blocks
return (free, total)
except:
return (None, None)
return (free, total, "")
except Exception as ex:
return (None, None, repr(ex))
if not ANYWIN and not MACOS:
@@ -2678,6 +2692,31 @@ def build_netmap(csv: str):
return NetMap(ips, cidrs, True)
def load_ipu(log: "RootLogger", ipus: list[str]) -> tuple[dict[str, str], NetMap]:
ip_u = {"": "*"}
cidr_u = {}
for ipu in ipus:
try:
cidr, uname = ipu.split("=")
cip, csz = cidr.split("/")
except:
t = "\n invalid value %r for argument --ipu; must be CIDR=UNAME (192.168.0.0/16=amelia)"
raise Exception(t % (ipu,))
uname2 = cidr_u.get(cidr)
if uname2 is not None:
t = "\n invalid value %r for argument --ipu; cidr %s already mapped to %r"
raise Exception(t % (ipu, cidr, uname2))
cidr_u[cidr] = uname
ip_u[cip] = uname
try:
nm = NetMap(["::"], list(cidr_u.keys()), True, True)
except Exception as ex:
t = "failed to translate --ipu into netmap, probably due to invalid config: %r"
log("root", t % (ex,), 1)
raise
return ip_u, nm
def yieldfile(fn: str, bufsz: int) -> Generator[bytes, None, None]:
readsz = min(bufsz, 128 * 1024)
with open(fsenc(fn), "rb", bufsz) as f:
@@ -2692,10 +2731,12 @@ def yieldfile(fn: str, bufsz: int) -> Generator[bytes, None, None]:
def hashcopy(
fin: Generator[bytes, None, None],
fout: Union[typing.BinaryIO, typing.IO[Any]],
slp: float = 0,
max_sz: int = 0,
hashobj: Optional["hashlib._Hash"],
max_sz: int,
slp: float,
) -> tuple[int, str, str]:
hashobj = hashlib.sha512()
if not hashobj:
hashobj = hashlib.sha512()
tlen = 0
for buf in fin:
tlen += len(buf)

View File

@@ -32,7 +32,7 @@ window.baguetteBox = (function () {
scrollCSS = ['', ''],
scrollTimer = 0,
re_i = /^[^?]+\.(a?png|avif|bmp|gif|heif|jpe?g|jfif|svg|webp)(\?|$)/i,
re_v = /^[^?]+\.(webm|mkv|mp4)(\?|$)/i,
re_v = /^[^?]+\.(webm|mkv|mp4|m4v)(\?|$)/i,
anims = ['slideIn', 'fadeIn', 'none'],
data = {}, // all galleries
imagesElements = [],

View File

@@ -1,7 +1,7 @@
"use strict";
var XHR = XMLHttpRequest,
img_re = /\.(a?png|avif|bmp|gif|heif|jpe?g|jfif|svg|webp|webm|mkv|mp4)(\?|$)/i;
img_re = /\.(a?png|avif|bmp|gif|heif|jpe?g|jfif|svg|webp|webm|mkv|mp4|m4v)(\?|$)/i;
var Ls = {
"eng": {

View File

@@ -73,8 +73,8 @@ html {
position: absolute;
height: 1px;
top: 1px;
right: 1%;
width: 99%;
right: 1px;
left: 1px;
animation: toastt var(--tmtime) steps(var(--tmstep)) forwards;
transform-origin: right;
}

View File

@@ -17,10 +17,14 @@ function goto_up2k() {
var up2k = null,
up2k_hooks = [],
hws = [],
hws_ok = 0,
hws_ng = false,
sha_js = WebAssembly ? 'hw' : 'ac', // ff53,c57,sa11
m = 'will use ' + sha_js + ' instead of native sha512 due to';
try {
if (sread('nosubtle') || window.nosubtle)
throw 'chickenbit';
var cf = crypto.subtle || crypto.webkitSubtle;
cf.digest('SHA-512', new Uint8Array(1)).then(
function (x) { console.log('sha-ok'); up2k = up2k_init(cf); },
@@ -242,7 +246,7 @@ function U2pvis(act, btns, uc, st) {
p = bd * 100.0 / sz,
nb = bd - bd0,
spd = nb / (td / 1000),
eta = (sz - bd) / spd;
eta = spd ? (sz - bd) / spd : 3599;
return [p, s2ms(eta), spd / (1024 * 1024)];
};
@@ -853,8 +857,13 @@ function up2k_init(subtle) {
setmsg(suggest_up2k, 'msg');
var u2szs = u2sz.split(','),
u2sz_min = parseInt(u2szs[0]),
u2sz_tgt = parseInt(u2szs[1]),
u2sz_max = parseInt(u2szs[2]);
var parallel_uploads = ebi('nthread').value = icfg_get('nthread', u2j),
stitch_tgt = ebi('u2szg').value = icfg_get('u2sz', u2sz.split(',')[1]),
stitch_tgt = ebi('u2szg').value = icfg_get('u2sz', u2sz_tgt),
uc = {},
fdom_ctr = 0,
biggest_file = 0;
@@ -1353,6 +1362,10 @@ function up2k_init(subtle) {
for (var a = 0; a < Math.min(navigator.hardwareConcurrency || 4, 16); a++)
hws.push(new Worker(SR + '/.cpr/w.hash.js?_=' + TS));
if (!subtle)
for (var a = 0; a < hws.length; a++)
hws[a].postMessage('nosubtle');
console.log(hws.length + " hashers");
}
@@ -1863,10 +1876,12 @@ function up2k_init(subtle) {
function chill(t) {
var now = Date.now();
if ((t.coolmul || 0) < 2 || now - t.cooldown < t.coolmul * 700)
if ((t.coolmul || 0) < 5 || now - t.cooldown < t.coolmul * 700)
t.coolmul = Math.min((t.coolmul || 0.5) * 2, 32);
t.cooldown = Math.max(t.cooldown || 1, Date.now() + t.coolmul * 1000);
var cd = now + 1000 * (t.coolmul + Math.random() * 4 + 2);
t.cooldown = Math.floor(Math.max(cd, t.cooldown || 1));
return t;
}
/////
@@ -1951,7 +1966,7 @@ function up2k_init(subtle) {
pvis.setab(t.n, nchunks);
pvis.move(t.n, 'bz');
if (hws.length && uc.hashw && (nchunks > 1 || document.visibilityState == 'hidden'))
if (hws.length && !hws_ng && uc.hashw && (nchunks > 1 || document.visibilityState == 'hidden'))
// resolving subtle.digest w/o worker takes 1sec on blur if the actx hack breaks
return wexec_hash(t, chunksize, nchunks);
@@ -2060,16 +2075,27 @@ function up2k_init(subtle) {
free = [],
busy = {},
nbusy = 0,
init = 0,
hashtab = {},
mem = (MOBILE ? 128 : 256) * 1024 * 1024;
if (!hws_ok)
init = setTimeout(function() {
hws_ng = true;
toast.warn(30, 'webworkers failed to start\n\nwill be a bit slower due to\nhashing on main-thread');
apop(st.busy.hash, t);
st.todo.hash.unshift(t);
exec_hash();
}, 5000);
for (var a = 0; a < hws.length; a++) {
var w = hws[a];
free.push(w);
w.onmessage = onmsg;
if (init)
w.postMessage('ping');
if (mem > 0)
free.push(w);
mem -= chunksize;
if (mem <= 0)
break;
}
function go_next() {
@@ -2099,6 +2125,12 @@ function up2k_init(subtle) {
d = d.data;
var k = d[0];
if (k == "pong")
if (++hws_ok == hws.length) {
clearTimeout(init);
go_next();
}
if (k == "panic")
return vis_exh(d[1], 'up2k.js', '', '', d[1]);
@@ -2161,7 +2193,8 @@ function up2k_init(subtle) {
tasker();
}
}
go_next();
if (!init)
go_next();
}
/////
@@ -2259,8 +2292,7 @@ function up2k_init(subtle) {
console.log('handshake onerror, retrying', t.name, t);
apop(st.busy.handshake, t);
st.todo.handshake.unshift(t);
t.cooldown = Date.now() + 5000 + Math.floor(Math.random() * 3000);
st.todo.handshake.unshift(chill(t));
t.keepalive = keepalive;
};
var orz = function (e) {
@@ -2273,8 +2305,7 @@ function up2k_init(subtle) {
}
catch (ex) {
apop(st.busy.handshake, t);
st.todo.handshake.unshift(t);
t.cooldown = Date.now() + 5000 + Math.floor(Math.random() * 3000);
st.todo.handshake.unshift(chill(t));
var txt = t.t_uploading ? L.u_ehsfin : t.srch ? L.u_ehssrch : L.u_ehsinit;
return toast.err(0, txt + '\n\n' + L.badreply + ':\n\n' + unpre(xhr.responseText));
}
@@ -2453,6 +2484,7 @@ function up2k_init(subtle) {
else {
pvis.seth(t.n, 1, "ERROR");
pvis.seth(t.n, 2, L.u_ehstmp, t);
apop(st.busy.handshake, t);
var err = "",
cls = "ERROR",
@@ -2466,7 +2498,6 @@ function up2k_init(subtle) {
var penalty = rsp.replace(/.*rate-limit /, "").split(' ')[0];
console.log("rate-limit: " + penalty);
t.cooldown = Date.now() + parseFloat(penalty) * 1000;
apop(st.busy.handshake, t);
st.todo.handshake.unshift(t);
return;
}
@@ -2489,8 +2520,6 @@ function up2k_init(subtle) {
cls = 'defer';
}
}
if (rsp.indexOf('server HDD is full') + 1)
return toast.err(0, L.u_ehsdf + "\n\n" + rsp.replace(/.*; /, ''));
if (err != "") {
if (!t.t_uploading)
@@ -2500,10 +2529,15 @@ function up2k_init(subtle) {
pvis.seth(t.n, 2, err);
pvis.move(t.n, 'ng');
apop(st.busy.handshake, t);
tasker();
return;
}
st.todo.handshake.unshift(chill(t));
if (rsp.indexOf('server HDD is full') + 1)
return toast.err(0, L.u_ehsdf + "\n\n" + rsp.replace(/.*; /, ''));
err = t.t_uploading ? L.u_ehsfin : t.srch ? L.u_ehssrch : L.u_ehsinit;
xhrchk(xhr, err + "\n\nfile: " + t.name + "\n\nerror ", "404, target folder not found", "warn", t);
}
@@ -2574,8 +2608,7 @@ function up2k_init(subtle) {
nparts = upt.nparts,
pcar = nparts[0],
pcdr = nparts[nparts.length - 1],
snpart = pcar == pcdr ? pcar : ('' + pcar + '~' + pcdr),
tries = 0;
maxsz = (u2sz_max > 1 ? u2sz_max : 2040) * 1024 * 1024;
if (t.done)
return console.log('done; skip chunk', t.name, t);
@@ -2595,6 +2628,30 @@ function up2k_init(subtle) {
if (cdr >= t.size)
cdr = t.size;
if (cdr - car <= maxsz)
return upload_sub(t, upt, pcar, pcdr, car, cdr, chunksize, car, []);
var car0 = car, subs = [];
while (car < cdr) {
subs.push([car, Math.min(cdr, car + maxsz)]);
car += maxsz;
}
upload_sub(t, upt, pcar, pcdr, 0, 0, chunksize, car0, subs);
}
function upload_sub(t, upt, pcar, pcdr, car, cdr, chunksize, car0, subs) {
var nparts = upt.nparts,
is_sub = subs.length;
if (is_sub) {
var x = subs.shift();
car = x[0];
cdr = x[1];
}
var snpart = is_sub ? ('' + pcar + '(' + (car-car0) +'+'+ (cdr-car)) :
pcar == pcdr ? pcar : ('' + pcar + '~' + pcdr);
var orz = function (xhr) {
st.bytes.inflight -= xhr.bsent;
var txt = unpre((xhr.response && xhr.response.err) || xhr.responseText);
@@ -2608,6 +2665,10 @@ function up2k_init(subtle) {
return;
}
if (xhr.status == 200) {
car = car0;
if (subs.length)
return upload_sub(t, upt, pcar, pcdr, 0, 0, chunksize, car0, subs);
var bdone = cdr - car;
for (var a = pcar; a <= pcdr; a++) {
pvis.prog(t, a, Math.min(bdone, chunksize));
@@ -2616,6 +2677,7 @@ function up2k_init(subtle) {
st.bytes.finished += cdr - car;
st.bytes.uploaded += cdr - car;
t.bytes_uploaded += cdr - car;
t.cooldown = t.coolmul = 0;
st.etac.u++;
st.etac.t++;
}
@@ -2674,7 +2736,7 @@ function up2k_init(subtle) {
toast.warn(9.98, L.u_cuerr.format(snpart, Math.ceil(t.size / chunksize), t.name), t);
t.nojoin = t.nojoin || t.postlist.length; // maybe rproxy postsize limit
console.log('chunkpit onerror,', ++tries, t.name, t);
console.log('chunkpit onerror,', t.name, t);
orz2(xhr);
};
@@ -2692,9 +2754,13 @@ function up2k_init(subtle) {
xhr.open('POST', t.purl, true);
xhr.setRequestHeader("X-Up2k-Hash", ctxt);
xhr.setRequestHeader("X-Up2k-Wark", t.wark);
if (is_sub)
xhr.setRequestHeader("X-Up2k-Subc", car - car0);
xhr.setRequestHeader("X-Up2k-Stat", "{0}/{1}/{2}/{3} {4}/{5} {6}".format(
pvis.ctr.ok, pvis.ctr.ng, pvis.ctr.bz, pvis.ctr.q, btot, btot - bfin,
st.eta.t.split(' ').pop()));
st.eta.t.indexOf('/s, ')+1 ? st.eta.t.split(' ').pop() : 'x'));
xhr.setRequestHeader('Content-Type', 'application/octet-stream');
if (xhr.overrideMimeType)
xhr.overrideMimeType('Content-Type', 'application/octet-stream');
@@ -2812,13 +2878,13 @@ function up2k_init(subtle) {
}
var read_u2sz = function () {
var el = ebi('u2szg'), n = parseInt(el.value), dv = u2sz.split(',');
var el = ebi('u2szg'), n = parseInt(el.value);
stitch_tgt = n = (
isNaN(n) ? dv[1] :
n < dv[0] ? dv[0] :
n > dv[2] ? dv[2] : n
isNaN(n) ? u2sz_tgt :
n < u2sz_min ? u2sz_min :
n > u2sz_max ? u2sz_max : n
);
if (n == dv[1]) sdrop('u2sz'); else swrite('u2sz', n);
if (n == u2sz_tgt) sdrop('u2sz'); else swrite('u2sz', n);
if (el.value != n) el.value = n;
};
ebi('u2szg').addEventListener('blur', read_u2sz);

View File

@@ -1527,21 +1527,26 @@ var toast = (function () {
if (sec)
te = setTimeout(r.hide, sec * 1000);
var tb = ebi('toastt');
if (same && delta < 1000 && tb) {
tb.style.animation = 'none';
tb.offsetHeight;
tb.style.animation = null;
if (same && delta < 1000) {
var tb = ebi('toastt');
if (tb) {
tb.style.animation = 'none';
tb.offsetHeight;
tb.style.animation = null;
}
return;
}
if (txt.indexOf('<body>') + 1)
txt = txt.slice(0, txt.indexOf('<')) + ' [...]';
setcvar('--tmtime', sec + 's');
setcvar('--tmstep', sec * 15);
obj.innerHTML = '<div id="toastt"></div><a href="#" id="toastc">x</a><div id="toastb">' + lf2br(txt) + '</div>';
var html = '';
if (sec) {
setcvar('--tmtime', sec + 's');
setcvar('--tmstep', sec * 15);
html += '<div id="toastt"></div>';
}
obj.innerHTML = html + '<a href="#" id="toastc">x</a><div id="toastb">' + lf2br(txt) + '</div>';
obj.className = cl;
sec += obj.offsetWidth;
obj.className += ' vis';

View File

@@ -20,6 +20,7 @@ catch (ex) {
function load_fb() {
subtle = null;
importScripts('deps/sha512.hw.js');
console.log('using fallback hasher');
}
@@ -29,6 +30,12 @@ var reader = null,
onmessage = (d) => {
if (d.data == 'nosubtle')
return load_fb();
if (d.data == 'ping')
return postMessage(['pong']);
if (busy)
return postMessage(["panic", 'worker got another task while busy']);

View File

@@ -1,3 +1,77 @@
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1016-2153 `v1.15.8` the sky is the limit
## 🧪 new features
* subchunks; avoid the Cloudflare filesize limit entirely fc8298c4 48147c07
* the previous max filesize was `383.9 GiB`, now only the sky is the limit
* if you're using another proxy with a more restrictive limit than Cloudflare's 100 MiB, for example 64 MiB, then set `--u2sz 1,64,64` (see the sketch after this list)
* m4v videos can be played in the gallery ff0a71f2
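to make the splitting concrete, here is a minimal python sketch of the idea (not the actual up2k client, which does this in `up2k.js` as shown in the diff further up; `car`/`cdr` are a chunk's start/end byte offsets):

```python
# hedged sketch of the subchunk split; not the real client code
def split_chunk(car: int, cdr: int, max_mib: int) -> list[tuple[int, int]]:
    """cut the byte range [car, cdr) into pieces of at most max_mib MiB"""
    maxsz = max_mib * 1024 * 1024
    subs = []
    while car < cdr:
        subs.append((car, min(cdr, car + maxsz)))
        car += maxsz
    return subs

# a 160 MiB chunk with `--u2sz 1,64,64` becomes three sequential POSTs:
# split_chunk(0, 160 * 1024 * 1024, 64) -> [(0, 64M), (64M, 128M), (128M, 160M)]
```

each piece is then uploaded one at a time, matching the `upload_sub` recursion in the javascript diff.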
## 🩹 bugfixes
* up2k: uploading duplicate files could initially fail (but would succeed after a few automatic retries) due to a toctou 114b71b7
* [u2c](https://github.com/9001/copyparty/blob/hovudstraum/bin/README.md#u2cpy) / commandline uploader:
* directory scanner got stuck if it found a FIFO cba1878b
* excessive number of FDs when uploading large files 65a2b6a2
* chunksize calculation; this only affected files that were exactly 128 GiB large a2e037d6
* support filenames with newlines and invalid utf-8 b2770a20
* invalid utf-8 is replaced by `?` when the filename reaches the server (sketched below)
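a hedged python sketch of just that substitution (not the actual u2c/server code, which may do more):

```python
# hedged sketch of the described behavior; not the actual copyparty code
def scrub_utf8(raw: bytes) -> str:
    # bytes that are not valid utf-8 end up as "?" on the server side
    return raw.decode("utf-8", "replace").replace("\ufffd", "?")

# scrub_utf8(b"n\xffme.txt") -> "n?me.txt"
```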
## 🔧 other changes
* don't show the toast countdown bar if duration is infinite 22dfc6ec
* chickenbit to disable the browser's built-in sha512 implementation and force the bundled wasm instead d715479e
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1013-2244 `v1.15.7` the 'a' in "ip address" stands for authentication
## 🧪 new features
* [cidr-based autologin](https://github.com/9001/copyparty#ip-auth) b7f9bf5a
* map a cidr ip-range to a username; anyone connecting from that ip-range will autologin as that user (see the sketch after this list)
* thx to @byteturtle for the idea!
* [u2c](https://github.com/9001/copyparty/blob/hovudstraum/bin/README.md#u2cpy) / commandline uploader:
* option `--chs` to list individual chunk hashes cf1b7562
* fix progress indicator when resuming an upload 53ffd245
* up2k: verbose logging of detected/corrected bitflips ee628363
* *foreshadowing intensifies* (story still developing)
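a minimal usage sketch of the new `load_ipu` helper from the python diff further up (the logger stub and the second rule are made up for illustration):

```python
# hedged sketch; load_ipu appears to live in copyparty's util module per the diff above
from copyparty.util import load_ipu

log = lambda src, msg, c=0: print(src, msg)  # stand-in for the real RootLogger

ip_u, netmap = load_ipu(log, ["192.168.0.0/16=amelia", "10.69.0.0/16=snack"])
# ip_u -> {"": "*", "192.168.0.0": "amelia", "10.69.0.0": "snack"}
# the netmap is then consulted per connection to find which cidr (if any)
# the client ip falls inside, and that user gets logged in automatically
```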
## 🩹 bugfixes
* up2k with database disabled / running without `-e2d` 705f598b
* respect `noforget` when loading snaps
* ...but actually forget deleted files otherwise
* snap-loader adds empty need/hash entries as necessary
## 🔧 other changes
* authed users can now unpost recent uploads of unauthed users from the same IP 22b58e31
* the old behavior would have become problematic now that cidr-based autologin is a thing
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1011-2256 `v1.15.6` preadme
## 🧪 new features
* #105 files named `preadme.md` appear at the top of directory listings 1d68acf8
* entirely disable dedup with `--no-clone` / volflag `noclone` 3d7facd7 6b7ebdb7
* even if a file exists for sure on the server HDD, let the client continue uploading instead of reusing the existing data
* using this option almost never makes sense, unless you're using something like S3 Glacier storage where reading is really expensive but writing is cheap (see the sketch after this list)
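a tiny sketch of the decision this toggles (made-up names; not copyparty's actual dedup path):

```python
# hedged sketch of what noclone changes; not the real dedup logic
def plan_upload(already_on_disk: bool, volflags: dict) -> str:
    if already_on_disk and not volflags.get("noclone"):
        return "clone"   # reuse the existing bytes; the client can skip the upload
    return "upload"      # with --no-clone / noclone, the client sends everything
```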
## 🩹 bugfixes
* up2k jank after detecting a bitflip or network glitch 4a4ec88d
* instead of resuming the interrupted upload like it should, the upload client could get stuck or start over
* #104 support viewing dotfile documents when dotfiles are hidden 9ccd8bb3
* fix a buttload of typos 6adc778d 1e7697b5
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1005-1803 `v1.15.5` pyz all the cores

View File

@@ -122,13 +122,13 @@ class Cfg(Namespace):
def __init__(self, a=None, v=None, c=None, **ka0):
ka = {}
ex = "chpw daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp early_ban ed emp exp force_js getmod grid gsel hardlink ih ihead magic hardlink_only nid nih no_acode no_athumb no_clone no_dav no_db_ip no_del no_dirsz no_dupe no_lifetime no_logues no_mv no_pipe no_poll no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw og og_no_head og_s_title q rand re_dirsz smb srch_dbg stats uqe vague_403 vc ver write_uplog xdev xlink xvol zs"
ex = "chpw daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp early_ban ed emp exp force_js getmod grid gsel hardlink ih ihead magic hardlink_only nid nih no_acode no_athumb no_clone no_dav no_db_ip no_del no_dirsz no_dupe no_lifetime no_logues no_mv no_pipe no_poll no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw og og_no_head og_s_title q rand re_dirsz rss smb srch_dbg stats uqe vague_403 vc ver write_uplog xdev xlink xvol zs"
ka.update(**{k: False for k in ex.split()})
ex = "dedup dotpart dotsrch hook_v no_dhash no_fastboot no_fpool no_htp no_rescan no_sendfile no_ses no_snap no_up_list no_voldump re_dhash plain_ip"
ka.update(**{k: True for k in ex.split()})
ex = "ah_cli ah_gen css_browser hist js_browser js_other mime mimes no_forget no_hash no_idx nonsus_urls og_tpl og_ua"
ex = "ah_cli ah_gen css_browser hist ipu js_browser js_other mime mimes no_forget no_hash no_idx nonsus_urls og_tpl og_ua"
ka.update(**{k: None for k in ex.split()})
ex = "hash_mt safe_dedup srch_time u2abort u2j u2sz"