Compare commits

...

33 Commits

Author SHA1 Message Date
ed
ccdacea0c4 v1.15.10 2024-10-27 07:51:11 +00:00
ed
4bdcbc1cb5 shares: allow upload, unpost
* files can be uploaded into writeable shares

* add "write-only" button to the create-share ui

* unpost is possible while viewing the relevant share
2024-10-26 21:36:07 +00:00
ed
833c6cf2ec partyfuse: bump dircache size
dircache size should exceed max dir depth, because the OS
may periodically listdir all parents of the current folder
2024-10-26 18:25:21 +00:00
ed
dd6dbdd90a http 304: client-option to force-disable cache
an extremely brutish workaround for issues such as #110, where
browsers receive an HTTP 304 and misinterpret it as an HTTP 200

option `--no304=1` adds the button `no304` to the controlpanel
which can be enabled to force-disable caching in that browser

the button is default-disabled; by specifying `--no304=2`
instead of `--no304=1` the button becomes default-enabled

can also always be enabled by accessing `/?setck=no304=y`
2024-10-26 17:56:54 +00:00
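
the resulting check is a plain cookie test; a condensed sketch of
the logic from the httpcli diff further down:

def no304(cookies, arg_no304):
    # cookie "no304=y" always wins; with --no304=2 the feature is
    # default-enabled unless the cookie was explicitly set to "n"
    v = cookies.get("no304")
    return v == "y" or (arg_no304 == 2 and v != "n")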
ed
63013cc565 http 304: k304 obsoleted for ie11 by Vary
the Vary header killed caching in all versions of internet explorer,
so there's no point in conditionally enabling k304 for trident anymore
2024-10-25 22:32:58 +00:00
ed
912402364a http 304: strip Content-Length and Content-Type
these response headers are usually not included in 304 replies,
and their presence is suspected to confuse some clients (#110)

also strip `out_headerlist` (primarily cookie assignments)
2024-10-25 22:24:40 +00:00
ed
159f51b12b http 304: if-range, backdating
add support for the `If-Range` header, which is generally used to
prevent resuming a partial download after the source file on the
server has been modified, by returning HTTP 200 instead of a 206

also simplifies `If-Modified-Since` and `If-Range` handling;
previously this was a spec-compliant date comparison,
now it's a basic string comparison instead. The server will now
also reply 200 when the server mtime is older than the client's.
This is technically not according to spec, but should be safer,
as it allows backdating timestamps without purging client cache
2024-10-25 22:05:59 +00:00
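
minus logging and the no304 override, the new handling (from the
httpcli diff further down) boils down to plain string equality:

def chk_lastmod(file_lastmod, c_ifrange, c_lastmod):
    # returns (send_body, can_range); both headers are compared
    # as strings against the file's formatted last-modified date
    if not c_ifrange and not c_lastmod:
        return True, True
    if c_ifrange and c_ifrange != file_lastmod:
        return True, False  # file was modified; ignore Range, reply 200
    return c_lastmod != file_lastmod, True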
ed
7678a91b0e debug: --ohead (log response headers) 2024-10-25 20:00:19 +00:00
ed
b13899c63d make --u2sz more intuitive
previously, it only accepted the 3-tuple `min,default,max`

if given a single integer (or any other unexpected value),
the up2k js would enter an infinite loop, eat all the ram
and crash the browser (nice)

fix this by accepting a single integer (for example 96)
and translating it to `1,96,96`
2024-10-22 21:37:51 +00:00
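
the accepted forms, condensed from the svchub diff further down:

def parse_u2sz(zs):
    # "96" becomes "1,96,96"; anything else must be "min,default,max",
    # and bad values now raise here instead of reaching the js
    zsl = zs.split(",")
    if len(zsl) not in (1, 3):
        raise Exception("expected one number, or min,default,max")
    if len(zsl) < 3:
        zsl = ["1", zs, zs]
    zi2 = 1
    for zi in [int(x) for x in zsl]:
        if zi < 1 or zi > 2047:
            raise Exception("minimum is 1, max is 2047")
        if zi < zi2:
            raise Exception("values must be equal or ascending")
        zi2 = zi
    return ",".join(zsl)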
ed
3a0d882c5e fix NetMap -j0 compat
would crash on startup if `-j0` was
combined with `--ipa` or `--ipu`
2024-10-22 20:53:19 +00:00
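
presumably the usual trap: `-j0` runs multiple processes, which
means pickling server state for the workers, and a `threading.Lock`
inside the NetMap refuses to be pickled; for example:

import pickle, threading

class NetMap(object):  # stand-in for copyparty's NetMap
    def __init__(self):
        self.mutex = threading.Lock()

pickle.dumps(NetMap())  # TypeError: cannot pickle '_thread.lock' object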
ed
cb81f0ad6d readme: add nintendo 3ds to supported browsers 2024-10-21 00:06:13 +00:00
ed
518bacf628 add pingvin-share to comparison 2024-10-20 01:18:07 +00:00
ed
ca63b03e55 update pkgs to 1.15.9 2024-10-18 23:54:46 +00:00
ed
cecef88d6b v1.15.9 2024-10-18 23:42:20 +00:00
ed
7ffd805a03 add RSS feed output; closes #109 2024-10-18 23:24:12 +00:00
ed
a7e2a0c981 up2k: fix chinese-specific js crash; closes #108
the client-side ETA, included as metadata in POSTs,
would crash the js with the initial "Starting..." text
2024-10-18 19:04:22 +00:00
ed
2a570bb4ca fix --df for webdav; closes #107
PUT uploads, as used by webdav, would stat the absolute
path of the file to be created, which would throw ENOENT

strip components until the path is an existing directory

and also try to enforce disk space / volume size limits
even when the incoming file is of unknown size
2024-10-18 18:14:35 +00:00
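
a minimal sketch of the "strip components" workaround (not the
exact server code):

import os

def nearest_existing_dir(ap):
    # walk up from the full target path until something exists,
    # then stat that instead of the not-yet-created file
    while ap and not os.path.isdir(ap):
        ap = os.path.dirname(ap)
    return ap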
ed
5ca8f0706d up2k.js: detect broken webworkers
the first time a file is to be hashed after a website refresh,
a set of webworkers are launched for efficient parallelization

in the unlikely event of a network outage at exactly this point,
the workers would fail to start, and the hashing would never begin

add a ping/pong sequence to smoketest the workers, and
fall back to hashing on the main-thread when necessary
2024-10-18 16:50:15 +00:00
ed
a9b4436cdc up2k: improve upload retry/timeout
* `js:` make handshake retries more aggressive
* `u2c:` reduce chunks timeout + ^
* `main:` reduce tcp timeout to 128sec (js is 42s)
* `httpcli:` less confusing log messages
2024-10-18 16:24:31 +00:00
ed
5f91999512 update pkgs to 1.15.8 2024-10-16 22:22:29 +00:00
ed
9f000beeaf v1.15.8 2024-10-16 21:53:23 +00:00
ed
ff0a71f212 gallery: play m4v videos 2024-10-16 21:36:11 +00:00
ed
22dfc6ec24 ui-toast: hide countdown if infinite 2024-10-16 21:32:47 +00:00
ed
48147c079e subchunks: fix eta, cfg-ui 2024-10-16 21:17:00 +00:00
ed
d715479ef6 add chickenbit to force hashwasm 2024-10-16 20:23:02 +00:00
ed
fc8298c468 up2k: avoid cloudflare upload size-limit
previously, the biggest file that could be uploaded through
cloudflare was 383 GiB, due to max num chunks being 4096

`--u2sz`, which takes three ints (min-size, target, max-size),
can now be used to enforce a max chunksize; chunks larger
than max-size get split into smaller subchunks / chunklets

subchunks cannot be stitched/joined, and subchunks of the
same chunk must be uploaded sequentially, one at a time

if a subchunk fails due to bitflips or connection-loss,
then the entire chunk must (and will) be reuploaded
2024-10-16 19:29:08 +00:00
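
where the old ceiling comes from: the client refused to exceed
4095 chunks (`nchunks < 4096` in the old u2c code below), and
roughly 96 MiB is the biggest POST cloudflare lets through:

print(4095 * 96 / 1024.0)  # 383.9 (GiB), the old max filesize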
ed
e94ca5dc91 up2k: improve logging 2024-10-16 15:41:19 +00:00
ed
114b71b751 up2k: fix filesystem toctou
previously and currently, as an upload completes, its "done" flag
is not set until all the data has been flushed to disk

however, the list of missing chunks becomes empty before the flush,
and that list was incorrectly used to determine completion state
in some dedup-related logic

as a result, duplicate uploads could initially fail, and would
succeed after the client automatically retried a handful of times
2024-10-16 15:32:58 +00:00
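
the gist of the fix, as a sketch:

def safe_to_dedup(job):
    # completion is now an explicit "done" flag, set only after the
    # final flush; the old check (empty need-list) became true
    # before the data was actually on disk
    return "done" in job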
ed
b2770a2087 u2c: support more crazy filenames
newlines, invalid utf8, and worst of all... %20 (whitespace)

due to up2k protocol limitations,
filenames are normalized when they hit the server,
but folders get to keep their intended jank
2024-10-15 23:01:07 +00:00
ed
cba1878bb2 u2c: don't get stuck at fifos and such 2024-10-15 22:53:55 +00:00
ed
a2e037d6af u2c: fix chunksize calculation
files which were exactly 128 GiB large would fail
(you can't make this shit up)
2024-10-15 22:39:48 +00:00
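
the boundary case, spelled out; the `<` vs `<=` off-by-one fixed
in the u2c diff below presumably made the client step to a bigger
chunksize than the server expected:

import math
filesize = 128 * 1024 ** 3  # exactly 128 GiB
chunksize = 32 * 1024 ** 2  # 32 MiB
print(math.ceil(filesize / chunksize))  # 4096; fails "nchunks < 4096"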
ed
65a2b6a223 u2c: fix excessive FDs
it would open separate FDs for all chunks to be uploaded...

open and close files as they are needed during upload instead
2024-10-15 22:30:15 +00:00
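
sketch of the lazy-open pattern from the FileSlice diff below:
read() starts out as a trampoline which opens the fd on first use,
so queued chunks no longer hold an open filehandle each:

class LazyFile(object):
    def __init__(self, path):
        self.path = path
        self.f = None
        self.read = self._read0

    def _open(self):
        self.f = open(self.path, "rb")
        self.read = self.f.read

    def _read0(self, sz):
        self._open()
        return self.read(sz)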
ed
9ed799e803 update pkgs to 1.15.7 2024-10-13 23:07:31 +00:00
31 changed files with 915 additions and 240 deletions

View File

@@ -47,6 +47,7 @@ turn almost any device into a file server with resumable uploads/downloads using
* [file manager](#file-manager) - cut/paste, rename, and delete files/folders (if you have permission)
* [shares](#shares) - share a file or folder by creating a temporary link
* [batch rename](#batch-rename) - select some files and press `F2` to bring up the rename UI
* [rss feeds](#rss-feeds) - monitor a folder with your RSS reader
* [media player](#media-player) - plays almost every audio format there is
* [audio equalizer](#audio-equalizer) - and [dynamic range compressor](https://en.wikipedia.org/wiki/Dynamic_range_compression)
* [fix unreliable playback on android](#fix-unreliable-playback-on-android) - due to phone / app settings
@@ -219,7 +220,7 @@ also see [comparison to similar software](./docs/versus.md)
* upload
* ☑ basic: plain multipart, ie6 support
* ☑ [up2k](#uploading): js, resumable, multithreaded
* **no filesize limit!** ...unless you use Cloudflare, then it's 383.9 GiB
* **no filesize limit!** even on Cloudflare
* ☑ stash: simple PUT filedropper
* ☑ filename randomizer
* ☑ write-only folders
@@ -654,7 +655,7 @@ up2k has several advantages:
* uploads resume if you reboot your browser or pc, just upload the same files again
* server detects any corruption; the client reuploads affected chunks
* the client doesn't upload anything that already exists on the server
* no filesize limit unless imposed by a proxy, for example Cloudflare, which blocks uploads over 383.9 GiB
* no filesize limit, even when a proxy limits the request size (for example Cloudflare)
* much higher speeds than ftp/scp/tarpipe on some internet connections (mainly american ones) thanks to parallel connections
* the last-modified timestamp of the file is preserved
@@ -690,6 +691,8 @@ note that since up2k has to read each file twice, `[🎈] bup` can *theoreticall
if you are resuming a massive upload and want to skip hashing the files which already finished, you can enable `turbo` in the `[⚙️] config` tab, but please read the tooltip on that button
if the server is behind a proxy which imposes a request-size limit, you can configure up2k to sneak below the limit with server-option `--u2sz` (the default is 96 MiB to support Cloudflare)
### file-search
@@ -843,6 +846,30 @@ or a mix of both:
the metadata keys you can use in the format field are the ones in the file-browser table header (whatever is collected with `-mte` and `-mtp`)
## rss feeds
monitor a folder with your RSS reader, optionally recursive
must be enabled per-volume with volflag `rss` or globally with `--rss`
the feed includes itunes metadata for use with podcast readers such as [AntennaPod](https://antennapod.org/)
a feed example: https://cd.ocv.me/a/d2/d22/?rss&fext=mp3
url parameters:
* `pw=hunter2` for password auth
* `recursive` to also include subfolders
* `title=foo` changes the feed title (default: folder name)
* `fext=mp3,opus` only include mp3 and opus files (default: all)
* `nf=30` only show the first 30 results (default: 250)
* `sort=m` sort by mtime (file last-modified), newest first (default)
* `u` = upload-time; NOTE: non-uploaded files have upload-time `0`
* `n` = filename
* `a` = filesize
* uppercase = reverse-sort; `M` = oldest file first
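
for example, a tiny (hypothetical) consumer of the feed above,
using the third-party `feedparser` module:

import feedparser  # pip install feedparser

d = feedparser.parse("https://cd.ocv.me/a/d2/d22/?rss&fext=mp3")
for e in d.entries[:3]:
    print(e.published, e.title, e.enclosures[0].href)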
## media player
plays almost every audio format there is (if the server has FFmpeg installed for on-demand transcoding)
@@ -1906,6 +1933,9 @@ quick summary of more eccentric web-browsers trying to view a directory index:
| **ie4** and **netscape** 4.0 | can browse, upload with `?b=u`, auth with `&pw=wark` |
| **ncsa mosaic** 2.7 | does not get a pass, [pic1](https://user-images.githubusercontent.com/241032/174189227-ae816026-cf6f-4be5-a26e-1b3b072c1b2f.png) - [pic2](https://user-images.githubusercontent.com/241032/174189225-5651c059-5152-46e9-ac26-7e98e497901b.png) |
| **SerenityOS** (7e98457) | hits a page fault, works with `?b=u`, file upload not-impl |
| **nintendo 3ds** | can browse, upload, view thumbnails (thx bnjmn) |
<p align="center"><img src="https://github.com/user-attachments/assets/88deab3d-6cad-4017-8841-2f041472b853" /></p>
# client examples

View File

@@ -1128,7 +1128,7 @@ def main():
# dircache is always a boost,
# only want to disable it for tests etc,
cdn = 9 # max num dirs; 0=disable
cdn = 24 # max num dirs; keep larger than max dir depth; 0=disable
cds = 1 # numsec until an entry goes stale
where = "local directory"

View File

@@ -1,8 +1,8 @@
#!/usr/bin/env python3
from __future__ import print_function, unicode_literals
S_VERSION = "2.2"
S_BUILD_DT = "2024-10-13"
S_VERSION = "2.5"
S_BUILD_DT = "2024-10-18"
"""
u2c.py: upload to copyparty
@@ -62,6 +62,9 @@ else:
unicode = str
WTF8 = "replace" if PY2 else "surrogateescape"
VT100 = platform.system() != "Windows"
@@ -151,6 +154,7 @@ class HCli(object):
self.tls = tls
self.verify = ar.te or not ar.td
self.conns = []
self.hconns = []
if tls:
import ssl
@@ -170,7 +174,7 @@ class HCli(object):
"User-Agent": "u2c/%s" % (S_VERSION,),
}
def _connect(self):
def _connect(self, timeout):
args = {}
if PY37:
args["blocksize"] = 1048576
@@ -182,7 +186,7 @@ class HCli(object):
if self.ctx:
args = {"context": self.ctx}
return C(self.addr, self.port, timeout=999, **args)
return C(self.addr, self.port, timeout=timeout, **args)
def req(self, meth, vpath, hdrs, body=None, ctype=None):
hdrs.update(self.base_hdrs)
@@ -195,7 +199,9 @@ class HCli(object):
0 if not body else body.len if hasattr(body, "len") else len(body)
)
c = self.conns.pop() if self.conns else self._connect()
# large timeout for handshakes (safededup)
conns = self.hconns if ctype == MJ else self.conns
c = conns.pop() if conns else self._connect(999 if ctype == MJ else 128)
try:
c.request(meth, vpath, body, hdrs)
if PY27:
@@ -204,7 +210,7 @@ class HCli(object):
rsp = c.getresponse()
data = rsp.read()
self.conns.append(c)
conns.append(c)
return rsp.status, data.decode("utf-8")
except:
c.close()
@@ -228,7 +234,7 @@ class File(object):
self.lmod = lmod # type: float
self.abs = os.path.join(top, rel) # type: bytes
self.name = self.rel.split(b"/")[-1].decode("utf-8", "replace") # type: str
self.name = self.rel.split(b"/")[-1].decode("utf-8", WTF8) # type: str
# set by get_hashlist
self.cids = [] # type: list[tuple[str, int, int]] # [ hash, ofs, sz ]
@@ -267,10 +273,41 @@ class FileSlice(object):
raise Exception(9)
tlen += clen
self.len = tlen
self.len = self.tlen = tlen
self.cdr = self.car + self.len
self.ofs = 0 # type: int
self.f = open(file.abs, "rb", 512 * 1024)
self.f = None
self.seek = self._seek0
self.read = self._read0
def subchunk(self, maxsz, nth):
if self.tlen <= maxsz:
return -1
if not nth:
self.car0 = self.car
self.cdr0 = self.cdr
self.car = self.car0 + maxsz * nth
if self.car >= self.cdr0:
return -2
self.cdr = self.car + min(self.cdr0 - self.car, maxsz)
self.len = self.cdr - self.car
self.seek(0)
return nth
def unsub(self):
self.car = self.car0
self.cdr = self.cdr0
self.len = self.tlen
def _open(self):
self.seek = self._seek
self.read = self._read
self.f = open(self.file.abs, "rb", 512 * 1024)
self.f.seek(self.car)
# https://stackoverflow.com/questions/4359495/what-is-exactly-a-file-like-object-in-python
@@ -282,10 +319,15 @@ class FileSlice(object):
except:
pass # py27 probably
def close(self, *a, **ka):
return # until _open
def tell(self):
return self.ofs
def seek(self, ofs, wh=0):
def _seek(self, ofs, wh=0):
assert self.f # !rm
if wh == 1:
ofs = self.ofs + ofs
elif wh == 2:
@@ -299,12 +341,22 @@ class FileSlice(object):
self.ofs = ofs
self.f.seek(self.car + ofs)
def read(self, sz):
def _read(self, sz):
assert self.f # !rm
sz = min(sz, self.len - self.ofs)
ret = self.f.read(sz)
self.ofs += len(ret)
return ret
def _seek0(self, ofs, wh=0):
self._open()
return self.seek(ofs, wh)
def _read0(self, sz):
self._open()
return self.read(sz)
class MTHash(object):
def __init__(self, cores):
@@ -557,13 +609,17 @@ def walkdir(err, top, excl, seen):
for ap, inf in sorted(statdir(err, top)):
if excl.match(ap):
continue
yield ap, inf
if stat.S_ISDIR(inf.st_mode):
yield ap, inf
try:
for x in walkdir(err, ap, excl, seen):
yield x
except Exception as ex:
err.append((ap, str(ex)))
elif stat.S_ISREG(inf.st_mode):
yield ap, inf
else:
err.append((ap, "irregular filetype 0%o" % (inf.st_mode,)))
def walkdirs(err, tops, excl):
@@ -609,11 +665,12 @@ def walkdirs(err, tops, excl):
# mostly from copyparty/util.py
def quotep(btxt):
# type: (bytes) -> bytes
quot1 = quote(btxt, safe=b"/")
if not PY2:
quot1 = quot1.encode("ascii")
return quot1.replace(b" ", b"+") # type: ignore
return quot1.replace(b" ", b"%20") # type: ignore
# from copyparty/util.py
@@ -641,7 +698,7 @@ def up2k_chunksize(filesize):
while True:
for mul in [1, 2]:
nchunks = math.ceil(filesize * 1.0 / chunksize)
if nchunks <= 256 or (chunksize >= 32 * 1024 * 1024 and nchunks < 4096):
if nchunks <= 256 or (chunksize >= 32 * 1024 * 1024 and nchunks <= 4096):
return chunksize
chunksize += stepsize
@@ -720,7 +777,7 @@ def handshake(ar, file, search):
url = file.url
else:
if b"/" in file.rel:
url = quotep(file.rel.rsplit(b"/", 1)[0]).decode("utf-8", "replace")
url = quotep(file.rel.rsplit(b"/", 1)[0]).decode("utf-8")
else:
url = ""
url = ar.vtop + url
@@ -766,15 +823,15 @@ def handshake(ar, file, search):
if search:
return r["hits"], False
file.url = r["purl"]
file.url = quotep(r["purl"].encode("utf-8", WTF8)).decode("utf-8")
file.name = r["name"]
file.wark = r["wark"]
return r["hash"], r["sprs"]
def upload(fsl, stats):
# type: (FileSlice, str) -> None
def upload(fsl, stats, maxsz):
# type: (FileSlice, str, int) -> None
"""upload a range of file data, defined by one or more `cid` (chunk-hash)"""
ctxt = fsl.cids[0]
@@ -792,21 +849,34 @@ def upload(fsl, stats):
if stats:
headers["X-Up2k-Stat"] = stats
nsub = 0
try:
sc, txt = web.req("POST", fsl.file.url, headers, fsl, MO)
while nsub != -1:
nsub = fsl.subchunk(maxsz, nsub)
if nsub == -2:
return
if nsub >= 0:
headers["X-Up2k-Subc"] = str(maxsz * nsub)
headers.pop(CLEN, None)
nsub += 1
if sc == 400:
if (
"already being written" in txt
or "already got that" in txt
or "only sibling chunks" in txt
):
fsl.file.nojoin = 1
sc, txt = web.req("POST", fsl.file.url, headers, fsl, MO)
if sc >= 400:
raise Exception("http %s: %s" % (sc, txt))
if sc == 400:
if (
"already being written" in txt
or "already got that" in txt
or "only sibling chunks" in txt
):
fsl.file.nojoin = 1
if sc >= 400:
raise Exception("http %s: %s" % (sc, txt))
finally:
fsl.f.close()
if fsl.f:
fsl.f.close()
if nsub != -1:
fsl.unsub()
class Ctl(object):
@@ -938,7 +1008,7 @@ class Ctl(object):
print(" %d up %s" % (ncs - nc, cid))
stats = "%d/0/0/%d" % (nf, self.nfiles - nf)
fslice = FileSlice(file, [cid])
upload(fslice, stats)
upload(fslice, stats, self.ar.szm)
print(" ok!")
if file.recheck:
@@ -1057,7 +1127,7 @@ class Ctl(object):
print(" ls ~{0}".format(srd))
zt = (
self.ar.vtop,
quotep(rd.replace(b"\\", b"/")).decode("utf-8", "replace"),
quotep(rd.replace(b"\\", b"/")).decode("utf-8"),
)
sc, txt = web.req("GET", "%s%s?ls&lt&dots" % zt, {})
if sc >= 400:
@@ -1066,7 +1136,7 @@ class Ctl(object):
j = json.loads(txt)
for f in j["dirs"] + j["files"]:
rfn = f["href"].split("?")[0].rstrip("/")
ls[unquote(rfn.encode("utf-8", "replace"))] = f
ls[unquote(rfn.encode("utf-8", WTF8))] = f
except Exception as ex:
print(" mkdir ~{0} ({1})".format(srd, ex))
@@ -1080,7 +1150,7 @@ class Ctl(object):
lnodes = [x.split(b"/")[-1] for x in zls]
bnames = [x for x in ls if x not in lnodes and x != b".hist"]
vpath = self.ar.url.split("://")[-1].split("/", 1)[-1]
names = [x.decode("utf-8", "replace") for x in bnames]
names = [x.decode("utf-8", WTF8) for x in bnames]
locs = [vpath + srd + "/" + x for x in names]
while locs:
req = locs
@@ -1286,7 +1356,7 @@ class Ctl(object):
self._check_if_done()
continue
njoin = (self.ar.sz * 1024 * 1024) // chunksz
njoin = self.ar.sz // chunksz
cs = hs[:]
while cs:
fsl = FileSlice(file, cs[:1])
@@ -1338,7 +1408,7 @@ class Ctl(object):
)
try:
upload(fsl, stats)
upload(fsl, stats, self.ar.szm)
except Exception as ex:
t = "upload failed, retrying: %s #%s+%d (%s)\n"
eprint(t % (file.name, cids[0][:8], len(cids) - 1, ex))
@@ -1427,6 +1497,7 @@ source file/folder selection uses rsync syntax, meaning that:
ap.add_argument("-j", type=int, metavar="CONNS", default=2, help="parallel connections")
ap.add_argument("-J", type=int, metavar="CORES", default=hcores, help="num cpu-cores to use for hashing; set 0 or 1 for single-core hashing")
ap.add_argument("--sz", type=int, metavar="MiB", default=64, help="try to make each POST this big")
ap.add_argument("--szm", type=int, metavar="MiB", default=96, help="max size of each POST (default is cloudflare max)")
ap.add_argument("-nh", action="store_true", help="disable hashing while uploading")
ap.add_argument("-ns", action="store_true", help="no status panel (for slow consoles and macos)")
ap.add_argument("--cd", type=float, metavar="SEC", default=5, help="delay before reattempting a failed handshake/upload")
@@ -1454,6 +1525,9 @@ source file/folder selection uses rsync syntax, meaning that:
if ar.dr:
ar.ow = True
ar.sz *= 1024 * 1024
ar.szm *= 1024 * 1024
ar.x = "|".join(ar.x or [])
setattr(ar, "wlist", ar.url == "-")

View File

@@ -1,6 +1,6 @@
# Maintainer: icxes <dev.null@need.moe>
pkgname=copyparty
pkgver="1.15.6"
pkgver="1.15.9"
pkgrel=1
pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
arch=("any")
@@ -21,7 +21,7 @@ optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tag
)
source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
backup=("etc/${pkgname}.d/init" )
sha256sums=("abb5c1705cd80ea553d647d4a7b35b5e1dac5a517200551bcca79aa199f30875")
sha256sums=("f75668c752468cab3e4d7bb93323ab9ac7309ee74a138d3597ef0edabd9235de")
build() {
cd "${srcdir}/${pkgname}-${pkgver}"

View File

@@ -1,5 +1,5 @@
{
"url": "https://github.com/9001/copyparty/releases/download/v1.15.6/copyparty-sfx.py",
"version": "1.15.6",
"hash": "sha256-0ikt3jv9/XT/w/ew+R4rZxF6s7LwNhUvUYYIZtkQqbk="
"url": "https://github.com/9001/copyparty/releases/download/v1.15.9/copyparty-sfx.py",
"version": "1.15.9",
"hash": "sha256-pxX1RkfO3h1XyPQnrwkFp8laOS4sqXAwbsi2V18VnVY="
}

View File

@@ -1017,7 +1017,7 @@ def add_upload(ap):
ap2.add_argument("--sparse", metavar="MiB", type=int, default=4, help="windows-only: minimum size of incoming uploads through up2k before they are made into sparse files")
ap2.add_argument("--turbo", metavar="LVL", type=int, default=0, help="configure turbo-mode in up2k client; [\033[32m-1\033[0m] = forbidden/always-off, [\033[32m0\033[0m] = default-off and warn if enabled, [\033[32m1\033[0m] = default-off, [\033[32m2\033[0m] = on, [\033[32m3\033[0m] = on and disable datecheck")
ap2.add_argument("--u2j", metavar="JOBS", type=int, default=2, help="web-client: number of file chunks to upload in parallel; 1 or 2 is good for low-latency (same-country) connections, 4-8 for android clients, 16 for cross-atlantic (max=64)")
ap2.add_argument("--u2sz", metavar="N,N,N", type=u, default="1,64,96", help="web-client: default upload chunksize (MiB); sets \033[33mmin,default,max\033[0m in the settings gui. Each HTTP POST will aim for this size. Cloudflare max is 96. Big values are good for cross-atlantic but may increase HDD fragmentation on some FS. Disable this optimization with [\033[32m1,1,1\033[0m]")
ap2.add_argument("--u2sz", metavar="N,N,N", type=u, default="1,64,96", help="web-client: default upload chunksize (MiB); sets \033[33mmin,default,max\033[0m in the settings gui. Each HTTP POST will aim for \033[33mdefault\033[0m, and never exceed \033[33mmax\033[0m. Cloudflare max is 96. Big values are good for cross-atlantic but may increase HDD fragmentation on some FS. Disable this optimization with [\033[32m1,1,1\033[0m]")
ap2.add_argument("--u2sort", metavar="TXT", type=u, default="s", help="upload order; [\033[32ms\033[0m]=smallest-first, [\033[32mn\033[0m]=alphabetical, [\033[32mfs\033[0m]=force-s, [\033[32mfn\033[0m]=force-n -- alphabetical is a bit slower on fiber/LAN but makes it easier to eyeball if everything went fine")
ap2.add_argument("--write-uplog", action="store_true", help="write POST reports to textfiles in working-directory")
@@ -1037,7 +1037,7 @@ def add_network(ap):
else:
ap2.add_argument("--freebind", action="store_true", help="allow listening on IPs which do not yet exist, for example if the network interfaces haven't finished going up. Only makes sense for IPs other than '0.0.0.0', '127.0.0.1', '::', and '::1'. May require running as root (unless net.ipv6.ip_nonlocal_bind)")
ap2.add_argument("--s-thead", metavar="SEC", type=int, default=120, help="socket timeout (read request header)")
ap2.add_argument("--s-tbody", metavar="SEC", type=float, default=186.0, help="socket timeout (read/write request/response bodies). Use 60 on fast servers (default is extremely safe). Disable with 0 if reverse-proxied for a 2%% speed boost")
ap2.add_argument("--s-tbody", metavar="SEC", type=float, default=128.0, help="socket timeout (read/write request/response bodies). Use 60 on fast servers (default is extremely safe). Disable with 0 if reverse-proxied for a 2%% speed boost")
ap2.add_argument("--s-rd-sz", metavar="B", type=int, default=256*1024, help="socket read size in bytes (indirectly affects filesystem writes; recommendation: keep equal-to or lower-than \033[33m--iobuf\033[0m)")
ap2.add_argument("--s-wr-sz", metavar="B", type=int, default=256*1024, help="socket write size in bytes")
ap2.add_argument("--s-wr-slp", metavar="SEC", type=float, default=0.0, help="debug: socket write delay in seconds")
@@ -1307,6 +1307,7 @@ def add_logging(ap):
ap2.add_argument("--log-conn", action="store_true", help="debug: print tcp-server msgs")
ap2.add_argument("--log-htp", action="store_true", help="debug: print http-server threadpool scaling")
ap2.add_argument("--ihead", metavar="HEADER", type=u, action='append', help="print request \033[33mHEADER\033[0m; [\033[32m*\033[0m]=all")
ap2.add_argument("--ohead", metavar="HEADER", type=u, action='append', help="print response \033[33mHEADER\033[0m; [\033[32m*\033[0m]=all")
ap2.add_argument("--lf-url", metavar="RE", type=u, default=r"^/\.cpr/|\?th=[wj]$|/\.(_|ql_|DS_Store$|localized$)", help="dont log URLs matching regex \033[33mRE\033[0m")
@@ -1357,6 +1358,14 @@ def add_transcoding(ap):
ap2.add_argument("--ac-maxage", metavar="SEC", type=int, default=86400, help="delete cached transcode output after \033[33mSEC\033[0m seconds")
def add_rss(ap):
ap2 = ap.add_argument_group('RSS options')
ap2.add_argument("--rss", action="store_true", help="enable RSS output (experimental)")
ap2.add_argument("--rss-nf", metavar="HITS", type=int, default=250, help="default number of files to return (url-param 'nf')")
ap2.add_argument("--rss-fext", metavar="E,E", type=u, default="", help="default list of file extensions to include (url-param 'fext'); blank=all")
ap2.add_argument("--rss-sort", metavar="ORD", type=u, default="m", help="default sort order (url-param 'sort'); [\033[32mm\033[0m]=last-modified [\033[32mu\033[0m]=upload-time [\033[32mn\033[0m]=filename [\033[32ms\033[0m]=filesize; Uppercase=oldest-first. Note that upload-time is 0 for non-uploaded files")
def add_db_general(ap, hcores):
noidx = APPLESAN_TXT if MACOS else ""
ap2 = ap.add_argument_group('general db options')
@@ -1452,6 +1461,7 @@ def add_ui(ap, retry):
ap2.add_argument("--pb-url", metavar="URL", type=u, default="https://github.com/9001/copyparty", help="powered-by link; disable with \033[33m-np\033[0m")
ap2.add_argument("--ver", action="store_true", help="show version on the control panel (incompatible with \033[33m-nb\033[0m)")
ap2.add_argument("--k304", metavar="NUM", type=int, default=0, help="configure the option to enable/disable k304 on the controlpanel (workaround for buggy reverse-proxies); [\033[32m0\033[0m] = hidden and default-off, [\033[32m1\033[0m] = visible and default-off, [\033[32m2\033[0m] = visible and default-on")
ap2.add_argument("--no304", metavar="NUM", type=int, default=0, help="configure the option to enable/disable no304 on the controlpanel (workaround for buggy caching in browsers); [\033[32m0\033[0m] = hidden and default-off, [\033[32m1\033[0m] = visible and default-off, [\033[32m2\033[0m] = visible and default-on")
ap2.add_argument("--md-sbf", metavar="FLAGS", type=u, default="downloads forms popups scripts top-navigation-by-user-activation", help="list of capabilities to ALLOW for README.md docs (volflag=md_sbf); see https://developer.mozilla.org/en-US/docs/Web/HTML/Element/iframe#attr-sandbox")
ap2.add_argument("--lg-sbf", metavar="FLAGS", type=u, default="downloads forms popups scripts top-navigation-by-user-activation", help="list of capabilities to ALLOW for prologue/epilogue docs (volflag=lg_sbf)")
ap2.add_argument("--no-sb-md", action="store_true", help="don't sandbox README/PREADME.md documents (volflags: no_sb_md | sb_md)")
@@ -1526,6 +1536,7 @@ def run_argparse(
add_db_metadata(ap)
add_thumbnail(ap)
add_transcoding(ap)
add_rss(ap)
add_ftp(ap)
add_webdav(ap)
add_tftp(ap)
@@ -1748,6 +1759,9 @@ def main(argv: Optional[list[str]] = None) -> None:
if al.ihead:
al.ihead = [x.lower() for x in al.ihead]
if al.ohead:
al.ohead = [x.lower() for x in al.ohead]
if HAVE_SSL:
if al.ssl_ver:
configure_ssl_ver(al)

View File

@@ -1,8 +1,8 @@
# coding: utf-8
VERSION = (1, 15, 7)
VERSION = (1, 15, 10)
CODENAME = "fill the drives"
BUILD_DT = (2024, 10, 14)
BUILD_DT = (2024, 10, 26)
S_VERSION = ".".join(map(str, VERSION))
S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)

View File

@@ -66,6 +66,7 @@ if PY2:
LEELOO_DALLAS = "leeloo_dallas"
SEE_LOG = "see log for details"
SEESLOG = " (see serverlog for details)"
SSEELOG = " ({})".format(SEE_LOG)
BAD_CFG = "invalid config; {}".format(SEE_LOG)
SBADCFG = " ({})".format(BAD_CFG)
@@ -164,8 +165,11 @@ class Lim(object):
self.chk_rem(rem)
if sz != -1:
self.chk_sz(sz)
self.chk_vsz(broker, ptop, sz, volgetter)
self.chk_df(abspath, sz) # side effects; keep last-ish
else:
sz = 0
self.chk_vsz(broker, ptop, sz, volgetter)
self.chk_df(abspath, sz) # side effects; keep last-ish
ap2, vp2 = self.rot(abspath)
if abspath == ap2:
@@ -205,7 +209,15 @@ class Lim(object):
if self.dft < time.time():
self.dft = int(time.time()) + 300
self.dfv = get_df(abspath)[0] or 0
df, du, err = get_df(abspath, True)
if err:
t = "failed to read disk space usage for [%s]: %s"
self.log(t % (abspath, err), 3)
self.dfv = 0xAAAAAAAAA # 42.6 GiB
else:
self.dfv = df or 0
for j in list(self.reg.values()) if self.reg else []:
self.dfv -= int(j["size"] / (len(j["hash"]) or 999) * len(j["need"]))
@@ -540,15 +552,14 @@ class VFS(object):
return self._get_dbv(vrem)
shv, srem = src
return shv, vjoin(srem, vrem)
return shv._get_dbv(vjoin(srem, vrem))
def _get_dbv(self, vrem: str) -> tuple["VFS", str]:
dbv = self.dbv
if not dbv:
return self, vrem
tv = [self.vpath[len(dbv.vpath) :].lstrip("/"), vrem]
vrem = "/".join([x for x in tv if x])
vrem = vjoin(self.vpath[len(dbv.vpath) :].lstrip("/"), vrem)
return dbv, vrem
def canonical(self, rem: str, resolve: bool = True) -> str:
@@ -2628,7 +2639,7 @@ class AuthSrv(object):
]
csv = set("i p th_covers zm_on zm_off zs_on zs_off".split())
zs = "c ihead mtm mtp on403 on404 xad xar xau xiu xban xbd xbr xbu xm"
zs = "c ihead ohead mtm mtp on403 on404 xad xar xau xiu xban xbd xbr xbu xm"
lst = set(zs.split())
askip = set("a v c vc cgen exp_lg exp_md theme".split())
fskip = set("exp_lg exp_md mv_re_r mv_re_t rm_re_r rm_re_t".split())

View File

@@ -46,6 +46,7 @@ def vf_bmap() -> dict[str, str]:
"og_no_head",
"og_s_title",
"rand",
"rss",
"xdev",
"xlink",
"xvol",

View File

@@ -2,7 +2,6 @@
from __future__ import print_function, unicode_literals
import argparse # typechk
import calendar
import copy
import errno
import gzip
@@ -19,7 +18,6 @@ import threading # typechk
import time
import uuid
from datetime import datetime
from email.utils import parsedate
from operator import itemgetter
import jinja2 # typechk
@@ -107,6 +105,7 @@ from .util import (
unquotep,
vjoin,
vol_san,
vroots,
vsplit,
wrename,
wunlink,
@@ -127,10 +126,17 @@ _ = (argparse, threading)
NO_CACHE = {"Cache-Control": "no-cache"}
ALL_COOKIES = "k304 no304 js idxh dots cppwd cppws".split()
H_CONN_KEEPALIVE = "Connection: Keep-Alive"
H_CONN_CLOSE = "Connection: Close"
LOGUES = [[0, ".prologue.html"], [1, ".epilogue.html"]]
READMES = [[0, ["preadme.md", "PREADME.md"]], [1, ["readme.md", "README.md"]]]
RSS_SORT = {"m": "mt", "u": "at", "n": "fn", "s": "sz"}
class HttpCli(object):
"""
@@ -790,11 +796,11 @@ class HttpCli(object):
def k304(self) -> bool:
k304 = self.cookies.get("k304")
return (
k304 == "y"
or (self.args.k304 == 2 and k304 != "n")
or ("; Trident/" in self.ua and not k304)
)
return k304 == "y" or (self.args.k304 == 2 and k304 != "n")
def no304(self) -> bool:
no304 = self.cookies.get("no304")
return no304 == "y" or (self.args.no304 == 2 and no304 != "n")
def _build_html_head(self, maybe_html: Any, kv: dict[str, Any]) -> None:
html = str(maybe_html)
@@ -829,25 +835,28 @@ class HttpCli(object):
) -> None:
response = ["%s %s %s" % (self.http_ver, status, HTTPCODE[status])]
if length is not None:
response.append("Content-Length: " + unicode(length))
if status == 304 and self.k304():
self.keepalive = False
# close if unknown length, otherwise take client's preference
response.append("Connection: " + ("Keep-Alive" if self.keepalive else "Close"))
response.append("Date: " + formatdate())
# headers{} overrides anything set previously
if headers:
self.out_headers.update(headers)
# default to utf8 html if no content-type is set
if not mime:
mime = self.out_headers.get("Content-Type") or "text/html; charset=utf-8"
if status == 304:
self.out_headers.pop("Content-Length", None)
self.out_headers.pop("Content-Type", None)
self.out_headerlist.clear()
if self.k304():
self.keepalive = False
else:
if length is not None:
response.append("Content-Length: " + unicode(length))
self.out_headers["Content-Type"] = mime
if mime:
self.out_headers["Content-Type"] = mime
elif "Content-Type" not in self.out_headers:
self.out_headers["Content-Type"] = "text/html; charset=utf-8"
# close if unknown length, otherwise take client's preference
response.append(H_CONN_KEEPALIVE if self.keepalive else H_CONN_CLOSE)
response.append("Date: " + formatdate())
for k, zs in list(self.out_headers.items()) + self.out_headerlist:
response.append("%s: %s" % (k, zs))
@@ -861,6 +870,19 @@ class HttpCli(object):
self.cbonk(self.conn.hsrv.gmal, zs, "cc_hdr", "Cc in out-hdr")
raise Pebkac(999)
if self.args.ohead and self.do_log:
keys = self.args.ohead
if "*" in keys:
lines = response[1:]
else:
lines = []
for zs in response[1:]:
if zs.split(":")[0].lower() in keys:
lines.append(zs)
for zs in lines:
hk, hv = zs.split(": ")
self.log("[O] {}: \033[33m[{}]".format(hk, hv), 5)
response.append("\r\n")
try:
self.s.sendall("\r\n".join(response).encode("utf-8"))
@@ -940,13 +962,14 @@ class HttpCli(object):
lines = [
"%s %s %s" % (self.http_ver or "HTTP/1.1", status, HTTPCODE[status]),
"Connection: Close",
H_CONN_CLOSE,
]
if body:
lines.append("Content-Length: " + unicode(len(body)))
self.s.sendall("\r\n".join(lines).encode("utf-8") + b"\r\n\r\n" + body)
lines.append("\r\n")
self.s.sendall("\r\n".join(lines).encode("utf-8") + body)
def urlq(self, add: dict[str, str], rm: list[str]) -> str:
"""
@@ -1180,12 +1203,6 @@ class HttpCli(object):
if "stack" in self.uparam:
return self.tx_stack()
if "ups" in self.uparam:
return self.tx_ups()
if "k304" in self.uparam:
return self.set_k304()
if "setck" in self.uparam:
return self.setck()
@@ -1201,8 +1218,150 @@ class HttpCli(object):
if "h" in self.uparam:
return self.tx_mounts()
if "ups" in self.uparam:
# vpath is used for share translation
return self.tx_ups()
if "rss" in self.uparam:
return self.tx_rss()
return self.tx_browser()
def tx_rss(self) -> bool:
if self.do_log:
self.log("RSS %s @%s" % (self.req, self.uname))
if not self.can_read:
return self.tx_404()
vn = self.vn
if not vn.flags.get("rss"):
raise Pebkac(405, "RSS is disabled in server config")
rem = self.rem
idx = self.conn.get_u2idx()
if not idx or not hasattr(idx, "p_end"):
if not HAVE_SQLITE3:
raise Pebkac(500, "sqlite3 not found on server; rss is disabled")
raise Pebkac(500, "server busy, cannot generate rss; please retry in a bit")
uv = [rem]
if "recursive" in self.uparam:
uq = "up.rd like ?||'%'"
else:
uq = "up.rd == ?"
zs = str(self.uparam.get("fext", self.args.rss_fext))
if zs in ("True", "False"):
zs = ""
if zs:
zsl = []
for ext in zs.split(","):
zsl.append("+up.fn like '%.'||?")
uv.append(ext)
uq += " and ( %s )" % (" or ".join(zsl),)
zs1 = self.uparam.get("sort", self.args.rss_sort)
zs2 = zs1.lower()
zs = RSS_SORT.get(zs2)
if not zs:
raise Pebkac(400, "invalid sort key; must be m/u/n/s")
uq += " order by up." + zs
if zs1 == zs2:
uq += " desc"
nmax = int(self.uparam.get("nf") or self.args.rss_nf)
hits = idx.run_query(self.uname, [self.vn], uq, uv, False, False, nmax)[0]
pw = self.ouparam.get("pw")
if pw:
q_pw = "?pw=%s" % (pw,)
a_pw = "&pw=%s" % (pw,)
for i in hits:
i["rp"] += a_pw if "?" in i["rp"] else q_pw
else:
q_pw = a_pw = ""
title = self.uparam.get("title") or self.vpath.split("/")[-1]
etitle = html_escape(title, True, True)
baseurl = "%s://%s%s" % (
"https" if self.is_https else "http",
self.host,
self.args.SRS,
)
feed = "%s%s" % (baseurl, self.req[1:])
efeed = html_escape(feed, True, True)
edirlink = efeed.split("?")[0] + q_pw
ret = [
"""\
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:content="http://purl.org/rss/1.0/modules/content/">
\t<channel>
\t\t<atom:link href="%s" rel="self" type="application/rss+xml" />
\t\t<title>%s</title>
\t\t<description></description>
\t\t<link>%s</link>
\t\t<generator>copyparty-1</generator>
"""
% (efeed, etitle, edirlink)
]
q = "select fn from cv where rd=? and dn=?"
crd, cdn = rem.rsplit("/", 1) if "/" in rem else ("", rem)
try:
cfn = idx.cur[self.vn.realpath].execute(q, (crd, cdn)).fetchone()[0]
bos.stat(os.path.join(vn.canonical(rem), cfn))
cv_url = "%s%s?th=jf%s" % (baseurl, vjoin(self.vpath, cfn), a_pw)
cv_url = html_escape(cv_url, True, True)
zs = """\
\t\t<image>
\t\t\t<url>%s</url>
\t\t\t<title>%s</title>
\t\t\t<link>%s</link>
\t\t</image>
"""
ret.append(zs % (cv_url, etitle, edirlink))
except:
pass
for i in hits:
iurl = html_escape("%s%s" % (baseurl, i["rp"]), True, True)
title = unquotep(i["rp"].split("?")[0].split("/")[-1])
title = html_escape(title, True, True)
tag_t = str(i["tags"].get("title") or "")
tag_a = str(i["tags"].get("artist") or "")
desc = "%s - %s" % (tag_a, tag_t) if tag_t and tag_a else (tag_t or tag_a)
desc = html_escape(desc, True, True) if desc else title
mime = html_escape(guess_mime(title))
lmod = formatdate(i["ts"])
zsa = (iurl, iurl, title, desc, lmod, iurl, mime, i["sz"])
zs = (
"""\
\t\t<item>
\t\t\t<guid>%s</guid>
\t\t\t<link>%s</link>
\t\t\t<title>%s</title>
\t\t\t<description>%s</description>
\t\t\t<pubDate>%s</pubDate>
\t\t\t<enclosure url="%s" type="%s" length="%d"/>
"""
% zsa
)
dur = i["tags"].get(".dur")
if dur:
zs += "\t\t\t<itunes:duration>%d</itunes:duration>\n" % (dur,)
ret.append(zs + "\t\t</item>\n")
ret.append("\t</channel>\n</rss>\n")
bret = "".join(ret).encode("utf-8", "replace")
self.reply(bret, 200, "text/xml; charset=utf-8")
self.log("rss: %d hits, %d bytes" % (len(hits), len(bret)))
return True
def handle_propfind(self) -> bool:
if self.do_log:
self.log("PFIND %s @%s" % (self.req, self.uname))
@@ -1884,7 +2043,7 @@ class HttpCli(object):
f, fn = ren_open(fn, *open_a, **params)
try:
path = os.path.join(fdir, fn)
post_sz, sha_hex, sha_b64 = hashcopy(reader, f, self.args.s_wr_slp)
post_sz, sha_hex, sha_b64 = hashcopy(reader, f, None, 0, self.args.s_wr_slp)
finally:
f.close()
@@ -2251,6 +2410,15 @@ class HttpCli(object):
if "purl" in ret:
ret["purl"] = self.args.SR + ret["purl"]
if self.args.shr and self.vpath.startswith(self.args.shr1):
# strip common suffix (uploader's folder structure)
vp_req, vp_vfs = vroots(self.vpath, vjoin(dbv.vpath, vrem))
if not ret["purl"].startswith(vp_vfs):
t = "share-mapping failed; req=[%s] dbv=[%s] vrem=[%s] n1=[%s] n2=[%s] purl=[%s]"
zt = (self.vpath, dbv.vpath, vrem, vp_req, vp_vfs, ret["purl"])
raise Pebkac(500, t % zt)
ret["purl"] = vp_req + ret["purl"][len(vp_vfs) :]
ret = json.dumps(ret)
self.log(ret)
self.reply(ret.encode("utf-8"), mime="application/json")
@@ -2343,12 +2511,16 @@ class HttpCli(object):
chashes.append(siblings[n : n + clen])
vfs, _ = self.asrv.vfs.get(self.vpath, self.uname, False, True)
ptop = (vfs.dbv or vfs).realpath
ptop = vfs.get_dbv("")[0].realpath
# if this is a share, then get_dbv has been overridden to return
# the dbv (which does not exist as a property). And its realpath
# could point into the middle of its origin vfs node, meaning it
# is not necessarily registered with up2k, so get_dbv is crucial
broker = self.conn.hsrv.broker
x = broker.ask("up2k.handle_chunks", ptop, wark, chashes)
response = x.get()
chashes, chunksize, cstarts, path, lastmod, sprs = response
chashes, chunksize, cstarts, path, lastmod, fsize, sprs = response
maxsize = chunksize * len(chashes)
cstart0 = cstarts[0]
locked = chashes # remaining chunks to be received in this request
@@ -2356,6 +2528,50 @@ class HttpCli(object):
num_left = -1 # num chunks left according to most recent up2k release
treport = time.time() # ratelimit up2k reporting to reduce overhead
if "x-up2k-subc" in self.headers:
sc_ofs = int(self.headers["x-up2k-subc"])
chash = chashes[0]
u2sc = self.conn.hsrv.u2sc
try:
sc_pofs, hasher = u2sc[chash]
if not sc_ofs:
t = "client restarted the chunk; forgetting subchunk offset %d"
self.log(t % (sc_pofs,))
raise Exception()
except:
sc_pofs = 0
hasher = hashlib.sha512()
et = "subchunk protocol error; resetting chunk "
if sc_pofs != sc_ofs:
u2sc.pop(chash, None)
t = "%s[%s]: the expected resume-point was %d, not %d"
raise Pebkac(400, t % (et, chash, sc_pofs, sc_ofs))
if len(cstarts) > 1:
u2sc.pop(chash, None)
t = "%s[%s]: only a single subchunk can be uploaded in one request; you are sending %d chunks"
raise Pebkac(400, t % (et, chash, len(cstarts)))
csize = min(chunksize, fsize - cstart0[0])
cstart0[0] += sc_ofs # also sets cstarts[0][0]
sc_next_ofs = sc_ofs + postsize
if sc_next_ofs > csize:
u2sc.pop(chash, None)
t = "%s[%s]: subchunk offset (%d) plus postsize (%d) exceeds chunksize (%d)"
raise Pebkac(400, t % (et, chash, sc_ofs, postsize, csize))
else:
final_subchunk = sc_next_ofs == csize
t = "subchunk %s %d:%d/%d %s"
zs = "END" if final_subchunk else ""
self.log(t % (chash[:15], sc_ofs, sc_next_ofs, csize, zs), 6)
if final_subchunk:
u2sc.pop(chash, None)
else:
u2sc[chash] = (sc_next_ofs, hasher)
else:
hasher = None
final_subchunk = True
try:
if self.args.nw:
path = os.devnull
@@ -2386,9 +2602,11 @@ class HttpCli(object):
reader = read_socket(
self.sr, self.args.s_rd_sz, min(remains, chunksize)
)
post_sz, _, sha_b64 = hashcopy(reader, f, self.args.s_wr_slp)
post_sz, _, sha_b64 = hashcopy(
reader, f, hasher, 0, self.args.s_wr_slp
)
if sha_b64 != chash:
if sha_b64 != chash and final_subchunk:
try:
self.bakflip(
f, path, cstart[0], post_sz, chash, sha_b64, vfs.flags
@@ -2420,7 +2638,8 @@ class HttpCli(object):
# be quick to keep the tcp winsize scale;
# if we can't confirm rn then that's fine
written.append(chash)
if final_subchunk:
written.append(chash)
now = time.time()
if now - treport < 1:
continue
@@ -2445,6 +2664,7 @@ class HttpCli(object):
except:
# maybe busted handle (eg. disk went full)
f.close()
chashes = [] # exception flag
raise
finally:
if locked:
@@ -2453,9 +2673,11 @@ class HttpCli(object):
num_left, t = x.get()
if num_left < 0:
self.loud_reply(t, status=500)
return False
t = "got %d more chunks, %d left"
self.log(t % (len(written), num_left), 6)
if chashes: # kills exception bubbling otherwise
return False
else:
t = "got %d more chunks, %d left"
self.log(t % (len(written), num_left), 6)
if num_left < 0:
raise Pebkac(500, "unconfirmed; see serverlog")
@@ -2813,7 +3035,7 @@ class HttpCli(object):
tabspath = os.path.join(fdir, tnam)
self.log("writing to {}".format(tabspath))
sz, sha_hex, sha_b64 = hashcopy(
p_data, f, self.args.s_wr_slp, max_sz
p_data, f, None, max_sz, self.args.s_wr_slp
)
if sz == 0:
raise Pebkac(400, "empty files in post")
@@ -3145,7 +3367,7 @@ class HttpCli(object):
wunlink(self.log, fp, vfs.flags)
with open(fsenc(fp), "wb", self.args.iobuf) as f:
sz, sha512, _ = hashcopy(p_data, f, self.args.s_wr_slp)
sz, sha512, _ = hashcopy(p_data, f, None, 0, self.args.s_wr_slp)
if lim:
lim.nup(self.ip)
@@ -3206,26 +3428,29 @@ class HttpCli(object):
self.reply(response.encode("utf-8"))
return True
def _chk_lastmod(self, file_ts: int) -> tuple[str, bool]:
def _chk_lastmod(self, file_ts: int) -> tuple[str, bool, bool]:
# ret: lastmod, do_send, can_range
file_lastmod = formatdate(file_ts)
cli_lastmod = self.headers.get("if-modified-since")
if cli_lastmod:
try:
# some browser append "; length=573"
cli_lastmod = cli_lastmod.split(";")[0].strip()
cli_dt = parsedate(cli_lastmod)
assert cli_dt # !rm
cli_ts = calendar.timegm(cli_dt)
return file_lastmod, int(file_ts) > int(cli_ts)
except Exception as ex:
self.log(
"lastmod {}\nremote: [{}]\n local: [{}]".format(
repr(ex), cli_lastmod, file_lastmod
)
)
return file_lastmod, file_lastmod != cli_lastmod
c_ifrange = self.headers.get("if-range")
c_lastmod = self.headers.get("if-modified-since")
return file_lastmod, True
if not c_ifrange and not c_lastmod:
return file_lastmod, True, True
if c_ifrange and c_ifrange != file_lastmod:
t = "sending entire file due to If-Range; cli(%s) file(%s)"
self.log(t % (c_ifrange, file_lastmod), 6)
return file_lastmod, True, False
do_send = c_lastmod != file_lastmod
if do_send and c_lastmod:
t = "sending body due to If-Modified-Since cli(%s) file(%s)"
self.log(t % (c_lastmod, file_lastmod), 6)
elif not do_send and self.no304():
do_send = True
self.log("sending body due to no304")
return file_lastmod, do_send, True
def _use_dirkey(self, vn: VFS, ap: str) -> bool:
if self.can_read or not self.can_get:
@@ -3375,7 +3600,7 @@ class HttpCli(object):
# if-modified
if file_ts > 0:
file_lastmod, do_send = self._chk_lastmod(int(file_ts))
file_lastmod, do_send, _ = self._chk_lastmod(int(file_ts))
self.out_headers["Last-Modified"] = file_lastmod
if not do_send:
status = 304
@@ -3537,7 +3762,7 @@ class HttpCli(object):
#
# if-modified
file_lastmod, do_send = self._chk_lastmod(int(file_ts))
file_lastmod, do_send, can_range = self._chk_lastmod(int(file_ts))
self.out_headers["Last-Modified"] = file_lastmod
if not do_send:
status = 304
@@ -3581,7 +3806,14 @@ class HttpCli(object):
# let's not support 206 with compression
# and multirange / multipart is also not-impl (mostly because calculating contentlength is a pain)
if do_send and not is_compressed and hrange and file_sz and "," not in hrange:
if (
do_send
and not is_compressed
and hrange
and can_range
and file_sz
and "," not in hrange
):
try:
if not hrange.lower().startswith("bytes"):
raise Exception()
@@ -4052,7 +4284,7 @@ class HttpCli(object):
sz_md = len(lead) + len(fullfile)
file_ts = int(max(ts_md, self.E.t0))
file_lastmod, do_send = self._chk_lastmod(file_ts)
file_lastmod, do_send, _ = self._chk_lastmod(file_ts)
self.out_headers["Last-Modified"] = file_lastmod
self.out_headers.update(NO_CACHE)
status = 200 if do_send else 304
@@ -4239,7 +4471,7 @@ class HttpCli(object):
rvol=rvol,
wvol=wvol,
avol=avol,
in_shr=self.args.shr and self.vpath.startswith(self.args.shr[1:]),
in_shr=self.args.shr and self.vpath.startswith(self.args.shr1),
vstate=vstate,
ups=ups,
scanning=vs["scanning"],
@@ -4249,7 +4481,9 @@ class HttpCli(object):
dbwt=vs["dbwt"],
url_suf=suf,
k304=self.k304(),
no304=self.no304(),
k304vis=self.args.k304 > 0,
no304vis=self.args.no304 > 0,
ver=S_VERSION if self.args.ver else "",
chpw=self.args.chpw and self.uname != "*",
ahttps="" if self.is_https else "https://" + self.host + self.req,
@@ -4257,29 +4491,21 @@ class HttpCli(object):
self.reply(html.encode("utf-8"))
return True
def set_k304(self) -> bool:
v = self.uparam["k304"].lower()
if v in "yn":
dur = 86400 * 299
else:
dur = 0
v = "x"
ck = gencookie("k304", v, self.args.R, False, dur)
self.out_headerlist.append(("Set-Cookie", ck))
self.redirect("", "?h#cc")
return True
def setck(self) -> bool:
k, v = self.uparam["setck"].split("=", 1)
t = 0 if v == "" else 86400 * 299
t = 0 if v in ("", "x") else 86400 * 299
ck = gencookie(k, v, self.args.R, False, t)
self.out_headerlist.append(("Set-Cookie", ck))
self.reply(b"o7\n")
if "cc" in self.ouparam:
self.redirect("", "?h#cc")
else:
self.reply(b"o7\n")
return True
def set_cfg_reset(self) -> bool:
for k in ("k304", "js", "idxh", "dots", "cppwd", "cppws"):
for k in ALL_COOKIES:
if k not in self.cookies:
continue
cookie = gencookie(k, "x", self.args.R, False)
self.out_headerlist.append(("Set-Cookie", cookie))
@@ -4309,7 +4535,7 @@ class HttpCli(object):
t = t.format(self.args.SR)
qv = quotep(self.vpaths) + self.ourlq()
in_shr = self.args.shr and self.vpath.startswith(self.args.shr[1:])
in_shr = self.args.shr and self.vpath.startswith(self.args.shr1)
html = self.j2s("splash", this=self, qvpath=qv, in_shr=in_shr, msg=t)
self.reply(html.encode("utf-8"), status=rc)
return True
@@ -4472,6 +4698,11 @@ class HttpCli(object):
lm = "ups [{}]".format(filt)
self.log(lm)
if self.args.shr and self.vpath.startswith(self.args.shr1):
shr_dbv, shr_vrem = self.vn.get_dbv(self.rem)
else:
shr_dbv = None
ret: list[dict[str, Any]] = []
t0 = time.time()
lim = time.time() - self.args.unpost
@@ -4492,7 +4723,12 @@ class HttpCli(object):
else:
allvols = list(self.asrv.vfs.all_vols.values())
allvols = [x for x in allvols if "e2d" in x.flags]
allvols = [
x
for x in allvols
if "e2d" in x.flags
and ("*" in x.axs.uwrite or self.uname in x.axs.uwrite or x == shr_dbv)
]
for vol in allvols:
cur = idx.get_cur(vol)
@@ -4542,6 +4778,16 @@ class HttpCli(object):
ret = ret[:2000]
if shr_dbv:
# translate vpaths from share-target to share-url
# to satisfy access checks
assert shr_vrem.split # type: ignore # !rm
vp_shr, vp_vfs = vroots(self.vpath, vjoin(shr_dbv.vpath, shr_vrem))
for v in ret:
vp = v["vp"]
if vp.startswith(vp_vfs):
v["vp"] = vp_shr + vp[len(vp_vfs) :]
if self.is_vproxied:
for v in ret:
v["vp"] = self.args.SR + v["vp"]
@@ -4671,7 +4917,7 @@ class HttpCli(object):
if m:
raise Pebkac(400, "sharekey has illegal character [%s]" % (m[1],))
if vp.startswith(self.args.shr[1:]):
if vp.startswith(self.args.shr1):
raise Pebkac(400, "yo dawg...")
cur = idx.get_shr()
@@ -5055,7 +5301,7 @@ class HttpCli(object):
self.log("#wow #whoa")
if not self.args.nid:
free, total = get_df(abspath)
free, total, _ = get_df(abspath, False)
if total is not None:
h1 = humansize(free or 0)
h2 = humansize(total)

View File

@@ -1,6 +1,7 @@
# coding: utf-8
from __future__ import print_function, unicode_literals
import hashlib
import math
import os
import re
@@ -144,6 +145,7 @@ class HttpSrv(object):
self.t_periodic: Optional[threading.Thread] = None
self.u2fh = FHC()
self.u2sc: dict[str, tuple[int, "hashlib._Hash"]] = {}
self.pipes = CachedDict(0.2)
self.metrics = Metrics(self)
self.nreq = 0

View File

@@ -128,7 +128,7 @@ class Metrics(object):
addbh("cpp_disk_size_bytes", "total HDD size of volume")
addbh("cpp_disk_free_bytes", "free HDD space in volume")
for vpath, vol in allvols:
free, total = get_df(vol.realpath)
free, total, _ = get_df(vol.realpath, False)
if free is None or total is None:
continue

View File

@@ -223,13 +223,14 @@ class SvcHub(object):
args.chpw_no = noch
if args.ipu:
iu, nm = load_ipu(self.log, args.ipu)
iu, nm = load_ipu(self.log, args.ipu, True)
setattr(args, "ipu_iu", iu)
setattr(args, "ipu_nm", nm)
if not self.args.no_ses:
self.setup_session_db()
args.shr1 = ""
if args.shr:
self.setup_share_db()
@@ -378,6 +379,14 @@ class SvcHub(object):
self.broker = Broker(self)
# create netmaps early to avoid firewall gaps,
# but the mutex blocks multiprocessing startup
for zs in "ipu_iu ftp_ipa_nm tftp_ipa_nm".split():
try:
getattr(args, zs).mutex = threading.Lock()
except:
pass
def setup_session_db(self) -> None:
if not HAVE_SQLITE3:
self.args.no_ses = True
@@ -452,6 +461,7 @@ class SvcHub(object):
raise Exception(t)
al.shr = "/%s/" % (al.shr,)
al.shr1 = al.shr[1:]
create = True
modified = False
@@ -761,8 +771,8 @@ class SvcHub(object):
al.idp_h_grp = al.idp_h_grp.lower()
al.idp_h_key = al.idp_h_key.lower()
al.ftp_ipa_nm = build_netmap(al.ftp_ipa or al.ipa)
al.tftp_ipa_nm = build_netmap(al.tftp_ipa or al.ipa)
al.ftp_ipa_nm = build_netmap(al.ftp_ipa or al.ipa, True)
al.tftp_ipa_nm = build_netmap(al.tftp_ipa or al.ipa, True)
mte = ODict.fromkeys(DEF_MTE.split(","), True)
al.mte = odfusion(mte, al.mte)
@@ -809,6 +819,24 @@ class SvcHub(object):
if len(al.tcolor) == 3: # fc5 => ffcc55
al.tcolor = "".join([x * 2 for x in al.tcolor])
zs = al.u2sz
zsl = zs.split(",")
if len(zsl) not in (1, 3):
t = "invalid --u2sz; must be either one number, or a comma-separated list of three numbers (min,default,max)"
raise Exception(t)
if len(zsl) < 3:
zsl = ["1", zs, zs]
zi2 = 1
for zs in zsl:
zi = int(zs)
# arbitrary constraint (anything above 2 GiB is probably unintended)
if zi < 1 or zi > 2047:
raise Exception("invalid --u2sz; minimum is 1, max is 2047")
if zi < zi2:
raise Exception("invalid --u2sz; values must be equal or ascending")
zi2 = zi
al.u2sz = ",".join(zsl)
return True
def _ipa2re(self, txt) -> Optional[re.Pattern]:

View File

@@ -95,7 +95,7 @@ class U2idx(object):
uv: list[Union[str, int]] = [wark[:16], wark]
try:
return self.run_query(uname, vols, uq, uv, False, 99999)[0]
return self.run_query(uname, vols, uq, uv, False, True, 99999)[0]
except:
raise Pebkac(500, min_ex())
@@ -301,7 +301,7 @@ class U2idx(object):
q += " lower({}) {} ? ) ".format(field, oper)
try:
return self.run_query(uname, vols, q, va, have_mt, lim)
return self.run_query(uname, vols, q, va, have_mt, True, lim)
except Exception as ex:
raise Pebkac(500, repr(ex))
@@ -312,6 +312,7 @@ class U2idx(object):
uq: str,
uv: list[Union[str, int]],
have_mt: bool,
sort: bool,
lim: int,
) -> tuple[list[dict[str, Any]], list[str], bool]:
if self.args.srch_dbg:
@@ -458,7 +459,8 @@ class U2idx(object):
done_flag.append(True)
self.active_id = ""
ret.sort(key=itemgetter("rp"))
if sort:
ret.sort(key=itemgetter("rp"))
return ret, list(taglist.keys()), lim < 0 and not clamped

View File

@@ -20,7 +20,7 @@ from copy import deepcopy
from queue import Queue
from .__init__ import ANYWIN, PY2, TYPE_CHECKING, WINDOWS, E
from .authsrv import LEELOO_DALLAS, SSEELOG, VFS, AuthSrv
from .authsrv import LEELOO_DALLAS, SEESLOG, VFS, AuthSrv
from .bos import bos
from .cfg import vf_bmap, vf_cmap, vf_vmap
from .fsutil import Fstab
@@ -2891,9 +2891,6 @@ class Up2k(object):
"user": cj["user"],
"addr": ip,
"at": at,
"hash": [],
"need": [],
"busy": {},
}
for k in ["life"]:
if k in cj:
@@ -2927,17 +2924,20 @@ class Up2k(object):
hashes2, st = self._hashlist_from_file(orig_ap)
wark2 = up2k_wark_from_hashlist(self.salt, st.st_size, hashes2)
if dwark != wark2:
t = "will not dedup (fs index desync): fs=%s, db=%s, file: %s"
self.log(t % (wark2, dwark, orig_ap))
t = "will not dedup (fs index desync): fs=%s, db=%s, file: %s\n%s"
self.log(t % (wark2, dwark, orig_ap, rj))
lost.append(dupe[3:])
continue
data_ok = True
job = rj
break
if job and wark in reg:
# self.log("pop " + wark + " " + job["name"] + " handle_json db", 4)
del reg[wark]
if job:
if wark in reg:
del reg[wark]
job["hash"] = job["need"] = []
job["done"] = True
job["busy"] = {}
if lost:
c2 = None
@@ -2966,7 +2966,7 @@ class Up2k(object):
path = djoin(rj["ptop"], rj["prel"], fn)
try:
st = bos.stat(path)
if st.st_size > 0 or not rj["need"]:
if st.st_size > 0 or "done" in rj:
# upload completed or both present
break
except:
@@ -2980,13 +2980,13 @@ class Up2k(object):
inc_ap = djoin(cj["ptop"], cj["prel"], cj["name"])
orig_ap = djoin(rj["ptop"], rj["prel"], rj["name"])
if self.args.nw or n4g or not st:
if self.args.nw or n4g or not st or "done" not in rj:
pass
elif st.st_size != rj["size"]:
t = "will not dedup (fs index desync): {}, size fs={} db={}, mtime fs={} db={}, file: {}"
t = "will not dedup (fs index desync): {}, size fs={} db={}, mtime fs={} db={}, file: {}\n{}"
t = t.format(
wark, st.st_size, rj["size"], st.st_mtime, rj["lmod"], path
wark, st.st_size, rj["size"], st.st_mtime, rj["lmod"], path, rj
)
self.log(t)
del reg[wark]
@@ -2996,8 +2996,8 @@ class Up2k(object):
hashes2, _ = self._hashlist_from_file(orig_ap)
wark2 = up2k_wark_from_hashlist(self.salt, st.st_size, hashes2)
if wark != wark2:
t = "will not dedup (fs index desync): fs=%s, idx=%s, file: %s"
self.log(t % (wark2, wark, orig_ap))
t = "will not dedup (fs index desync): fs=%s, idx=%s, file: %s\n%s"
self.log(t % (wark2, wark, orig_ap, rj))
del reg[wark]
if job or wark in reg:
@@ -3012,7 +3012,7 @@ class Up2k(object):
dst = djoin(cj["ptop"], cj["prel"], cj["name"])
vsrc = djoin(job["vtop"], job["prel"], job["name"])
vsrc = vsrc.replace("\\", "/") # just for prints anyways
if job["need"]:
if "done" not in job:
self.log("unfinished:\n {0}\n {1}".format(src, dst))
err = "partial upload exists at a different location; please resume uploading here instead:\n"
err += "/" + quotep(vsrc) + " "
@@ -3373,14 +3373,14 @@ class Up2k(object):
def handle_chunks(
self, ptop: str, wark: str, chashes: list[str]
) -> tuple[list[str], int, list[list[int]], str, float, bool]:
) -> tuple[list[str], int, list[list[int]], str, float, int, bool]:
with self.mutex, self.reg_mutex:
self.db_act = self.vol_act[ptop] = time.time()
job = self.registry[ptop].get(wark)
if not job:
known = " ".join([x for x in self.registry[ptop].keys()])
self.log("unknown wark [{}], known: {}".format(wark, known))
raise Pebkac(400, "unknown wark" + SSEELOG)
raise Pebkac(400, "unknown wark" + SEESLOG)
if "t0c" not in job:
job["t0c"] = time.time()
@@ -3396,7 +3396,7 @@ class Up2k(object):
try:
nchunk = uniq.index(chashes[0])
except:
raise Pebkac(400, "unknown chunk0 [%s]" % (chashes[0]))
raise Pebkac(400, "unknown chunk0 [%s]" % (chashes[0],))
expanded = [chashes[0]]
for prefix in chashes[1:]:
nchunk += 1
@@ -3431,7 +3431,7 @@ class Up2k(object):
for chash in chashes:
nchunk = [n for n, v in enumerate(job["hash"]) if v == chash]
if not nchunk:
raise Pebkac(400, "unknown chunk %s" % (chash))
raise Pebkac(400, "unknown chunk %s" % (chash,))
ofs = [chunksize * x for x in nchunk]
coffsets.append(ofs)
@@ -3456,7 +3456,7 @@ class Up2k(object):
job["poke"] = time.time()
return chashes, chunksize, coffsets, path, job["lmod"], job["sprs"]
return chashes, chunksize, coffsets, path, job["lmod"], job["size"], job["sprs"]
def fast_confirm_chunks(
self, ptop: str, wark: str, chashes: list[str]
@@ -3498,6 +3498,7 @@ class Up2k(object):
for chash in written:
job["need"].remove(chash)
except Exception as ex:
# dead tcp connections can get here by timeout (OK)
return -2, "confirm_chunk, chash(%s) %r" % (chash, ex) # type: ignore
ret = len(job["need"])
@@ -3525,11 +3526,13 @@ class Up2k(object):
src = djoin(pdir, job["tnam"])
dst = djoin(pdir, job["name"])
except Exception as ex:
raise Pebkac(500, "finish_upload, wark, " + repr(ex))
self.log(min_ex(), 1)
raise Pebkac(500, "finish_upload, wark, %r%s" % (ex, SEESLOG))
if job["need"]:
t = "finish_upload {} with remaining chunks {}"
raise Pebkac(500, t.format(wark, job["need"]))
self.log(min_ex(), 1)
t = "finish_upload %s with remaining chunks %s%s"
raise Pebkac(500, t % (wark, job["need"], SEESLOG))
upt = job.get("at") or time.time()
vflags = self.flags[ptop]
@@ -3904,11 +3907,9 @@ class Up2k(object):
if unpost:
raise Pebkac(400, "cannot unpost folders")
elif stat.S_ISLNK(st.st_mode) or stat.S_ISREG(st.st_mode):
dbv, vrem = self.asrv.vfs.get(vpath, uname, *permsets[0])
dbv, vrem = dbv.get_dbv(vrem)
voldir = vsplit(vrem)[0]
voldir = vsplit(rem)[0]
vpath_dir = vsplit(vpath)[0]
g = [(dbv, voldir, vpath_dir, adir, [(fn, 0)], [], {})] # type: ignore
g = [(vn, voldir, vpath_dir, adir, [(fn, 0)], [], {})] # type: ignore
else:
self.log("rm: skip type-{:x} file [{}]".format(st.st_mode, atop))
return 0, [], []
@@ -3935,7 +3936,10 @@ class Up2k(object):
volpath = ("%s/%s" % (vrem, fn)).strip("/")
vpath = ("%s/%s" % (dbv.vpath, volpath)).strip("/")
self.log("rm %s\n %s" % (vpath, abspath))
_ = dbv.get(volpath, uname, *permsets[0])
if not unpost:
# recursion-only sanchk
_ = dbv.get(volpath, uname, *permsets[0])
if xbd:
if not runhook(
self.log,
@@ -4054,7 +4058,9 @@ class Up2k(object):
self.db_act = self.vol_act[dbv.realpath] = time.time()
svpf = "/".join(x for x in [dbv.vpath, vrem, fn[0]] if x)
if not svpf.startswith(svp + "/"): # assert
raise Pebkac(500, "mv: bug at {}, top {}".format(svpf, svp))
self.log(min_ex(), 1)
t = "mv: bug at %s, top %s%s"
raise Pebkac(500, t % (svpf, svp, SEESLOG))
dvpf = dvp + svpf[len(svp) :]
self._mv_file(uname, ip, svpf, dvpf, curs)
@@ -4069,7 +4075,9 @@ class Up2k(object):
for zsl in (rm_ok, rm_ng):
for ap in reversed(zsl):
if not ap.startswith(sabs):
raise Pebkac(500, "mv_d: bug at {}, top {}".format(ap, sabs))
self.log(min_ex(), 1)
t = "mv_d: bug at %s, top %s%s"
raise Pebkac(500, t % (ap, sabs, SEESLOG))
rem = ap[len(sabs) :].replace(os.sep, "/").lstrip("/")
vp = vjoin(dvp, rem)

View File

@@ -213,6 +213,9 @@ except:
ansi_re = re.compile("\033\\[[^mK]*[mK]")
BOS_SEP = ("%s" % (os.sep,)).encode("ascii")
surrogateescape.register_surrogateescape()
if WINDOWS and PY2:
FS_ENCODING = "utf-8"
@@ -666,13 +669,20 @@ class HLog(logging.Handler):
class NetMap(object):
def __init__(
self, ips: list[str], cidrs: list[str], keep_lo=False, strict_cidr=False
self,
ips: list[str],
cidrs: list[str],
keep_lo=False,
strict_cidr=False,
defer_mutex=False,
) -> None:
"""
ips: list of plain ipv4/ipv6 IPs, not cidr
cidrs: list of cidr-notation IPs (ip/prefix)
"""
self.mutex = threading.Lock()
# fails multiprocessing; defer assignment
self.mutex: Optional[threading.Lock] = None if defer_mutex else threading.Lock()
if "::" in ips:
ips = [x for x in ips if x != "::"] + list(
@@ -711,6 +721,9 @@ class NetMap(object):
try:
return self.cache[ip]
except:
# intentionally crash the calling thread if unset:
assert self.mutex # type: ignore # !rm
with self.mutex:
return self._map(ip)
@@ -2191,6 +2204,23 @@ def unquotep(txt: str) -> str:
return w8dec(unq2)
def vroots(vp1: str, vp2: str) -> tuple[str, str]:
"""
input("q/w/e/r","a/s/d/e/r") output("/q/w/","/a/s/d/")
"""
while vp1 and vp2:
zt1 = vp1.rsplit("/", 1) if "/" in vp1 else ("", vp1)
zt2 = vp2.rsplit("/", 1) if "/" in vp2 else ("", vp2)
if zt1[1] != zt2[1]:
break
vp1 = zt1[0]
vp2 = zt2[0]
return (
"/%s/" % (vp1,) if vp1 else "/",
"/%s/" % (vp2,) if vp2 else "/",
)
def vsplit(vpath: str) -> tuple[str, str]:
if "/" not in vpath:
return "", vpath
@@ -2492,23 +2522,28 @@ def wunlink(log: "NamedLogger", abspath: str, flags: dict[str, Any]) -> bool:
return _fs_mvrm(log, abspath, "", False, flags)
def get_df(abspath: str) -> tuple[Optional[int], Optional[int]]:
def get_df(abspath: str, prune: bool) -> tuple[Optional[int], Optional[int], str]:
try:
# some fuses misbehave
assert ctypes # type: ignore # !rm
ap = fsenc(abspath)
while prune and not os.path.isdir(ap) and BOS_SEP in ap:
# strip leafs until it hits an existing folder
ap = ap.rsplit(BOS_SEP, 1)[0]
if ANYWIN:
assert ctypes # type: ignore # !rm
abspath = fsdec(ap)
bfree = ctypes.c_ulonglong(0)
ctypes.windll.kernel32.GetDiskFreeSpaceExW( # type: ignore
ctypes.c_wchar_p(abspath), None, None, ctypes.pointer(bfree)
)
return (bfree.value, None)
return (bfree.value, None, "")
else:
sv = os.statvfs(fsenc(abspath))
sv = os.statvfs(ap)
free = sv.f_frsize * sv.f_bfree
total = sv.f_frsize * sv.f_blocks
return (free, total)
except:
return (None, None)
return (free, total, "")
except Exception as ex:
return (None, None, repr(ex))
if not ANYWIN and not MACOS:
@@ -2646,7 +2681,7 @@ def list_ips() -> list[str]:
return list(ret)
def build_netmap(csv: str):
def build_netmap(csv: str, defer_mutex: bool = False):
csv = csv.lower().strip()
if csv in ("any", "all", "no", ",", ""):
@@ -2681,10 +2716,12 @@ def build_netmap(csv: str):
cidrs.append(zs)
ips = [x.split("/")[0] for x in cidrs]
return NetMap(ips, cidrs, True)
return NetMap(ips, cidrs, True, False, defer_mutex)
def load_ipu(log: "RootLogger", ipus: list[str]) -> tuple[dict[str, str], NetMap]:
def load_ipu(
log: "RootLogger", ipus: list[str], defer_mutex: bool = False
) -> tuple[dict[str, str], NetMap]:
ip_u = {"": "*"}
cidr_u = {}
for ipu in ipus:
@@ -2701,7 +2738,7 @@ def load_ipu(log: "RootLogger", ipus: list[str]) -> tuple[dict[str, str], NetMap
cidr_u[cidr] = uname
ip_u[cip] = uname
try:
nm = NetMap(["::"], list(cidr_u.keys()), True, True)
nm = NetMap(["::"], list(cidr_u.keys()), True, True, defer_mutex)
except Exception as ex:
t = "failed to translate --ipu into netmap, probably due to invalid config: %r"
log("root", t % (ex,), 1)
@@ -2723,10 +2760,12 @@ def yieldfile(fn: str, bufsz: int) -> Generator[bytes, None, None]:
def hashcopy(
fin: Generator[bytes, None, None],
fout: Union[typing.BinaryIO, typing.IO[Any]],
slp: float = 0,
max_sz: int = 0,
hashobj: Optional["hashlib._Hash"],
max_sz: int,
slp: float,
) -> tuple[int, str, str]:
hashobj = hashlib.sha512()
if not hashobj:
hashobj = hashlib.sha512()
tlen = 0
for buf in fin:
tlen += len(buf)

View File

@@ -32,7 +32,7 @@ window.baguetteBox = (function () {
scrollCSS = ['', ''],
scrollTimer = 0,
re_i = /^[^?]+\.(a?png|avif|bmp|gif|heif|jpe?g|jfif|svg|webp)(\?|$)/i,
re_v = /^[^?]+\.(webm|mkv|mp4)(\?|$)/i,
re_v = /^[^?]+\.(webm|mkv|mp4|m4v)(\?|$)/i,
anims = ['slideIn', 'fadeIn', 'none'],
data = {}, // all galleries
imagesElements = [],

View File

@@ -1,7 +1,7 @@
"use strict";
var XHR = XMLHttpRequest,
img_re = /\.(a?png|avif|bmp|gif|heif|jpe?g|jfif|svg|webp|webm|mkv|mp4)(\?|$)/i;
img_re = /\.(a?png|avif|bmp|gif|heif|jpe?g|jfif|svg|webp|webm|mkv|mp4|m4v)(\?|$)/i;
var Ls = {
"eng": {
@@ -4510,9 +4510,12 @@ var fileman = (function () {
'<tr><td>perms</td><td class="sh_axs">',
];
for (var a = 0; a < perms.length; a++)
if (perms[a] != 'admin')
if (!has(['admin', 'move'], perms[a]))
html.push('<a href="#" class="tgl btn">' + perms[a] + '</a>');
if (has(perms, 'write'))
html.push('<a href="#" class="btn">write-only</a>');
html.push('</td></tr></div');
shui.innerHTML = html.join('\n');
@@ -4576,6 +4579,9 @@ var fileman = (function () {
function shspf() {
clmod(this, 'on', 't');
if (this.textContent == 'write-only')
for (var a = 0; a < pbtns.length; a++)
clmod(pbtns[a], 'on', pbtns[a].textContent == 'write');
}
clmod(pbtns[0], 'on', 1);
@@ -7380,6 +7386,9 @@ var treectl = (function () {
r.ls_cb = null;
fun();
}
if (window.have_shr && QS('#op_unpost.act') && (cdir.startsWith(SR + have_shr) || get_evpath().startsWith(SR + have_shr)))
goto('unpost');
}
r.chk_index_html = function (top, res) {
@@ -9115,7 +9124,7 @@ var unpost = (function () {
r.me = me;
}
var q = SR + '/?ups';
var q = get_evpath() + '?ups';
if (filt.value)
q += '&filter=' + uricom_enc(filt.value, true);

View File

@@ -129,11 +129,20 @@
{% if k304 or k304vis %}
{% if k304 %}
<li><a id="h" href="{{ r }}/?k304=n">disable k304</a> (currently enabled)
<li><a id="h" href="{{ r }}/?cc&setck=k304=n">disable k304</a> (currently enabled)
{%- else %}
<li><a id="i" href="{{ r }}/?k304=y" class="r">enable k304</a> (currently disabled)
<li><a id="i" href="{{ r }}/?cc&setck=k304=y" class="r">enable k304</a> (currently disabled)
{% endif %}
<blockquote id="j">enabling this will disconnect your client on every HTTP 304, which can prevent some buggy proxies from getting stuck (suddenly not loading pages), <em>but</em> it will also make things slower in general</blockquote></li>
<blockquote id="j">enabling k304 will disconnect your client on every HTTP 304, which can prevent some buggy proxies from getting stuck (suddenly not loading pages), <em>but</em> it will also make things slower in general</blockquote></li>
{% endif %}
{% if no304 or no304vis %}
{% if no304 %}
<li><a id="ab" href="{{ r }}/?cc&setck=no304=n">disable no304</a> (currently enabled)
{%- else %}
<li><a id="ac" href="{{ r }}/?cc&setck=no304=y" class="r">enable no304</a> (currently disabled)
{% endif %}
<blockquote id="ad">enabling no304 will disable all caching; try this if k304 wasn't enough. This will waste a huge amount of network traffic!</blockquote></li>
{% endif %}
<li><a id="k" href="{{ r }}/?reset" class="r" onclick="localStorage.clear();return true">reset client settings</a></li>

View File

@@ -34,6 +34,9 @@ var Ls = {
"ta2": "gjenta for å bekrefte nytt passord:",
"ta3": "fant en skrivefeil; vennligst prøv igjen",
"aa1": "innkommende:",
"ab1": "skru av no304",
"ac1": "skru på no304",
"ad1": "no304 stopper all bruk av cache. Hvis ikke k304 var nok, prøv denne. Vil mangedoble dataforbruk!",
},
"eng": {
"d2": "shows the state of all active threads",
@@ -80,6 +83,9 @@ var Ls = {
"ta2": "重复以确认新密码:",
"ta3": "发现拼写错误;请重试",
"aa1": "正在接收的文件:", //m
"ab1": "关闭 k304",
"ac1": "开启 k304",
"ad1": "启用 no304 将禁用所有缓存;如果 k304 不够,可以尝试此选项。这将消耗大量的网络流量!", //m
}
};

View File

@@ -73,8 +73,8 @@ html {
position: absolute;
height: 1px;
top: 1px;
right: 1%;
width: 99%;
right: 1px;
left: 1px;
animation: toastt var(--tmtime) steps(var(--tmstep)) forwards;
transform-origin: right;
}

View File

@@ -17,10 +17,14 @@ function goto_up2k() {
var up2k = null,
up2k_hooks = [],
hws = [],
hws_ok = 0,
hws_ng = false,
sha_js = WebAssembly ? 'hw' : 'ac', // ff53,c57,sa11
m = 'will use ' + sha_js + ' instead of native sha512 due to';
try {
if (sread('nosubtle') || window.nosubtle)
throw 'chickenbit';
var cf = crypto.subtle || crypto.webkitSubtle;
cf.digest('SHA-512', new Uint8Array(1)).then(
function (x) { console.log('sha-ok'); up2k = up2k_init(cf); },
@@ -242,7 +246,7 @@ function U2pvis(act, btns, uc, st) {
p = bd * 100.0 / sz,
nb = bd - bd0,
spd = nb / (td / 1000),
eta = (sz - bd) / spd;
eta = spd ? (sz - bd) / spd : 3599;
return [p, s2ms(eta), spd / (1024 * 1024)];
};
@@ -853,8 +857,13 @@ function up2k_init(subtle) {
setmsg(suggest_up2k, 'msg');
var u2szs = u2sz.split(','),
u2sz_min = parseInt(u2szs[0]),
u2sz_tgt = parseInt(u2szs[1]),
u2sz_max = parseInt(u2szs[2]);
var parallel_uploads = ebi('nthread').value = icfg_get('nthread', u2j),
stitch_tgt = ebi('u2szg').value = icfg_get('u2sz', u2sz.split(',')[1]),
stitch_tgt = ebi('u2szg').value = icfg_get('u2sz', u2sz_tgt),
uc = {},
fdom_ctr = 0,
biggest_file = 0;
@@ -1353,6 +1362,10 @@ function up2k_init(subtle) {
for (var a = 0; a < Math.min(navigator.hardwareConcurrency || 4, 16); a++)
hws.push(new Worker(SR + '/.cpr/w.hash.js?_=' + TS));
if (!subtle)
for (var a = 0; a < hws.length; a++)
hws[a].postMessage('nosubtle');
console.log(hws.length + " hashers");
}
@@ -1863,10 +1876,12 @@ function up2k_init(subtle) {
function chill(t) {
var now = Date.now();
if ((t.coolmul || 0) < 2 || now - t.cooldown < t.coolmul * 700)
if ((t.coolmul || 0) < 5 || now - t.cooldown < t.coolmul * 700)
t.coolmul = Math.min((t.coolmul || 0.5) * 2, 32);
t.cooldown = Math.max(t.cooldown || 1, Date.now() + t.coolmul * 1000);
var cd = now + 1000 * (t.coolmul + Math.random() * 4 + 2);
t.cooldown = Math.floor(Math.max(cd, t.cooldown || 1));
return t;
}
/////
@@ -1951,7 +1966,7 @@ function up2k_init(subtle) {
pvis.setab(t.n, nchunks);
pvis.move(t.n, 'bz');
if (hws.length && uc.hashw && (nchunks > 1 || document.visibilityState == 'hidden'))
if (hws.length && !hws_ng && uc.hashw && (nchunks > 1 || document.visibilityState == 'hidden'))
// resolving subtle.digest w/o worker takes 1sec on blur if the actx hack breaks
return wexec_hash(t, chunksize, nchunks);
@@ -2060,16 +2075,27 @@ function up2k_init(subtle) {
free = [],
busy = {},
nbusy = 0,
init = 0,
hashtab = {},
mem = (MOBILE ? 128 : 256) * 1024 * 1024;
if (!hws_ok)
init = setTimeout(function() {
hws_ng = true;
toast.warn(30, 'webworkers failed to start\n\nwill be a bit slower due to\nhashing on main-thread');
apop(st.busy.hash, t);
st.todo.hash.unshift(t);
exec_hash();
}, 5000);
for (var a = 0; a < hws.length; a++) {
var w = hws[a];
free.push(w);
w.onmessage = onmsg;
if (init)
w.postMessage('ping');
if (mem > 0)
free.push(w);
mem -= chunksize;
if (mem <= 0)
break;
}
function go_next() {
@@ -2099,6 +2125,12 @@ function up2k_init(subtle) {
d = d.data;
var k = d[0];
if (k == "pong")
if (++hws_ok == hws.length) {
clearTimeout(init);
go_next();
}
if (k == "panic")
return vis_exh(d[1], 'up2k.js', '', '', d[1]);
@@ -2161,7 +2193,8 @@ function up2k_init(subtle) {
tasker();
}
}
go_next();
if (!init)
go_next();
}
/////
@@ -2259,8 +2292,7 @@ function up2k_init(subtle) {
console.log('handshake onerror, retrying', t.name, t);
apop(st.busy.handshake, t);
st.todo.handshake.unshift(t);
t.cooldown = Date.now() + 5000 + Math.floor(Math.random() * 3000);
st.todo.handshake.unshift(chill(t));
t.keepalive = keepalive;
};
var orz = function (e) {
@@ -2273,8 +2305,7 @@ function up2k_init(subtle) {
}
catch (ex) {
apop(st.busy.handshake, t);
st.todo.handshake.unshift(t);
t.cooldown = Date.now() + 5000 + Math.floor(Math.random() * 3000);
st.todo.handshake.unshift(chill(t));
var txt = t.t_uploading ? L.u_ehsfin : t.srch ? L.u_ehssrch : L.u_ehsinit;
return toast.err(0, txt + '\n\n' + L.badreply + ':\n\n' + unpre(xhr.responseText));
}
@@ -2453,6 +2484,7 @@ function up2k_init(subtle) {
else {
pvis.seth(t.n, 1, "ERROR");
pvis.seth(t.n, 2, L.u_ehstmp, t);
apop(st.busy.handshake, t);
var err = "",
cls = "ERROR",
@@ -2466,7 +2498,6 @@ function up2k_init(subtle) {
var penalty = rsp.replace(/.*rate-limit /, "").split(' ')[0];
console.log("rate-limit: " + penalty);
t.cooldown = Date.now() + parseFloat(penalty) * 1000;
apop(st.busy.handshake, t);
st.todo.handshake.unshift(t);
return;
}
@@ -2489,8 +2520,6 @@ function up2k_init(subtle) {
cls = 'defer';
}
}
if (rsp.indexOf('server HDD is full') + 1)
return toast.err(0, L.u_ehsdf + "\n\n" + rsp.replace(/.*; /, ''));
if (err != "") {
if (!t.t_uploading)
@@ -2500,10 +2529,15 @@ function up2k_init(subtle) {
pvis.seth(t.n, 2, err);
pvis.move(t.n, 'ng');
apop(st.busy.handshake, t);
tasker();
return;
}
st.todo.handshake.unshift(chill(t));
if (rsp.indexOf('server HDD is full') + 1)
return toast.err(0, L.u_ehsdf + "\n\n" + rsp.replace(/.*; /, ''));
err = t.t_uploading ? L.u_ehsfin : t.srch ? L.u_ehssrch : L.u_ehsinit;
xhrchk(xhr, err + "\n\nfile: " + t.name + "\n\nerror ", "404, target folder not found", "warn", t);
}
@@ -2574,8 +2608,7 @@ function up2k_init(subtle) {
nparts = upt.nparts,
pcar = nparts[0],
pcdr = nparts[nparts.length - 1],
snpart = pcar == pcdr ? pcar : ('' + pcar + '~' + pcdr),
tries = 0;
maxsz = (u2sz_max > 1 ? u2sz_max : 2040) * 1024 * 1024;
if (t.done)
return console.log('done; skip chunk', t.name, t);
@@ -2595,6 +2628,30 @@ function up2k_init(subtle) {
if (cdr >= t.size)
cdr = t.size;
if (cdr - car <= maxsz)
return upload_sub(t, upt, pcar, pcdr, car, cdr, chunksize, car, []);
var car0 = car, subs = [];
while (car < cdr) {
subs.push([car, Math.min(cdr, car + maxsz)]);
car += maxsz;
}
upload_sub(t, upt, pcar, pcdr, 0, 0, chunksize, car0, subs);
}
function upload_sub(t, upt, pcar, pcdr, car, cdr, chunksize, car0, subs) {
var nparts = upt.nparts,
is_sub = subs.length;
if (is_sub) {
var x = subs.shift();
car = x[0];
cdr = x[1];
}
var snpart = is_sub ? ('' + pcar + '(' + (car-car0) +'+'+ (cdr-car)) :
pcar == pcdr ? pcar : ('' + pcar + '~' + pcdr);
var orz = function (xhr) {
st.bytes.inflight -= xhr.bsent;
var txt = unpre((xhr.response && xhr.response.err) || xhr.responseText);
@@ -2608,6 +2665,10 @@ function up2k_init(subtle) {
return;
}
if (xhr.status == 200) {
car = car0;
if (subs.length)
return upload_sub(t, upt, pcar, pcdr, 0, 0, chunksize, car0, subs);
var bdone = cdr - car;
for (var a = pcar; a <= pcdr; a++) {
pvis.prog(t, a, Math.min(bdone, chunksize));
@@ -2616,6 +2677,7 @@ function up2k_init(subtle) {
st.bytes.finished += cdr - car;
st.bytes.uploaded += cdr - car;
t.bytes_uploaded += cdr - car;
t.cooldown = t.coolmul = 0;
st.etac.u++;
st.etac.t++;
}
@@ -2674,7 +2736,7 @@ function up2k_init(subtle) {
toast.warn(9.98, L.u_cuerr.format(snpart, Math.ceil(t.size / chunksize), t.name), t);
t.nojoin = t.nojoin || t.postlist.length; // maybe rproxy postsize limit
console.log('chunkpit onerror,', ++tries, t.name, t);
console.log('chunkpit onerror,', t.name, t);
orz2(xhr);
};
@@ -2692,9 +2754,13 @@ function up2k_init(subtle) {
xhr.open('POST', t.purl, true);
xhr.setRequestHeader("X-Up2k-Hash", ctxt);
xhr.setRequestHeader("X-Up2k-Wark", t.wark);
if (is_sub)
xhr.setRequestHeader("X-Up2k-Subc", car - car0);
xhr.setRequestHeader("X-Up2k-Stat", "{0}/{1}/{2}/{3} {4}/{5} {6}".format(
pvis.ctr.ok, pvis.ctr.ng, pvis.ctr.bz, pvis.ctr.q, btot, btot - bfin,
st.eta.t.split(' ').pop()));
st.eta.t.indexOf('/s, ')+1 ? st.eta.t.split(' ').pop() : 'x'));
xhr.setRequestHeader('Content-Type', 'application/octet-stream');
if (xhr.overrideMimeType)
xhr.overrideMimeType('Content-Type', 'application/octet-stream');
@@ -2812,13 +2878,13 @@ function up2k_init(subtle) {
}
var read_u2sz = function () {
var el = ebi('u2szg'), n = parseInt(el.value), dv = u2sz.split(',');
var el = ebi('u2szg'), n = parseInt(el.value);
stitch_tgt = n = (
isNaN(n) ? dv[1] :
n < dv[0] ? dv[0] :
n > dv[2] ? dv[2] : n
isNaN(n) ? u2sz_tgt :
n < u2sz_min ? u2sz_min :
n > u2sz_max ? u2sz_max : n
);
if (n == dv[1]) sdrop('u2sz'); else swrite('u2sz', n);
if (n == u2sz_tgt) sdrop('u2sz'); else swrite('u2sz', n);
if (el.value != n) el.value = n;
};
ebi('u2szg').addEventListener('blur', read_u2sz);

View File

@@ -1527,21 +1527,26 @@ var toast = (function () {
if (sec)
te = setTimeout(r.hide, sec * 1000);
var tb = ebi('toastt');
if (same && delta < 1000 && tb) {
tb.style.animation = 'none';
tb.offsetHeight;
tb.style.animation = null;
if (same && delta < 1000) {
var tb = ebi('toastt');
if (tb) {
tb.style.animation = 'none';
tb.offsetHeight;
tb.style.animation = null;
}
return;
}
if (txt.indexOf('<body>') + 1)
txt = txt.slice(0, txt.indexOf('<')) + ' [...]';
setcvar('--tmtime', sec + 's');
setcvar('--tmstep', sec * 15);
obj.innerHTML = '<div id="toastt"></div><a href="#" id="toastc">x</a><div id="toastb">' + lf2br(txt) + '</div>';
var html = '';
if (sec) {
setcvar('--tmtime', sec + 's');
setcvar('--tmstep', sec * 15);
html += '<div id="toastt"></div>';
}
obj.innerHTML = html + '<a href="#" id="toastc">x</a><div id="toastb">' + lf2br(txt) + '</div>';
obj.className = cl;
sec += obj.offsetWidth;
obj.className += ' vis';

View File

@@ -20,6 +20,7 @@ catch (ex) {
function load_fb() {
subtle = null;
importScripts('deps/sha512.hw.js');
console.log('using fallback hasher');
}
@@ -29,6 +30,12 @@ var reader = null,
onmessage = (d) => {
if (d.data == 'nosubtle')
return load_fb();
if (d.data == 'ping')
return postMessage(['pong']);
if (busy)
return postMessage(["panic", 'worker got another task while busy']);

View File

@@ -1,3 +1,82 @@
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1018-2342 `v1.15.9` rss server
## 🧪 new features
* #109 [rss feed generator](https://github.com/9001/copyparty#rss-feeds) 7ffd805a
* monitor folders recursively with RSS readers
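a quick way to consume the feed, as a rough sketch; the `?rss` query and the server/folder below are assumptions based on the linked readme, not something pinned down by this diff:

```python
# sketch: poll a copyparty rss feed with the stdlib
# (server url, folder, and the `?rss` query are hypothetical)
import xml.etree.ElementTree as ET
from urllib.request import urlopen

FEED_URL = "https://example.com/music/?rss"

with urlopen(FEED_URL) as resp:
    tree = ET.parse(resp)

# rss is <rss><channel><item>... ; list each entry
for item in tree.iterfind(".//item"):
    print(item.findtext("title"), "->", item.findtext("link"))
```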
## 🩹 bugfixes
* #107 `--df` diskspace limits were incompatible with webdav 2a570bb4 (see sketch below)
* #108 up2k javascript crash (only affected the Chinese translation) a7e2a0c9
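the `--df` fix is visible in the `get_df` hunk further up: leafs are stripped off the path until an existing folder remains, and only then is free space measured. a standalone python rendition of that idea (a sketch, not the actual helper):

```python
import os
from typing import Optional

def free_space(abspath: str) -> Optional[int]:
    ap = abspath
    # strip leafs until an existing folder is hit,
    # like the pruning loop in get_df
    while not os.path.isdir(ap) and os.sep in ap:
        ap = ap.rsplit(os.sep, 1)[0]
    try:
        sv = os.statvfs(ap)  # unix branch; windows asks kernel32 instead
        return sv.f_frsize * sv.f_bfree
    except OSError:
        return None

# a PUT of /mnt/data/new/dir/file.bin can now be checked
# before the directories exist:
print(free_space("/mnt/data/new/dir/file.bin"))
```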
## 🔧 other changes
* up2k: detect buggy webworkers 5ca8f070
* up2k: improve upload retry/timeout logic a9b4436c (see sketch below)
* js: make handshake retries more aggressive
* u2c: reduce chunks timeout + ^
* main: reduce tcp timeout to 128sec (js is 42s)
* httpcli: less confusing log messages
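the retry changes lean on the exponential backoff in the `chill()` hunk above: the cooldown multiplier doubles up to a 32x cap, plus a few seconds of random jitter. a python sketch of that policy, with the js millisecond constants scaled to seconds:

```python
import random, time

def chill(t: dict) -> dict:
    # keep doubling the multiplier (capped at 32) while
    # retries arrive in quick succession, like up2k.js
    now = time.time()
    mul = t.get("coolmul", 0)
    if mul < 5 or now - t.get("cooldown", 0) < mul * 0.7:
        t["coolmul"] = mul = min((mul or 0.5) * 2, 32)
    cd = now + mul + random.random() * 4 + 2
    t["cooldown"] = max(cd, t.get("cooldown", 1))
    return t

task = {}
for n in range(4):
    chill(task)
    print("retry %d in %.1fs (mul=%g)" % (n, task["cooldown"] - time.time(), task["coolmul"]))
```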
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1016-2153 `v1.15.8` the sky is the limit
## 🧪 new features
* subchunks; avoid the Cloudflare filesize limit entirely fc8298c4 48147c07 (see sketch below)
* the previous max filesize was `383.9 GiB`; now only the sky is the limit
* if you're using another proxy with a more restrictive limit than Cloudflare's 100 MiB, for example 64 MiB, then `--u2sz 1,64,64`
* m4v videos can be played in the gallery ff0a71f2
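roughly how the subchunks work, going by the `upload_sub()` hunk: each hashed chunk is split into sub-POSTs no larger than the proxy limit, and every sub-POST announces its offset within the chunk via the `X-Up2k-Subc` header. a python sketch of the range-splitting (sizes are hypothetical):

```python
def subchunks(car: int, cdr: int, maxsz: int) -> list:
    # split the byte range [car, cdr) into sub-ranges
    # of at most maxsz bytes, like upload_sub() in up2k.js
    subs = []
    while car < cdr:
        subs.append((car, min(cdr, car + maxsz)))
        car += maxsz
    return subs

MiB = 1024 * 1024
# one 250 MiB chunk behind a hypothetical 96 MiB proxy cap
# becomes three sub-POSTs: 96 + 96 + 58 MiB
print(subchunks(0, 250 * MiB, 96 * MiB))
```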
## 🩹 bugfixes
* up2k: uploading duplicate files could initially fail (but would succeed after a few automatic retries) due to a TOCTOU race 114b71b7
* [u2c](https://github.com/9001/copyparty/blob/hovudstraum/bin/README.md#u2cpy) / commandline uploader:
* directory scanner got stuck if it found a FIFO cba1878b
* excessive number of FDs when uploading large files 65a2b6a2
* chunksize calculation; only affected files exactly 128 GiB large a2e037d6
* support filenames with newlines and invalid utf-8 b2770a20
* invalid utf-8 is replaced by `?` when such filenames hit the server
## 🔧 other changes
* don't show the toast countdown bar if duration is infinite 22dfc6ec
* chickenbit to disable the browser's built-in sha512 implementation and force the bundled wasm instead d715479e
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1013-2244 `v1.15.7` the 'a' in "ip address" stands for authentication
## 🧪 new features
* [cidr-based autologin](https://github.com/9001/copyparty#ip-auth) b7f9bf5a
* map a cidr ip-range to a username; anyone connecting from that ip-range will autologin as that user (see sketch below)
* thx to @byteturtle for the idea!
* [u2c](https://github.com/9001/copyparty/blob/hovudstraum/bin/README.md#u2cpy) / commandline uploader:
* option `--chs` to list individual chunk hashes cf1b7562
* fix progress indicator when resuming an upload 53ffd245
* up2k: verbose logging of detected/corrected bitflips ee628363
* *foreshadowing intensifies* (story still developing)
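a sketch of what the cidr-to-user lookup amounts to, using the stdlib `ipaddress` module; the `cidr=username` rule shape and the `*` (anonymous) fallthrough follow the `load_ipu` hunk above, but treat the exact `--ipu` syntax as an assumption to be checked against the readme:

```python
import ipaddress

# hypothetical autologin rules, cidr=username
rules = ["10.0.0.0/8=ed", "192.168.1.0/24=guest"]

nets = []
for r in rules:
    cidr, uname = r.split("=")
    nets.append((ipaddress.ip_network(cidr), uname))

def autologin(ip: str) -> str:
    # "*" is the anonymous fallthrough, like ip_u's default
    addr = ipaddress.ip_address(ip)
    for net, uname in nets:
        if addr in net:
            return uname
    return "*"

print(autologin("10.4.2.1"))    # -> ed
print(autologin("172.16.0.5"))  # -> *
```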
## 🩹 bugfixes
* up2k with database disabled / running without `-e2d` 705f598b
* respect `noforget` when loading snaps
* ...but actually forget deleted files otherwise
* snap-loader adds empty need/hash entries as necessary
## 🔧 other changes
* authed users can now unpost recent uploads of unauthed users from the same IP 22b58e31
* would have become problematic now that cidr-based autologin is a thing
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1011-2256 `v1.15.6` preadme

View File

@@ -58,7 +58,9 @@ currently up to date with [awesome-selfhosted](https://github.com/awesome-selfho
* [h5ai](#h5ai)
* [autoindex](#autoindex)
* [miniserve](#miniserve)
* [pingvin-share](#pingvin-share)
* [briefly considered](#briefly-considered)
* [notes](#notes)
# recommendations
@@ -106,6 +108,7 @@ some softwares not in the matrixes,
* [h5ai](#h5ai)
* [autoindex](#autoindex)
* [miniserve](#miniserve)
* [pingvin-share](#pingvin-share)
symbol legend,
* `█` = absolutely
@@ -426,6 +429,10 @@ symbol legend,
| gimme-that | python | █ mit | 4.8 MB |
| ass | ts | █ isc | • |
| linx | go | ░ gpl3 | 20 MB |
| h5ai | php | █ mit | • |
| autoindex | go | █ mpl2 | 11 MB |
| miniserve | rust | █ mit | 2 MB |
| pingvin-share | node | █ bsd2 | 487 MB |
* `size` = binary (if available) or installed size of program and its dependencies
* copyparty size is for the [standalone python](https://github.com/9001/copyparty/releases/latest/download/copyparty-sfx.py) file; the [windows exe](https://github.com/9001/copyparty/releases/latest/download/copyparty.exe) is **6 MiB**
@@ -719,7 +726,31 @@ symbol legend,
* 🔵 upload, tar/zip download, qr-code
* ✅ faster at loading huge folders
## [pingvin-share](https://github.com/stonith404/pingvin-share)
* node; linux (docker)
* mainly for uploads, not a general file server
* 🔵 uploads are segmented (avoids cloudflare size limit)
* 🔵 segments are written directly to target file (HDD-friendly)
* ⚠️ uploads not resumable after a browser or laptop crash
* ⚠️ uploads are not accelerated / integrity-checked
* ⚠️ across the atlantic, copyparty is 3x faster
* measured with chunksize 96 MiB; pingvin's default 10 MiB is much slower
* ⚠️ can't upload folders with subfolders
* ⚠️ no upload ETA
* 🔵 expiration times, shares, upload-undo
* ✅ config + user-registration gui
* ✅ built-in OpenID and LDAP support
* 💾 [IdP middleware](https://github.com/9001/copyparty#identity-providers) and config-files
* ✅ probably more than one person who understands the code
# briefly considered
* [pydio](https://github.com/pydio/cells): python/agpl3, looks great, fantastic ux -- but needs mariadb, systemwide install
* [gossa](https://github.com/pldubouilh/gossa): go/mit, minimalistic, basic file upload, text editor, mkdir and rename (no delete/move)
# notes
* high-latency connections (cross-atlantic uploads) can be accurately simulated with `tc qdisc add dev eth0 root netem delay 100ms`

View File

@@ -25,6 +25,7 @@ classifiers = [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: Jython",
"Programming Language :: Python :: Implementation :: PyPy",

View File

@@ -54,7 +54,7 @@ var tl_cpanel = {
"cc1": "other stuff:",
"h1": "disable k304", // TLNote: "j1" explains what k304 is
"i1": "enable k304",
"j1": "enabling this will disconnect your client on every HTTP 304, which can prevent some buggy proxies from getting stuck (suddenly not loading pages), <em>but</em> it will also make things slower in general",
"j1": "enabling k304 will disconnect your client on every HTTP 304, which can prevent some buggy proxies from getting stuck (suddenly not loading pages), <em>but</em> it will also make things slower in general",
"k1": "reset client settings",
"l1": "login for more:",
"m1": "welcome back,", // TLNote: "welcome back, USERNAME"
@@ -76,6 +76,9 @@ var tl_cpanel = {
"ta2": "repeat to confirm new password:",
"ta3": "found a typo; please try again",
"aa1": "incoming files:",
"ab1": "disable no304",
"ac1": "enable no304",
"ad1": "enabling no304 will disable all caching; try this if k304 wasn't enough. This will waste a huge amount of network traffic!",
},
};

View File

@@ -89,7 +89,7 @@ var tl_cpanel = {{
"cc1": "other stuff:",
"h1": "disable k304", // TLNote: "j1" explains what k304 is
"i1": "enable k304",
"j1": "enabling this will disconnect your client on every HTTP 304, which can prevent some buggy proxies from getting stuck (suddenly not loading pages), <em>but</em> it will also make things slower in general",
"j1": "enabling k304 will disconnect your client on every HTTP 304, which can prevent some buggy proxies from getting stuck (suddenly not loading pages), <em>but</em> it will also make things slower in general",
"k1": "reset client settings",
"l1": "login for more:",
"m1": "welcome back,", // TLNote: "welcome back, USERNAME"
@@ -111,6 +111,9 @@ var tl_cpanel = {{
"ta2": "repeat to confirm new password:",
"ta3": "found a typo; please try again",
"aa1": "incoming files:",
"ab1": "disable no304",
"ac1": "enable no304",
"ad1": "enabling no304 will disable all caching; try this if k304 wasn't enough. This will waste a huge amount of network traffic!",
}},
}};

View File

@@ -108,6 +108,7 @@ args = {
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: Jython",
"Programming Language :: Python :: Implementation :: PyPy",

View File

@@ -122,7 +122,7 @@ class Cfg(Namespace):
def __init__(self, a=None, v=None, c=None, **ka0):
ka = {}
ex = "chpw daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp early_ban ed emp exp force_js getmod grid gsel hardlink ih ihead magic hardlink_only nid nih no_acode no_athumb no_clone no_dav no_db_ip no_del no_dirsz no_dupe no_lifetime no_logues no_mv no_pipe no_poll no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw og og_no_head og_s_title q rand re_dirsz smb srch_dbg stats uqe vague_403 vc ver write_uplog xdev xlink xvol zs"
ex = "chpw daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp early_ban ed emp exp force_js getmod grid gsel hardlink ih ihead magic hardlink_only nid nih no_acode no_athumb no_clone no_dav no_db_ip no_del no_dirsz no_dupe no_lifetime no_logues no_mv no_pipe no_poll no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw og og_no_head og_s_title ohead q rand re_dirsz rss smb srch_dbg stats uqe vague_403 vc ver write_uplog xdev xlink xvol zs"
ka.update(**{k: False for k in ex.split()})
ex = "dedup dotpart dotsrch hook_v no_dhash no_fastboot no_fpool no_htp no_rescan no_sendfile no_ses no_snap no_up_list no_voldump re_dhash plain_ip"
@@ -137,7 +137,7 @@ class Cfg(Namespace):
ex = "au_vol mtab_age reg_cap s_thead s_tbody th_convt"
ka.update(**{k: 9 for k in ex.split()})
ex = "db_act k304 loris re_maxage rproxy rsp_jtr rsp_slp s_wr_slp snap_wri theme themes turbo"
ex = "db_act k304 loris no304 re_maxage rproxy rsp_jtr rsp_slp s_wr_slp snap_wri theme themes turbo"
ka.update(**{k: 0 for k in ex.split()})
ex = "ah_alg bname chpw_db doctitle df exit favico idp_h_usr ipa html_head lg_sbf log_fk md_sbf name og_desc og_site og_th og_title og_title_a og_title_v og_title_i shr tcolor textfiles unlist vname xff_src R RS SR"