Compare commits
82 Commits
The author and date columns were not captured; the 82 commit SHA1s, in the order listed:

a90dde94e1, 7dfbfc7227, b10843d051, 520ac8f4dc, 537a6e50e9, 2d0cbdf1a8, 5afb562aa3, db069c3d4a, fae40c7e2f, 0c43b592dc,
2ab8924e2d, 0e31cfa784, 8f7ffcf350, 9c8507a0fd, e9b2cab088, d3ccacccb1, df386c8fbc, 4d15dd6e17, 56a0499636, 10fc4768e8,
2b63d7d10d, 1f177528c1, fc3bbb70a3, ce3cab0295, c784e5285e, 2bf9055cae, 8aba5aed4f, 0ce7cf5e10, 96edcbccd7, 4603afb6de,
56317b00af, cacec9c1f3, 44ee07f0b2, 6a8d5e1731, d9962f65b3, 119e88d87b, 71d9e010d9, 5718caa957, efd8a32ed6, b22d700e16,
ccdacea0c4, 4bdcbc1cb5, 833c6cf2ec, dd6dbdd90a, 63013cc565, 912402364a, 159f51b12b, 7678a91b0e, b13899c63d, 3a0d882c5e,
cb81f0ad6d, 518bacf628, ca63b03e55, cecef88d6b, 7ffd805a03, a7e2a0c981, 2a570bb4ca, 5ca8f0706d, a9b4436cdc, 5f91999512,
9f000beeaf, ff0a71f212, 22dfc6ec24, 48147c079e, d715479ef6, fc8298c468, e94ca5dc91, 114b71b751, b2770a2087, cba1878bb2,
a2e037d6af, 65a2b6a223, 9ed799e803, c1c0ecca13, ee62836383, 705f598b1a, 414de88925, 53ffd245dd, cf1b756206, 22b58e31ef,
b7f9bf5a28, aba680b6c2
README.md — 61 lines changed
```diff
@@ -47,6 +47,7 @@ turn almost any device into a file server with resumable uploads/downloads using…
 * [file manager](#file-manager) - cut/paste, rename, and delete files/folders (if you have permission)
 * [shares](#shares) - share a file or folder by creating a temporary link
 * [batch rename](#batch-rename) - select some files and press `F2` to bring up the rename UI
+* [rss feeds](#rss-feeds) - monitor a folder with your RSS reader
 * [media player](#media-player) - plays almost every audio format there is
 * [audio equalizer](#audio-equalizer) - and [dynamic range compressor](https://en.wikipedia.org/wiki/Dynamic_range_compression)
 * [fix unreliable playback on android](#fix-unreliable-playback-on-android) - due to phone / app settings
@@ -80,6 +81,7 @@ turn almost any device into a file server with resumable uploads/downloads using…
 * [event hooks](#event-hooks) - trigger a program on uploads, renames etc ([examples](./bin/hooks/))
 * [upload events](#upload-events) - the older, more powerful approach ([examples](./bin/mtag/))
 * [handlers](#handlers) - redefine behavior with plugins ([examples](./bin/handlers/))
+* [ip auth](#ip-auth) - autologin based on IP range (CIDR)
 * [identity providers](#identity-providers) - replace copyparty passwords with oauth and such
 * [user-changeable passwords](#user-changeable-passwords) - if permitted, users can change their own passwords
 * [using the cloud as storage](#using-the-cloud-as-storage) - connecting to an aws s3 bucket and similar
@@ -218,7 +220,7 @@ also see [comparison to similar software](./docs/versus.md)
 * upload
   * ☑ basic: plain multipart, ie6 support
   * ☑ [up2k](#uploading): js, resumable, multithreaded
-    * **no filesize limit!** ...unless you use Cloudflare, then it's 383.9 GiB
+    * **no filesize limit!** even on Cloudflare
   * ☑ stash: simple PUT filedropper
   * ☑ filename randomizer
   * ☑ write-only folders
@@ -426,7 +428,7 @@ configuring accounts/volumes with arguments:
 
 permissions:
 * `r` (read): browse folder contents, download files, download as zip/tar, see filekeys/dirkeys
-* `w` (write): upload files, move files *into* this folder
+* `w` (write): upload files, move/copy files *into* this folder
 * `m` (move): move files/folders *from* this folder
 * `d` (delete): delete files/folders
 * `.` (dots): user can ask to show dotfiles in directory listings
@@ -506,7 +508,8 @@ the browser has the following hotkeys (always qwerty)
 * `ESC` close various things
 * `ctrl-K` delete selected files/folders
 * `ctrl-X` cut selected files/folders
-* `ctrl-V` paste
+* `ctrl-C` copy selected files/folders to clipboard
+* `ctrl-V` paste (move/copy)
 * `Y` download selected files
 * `F2` [rename](#batch-rename) selected file/folder
 * when a file/folder is selected (in not-grid-view):
@@ -575,6 +578,7 @@ click the `🌲` or pressing the `B` hotkey to toggle between breadcrumbs path…
 
 press `g` or `田` to toggle grid-view instead of the file listing and `t` toggles icons / thumbnails
 * can be made default globally with `--grid` or per-volume with volflag `grid`
+* enable by adding `?imgs` to a link, or disable with `?imgs=0`
 
 ![copyparty-thumbs-fs8](https://user-images.githubusercontent.com/241032/120926885-3d05ac00-c700-11eb-9e4d-511a08d957bd.png)
 
@@ -653,7 +657,7 @@ up2k has several advantages:
 * uploads resume if you reboot your browser or pc, just upload the same files again
 * server detects any corruption; the client reuploads affected chunks
 * the client doesn't upload anything that already exists on the server
-* no filesize limit unless imposed by a proxy, for example Cloudflare, which blocks uploads over 383.9 GiB
+* no filesize limit, even when a proxy limits the request size (for example Cloudflare)
 * much higher speeds than ftp/scp/tarpipe on some internet connections (mainly american ones) thanks to parallel connections
 * the last-modified timestamp of the file is preserved
@@ -689,6 +693,8 @@ note that since up2k has to read each file twice, `[🎈] bup` can *theoretically*…
 
 if you are resuming a massive upload and want to skip hashing the files which already finished, you can enable `turbo` in the `[⚙️] config` tab, but please read the tooltip on that button
 
+if the server is behind a proxy which imposes a request-size limit, you can configure up2k to sneak below the limit with server-option `--u2sz` (the default is 96 MiB to support Cloudflare)
+
 
 ### file-search
 
@@ -752,10 +758,11 @@ file selection: click somewhere on the line (not the link itself), then:
 * shift-click another line for range-select
 
 * cut: select some files and `ctrl-x`
+* copy: select some files and `ctrl-c`
 * paste: `ctrl-v` in another folder
 * rename: `F2`
 
-you can move files across browser tabs (cut in one tab, paste in another)
+you can copy/move files across browser tabs (cut/copy in one tab, paste in another)
 
 
 ## shares
@@ -842,6 +849,30 @@ or a mix of both:
 the metadata keys you can use in the format field are the ones in the file-browser table header (whatever is collected with `-mte` and `-mtp`)
 
 
+## rss feeds
+
+monitor a folder with your RSS reader , optionally recursive
+
+must be enabled per-volume with volflag `rss` or globally with `--rss`
+
+the feed includes itunes metadata for use with podcast readers such as [AntennaPod](https://antennapod.org/)
+
+a feed example: https://cd.ocv.me/a/d2/d22/?rss&fext=mp3
+
+url parameters:
+
+* `pw=hunter2` for password auth
+* `recursive` to also include subfolders
+* `title=foo` changes the feed title (default: folder name)
+* `fext=mp3,opus` only include mp3 and opus files (default: all)
+* `nf=30` only show the first 30 results (default: 250)
+* `sort=m` sort by mtime (file last-modified), newest first (default)
+  * `u` = upload-time; NOTE: non-uploaded files have upload-time `0`
+  * `n` = filename
+  * `a` = filesize
+  * uppercase = reverse-sort; `M` = oldest file first
+
+
 ## media player
 
 plays almost every audio format there is (if the server has FFmpeg installed for on-demand transcoding)
```
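As a concrete illustration of the new rss-feeds section above, the documented url parameters can be combined in a single feed link; this reuses the example feed URL from the section and is otherwise hypothetical:

```
https://cd.ocv.me/a/d2/d22/?rss&recursive&fext=mp3,opus&sort=u&nf=50
```

This would recurse into subfolders, include only mp3/opus files, sort by upload-time (newest first), and return at most 50 entries.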
````diff
@@ -1432,6 +1463,22 @@ redefine behavior with plugins ([examples](./bin/handlers/))
 replace 404 and 403 errors with something completely different (that's it for now)
 
 
+## ip auth
+
+autologin based on IP range (CIDR) , using the global-option `--ipu`
+
+for example, if everyone with an IP that starts with `192.168.123` should automatically log in as the user `spartacus`, then you can either specify `--ipu=192.168.123.0/24=spartacus` as a commandline option, or put this in a config file:
+
+```yaml
+[global]
+  ipu: 192.168.123.0/24=spartacus
+```
+
+repeat the option to map additional subnets
+
+**be careful with this one!** if you have a reverseproxy, then you definitely want to make sure you have [real-ip](#real-ip) configured correctly, and it's probably a good idea to nullmap the reverseproxy's IP just in case; so if your reverseproxy is sending requests from `172.24.27.9` then that would be `--ipu=172.24.27.9/32=`
+
+
 ## identity providers
 
 replace copyparty passwords with oauth and such
````
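A minimal sketch combining the two flags from the section above (the subnet-to-user mapping plus the recommended reverseproxy nullmap); the addresses are the ones from the documentation:

```
# auto-login 192.168.123.* as spartacus, but never auto-login the reverseproxy itself
copyparty --ipu=192.168.123.0/24=spartacus --ipu=172.24.27.9/32=
```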
```diff
@@ -1640,6 +1687,7 @@ scrape_configs:
 currently the following metrics are available,
 * `cpp_uptime_seconds` time since last copyparty restart
 * `cpp_boot_unixtime_seconds` same but as an absolute timestamp
+* `cpp_active_dl` number of active downloads
 * `cpp_http_conns` number of open http(s) connections
 * `cpp_http_reqs` number of http(s) requests handled
 * `cpp_sus_reqs` number of 403/422/malicious requests
@@ -1889,6 +1937,9 @@ quick summary of more eccentric web-browsers trying to view a directory index:
 | **ie4** and **netscape** 4.0 | can browse, upload with `?b=u`, auth with `&pw=wark` |
 | **ncsa mosaic** 2.7 | does not get a pass, [pic1](https://user-images.githubusercontent.com/241032/174189227-ae816026-cf6f-4be5-a26e-1b3b072c1b2f.png) - [pic2](https://user-images.githubusercontent.com/241032/174189225-5651c059-5152-46e9-ac26-7e98e497901b.png) |
 | **SerenityOS** (7e98457) | hits a page fault, works with `?b=u`, file upload not-impl |
+| **nintendo 3ds** | can browse, upload, view thumbnails (thx bnjmn) |
 
+<p align="center"><img src="https://github.com/user-attachments/assets/88deab3d-6cad-4017-8841-2f041472b853" /></p>
 
 
 # client examples
```
bin/hooks/README.md (filename inferred; the file header was lost in extraction)

```diff
@@ -2,7 +2,7 @@ standalone programs which are executed by copyparty when an event happens (upload…
 
 these programs either take zero arguments, or a filepath (the affected file), or a json message with filepath + additional info
 
-run copyparty with `--help-hooks` for usage details / hook type explanations (xm/xbu/xau/xiu/xbr/xar/xbd/xad/xban)
+run copyparty with `--help-hooks` for usage details / hook type explanations (xm/xbu/xau/xiu/xbc/xac/xbr/xar/xbd/xad/xban)
 
 > **note:** in addition to event hooks (the stuff described here), copyparty has another api to run your programs/scripts while providing way more information such as audio tags / video codecs / etc and optionally daisychaining data between scripts in a processing pipeline; if that's what you want then see [mtp plugins](../mtag/) instead
```
bin/partyfuse.py (filename inferred; the file header was lost in extraction)

```diff
@@ -393,7 +393,8 @@ class Gateway(object):
         if r.status != 200:
             self.closeconn()
             info("http error %s reading dir %r", r.status, web_path)
-            raise FuseOSError(errno.ENOENT)
+            err = errno.ENOENT if r.status == 404 else errno.EIO
+            raise FuseOSError(err)
 
         ctype = r.getheader("Content-Type", "")
         if ctype == "application/json":
@@ -1128,7 +1129,7 @@ def main():
 
     # dircache is always a boost,
     # only want to disable it for tests etc,
-    cdn = 9  # max num dirs; 0=disable
+    cdn = 24  # max num dirs; keep larger than max dir depth; 0=disable
     cds = 1  # numsec until an entry goes stale
 
     where = "local directory"
```
bin/u2c.py — 201 lines changed
```diff
@@ -1,8 +1,8 @@
 #!/usr/bin/env python3
 from __future__ import print_function, unicode_literals
 
-S_VERSION = "2.1"
-S_BUILD_DT = "2024-09-23"
+S_VERSION = "2.6"
+S_BUILD_DT = "2024-11-10"
 
 """
 u2c.py: upload to copyparty
@@ -62,6 +62,9 @@ else:
     unicode = str
 
 
+WTF8 = "replace" if PY2 else "surrogateescape"
+
 VT100 = platform.system() != "Windows"
 
 
@@ -151,6 +154,7 @@ class HCli(object):
         self.tls = tls
         self.verify = ar.te or not ar.td
         self.conns = []
+        self.hconns = []
         if tls:
             import ssl
@@ -170,7 +174,7 @@ class HCli(object):
             "User-Agent": "u2c/%s" % (S_VERSION,),
         }
 
-    def _connect(self):
+    def _connect(self, timeout):
         args = {}
         if PY37:
             args["blocksize"] = 1048576
@@ -182,9 +186,11 @@ class HCli(object):
         if self.ctx:
             args = {"context": self.ctx}
 
-        return C(self.addr, self.port, timeout=999, **args)
+        return C(self.addr, self.port, timeout=timeout, **args)
 
     def req(self, meth, vpath, hdrs, body=None, ctype=None):
+        now = time.time()
+
         hdrs.update(self.base_hdrs)
         if self.ar.a:
             hdrs["PW"] = self.ar.a
@@ -195,7 +201,11 @@ class HCli(object):
             0 if not body else body.len if hasattr(body, "len") else len(body)
         )
 
-        c = self.conns.pop() if self.conns else self._connect()
+        # large timeout for handshakes (safededup)
+        conns = self.hconns if ctype == MJ else self.conns
+        while conns and self.ar.cxp < now - conns[0][0]:
+            conns.pop(0)[1].close()
+        c = conns.pop()[1] if conns else self._connect(999 if ctype == MJ else 128)
         try:
             c.request(meth, vpath, body, hdrs)
             if PY27:
@@ -204,8 +214,15 @@ class HCli(object):
             rsp = c.getresponse()
 
             data = rsp.read()
-            self.conns.append(c)
+            conns.append((time.time(), c))
             return rsp.status, data.decode("utf-8")
+        except http_client.BadStatusLine:
+            if self.ar.cxp > 4:
+                t = "\nWARNING: --cxp probably too high; reducing from %d to 4"
+                print(t % (self.ar.cxp,))
+                self.ar.cxp = 4
+            c.close()
+            raise
         except:
             c.close()
             raise
@@ -228,7 +245,7 @@ class File(object):
         self.lmod = lmod  # type: float
 
         self.abs = os.path.join(top, rel)  # type: bytes
-        self.name = self.rel.split(b"/")[-1].decode("utf-8", "replace")  # type: str
+        self.name = self.rel.split(b"/")[-1].decode("utf-8", WTF8)  # type: str
 
         # set by get_hashlist
         self.cids = []  # type: list[tuple[str, int, int]]  # [ hash, ofs, sz ]
@@ -267,10 +284,41 @@ class FileSlice(object):
                 raise Exception(9)
             tlen += clen
 
-        self.len = tlen
+        self.len = self.tlen = tlen
         self.cdr = self.car + self.len
         self.ofs = 0  # type: int
-        self.f = open(file.abs, "rb", 512 * 1024)
+
+        self.f = None
+        self.seek = self._seek0
+        self.read = self._read0
+
+    def subchunk(self, maxsz, nth):
+        if self.tlen <= maxsz:
+            return -1
+
+        if not nth:
+            self.car0 = self.car
+            self.cdr0 = self.cdr
+
+        self.car = self.car0 + maxsz * nth
+        if self.car >= self.cdr0:
+            return -2
+
+        self.cdr = self.car + min(self.cdr0 - self.car, maxsz)
+        self.len = self.cdr - self.car
+        self.seek(0)
+        return nth
+
+    def unsub(self):
+        self.car = self.car0
+        self.cdr = self.cdr0
+        self.len = self.tlen
+
+    def _open(self):
+        self.seek = self._seek
+        self.read = self._read
+
+        self.f = open(self.file.abs, "rb", 512 * 1024)
+        self.f.seek(self.car)
 
     # https://stackoverflow.com/questions/4359495/what-is-exactly-a-file-like-object-in-python
@@ -282,10 +330,15 @@ class FileSlice(object):
         except:
             pass  # py27 probably
 
+    def close(self, *a, **ka):
+        return  # until _open
+
     def tell(self):
         return self.ofs
 
-    def seek(self, ofs, wh=0):
+    def _seek(self, ofs, wh=0):
+        assert self.f  # !rm
+
         if wh == 1:
             ofs = self.ofs + ofs
         elif wh == 2:
@@ -299,12 +352,22 @@ class FileSlice(object):
         self.ofs = ofs
         self.f.seek(self.car + ofs)
 
-    def read(self, sz):
+    def _read(self, sz):
+        assert self.f  # !rm
+
         sz = min(sz, self.len - self.ofs)
         ret = self.f.read(sz)
         self.ofs += len(ret)
         return ret
 
+    def _seek0(self, ofs, wh=0):
+        self._open()
+        return self.seek(ofs, wh)
+
+    def _read0(self, sz):
+        self._open()
+        return self.read(sz)
+
 
 class MTHash(object):
     def __init__(self, cores):
@@ -557,13 +620,17 @@ def walkdir(err, top, excl, seen):
     for ap, inf in sorted(statdir(err, top)):
         if excl.match(ap):
             continue
-        yield ap, inf
         if stat.S_ISDIR(inf.st_mode):
+            yield ap, inf
             try:
                 for x in walkdir(err, ap, excl, seen):
                     yield x
             except Exception as ex:
                 err.append((ap, str(ex)))
+        elif stat.S_ISREG(inf.st_mode):
+            yield ap, inf
+        else:
+            err.append((ap, "irregular filetype 0%o" % (inf.st_mode,)))
@@ -609,11 +676,12 @@ def walkdirs(err, tops, excl):
 
 # mostly from copyparty/util.py
 def quotep(btxt):
     # type: (bytes) -> bytes
     quot1 = quote(btxt, safe=b"/")
     if not PY2:
         quot1 = quot1.encode("ascii")
 
-    return quot1.replace(b" ", b"+")  # type: ignore
+    return quot1.replace(b" ", b"%20")  # type: ignore
 
 
 # from copyparty/util.py
```
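The `+` to `%20` change in `quotep` matters because `+` only decodes to a space in the query component of a URL; in a path segment it is a literal plus sign. A quick standard-library check of the behavior:

```python
from urllib.parse import quote

quote("my file.txt", safe="/")  # -> 'my%20file.txt' (safe in both paths and queries)
# whereas '/my+file.txt' names a file literally called 'my+file.txt',
# so encoding spaces as '+' in a path can point at the wrong file
```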
```diff
@@ -641,7 +709,7 @@ def up2k_chunksize(filesize):
     while True:
         for mul in [1, 2]:
             nchunks = math.ceil(filesize * 1.0 / chunksize)
-            if nchunks <= 256 or (chunksize >= 32 * 1024 * 1024 and nchunks < 4096):
+            if nchunks <= 256 or (chunksize >= 32 * 1024 * 1024 and nchunks <= 4096):
                 return chunksize
 
             chunksize += stepsize
```
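For context, a self-contained sketch of the whole function; the initial `chunksize`/`stepsize` values and the `stepsize *= mul` tail are not visible in this hunk and are assumptions based on the upstream `copyparty/util.py` it is copied from:

```python
import math

def up2k_chunksize(filesize):
    chunksize = 1024 * 1024  # assumed starting values; only the loop is in the diff
    stepsize = 512 * 1024
    while True:
        for mul in [1, 2]:
            nchunks = math.ceil(filesize * 1.0 / chunksize)
            # prefer <= 256 chunks per file; huge files may use up to
            # 4096 chunks once each chunk has grown to 32 MiB or more
            if nchunks <= 256 or (chunksize >= 32 * 1024 * 1024 and nchunks <= 4096):
                return chunksize

            chunksize += stepsize
            stepsize *= mul
```

The off-by-one fix (`< 4096` to `<= 4096`) lets a file land on exactly 4096 chunks instead of needlessly stepping up to the next chunksize.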
```diff
@@ -720,7 +788,7 @@ def handshake(ar, file, search):
         url = file.url
     else:
         if b"/" in file.rel:
-            url = quotep(file.rel.rsplit(b"/", 1)[0]).decode("utf-8", "replace")
+            url = quotep(file.rel.rsplit(b"/", 1)[0]).decode("utf-8")
         else:
             url = ""
         url = ar.vtop + url
@@ -728,6 +796,7 @@ def handshake(ar, file, search):
     while True:
         sc = 600
         txt = ""
+        t0 = time.time()
         try:
             zs = json.dumps(req, separators=(",\n", ": "))
             sc, txt = web.req("POST", url, {}, zs.encode("utf-8"), MJ)
@@ -752,7 +821,9 @@ def handshake(ar, file, search):
                 print("\nERROR: login required, or wrong password:\n%s" % (txt,))
                 raise BadAuth()
 
-            eprint("handshake failed, retrying: %s\n  %s\n\n" % (file.name, em))
+            t = "handshake failed, retrying: %s\n  t0=%.3f t1=%.3f td=%.3f\n  %s\n\n"
+            now = time.time()
+            eprint(t % (file.name, t0, now, now - t0, em))
             time.sleep(ar.cd)
 
         try:
@@ -763,15 +834,15 @@ def handshake(ar, file, search):
     if search:
         return r["hits"], False
 
-    file.url = r["purl"]
+    file.url = quotep(r["purl"].encode("utf-8", WTF8)).decode("utf-8")
     file.name = r["name"]
     file.wark = r["wark"]
 
     return r["hash"], r["sprs"]
 
 
-def upload(fsl, stats):
-    # type: (FileSlice, str) -> None
+def upload(fsl, stats, maxsz):
+    # type: (FileSlice, str, int) -> None
     """upload a range of file data, defined by one or more `cid` (chunk-hash)"""
 
     ctxt = fsl.cids[0]
@@ -789,21 +860,34 @@
     if stats:
         headers["X-Up2k-Stat"] = stats
 
+    nsub = 0
     try:
-        sc, txt = web.req("POST", fsl.file.url, headers, fsl, MO)
-
-        if sc == 400:
-            if (
-                "already being written" in txt
-                or "already got that" in txt
-                or "only sibling chunks" in txt
-            ):
-                fsl.file.nojoin = 1
-
-        if sc >= 400:
-            raise Exception("http %s: %s" % (sc, txt))
+        while nsub != -1:
+            nsub = fsl.subchunk(maxsz, nsub)
+            if nsub == -2:
+                return
+            if nsub >= 0:
+                headers["X-Up2k-Subc"] = str(maxsz * nsub)
+                headers.pop(CLEN, None)
+                nsub += 1
+
+            sc, txt = web.req("POST", fsl.file.url, headers, fsl, MO)
+
+            if sc == 400:
+                if (
+                    "already being written" in txt
+                    or "already got that" in txt
+                    or "only sibling chunks" in txt
+                ):
+                    fsl.file.nojoin = 1
+
+            if sc >= 400:
+                raise Exception("http %s: %s" % (sc, txt))
     finally:
-        fsl.f.close()
+        if fsl.f:
+            fsl.f.close()
+        if nsub != -1:
+            fsl.unsub()
 
 
 class Ctl(object):
@@ -869,8 +953,8 @@ class Ctl(object):
         self.hash_b = 0
         self.up_f = 0
         self.up_c = 0
-        self.up_b = 0
-        self.up_br = 0
+        self.up_b = 0  # num bytes handled
+        self.up_br = 0  # num bytes actually transferred
         self.uploader_busy = 0
         self.serialized = False
@@ -935,7 +1019,7 @@ class Ctl(object):
                 print("  %d up %s" % (ncs - nc, cid))
                 stats = "%d/0/0/%d" % (nf, self.nfiles - nf)
                 fslice = FileSlice(file, [cid])
-                upload(fslice, stats)
+                upload(fslice, stats, self.ar.szm)
 
             print("  ok!")
             if file.recheck:
@@ -1006,6 +1090,8 @@ class Ctl(object):
 
         spd = humansize(spd)
         self.eta = str(datetime.timedelta(seconds=int(eta)))
+        if eta > 2591999:
+            self.eta = self.eta.split(",")[0]  # truncate HH:MM:SS
         sleft = humansize(self.nbytes - self.up_b)
         nleft = self.nfiles - self.up_f
         tail = "\033[K\033[u" if VT100 and not self.ar.ns else "\r"
```
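2591999 seconds is one second short of 30 days, so the truncation only kicks in for month-plus ETAs, where the clock part is noise. The `split(",")[0]` works because `str(timedelta)` puts a comma after the day count:

```python
import datetime

str(datetime.timedelta(seconds=3000000))                # '34 days, 17:20:00'
str(datetime.timedelta(seconds=3000000)).split(",")[0]  # '34 days'
```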
```diff
@@ -1013,11 +1099,14 @@ class Ctl(object):
             t = "%s eta @ %s/s, %s, %d# left\033[K" % (self.eta, spd, sleft, nleft)
             eprint(txt + "\033]0;{0}\033\\\r{0}{1}".format(t, tail))
 
+        if self.ar.wlist:
+            self.at_hash = time.time() - self.t0
+
         if self.hash_b and self.at_hash:
             spd = humansize(self.hash_b / self.at_hash)
             eprint("\nhasher: %.2f sec, %s/s\n" % (self.at_hash, spd))
-        if self.up_b and self.at_up:
-            spd = humansize(self.up_b / self.at_up)
+        if self.up_br and self.at_up:
+            spd = humansize(self.up_br / self.at_up)
             eprint("upload: %.2f sec, %s/s\n" % (self.at_up, spd))
 
         if not self.recheck:
@@ -1051,7 +1140,7 @@ class Ctl(object):
                 print("  ls ~{0}".format(srd))
                 zt = (
                     self.ar.vtop,
-                    quotep(rd.replace(b"\\", b"/")).decode("utf-8", "replace"),
+                    quotep(rd.replace(b"\\", b"/")).decode("utf-8"),
                 )
                 sc, txt = web.req("GET", "%s%s?ls<&dots" % zt, {})
                 if sc >= 400:
@@ -1060,13 +1149,16 @@ class Ctl(object):
                 j = json.loads(txt)
                 for f in j["dirs"] + j["files"]:
                     rfn = f["href"].split("?")[0].rstrip("/")
-                    ls[unquote(rfn.encode("utf-8", "replace"))] = f
+                    ls[unquote(rfn.encode("utf-8", WTF8))] = f
             except Exception as ex:
                 print("  mkdir ~{0}  ({1})".format(srd, ex))
 
             if self.ar.drd:
                 dp = os.path.join(top, rd)
-                lnodes = set(os.listdir(dp))
+                try:
+                    lnodes = set(os.listdir(dp))
+                except:
+                    lnodes = list(ls)  # fs eio; don't delete
                 if ptn:
                     zs = dp.replace(sep, b"/").rstrip(b"/") + b"/"
                     zls = [zs + x for x in lnodes]
@@ -1074,7 +1166,7 @@ class Ctl(object):
                     lnodes = [x.split(b"/")[-1] for x in zls]
                 bnames = [x for x in ls if x not in lnodes and x != b".hist"]
                 vpath = self.ar.url.split("://")[-1].split("/", 1)[-1]
-                names = [x.decode("utf-8", "replace") for x in bnames]
+                names = [x.decode("utf-8", WTF8) for x in bnames]
                 locs = [vpath + srd + "/" + x for x in names]
                 while locs:
                     req = locs
@@ -1136,10 +1228,16 @@ class Ctl(object):
                 self.up_b = self.hash_b
 
             if self.ar.wlist:
+                vp = file.rel.decode("utf-8")
+                if self.ar.chs:
+                    zsl = [
+                        "%s %d %d" % (zsii[0], n, zsii[1])
+                        for n, zsii in enumerate(file.cids)
+                    ]
+                    print("chs: %s\n%s" % (vp, "\n".join(zsl)))
                 zsl = [self.ar.wsalt, str(file.size)] + [x[0] for x in file.kchunks]
                 zb = hashlib.sha512("\n".join(zsl).encode("utf-8")).digest()[:33]
                 wark = ub64enc(zb).decode("utf-8")
-                vp = file.rel.decode("utf-8")
                 if self.ar.jw:
                     print("%s %s" % (wark, vp))
                 else:
```
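The wark (file identifier) derivation is fully visible above: a salted sha512 over the filesize and the chunk hashes, truncated to 33 bytes and base64-encoded. A standalone sketch; `urlsafe_b64encode` is assumed to match the script's `ub64enc` helper (33 bytes is a multiple of 3, so there is no padding either way):

```python
import hashlib
from base64 import urlsafe_b64encode

def wark_of(wsalt, filesize, chunk_hashes):
    # salt, filesize, then each chunk-hash, joined by newlines
    zsl = [wsalt, str(filesize)] + list(chunk_hashes)
    zb = hashlib.sha512("\n".join(zsl).encode("utf-8")).digest()[:33]
    return urlsafe_b64encode(zb).decode("utf-8")
```

This is why `--wsalt` "must match server config": a different salt yields a different wark for the same file.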
```diff
@@ -1177,6 +1275,7 @@ class Ctl(object):
                 self.q_upload.put(None)
                 return
 
+            chunksz = up2k_chunksize(file.size)
             upath = file.abs.decode("utf-8", "replace")
             if not VT100:
                 upath = upath.lstrip("\\?")
@@ -1236,9 +1335,14 @@ class Ctl(object):
                 file.up_c -= len(hs)
                 for cid in hs:
                     sz = file.kchunks[cid][1]
+                    self.up_br -= sz
                     self.up_b -= sz
                     file.up_b -= sz
 
+            if hs and not file.up_b:
+                # first hs of this file; is this an upload resume?
+                file.up_b = chunksz * max(0, len(file.kchunks) - len(hs))
+
             file.ucids = hs
 
             if not hs:
@@ -1252,7 +1356,7 @@ class Ctl(object):
                     c1 = c2 = ""
 
                 spd_h = humansize(file.size / file.t_hash, True)
-                if file.up_b:
+                if file.up_c:
                     t_up = file.t1_up - file.t0_up
                     spd_u = humansize(file.size / t_up, True)
 
@@ -1262,14 +1366,13 @@ class Ctl(object):
                     t = "   found %s  %s(%.2fs,%s/s)%s"
                     print(t % (upath, c1, file.t_hash, spd_h, c2))
                 else:
-                    kw = "uploaded" if file.up_b else "   found"
+                    kw = "uploaded" if file.up_c else "   found"
                     print("{0} {1}".format(kw, upath))
 
                 self._check_if_done()
                 continue
 
-            chunksz = up2k_chunksize(file.size)
-            njoin = (self.ar.sz * 1024 * 1024) // chunksz
+            njoin = self.ar.sz // chunksz
             cs = hs[:]
             while cs:
                 fsl = FileSlice(file, cs[:1])
@@ -1321,7 +1424,7 @@ class Ctl(object):
                 )
 
                 try:
-                    upload(fsl, stats)
+                    upload(fsl, stats, self.ar.szm)
                 except Exception as ex:
                     t = "upload failed, retrying: %s #%s+%d (%s)\n"
                     eprint(t % (file.name, cids[0][:8], len(cids) - 1, ex))
@@ -1365,7 +1468,7 @@ def main():
    cores = (os.cpu_count() if hasattr(os, "cpu_count") else 0) or 2
    hcores = min(cores, 3)  # 4% faster than 4+ on py3.9 @ r5-4500U
 
-    ver = "{0} v{1} https://youtu.be/BIcOO6TLKaY".format(S_BUILD_DT, S_VERSION)
+    ver = "{0}, v{1}".format(S_BUILD_DT, S_VERSION)
     if "--version" in sys.argv:
         print(ver)
         return
@@ -1403,14 +1506,17 @@ source file/folder selection uses rsync syntax, meaning that:
 
     ap = app.add_argument_group("file-ID calculator; enable with url '-' to list warks (file identifiers) instead of upload/search")
     ap.add_argument("--wsalt", type=unicode, metavar="S", default="hunter2", help="salt to use when creating warks; must match server config")
+    ap.add_argument("--chs", action="store_true", help="verbose (print the hash/offset of each chunk in each file)")
     ap.add_argument("--jw", action="store_true", help="just identifier+filepath, not mtime/size too")
 
     ap = app.add_argument_group("performance tweaks")
     ap.add_argument("-j", type=int, metavar="CONNS", default=2, help="parallel connections")
     ap.add_argument("-J", type=int, metavar="CORES", default=hcores, help="num cpu-cores to use for hashing; set 0 or 1 for single-core hashing")
     ap.add_argument("--sz", type=int, metavar="MiB", default=64, help="try to make each POST this big")
+    ap.add_argument("--szm", type=int, metavar="MiB", default=96, help="max size of each POST (default is cloudflare max)")
     ap.add_argument("-nh", action="store_true", help="disable hashing while uploading")
     ap.add_argument("-ns", action="store_true", help="no status panel (for slow consoles and macos)")
+    ap.add_argument("--cxp", type=float, metavar="SEC", default=57, help="assume http connections expired after SEConds")
     ap.add_argument("--cd", type=float, metavar="SEC", default=5, help="delay before reattempting a failed handshake/upload")
     ap.add_argument("--safe", action="store_true", help="use simple fallback approach")
     ap.add_argument("-z", action="store_true", help="ZOOMIN' (skip uploading files if they exist at the destination with the ~same last-modified timestamp, so same as yolo / turbo with date-chk but even faster)")
```
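A hypothetical invocation using the two new flags; the server URL and filename are placeholders:

```
# cap each POST at 96 MiB (the cloudflare limit) and treat pooled
# http connections as expired after 50 seconds of idleness
python3 u2c.py --szm 96 --cxp 50 https://example.com/inc ./big-video.mkv
```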
```diff
@@ -1436,6 +1542,9 @@ source file/folder selection uses rsync syntax, meaning that:
     if ar.dr:
         ar.ow = True
 
+    ar.sz *= 1024 * 1024
+    ar.szm *= 1024 * 1024
+
     ar.x = "|".join(ar.x or [])
 
     setattr(ar, "wlist", ar.url == "-")
```
PKGBUILD (Arch package; filename inferred)

```diff
@@ -1,6 +1,6 @@
 # Maintainer: icxes <dev.null@need.moe>
 pkgname=copyparty
-pkgver="1.15.5"
+pkgver="1.16.1"
 pkgrel=1
 pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
 arch=("any")
@@ -21,7 +21,7 @@ optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tags…
 )
 source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
 backup=("etc/${pkgname}.d/init" )
-sha256sums=("c380ad1d20787d80077123ced583d45bc26467386bbceac35436662f435a6b8c")
+sha256sums=("48506881f7920ad9d528763833a8cc3d1b6df39402bbe1cb90c3ff58c865dfc6")
 
 build() {
     cd "${srcdir}/${pkgname}-${pkgver}"
```
A release-pin JSON (the file header was lost in extraction):

```diff
@@ -1,5 +1,5 @@
 {
-  "url": "https://github.com/9001/copyparty/releases/download/v1.15.5/copyparty-sfx.py",
-  "version": "1.15.5",
-  "hash": "sha256-2JcXSbtyEn+EtpyQTcE9U4XuckVKvAowVGqBZ110Jt4="
+  "url": "https://github.com/9001/copyparty/releases/download/v1.16.1/copyparty-sfx.py",
+  "version": "1.16.1",
+  "hash": "sha256-vlxAuVtd/o11CIC6E6K6UDUdtYDzQ6u7kG3Qc3eqJ+U="
 }
```
A bundled-asset list (the file header was lost in extraction):

```diff
@@ -80,6 +80,7 @@ web/deps/prismd.css
 web/deps/scp.woff2
 web/deps/sha512.ac.js
 web/deps/sha512.hw.js
+web/iiam.gif
 web/md.css
 web/md.html
 web/md.js
```
copyparty/__main__.py (filename inferred; the file header was lost in extraction)

```diff
@@ -50,6 +50,8 @@ from .util import (
     PARTFTPY_VER,
     PY_DESC,
     PYFTPD_VER,
+    RAM_AVAIL,
+    RAM_TOTAL,
     SQLITE_VER,
     UNPLICATIONS,
     Daemon,
@@ -684,6 +686,8 @@ def get_sects():
             \033[36mxbu\033[35m executes CMD before a file upload starts
             \033[36mxau\033[35m executes CMD after a file upload finishes
             \033[36mxiu\033[35m executes CMD after all uploads finish and volume is idle
+            \033[36mxbc\033[35m executes CMD before a file copy
+            \033[36mxac\033[35m executes CMD after a file copy
             \033[36mxbr\033[35m executes CMD before a file rename/move
             \033[36mxar\033[35m executes CMD after a file rename/move
             \033[36mxbd\033[35m executes CMD before a file delete
@@ -874,8 +878,9 @@ def get_sects():
               use argon2id with timecost 3, 256 MiB, 4 threads, version 19 (0x13/v1.3)
 
             \033[36m--ah-alg scrypt\033[0m  # which is the same as:
-            \033[36m--ah-alg scrypt,13,2,8,4\033[0m
-              use scrypt with cost 2**13, 2 iterations, blocksize 8, 4 threads
+            \033[36m--ah-alg scrypt,13,2,8,4,32\033[0m
+              use scrypt with cost 2**13, 2 iterations, blocksize 8, 4 threads,
+              and allow using up to 32 MiB RAM (ram=cost*blksz roughly)
 
             \033[36m--ah-alg sha2\033[0m  # which is the same as:
             \033[36m--ah-alg sha2,424242\033[0m
```
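As a sanity check on the new RAM parameter: with the textbook scrypt memory formula of `128 * r * N` bytes (r = blocksize, N = cost), the defaults above give `128 * 8 * 2**13 = 8 MiB` per hash computation, comfortably inside the 32 MiB cap. Mapping copyparty's parameter names onto the textbook formula this way is an assumption; the help text itself only says "ram=cost*blksz roughly".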
```diff
@@ -1017,7 +1022,7 @@ def add_upload(ap):
     ap2.add_argument("--sparse", metavar="MiB", type=int, default=4, help="windows-only: minimum size of incoming uploads through up2k before they are made into sparse files")
     ap2.add_argument("--turbo", metavar="LVL", type=int, default=0, help="configure turbo-mode in up2k client; [\033[32m-1\033[0m] = forbidden/always-off, [\033[32m0\033[0m] = default-off and warn if enabled, [\033[32m1\033[0m] = default-off, [\033[32m2\033[0m] = on, [\033[32m3\033[0m] = on and disable datecheck")
     ap2.add_argument("--u2j", metavar="JOBS", type=int, default=2, help="web-client: number of file chunks to upload in parallel; 1 or 2 is good for low-latency (same-country) connections, 4-8 for android clients, 16 for cross-atlantic (max=64)")
-    ap2.add_argument("--u2sz", metavar="N,N,N", type=u, default="1,64,96", help="web-client: default upload chunksize (MiB); sets \033[33mmin,default,max\033[0m in the settings gui. Each HTTP POST will aim for this size. Cloudflare max is 96. Big values are good for cross-atlantic but may increase HDD fragmentation on some FS. Disable this optimization with [\033[32m1,1,1\033[0m]")
+    ap2.add_argument("--u2sz", metavar="N,N,N", type=u, default="1,64,96", help="web-client: default upload chunksize (MiB); sets \033[33mmin,default,max\033[0m in the settings gui. Each HTTP POST will aim for \033[33mdefault\033[0m, and never exceed \033[33mmax\033[0m. Cloudflare max is 96. Big values are good for cross-atlantic but may increase HDD fragmentation on some FS. Disable this optimization with [\033[32m1,1,1\033[0m]")
     ap2.add_argument("--u2sort", metavar="TXT", type=u, default="s", help="upload order; [\033[32ms\033[0m]=smallest-first, [\033[32mn\033[0m]=alphabetical, [\033[32mfs\033[0m]=force-s, [\033[32mfn\033[0m]=force-n -- alphabetical is a bit slower on fiber/LAN but makes it easier to eyeball if everything went fine")
     ap2.add_argument("--write-uplog", action="store_true", help="write POST reports to textfiles in working-directory")
@@ -1037,7 +1042,7 @@ def add_network(ap):
     else:
         ap2.add_argument("--freebind", action="store_true", help="allow listening on IPs which do not yet exist, for example if the network interfaces haven't finished going up. Only makes sense for IPs other than '0.0.0.0', '127.0.0.1', '::', and '::1'. May require running as root (unless net.ipv6.ip_nonlocal_bind)")
     ap2.add_argument("--s-thead", metavar="SEC", type=int, default=120, help="socket timeout (read request header)")
-    ap2.add_argument("--s-tbody", metavar="SEC", type=float, default=186.0, help="socket timeout (read/write request/response bodies). Use 60 on fast servers (default is extremely safe). Disable with 0 if reverse-proxied for a 2%% speed boost")
+    ap2.add_argument("--s-tbody", metavar="SEC", type=float, default=128.0, help="socket timeout (read/write request/response bodies). Use 60 on fast servers (default is extremely safe). Disable with 0 if reverse-proxied for a 2%% speed boost")
     ap2.add_argument("--s-rd-sz", metavar="B", type=int, default=256*1024, help="socket read size in bytes (indirectly affects filesystem writes; recommendation: keep equal-to or lower-than \033[33m--iobuf\033[0m)")
     ap2.add_argument("--s-wr-sz", metavar="B", type=int, default=256*1024, help="socket write size in bytes")
     ap2.add_argument("--s-wr-slp", metavar="SEC", type=float, default=0.0, help="debug: socket write delay in seconds")
@@ -1087,6 +1092,7 @@ def add_auth(ap):
     ap2.add_argument("--ses-db", metavar="PATH", type=u, default=ses_db, help="where to store the sessions database (if you run multiple copyparty instances, make sure they use different DBs)")
     ap2.add_argument("--ses-len", metavar="CHARS", type=int, default=20, help="session key length; default is 120 bits ((20//4)*4*6)")
     ap2.add_argument("--no-ses", action="store_true", help="disable sessions; use plaintext passwords in cookies")
+    ap2.add_argument("--ipu", metavar="CIDR=USR", type=u, action="append", help="users with IP matching \033[33mCIDR\033[0m are auto-authenticated as username \033[33mUSR\033[0m; example: [\033[32m172.16.24.0/24=dave]")
 
 
 def add_chpw(ap):
@@ -1200,6 +1206,8 @@ def add_hooks(ap):
     ap2.add_argument("--xbu", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m before a file upload starts")
     ap2.add_argument("--xau", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m after a file upload finishes")
     ap2.add_argument("--xiu", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m after all uploads finish and volume is idle")
+    ap2.add_argument("--xbc", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m before a file copy")
+    ap2.add_argument("--xac", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m after a file copy")
     ap2.add_argument("--xbr", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m before a file move/rename")
     ap2.add_argument("--xar", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m after a file move/rename")
     ap2.add_argument("--xbd", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m before a file delete")
@@ -1232,6 +1240,7 @@ def add_optouts(ap):
     ap2.add_argument("--no-dav", action="store_true", help="disable webdav support")
     ap2.add_argument("--no-del", action="store_true", help="disable delete operations")
     ap2.add_argument("--no-mv", action="store_true", help="disable move/rename operations")
+    ap2.add_argument("--no-cp", action="store_true", help="disable copy operations")
     ap2.add_argument("-nth", action="store_true", help="no title hostname; don't show \033[33m--name\033[0m in <title>")
     ap2.add_argument("-nih", action="store_true", help="no info hostname -- don't show in UI")
     ap2.add_argument("-nid", action="store_true", help="no info disk-usage -- don't show in UI")
@@ -1306,6 +1315,7 @@ def add_logging(ap):
     ap2.add_argument("--log-conn", action="store_true", help="debug: print tcp-server msgs")
     ap2.add_argument("--log-htp", action="store_true", help="debug: print http-server threadpool scaling")
     ap2.add_argument("--ihead", metavar="HEADER", type=u, action='append', help="print request \033[33mHEADER\033[0m; [\033[32m*\033[0m]=all")
+    ap2.add_argument("--ohead", metavar="HEADER", type=u, action='append', help="print response \033[33mHEADER\033[0m; [\033[32m*\033[0m]=all")
     ap2.add_argument("--lf-url", metavar="RE", type=u, default=r"^/\.cpr/|\?th=[wj]$|/\.(_|ql_|DS_Store$|localized$)", help="dont log URLs matching regex \033[33mRE\033[0m")
@@ -1314,9 +1324,12 @@ def add_admin(ap):
     ap2.add_argument("--no-reload", action="store_true", help="disable ?reload=cfg (reload users/volumes/volflags from config file)")
     ap2.add_argument("--no-rescan", action="store_true", help="disable ?scan (volume reindexing)")
     ap2.add_argument("--no-stack", action="store_true", help="disable ?stack (list all stacks)")
+    ap2.add_argument("--dl-list", metavar="LVL", type=int, default=2, help="who can see active downloads in the controlpanel? [\033[32m0\033[0m]=nobody, [\033[32m1\033[0m]=admins, [\033[32m2\033[0m]=everyone")
 
 
 def add_thumbnail(ap):
+    th_ram = (RAM_AVAIL or RAM_TOTAL or 9) * 0.6
+    th_ram = int(max(min(th_ram, 6), 1) * 10) / 10
     ap2 = ap.add_argument_group('thumbnail options')
     ap2.add_argument("--no-thumb", action="store_true", help="disable all thumbnails (volflag=dthumb)")
     ap2.add_argument("--no-vthumb", action="store_true", help="disable video thumbnails (volflag=dvthumb)")
```
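The new thumbnailer RAM default is derived from available memory instead of a flat 6 GiB. A sketch of just that computation with worked values, assuming the `RAM_AVAIL`/`RAM_TOTAL` constants are GiB:

```python
def default_th_ram(ram_avail_gib, ram_total_gib):
    # 60% of available (or total) RAM, clamped to 1..6 GiB,
    # then truncated to one decimal place
    th_ram = (ram_avail_gib or ram_total_gib or 9) * 0.6
    return int(max(min(th_ram, 6), 1) * 10) / 10

print(default_th_ram(4, 8))    # 2.4 -- a box with 4 GiB free gets a 2.4 GiB cap
print(default_th_ram(16, 32))  # 6.0 -- larger boxes still cap at 6 GiB
```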
```diff
@@ -1324,7 +1337,7 @@ def add_thumbnail(ap):
     ap2.add_argument("--th-size", metavar="WxH", default="320x256", help="thumbnail res (volflag=thsize)")
     ap2.add_argument("--th-mt", metavar="CORES", type=int, default=CORES, help="num cpu cores to use for generating thumbnails")
     ap2.add_argument("--th-convt", metavar="SEC", type=float, default=60.0, help="conversion timeout in seconds (volflag=convt)")
-    ap2.add_argument("--th-ram-max", metavar="GB", type=float, default=6.0, help="max memory usage (GiB) permitted by thumbnailer; not very accurate")
+    ap2.add_argument("--th-ram-max", metavar="GB", type=float, default=th_ram, help="max memory usage (GiB) permitted by thumbnailer; not very accurate")
     ap2.add_argument("--th-crop", metavar="TXT", type=u, default="y", help="crop thumbnails to 4:3 or keep dynamic height; client can override in UI unless force. [\033[32my\033[0m]=crop, [\033[32mn\033[0m]=nocrop, [\033[32mfy\033[0m]=force-y, [\033[32mfn\033[0m]=force-n (volflag=crop)")
     ap2.add_argument("--th-x3", metavar="TXT", type=u, default="n", help="show thumbs at 3x resolution; client can override in UI unless force. [\033[32my\033[0m]=yes, [\033[32mn\033[0m]=no, [\033[32mfy\033[0m]=force-yes, [\033[32mfn\033[0m]=force-no (volflag=th3x)")
     ap2.add_argument("--th-dec", metavar="LIBS", default="vips,pil,ff", help="image decoders, in order of preference")
@@ -1339,12 +1352,12 @@ def add_thumbnail(ap):
     # https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html
     # https://github.com/libvips/libvips
     # ffmpeg -hide_banner -demuxers | awk '/^ D  /{print$2}' | while IFS= read -r x; do ffmpeg -hide_banner -h demuxer=$x; done | grep -E '^Demuxer |extensions:'
-    ap2.add_argument("--th-r-pil", metavar="T,T", type=u, default="avif,avifs,blp,bmp,dcx,dds,dib,emf,eps,fits,flc,fli,fpx,gif,heic,heics,heif,heifs,icns,ico,im,j2p,j2k,jp2,jpeg,jpg,jpx,pbm,pcx,pgm,png,pnm,ppm,psd,qoi,sgi,spi,tga,tif,tiff,webp,wmf,xbm,xpm", help="image formats to decode using pillow")
+    ap2.add_argument("--th-r-pil", metavar="T,T", type=u, default="avif,avifs,blp,bmp,cbz,dcx,dds,dib,emf,eps,fits,flc,fli,fpx,gif,heic,heics,heif,heifs,icns,ico,im,j2p,j2k,jp2,jpeg,jpg,jpx,pbm,pcx,pgm,png,pnm,ppm,psd,qoi,sgi,spi,tga,tif,tiff,webp,wmf,xbm,xpm", help="image formats to decode using pillow")
     ap2.add_argument("--th-r-vips", metavar="T,T", type=u, default="avif,exr,fit,fits,fts,gif,hdr,heic,jp2,jpeg,jpg,jpx,jxl,nii,pfm,pgm,png,ppm,svg,tif,tiff,webp", help="image formats to decode using pyvips")
-    ap2.add_argument("--th-r-ffi", metavar="T,T", type=u, default="apng,avif,avifs,bmp,dds,dib,fit,fits,fts,gif,hdr,heic,heics,heif,heifs,icns,ico,jp2,jpeg,jpg,jpx,jxl,pbm,pcx,pfm,pgm,png,pnm,ppm,psd,qoi,sgi,tga,tif,tiff,webp,xbm,xpm", help="image formats to decode using ffmpeg")
+    ap2.add_argument("--th-r-ffi", metavar="T,T", type=u, default="apng,avif,avifs,bmp,cbz,dds,dib,fit,fits,fts,gif,hdr,heic,heics,heif,heifs,icns,ico,jp2,jpeg,jpg,jpx,jxl,pbm,pcx,pfm,pgm,png,pnm,ppm,psd,qoi,sgi,tga,tif,tiff,webp,xbm,xpm", help="image formats to decode using ffmpeg")
     ap2.add_argument("--th-r-ffv", metavar="T,T", type=u, default="3gp,asf,av1,avc,avi,flv,h264,h265,hevc,m4v,mjpeg,mjpg,mkv,mov,mp4,mpeg,mpeg2,mpegts,mpg,mpg2,mts,nut,ogm,ogv,rm,ts,vob,webm,wmv", help="video formats to decode using ffmpeg")
     ap2.add_argument("--th-r-ffa", metavar="T,T", type=u, default="aac,ac3,aif,aiff,alac,alaw,amr,apac,ape,au,bonk,dfpwm,dts,flac,gsm,ilbc,it,itgz,itxz,itz,m4a,mdgz,mdxz,mdz,mo3,mod,mp2,mp3,mpc,mptm,mt2,mulaw,ogg,okt,opus,ra,s3m,s3gz,s3xz,s3z,tak,tta,ulaw,wav,wma,wv,xm,xmgz,xmxz,xmz,xpk", help="audio formats to decode using ffmpeg")
-    ap2.add_argument("--au-unpk", metavar="E=F.C", type=u, default="mdz=mod.zip, mdgz=mod.gz, mdxz=mod.xz, s3z=s3m.zip, s3gz=s3m.gz, s3xz=s3m.xz, xmz=xm.zip, xmgz=xm.gz, xmxz=xm.xz, itz=it.zip, itgz=it.gz, itxz=it.xz", help="audio formats to decompress before passing to ffmpeg")
+    ap2.add_argument("--au-unpk", metavar="E=F.C", type=u, default="mdz=mod.zip, mdgz=mod.gz, mdxz=mod.xz, s3z=s3m.zip, s3gz=s3m.gz, s3xz=s3m.xz, xmz=xm.zip, xmgz=xm.gz, xmxz=xm.xz, itz=it.zip, itgz=it.gz, itxz=it.xz, cbz=jpg.cbz", help="audio/image formats to decompress before passing to ffmpeg")
 
 
 def add_transcoding(ap):
@@ -1356,6 +1369,14 @@ def add_transcoding(ap):
     ap2.add_argument("--ac-maxage", metavar="SEC", type=int, default=86400, help="delete cached transcode output after \033[33mSEC\033[0m seconds")
 
 
+def add_rss(ap):
+    ap2 = ap.add_argument_group('RSS options')
+    ap2.add_argument("--rss", action="store_true", help="enable RSS output (experimental)")
+    ap2.add_argument("--rss-nf", metavar="HITS", type=int, default=250, help="default number of files to return (url-param 'nf')")
+    ap2.add_argument("--rss-fext", metavar="E,E", type=u, default="", help="default list of file extensions to include (url-param 'fext'); blank=all")
+    ap2.add_argument("--rss-sort", metavar="ORD", type=u, default="m", help="default sort order (url-param 'sort'); [\033[32mm\033[0m]=last-modified [\033[32mu\033[0m]=upload-time [\033[32mn\033[0m]=filename [\033[32ms\033[0m]=filesize; Uppercase=oldest-first. Note that upload-time is 0 for non-uploaded files")
+
+
 def add_db_general(ap, hcores):
     noidx = APPLESAN_TXT if MACOS else ""
     ap2 = ap.add_argument_group('general db options')
@@ -1436,6 +1457,7 @@ def add_ui(ap, retry):
     ap2.add_argument("--themes", metavar="NUM", type=int, default=8, help="number of themes installed")
     ap2.add_argument("--au-vol", metavar="0-100", type=int, default=50, choices=range(0, 101), help="default audio/video volume percent")
     ap2.add_argument("--sort", metavar="C,C,C", type=u, default="href", help="default sort order, comma-separated column IDs (see header tooltips), prefix with '-' for descending. Examples: \033[32mhref -href ext sz ts tags/Album tags/.tn\033[0m (volflag=sort)")
+    ap2.add_argument("--nsort", action="store_true", help="default-enable natural sort of filenames with leading numbers (volflag=nsort)")
     ap2.add_argument("--unlist", metavar="REGEX", type=u, default="", help="don't show files matching \033[33mREGEX\033[0m in file list. Purely cosmetic! Does not affect API calls, just the browser. Example: [\033[32m\\.(js|css)$\033[0m] (volflag=unlist)")
     ap2.add_argument("--favico", metavar="TXT", type=u, default="c 000 none" if retry else "🎉 000 none", help="\033[33mfavicon-text\033[0m [ \033[33mforeground\033[0m [ \033[33mbackground\033[0m ] ], set blank to disable")
     ap2.add_argument("--mpmc", metavar="URL", type=u, default="", help="change the mediaplayer-toggle mouse cursor; URL to a folder with {2..5}.png inside (or disable with [\033[32m.\033[0m])")
@@ -1451,6 +1473,7 @@ def add_ui(ap, retry):
     ap2.add_argument("--pb-url", metavar="URL", type=u, default="https://github.com/9001/copyparty", help="powered-by link; disable with \033[33m-np\033[0m")
     ap2.add_argument("--ver", action="store_true", help="show version on the control panel (incompatible with \033[33m-nb\033[0m)")
     ap2.add_argument("--k304", metavar="NUM", type=int, default=0, help="configure the option to enable/disable k304 on the controlpanel (workaround for buggy reverse-proxies); [\033[32m0\033[0m] = hidden and default-off, [\033[32m1\033[0m] = visible and default-off, [\033[32m2\033[0m] = visible and default-on")
+    ap2.add_argument("--no304", metavar="NUM", type=int, default=0, help="configure the option to enable/disable no304 on the controlpanel (workaround for buggy caching in browsers); [\033[32m0\033[0m] = hidden and default-off, [\033[32m1\033[0m] = visible and default-off, [\033[32m2\033[0m] = visible and default-on")
     ap2.add_argument("--md-sbf", metavar="FLAGS", type=u, default="downloads forms popups scripts top-navigation-by-user-activation", help="list of capabilities to ALLOW for README.md docs (volflag=md_sbf); see https://developer.mozilla.org/en-US/docs/Web/HTML/Element/iframe#attr-sandbox")
     ap2.add_argument("--lg-sbf", metavar="FLAGS", type=u, default="downloads forms popups scripts top-navigation-by-user-activation", help="list of capabilities to ALLOW for prologue/epilogue docs (volflag=lg_sbf)")
     ap2.add_argument("--no-sb-md", action="store_true", help="don't sandbox README/PREADME.md documents (volflags: no_sb_md | sb_md)")
@@ -1477,6 +1500,7 @@ def add_debug(ap):
     ap2.add_argument("--bak-flips", action="store_true", help="[up2k] if a client uploads a bitflipped/corrupted chunk, store a copy according to \033[33m--bf-nc\033[0m and \033[33m--bf-dir\033[0m")
     ap2.add_argument("--bf-nc", metavar="NUM", type=int, default=200, help="bak-flips: stop if there's more than \033[33mNUM\033[0m files at \033[33m--kf-dir\033[0m already; default: 6.3 GiB max (200*32M)")
     ap2.add_argument("--bf-dir", metavar="PATH", type=u, default="bf", help="bak-flips: store corrupted chunks at \033[33mPATH\033[0m; default: folder named 'bf' wherever copyparty was started")
+    ap2.add_argument("--bf-log", metavar="PATH", type=u, default="", help="bak-flips: log corruption info to a textfile at \033[33mPATH\033[0m")
 
 
 # fmt: on
@@ -1524,6 +1548,7 @@ def run_argparse(
     add_db_metadata(ap)
     add_thumbnail(ap)
     add_transcoding(ap)
+    add_rss(ap)
    add_ftp(ap)
    add_webdav(ap)
    add_tftp(ap)
@@ -1746,6 +1771,9 @@ def main(argv: Optional[list[str]] = None) -> None:
     if al.ihead:
         al.ihead = [x.lower() for x in al.ihead]
 
+    if al.ohead:
+        al.ohead = [x.lower() for x in al.ohead]
+
     if HAVE_SSL:
         if al.ssl_ver:
             configure_ssl_ver(al)
```
copyparty/__version__.py (filename inferred; the file header was lost in extraction)

```diff
@@ -1,8 +1,8 @@
 # coding: utf-8
 
-VERSION = (1, 15, 6)
-CODENAME = "fill the drives"
-BUILD_DT = (2024, 10, 12)
+VERSION = (1, 16, 2)
+CODENAME = "COPYparty"
+BUILD_DT = (2024, 11, 23)
 
 S_VERSION = ".".join(map(str, VERSION))
 S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)
```
@@ -66,6 +66,7 @@ if PY2:
|
||||
LEELOO_DALLAS = "leeloo_dallas"
|
||||
|
||||
SEE_LOG = "see log for details"
|
||||
SEESLOG = " (see serverlog for details)"
|
||||
SSEELOG = " ({})".format(SEE_LOG)
|
||||
BAD_CFG = "invalid config; {}".format(SEE_LOG)
|
||||
SBADCFG = " ({})".format(BAD_CFG)
|
||||
@@ -164,8 +165,11 @@ class Lim(object):
|
||||
self.chk_rem(rem)
|
||||
if sz != -1:
|
||||
self.chk_sz(sz)
|
||||
self.chk_vsz(broker, ptop, sz, volgetter)
|
||||
self.chk_df(abspath, sz) # side effects; keep last-ish
|
||||
else:
|
||||
sz = 0
|
||||
|
||||
self.chk_vsz(broker, ptop, sz, volgetter)
|
||||
self.chk_df(abspath, sz) # side effects; keep last-ish
|
||||
|
||||
ap2, vp2 = self.rot(abspath)
|
||||
if abspath == ap2:
|
||||
@@ -205,7 +209,15 @@ class Lim(object):
|
||||
|
||||
if self.dft < time.time():
|
||||
self.dft = int(time.time()) + 300
|
||||
self.dfv = get_df(abspath)[0] or 0
|
||||
|
||||
df, du, err = get_df(abspath, True)
|
||||
if err:
|
||||
t = "failed to read disk space usage for [%s]: %s"
|
||||
self.log(t % (abspath, err), 3)
|
||||
self.dfv = 0xAAAAAAAAA # 42.6 GiB
|
||||
else:
|
||||
self.dfv = df or 0
|
||||
|
||||
for j in list(self.reg.values()) if self.reg else []:
|
||||
self.dfv -= int(j["size"] / (len(j["hash"]) or 999) * len(j["need"]))
|
||||
|
||||
@@ -355,18 +367,21 @@ class VFS(object):
|
||||
self.ahtml: dict[str, list[str]] = {}
|
||||
self.aadmin: dict[str, list[str]] = {}
|
||||
self.adot: dict[str, list[str]] = {}
|
||||
self.all_vols: dict[str, VFS] = {}
|
||||
self.js_ls = {}
|
||||
self.js_htm = ""
|
||||
|
||||
if realpath:
|
||||
rp = realpath + ("" if realpath.endswith(os.sep) else os.sep)
|
||||
vp = vpath + ("/" if vpath else "")
|
||||
self.histpath = os.path.join(realpath, ".hist") # db / thumbcache
|
||||
self.all_vols = {vpath: self} # flattened recursive
|
||||
self.all_nodes = {vpath: self} # also jumpvols
|
||||
self.all_aps = [(rp, self)]
|
||||
self.all_vps = [(vp, self)]
|
||||
else:
|
||||
self.histpath = ""
|
||||
self.all_vols = {}
|
||||
self.all_nodes = {}
|
||||
self.all_aps = []
|
||||
self.all_vps = []
|
||||
|
||||
@@ -384,9 +399,11 @@ class VFS(object):
|
||||
def get_all_vols(
|
||||
self,
|
||||
vols: dict[str, "VFS"],
|
||||
nodes: dict[str, "VFS"],
|
||||
aps: list[tuple[str, "VFS"]],
|
||||
vps: list[tuple[str, "VFS"]],
|
||||
) -> None:
|
||||
nodes[self.vpath] = self
|
||||
if self.realpath:
|
||||
vols[self.vpath] = self
|
||||
rp = self.realpath
|
||||
@@ -396,7 +413,7 @@ class VFS(object):
|
||||
vps.append((vp, self))
|
||||
|
||||
for v in self.nodes.values():
|
||||
v.get_all_vols(vols, aps, vps)
|
||||
v.get_all_vols(vols, nodes, aps, vps)
|
||||
|
||||
def add(self, src: str, dst: str) -> "VFS":
|
||||
"""get existing, or add new path to the vfs"""
|
||||
@@ -540,15 +557,14 @@ class VFS(object):
|
||||
return self._get_dbv(vrem)
|
||||
|
||||
shv, srem = src
|
||||
return shv, vjoin(srem, vrem)
|
||||
return shv._get_dbv(vjoin(srem, vrem))
|
||||
|
||||
def _get_dbv(self, vrem: str) -> tuple["VFS", str]:
|
||||
dbv = self.dbv
|
||||
if not dbv:
|
||||
return self, vrem
|
||||
|
||||
tv = [self.vpath[len(dbv.vpath) :].lstrip("/"), vrem]
|
||||
vrem = "/".join([x for x in tv if x])
|
||||
vrem = vjoin(self.vpath[len(dbv.vpath) :].lstrip("/"), vrem)
|
||||
return dbv, vrem
|
||||
|
||||
def canonical(self, rem: str, resolve: bool = True) -> str:
|
||||
@@ -580,10 +596,11 @@ class VFS(object):
|
||||
scandir: bool,
|
||||
permsets: list[list[bool]],
|
||||
lstat: bool = False,
|
||||
throw: bool = False,
|
||||
) -> tuple[str, list[tuple[str, os.stat_result]], dict[str, "VFS"]]:
|
||||
"""replaces _ls for certain shares (single-file, or file selection)"""
|
||||
vn, rem = self.shr_src # type: ignore
|
||||
abspath, real, _ = vn.ls(rem, "\n", scandir, permsets, lstat)
|
||||
abspath, real, _ = vn.ls(rem, "\n", scandir, permsets, lstat, throw)
|
||||
real = [x for x in real if os.path.basename(x[0]) in self.shr_files]
|
||||
return abspath, real, {}
|
||||
|
||||
@@ -594,11 +611,12 @@ class VFS(object):
|
||||
scandir: bool,
|
||||
permsets: list[list[bool]],
|
||||
lstat: bool = False,
|
||||
throw: bool = False,
|
||||
) -> tuple[str, list[tuple[str, os.stat_result]], dict[str, "VFS"]]:
|
||||
"""return user-readable [fsdir,real,virt] items at vpath"""
|
||||
virt_vis = {} # nodes readable by user
|
||||
abspath = self.canonical(rem)
|
||||
real = list(statdir(self.log, scandir, lstat, abspath))
|
||||
real = list(statdir(self.log, scandir, lstat, abspath, throw))
|
||||
real.sort()
|
||||
if not rem:
|
||||
# no vfs nodes in the list of real inodes
|
||||
@@ -660,6 +678,10 @@ class VFS(object):
|
||||
"""
|
||||
recursively yields from ./rem;
|
||||
rel is a unix-style user-defined vpath (not vfs-related)
|
||||
|
||||
NOTE: don't invoke this function from a dbv; subvols are only
|
||||
descended into if rem is blank due to the _ls `if not rem:`
|
||||
which intention is to prevent unintended access to subvols
|
||||
"""
|
||||
|
||||
fsroot, vfs_ls, vfs_virt = self.ls(rem, uname, scandir, permsets, lstat=lstat)
@@ -900,7 +922,7 @@ class AuthSrv(object):
self._reload()
return True

broker.ask("_reload_blocking", False).get()
broker.ask("reload", False, True).get()
return True

def _map_volume_idp(
@@ -1370,7 +1392,7 @@ class AuthSrv(object):
flags[name] = True
return

zs = "mtp on403 on404 xbu xau xiu xbr xar xbd xad xm xban"
zs = "mtp on403 on404 xbu xau xiu xbc xac xbr xar xbd xad xm xban"
if name not in zs.split():
if value is True:
t = "└─add volflag [{}] = {} ({})"
@@ -1518,10 +1540,11 @@ class AuthSrv(object):

assert vfs  # type: ignore
vfs.all_vols = {}
vfs.all_nodes = {}
vfs.all_aps = []
vfs.all_vps = []
vfs.get_all_vols(vfs.all_vols, vfs.all_aps, vfs.all_vps)
for vol in vfs.all_vols.values():
vfs.get_all_vols(vfs.all_vols, vfs.all_nodes, vfs.all_aps, vfs.all_vps)
for vol in vfs.all_nodes.values():
vol.all_aps.sort(key=lambda x: len(x[0]), reverse=True)
vol.all_vps.sort(key=lambda x: len(x[0]), reverse=True)
vol.root = vfs
@@ -1572,7 +1595,7 @@ class AuthSrv(object):

vfs.nodes[shr] = vfs.all_vols[shr] = shv
for vol in shv.nodes.values():
vfs.all_vols[vol.vpath] = vol
vfs.all_vols[vol.vpath] = vfs.all_nodes[vol.vpath] = vol
vol.get_dbv = vol._get_share_src
vol.ls = vol._ls_nope

@@ -1715,7 +1738,19 @@ class AuthSrv(object):

self.log("\n\n".join(ta) + "\n", c=3)

vfs.histtab = {zv.realpath: zv.histpath for zv in vfs.all_vols.values()}
rhisttab = {}
vfs.histtab = {}
for zv in vfs.all_vols.values():
histp = zv.histpath
is_shr = shr and zv.vpath.split("/")[0] == shr
if histp and not is_shr and histp in rhisttab:
zv2 = rhisttab[histp]
t = "invalid config; multiple volumes share the same histpath (database location):\n  histpath: %s\n  volume 1: /%s [%s]\n  volume 2: %s [%s]"
t = t % (histp, zv2.vpath, zv2.realpath, zv.vpath, zv.realpath)
self.log(t, 1)
raise Exception(t)
rhisttab[histp] = zv
vfs.histtab[zv.realpath] = histp

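The loop above replaces the one-line histtab comprehension so that two volumes configured with the same histpath (and therefore the same up2k database) are rejected at startup instead of silently clobbering each other; share subvolumes are exempt. A minimal standalone sketch of the idea, with a hypothetical volume object standing in for the real VFS node:

```python
# sketch of the duplicate-histpath check; "vols" yields hypothetical
# objects with vpath/realpath/histpath fields like the VFS nodes above
def build_histtab(vols):
    rhisttab = {}  # histpath -> first volume that claimed it
    histtab = {}   # volume realpath -> histpath
    for zv in vols:
        if zv.histpath and zv.histpath in rhisttab:
            zv2 = rhisttab[zv.histpath]
            t = "multiple volumes share the same histpath %s: /%s and /%s"
            raise Exception(t % (zv.histpath, zv2.vpath, zv.vpath))
        rhisttab[zv.histpath] = zv
        histtab[zv.realpath] = zv.histpath
    return histtab
```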
for vol in vfs.all_vols.values():
lim = Lim(self.log_func)
@@ -1774,12 +1809,12 @@ class AuthSrv(object):
vol.lim = lim

if self.args.no_robots:
for vol in vfs.all_vols.values():
for vol in vfs.all_nodes.values():
# volflag "robots" overrides global "norobots", allowing indexing by search engines for this vol
if not vol.flags.get("robots"):
vol.flags["norobots"] = True

for vol in vfs.all_vols.values():
for vol in vfs.all_nodes.values():
if self.args.no_vthumb:
vol.flags["dvthumb"] = True
if self.args.no_athumb:
@@ -1791,7 +1826,7 @@ class AuthSrv(object):
vol.flags["dithumb"] = True

have_fk = False
for vol in vfs.all_vols.values():
for vol in vfs.all_nodes.values():
fk = vol.flags.get("fk")
fka = vol.flags.get("fka")
if fka and not fk:
@@ -1823,7 +1858,7 @@ class AuthSrv(object):
zs = os.path.join(E.cfg, "fk-salt.txt")
self.log(t % (fk_len, 16, zs), 3)

for vol in vfs.all_vols.values():
for vol in vfs.all_nodes.values():
if "pk" in vol.flags and "gz" not in vol.flags and "xz" not in vol.flags:
vol.flags["gz"] = False  # def.pk

@@ -1834,7 +1869,7 @@ class AuthSrv(object):

all_mte = {}
errors = False
for vol in vfs.all_vols.values():
for vol in vfs.all_nodes.values():
if (self.args.e2ds and vol.axs.uwrite) or self.args.e2dsa:
vol.flags["e2ds"] = True

@@ -1925,7 +1960,7 @@ class AuthSrv(object):
vol.flags[k] = odfusion(getattr(self.args, k), vol.flags[k])

# append additive args from argv to volflags
hooks = "xbu xau xiu xbr xar xbd xad xm xban".split()
hooks = "xbu xau xiu xbc xac xbr xar xbd xad xm xban".split()
for name in "mtp on404 on403".split() + hooks:
self._read_volflag(vol.flags, name, getattr(self.args, name), True)

@@ -2052,7 +2087,7 @@ class AuthSrv(object):
errors = True

have_daw = False
for vol in vfs.all_vols.values():
for vol in vfs.all_nodes.values():
daw = vol.flags.get("daw") or self.args.daw
if daw:
vol.flags["daw"] = True
@@ -2067,13 +2102,12 @@ class AuthSrv(object):
self.log("--smb can only be used when --ah-alg is none", 1)
errors = True

for vol in vfs.all_vols.values():
for vol in vfs.all_nodes.values():
for k in list(vol.flags.keys()):
if re.match("^-[^-]+$", k):
vol.flags.pop(k[1:], None)
vol.flags.pop(k)

for vol in vfs.all_vols.values():
if vol.flags.get("dots"):
for name in vol.axs.uread:
vol.axs.udot.add(name)
@@ -2215,6 +2249,11 @@ class AuthSrv(object):
for x, y in vfs.all_vols.items()
if x != shr and not x.startswith(shrs)
}
vfs.all_nodes = {
x: y
for x, y in vfs.all_nodes.items()
if x != shr and not x.startswith(shrs)
}

assert db and cur and cur2 and shv  # type: ignore
for row in cur.execute("select * from sh"):
@@ -2267,6 +2306,69 @@ class AuthSrv(object):
cur.close()
db.close()

self.js_ls = {}
self.js_htm = {}
for vn in self.vfs.all_nodes.values():
vf = vn.flags
vn.js_ls = {
"idx": "e2d" in vf,
"itag": "e2t" in vf,
"dnsort": "nsort" in vf,
"dsort": vf["sort"],
"dcrop": vf["crop"],
"dth3x": vf["th3x"],
"u2ts": vf["u2ts"],
"frand": bool(vf.get("rand")),
"lifetime": vf.get("lifetime") or 0,
"unlist": vf.get("unlist") or "",
}
js_htm = {
"s_name": self.args.bname,
"have_up2k_idx": "e2d" in vf,
"have_acode": not self.args.no_acode,
"have_shr": self.args.shr,
"have_zip": not self.args.no_zip,
"have_mv": not self.args.no_mv,
"have_del": not self.args.no_del,
"have_unpost": int(self.args.unpost),
"have_emp": self.args.emp,
"sb_md": "" if "no_sb_md" in vf else (vf.get("md_sbf") or "y"),
"txt_ext": self.args.textfiles.replace(",", " "),
"def_hcols": list(vf.get("mth") or []),
"unlist0": vf.get("unlist") or "",
"dgrid": "grid" in vf,
"dgsel": "gsel" in vf,
"dnsort": "nsort" in vf,
"dsort": vf["sort"],
"dcrop": vf["crop"],
"dth3x": vf["th3x"],
"dvol": self.args.au_vol,
"idxh": int(self.args.ih),
"themes": self.args.themes,
"turbolvl": self.args.turbo,
"u2j": self.args.u2j,
"u2sz": self.args.u2sz,
"u2ts": vf["u2ts"],
"frand": bool(vf.get("rand")),
"lifetime": vn.js_ls["lifetime"],
"u2sort": self.args.u2sort,
}
vn.js_htm = json.dumps(js_htm)

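The js_ls/js_htm blocks above precompute each volume's listing defaults and the browser config blob once per config reload, so request handlers can embed a cached JSON string instead of re-serializing on every page load. Roughly, with the key set trimmed down to two entries for illustration:

```python
import json

# sketch: serialize per-volume browser defaults once at reload time;
# "vf" is the volume's flag dict, key names are a small subset
def precompute_js(vf: dict, bname: str):
    js_ls = {"idx": "e2d" in vf, "dsort": vf.get("sort", "")}
    js_htm = {"s_name": bname, "have_up2k_idx": "e2d" in vf}
    return js_ls, json.dumps(js_htm)  # dict for python, string for html
```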
vols = list(vfs.all_nodes.values())
if enshare:
assert shv  # type: ignore  # !rm
vols.append(shv)
vols.extend(list(shv.nodes.values()))

for vol in vols:
dbv = vol.get_dbv("")[0]
vol.js_ls = vol.js_ls or dbv.js_ls or {}
vol.js_htm = vol.js_htm or dbv.js_htm or "{}"

zs = str(vol.flags.get("tcolor") or self.args.tcolor)
vol.flags["tcolor"] = zs.lstrip("#")

def load_sessions(self, quiet=False) -> None:
# mutex me
if self.args.no_ses:
@@ -2376,7 +2478,7 @@ class AuthSrv(object):
self._reload()
return True, "new password OK"

broker.ask("_reload_blocking", False, False).get()
broker.ask("reload", False, False).get()
return True, "new password OK"

def setup_chpw(self, acct: dict[str, str]) -> None:
@@ -2628,7 +2730,7 @@ class AuthSrv(object):
]

csv = set("i p th_covers zm_on zm_off zs_on zs_off".split())
zs = "c ihead mtm mtp on403 on404 xad xar xau xiu xban xbd xbr xbu xm"
zs = "c ihead ohead mtm mtp on403 on404 xac xad xar xau xiu xban xbc xbd xbr xbu xm"
lst = set(zs.split())
askip = set("a v c vc cgen exp_lg exp_md theme".split())
fskip = set("exp_lg exp_md mv_re_r mv_re_t rm_re_r rm_re_t".split())

@@ -43,6 +43,9 @@ class BrokerMp(object):
self.procs = []
self.mutex = threading.Lock()

self.retpend: dict[int, Any] = {}
self.retpend_mutex = threading.Lock()

self.num_workers = self.args.j or CORES
self.log("broker", "booting {} subprocesses".format(self.num_workers))
for n in range(1, self.num_workers + 1):
@@ -54,6 +57,8 @@ class BrokerMp(object):
self.procs.append(proc)
proc.start()

Daemon(self.periodic, "mp-periodic")

def shutdown(self) -> None:
self.log("broker", "shutting down")
for n, proc in enumerate(self.procs):
@@ -90,8 +95,10 @@ class BrokerMp(object):
self.log(*args)

elif dest == "retq":
# response from previous ipc call
raise Exception("invalid broker_mp usage")
with self.retpend_mutex:
retq = self.retpend.pop(retq_id)

retq.put(args[0])

else:
# new ipc invoking managed service in hub
@@ -109,7 +116,6 @@ class BrokerMp(object):
proc.q_pend.put((retq_id, "retq", rv))

def ask(self, dest: str, *args: Any) -> Union[ExceptionalQueue, NotExQueue]:

# new non-ipc invoking managed service in hub
obj = self.hub
for node in dest.split("."):
@@ -121,17 +127,30 @@ class BrokerMp(object):
retq.put(rv)
return retq

def wask(self, dest: str, *args: Any) -> list[Union[ExceptionalQueue, NotExQueue]]:
# call from hub to workers
ret = []
for p in self.procs:
retq = ExceptionalQueue(1)
retq_id = id(retq)
with self.retpend_mutex:
self.retpend[retq_id] = retq

p.q_pend.put((retq_id, dest, list(args)))
ret.append(retq)
return ret

def say(self, dest: str, *args: Any) -> None:
"""
send message to non-hub component in other process,
returns a Queue object which eventually contains the response if want_retval
(not-impl here since nothing uses it yet)
"""
if dest == "listen":
if dest == "httpsrv.listen":
for p in self.procs:
p.q_pend.put((0, dest, [args[0], len(self.procs)]))

elif dest == "set_netdevs":
elif dest == "httpsrv.set_netdevs":
for p in self.procs:
p.q_pend.put((0, dest, list(args)))

@@ -140,3 +159,19 @@ class BrokerMp(object):

else:
raise Exception("what is " + str(dest))

def periodic(self) -> None:
while True:
time.sleep(1)

tdli = {}
tdls = {}
qs = self.wask("httpsrv.read_dls")
for q in qs:
qr = q.get()
dli, dls = qr
tdli.update(dli)
tdls.update(dls)
tdl = (tdli, tdls)
for p in self.procs:
p.q_pend.put((0, "httpsrv.write_dls", tdl))

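wask() is the new scatter/gather primitive here: it posts the same request to every worker process and returns one pending queue per worker, and periodic() uses it once per second to collect each worker's download info/state, merge the dicts, and broadcast the combined view back out. A trimmed illustration of that flow, with the queue wiring simplified and "procs" as hypothetical worker handles:

```python
import queue

# sketch of the scatter/gather in wask()/periodic(); each proc has a
# q_pend queue, and retpend maps queue ids to reply queues
def wask(procs, retpend, dest, *args):
    qs = []
    for p in procs:
        q = queue.Queue(1)
        retpend[id(q)] = q
        p.q_pend.put((id(q), dest, list(args)))
        qs.append(q)
    return qs

def periodic_once(procs, retpend):
    tdli, tdls = {}, {}
    for q in wask(procs, retpend, "httpsrv.read_dls"):
        dli, dls = q.get()  # worker replies via the retq path
        tdli.update(dli)
        tdls.update(dls)
    for p in procs:  # broadcast the merged totals back out
        p.q_pend.put((0, "httpsrv.write_dls", (tdli, tdls)))
```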
@@ -82,37 +82,38 @@ class MpWorker(BrokerCli):
while True:
retq_id, dest, args = self.q_pend.get()

# self.logw("work: [{}]".format(d[0]))
if dest == "retq":
# response from previous ipc call
with self.retpend_mutex:
retq = self.retpend.pop(retq_id)

retq.put(args)
continue

if dest == "shutdown":
self.httpsrv.shutdown()
self.logw("ok bye")
sys.exit(0)
return

elif dest == "reload":
if dest == "reload":
self.logw("mpw.asrv reloading")
self.asrv.reload()
self.logw("mpw.asrv reloaded")
continue

elif dest == "reload_sessions":
if dest == "reload_sessions":
with self.asrv.mutex:
self.asrv.load_sessions()
continue

elif dest == "listen":
self.httpsrv.listen(args[0], args[1])
obj = self
for node in dest.split("."):
obj = getattr(obj, node)

elif dest == "set_netdevs":
self.httpsrv.set_netdevs(args[0])

elif dest == "retq":
# response from previous ipc call
with self.retpend_mutex:
retq = self.retpend.pop(retq_id)

retq.put(args)

else:
raise Exception("what is " + str(dest))
rv = obj(*args)  # type: ignore
if retq_id:
self.say("retq", rv, retq_id=retq_id)

def ask(self, dest: str, *args: Any) -> Union[ExceptionalQueue, NotExQueue]:
retq = ExceptionalQueue(1)
@@ -123,5 +124,5 @@ class MpWorker(BrokerCli):
self.q_yield.put((retq_id, dest, list(args)))
return retq

def say(self, dest: str, *args: Any) -> None:
self.q_yield.put((0, dest, list(args)))
def say(self, dest: str, *args: Any, retq_id=0) -> None:
self.q_yield.put((retq_id, dest, list(args)))

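The worker loop now resolves any dotted destination such as "httpsrv.listen" by attribute-walking instead of the old if/elif chain, replying through say("retq", ...) when a return value is wanted. A condensed, self-contained version of that dispatch (the "root" object graph is a stand-in for the worker):

```python
# condensed form of the new dotted-destination dispatch; "root" is any
# object owning e.g. root.httpsrv.listen, "say" is the reply callback
def dispatch(root, dest: str, args: list, retq_id: int, say):
    obj = root
    for node in dest.split("."):
        obj = getattr(obj, node)  # walk "httpsrv.listen" -> bound method
    rv = obj(*args)
    if retq_id:
        say("retq", rv, retq_id=retq_id)  # route result back to the asker
```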
@@ -53,11 +53,11 @@ class BrokerThr(BrokerCli):
return NotExQueue(obj(*args))  # type: ignore

def say(self, dest: str, *args: Any) -> None:
if dest == "listen":
if dest == "httpsrv.listen":
self.httpsrv.listen(args[0], 1)
return

if dest == "set_netdevs":
if dest == "httpsrv.set_netdevs":
self.httpsrv.set_netdevs(args[0])
return

@@ -42,10 +42,12 @@ def vf_bmap() -> dict[str, str]:
"magic",
"no_sb_md",
"no_sb_lg",
"nsort",
"og",
"og_no_head",
"og_s_title",
"rand",
"rss",
"xdev",
"xlink",
"xvol",
@@ -102,10 +104,12 @@ def vf_cmap() -> dict[str, str]:
"mte",
"mth",
"mtp",
"xac",
"xad",
"xar",
"xau",
"xban",
"xbc",
"xbd",
"xbr",
"xbu",
@@ -211,6 +215,8 @@ flagcats = {
"xbu=CMD": "execute CMD before a file upload starts",
"xau=CMD": "execute CMD after a file upload finishes",
"xiu=CMD": "execute CMD after all uploads finish and volume is idle",
"xbc=CMD": "execute CMD before a file copy",
"xac=CMD": "execute CMD after a file copy",
"xbr=CMD": "execute CMD before a file rename/move",
"xar=CMD": "execute CMD after a file rename/move",
"xbd=CMD": "execute CMD before a file delete",

@@ -76,6 +76,7 @@ class FtpAuth(DummyAuthorizer):
else:
raise AuthenticationFailed("banned")

args = self.hub.args
asrv = self.hub.asrv
uname = "*"
if username != "anonymous":
@@ -86,6 +87,9 @@ class FtpAuth(DummyAuthorizer):
uname = zs
break

if args.ipu and uname == "*":
uname = args.ipu_iu[args.ipu_nm.map(ip)]

if not uname or not (asrv.vfs.aread.get(uname) or asrv.vfs.awrite.get(uname)):
g = self.hub.gpwd
if g.lim:
@@ -292,6 +296,7 @@ class FtpFs(AbstractedFS):
self.uname,
not self.args.no_scandir,
[[True, False], [False, True]],
throw=True,
)
vfs_ls = [x[0] for x in vfs_ls1]
vfs_ls.extend(vfs_virt.keys())

File diff suppressed because it is too large
@@ -59,6 +59,8 @@ class HttpConn(object):
self.asrv: AuthSrv = hsrv.asrv  # mypy404
self.u2fh: Util.FHC = hsrv.u2fh  # mypy404
self.pipes: Util.CachedDict = hsrv.pipes  # mypy404
self.ipu_iu: Optional[dict[str, str]] = hsrv.ipu_iu
self.ipu_nm: Optional[NetMap] = hsrv.ipu_nm
self.ipa_nm: Optional[NetMap] = hsrv.ipa_nm
self.xff_nm: Optional[NetMap] = hsrv.xff_nm
self.xff_lan: NetMap = hsrv.xff_lan  # type: ignore

@@ -1,6 +1,7 @@
# coding: utf-8
from __future__ import print_function, unicode_literals

import hashlib
import math
import os
import re
@@ -69,6 +70,7 @@ from .util import (
build_netmap,
has_resource,
ipnorm,
load_ipu,
load_resource,
min_ex,
shut_socket,
@@ -79,6 +81,7 @@ from .util import (
)

if TYPE_CHECKING:
from .authsrv import VFS
from .broker_util import BrokerCli
from .ssdp import SSDPr

@@ -128,6 +131,12 @@ class HttpSrv(object):
self.bans: dict[str, int] = {}
self.aclose: dict[str, int] = {}

dli: dict[str, tuple[float, int, "VFS", str, str]] = {}  # info
dls: dict[str, tuple[float, int]] = {}  # state
self.dli = self.tdli = dli
self.dls = self.tdls = dls
self.iiam = '<img src="%s.cpr/iiam.gif?cache=i" />' % (self.args.SRS,)

self.bound: set[tuple[str, int]] = set()
self.name = "hsrv" + nsuf
self.mutex = threading.Lock()
@@ -143,6 +152,7 @@ class HttpSrv(object):
self.t_periodic: Optional[threading.Thread] = None

self.u2fh = FHC()
self.u2sc: dict[str, tuple[int, "hashlib._Hash"]] = {}
self.pipes = CachedDict(0.2)
self.metrics = Metrics(self)
self.nreq = 0
@@ -175,6 +185,11 @@ class HttpSrv(object):
self.j2 = {x: env.get_template(x + ".html") for x in jn}
self.prism = has_resource(self.E, "web/deps/prism.js.gz")

if self.args.ipu:
self.ipu_iu, self.ipu_nm = load_ipu(self.log, self.args.ipu)
else:
self.ipu_iu = self.ipu_nm = None

self.ipa_nm = build_netmap(self.args.ipa)
self.xff_nm = build_netmap(self.args.xff_src)
self.xff_lan = build_netmap("lan")
@@ -197,6 +212,9 @@ class HttpSrv(object):
self.start_threads(4)

if nid:
self.tdli = {}
self.tdls = {}

if self.args.stackmon:
start_stackmon(self.args.stackmon, nid)

@@ -571,3 +589,32 @@ class HttpSrv(object):
ident += "a"

self.u2idx_free[ident] = u2idx

def read_dls(
self,
) -> tuple[
dict[str, tuple[float, int, str, str, str]], dict[str, tuple[float, int]]
]:
"""
mp-broker asking for local dl-info + dl-state;
reduce overhead by sending just the vfs vpath
"""
dli = {k: (a, b, c.vpath, d, e) for k, (a, b, c, d, e) in self.dli.items()}
return (dli, self.dls)

def write_dls(
self,
sdli: dict[str, tuple[float, int, str, str, str]],
dls: dict[str, tuple[float, int]],
) -> None:
"""
mp-broker pushing total dl-info + dl-state;
swap out the vfs vpath with the vfs node
"""
dli: dict[str, tuple[float, int, "VFS", str, str]] = {}
for k, (a, b, c, d, e) in sdli.items():
vn = self.asrv.vfs.all_nodes[c]
dli[k] = (a, b, vn, d, e)

self.tdli = dli
self.tdls = dls

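VFS nodes cannot be sent across the process boundary, so read_dls() downgrades each download-info tuple to the volume's vpath string and write_dls() re-resolves it against the receiving process's own all_nodes map. The round-trip in isolation, with the tuple fields abbreviated:

```python
# sketch of the vpath round-trip in read_dls()/write_dls();
# all_nodes maps vpath -> VFS node inside every process
def serialize_dli(dli: dict) -> dict:
    # (t0, size, vfs_node, ...) -> (t0, size, vpath_str, ...)
    return {k: (a, b, vn.vpath, d, e) for k, (a, b, vn, d, e) in dli.items()}

def deserialize_dli(sdli: dict, all_nodes: dict) -> dict:
    # re-attach the local VFS node matching the vpath string
    return {k: (a, b, all_nodes[c], d, e) for k, (a, b, c, d, e) in sdli.items()}
```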
@@ -72,6 +72,9 @@ class Metrics(object):
v = "{:.3f}".format(self.hsrv.t0)
addug("cpp_boot_unixtime", "seconds", v, t)

t = "number of active downloads"
addg("cpp_active_dl", str(len(self.hsrv.tdls)), t)

t = "number of open http(s) client connections"
addg("cpp_http_conns", str(self.hsrv.ncli), t)

@@ -128,7 +131,7 @@ class Metrics(object):
addbh("cpp_disk_size_bytes", "total HDD size of volume")
addbh("cpp_disk_free_bytes", "free HDD space in volume")
for vpath, vol in allvols:
free, total = get_df(vol.realpath)
free, total, _ = get_df(vol.realpath, False)
if free is None or total is None:
continue

@@ -4,6 +4,7 @@ from __future__ import print_function, unicode_literals
import argparse
import json
import os
import re
import shutil
import subprocess as sp
import sys
@@ -62,6 +63,9 @@ def have_ff(scmd: str) -> bool:
HAVE_FFMPEG = not os.environ.get("PRTY_NO_FFMPEG") and have_ff("ffmpeg")
HAVE_FFPROBE = not os.environ.get("PRTY_NO_FFPROBE") and have_ff("ffprobe")

CBZ_PICS = set("png jpg jpeg gif bmp tga tif tiff webp avif".split())
CBZ_01 = re.compile(r"(^|[^0-9v])0+[01]\b")


class MParser(object):
def __init__(self, cmdline: str) -> None:
@@ -126,6 +130,7 @@ def au_unpk(
log: "NamedLogger", fmt_map: dict[str, str], abspath: str, vn: Optional[VFS] = None
) -> str:
ret = ""
maxsz = 1024 * 1024 * 64
try:
ext = abspath.split(".")[-1].lower()
au, pk = fmt_map[ext].split(".")
@@ -148,17 +153,41 @@ def au_unpk(
zf = zipfile.ZipFile(abspath, "r")
zil = zf.infolist()
zil = [x for x in zil if x.filename.lower().split(".")[-1] == au]
if not zil:
raise Exception("no audio inside zip")
fi = zf.open(zil[0])

elif pk == "cbz":
import zipfile

zf = zipfile.ZipFile(abspath, "r")
znil = [(x.filename.lower(), x) for x in zf.infolist()]
nf = len(znil)
znil = [x for x in znil if x[0].split(".")[-1] in CBZ_PICS]
znil = [x for x in znil if "cover" in x[0]] or znil
znil = [x for x in znil if CBZ_01.search(x[0])] or znil
t = "cbz: %d files, %d hits" % (nf, len(znil))
if znil:
t += ", using " + znil[0][1].filename
log(t)
if not znil:
raise Exception("no images inside cbz")
fi = zf.open(znil[0][1])

else:
raise Exception("unknown compression %s" % (pk,))

fsz = 0
with os.fdopen(fd, "wb") as fo:
while True:
buf = fi.read(32768)
if not buf:
break

fsz += len(buf)
if fsz > maxsz:
raise Exception("zipbomb defused")

fo.write(buf)

return ret

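The cbz branch picks a cover image without extracting the whole archive: keep only picture entries, prefer names containing "cover", then names whose numbering starts at 0 or 1 (the CBZ_01 regex), and fall back to the first remaining hit; the copy loop then caps the output at 64 MiB so a zipbomb cannot fill the disk. The selection heuristic restated on its own:

```python
import re

CBZ_PICS = set("png jpg jpeg gif bmp tga tif tiff webp avif".split())
CBZ_01 = re.compile(r"(^|[^0-9v])0+[01]\b")

# standalone version of the cover-selection heuristic above
def pick_cover(filenames):
    names = [n.lower() for n in filenames]
    hits = [n for n in names if n.split(".")[-1] in CBZ_PICS]
    hits = [n for n in hits if "cover" in n] or hits      # prefer "cover"
    hits = [n for n in hits if CBZ_01.search(n)] or hits  # then page 0/1
    return hits[0] if hits else None

# pick_cover(["page-002.jpg", "page-001.jpg", "notes.txt"])
# -> "page-001.jpg" (matches CBZ_01; notes.txt is not a picture)
```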
@@ -24,17 +24,13 @@ class PWHash(object):
def __init__(self, args: argparse.Namespace):
self.args = args

try:
alg, ac = args.ah_alg.split(",")
except:
alg = args.ah_alg
ac = {}

zsl = args.ah_alg.split(",")
alg = zsl[0]
if alg == "none":
alg = ""

self.alg = alg
self.ac = ac
self.ac = zsl[1:]
if not alg:
self.on = False
self.hash = unicode
@@ -90,17 +86,23 @@ class PWHash(object):
its = 2
blksz = 8
para = 4
ramcap = 0  # openssl 1.1 = 32 MiB
try:
cost = 2 << int(self.ac[0])
its = int(self.ac[1])
blksz = int(self.ac[2])
para = int(self.ac[3])
ramcap = int(self.ac[4]) * 1024 * 1024
except:
pass

cfg = {"salt": self.salt, "n": cost, "r": blksz, "p": para, "dklen": 24}
if ramcap:
cfg["maxmem"] = ramcap

ret = plain.encode("utf-8")
for _ in range(its):
ret = hashlib.scrypt(ret, salt=self.salt, n=cost, r=blksz, p=para, dklen=24)
ret = hashlib.scrypt(ret, **cfg)

return "+" + base64.urlsafe_b64encode(ret).decode("utf-8")

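The change above lets a fifth --ah-alg parameter raise scrypt's memory cap past OpenSSL's default 32 MiB limit by passing maxmem through to hashlib. A runnable sketch of the iterated-scrypt flow; the default cost here is illustrative (the real value comes from the configured parameters):

```python
import base64
import hashlib

# sketch of the iterated-scrypt flow above; cost default is illustrative
def hash_pw(plain: str, salt: bytes, cost=1 << 14, its=2,
            blksz=8, para=4, ramcap=0) -> str:
    cfg = {"salt": salt, "n": cost, "r": blksz, "p": para, "dklen": 24}
    if ramcap:
        cfg["maxmem"] = ramcap  # lift openssl's default 32 MiB cap
    ret = plain.encode("utf-8")
    for _ in range(its):  # feed each digest back in as the next password
        ret = hashlib.scrypt(ret, **cfg)
    return "+" + base64.urlsafe_b64encode(ret).decode("utf-8")
```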
@@ -60,6 +60,7 @@ from .util import (
alltrace,
ansi_re,
build_netmap,
load_ipu,
min_ex,
mp,
odfusion,
@@ -111,7 +112,7 @@ class SvcHub(object):
self.stopping = False
self.stopped = False
self.reload_req = False
self.reloading = 0
self.reload_mutex = threading.Lock()
self.stop_cond = threading.Condition()
self.nsigs = 3
self.retcode = 0
@@ -210,6 +211,15 @@ class SvcHub(object):
t = "WARNING: --s-rd-sz (%d) is larger than --iobuf (%d); this may lead to reduced performance"
self.log("root", t % (args.s_rd_sz, args.iobuf), 3)

zs = ""
if args.th_ram_max < 0.22:
zs = "generate thumbnails"
elif args.th_ram_max < 1:
zs = "generate audio waveforms or spectrograms"
if zs:
t = "WARNING: --th-ram-max is very small (%.2f GiB); will not be able to %s"
self.log("root", t % (args.th_ram_max, zs), 3)

if args.chpw and args.idp_h_usr:
t = "ERROR: user-changeable passwords is incompatible with IdP/identity-providers; you must disable either --chpw or --idp-h-usr"
self.log("root", t, 1)
@@ -221,9 +231,15 @@ class SvcHub(object):
noch.update([x for x in zsl if x])
args.chpw_no = noch

if args.ipu:
iu, nm = load_ipu(self.log, args.ipu, True)
setattr(args, "ipu_iu", iu)
setattr(args, "ipu_nm", nm)

if not self.args.no_ses:
self.setup_session_db()

args.shr1 = ""
if args.shr:
self.setup_share_db()

@@ -372,6 +388,14 @@ class SvcHub(object):

self.broker = Broker(self)

# create netmaps early to avoid firewall gaps,
# but the mutex blocks multiprocessing startup
for zs in "ipu_iu ftp_ipa_nm tftp_ipa_nm".split():
try:
getattr(args, zs).mutex = threading.Lock()
except:
pass

def setup_session_db(self) -> None:
if not HAVE_SQLITE3:
self.args.no_ses = True
@@ -446,6 +470,7 @@ class SvcHub(object):
raise Exception(t)

al.shr = "/%s/" % (al.shr,)
al.shr1 = al.shr[1:]

create = True
modified = False
@@ -755,8 +780,8 @@ class SvcHub(object):
al.idp_h_grp = al.idp_h_grp.lower()
al.idp_h_key = al.idp_h_key.lower()

al.ftp_ipa_nm = build_netmap(al.ftp_ipa or al.ipa)
al.tftp_ipa_nm = build_netmap(al.tftp_ipa or al.ipa)
al.ftp_ipa_nm = build_netmap(al.ftp_ipa or al.ipa, True)
al.tftp_ipa_nm = build_netmap(al.tftp_ipa or al.ipa, True)

mte = ODict.fromkeys(DEF_MTE.split(","), True)
al.mte = odfusion(mte, al.mte)
@@ -803,6 +828,24 @@ class SvcHub(object):
if len(al.tcolor) == 3:  # fc5 => ffcc55
al.tcolor = "".join([x * 2 for x in al.tcolor])

zs = al.u2sz
zsl = zs.split(",")
if len(zsl) not in (1, 3):
t = "invalid --u2sz; must be either one number, or a comma-separated list of three numbers (min,default,max)"
raise Exception(t)
if len(zsl) < 3:
zsl = ["1", zs, zs]
zi2 = 1
for zs in zsl:
zi = int(zs)
# arbitrary constraint (anything above 2 GiB is probably unintended)
if zi < 1 or zi > 2047:
raise Exception("invalid --u2sz; minimum is 1, max is 2047")
if zi < zi2:
raise Exception("invalid --u2sz; values must be equal or ascending")
zi2 = zi
al.u2sz = ",".join(zsl)

return True

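--u2sz now accepts either a single number or a min,default,max triple; a lone number is expanded to "1,N,N", each value must be within 1..2047 (MiB), and the list must be non-descending. The validation restated as a standalone function:

```python
# standalone restatement of the --u2sz validation above
def parse_u2sz(zs: str) -> str:
    zsl = zs.split(",")
    if len(zsl) not in (1, 3):
        raise ValueError("--u2sz must be one number, or min,default,max")
    if len(zsl) < 3:
        zsl = ["1", zs, zs]  # single value: min=1, default=max=N
    prev = 1
    for part in zsl:
        zi = int(part)
        if zi < 1 or zi > 2047:  # anything above 2 GiB is likely a typo
            raise ValueError("--u2sz: minimum is 1, max is 2047")
        if zi < prev:
            raise ValueError("--u2sz: values must be equal or ascending")
        prev = zi
    return ",".join(zsl)
```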
def _ipa2re(self, txt) -> Optional[re.Pattern]:
@@ -970,41 +1013,18 @@ class SvcHub(object):
except:
self.log("root", "ssdp startup failed;\n" + min_ex(), 3)

def reload(self) -> str:
with self.up2k.mutex:
if self.reloading:
return "cannot reload; already in progress"
self.reloading = 1

Daemon(self._reload, "reloading")
return "reload initiated"

def _reload(self, rescan_all_vols: bool = True, up2k: bool = True) -> None:
with self.up2k.mutex:
if self.reloading != 1:
return
self.reloading = 2
def reload(self, rescan_all_vols: bool, up2k: bool) -> str:
t = "config has been reloaded"
with self.reload_mutex:
self.log("root", "reloading config")
self.asrv.reload(9 if up2k else 4)
if up2k:
self.up2k.reload(rescan_all_vols)
t += "; volumes are now reinitializing"
else:
self.log("root", "reload done")
self.broker.reload()
self.reloading = 0

def _reload_blocking(self, rescan_all_vols: bool = True, up2k: bool = True) -> None:
while True:
with self.up2k.mutex:
if self.reloading < 2:
self.reloading = 1
break
time.sleep(0.05)

# try to handle multiple pending IdP reloads at once:
time.sleep(0.2)

self._reload(rescan_all_vols=rescan_all_vols, up2k=up2k)
return t

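The old three-state reloading counter plus background-daemon handoff collapses into one reload_mutex: callers now invoke reload() directly (e.g. via broker.ask("reload", ...)) and it serializes itself. Schematically, with the hub components as constructor arguments:

```python
import threading

# schematic of the simplified reload flow; asrv/up2k/broker stand in
# for the hub components touched above
class HubSketch:
    def __init__(self, asrv, up2k, broker):
        self.reload_mutex = threading.Lock()
        self.asrv, self.up2k, self.broker = asrv, up2k, broker

    def reload(self, rescan_all_vols: bool, up2k: bool) -> str:
        t = "config has been reloaded"
        with self.reload_mutex:  # one reload at a time; callers block
            self.asrv.reload(9 if up2k else 4)
            if up2k:
                self.up2k.reload(rescan_all_vols)
                t += "; volumes are now reinitializing"
            self.broker.reload()
        return t
```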
def _reload_sessions(self) -> None:
with self.asrv.mutex:
@@ -1018,7 +1038,7 @@ class SvcHub(object):

if self.reload_req:
self.reload_req = False
self.reload()
self.reload(True, True)

self.shutdown()

@@ -100,7 +100,7 @@ def gen_hdr(

# spec says to put zeros when !crc if bit3 (streaming)
# however infozip does actual sz and it even works on winxp
# (same reasning for z64 extradata later)
# (same reasoning for z64 extradata later)
vsz = 0xFFFFFFFF if z64 else sz
ret += spack(b"<LL", vsz, vsz)

@@ -371,7 +371,7 @@ class TcpSrv(object):
if self.args.q:
print(msg)

self.hub.broker.say("listen", srv)
self.hub.broker.say("httpsrv.listen", srv)

self.srv = srvs
self.bound = bound
@@ -379,7 +379,7 @@ class TcpSrv(object):
self._distribute_netdevs()

def _distribute_netdevs(self):
self.hub.broker.say("set_netdevs", self.netdevs)
self.hub.broker.say("httpsrv.set_netdevs", self.netdevs)
self.hub.start_zeroconf()
gencert(self.log, self.args, self.netdevs)
self.hub.restart_ftpd()

@@ -269,6 +269,7 @@ class Tftpd(object):
"*",
not self.args.no_scandir,
[[True, False]],
throw=True,
)
dnames = set([x[0] for x in vfs_ls if stat.S_ISDIR(x[1].st_mode)])
dirs1 = [(v.st_mtime, v.st_size, k + "/") for k, v in vfs_ls if k in dnames]

@@ -20,7 +20,6 @@ from .util import (
FFMPEG_URL,
Cooldown,
Daemon,
Pebkac,
afsenc,
fsenc,
min_ex,
@@ -164,6 +163,7 @@ class ThumbSrv(object):
self.ram: dict[str, float] = {}
self.memcond = threading.Condition(self.mutex)
self.stopping = False
self.rm_nullthumbs = True  # forget failed conversions on startup
self.nthr = max(1, self.args.th_mt)

self.q: Queue[Optional[tuple[str, str, str, VFS]]] = Queue(self.nthr * 4)
@@ -862,7 +862,6 @@ class ThumbSrv(object):
def cleaner(self) -> None:
interval = self.args.th_clean
while True:
time.sleep(interval)
ndirs = 0
for vol, histpath in self.asrv.vfs.histtab.items():
if histpath.startswith(vol):
@@ -876,6 +875,8 @@ class ThumbSrv(object):
self.log("\033[Jcln err in %s: %r" % (histpath, ex), 3)

self.log("\033[Jcln ok; rm {} dirs".format(ndirs))
self.rm_nullthumbs = False
time.sleep(interval)

def clean(self, histpath: str) -> int:
ret = 0
@@ -896,7 +897,9 @@ class ThumbSrv(object):
prev_b64 = None
prev_fp = ""
try:
t1 = statdir(self.log_func, not self.args.no_scandir, False, thumbpath)
t1 = statdir(
self.log_func, not self.args.no_scandir, False, thumbpath, False
)
ents = sorted(list(t1))
except:
return 0
@@ -937,6 +940,10 @@ class ThumbSrv(object):

continue

if self.rm_nullthumbs and not inf.st_size:
bos.unlink(fp)
continue

if b64 == prev_b64:
self.log("rm replaced [{}]".format(fp))
bos.unlink(prev_fp)

@@ -70,6 +70,9 @@ class U2idx(object):
self.log_func("u2idx", msg, c)

def shutdown(self) -> None:
if not HAVE_SQLITE3:
return

for cur in self.cur.values():
db = cur.connection
try:
@@ -80,6 +83,12 @@ class U2idx(object):
cur.close()
db.close()

for cur in (self.mem_cur, self.sh_cur):
if cur:
db = cur.connection
cur.close()
db.close()

def fsearch(
self, uname: str, vols: list[VFS], body: dict[str, Any]
) -> list[dict[str, Any]]:
@@ -95,7 +104,7 @@ class U2idx(object):
uv: list[Union[str, int]] = [wark[:16], wark]

try:
return self.run_query(uname, vols, uq, uv, False, 99999)[0]
return self.run_query(uname, vols, uq, uv, False, True, 99999)[0]
except:
raise Pebkac(500, min_ex())

@@ -301,7 +310,7 @@ class U2idx(object):
q += " lower({}) {} ? ) ".format(field, oper)

try:
return self.run_query(uname, vols, q, va, have_mt, lim)
return self.run_query(uname, vols, q, va, have_mt, True, lim)
except Exception as ex:
raise Pebkac(500, repr(ex))

@@ -312,6 +321,7 @@ class U2idx(object):
uq: str,
uv: list[Union[str, int]],
have_mt: bool,
sort: bool,
lim: int,
) -> tuple[list[dict[str, Any]], list[str], bool]:
if self.args.srch_dbg:
@@ -458,7 +468,8 @@ class U2idx(object):
done_flag.append(True)
self.active_id = ""

ret.sort(key=itemgetter("rp"))
if sort:
ret.sort(key=itemgetter("rp"))

return ret, list(taglist.keys()), lim < 0 and not clamped

@@ -20,7 +20,7 @@ from copy import deepcopy
from queue import Queue

from .__init__ import ANYWIN, PY2, TYPE_CHECKING, WINDOWS, E
from .authsrv import LEELOO_DALLAS, SSEELOG, VFS, AuthSrv
from .authsrv import LEELOO_DALLAS, SEESLOG, VFS, AuthSrv
from .bos import bos
from .cfg import vf_bmap, vf_cmap, vf_vmap
from .fsutil import Fstab
@@ -89,6 +89,8 @@ zsg = "avif,avifs,bmp,gif,heic,heics,heif,heifs,ico,j2p,j2k,jp2,jpeg,jpg,jpx,png
CV_EXTS = set(zsg.split(","))


SBUSY = "cannot receive uploads right now;\nserver busy with %s.\nPlease wait; the client will retry..."

HINT_HISTPATH = "you could try moving the database to another location (preferably an SSD or NVME drive) using either the --hist argument (global option for all volumes), or the hist volflag (just for this volume)"


@@ -125,12 +127,22 @@ class Up2k(object):
self.args = hub.args
self.log_func = hub.log

self.vfs = self.asrv.vfs
self.acct = self.asrv.acct
self.iacct = self.asrv.iacct
self.grps = self.asrv.grps

self.salt = self.args.warksalt
self.r_hash = re.compile("^[0-9a-zA-Z_-]{44}$")

self.gid = 0
self.gt0 = 0
self.gt1 = 0
self.stop = False
self.mutex = threading.Lock()
self.reload_mutex = threading.Lock()
self.reload_flag = 0
self.reloading = False
self.blocked: Optional[str] = None
self.pp: Optional[ProgressPrinter] = None
self.rescan_cond = threading.Condition()
@@ -203,7 +215,38 @@ class Up2k(object):

Daemon(self.deferred_init, "up2k-deferred-init")

def unpp(self) -> None:
self.gt1 = time.time()
if self.pp:
self.pp.end = True
self.pp = None

def reload(self, rescan_all_vols: bool) -> None:
n = 2 if rescan_all_vols else 1
with self.reload_mutex:
if self.reload_flag < n:
self.reload_flag = n
with self.rescan_cond:
self.rescan_cond.notify_all()

def _reload_thr(self) -> None:
while self.pp:
time.sleep(0.1)
while True:
with self.reload_mutex:
if not self.reload_flag:
break
rav = self.reload_flag == 2
self.reload_flag = 0
gt1 = self.gt1
with self.mutex:
self._reload(rav)
while gt1 == self.gt1 or self.pp:
time.sleep(0.1)

self.reloading = False

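Up2k reloads are coalesced the same way: reload() only raises a pending level (1 = reload, 2 = reload plus rescan of all volumes) and pokes rescan_cond, and the scan loop later spawns _reload_thr, which drains the flag and loops if another request arrived in the meantime. The request side in isolation:

```python
import threading

# the request side of the reload coalescing, in isolation
class ReloadFlag:
    def __init__(self):
        self.reload_mutex = threading.Lock()
        self.rescan_cond = threading.Condition()
        self.reload_flag = 0  # 0=idle, 1=reload, 2=reload+rescan

    def request(self, rescan_all_vols: bool) -> None:
        n = 2 if rescan_all_vols else 1
        with self.reload_mutex:
            if self.reload_flag < n:  # never downgrade a pending rescan
                self.reload_flag = n
        with self.rescan_cond:
            self.rescan_cond.notify_all()  # wake the scan loop
```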
def _reload(self, rescan_all_vols: bool) -> None:
"""mutex(main) me"""
self.log("reload #{} scheduled".format(self.gid + 1))
all_vols = self.asrv.vfs.all_vols
@@ -228,10 +271,7 @@ class Up2k(object):
with self.mutex, self.reg_mutex:
self._drop_caches()

if self.pp:
self.pp.end = True
self.pp = None

self.unpp()
return

if not self.pp and self.args.exit == "idx":
@@ -311,8 +351,8 @@ class Up2k(object):

def _active_uploads(self, uname: str) -> list[tuple[float, int, int, str]]:
ret = []
for vtop in self.asrv.vfs.aread[uname]:
vfs = self.asrv.vfs.all_vols.get(vtop)
for vtop in self.vfs.aread.get(uname) or []:
vfs = self.vfs.all_vols.get(vtop)
if not vfs:  # dbv only
continue
ptop = vfs.realpath
@@ -357,17 +397,18 @@ class Up2k(object):
return '[{"timeout":1}]'

ret: list[tuple[int, str, int, int, int]] = []
userset = set([(uname or "\n"), "*"])
try:
for ptop, tab2 in self.registry.items():
cfg = self.flags.get(ptop, {}).get("u2abort", 1)
if not cfg:
continue
addr = (ip or "\n") if cfg in (1, 2) else ""
user = (uname or "\n") if cfg in (1, 3) else ""
user = userset if cfg in (1, 3) else None
for job in tab2.values():
if (
"done" in job
or (user and user != job["user"])
or (user and job["user"] not in user)
or (addr and addr != job["addr"])
):
continue
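The u2abort matching switches from an exact username comparison to membership in a set, so a logged-in user can also cancel uploads that were recorded as "*" (anonymous); the same change is mirrored in abort_upload further down. Reduced to its core:

```python
# core of the new u2abort matching; cfg: 1=user+ip, 2=ip only, 3=user only
def may_abort(job: dict, uname: str, ip: str, cfg: int = 1) -> bool:
    addr = (ip or "\n") if cfg in (1, 2) else ""
    users = {(uname or "\n"), "*"} if cfg in (1, 3) else None
    if "done" in job:
        return False  # finished uploads cannot be aborted
    if users and job["user"] not in users:
        return False  # owned by someone else (and not anonymous)
    if addr and addr != job["addr"]:
        return False  # started from a different client address
    return True
```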
@@ -484,6 +525,12 @@ class Up2k(object):
if self.stop:
return

with self.reload_mutex:
if self.reload_flag and not self.reloading:
self.reloading = True
zs = "up2k-reload-%d" % (self.gid,)
Daemon(self._reload_thr, zs)

now = time.time()
if now < cooldown:
# self.log("SR: cd - now = {:.2f}".format(cooldown - now), 5)
@@ -520,7 +567,7 @@ class Up2k(object):
raise

with self.mutex:
for vp, vol in sorted(self.asrv.vfs.all_vols.items()):
for vp, vol in sorted(self.vfs.all_vols.items()):
maxage = vol.flags.get("scan")
if not maxage:
continue
@@ -553,7 +600,7 @@ class Up2k(object):

if vols:
cooldown = now + 10
err = self.rescan(self.asrv.vfs.all_vols, vols, False, False)
err = self.rescan(self.vfs.all_vols, vols, False, False)
if err:
for v in vols:
self.need_rescan.add(v)
@@ -566,7 +613,7 @@ class Up2k(object):

def _check_lifetimes(self) -> float:
now = time.time()
timeout = now + 9001
for vp, vol in sorted(self.asrv.vfs.all_vols.items()):
for vp, vol in sorted(self.vfs.all_vols.items()):
lifetime = vol.flags.get("lifetime")
if not lifetime:
continue
@@ -620,7 +667,7 @@ class Up2k(object):
maxage = self.args.shr_rt * 60
low = now - maxage

vn = self.asrv.vfs.nodes.get(self.args.shr.strip("/"))
vn = self.vfs.nodes.get(self.args.shr.strip("/"))
active = vn and vn.nodes

db = sqlite3.connect(self.args.shr_db, timeout=2)
@@ -645,7 +692,7 @@ class Up2k(object):
db.commit()

if reload:
Daemon(self.hub._reload_blocking, "sharedrop", (False, False))
Daemon(self.hub.reload, "sharedrop", (False, False))

q = "select min(t1) from sh where t1 > ?"
(earliest,) = cur.execute(q, (1,)).fetchone()
@@ -671,7 +718,7 @@ class Up2k(object):
return 2

ret = 9001
for _, vol in sorted(self.asrv.vfs.all_vols.items()):
for _, vol in sorted(self.vfs.all_vols.items()):
rp = vol.realpath
cur = self.cur.get(rp)
if not cur:
@@ -773,6 +820,8 @@ class Up2k(object):
with self.mutex:
gid = self.gid

self.gt0 = time.time()

nspin = 0
while True:
nspin += 1
@@ -795,6 +844,11 @@ class Up2k(object):
if gid:
self.log("reload #%d running" % (gid,))

self.vfs = self.asrv.vfs
self.acct = self.asrv.acct
self.iacct = self.asrv.iacct
self.grps = self.asrv.grps

vols = list(all_vols.values())
t0 = time.time()
have_e2d = False
@@ -858,7 +912,7 @@ class Up2k(object):
self._drop_caches()

for vol in vols:
if self.stop:
if self.stop or gid != self.gid:
break

en = set(vol.flags.get("mte", {}))
@@ -989,7 +1043,7 @@ class Up2k(object):
if self.mtag:
Daemon(self._run_all_mtp, "up2k-mtp-scan", (gid,))
else:
self.pp = None
self.unpp()

return have_e2d

@@ -997,7 +1051,7 @@ class Up2k(object):
self, ptop: str, flags: dict[str, Any]
) -> Optional[tuple["sqlite3.Cursor", str]]:
"""mutex(main,reg) me"""
histpath = self.asrv.vfs.histtab.get(ptop)
histpath = self.vfs.histtab.get(ptop)
if not histpath:
self.log("no histpath for [{}]".format(ptop))
return None
@@ -1010,11 +1064,12 @@ class Up2k(object):
return None

vpath = "?"
for k, v in self.asrv.vfs.all_vols.items():
for k, v in self.vfs.all_vols.items():
if v.realpath == ptop:
vpath = k

_, flags = self._expr_idx_filter(flags)
n4g = bool(flags.get("noforget"))

ft = "\033[0;32m{}{:.0}"
ff = "\033[0;35m{}{:.0}"
@@ -1072,21 +1127,35 @@ class Up2k(object):
for job in reg2.values():
job["dwrk"] = job["wark"]

rm = []
for k, job in reg2.items():
job["ptop"] = ptop
if "done" in job:
job["need"] = job["hash"] = emptylist
else:
if "need" not in job:
job["need"] = []
if "hash" not in job:
job["hash"] = []

fp = djoin(ptop, job["prel"], job["name"])
if bos.path.exists(fp):
reg[k] = job
if "done" in job:
job["need"] = job["hash"] = emptylist
continue
job["poke"] = time.time()
job["busy"] = {}
else:
self.log("ign deleted file in snap: [{}]".format(fp))
if not n4g:
rm.append(k)
continue

for x in rm:
del reg2[x]

if drp is None:
drp = [k for k, v in reg.items() if not v.get("need", [])]
drp = [k for k, v in reg.items() if not v["need"]]
else:
drp = [x for x in drp if x in reg]

@@ -1162,7 +1231,7 @@ class Up2k(object):
def _verify_db_cache(self, cur: "sqlite3.Cursor", vpath: str) -> None:
# check if list of intersecting volumes changed since last use; drop caches if so
prefix = (vpath + "/").lstrip("/")
zsl = [x for x in self.asrv.vfs.all_vols if x.startswith(prefix)]
zsl = [x for x in self.vfs.all_vols if x.startswith(prefix)]
zsl = [x[len(prefix) :] for x in zsl]
zsl.sort()
zb = hashlib.sha1("\n".join(zsl).encode("utf-8", "replace")).digest()
@@ -1207,7 +1276,7 @@ class Up2k(object):
if d != vol and (d.vpath.startswith(vol.vpath + "/") or not vol.vpath)
]
excl += [absreal(x) for x in excl]
excl += list(self.asrv.vfs.histtab.values())
excl += list(self.vfs.histtab.values())
if WINDOWS:
excl = [x.replace("/", "\\") for x in excl]
else:
@@ -1331,7 +1400,7 @@ class Up2k(object):
rds = rd + "/" if rd else ""
cdirs = cdir + os.sep

g = statdir(self.log_func, not self.args.no_scandir, True, cdir)
g = statdir(self.log_func, not self.args.no_scandir, True, cdir, False)
gl = sorted(g)
partials = set([x[0] for x in gl if "PARTIAL" in x[0]])
for iname, inf in gl:
@@ -1395,7 +1464,7 @@ class Up2k(object):
t = "failed to index subdir [{}]:\n{}"
self.log(t.format(abspath, min_ex()), c=1)
elif not stat.S_ISREG(inf.st_mode):
self.log("skip type-{:x} file [{}]".format(inf.st_mode, abspath))
self.log("skip type-0%o file [%s]" % (inf.st_mode, abspath))
else:
# self.log("file: {}".format(abspath))
if rp.endswith(".PARTIAL") and time.time() - lmod < 60:
@@ -1717,7 +1786,7 @@ class Up2k(object):

excl = [
d[len(vol.vpath) :].lstrip("/")
for d in self.asrv.vfs.all_vols
for d in self.vfs.all_vols
if d != vol.vpath and (d.startswith(vol.vpath + "/") or not vol.vpath)
]
qexa: list[str] = []
@@ -1869,7 +1938,7 @@ class Up2k(object):
def _drop_caches(self) -> None:
"""mutex(main,reg) me"""
self.log("dropping caches for a full filesystem scan")
for vol in self.asrv.vfs.all_vols.values():
for vol in self.vfs.all_vols.values():
reg = self.register_vpath(vol.realpath, vol.flags)
if not reg:
continue
@@ -2097,7 +2166,7 @@ class Up2k(object):
self._run_one_mtp(ptop, gid)

vtop = "\n"
for vol in self.asrv.vfs.all_vols.values():
for vol in self.vfs.all_vols.values():
if vol.realpath == ptop:
vtop = vol.vpath
if "running mtp" in self.volstate.get(vtop, ""):
@@ -2107,7 +2176,7 @@ class Up2k(object):
msg = "mtp finished in {:.2f} sec ({})"
self.log(msg.format(td, s2hms(td, True)))

self.pp = None
self.unpp()
if self.args.exit == "idx":
self.hub.sigterm()

@@ -2749,6 +2818,9 @@ class Up2k(object):
) -> dict[str, Any]:
# busy_aps is u2fh (always undefined if -j0) so this is safe
self.busy_aps = busy_aps
if self.reload_flag or self.reloading:
raise Pebkac(503, SBUSY % ("fs-reload",))

got_lock = False
try:
# bit expensive; 3.9=10x 3.11=2x
@@ -2757,8 +2829,7 @@ class Up2k(object):
with self.reg_mutex:
ret = self._handle_json(cj)
else:
t = "cannot receive uploads right now;\nserver busy with {}.\nPlease wait; the client will retry..."
raise Pebkac(503, t.format(self.blocked or "[unknown]"))
raise Pebkac(503, SBUSY % (self.blocked or "[unknown]",))
except TypeError:
if not PY2:
raise
@@ -2800,7 +2871,7 @@ class Up2k(object):
if True:
jcur = self.cur.get(ptop)
reg = self.registry[ptop]
vfs = self.asrv.vfs.all_vols[cj["vtop"]]
vfs = self.vfs.all_vols[cj["vtop"]]
n4g = bool(vfs.flags.get("noforget"))
noclone = bool(vfs.flags.get("noclone"))
rand = vfs.flags.get("rand") or cj.get("rand")
@@ -2824,7 +2895,7 @@ class Up2k(object):

alts: list[tuple[int, int, dict[str, Any], "sqlite3.Cursor", str, str]] = []
for ptop, cur in vols:
allv = self.asrv.vfs.all_vols
allv = self.vfs.all_vols
cvfs = next((v for v in allv.values() if v.realpath == ptop), vfs)
vtop = cj["vtop"] if cur == jcur else cvfs.vpath

@@ -2875,9 +2946,6 @@ class Up2k(object):
"user": cj["user"],
"addr": ip,
"at": at,
"hash": [],
"need": [],
"busy": {},
}
for k in ["life"]:
if k in cj:
@@ -2911,17 +2979,20 @@ class Up2k(object):
hashes2, st = self._hashlist_from_file(orig_ap)
wark2 = up2k_wark_from_hashlist(self.salt, st.st_size, hashes2)
if dwark != wark2:
t = "will not dedup (fs index desync): fs=%s, db=%s, file: %s"
self.log(t % (wark2, dwark, orig_ap))
t = "will not dedup (fs index desync): fs=%s, db=%s, file: %s\n%s"
self.log(t % (wark2, dwark, orig_ap, rj))
lost.append(dupe[3:])
continue
data_ok = True
job = rj
break

if job and wark in reg:
# self.log("pop " + wark + " " + job["name"] + " handle_json db", 4)
del reg[wark]
if job:
if wark in reg:
del reg[wark]
job["hash"] = job["need"] = []
job["done"] = True
job["busy"] = {}

if lost:
c2 = None
@@ -2950,7 +3021,7 @@ class Up2k(object):
path = djoin(rj["ptop"], rj["prel"], fn)
try:
st = bos.stat(path)
if st.st_size > 0 or not rj["need"]:
if st.st_size > 0 or "done" in rj:
# upload completed or both present
break
except:
@@ -2964,13 +3035,13 @@ class Up2k(object):
inc_ap = djoin(cj["ptop"], cj["prel"], cj["name"])
orig_ap = djoin(rj["ptop"], rj["prel"], rj["name"])

if self.args.nw or n4g or not st:
if self.args.nw or n4g or not st or "done" not in rj:
pass

elif st.st_size != rj["size"]:
t = "will not dedup (fs index desync): {}, size fs={} db={}, mtime fs={} db={}, file: {}"
t = "will not dedup (fs index desync): {}, size fs={} db={}, mtime fs={} db={}, file: {}\n{}"
t = t.format(
wark, st.st_size, rj["size"], st.st_mtime, rj["lmod"], path
wark, st.st_size, rj["size"], st.st_mtime, rj["lmod"], path, rj
)
self.log(t)
del reg[wark]
@@ -2980,8 +3051,8 @@ class Up2k(object):
hashes2, _ = self._hashlist_from_file(orig_ap)
wark2 = up2k_wark_from_hashlist(self.salt, st.st_size, hashes2)
if wark != wark2:
t = "will not dedup (fs index desync): fs=%s, idx=%s, file: %s"
self.log(t % (wark2, wark, orig_ap))
t = "will not dedup (fs index desync): fs=%s, idx=%s, file: %s\n%s"
self.log(t % (wark2, wark, orig_ap, rj))
del reg[wark]

if job or wark in reg:
@@ -2996,7 +3067,7 @@ class Up2k(object):
dst = djoin(cj["ptop"], cj["prel"], cj["name"])
vsrc = djoin(job["vtop"], job["prel"], job["name"])
vsrc = vsrc.replace("\\", "/")  # just for prints anyways
if job["need"]:
if "done" not in job:
self.log("unfinished:\n  {0}\n  {1}".format(src, dst))
err = "partial upload exists at a different location; please resume uploading here instead:\n"
err += "/" + quotep(vsrc) + " "
@@ -3067,7 +3138,7 @@ class Up2k(object):
vp,
job["host"],
job["user"],
self.asrv.vfs.get_perms(job["vtop"], job["user"]),
self.vfs.get_perms(job["vtop"], job["user"]),
job["lmod"],
job["size"],
job["addr"],
@@ -3079,7 +3150,7 @@ class Up2k(object):
self.log(t, 1)
raise Pebkac(403, t)
if hr.get("reloc"):
x = pathmod(self.asrv.vfs, dst, vp, hr["reloc"])
x = pathmod(self.vfs, dst, vp, hr["reloc"])
if x:
zvfs = vfs
pdir, _, job["name"], (vfs, rem) = x
@@ -3357,14 +3428,14 @@ class Up2k(object):

def handle_chunks(
self, ptop: str, wark: str, chashes: list[str]
) -> tuple[list[str], int, list[list[int]], str, float, bool]:
) -> tuple[list[str], int, list[list[int]], str, float, int, bool]:
with self.mutex, self.reg_mutex:
self.db_act = self.vol_act[ptop] = time.time()
job = self.registry[ptop].get(wark)
if not job:
known = " ".join([x for x in self.registry[ptop].keys()])
self.log("unknown wark [{}], known: {}".format(wark, known))
raise Pebkac(400, "unknown wark" + SSEELOG)
raise Pebkac(400, "unknown wark" + SEESLOG)

if "t0c" not in job:
job["t0c"] = time.time()
@@ -3380,7 +3451,7 @@ class Up2k(object):
try:
nchunk = uniq.index(chashes[0])
except:
raise Pebkac(400, "unknown chunk0 [%s]" % (chashes[0]))
raise Pebkac(400, "unknown chunk0 [%s]" % (chashes[0],))
expanded = [chashes[0]]
for prefix in chashes[1:]:
nchunk += 1
@@ -3415,7 +3486,7 @@ class Up2k(object):
for chash in chashes:
nchunk = [n for n, v in enumerate(job["hash"]) if v == chash]
if not nchunk:
raise Pebkac(400, "unknown chunk %s" % (chash))
raise Pebkac(400, "unknown chunk %s" % (chash,))

ofs = [chunksize * x for x in nchunk]
coffsets.append(ofs)
@@ -3440,7 +3511,7 @@ class Up2k(object):

job["poke"] = time.time()

return chashes, chunksize, coffsets, path, job["lmod"], job["sprs"]
return chashes, chunksize, coffsets, path, job["lmod"], job["size"], job["sprs"]

def fast_confirm_chunks(
self, ptop: str, wark: str, chashes: list[str]
@@ -3482,6 +3553,7 @@ class Up2k(object):
for chash in written:
job["need"].remove(chash)
except Exception as ex:
# dead tcp connections can get here by timeout (OK)
return -2, "confirm_chunk, chash(%s) %r" % (chash, ex)  # type: ignore

ret = len(job["need"])
@@ -3509,11 +3581,13 @@ class Up2k(object):
src = djoin(pdir, job["tnam"])
dst = djoin(pdir, job["name"])
except Exception as ex:
raise Pebkac(500, "finish_upload, wark, " + repr(ex))
self.log(min_ex(), 1)
raise Pebkac(500, "finish_upload, wark, %r%s" % (ex, SEESLOG))

if job["need"]:
t = "finish_upload {} with remaining chunks {}"
raise Pebkac(500, t.format(wark, job["need"]))
self.log(min_ex(), 1)
t = "finish_upload %s with remaining chunks %s%s"
raise Pebkac(500, t % (wark, job["need"], SEESLOG))

upt = job.get("at") or time.time()
vflags = self.flags[ptop]
@@ -3536,7 +3610,7 @@ class Up2k(object):
wake_sr = False
try:
flt = job["life"]
vfs = self.asrv.vfs.all_vols[job["vtop"]]
vfs = self.vfs.all_vols[job["vtop"]]
vlt = vfs.flags["lifetime"]
if vlt and flt > 1 and flt < vlt:
upt -= vlt - flt
@@ -3712,7 +3786,7 @@ class Up2k(object):
djoin(vtop, rd, fn),
host,
usr,
self.asrv.vfs.get_perms(djoin(vtop, rd, fn), usr),
self.vfs.get_perms(djoin(vtop, rd, fn), usr),
ts,
sz,
ip,
@@ -3822,21 +3896,23 @@ class Up2k(object):
partial = ""
if not unpost:
permsets = [[True, False, False, True]]
vn, rem = self.asrv.vfs.get(vpath, uname, *permsets[0])
vn, rem = vn.get_dbv(rem)
vn0, rem0 = self.vfs.get(vpath, uname, *permsets[0])
vn, rem = vn0.get_dbv(rem0)
else:
# unpost with missing permissions? verify with db
permsets = [[False, True]]
vn, rem = self.asrv.vfs.get(vpath, uname, *permsets[0])
vn, rem = vn.get_dbv(rem)
vn0, rem0 = self.vfs.get(vpath, uname, *permsets[0])
vn, rem = vn0.get_dbv(rem0)
ptop = vn.realpath
with self.mutex, self.reg_mutex:
abrt_cfg = self.flags.get(ptop, {}).get("u2abort", 1)
addr = (ip or "\n") if abrt_cfg in (1, 2) else ""
user = (uname or "\n") if abrt_cfg in (1, 3) else ""
user = ((uname or "\n"), "*") if abrt_cfg in (1, 3) else None
reg = self.registry.get(ptop, {}) if abrt_cfg else {}
for wark, job in reg.items():
if (user and user != job["user"]) or (addr and addr != job["addr"]):
if (addr and addr != job["addr"]) or (
user and job["user"] not in user
):
continue
jrem = djoin(job["prel"], job["name"])
if ANYWIN:
@@ -3882,17 +3958,17 @@ class Up2k(object):

scandir = not self.args.no_scandir
if is_dir:
g = vn.walk("", rem, [], uname, permsets, True, scandir, True)
# note: deletion inside shares would require a rewrite here;
# shares necessitate get_dbv which is incompatible with walk
g = vn0.walk("", rem0, [], uname, permsets, True, scandir, True)
if unpost:
raise Pebkac(400, "cannot unpost folders")
elif stat.S_ISLNK(st.st_mode) or stat.S_ISREG(st.st_mode):
dbv, vrem = self.asrv.vfs.get(vpath, uname, *permsets[0])
dbv, vrem = dbv.get_dbv(vrem)
voldir = vsplit(vrem)[0]
voldir = vsplit(rem)[0]
vpath_dir = vsplit(vpath)[0]
g = [(dbv, voldir, vpath_dir, adir, [(fn, 0)], [], {})]  # type: ignore
g = [(vn, voldir, vpath_dir, adir, [(fn, 0)], [], {})]  # type: ignore
else:
self.log("rm: skip type-{:x} file [{}]".format(st.st_mode, atop))
self.log("rm: skip type-0%o file [%s]" % (st.st_mode, atop))
return 0, [], []

xbd = vn.flags.get("xbd")
@@ -3917,7 +3993,10 @@ class Up2k(object):
volpath = ("%s/%s" % (vrem, fn)).strip("/")
vpath = ("%s/%s" % (dbv.vpath, volpath)).strip("/")
self.log("rm %s\n  %s" % (vpath, abspath))
_ = dbv.get(volpath, uname, *permsets[0])
if not unpost:
# recursion-only sanchk
_ = dbv.get(volpath, uname, *permsets[0])

if xbd:
if not runhook(
self.log,
@@ -3929,7 +4008,7 @@ class Up2k(object):
vpath,
"",
uname,
self.asrv.vfs.get_perms(vpath, uname),
self.vfs.get_perms(vpath, uname),
stl.st_mtime,
st.st_size,
ip,
@@ -3969,7 +4048,7 @@ class Up2k(object):
vpath,
"",
uname,
self.asrv.vfs.get_perms(vpath, uname),
self.vfs.get_perms(vpath, uname),
stl.st_mtime,
st.st_size,
ip,
@@ -3989,17 +4068,226 @@ class Up2k(object):

return n_files, ok + ok2, ng + ng2

def handle_cp(self, uname: str, ip: str, svp: str, dvp: str) -> str:
if svp == dvp or dvp.startswith(svp + "/"):
raise Pebkac(400, "cp: cannot copy parent into subfolder")

svn, srem = self.vfs.get(svp, uname, True, False)
svn_dbv, _ = svn.get_dbv(srem)
sabs = svn.canonical(srem, False)
curs: set["sqlite3.Cursor"] = set()
self.db_act = self.vol_act[svn_dbv.realpath] = time.time()

st = bos.stat(sabs)
if stat.S_ISREG(st.st_mode) or stat.S_ISLNK(st.st_mode):
with self.mutex:
try:
ret = self._cp_file(uname, ip, svp, dvp, curs)
finally:
for v in curs:
v.connection.commit()

return ret

if not stat.S_ISDIR(st.st_mode):
raise Pebkac(400, "cannot copy type-0%o file" % (st.st_mode,))

permsets = [[True, False]]
scandir = not self.args.no_scandir

# don't use svn_dbv; would skip subvols due to _ls `if not rem:`
g = svn.walk("", srem, [], uname, permsets, True, scandir, True)
with self.mutex:
try:
for dbv, vrem, _, atop, files, rd, vd in g:
for fn in files:
self.db_act = self.vol_act[dbv.realpath] = time.time()
svpf = "/".join(x for x in [dbv.vpath, vrem, fn[0]] if x)
if not svpf.startswith(svp + "/"):  # assert
self.log(min_ex(), 1)
t = "cp: bug at %s, top %s%s"
raise Pebkac(500, t % (svpf, svp, SEESLOG))

dvpf = dvp + svpf[len(svp) :]
self._cp_file(uname, ip, svpf, dvpf, curs)

for v in curs:
v.connection.commit()
curs.clear()
finally:
for v in curs:
v.connection.commit()

return "k"

def _cp_file(
self, uname: str, ip: str, svp: str, dvp: str, curs: set["sqlite3.Cursor"]
) -> str:
"""mutex(main) me; will mutex(reg)"""
svn, srem = self.vfs.get(svp, uname, True, False)
svn_dbv, srem_dbv = svn.get_dbv(srem)

dvn, drem = self.vfs.get(dvp, uname, False, True)
dvn, drem = dvn.get_dbv(drem)

sabs = svn.canonical(srem, False)
dabs = dvn.canonical(drem)
drd, dfn = vsplit(drem)

if bos.path.exists(dabs):
raise Pebkac(400, "cp2: target file exists")

st = stl = bos.lstat(sabs)
if stat.S_ISLNK(stl.st_mode):
is_link = True
try:
st = bos.stat(sabs)
except:
pass  # broken symlink; keep as-is
elif not stat.S_ISREG(st.st_mode):
self.log("skipping type-0%o file [%s]" % (st.st_mode, sabs))
return ""
else:
is_link = False

ftime = stl.st_mtime
fsize = st.st_size

xbc = svn.flags.get("xbc")
xac = dvn.flags.get("xac")
if xbc:
if not runhook(
self.log,
None,
self,
"xbc",
xbc,
sabs,
svp,
"",
uname,
self.vfs.get_perms(svp, uname),
ftime,
fsize,
ip,
time.time(),
"",
):
t = "copy blocked by xbr server config: {}".format(svp)
self.log(t, 1)
raise Pebkac(405, t)

bos.makedirs(os.path.dirname(dabs))

c1, w, ftime_, fsize_, ip, at = self._find_from_vpath(
svn_dbv.realpath, srem_dbv
)
c2 = self.cur.get(dvn.realpath)

if w:
assert c1  # !rm
if c2 and c2 != c1:
self._copy_tags(c1, c2, w)

curs.add(c1)

if c2:
self.db_add(
c2,
{},  # skip upload hooks
drd,
dfn,
ftime,
fsize,
dvn.realpath,
dvn.vpath,
w,
w,
"",
||||
"",
|
||||
ip or "",
|
||||
at or 0,
|
||||
)
|
||||
curs.add(c2)
|
||||
else:
|
||||
self.log("not found in src db: [{}]".format(svp))
|
||||
|
||||
try:
|
||||
if is_link and st != stl:
|
||||
# relink non-broken symlinks to still work after the move,
|
||||
# but only resolve 1st level to maintain relativity
|
||||
dlink = bos.readlink(sabs)
|
||||
dlink = os.path.join(os.path.dirname(sabs), dlink)
|
||||
dlink = bos.path.abspath(dlink)
|
||||
self._symlink(dlink, dabs, dvn.flags, lmod=ftime)
|
||||
else:
|
||||
self._symlink(sabs, dabs, dvn.flags, lmod=ftime)
|
||||
|
||||
except OSError as ex:
|
||||
if ex.errno != errno.EXDEV:
|
||||
raise
|
||||
|
||||
self.log("using plain copy (%s):\n %s\n %s" % (ex.strerror, sabs, dabs))
|
||||
b1, b2 = fsenc(sabs), fsenc(dabs)
|
||||
is_link = os.path.islink(b1) # due to _relink
|
||||
try:
|
||||
shutil.copy2(b1, b2)
|
||||
except:
|
||||
try:
|
||||
wunlink(self.log, dabs, dvn.flags)
|
||||
except:
|
||||
pass
|
||||
|
||||
if not is_link:
|
||||
raise
|
||||
|
||||
# broken symlink? keep it as-is
|
||||
try:
|
||||
zb = os.readlink(b1)
|
||||
os.symlink(zb, b2)
|
||||
except:
|
||||
wunlink(self.log, dabs, dvn.flags)
|
||||
raise
|
||||
|
||||
if is_link:
|
||||
try:
|
||||
times = (int(time.time()), int(ftime))
|
||||
bos.utime(dabs, times, False)
|
||||
except:
|
||||
pass
|
||||
|
||||
if xac:
|
||||
runhook(
|
||||
self.log,
|
||||
None,
|
||||
self,
|
||||
"xac",
|
||||
xac,
|
||||
dabs,
|
||||
dvp,
|
||||
"",
|
||||
uname,
|
||||
self.vfs.get_perms(dvp, uname),
|
||||
ftime,
|
||||
fsize,
|
||||
ip,
|
||||
time.time(),
|
||||
"",
|
||||
)
|
||||
|
||||
return "k"
|
||||
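The new `handle_cp` / `_cp_file` pair is what a `?copy=` request lands in; the browser.js changes further down POST `src + '?copy=' + dst` and treat HTTP 201 as success. A rough sketch of driving the same endpoint from a script; the host, paths and password cookie are made-up examples, grounded only in what this diff shows:

```python
import urllib.request

src = "https://fs.example.com/music/song.flac"  # file to copy (hypothetical)
dst = "/stash/song.flac"                        # destination vpath (hypothetical)
req = urllib.request.Request(src + "?copy=" + dst, method="POST")
req.add_header("Cookie", "cppwd=hunter2")       # auth cookie, if the volume needs one
with urllib.request.urlopen(req) as rsp:
    print(rsp.status)                           # the web UI expects 201 here
```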
+
     def handle_mv(self, uname: str, ip: str, svp: str, dvp: str) -> str:
         if svp == dvp or dvp.startswith(svp + "/"):
             raise Pebkac(400, "mv: cannot move parent into subfolder")
 
-        svn, srem = self.asrv.vfs.get(svp, uname, True, False, True)
-        svn, srem = svn.get_dbv(srem)
+        svn, srem = self.vfs.get(svp, uname, True, False, True)
+        jail, jail_rem = svn.get_dbv(srem)
         sabs = svn.canonical(srem, False)
         curs: set["sqlite3.Cursor"] = set()
-        self.db_act = self.vol_act[svn.realpath] = time.time()
+        self.db_act = self.vol_act[jail.realpath] = time.time()
 
-        if not srem:
+        if not jail_rem:
             raise Pebkac(400, "mv: cannot move a mountpoint")
 
         st = bos.lstat(sabs)

@@ -4013,7 +4301,9 @@ class Up2k(object):
 
             return ret
 
-        jail = svn.get_dbv(srem)[0]
+        if not stat.S_ISDIR(st.st_mode):
+            raise Pebkac(400, "cannot move type-0%o file" % (st.st_mode,))
+
         permsets = [[True, False, True]]
         scandir = not self.args.no_scandir
 

@@ -4025,38 +4315,44 @@ class Up2k(object):
             raise Pebkac(400, "mv: source folder contains other volumes")
 
         g = svn.walk("", srem, [], uname, permsets, True, scandir, True)
-        for dbv, vrem, _, atop, files, rd, vd in g:
-            if dbv != jail:
-                # the actual check (avoid toctou)
-                raise Pebkac(400, "mv: source folder contains other volumes")
+        with self.mutex:
+            try:
+                for dbv, vrem, _, atop, files, rd, vd in g:
+                    if dbv != jail:
+                        # the actual check (avoid toctou)
+                        raise Pebkac(400, "mv: source folder contains other volumes")
 
-            with self.mutex:
-                try:
                     for fn in files:
                         self.db_act = self.vol_act[dbv.realpath] = time.time()
                         svpf = "/".join(x for x in [dbv.vpath, vrem, fn[0]] if x)
                         if not svpf.startswith(svp + "/"):  # assert
-                            raise Pebkac(500, "mv: bug at {}, top {}".format(svpf, svp))
+                            self.log(min_ex(), 1)
+                            t = "mv: bug at %s, top %s%s"
+                            raise Pebkac(500, t % (svpf, svp, SEESLOG))
 
                         dvpf = dvp + svpf[len(svp) :]
                         self._mv_file(uname, ip, svpf, dvpf, curs)
-                finally:
-                    for v in curs:
-                        v.connection.commit()
 
-            curs.clear()
+                    curs.clear()
+            finally:
+                for v in curs:
+                    v.connection.commit()
 
         rm_ok, rm_ng = rmdirs(self.log_func, scandir, True, sabs, 1)
 
         for zsl in (rm_ok, rm_ng):
             for ap in reversed(zsl):
                 if not ap.startswith(sabs):
-                    raise Pebkac(500, "mv_d: bug at {}, top {}".format(ap, sabs))
+                    self.log(min_ex(), 1)
+                    t = "mv_d: bug at %s, top %s%s"
+                    raise Pebkac(500, t % (ap, sabs, SEESLOG))
 
                 rem = ap[len(sabs) :].replace(os.sep, "/").lstrip("/")
                 vp = vjoin(dvp, rem)
                 try:
-                    dvn, drem = self.asrv.vfs.get(vp, uname, False, True)
+                    dvn, drem = self.vfs.get(vp, uname, False, True)
                     bos.mkdir(dvn.canonical(drem))
                 except:
                     pass

@@ -4067,10 +4363,10 @@ class Up2k(object):
         self, uname: str, ip: str, svp: str, dvp: str, curs: set["sqlite3.Cursor"]
     ) -> str:
         """mutex(main) me; will mutex(reg)"""
-        svn, srem = self.asrv.vfs.get(svp, uname, True, False, True)
+        svn, srem = self.vfs.get(svp, uname, True, False, True)
         svn, srem = svn.get_dbv(srem)
 
-        dvn, drem = self.asrv.vfs.get(dvp, uname, False, True)
+        dvn, drem = self.vfs.get(dvp, uname, False, True)
         dvn, drem = dvn.get_dbv(drem)
 
         sabs = svn.canonical(srem, False)

@@ -4114,7 +4410,7 @@ class Up2k(object):
             svp,
             "",
             uname,
-            self.asrv.vfs.get_perms(svp, uname),
+            self.vfs.get_perms(svp, uname),
             ftime,
             fsize,
             ip,

@@ -4154,7 +4450,7 @@ class Up2k(object):
             dvp,
             "",
             uname,
-            self.asrv.vfs.get_perms(dvp, uname),
+            self.vfs.get_perms(dvp, uname),
             ftime,
             fsize,
             ip,

@@ -4267,7 +4563,7 @@ class Up2k(object):
             dvp,
             "",
             uname,
-            self.asrv.vfs.get_perms(dvp, uname),
+            self.vfs.get_perms(dvp, uname),
             ftime,
             fsize,
             ip,

@@ -4580,7 +4876,7 @@ class Up2k(object):
             vp_chk,
             job["host"],
             job["user"],
-            self.asrv.vfs.get_perms(vp_chk, job["user"]),
+            self.vfs.get_perms(vp_chk, job["user"]),
             job["lmod"],
             job["size"],
             job["addr"],

@@ -4592,7 +4888,7 @@ class Up2k(object):
             self.log(t, 1)
             raise Pebkac(403, t)
         if hr.get("reloc"):
-            x = pathmod(self.asrv.vfs, ap_chk, vp_chk, hr["reloc"])
+            x = pathmod(self.vfs, ap_chk, vp_chk, hr["reloc"])
         if x:
             zvfs = vfs
             pdir, _, job["name"], (vfs, rem) = x

@@ -4699,7 +4995,7 @@ class Up2k(object):
 
     def _snap_reg(self, ptop: str, reg: dict[str, dict[str, Any]]) -> None:
         now = time.time()
-        histpath = self.asrv.vfs.histtab.get(ptop)
+        histpath = self.vfs.histtab.get(ptop)
         if not histpath:
             return
 

@@ -4947,7 +5243,7 @@ class Up2k(object):
         else:
             fvp, fn = vsplit(fvp)
 
-        x = pathmod(self.asrv.vfs, "", req_vp, {"vp": fvp, "fn": fn})
+        x = pathmod(self.vfs, "", req_vp, {"vp": fvp, "fn": fn})
         if not x:
             t = "hook_fx(%s): failed to resolve %s based on %s"
             self.log(t % (act, fvp, req_vp))

@@ -5001,6 +5297,11 @@ class Up2k(object):
             cur.close()
             db.close()
 
+        if self.mem_cur:
+            db = self.mem_cur.connection
+            self.mem_cur.close()
+            db.close()
+
         self.registry = {}
 

@@ -213,6 +213,9 @@ except:
 ansi_re = re.compile("\033\\[[^mK]*[mK]")
 
 
+BOS_SEP = ("%s" % (os.sep,)).encode("ascii")
+
+
 surrogateescape.register_surrogateescape()
 if WINDOWS and PY2:
     FS_ENCODING = "utf-8"

@@ -433,6 +436,27 @@ UNHUMANIZE_UNITS = {
 VF_CAREFUL = {"mv_re_t": 5, "rm_re_t": 5, "mv_re_r": 0.1, "rm_re_r": 0.1}
 
 
+def read_ram() -> tuple[float, float]:
+    a = b = 0
+    try:
+        with open("/proc/meminfo", "rb", 0x10000) as f:
+            zsl = f.read(0x10000).decode("ascii", "replace").split("\n")
+
+        p = re.compile("^MemTotal:.* kB")
+        zs = next((x for x in zsl if p.match(x)))
+        a = int((int(zs.split()[1]) / 0x100000) * 100) / 100
+
+        p = re.compile("^MemAvailable:.* kB")
+        zs = next((x for x in zsl if p.match(x)))
+        b = int((int(zs.split()[1]) / 0x100000) * 100) / 100
+    except:
+        pass
+    return a, b
+
+
+RAM_TOTAL, RAM_AVAIL = read_ram()
+
+
 pybin = sys.executable or ""
 if EXE:
     pybin = ""
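Since `/proc/meminfo` reports kB and `0x100000` kB is exactly 1 GiB, `read_ram` returns (total, available) in GiB, truncated to two decimals; a quick sanity-check of the arithmetic:

```python
# example value: a box whose meminfo says "MemTotal: 16384000 kB"
kb = 16384000
gib = int((kb / 0x100000) * 100) / 100
print(gib)  # 15.62 -- 15.625 GiB truncated to two decimals
```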
@@ -665,11 +689,22 @@ class HLog(logging.Handler):
 
 
 class NetMap(object):
-    def __init__(self, ips: list[str], cidrs: list[str], keep_lo=False) -> None:
+    def __init__(
+        self,
+        ips: list[str],
+        cidrs: list[str],
+        keep_lo=False,
+        strict_cidr=False,
+        defer_mutex=False,
+    ) -> None:
         """
         ips: list of plain ipv4/ipv6 IPs, not cidr
         cidrs: list of cidr-notation IPs (ip/prefix)
         """
 
+        # fails multiprocessing; defer assignment
+        self.mutex: Optional[threading.Lock] = None if defer_mutex else threading.Lock()
+
         if "::" in ips:
             ips = [x for x in ips if x != "::"] + list(
                 [x.split("/")[0] for x in cidrs if ":" in x]

@@ -696,7 +731,7 @@ class NetMap(object):
             bip = socket.inet_pton(fam, ip.split("/")[0])
             self.bip.append(bip)
             self.b2sip[bip] = ip.split("/")[0]
-            self.b2net[bip] = (IPv6Network if v6 else IPv4Network)(ip, False)
+            self.b2net[bip] = (IPv6Network if v6 else IPv4Network)(ip, strict_cidr)
 
         self.bip.sort(reverse=True)
 

@@ -707,8 +742,13 @@ class NetMap(object):
         try:
             return self.cache[ip]
         except:
             pass
+        # intentionally crash the calling thread if unset:
+        assert self.mutex  # type: ignore # !rm
+        with self.mutex:
+            return self._map(ip)
+
+    def _map(self, ip: str) -> str:
         v6 = ":" in ip
         ci = IPv6Address(ip) if v6 else IPv4Address(ip)
         bip = next((x for x in self.bip if ci in self.b2net[x]), None)
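A hedged sketch of how the reworked `NetMap` is meant to be used; the addresses are illustrative, and `defer_mutex` exists because a `threading.Lock` cannot be carried into a multiprocessing child, so the owner assigns it after spawning (and `map()` asserts on it so a missed assignment fails loudly):

```python
import threading

nm = NetMap(["10.0.0.1"], ["10.0.0.0/8", "192.168.0.0/16"], keep_lo=True)
nm.map("192.168.1.7")  # resolves the client IP to its subnet key, with caching

nm2 = NetMap([], ["10.0.0.0/8"], True, False, True)  # e.g. before forking
nm2.mutex = threading.Lock()  # must be assigned before the first map() call
```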
@@ -1011,6 +1051,7 @@ class MTHash(object):
         self.sz = 0
         self.csz = 0
         self.stop = False
+        self.readsz = 1024 * 1024 * (2 if (RAM_AVAIL or 2) < 1 else 12)
         self.omutex = threading.Lock()
         self.imutex = threading.Lock()
         self.work_q: Queue[int] = Queue()

@@ -1086,7 +1127,7 @@ class MTHash(object):
         while chunk_rem > 0:
             with self.imutex:
                 f.seek(ofs)
-                buf = f.read(min(chunk_rem, 1024 * 1024 * 12))
+                buf = f.read(min(chunk_rem, self.readsz))
 
                 if not buf:
                     raise Exception("EOF at " + str(ofs))
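The hasher's read buffer is now sized from available RAM at startup: 2 MiB on boxes with under 1 GiB free, 12 MiB otherwise, and `or 2` keeps the 12 MiB default when `/proc/meminfo` was unreadable (`RAM_AVAIL == 0`). Restating the expression standalone:

```python
def pick_readsz(ram_avail_gib: float) -> int:
    # same formula as MTHash.readsz above
    return 1024 * 1024 * (2 if (ram_avail_gib or 2) < 1 else 12)

assert pick_readsz(0.5) == 2 * 1024 * 1024   # low-memory device
assert pick_readsz(8.0) == 12 * 1024 * 1024  # typical server
assert pick_readsz(0) == 12 * 1024 * 1024    # meminfo unavailable
```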
@@ -2185,6 +2226,23 @@ def unquotep(txt: str) -> str:
     return w8dec(unq2)
 
 
+def vroots(vp1: str, vp2: str) -> tuple[str, str]:
+    """
+    input("q/w/e/r","a/s/d/e/r") output("/q/w/","/a/s/d/")
+    """
+    while vp1 and vp2:
+        zt1 = vp1.rsplit("/", 1) if "/" in vp1 else ("", vp1)
+        zt2 = vp2.rsplit("/", 1) if "/" in vp2 else ("", vp2)
+        if zt1[1] != zt2[1]:
+            break
+        vp1 = zt1[0]
+        vp2 = zt2[0]
+    return (
+        "/%s/" % (vp1,) if vp1 else "/",
+        "/%s/" % (vp2,) if vp2 else "/",
+    )
+
+
 def vsplit(vpath: str) -> tuple[str, str]:
     if "/" not in vpath:
         return "", vpath
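Tracing the docstring's example by hand: `vroots` strips the longest shared suffix of path components ("r", then "e", stopping at "w" vs "d") and returns the differing roots with slashes added, which is handy for turning "moved from X to Y" into the two parent folders:

```python
print(vroots("q/w/e/r", "a/s/d/e/r"))  # ('/q/w/', '/a/s/d/')
print(vroots("e/r", "e/r"))            # ('/', '/') -- everything shared
```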
@@ -2486,23 +2544,28 @@ def wunlink(log: "NamedLogger", abspath: str, flags: dict[str, Any]) -> bool:
     return _fs_mvrm(log, abspath, "", False, flags)
 
 
-def get_df(abspath: str) -> tuple[Optional[int], Optional[int]]:
+def get_df(abspath: str, prune: bool) -> tuple[Optional[int], Optional[int], str]:
     try:
-        # some fuses misbehave
-        assert ctypes  # type: ignore # !rm
+        ap = fsenc(abspath)
+        while prune and not os.path.isdir(ap) and BOS_SEP in ap:
+            # strip leafs until it hits an existing folder
+            ap = ap.rsplit(BOS_SEP, 1)[0]
+
         if ANYWIN:
+            # some fuses misbehave
+            assert ctypes  # type: ignore # !rm
+            abspath = fsdec(ap)
             bfree = ctypes.c_ulonglong(0)
             ctypes.windll.kernel32.GetDiskFreeSpaceExW(  # type: ignore
                 ctypes.c_wchar_p(abspath), None, None, ctypes.pointer(bfree)
             )
-            return (bfree.value, None)
+            return (bfree.value, None, "")
         else:
-            sv = os.statvfs(fsenc(abspath))
+            sv = os.statvfs(ap)
             free = sv.f_frsize * sv.f_bfree
             total = sv.f_frsize * sv.f_blocks
-            return (free, total)
-    except:
-        return (None, None)
+            return (free, total, "")
+    except Exception as ex:
+        return (None, None, repr(ex))
 
 
 if not ANYWIN and not MACOS:
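Sketch of the new call shape: `get_df` now takes a prune flag (walk up to the nearest existing folder, useful when asking about a path that is about to be created) and returns the error text as a third element instead of failing silently; the path below is a made-up example:

```python
free, total, err = get_df("/srv/uploads/new-folder/sub", True)
if err:
    print("could not stat the filesystem:", err)
else:
    print("%d of %d bytes free" % (free, total or 0))
```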
@@ -2640,18 +2703,35 @@ def list_ips() -> list[str]:
     return list(ret)
 
 
-def build_netmap(csv: str):
+def build_netmap(csv: str, defer_mutex: bool = False):
     csv = csv.lower().strip()
 
     if csv in ("any", "all", "no", ",", ""):
         return None
 
-    if csv in ("lan", "local", "private", "prvt"):
-        csv = "10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, fd00::/8"  # lan
-        csv += ", 169.254.0.0/16, fe80::/10"  # link-local
-        csv += ", 127.0.0.0/8, ::1/128"  # loopback
-
     srcs = [x.strip() for x in csv.split(",") if x.strip()]
 
+    expanded_shorthands = False
+    for shorthand in ("lan", "local", "private", "prvt"):
+        if shorthand in srcs:
+            if not expanded_shorthands:
+                srcs += [
+                    # lan:
+                    "10.0.0.0/8",
+                    "172.16.0.0/12",
+                    "192.168.0.0/16",
+                    "fd00::/8",
+                    # link-local:
+                    "169.254.0.0/16",
+                    "fe80::/10",
+                    # loopback:
+                    "127.0.0.0/8",
+                    "::1/128",
+                ]
+                expanded_shorthands = True
+
+            srcs.remove(shorthand)
+
     if not HAVE_IPV6:
         srcs = [x for x in srcs if ":" not in x]
 

@@ -2675,7 +2755,34 @@ def build_netmap(csv: str):
         cidrs.append(zs)
 
     ips = [x.split("/")[0] for x in cidrs]
-    return NetMap(ips, cidrs, True)
+    return NetMap(ips, cidrs, True, False, defer_mutex)
+
+
+def load_ipu(
+    log: "RootLogger", ipus: list[str], defer_mutex: bool = False
+) -> tuple[dict[str, str], NetMap]:
+    ip_u = {"": "*"}
+    cidr_u = {}
+    for ipu in ipus:
+        try:
+            cidr, uname = ipu.split("=")
+            cip, csz = cidr.split("/")
+        except:
+            t = "\n invalid value %r for argument --ipu; must be CIDR=UNAME (192.168.0.0/16=amelia)"
+            raise Exception(t % (ipu,))
+        uname2 = cidr_u.get(cidr)
+        if uname2 is not None:
+            t = "\n invalid value %r for argument --ipu; cidr %s already mapped to %r"
+            raise Exception(t % (ipu, cidr, uname2))
+        cidr_u[cidr] = uname
+        ip_u[cip] = uname
+    try:
+        nm = NetMap(["::"], list(cidr_u.keys()), True, True, defer_mutex)
+    except Exception as ex:
+        t = "failed to translate --ipu into netmap, probably due to invalid config: %r"
+        log("root", t % (ex,), 1)
+        raise
+    return ip_u, nm
 
 
 def yieldfile(fn: str, bufsz: int) -> Generator[bytes, None, None]:
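`load_ipu` is the parser behind the new `--ipu` option (IP-range autologin): each `CIDR=UNAME` pair logs clients from that range in as that user, and the strict `NetMap` rejects CIDRs with host bits set so bad config fails at startup. A sketch, with a stand-in for the server's logger:

```python
def log(src, msg, c=0):  # stand-in for the RootLogger callable
    print(src, msg)

ip_u, nm = load_ipu(log, ["192.168.0.0/16=amelia", "10.89.0.0/16=buildbot"])
# ip_u == {"": "*", "192.168.0.0": "amelia", "10.89.0.0": "buildbot"}
# nm maps a client IP such as 192.168.1.5 back to its cidr, which the
# caller then looks up to pick the username
```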
@@ -2692,10 +2799,12 @@ def yieldfile(fn: str, bufsz: int) -> Generator[bytes, None, None]:
 def hashcopy(
     fin: Generator[bytes, None, None],
     fout: Union[typing.BinaryIO, typing.IO[Any]],
-    slp: float = 0,
-    max_sz: int = 0,
+    hashobj: Optional["hashlib._Hash"],
+    max_sz: int,
+    slp: float,
 ) -> tuple[int, str, str]:
-    hashobj = hashlib.sha512()
+    if not hashobj:
+        hashobj = hashlib.sha512()
     tlen = 0
     for buf in fin:
         tlen += len(buf)
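The reworked signature drops the keyword defaults: callers now pass an optional pre-seeded hash object (`None` falls back to sha512) plus the size cap and throttle explicitly. A sketch of a call under those assumptions; the exact meaning of the two returned strings (digest encodings) is not shown in this hunk:

```python
import hashlib

tlen, h1, h2 = hashcopy(
    yieldfile("/tmp/in.bin", 65536),  # source chunk generator
    open("/tmp/out.bin", "wb"),       # destination stream
    hashlib.sha512(),                 # or None for the sha512 default
    0,                                # max_sz (0 presumably = unlimited)
    0.0,                              # slp: throttle between chunks
)
```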
@@ -2721,7 +2830,10 @@ def sendfile_py(
     bufsz: int,
     slp: float,
     use_poll: bool,
+    dls: dict[str, tuple[float, int]],
+    dl_id: str,
 ) -> int:
     sent = 0
     remains = upper - lower
     f.seek(lower)
     while remains > 0:

@@ -2738,6 +2850,10 @@ def sendfile_py(
         except:
             return remains
 
+        if dl_id:
+            sent += len(buf)
+            dls[dl_id] = (time.time(), sent)
+
     return 0
 

@@ -2750,6 +2866,8 @@ def sendfile_kern(
     bufsz: int,
     slp: float,
     use_poll: bool,
+    dls: dict[str, tuple[float, int]],
+    dl_id: str,
 ) -> int:
     out_fd = s.fileno()
     in_fd = f.fileno()

@@ -2762,7 +2880,7 @@ def sendfile_kern(
     while ofs < upper:
         stuck = stuck or time.time()
         try:
-            req = min(2 ** 30, upper - ofs)
+            req = min(0x2000000, upper - ofs)  # 32 MiB
             if use_poll:
                 poll.poll(10000)
             else:

@@ -2786,13 +2904,16 @@ def sendfile_kern(
             return upper - ofs
 
         ofs += n
+        if dl_id:
+            dls[dl_id] = (time.time(), ofs - lower)
 
         # print("sendfile: ok, sent {} now, {} total, {} remains".format(n, ofs - lower, upper - ofs))
 
     return 0
 
 
 def statdir(
-    logger: Optional["RootLogger"], scandir: bool, lstat: bool, top: str
+    logger: Optional["RootLogger"], scandir: bool, lstat: bool, top: str, throw: bool
 ) -> Generator[tuple[str, os.stat_result], None, None]:
     if lstat and ANYWIN:
         lstat = False

@@ -2828,6 +2949,12 @@ def statdir(
             logger(src, "[s] {} @ {}".format(repr(ex), fsdec(abspath)), 6)
 
     except Exception as ex:
+        if throw:
+            zi = getattr(ex, "errno", 0)
+            if zi == errno.ENOENT:
+                raise Pebkac(404, str(ex))
+            raise
+
         t = "{} @ {}".format(repr(ex), top)
         if logger:
             logger(src, t, 1)
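Both file senders now report progress into a shared dict keyed by a per-download ID, which is what lets the server show live download status; a sketch of the bookkeeping contract (the senders' leading arguments are unchanged and omitted):

```python
import time

dls: dict[str, tuple[float, int]] = {}  # dl_id -> (last update, bytes sent)

# sendfile_py(..., dls, "dl-123") stores (time.time(), sent) after each
# chunk; sendfile_kern stores (time.time(), ofs - lower); an empty dl_id
# skips the bookkeeping. A status view could then derive:
def progress(dl_id: str, total: int) -> str:
    ts, sent = dls.get(dl_id, (0.0, 0))
    return "%s: %d/%d bytes, updated %.1fs ago" % (dl_id, sent, total, time.time() - ts)
```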
@@ -2836,7 +2963,7 @@ def statdir(
 
 
 def dir_is_empty(logger: "RootLogger", scandir: bool, top: str):
-    for _ in statdir(logger, scandir, False, top):
+    for _ in statdir(logger, scandir, False, top, False):
         return False
     return True
 

@@ -2849,7 +2976,7 @@ def rmdirs(
         top = os.path.dirname(top)
         depth -= 1
 
-    stats = statdir(logger, scandir, lstat, top)
+    stats = statdir(logger, scandir, lstat, top, False)
     dirs = [x[0] for x in stats if stat.S_ISDIR(x[1].st_mode)]
     dirs = [os.path.join(top, x) for x in dirs]
     ok = []
 

@@ -32,7 +32,7 @@ window.baguetteBox = (function () {
     scrollCSS = ['', ''],
     scrollTimer = 0,
     re_i = /^[^?]+\.(a?png|avif|bmp|gif|heif|jpe?g|jfif|svg|webp)(\?|$)/i,
-    re_v = /^[^?]+\.(webm|mkv|mp4)(\?|$)/i,
+    re_v = /^[^?]+\.(webm|mkv|mp4|m4v|mov)(\?|$)/i,
     anims = ['slideIn', 'fadeIn', 'none'],
     data = {}, // all galleries
     imagesElements = [],
@@ -188,7 +188,6 @@ html.y {
     --srv-1: #555;
     --srv-2: #c83;
     --srv-3: #c0a;
-    --srv-3b: rgba(255,68,204,0.6);
 
     --tree-bg: #fff;
 

@@ -286,6 +285,7 @@ html.bz {
     --f-h-b1: #34384e;
     --mp-sh: #11121d;
     /*--mp-b-bg: #2c3044;*/
+    --f-play-bg: var(--btn-1-bg);
 }
 html.by {
     --bg: #f2f2f2;

@@ -389,8 +389,6 @@ html.cy {
 }
 html.dz {
     --fg: #4d4;
-    --fg-max: #fff;
-    --fg2-max: #fff;
     --fg-weak: #2a2;
 
     --bg-u6: #020;

@@ -400,11 +398,9 @@ html.dz {
     --bg-u2: #020;
     --bg-u1: #020;
     --bg: #010;
-    --bgg: var(--bg);
     --bg-d1: #000;
     --bg-d2: #020;
     --bg-d3: #000;
-    --bg-max: #000;
 
     --tab-alt: #6f6;
     --row-alt: #030;

@@ -417,45 +413,21 @@ html.dz {
    --a-dark: #afa;
    --a-gray: #2a2;

    --btn-fg: var(--a);
    --btn-bg: rgba(64,128,64,0.15);
    --btn-h-fg: var(--a-hil);
    --btn-h-bg: #050;
    --btn-1-fg: #000;
    --btn-1-bg: #4f4;
    --btn-1h-fg: var(--btn-1-fg);
    --btn-1h-bg: #3f3;
    --btn-bs: 0 0 0 .1em #080 inset;
    --btn-1-bs: a;

    --chk-fg: var(--tab-alt);
    --txt-sh: var(--bg-d2);
    --txt-bg: var(--btn-bg);

    --op-aa-fg: var(--a);
    --op-aa-bg: var(--bg-d2);
    --op-a-sh: rgba(0,0,0,0.5);

    --u2-btn-b1: var(--fg-weak);
    --u2-sbtn-b1: var(--fg-weak);
    --u2-txt-bg: var(--bg-u5);
    --u2-tab-bg: linear-gradient(to bottom, var(--bg), var(--bg-u1));
    --u2-tab-b1: var(--fg-weak);
    --u2-tab-1-fg: #fff;
    --u2-tab-1-bg: linear-gradient(to bottom, #151, var(--bg) 80%);
    --u2-tab-1-b1: #7c5;
    --u2-tab-1-b2: #583;
    --u2-tab-1-sh: #280;
    --u2-b-fg: #fff;
    --u2-b1-bg: #3a3;
    --u2-b2-bg: #3a3;
    --u2-inf-bg: #07a;
    --u2-inf-b1: #0be;
    --u2-ok-bg: #380;
    --u2-ok-b1: #8e4;
    --u2-err-bg: #900;
    --u2-err-b1: #d06;
    --ud-b1: #888;

    --sort-1: #fff;
    --sort-2: #3f3;

@@ -467,47 +439,12 @@ html.dz {

    --tree-bg: #010;

    --g-play-bg: #750;
    --g-play-b1: #c90;
    --g-play-b2: #da4;
    --g-play-sh: #b83;

    --g-sel-fg: #fff;
    --g-sel-bg: #925;
    --g-sel-b1: #c37;
    --g-sel-sh: #b36;
    --g-fsel-bg: #d39;
    --g-fsel-b1: #d48;
    --g-fsel-ts: #804;
    --g-fg: var(--a-hil);
    --g-bg: var(--bg-u2);
    --g-b1: var(--bg-u4);
    --g-b2: var(--bg-u5);
    --g-g1: var(--bg-u2);
    --g-g2: var(--bg-u5);
    --g-f-bg: var(--bg-u4);
    --g-f-b1: var(--bg-u5);
    --g-f-fg: var(--a-hil);
    --g-sh: rgba(0,0,0,0.3);

    --f-sh1: 0.33;
    --f-sh2: 0.02;
    --f-sh3: 0.2;
    --f-h-b1: #3b3;

    --f-play-bg: #fc5;
    --f-play-fg: #000;
    --f-sel-sh: #fc0;
    --f-gray: #999;

    --fm-off: #f6c;
    --mp-sh: var(--bg-d3);

    --err-fg: #fff;
    --err-bg: #a20;
    --err-b1: #f00;
    --err-ts: #500;

    text-shadow: none;
    font-family: 'scp', monospace, monospace;
    font-family: var(--font-mono), 'scp', monospace, monospace;

@@ -1710,6 +1647,18 @@ html.dz .btn {
     background: var(--btn-1-bg);
     text-shadow: none;
 }
+#tree ul a.ld::before {
+    font-weight: bold;
+    font-family: sans-serif;
+    display: inline-block;
+    text-align: center;
+    width: 1em;
+    margin: 0 .3em 0 -1.3em;
+    color: var(--fg-max);
+    opacity: 0;
+    content: '◠';
+    animation: .5s linear infinite forwards spin, ease .25s 1 forwards fadein;
+}
 #tree ul a.par {
     color: var(--fg-max);
 }

@@ -1931,11 +1880,10 @@ html.y #tree.nowrap .ntree a+a:hover {
 #rn_f.m td+td {
     width: 50%;
 }
-#rn_f .err td {
-    background: var(--err-bg);
-    color: var(--fg-max);
-}
-#rn_f .err input[readonly] {
+#rn_f .err td,
+#rn_f .err input[readonly],
+#rui .ng input[readonly] {
     color: var(--err-fg);
     background: var(--err-bg);
 }
 #rui input[readonly] {
@@ -132,16 +132,15 @@
 
 <script>
     var SR = {{ r|tojson }},
+        CGV1 = {{ cgv1 }},
         CGV = {{ cgv|tojson }},
         TS = "{{ ts }}",
         dtheme = "{{ dtheme }}",
         srvinf = "{{ srv_info }}",
         s_name = "{{ s_name }}",
         lang = "{{ lang }}",
         dfavico = "{{ favico }}",
-        have_tags_idx = {{ have_tags_idx|tojson }},
+        have_tags_idx = {{ have_tags_idx }},
         sb_lg = "{{ sb_lg }}",
         txt_ext = "{{ txt_ext }}",
-        logues = {{ logues|tojson if sb_lg else "[]" }},
         ls0 = {{ ls0|tojson }};
@@ -1,7 +1,7 @@
 "use strict";
 
 var XHR = XMLHttpRequest,
-    img_re = /\.(a?png|avif|bmp|gif|heif|jpe?g|jfif|svg|webp|webm|mkv|mp4)(\?|$)/i;
+    img_re = /\.(a?png|avif|bmp|gif|heif|jpe?g|jfif|svg|webp|webm|mkv|mp4|m4v|mov)(\?|$)/i;
 
 var Ls = {
     "eng": {

@@ -37,8 +37,9 @@ var Ls = {
             ["T", "toggle thumbnails / icons"],
             ["🡅 A/D", "thumbnail size"],
             ["ctrl-K", "delete selected"],
-            ["ctrl-X", "cut selected"],
-            ["ctrl-V", "paste into folder"],
+            ["ctrl-X", "cut selection to clipboard"],
+            ["ctrl-C", "copy selection to clipboard"],
+            ["ctrl-V", "paste (move/copy) here"],
             ["Y", "download selected"],
             ["F2", "rename selected"],

@@ -83,7 +84,7 @@ var Ls = {
             ["I/K", "prev/next file"],
             ["M", "close textfile"],
             ["E", "edit textfile"],
-            ["S", "select file (for cut/rename)"],
+            ["S", "select file (for cut/copy/rename)"],
         ]
     ],
 

@@ -133,6 +134,7 @@ var Ls = {
         "wt_ren": "rename selected items$NHotkey: F2",
         "wt_del": "delete selected items$NHotkey: ctrl-K",
         "wt_cut": "cut selected items <small>(then paste somewhere else)</small>$NHotkey: ctrl-X",
+        "wt_cpy": "copy selected items to clipboard$N(to paste them somewhere else)$NHotkey: ctrl-C",
         "wt_pst": "paste a previously cut / copied selection$NHotkey: ctrl-V",
         "wt_selall": "select all files$NHotkey: ctrl-A (when file focused)",
         "wt_selinv": "invert selection",

@@ -327,6 +329,7 @@ var Ls = {
         "fr_emore": "select at least one item to rename",
         "fd_emore": "select at least one item to delete",
         "fc_emore": "select at least one item to cut",
+        "fcp_emore": "select at least one item to copy to clipboard",
 
         "fs_sc": "share the folder you're in",
         "fs_ss": "share the selected files",

@@ -379,16 +382,28 @@ var Ls = {
         "fc_ok": "cut {0} items",
         "fc_warn": 'cut {0} items\n\nbut: only <b>this</b> browser-tab can paste them\n(since the selection is so absolutely massive)',
 
-        "fp_ecut": "first cut some files / folders to paste / move\n\nnote: you can cut / paste across different browser tabs",
-        "fp_ename": "these {0} items cannot be moved here (names already exist):",
+        "fcc_ok": "copied {0} items to clipboard",
+        "fcc_warn": 'copied {0} items to clipboard\n\nbut: only <b>this</b> browser-tab can paste them\n(since the selection is so absolutely massive)',
+
+        "fp_apply": "use these names",
+        "fp_ecut": "first cut or copy some files / folders to paste / move\n\nnote: you can cut / paste across different browser tabs",
+        "fp_ename": "{0} items cannot be moved here because the names are already taken. Give them new names below to continue, or blank the name to skip them:",
+        "fcp_ename": "{0} items cannot be copied here because the names are already taken. Give them new names below to continue, or blank the name to skip them:",
+        "fp_emore": "there are still some filename collisions left to fix",
         "fp_ok": "move OK",
+        "fcp_ok": "copy OK",
         "fp_busy": "moving {0} items...\n\n{1}",
+        "fcp_busy": "copying {0} items...\n\n{1}",
         "fp_err": "move failed:\n",
+        "fcp_err": "copy failed:\n",
         "fp_confirm": "move these {0} items here?",
+        "fcp_confirm": "copy these {0} items here?",
         "fp_etab": 'failed to read clipboard from other browser tab',
         "fp_name": "uploading a file from your device. Give it a name:",
         "fp_both_m": '<h6>choose what to paste</h6><code>Enter</code> = Move {0} files from «{1}»\n<code>ESC</code> = Upload {2} files from your device',
+        "fcp_both_m": '<h6>choose what to paste</h6><code>Enter</code> = Copy {0} files from «{1}»\n<code>ESC</code> = Upload {2} files from your device',
         "fp_both_b": '<a href="#" id="modal-ok">Move</a><a href="#" id="modal-ng">Upload</a>',
+        "fcp_both_b": '<a href="#" id="modal-ok">Copy</a><a href="#" id="modal-ng">Upload</a>',
 
         "mk_noname": "type a name into the text field on the left before you do that :p",

@@ -400,7 +415,7 @@ var Ls = {
         "tvt_dl": "download this file$NHotkey: Y\">💾 download",
         "tvt_prev": "show previous document$NHotkey: i\">⬆ prev",
         "tvt_next": "show next document$NHotkey: K\">⬇ next",
-        "tvt_sel": "select file ( for cut / delete / ... )$NHotkey: S\">sel",
+        "tvt_sel": "select file ( for cut / copy / delete / ... )$NHotkey: S\">sel",
         "tvt_edit": "open file in text editor$NHotkey: E\">✏️ edit",
 
         "gt_vau": "don't show videos, just play the audio\">🎧",

@@ -605,8 +620,9 @@ var Ls = {
             ["T", "miniatyrbilder på/av"],
             ["🡅 A/D", "ikonstørrelse"],
             ["ctrl-K", "slett valgte"],
-            ["ctrl-X", "klipp ut"],
-            ["ctrl-V", "lim inn"],
+            ["ctrl-X", "klipp ut valgte"],
+            ["ctrl-C", "kopiér til utklippstavle"],
+            ["ctrl-V", "lim inn (flytt/kopiér)"],
             ["Y", "last ned valgte"],
             ["F2", "endre navn på valgte"],

@@ -702,7 +718,8 @@ var Ls = {
         "wt_ren": "gi nye navn til de valgte filene$NSnarvei: F2",
         "wt_del": "slett de valgte filene$NSnarvei: ctrl-K",
         "wt_cut": "klipp ut de valgte filene <small>(for å lime inn et annet sted)</small>$NSnarvei: ctrl-X",
-        "wt_pst": "lim inn filer (som tidligere ble klippet ut et annet sted)$NSnarvei: ctrl-V",
+        "wt_cpy": "kopiér de valgte filene til utklippstavlen$N(for å lime inn et annet sted)$NSnarvei: ctrl-C",
+        "wt_pst": "lim inn filer (som tidligere ble klippet ut / kopiert et annet sted)$NSnarvei: ctrl-V",
         "wt_selall": "velg alle filer$NSnarvei: ctrl-A (mens fokus er på en fil)",
         "wt_selinv": "inverter utvalg",
         "wt_selzip": "last ned de valgte filene som et arkiv",

@@ -845,7 +862,7 @@ var Ls = {
         "mt_oscv": "vis album-cover på infoskjermen\">bilde",
         "mt_follow": "bla slik at sangen som spilles alltid er synlig\">🎯",
         "mt_compact": "tettpakket avspillerpanel\">⟎",
-        "mt_uncache": "prøv denne hvis en sang ikke spiller riktig\">uncache",
+        "mt_uncache": "prøv denne hvis en sang ikke spiller riktig\">oppfrisk",
         "mt_mloop": "repeter hele mappen\">🔁 gjenta",
         "mt_mnext": "hopp til neste mappe og fortsett\">📂 neste",
         "mt_cflac": "konverter flac / wav-filer til opus\">flac",

@@ -896,6 +913,7 @@ var Ls = {
         "fr_emore": "velg minst én fil som skal få nytt navn",
         "fd_emore": "velg minst én fil som skal slettes",
         "fc_emore": "velg minst én fil som skal klippes ut",
+        "fcp_emore": "velg minst én fil som skal kopieres til utklippstavlen",
 
         "fs_sc": "del mappen du er i nå",
         "fs_ss": "del de valgte filene",

@@ -948,16 +966,28 @@ var Ls = {
         "fc_ok": "klippet ut {0} filer",
         "fc_warn": 'klippet ut {0} filer\n\nmen: kun <b>denne</b> nettleserfanen har mulighet til å lime dem inn et annet sted, siden antallet filer er helt hinsides',
 
-        "fp_ecut": "du må klippe ut noen filer / mapper først\n\nmerk: du kan gjerne jobbe på kryss av nettleserfaner; klippe ut i én fane, lime inn i en annen",
-        "fp_ename": "disse {0} filene kan ikke flyttes til målmappen fordi det allerede finnes filer med samme navn:",
+        "fcc_ok": "kopierte {0} filer til utklippstavlen",
+        "fcc_warn": 'kopierte {0} filer til utklippstavlen\n\nmen: kun <b>denne</b> nettleserfanen har mulighet til å lime dem inn et annet sted, siden antallet filer er helt hinsides',
+
+        "fp_apply": "bekreft og lim inn nå",
+        "fp_ecut": "du må klippe ut eller kopiere noen filer / mapper først\n\nmerk: du kan gjerne jobbe på kryss av nettleserfaner; klippe ut i én fane, lime inn i en annen",
+        "fp_ename": "{0} filer kan ikke flyttes til målmappen fordi det allerede finnes filer med samme navn. Gi dem nye navn nedenfor, eller gi dem et blankt navn for å hoppe over dem:",
+        "fcp_ename": "{0} filer kan ikke kopieres til målmappen fordi det allerede finnes filer med samme navn. Gi dem nye navn nedenfor, eller gi dem et blankt navn for å hoppe over dem:",
+        "fp_emore": "det er fortsatt flere navn som må endres",
         "fp_ok": "flytting OK",
+        "fcp_ok": "kopiering OK",
         "fp_busy": "flytter {0} filer...\n\n{1}",
+        "fcp_busy": "kopierer {0} filer...\n\n{1}",
         "fp_err": "flytting feilet:\n",
+        "fcp_err": "kopiering feilet:\n",
         "fp_confirm": "flytt disse {0} filene hit?",
+        "fcp_confirm": "kopiér disse {0} filene hit?",
         "fp_etab": 'kunne ikke lese listen med filer ifra den andre nettleserfanen',
         "fp_name": "Laster opp én fil fra enheten din. Velg filnavn:",
         "fp_both_m": '<h6>hva skal limes inn her?</h6><code>Enter</code> = Flytt {0} filer fra «{1}»\n<code>ESC</code> = Last opp {2} filer fra enheten din',
+        "fcp_both_m": '<h6>hva skal limes inn her?</h6><code>Enter</code> = Kopiér {0} filer fra «{1}»\n<code>ESC</code> = Last opp {2} filer fra enheten din',
         "fp_both_b": '<a href="#" id="modal-ok">Flytt</a><a href="#" id="modal-ng">Last opp</a>',
+        "fcp_both_b": '<a href="#" id="modal-ok">Kopiér</a><a href="#" id="modal-ng">Last opp</a>',
 
         "mk_noname": "skriv inn et navn i tekstboksen til venstre først :p",

@@ -1176,6 +1206,7 @@ var Ls = {
             ["🡅 A/D", "缩略图大小"],
             ["ctrl-K", "删除选中项"],
             ["ctrl-X", "剪切选中项"],
+            ["ctrl-C", "复制选中项"], //m
             ["ctrl-V", "粘贴到文件夹"],
             ["Y", "下载选中项"],
             ["F2", "重命名选中项"],

@@ -1271,6 +1302,7 @@ var Ls = {
         "wt_ren": "重命名选中的项目$N快捷键: F2",
         "wt_del": "删除选中的项目$N快捷键: ctrl-K",
         "wt_cut": "剪切选中的项目<small>(然后粘贴到其他地方)</small>$N快捷键: ctrl-X",
+        "wt_cpy": "将选中的项目复制到剪贴板<small>(然后粘贴到其他地方)</small>$N快捷键: ctrl-C", //m
         "wt_pst": "粘贴之前剪切/复制的选择$N快捷键: ctrl-V",
         "wt_selall": "选择所有文件$N快捷键: ctrl-A(当文件被聚焦时)",
         "wt_selinv": "反转选择",

@@ -1465,6 +1497,7 @@ var Ls = {
         "fr_emore": "选择至少一个项目以重命名",
         "fd_emore": "选择至少一个项目以删除",
         "fc_emore": "选择至少一个项目以剪切",
+        "fcp_emore": "选择至少一个要复制到剪贴板的项目", //m
 
         "fs_sc": "分享你所在的文件夹",
         "fs_ss": "分享选定的文件",

@@ -1517,16 +1550,28 @@ var Ls = {
         "fc_ok": "剪切 {0} 项",
         "fc_warn": '剪切 {0} 项\n\n但:只有 <b>这个</b> 浏览器标签页可以粘贴它们\n(因为选择非常庞大)',
 
-        "fp_ecut": "首先剪切一些文件/文件夹以粘贴/移动\n\n注意:你可以在不同的浏览器标签页之间剪切/粘贴",
-        "fp_ename": "这些 {0} 项不能移动到这里(名称已存在):",
+        "fcc_ok": "已将 {0} 项复制到剪贴板", //m
+        "fcc_warn": '已将 {0} 项复制到剪贴板\n\n但:只有 <b>这个</b> 浏览器标签页可以粘贴它们\n(因为选择非常庞大)', //m
+
+        "fp_apply": "确认并立即粘贴", //m
+        "fp_ecut": "首先剪切或复制一些文件/文件夹以粘贴/移动\n\n注意:你可以在不同的浏览器标签页之间剪切/粘贴", //m
+        "fp_ename": "{0} 项不能移动到这里,因为名称已被占用。请在下方输入新名称以继续,或将名称留空以跳过这些项:", //m
+        "fcp_ename": "{0} 项不能复制到这里,因为名称已被占用。请在下方输入新名称以继续,或将名称留空以跳过这些项:", //m
+        "fp_emore": "还有一些文件名冲突需要解决", //m
         "fp_ok": "移动成功",
+        "fcp_ok": "复制成功", //m
         "fp_busy": "正在移动 {0} 项...\n\n{1}",
+        "fcp_busy": "正在复制 {0} 项...\n\n{1}", //m
         "fp_err": "移动失败:\n",
+        "fcp_err": "复制失败:\n", //m
         "fp_confirm": "将这些 {0} 项移动到这里?",
+        "fcp_confirm": "将这些 {0} 项复制到这里?", //m
         "fp_etab": '无法从其他浏览器标签页读取剪贴板',
         "fp_name": "从你的设备上传一个文件。给它一个名字:",
         "fp_both_m": '<h6>选择粘贴内容</h6><code>Enter</code> = 从 «{1}» 移动 {0} 个文件\n<code>ESC</code> = 从你的设备上传 {2} 个文件',
+        "fcp_both_m": '<h6>选择粘贴内容</h6><code>Enter</code> = 从 «{1}» 复制 {0} 个文件\n<code>ESC</code> = 从你的设备上传 {2} 个文件', //m
         "fp_both_b": '<a href="#" id="modal-ok">移动</a><a href="#" id="modal-ng">上传</a>',
+        "fcp_both_b": '<a href="#" id="modal-ok">复制</a><a href="#" id="modal-ng">上传</a>', //m
 
         "mk_noname": "在左侧文本框中输入名称,然后再执行此操作 :p",
@@ -1771,6 +1816,7 @@ ebi('widget').innerHTML = (
     ' href="#" id="fren" tt="' + L.wt_ren + '">✎<span>name</span></a><a' +
     ' href="#" id="fdel" tt="' + L.wt_del + '">⌫<span>del.</span></a><a' +
     ' href="#" id="fcut" tt="' + L.wt_cut + '">✂<span>cut</span></a><a' +
+    ' href="#" id="fcpy" tt="' + L.wt_cpy + '">⧉<span>copy</span></a><a' +
     ' href="#" id="fpst" tt="' + L.wt_pst + '">📋<span>paste</span></a>' +
     '</span><span id="wzip"><a' +
     ' href="#" id="selall" tt="' + L.wt_selall + '">sel.<br />all</a><a' +

@@ -2118,8 +2164,9 @@ function set_files_html(html) {
 // actx breaks background album playback on ios
 var ACtx = !IPHONE && (window.AudioContext || window.webkitAudioContext),
     ACB = sread('au_cbv') || 1,
-    noih = /[?&]v\b/.exec('' + location),
     hash0 = location.hash,
+    sloc0 = '' + location,
+    noih = /[?&]v\b/.exec(sloc0),
     ldks = [],
     dks = {},
     dk, mp;

@@ -4091,6 +4138,12 @@ function eval_hash() {
         if (!im)
             return toast.warn(10, L.im_hnf);
 
+        if (thegrid.sel)
+            setTimeout(function () {
+                thegrid.sel = true;
+            }, 1);
+
+        thegrid.sel = false;
         im.click();
         im.scrollIntoView();
     }, 50);

@@ -4161,6 +4214,8 @@ function eval_hash() {
 
 
 function read_dsort(txt) {
+    dnsort = dnsort ? 1 : 0;
+    clmod(ebi('nsort'), 'on', (sread('nsort') || dnsort) == 1);
     try {
         var zt = (('' + txt).trim() || 'href').split(/,+/g);
         dsort = [];

@@ -4191,7 +4246,7 @@ function sortfiles(nodes) {
     var sopts = jread('fsort', jcp(dsort)),
         dir1st = sread('dir1st') !== '0';
 
-    var collator = sread('nsort') != 1 ? null :
+    var collator = !clgot(ebi('nsort'), 'on') ? null :
         new Intl.Collator([], {numeric: true});
 
     try {

@@ -4376,6 +4431,7 @@ var fileman = (function () {
     var bren = ebi('fren'),
         bdel = ebi('fdel'),
         bcut = ebi('fcut'),
+        bcpy = ebi('fcpy'),
         bpst = ebi('fpst'),
         bshr = ebi('fshr'),
         t_paste,

@@ -4388,14 +4444,19 @@ var fileman = (function () {
     catch (ex) { }
 
     r.render = function () {
-        if (r.clip === null)
+        if (r.clip === null) {
             r.clip = jread('fman_clip', []).slice(1);
+            r.ccp = r.clip.length && r.clip[0] == '//c';
+            if (r.ccp)
+                r.clip.shift();
+        }
 
         var sel = msel.getsel(),
             nsel = sel.length,
            enren = nsel,
            endel = nsel,
            encut = nsel,
+            encpy = nsel,
            enpst = r.clip && r.clip.length,
            hren = !(have_mv && has(perms, 'write') && has(perms, 'move')),
            hdel = !(have_del && has(perms, 'delete')),

@@ -4409,6 +4470,7 @@ var fileman = (function () {
         clmod(bren, 'en', enren);
         clmod(bdel, 'en', endel);
         clmod(bcut, 'en', encut);
+        clmod(bcpy, 'en', encpy);
         clmod(bpst, 'en', enpst);
         clmod(bshr, 'en', 1);
 

@@ -4510,9 +4572,12 @@ var fileman = (function () {
             '<tr><td>perms</td><td class="sh_axs">',
         ];
         for (var a = 0; a < perms.length; a++)
-            if (perms[a] != 'admin')
+            if (!has(['admin', 'move'], perms[a]))
                 html.push('<a href="#" class="tgl btn">' + perms[a] + '</a>');
 
+        if (has(perms, 'write'))
+            html.push('<a href="#" class="btn">write-only</a>');
+
         html.push('</td></tr></div');
         shui.innerHTML = html.join('\n');
 

@@ -4576,6 +4641,9 @@ var fileman = (function () {
 
         function shspf() {
             clmod(this, 'on', 't');
+            if (this.textContent == 'write-only')
+                for (var a = 0; a < pbtns.length; a++)
+                    clmod(pbtns[a], 'on', pbtns[a].textContent == 'write');
         }
         clmod(pbtns[0], 'on', 1);
 

@@ -4604,8 +4672,8 @@ var fileman = (function () {
                 toast.err(9, msg);
                 return;
             }
-            surl = surl.slice(15);
-            var txt = esc(surl) + '<img class="b64" src="' + surl + '?qr" />';
+            surl = surl.slice(15).trim();
+            var txt = esc(surl) + '<img class="b64" width="100" height="100" src="' + surl + '?qr" />';
             modal.confirm(txt + L.fs_ok, function() {
                 cliptxt(surl, function () {
                     toast.ok(2, L.clipped);

@@ -4697,9 +4765,9 @@ var fileman = (function () {
 
         var html = sel.length > 1 ? ['<div>'] : [
             '<div>',
-            '<button class="rn_dec" n="0" tt="' + L.frt_dec + '</button>',
+            '<button class="rn_dec" id="rn_dec_0" tt="' + L.frt_dec + '</button>',
             '//',
-            '<button class="rn_reset" n="0" tt="' + L.frt_rst + '</button>'
+            '<button class="rn_reset" id="rn_reset_0" tt="' + L.frt_rst + '</button>'
         ];
 
         html = html.concat([

@@ -4726,8 +4794,8 @@ var fileman = (function () {
         if (sel.length == 1)
             html.push(
                 '<div><table id="rn_f">\n' +
-                '<tr><td>old:</td><td><input type="text" id="rn_old" n="0" readonly /></td></tr>\n' +
-                '<tr><td>new:</td><td><input type="text" id="rn_new" n="0" /></td></tr>');
+                '<tr><td>old:</td><td><input type="text" id="rn_old_0" readonly /></td></tr>\n' +
+                '<tr><td>new:</td><td><input type="text" id="rn_new_0" /></td></tr>');
         else {
             html.push(
                 '<div><table id="rn_f" class="m">' +

@@ -4736,10 +4804,10 @@ var fileman = (function () {
             html.push(
                 '<tr><td>' +
                 (cheap ? '</td>' :
-                    '<button class="rn_dec" n="' + a + '">decode</button>' +
-                    '<button class="rn_reset" n="' + a + '">' + t_rst + '</button></td>') +
-                '<td><input type="text" id="rn_new" n="' + a + '" /></td>' +
-                '<td><input type="text" id="rn_old" n="' + a + '" readonly /></td></tr>');
+                    '<button class="rn_dec" id="rn_dec_' + a + '">decode</button>' +
+                    '<button class="rn_reset" id="rn_reset_' + a + '">' + t_rst + '</button></td>') +
+                '<td><input type="text" id="rn_new_' + a + '" /></td>' +
+                '<td><input type="text" id="rn_old_' + a + '" readonly /></td></tr>');
         }
         html.push('</table></div>');
 

@@ -4753,9 +4821,8 @@ var fileman = (function () {
 
         rui.innerHTML = html.join('\n');
         for (var a = 0; a < f.length; a++) {
-            var k = '[n="' + a + '"]';
-            f[a].iold = QS('#rn_old' + k);
-            f[a].inew = QS('#rn_new' + k);
+            f[a].iold = ebi('rn_old_' + a);
+            f[a].inew = ebi('rn_new_' + a);
             f[a].inew.value = f[a].iold.value = f[a].ofn;
 
             if (!cheap)

@@ -4766,11 +4833,11 @@ var fileman = (function () {
                 if (kc.endsWith('Enter'))
                     return rn_apply();
             };
-            QS('.rn_dec' + k).onclick = function (e) {
+            ebi('rn_dec_' + a).onclick = function (e) {
                 ev(e);
                 f[a].inew.value = uricom_dec(f[a].inew.value);
             };
-            QS('.rn_reset' + k).onclick = function (e) {
+            ebi('rn_reset_' + a).onclick = function (e) {
                 ev(e);
                 rn_reset(a);
             };

@@ -4813,6 +4880,9 @@ var fileman = (function () {
             inew = ebi('rn_pnew'),
             defp = '$lpad((tn),2,0). [(artist) - ](title).(ext)';
 
+        ire.value = sread('cpp_rn_re') || '';
+        ifmt.value = sread('cpp_rn_fmt') || '';
+
         var presets = {};
         presets[defp] = ['', defp];
         presets = jread("rn_pre", presets);

@@ -4903,6 +4973,8 @@ var fileman = (function () {
 
         function rn_apply(e) {
             ev(e);
+            swrite('cpp_rn_re', ire.value);
+            swrite('cpp_rn_fmt', ifmt.value);
             if (r.win || r.slash) {
                 var changed = 0;
                 for (var a = 0; a < f.length; a++) {

@@ -4952,7 +5024,6 @@ var fileman = (function () {
     };
 
     r.delete = function (e) {
-        ev(e);
         var sel = msel.getsel(),
             vps = [];
 

@@ -4962,6 +5033,8 @@ var fileman = (function () {
         if (!sel.length)
             return toast.err(3, L.fd_emore);
 
+        ev(e);
+
         if (clgot(bdel, 'hide'))
             return toast.err(3, L.fd_eperm);
 

@@ -5001,13 +5074,15 @@ var fileman = (function () {
     };
 
     r.cut = function (e) {
-        ev(e);
         var sel = msel.getsel(),
-            vps = [];
+            stamp = Date.now(),
+            vps = [stamp];
 
         if (!sel.length)
             return toast.err(3, L.fc_emore);
 
+        ev(e);
+
         if (clgot(bcut, 'hide'))
             return toast.err(3, L.fc_eperm);
 

@@ -5034,9 +5109,11 @@ var fileman = (function () {
             catch (ex) { }
         }, 1);
 
+        r.ccp = false;
+        r.clip = vps.slice(1);
+
         try {
-            var stamp = Date.now();
-            vps = JSON.stringify([stamp].concat(vps));
+            vps = JSON.stringify(vps);
             if (vps.length > 1024 * 1024)
                 throw 'a';
 
@@ -5050,6 +5127,60 @@ var fileman = (function () {
         }
     };
 
+    r.cpy = function (e) {
+        var sel = msel.getsel(),
+            stamp = Date.now(),
+            vps = [stamp, '//c'];
+
+        if (!sel.length)
+            return toast.err(3, L.fcp_emore);
+
+        ev(e);
+
+        var els = [], griden = thegrid.en;
+        for (var a = 0; a < sel.length; a++) {
+            vps.push(sel[a].vp);
+            if (sel.length < 100)
+                try {
+                    if (griden)
+                        els.push(QS('#ggrid>a[ref="' + sel[a].id + '"]'));
+                    else
+                        els.push(ebi(sel[a].id).closest('tr'));
+
+                    clmod(els[a], 'fcut');
+                }
+                catch (ex) { }
+        }
+
+        setTimeout(function () {
+            try {
+                for (var a = 0; a < els.length; a++)
+                    clmod(els[a], 'fcut', 1);
+            }
+            catch (ex) { }
+        }, 1);
+
+        if (vps.length < 3)
+            vps.pop();
+
+        r.ccp = true;
+        r.clip = vps.slice(2);
+
+        try {
+            vps = JSON.stringify(vps);
+            if (vps.length > 1024 * 1024)
+                throw 'a';
+
+            swrite('fman_clip', vps);
+            r.tx(stamp);
+            if (sel.length)
+                toast.inf(1.5, L.fcc_ok.format(sel.length));
+        }
+        catch (ex) {
+            toast.warn(30, L.fcc_warn.format(sel.length));
+        }
+    };
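The cut/copy clipboard is shared across browser tabs through the localStorage key `fman_clip`: a leading timestamp lets other tabs detect updates, and the `'//c'` sentinel is what distinguishes a copy from a cut (`r.render` shifts it off and sets `r.ccp`). The stored JSON, with illustrative paths:

```python
import json

cut_clip = json.loads('[1700000000000, "/music/a.flac", "/music/b.flac"]')
copy_clip = json.loads('[1700000000000, "//c", "/music/a.flac"]')
assert copy_clip[1] == "//c"  # copy marker; absent for a cut
```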
 
 document.onpaste = function (e) {
     var xfer = e.clipboardData || window.clipboardData;
     if (!xfer || !xfer.files || !xfer.files.length)
 
@@ -5065,9 +5196,9 @@ var fileman = (function () {
         return r.clip_up(files);
 
     var src = r.clip.length == 1 ? r.clip[0] : vsplit(r.clip[0])[0],
-        msg = L.fp_both_m.format(r.clip.length, src, files.length);
+        msg = (r.ccp ? L.fcp_both_m : L.fp_both_m).format(r.clip.length, src, files.length);
 
-    modal.confirm(msg, r.paste, function () { r.clip_up(files); }, null, L.fp_both_b);
+    modal.confirm(msg, r.paste, function () { r.clip_up(files); }, null, (r.ccp ? L.fcp_both_b : L.fp_both_b));
 };
 
 r.clip_up = function (files) {

@@ -5123,65 +5254,147 @@ var fileman = (function () {
         if (clgot(bpst, 'hide'))
             return toast.err(3, L.fp_eperm);
 
-        var req = [],
-            exists = [],
+        var html = [
+            '<div>',
+            '<button id="rn_cancel" tt="' + L.frt_abrt + '</button>',
+            '<button id="rn_apply">✅ ' + L.fp_apply + '</button>',
+            ' src: ' + esc(r.clip[0].replace(/[^/]+$/, '')),
+            '</div>',
+            '<p id="cnmt"></p>',
+            '<div><table id="rn_f" class="m">',
+            '<tr><td>' + L.fr_lnew + '</td><td>' + L.fr_lold + '</td></tr>',
+        ],
+            ui = false,
+            f = [],
             indir = [],
             srcdir = vsplit(r.clip[0])[0],
             links = QSA('#files tbody td:nth-child(2) a');
 
         for (var a = 0, aa = links.length; a < aa; a++)
-            indir.push(vsplit(noq_href(links[a]))[1]);
+            indir.push(uricom_dec(vsplit(noq_href(links[a]))[1]));
 
         for (var a = 0; a < r.clip.length; a++) {
-            var found = false;
-            for (var b = 0; b < indir.length; b++) {
-                if (r.clip[a].endsWith('/' + indir[b])) {
-                    exists.push(r.clip[a]);
-                    found = true;
+            var t = {
+                'ok': true,
+                'src': r.clip[a],
+                'dst': uricom_dec(r.clip[a].split('/').pop()),
+            };
+            f.push(t);
+
+            for (var b = 0; b < indir.length; b++)
+                if (t.dst == indir[b]) {
+                    t.ok = false;
+                    ui = true;
                 }
-            }
-            if (!found)
-                req.push(r.clip[a]);
+
+            html.push('<tr' + (!t.ok ? ' class="ng"' : '') + '><td><input type="text" id="rn_new_' + a + '" value="' + esc(t.dst) + '" /></td><td><input type="text" id="rn_old_' + a + '" value="' + esc(t.dst) + '" readonly /></td></tr>');
         }
 
-        if (exists.length)
-            toast.warn(30, L.fp_ename.format(exists.length) + '<ul>' + uricom_adec(exists, true).join('') + '</ul>');
-
-        if (!req.length)
-            return;
-
         function paster() {
-            var xhr = new XHR(),
-                vp = req.shift();
-
-            if (!vp) {
-                toast.ok(2, L.fp_ok);
+            var t = f.shift();
+            if (!t) {
+                toast.ok(2, r.ccp ? L.fcp_ok : L.fp_ok);
                 treectl.goto();
                 r.tx(srcdir);
                 return;
             }
-            toast.show('inf r', 0, esc(L.fp_busy.format(req.length + 1, uricom_dec(vp))));
+            if (!t.dst)
+                return paster();
 
-            var dst = get_evpath() + vp.split('/').pop();
+            toast.show('inf r', 0, esc((r.ccp ? L.fcp_busy : L.fp_busy).format(f.length + 1, uricom_dec(t.src))));
 
-            xhr.open('POST', vp + '?move=' + dst, true);
+            var xhr = new XHR(),
+                act = r.ccp ? '?copy=' : '?move=',
+                dst = get_evpath() + uricom_enc(t.dst);
+
+            xhr.open('POST', t.src + act + dst, true);
             xhr.onload = xhr.onerror = paste_cb;
             xhr.send();
         }
         function paste_cb() {
             if (this.status !== 201) {
                 var msg = unpre(this.responseText);
-                toast.err(9, L.fp_err + msg);
+                toast.err(9, (r.ccp ? L.fcp_err : L.fp_err) + msg);
                 return;
             }
             paster();
         }
 
-        modal.confirm(L.fp_confirm.format(req.length) + '<ul>' + uricom_adec(req, true).join('') + '</ul>', function () {
+        function okgo() {
             paster();
             jwrite('fman_clip', [Date.now()]);
-        }, null);
-    };
+        }
+
+        if (!ui) {
+            var src = [];
+            for (var a = 0; a < f.length; a++)
+                src.push(f[a].src);
+
+            return modal.confirm((r.ccp ? L.fcp_confirm : L.fp_confirm).format(f.length) + '<ul>' + uricom_adec(src, true).join('') + '</ul>', okgo, null);
+        }
+
+        var rui = ebi('rui');
+        if (!rui) {
+            rui = mknod('div', 'rui');
+            document.body.appendChild(rui);
+        }
+        html.push('</table>');
+        rui.innerHTML = html.join('\n');
+        tt.att(rui);
+
+        function rn_apply(e) {
+            for (var a = 0; a < f.length; a++)
+                if (!f[a].ok) {
+                    toast.err(30, L.fp_emore);
+                    return setcnmt(true);
+                }
+            rn_cancel(e);
+            okgo();
+        }
+        function rn_cancel(e) {
+            ev(e);
+            rui.parentNode.removeChild(rui);
+        }
+        ebi('rn_cancel').onclick = rn_cancel;
+        ebi('rn_apply').onclick = rn_apply;
+
+        var first_bad = 0;
+        function setcnmt(sel) {
+            var nbad = 0;
+            for (var a = 0; a < f.length; a++) {
+                if (f[a].ok)
+                    continue;
+                if (!nbad)
+                    first_bad = a;
+                nbad += 1;
+            }
+            ebi('cnmt').innerHTML = (r.ccp ? L.fcp_ename : L.fp_ename).format(nbad);
+            if (sel && nbad) {
+                var el = ebi('rn_new_' + first_bad);
+                el.focus();
+                el.setSelectionRange(0, el.value.lastIndexOf('.'), "forward");
+            }
+        }
+        setcnmt(true);
+
+        for (var a = 0; a < f.length; a++)
+            (function (a) {
+                var inew = ebi('rn_new_' + a);
+                inew.onkeydown = function (e) {
+                    if (((e.code || e.key) + '').endsWith('Enter'))
+                        return rn_apply();
+                };
+                inew.oninput = function (e) {
+                    f[a].dst = this.value;
+                    f[a].ok = true;
+                    if (f[a].dst)
+                        for (var b = 0; b < indir.length; b++)
+                            if (indir[b] == this.value)
+                                f[a].ok = false;
+                    clmod(this.closest('tr'), 'ng', !f[a].ok);
+                    setcnmt();
+                };
+            })(a);
+    }
 
     function onmsg(msg) {
         r.clip = null;
@@ -5219,6 +5432,7 @@ var fileman = (function () {
     bren.onclick = r.rename;
     bdel.onclick = r.delete;
     bcut.onclick = r.cut;
+    bcpy.onclick = r.cpy;
     bpst.onclick = r.paste;
     bshr.onclick = r.share;
 

@@ -5328,7 +5542,7 @@ var showfile = (function () {
 
             if (ah.textContent.endsWith('/'))
                 continue;
 
 
             if (lang == 'ts' || (lang == 'md' && td.textContent != '-'))
                 continue;

@@ -5731,7 +5945,7 @@ var thegrid = (function () {
         fid = oth.getAttribute('id'),
         aplay = ebi('a' + fid),
         atext = ebi('t' + fid),
-        is_txt = atext && showfile.getlang(href) && !/\.ts$/.test(href),
+        is_txt = atext && !/\.ts$/.test(href) && showfile.getlang(href),
         is_img = img_re.test(href),
         is_dir = href.endsWith('/'),
         is_srch = !!ebi('unsearch'),

@@ -6041,6 +6255,19 @@ var thegrid = (function () {
             toast.warn(10, L.ul_btnlk);
         };
 
+    if (/[?&]grid\b/.exec(sloc0))
+        swrite('griden', /[?&]grid=0\b/.exec(sloc0) ? 0 : 1)
+
+    if (/[?&]thumb\b/.exec(sloc0))
+        swrite('thumbs', /[?&]thumb=0\b/.exec(sloc0) ? 0 : 1)
+
+    if (/[?&]imgs\b/.exec(sloc0)) {
+        var n = /[?&]imgs=0\b/.exec(sloc0) ? 0 : 1;
+        swrite('griden', n);
+        if (n)
+            swrite('thumbs', 1);
+    }
+
     bcfg_bind(r, 'thumbs', 'thumbs', true, r.setdirty);
     bcfg_bind(r, 'ihop', 'ihop', true);
     bcfg_bind(r, 'vau', 'gridvau', false);

@@ -6120,8 +6347,6 @@ function tree_neigh(n) {
         links[act].click();
     else
         treectl.treego.call(links[act]);
-
-    links[act].focus();
 }
 

@@ -6219,9 +6444,6 @@ var ahotkeys = function (e) {
         ae = document.activeElement,
         aet = ae && ae != document.body ? ae.nodeName.toLowerCase() : '';
 
-    if (e.key == '?')
-        return hkhelp();
-
     if (k == 'Escape' || k == 'Esc') {
         ae && ae.blur();
         tt.hide();

@@ -6302,15 +6524,24 @@ var ahotkeys = function (e) {
     if (aet && aet != 'a' && aet != 'tr' && aet != 'td' && aet != 'div' && aet != 'pre')
         return;
 
-    if (ctrl(e)) {
+    if (e.key == '?')
+        return hkhelp();
+
+    if (!e.shiftKey && ctrl(e)) {
+        var sel = window.getSelection && window.getSelection() || {};
+        sel = sel && !sel.isCollapsed && sel.direction != 'none';
+
         if (k == 'KeyX' || k == 'x')
-            return fileman.cut();
+            return fileman.cut(e);
+
+        if ((k == 'KeyC' || k == 'c') && !sel)
+            return fileman.cpy(e);
 
         if (k == 'KeyV' || k == 'v')
-            return fileman.d_paste();
+            return fileman.d_paste(e);
 
         if (k == 'KeyK' || k == 'k')
-            return fileman.delete();
+            return fileman.delete(e);
 
         return;
     }

@@ -6803,7 +7034,7 @@ var treectl = (function () {
         xhr.open('GET', SR + '/?setck=dots=' + (v ? 'y' : ''), true);
         xhr.send();
     });
-    bcfg_bind(r, 'nsort', 'nsort', false, resort);
+    bcfg_bind(r, 'nsort', 'nsort', dnsort, resort);
     bcfg_bind(r, 'dir1st', 'dir1st', true, resort);
     setwrap(bcfg_bind(r, 'wtree', 'wraptree', true, setwrap));
     setwrap(bcfg_bind(r, 'parpane', 'parpane', true, onscroll));

@@ -7226,6 +7457,7 @@ var treectl = (function () {
             r.reqls(href, true);
             r.dir_cb = tree_scrollto;
             thegrid.setvis(true);
+            clmod(this, 'ld', 1);
         }
 
         r.reqls = function (url, hpush, back, hydrate) {

@@ -7286,6 +7518,7 @@ var treectl = (function () {
 
         try {
             var res = JSON.parse(this.responseText);
+            Object.assign(res, res.cfg);
         }
         catch (ex) {
             if (r.ls_cb) {

@@ -7308,6 +7541,7 @@ var treectl = (function () {
             if (res.files[a].tags === undefined)
                 res.files[a].tags = {};
 
+        dnsort = res.dnsort;
         read_dsort(res.dsort);
         dcrop = res.dcrop;
         dth3x = res.dth3x;

@@ -7380,6 +7614,9 @@ var treectl = (function () {
             r.ls_cb = null;
             fun();
         }
+
+        if (window.have_shr && QS('#op_unpost.act') && (cdir.startsWith(SR + have_shr) || get_evpath().startsWith(SR + have_shr)))
+            goto('unpost');
     }
 
     r.chk_index_html = function (top, res) {

@@ -7548,6 +7785,7 @@ var treectl = (function () {
             r.ls_cb = showfile.addlinks;
             return r.reqls(get_evpath(), false, undefined, true);
         }
+        ls0.unlist = unlist0;
 
         var top = get_evpath();
         if (r.chk_index_html(top, ls0))
@@ -9115,7 +9353,7 @@ var unpost = (function () {
|
||||
r.me = me;
|
||||
}
|
||||
|
||||
var q = SR + '/?ups';
|
||||
var q = get_evpath() + '?ups';
|
||||
if (filt.value)
|
||||
q += '&filter=' + uricom_enc(filt.value, true);
|
||||
|
||||
|
||||
BIN copyparty/web/iiam.gif (new file; binary file not shown, 230 B)
@@ -45,7 +45,7 @@ function qr(e) {

function showqr(href) {
    var vhref = href.replace('?qr&', '?').replace('?qr', '');
    modal.alert(esc(vhref) + '<img class="b64" src="' + href + '" />');
    modal.alert(esc(vhref) + '<img class="b64" width="100" height="100" src="' + href + '" />');
}

(function() {

@@ -71,7 +71,7 @@ function showqr(href) {
tr[a].cells[11].innerHTML =
    '<button value="1">1min</button> ' +
    '<button value="60">1h</button>';

var btns = QSA('td button'), aa = btns.length;
for (var a = 0; a < aa; a++)
    btns[a].onclick = bump;
@@ -90,6 +90,13 @@ table {
    text-align: left;
    white-space: nowrap;
}
.vols td:empty,
.vols th:empty {
    padding: 0;
}
.vols img {
    margin: -4px 0;
}
.num {
    border-right: 1px solid #bbb;
}

@@ -222,3 +229,6 @@ html.bz {
    color: #bbd;
    background: #11121d;
}
html.bz .vols img {
    filter: sepia(0.8) hue-rotate(180deg);
}
@@ -44,6 +44,18 @@
</table>
{%- endif %}

{%- if dls %}
<h1 id="ae">active downloads:</h1>
<table class="vols">
<thead><tr><th>%</th><th>sent</th><th>speed</th><th>eta</th><th>idle</th><th></th><th>dir</th><th>file</th></tr></thead>
<tbody>
{% for u in dls %}
<tr><td>{{ u[0] }}</td><td>{{ u[1] }}</td><td>{{ u[2] }}</td><td>{{ u[3] }}</td><td>{{ u[4] }}</td><td>{{ u[5] }}</td><td><a href="{{ u[6] }}">{{ u[7]|e }}</a></td><td>{{ u[8] }}</td></tr>
{% endfor %}
</tbody>
</table>
{%- endif %}

{%- if avol %}
<h1>admin panel:</h1>
<table><tr><td> <!-- hehehe -->

@@ -129,11 +141,20 @@

{% if k304 or k304vis %}
{% if k304 %}
<li><a id="h" href="{{ r }}/?k304=n">disable k304</a> (currently enabled)
<li><a id="h" href="{{ r }}/?cc&setck=k304=n">disable k304</a> (currently enabled)
{%- else %}
<li><a id="i" href="{{ r }}/?k304=y" class="r">enable k304</a> (currently disabled)
<li><a id="i" href="{{ r }}/?cc&setck=k304=y" class="r">enable k304</a> (currently disabled)
{% endif %}
<blockquote id="j">enabling this will disconnect your client on every HTTP 304, which can prevent some buggy proxies from getting stuck (suddenly not loading pages), <em>but</em> it will also make things slower in general</blockquote></li>
<blockquote id="j">enabling k304 will disconnect your client on every HTTP 304, which can prevent some buggy proxies from getting stuck (suddenly not loading pages), <em>but</em> it will also make things slower in general</blockquote></li>
{% endif %}

{% if no304 or no304vis %}
{% if no304 %}
<li><a id="ab" href="{{ r }}/?cc&setck=no304=n">disable no304</a> (currently enabled)
{%- else %}
<li><a id="ac" href="{{ r }}/?cc&setck=no304=y" class="r">enable no304</a> (currently disabled)
{% endif %}
<blockquote id="ad">enabling no304 will disable all caching; try this if k304 wasn't enough. This will waste a huge amount of network traffic!</blockquote></li>
{% endif %}

<li><a id="k" href="{{ r }}/?reset" class="r" onclick="localStorage.clear();return true">reset client settings</a></li>
@@ -34,6 +34,10 @@ var Ls = {
"ta2": "gjenta for å bekrefte nytt passord:",
"ta3": "fant en skrivefeil; vennligst prøv igjen",
"aa1": "innkommende:",
"ab1": "skru av no304",
"ac1": "skru på no304",
"ad1": "no304 stopper all bruk av cache. Hvis ikke k304 var nok, prøv denne. Vil mangedoble dataforbruk!",
"ae1": "utgående:",
},
"eng": {
"d2": "shows the state of all active threads",

@@ -80,6 +84,10 @@ var Ls = {
"ta2": "重复以确认新密码:",
"ta3": "发现拼写错误;请重试",
"aa1": "正在接收的文件:", //m
"ab1": "关闭 no304",
"ac1": "开启 no304",
"ad1": "启用 no304 将禁用所有缓存;如果 k304 不够,可以尝试此选项。这将消耗大量的网络流量!", //m
"ae1": "正在下载:", //m
}
};
@@ -73,9 +73,9 @@ html {
    position: absolute;
    height: 1px;
    top: 1px;
    right: 1%;
    width: 99%;
    animation: toastt var(--tmtime) steps(var(--tmstep)) forwards;
    right: 1px;
    left: 1px;
    animation: toastt var(--tmtime) 0.07s steps(var(--tmstep)) forwards;
    transform-origin: right;
}
@keyframes toastt {

@@ -322,6 +322,8 @@ html.y #tth {
    margin: .1em auto;
    width: 60%;
    height: 60%;
    background: #999;
    background: rgba(128,128,128,0.2);
}
#modalb {
    position: sticky;
@@ -17,10 +17,14 @@ function goto_up2k() {
var up2k = null,
    up2k_hooks = [],
    hws = [],
    hws_ok = 0,
    hws_ng = false,
    sha_js = WebAssembly ? 'hw' : 'ac', // ff53,c57,sa11
    m = 'will use ' + sha_js + ' instead of native sha512 due to';

try {
    if (sread('nosubtle') || window.nosubtle)
        throw 'chickenbit';
    var cf = crypto.subtle || crypto.webkitSubtle;
    cf.digest('SHA-512', new Uint8Array(1)).then(
        function (x) { console.log('sha-ok'); up2k = up2k_init(cf); },

@@ -242,7 +246,7 @@ function U2pvis(act, btns, uc, st) {
p = bd * 100.0 / sz,
nb = bd - bd0,
spd = nb / (td / 1000),
eta = (sz - bd) / spd;
eta = spd ? (sz - bd) / spd : 3599;

return [p, s2ms(eta), spd / (1024 * 1024)];
};

@@ -853,8 +857,13 @@ function up2k_init(subtle) {

setmsg(suggest_up2k, 'msg');

var u2szs = u2sz.split(','),
    u2sz_min = parseInt(u2szs[0]),
    u2sz_tgt = parseInt(u2szs[1]),
    u2sz_max = parseInt(u2szs[2]);

var parallel_uploads = ebi('nthread').value = icfg_get('nthread', u2j),
    stitch_tgt = ebi('u2szg').value = icfg_get('u2sz', u2sz.split(',')[1]),
    stitch_tgt = ebi('u2szg').value = icfg_get('u2sz', u2sz_tgt),
    uc = {},
    fdom_ctr = 0,
    biggest_file = 0;

@@ -1353,6 +1362,10 @@ function up2k_init(subtle) {
for (var a = 0; a < Math.min(navigator.hardwareConcurrency || 4, 16); a++)
    hws.push(new Worker(SR + '/.cpr/w.hash.js?_=' + TS));

if (!subtle)
    for (var a = 0; a < hws.length; a++)
        hws[a].postMessage('nosubtle');

console.log(hws.length + " hashers");
}

@@ -1863,10 +1876,12 @@ function up2k_init(subtle) {

function chill(t) {
    var now = Date.now();
    if ((t.coolmul || 0) < 2 || now - t.cooldown < t.coolmul * 700)
    if ((t.coolmul || 0) < 5 || now - t.cooldown < t.coolmul * 700)
        t.coolmul = Math.min((t.coolmul || 0.5) * 2, 32);

    t.cooldown = Math.max(t.cooldown || 1, Date.now() + t.coolmul * 1000);
    var cd = now + 1000 * (t.coolmul + Math.random() * 4 + 2);
    t.cooldown = Math.floor(Math.max(cd, t.cooldown || 1));
    return t;
}

/////

@@ -1951,7 +1966,7 @@ function up2k_init(subtle) {
pvis.setab(t.n, nchunks);
pvis.move(t.n, 'bz');

if (hws.length && uc.hashw && (nchunks > 1 || document.visibilityState == 'hidden'))
if (hws.length && !hws_ng && uc.hashw && (nchunks > 1 || document.visibilityState == 'hidden'))
    // resolving subtle.digest w/o worker takes 1sec on blur if the actx hack breaks
    return wexec_hash(t, chunksize, nchunks);

@@ -2060,16 +2075,27 @@ function up2k_init(subtle) {
free = [],
busy = {},
nbusy = 0,
init = 0,
hashtab = {},
mem = (MOBILE ? 128 : 256) * 1024 * 1024;

if (!hws_ok)
    init = setTimeout(function() {
        hws_ng = true;
        toast.warn(30, 'webworkers failed to start\n\nwill be a bit slower due to\nhashing on main-thread');
        apop(st.busy.hash, t);
        st.todo.hash.unshift(t);
        exec_hash();
    }, 5000);

for (var a = 0; a < hws.length; a++) {
    var w = hws[a];
    free.push(w);
    w.onmessage = onmsg;
    if (init)
        w.postMessage('ping');
    if (mem > 0)
        free.push(w);
    mem -= chunksize;
    if (mem <= 0)
        break;
}

function go_next() {

@@ -2099,6 +2125,12 @@ function up2k_init(subtle) {
d = d.data;
var k = d[0];

if (k == "pong")
    if (++hws_ok == hws.length) {
        clearTimeout(init);
        go_next();
    }

if (k == "panic")
    return vis_exh(d[1], 'up2k.js', '', '', d[1]);

@@ -2161,7 +2193,8 @@ function up2k_init(subtle) {
        tasker();
    }
}
go_next();
if (!init)
    go_next();
}

/////

@@ -2259,8 +2292,7 @@ function up2k_init(subtle) {

console.log('handshake onerror, retrying', t.name, t);
apop(st.busy.handshake, t);
st.todo.handshake.unshift(t);
t.cooldown = Date.now() + 5000 + Math.floor(Math.random() * 3000);
st.todo.handshake.unshift(chill(t));
t.keepalive = keepalive;
};
var orz = function (e) {

@@ -2273,8 +2305,7 @@ function up2k_init(subtle) {
}
catch (ex) {
    apop(st.busy.handshake, t);
    st.todo.handshake.unshift(t);
    t.cooldown = Date.now() + 5000 + Math.floor(Math.random() * 3000);
    st.todo.handshake.unshift(chill(t));
    var txt = t.t_uploading ? L.u_ehsfin : t.srch ? L.u_ehssrch : L.u_ehsinit;
    return toast.err(0, txt + '\n\n' + L.badreply + ':\n\n' + unpre(xhr.responseText));
}

@@ -2453,6 +2484,7 @@ function up2k_init(subtle) {
else {
    pvis.seth(t.n, 1, "ERROR");
    pvis.seth(t.n, 2, L.u_ehstmp, t);
    apop(st.busy.handshake, t);

    var err = "",
        cls = "ERROR",

@@ -2466,7 +2498,6 @@ function up2k_init(subtle) {
var penalty = rsp.replace(/.*rate-limit /, "").split(' ')[0];
console.log("rate-limit: " + penalty);
t.cooldown = Date.now() + parseFloat(penalty) * 1000;
apop(st.busy.handshake, t);
st.todo.handshake.unshift(t);
return;
}

@@ -2489,8 +2520,6 @@ function up2k_init(subtle) {
        cls = 'defer';
    }
}
if (rsp.indexOf('server HDD is full') + 1)
    return toast.err(0, L.u_ehsdf + "\n\n" + rsp.replace(/.*; /, ''));

if (err != "") {
    if (!t.t_uploading)

@@ -2500,10 +2529,15 @@ function up2k_init(subtle) {
    pvis.seth(t.n, 2, err);
    pvis.move(t.n, 'ng');

    apop(st.busy.handshake, t);
    tasker();
    return;
}

st.todo.handshake.unshift(chill(t));

if (rsp.indexOf('server HDD is full') + 1)
    return toast.err(0, L.u_ehsdf + "\n\n" + rsp.replace(/.*; /, ''));

err = t.t_uploading ? L.u_ehsfin : t.srch ? L.u_ehssrch : L.u_ehsinit;
xhrchk(xhr, err + "\n\nfile: " + t.name + "\n\nerror ", "404, target folder not found", "warn", t);
}

@@ -2574,8 +2608,7 @@ function up2k_init(subtle) {
nparts = upt.nparts,
pcar = nparts[0],
pcdr = nparts[nparts.length - 1],
snpart = pcar == pcdr ? pcar : ('' + pcar + '~' + pcdr),
tries = 0;
maxsz = (u2sz_max > 1 ? u2sz_max : 2040) * 1024 * 1024;

if (t.done)
    return console.log('done; skip chunk', t.name, t);

@@ -2595,6 +2628,30 @@ function up2k_init(subtle) {
if (cdr >= t.size)
    cdr = t.size;

if (cdr - car <= maxsz)
    return upload_sub(t, upt, pcar, pcdr, car, cdr, chunksize, car, []);

var car0 = car, subs = [];
while (car < cdr) {
    subs.push([car, Math.min(cdr, car + maxsz)]);
    car += maxsz;
}
upload_sub(t, upt, pcar, pcdr, 0, 0, chunksize, car0, subs);
}

function upload_sub(t, upt, pcar, pcdr, car, cdr, chunksize, car0, subs) {
    var nparts = upt.nparts,
        is_sub = subs.length;

    if (is_sub) {
        var x = subs.shift();
        car = x[0];
        cdr = x[1];
    }

    var snpart = is_sub ? ('' + pcar + '(' + (car-car0) +'+'+ (cdr-car)) :
        pcar == pcdr ? pcar : ('' + pcar + '~' + pcdr);

    var orz = function (xhr) {
        st.bytes.inflight -= xhr.bsent;
        var txt = unpre((xhr.response && xhr.response.err) || xhr.responseText);

@@ -2608,6 +2665,10 @@ function up2k_init(subtle) {
    return;
}
if (xhr.status == 200) {
    car = car0;
    if (subs.length)
        return upload_sub(t, upt, pcar, pcdr, 0, 0, chunksize, car0, subs);

    var bdone = cdr - car;
    for (var a = pcar; a <= pcdr; a++) {
        pvis.prog(t, a, Math.min(bdone, chunksize));

@@ -2616,6 +2677,7 @@ function up2k_init(subtle) {
    st.bytes.finished += cdr - car;
    st.bytes.uploaded += cdr - car;
    t.bytes_uploaded += cdr - car;
    t.cooldown = t.coolmul = 0;
    st.etac.u++;
    st.etac.t++;
}

@@ -2674,7 +2736,7 @@ function up2k_init(subtle) {
toast.warn(9.98, L.u_cuerr.format(snpart, Math.ceil(t.size / chunksize), t.name), t);

t.nojoin = t.nojoin || t.postlist.length; // maybe rproxy postsize limit
console.log('chunkpit onerror,', ++tries, t.name, t);
console.log('chunkpit onerror,', t.name, t);
orz2(xhr);
};

@@ -2692,9 +2754,13 @@ function up2k_init(subtle) {
xhr.open('POST', t.purl, true);
xhr.setRequestHeader("X-Up2k-Hash", ctxt);
xhr.setRequestHeader("X-Up2k-Wark", t.wark);
if (is_sub)
    xhr.setRequestHeader("X-Up2k-Subc", car - car0);

xhr.setRequestHeader("X-Up2k-Stat", "{0}/{1}/{2}/{3} {4}/{5} {6}".format(
    pvis.ctr.ok, pvis.ctr.ng, pvis.ctr.bz, pvis.ctr.q, btot, btot - bfin,
    st.eta.t.split(' ').pop()));
    st.eta.t.indexOf('/s, ')+1 ? st.eta.t.split(' ').pop() : 'x'));

xhr.setRequestHeader('Content-Type', 'application/octet-stream');
if (xhr.overrideMimeType)
    xhr.overrideMimeType('Content-Type', 'application/octet-stream');

@@ -2812,13 +2878,13 @@ function up2k_init(subtle) {
}

var read_u2sz = function () {
    var el = ebi('u2szg'), n = parseInt(el.value), dv = u2sz.split(',');
    var el = ebi('u2szg'), n = parseInt(el.value);
    stitch_tgt = n = (
        isNaN(n) ? dv[1] :
        n < dv[0] ? dv[0] :
        n > dv[2] ? dv[2] : n
        isNaN(n) ? u2sz_tgt :
        n < u2sz_min ? u2sz_min :
        n > u2sz_max ? u2sz_max : n
    );
    if (n == dv[1]) sdrop('u2sz'); else swrite('u2sz', n);
    if (n == u2sz_tgt) sdrop('u2sz'); else swrite('u2sz', n);
    if (el.value != n) el.value = n;
};
ebi('u2szg').addEventListener('blur', read_u2sz);
@@ -5,10 +5,17 @@ if (!window.console || !console.log)
"log": function (msg) { }
};

if (!Object.assign)
    Object.assign = function (a, b) {
        for (var k in b)
            a[k] = b[k];
    };

if (window.CGV1)
    Object.assign(window, window.CGV1);

if (window.CGV)
    for (var k in CGV)
        window[k] = CGV[k];
    Object.assign(window, window.CGV);


var wah = '',

@@ -874,6 +881,8 @@ if (window.Number && Number.isFinite)

function f2f(val, nd) {
    // 10.toFixed(1) returns 10.00 for certain values of 10
    if (!isNum(val))
        val = 999;
    val = (val * Math.pow(10, nd)).toFixed(0).split('.')[0];
    return nd ? (val.slice(0, -nd) || '0') + '.' + val.slice(-nd) : val;
}

@@ -1527,21 +1536,26 @@ var toast = (function () {
if (sec)
    te = setTimeout(r.hide, sec * 1000);

var tb = ebi('toastt');
if (same && delta < 1000 && tb) {
    tb.style.animation = 'none';
    tb.offsetHeight;
    tb.style.animation = null;
if (same && delta < 1000) {
    var tb = ebi('toastt');
    if (tb) {
        tb.style.animation = 'none';
        tb.offsetHeight;
        tb.style.animation = null;
    }
    return;
}

if (txt.indexOf('<body>') + 1)
    txt = txt.slice(0, txt.indexOf('<')) + ' [...]';

setcvar('--tmtime', sec + 's');
setcvar('--tmstep', sec * 15);

obj.innerHTML = '<div id="toastt"></div><a href="#" id="toastc">x</a><div id="toastb">' + lf2br(txt) + '</div>';
var html = '';
if (sec) {
    setcvar('--tmtime', (sec - 0.15) + 's');
    setcvar('--tmstep', Math.floor(sec * 20));
    html += '<div id="toastt"></div>';
}
obj.innerHTML = html + '<a href="#" id="toastc">x</a><div id="toastb">' + lf2br(txt) + '</div>';
obj.className = cl;
sec += obj.offsetWidth;
obj.className += ' vis';
@@ -20,6 +20,7 @@ catch (ex) {
function load_fb() {
    subtle = null;
    importScripts('deps/sha512.hw.js');
    console.log('using fallback hasher');
}

@@ -29,6 +30,12 @@ var reader = null,

onmessage = (d) => {
    if (d.data == 'nosubtle')
        return load_fb();

    if (d.data == 'ping')
        return postMessage(['pong']);

    if (busy)
        return postMessage(["panic", 'worker got another task while busy']);
@@ -1,3 +1,194 @@
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1115-2218 `v1.16.1` cbz thumbnails

## 🧪 new features

* thumbnails of .cbz manga archives 4d15dd6e

## 🩹 bugfixes

* when running with `-j0`, download-ETA could break in complex volume layouts 10fc4768
* linking to the image gallery didn't quite work if multiselect was enabled 56a04996
* password-hashing parameters (cpu/ram cost) could not be customized 1f177528
  * the defaults must be perfect considering nobody ever tried changing them ¯\\_(ツ)_/¯

## 🔧 other changes

* add intentional crash on startup if two volumes are configured to use the same histpath 2b63d7d1 (see the sketch below)
  * prevents funky deadlocks and an eventual database loss in case of a no-thoughts-head-empty moment, purely hypothetical of course 🗿
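A minimal sketch of what such a startup guard amounts to; the function and volume names are hypothetical illustrations, not copyparty's actual internals:

```python
# hypothetical illustration of a duplicate-histpath guard
def assert_unique_histpaths(volumes):
    """volumes: dict of volume-name -> histpath"""
    seen = {}
    for vol, hist in volumes.items():
        other = seen.get(hist)
        if other:
            raise SystemExit(
                "FATAL: volumes %r and %r share histpath %r" % (other, vol, hist)
            )
        seen[hist] = vol

assert_unique_histpaths({"/pub": "/tmp/hist", "/priv": "/tmp/hist"})  # crashes
```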
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1110-1932 `v1.16.0` COPYparty

## 🧪 new features

* #46 #115 copy/paste files and folders cacec9c1 (see the sketch after this list)
  * cut/paste still exists, but now you can copy too
  * with a UI to rename files in case of filename collisions 56317b00
  * files are created according to the dedup settings in the target volume (either full copies or symlinks/hardlinks)
* show currently active downloads in the controlpanel 8aba5aed
  * can be made admin-only with `--dl-list=1` or disabled with `--dl-list=0`
  * hides filenames of hidden files, and files from volumes where the viewer doesn't have access
* #114 async reinit on new [IdP users](https://github.com/9001/copyparty#identity-providers) 44ee07f0
  * new IdP users can now always auth, even while a filesystem reindex is running
* ux:
  * remember batch-rename settings from last time 6a8d5e17
  * URL parameters to force grid/thumbs on/off 5718caa9
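A minimal sketch of driving the copy feature over HTTP, matching the `POST ?copy=/dst` endpoint in the http-API docs further down; the server address, password and paths are placeholders:

```python
import requests

BASE = "http://127.0.0.1:3923"  # placeholder server
PW = "hunter2"                  # placeholder password (the &pw= url-param auth)

# copy a file; the server replies 201 Created on success
r = requests.post(BASE + "/music/song.opus", params={"copy": "/backup/song.opus", "pw": PW})
r.raise_for_status()

# move/rename works the same way with ?move
r = requests.post(BASE + "/backup/song.opus", params={"move": "/archive/song.opus", "pw": PW})
r.raise_for_status()
```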
## 🩹 bugfixes

* folders that fail to list due to a corrupt HDD/filesystem will now return a 404 instead of an empty listing 119e88d8
  * also fixes similar issues in u2c and partyfuse
* u2c (commandline uploader): detect and adapt to proxies with short connection keepalives c784e528
* ui/ux:
  * show the "switch-to-https" button in 404-messages too efd8a32e
  * the folder-loading indicator could steal keyboard focus d9962f65
  * hotkey-help was very trigger-happy 71d9e010

## 🔧 other changes

* choose more conservative defaults when server has less than 1 GiB RAM 2bf9055c
  * runs okay down to 128 MiB, but thumbnails die below 256 MiB
* update the [comparison to similar software](https://github.com/9001/copyparty/blob/hovudstraum/docs/versus.md) after years of optimizations on both sides 0ce7cf5e


▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1027-0751 `v1.15.10` temporary upload links

## 🧪 new features

* [shares](https://github.com/9001/copyparty#shares) can now be uploaded into, and unpost works too 4bdcbc1c (an upload sketch follows this list)
  * useful to create temporary URLs for other people to upload to
  * shares can be write-only, so visitors can't browse or see any files
* #110 HTTP 304 (caching):
  * support `If-Range` for HTTP 206 159f51b1
  * add server-side and client-side options to force-disable cache dd6dbdd9 (a cache-disable sketch also follows this list)
    * `--no304=1` shows a button in the controlpanel to disable caching
    * `--no304=2` makes that button auto-enabled
    * even when `--no304` is not specified, accessing the URL `/?setck=no304=y` force-disables cache
    * when cache is force-disabled, browsers will waste a lot of network traffic / data usage
    * might help to avoid bugs in browsers or proxies, for example if media files suddenly stop loading
    * but such bugs should be exceedingly rare, so do not enable this unless actually necessary
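Uploading into a write-only share can be scripted with a plain HTTP PUT; the share URL below is a made-up example (the real one comes from the share-UI):

```python
import requests

# hypothetical share URL created through the share-UI
share = "http://127.0.0.1:3923/share/abc123"

with open("report.pdf", "rb") as f:
    r = requests.put(share + "/report.pdf", data=f)
r.raise_for_status()  # expect 201 Created
```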
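And a sketch of force-disabling the cache from a client, using the `/?setck=no304=y` URL mentioned above (server address is a placeholder):

```python
import requests

s = requests.Session()
s.get("http://127.0.0.1:3923/?setck=no304=y")  # sets the no304 cookie

# subsequent requests in this session skip 304-based caching entirely
r = s.get("http://127.0.0.1:3923/music/song.opus")
```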
## 🩹 bugfixes

* #110 HTTP 304 (caching):
  * remove `Content-Length` and `Content-Type` response headers from 304 replies 91240236
    * browsers don't need these, and some middlewares might get confused if they're present
* #113 fix crash on startup if `-j0` was combined with `--ipa` or `--ipu` 3a0d882c
* #111 fix javascript crash if `--u2sz` was set to an invalid value b13899c6

## 🔧 other changes

* #110 HTTP 304 (caching):
  * never automatically enable k304 because the `Vary` header killed support for caching in msie anyways 63013cc5
  * change time comparison for `If-Modified-Since` to require an exact timestamp match, instead of the intended "modified since". This technically violates the http-spec, but should be safer for backdating file mtimes 159f51b1
* new option `--ohead` to log response headers 7678a91b
* added [nintendo 3ds](https://github.com/user-attachments/assets/88deab3d-6cad-4017-8841-2f041472b853) to the [list of supported browsers](https://github.com/9001/copyparty#browser-support) cb81f0ad


▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1018-2342 `v1.15.9` rss server

## 🧪 new features

* #109 [rss feed generator](https://github.com/9001/copyparty#rss-feeds) 7ffd805a (a reader sketch follows this list)
  * monitor folders recursively with RSS readers
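A sketch of polling such a feed from a script; this assumes the feed for a folder lives at the folder URL with an `?rss` parameter (check the linked docs for the exact URL shape) and uses the third-party `feedparser` package:

```python
import feedparser  # pip install feedparser

# hypothetical feed URL for a folder; see the docs link above
feed = feedparser.parse("http://127.0.0.1:3923/music/?rss")

for entry in feed.entries[:10]:
    print(entry.title, entry.link)
```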
## 🩹 bugfixes

* #107 `--df` diskspace limits were incompatible with webdav 2a570bb4
* #108 up2k javascript crash (only affected the Chinese translation) a7e2a0c9

## 🔧 other changes

* up2k: detect buggy webworkers 5ca8f070
* up2k: improve upload retry/timeout logic a9b4436c (a backoff sketch follows this list)
  * js: make handshake retries more aggressive
  * u2c: reduce chunks timeout + ^
  * main: reduce tcp timeout to 128sec (js is 42s)
* httpcli: less confusing log messages
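For intuition, a rough Python transliteration of the exponential-backoff helper (`chill()`) visible in the up2k.js diff above; the constants match that diff, the surrounding harness is illustrative:

```python
import random

def chill(coolmul, cooldown, now):
    """double the backoff multiplier (capped at 32) when retries come in
    hot, then push the next-allowed-retry deadline out with some jitter"""
    if coolmul < 5 or now - cooldown < coolmul * 0.7:
        coolmul = min((coolmul or 0.5) * 2, 32)
    cd = now + coolmul + random.random() * 4 + 2
    return coolmul, max(cd, cooldown)

cm, cd, t = 0, 0, 0.0
for attempt in range(6):
    cm, cd = chill(cm, cd, t)
    print("retry no sooner than t=%.1fs (coolmul=%g)" % (cd, cm))
    t = cd  # pretend the retry happens exactly at the deadline
```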
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1016-2153 `v1.15.8` the sky is the limit

## 🧪 new features

* subchunks; avoid the Cloudflare filesize limit entirely fc8298c4 48147c07 (see the sketch after this list)
  * the previous max filesize was `383.9 GiB`, now only the sky is the limit
  * if you're using another proxy with a more restrictive limit than Cloudflare's 100 MiB, for example 64 MiB, then `--u2sz 1,64,64`
* m4v videos can be played in the gallery ff0a71f2
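A rough sketch of the subchunk idea, mirroring `upload_sub` in the up2k.js diff above: a chunk bigger than the proxy's request-size limit gets posted as several smaller HTTP requests covering consecutive byte ranges, each tagged with its offset (the `X-Up2k-Subc` header in the diff); the numbers here are illustrative:

```python
def split_subchunks(car, cdr, maxsz):
    """split the byte range [car, cdr) into pieces of at most maxsz bytes"""
    subs = []
    while car < cdr:
        subs.append((car, min(cdr, car + maxsz)))
        car += maxsz
    return subs

# a 96 MiB chunk behind a proxy that only accepts 64 MiB request bodies:
MiB = 1024 * 1024
for a, b in split_subchunks(0, 96 * MiB, 64 * MiB):
    print("POST bytes %d..%d  (X-Up2k-Subc: %d)" % (a, b, a))
```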
## 🩹 bugfixes

* up2k: uploading duplicate files could initially fail (but would succeed after a few automatic retries) due to a toctou 114b71b7
* [u2c](https://github.com/9001/copyparty/blob/hovudstraum/bin/README.md#u2cpy) / commandline uploader:
  * directory scanner got stuck if it found a FIFO cba1878b
  * excessive number of FDs when uploading large files 65a2b6a2
  * chunksize calculation; only affected files exactly 128 GiB large a2e037d6
  * support filenames with newlines and invalid utf-8 b2770a20
    * invalid utf-8 is replaced by `?` when they hit the server

## 🔧 other changes

* don't show the toast countdown bar if duration is infinite 22dfc6ec
* chickenbit to disable the browser's built-in sha512 implementation and force the bundled wasm instead d715479e


▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1013-2244 `v1.15.7` the 'a' in "ip address" stands for authentication

## 🧪 new features

* [cidr-based autologin](https://github.com/9001/copyparty#ip-auth) b7f9bf5a (a matching sketch follows this list)
  * map a cidr ip-range to a username; anyone connecting from that ip-range will autologin as that user
  * thx to @byteturtle for the idea!
* [u2c](https://github.com/9001/copyparty/blob/hovudstraum/bin/README.md#u2cpy) / commandline uploader:
  * option `--chs` to list individual chunk hashes cf1b7562
  * fix progress indicator when resuming an upload 53ffd245
* up2k: verbose logging of detected/corrected bitflips ee628363
  * *foreshadowing intensifies* (story still developing)
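The matching itself boils down to a CIDR lookup; a minimal sketch with Python's standard `ipaddress` module (the ranges and usernames are made up, and the actual `--ipu` config syntax is documented under the ip-auth link above):

```python
import ipaddress

# hypothetical cidr -> username map
IPU = {
    "10.0.0.0/8": "lan-guest",
    "192.168.1.0/24": "familymember",
}

def user_for_ip(ip):
    """return the username for the most specific matching range, if any"""
    addr = ipaddress.ip_address(ip)
    best = None
    for cidr, uname in IPU.items():
        net = ipaddress.ip_network(cidr)
        if addr in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, uname)
    return best[1] if best else None

print(user_for_ip("192.168.1.7"))  # familymember
print(user_for_ip("8.8.8.8"))      # None -> no autologin
```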
## 🩹 bugfixes

* up2k with database disabled / running without `-e2d` 705f598b
  * respect `noforget` when loading snaps
  * ...but actually forget deleted files otherwise
  * snap-loader adds empty need/hash entries as necessary

## 🔧 other changes

* authed users can now unpost recent uploads of unauthed users from the same IP 22b58e31
  * would have become problematic now that cidr-based autologin is a thing


▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1011-2256 `v1.15.6` preadme

## 🧪 new features

* #105 files named `preadme.md` appear at the top of directory listings 1d68acf8
* entirely disable dedup with `--no-clone` / volflag `noclone` 3d7facd7 6b7ebdb7
  * even if a file exists for sure on the server HDD, let the client continue uploading instead of reusing the existing data
  * using this option "never" makes sense, unless you're using something like S3 Glacier storage where reading is really expensive but writing is cheap

## 🩹 bugfixes

* up2k jank after detecting a bitflip or network glitch 4a4ec88d
  * instead of resuming the interrupted upload like it should, the upload client could get stuck or start over
* #104 support viewing dotfile documents when dotfiles are hidden 9ccd8bb3
* fix a buttload of typos 6adc778d 1e7697b5


▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1005-1803 `v1.15.5` pyz all the cores
@@ -140,6 +140,7 @@ authenticate using header `Cookie: cppwd=foo` or url param `&pw=foo`
| GET | `?tar&j` | pregenerate jpg thumbnails |
| GET | `?tar&p` | pregenerate audio waveforms |
| GET | `?shares` | list your shared files/folders |
| GET | `?dls` | show active downloads (do this as admin) |
| GET | `?ups` | show recent uploads from your IP |
| GET | `?ups&filter=f` | ...where URL contains `f` |
| GET | `?mime=foo` | specify return mimetype `foo` |
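A sketch of polling the two monitoring endpoints from a script, using the `&pw=` url-param auth mentioned above; the address and password are placeholders, and the response format is whatever your server version returns:

```python
import requests

BASE = "http://127.0.0.1:3923"
PW = "hunter2"  # placeholder admin password

dls = requests.get(BASE + "/?dls&pw=" + PW)              # active downloads (admin)
ups = requests.get(BASE + "/?ups&filter=music&pw=" + PW)  # recent uploads from your IP
print(dls.text)
print(ups.text)
```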
@@ -163,6 +164,7 @@ authenticate using header `Cookie: cppwd=foo` or url param `&pw=foo`

| method | params | result |
|--|--|--|
| POST | `?copy=/foo/bar` | copy the file/folder at URL to /foo/bar |
| POST | `?move=/foo/bar` | move/rename the file/folder at URL to /foo/bar |

| method | params | body | result |

@@ -208,6 +210,12 @@ upload modifiers:

| method | params | result |
|--|--|--|
| GET | `?pw=x` | logout |
| GET | `?grid` | ui: show grid-view |
| GET | `?imgs` | ui: show grid-view with thumbnails |
| GET | `?grid=0` | ui: show list-view |
| GET | `?imgs=0` | ui: show list-view |
| GET | `?thumb` | ui, grid-mode: show thumbnails |
| GET | `?thumb=0` | ui, grid-mode: show icons |
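These parameters compose with ordinary folder URLs, so you can hand out links that force a specific view; for example (host and folder are placeholders):

```python
base = "http://127.0.0.1:3923/photos/2024/"

print(base + "?imgs")    # open in grid-view with thumbnails
print(base + "?grid=0")  # open in list-view
```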

# event hooks
@@ -58,7 +58,9 @@ currently up to date with [awesome-selfhosted](https://github.com/awesome-selfho
* [h5ai](#h5ai)
* [autoindex](#autoindex)
* [miniserve](#miniserve)
* [pingvin-share](#pingvin-share)
* [briefly considered](#briefly-considered)
* [notes](#notes)


# recommendations

@@ -106,6 +108,7 @@ some softwares not in the matrixes,
* [h5ai](#h5ai)
* [autoindex](#autoindex)
* [miniserve](#miniserve)
* [pingvin-share](#pingvin-share)

symbol legend,
* `█` = absolutely

@@ -426,6 +429,10 @@ symbol legend,
| gimme-that | python | █ mit | 4.8 MB |
| ass | ts | █ isc | • |
| linx | go | ░ gpl3 | 20 MB |
| h5ai | php | █ mit | • |
| autoindex | go | █ mpl2 | 11 MB |
| miniserve | rust | █ mit | 2 MB |
| pingvin-share | go | █ bsd2 | 487 MB |

* `size` = binary (if available) or installed size of program and its dependencies
  * copyparty size is for the [standalone python](https://github.com/9001/copyparty/releases/latest/download/copyparty-sfx.py) file; the [windows exe](https://github.com/9001/copyparty/releases/latest/download/copyparty.exe) is **6 MiB**

@@ -458,11 +465,13 @@ symbol legend,
## [hfs3](https://rejetto.com/hfs/)
* nodejs; cross-platform
* vfs with gui config, per-volume permissions
* tested locally, v0.53.2 on archlinux
* 🔵 uploads are resumable
* ⚠️ uploads are not segmented; max upload size 100 MiB on cloudflare
* ⚠️ uploads are not accelerated (copyparty is 3x faster across the atlantic)
* ⚠️ uploads are not integrity-checked
* ⚠️ copies the file after upload; need twice filesize free disk space
* ⚠️ uploading small files is decent; `107` files per sec (copyparty does `670`/sec, 6x faster)
* ⚠️ doesn't support crazy filenames
* ✅ config GUI
* ✅ download counter

@@ -471,11 +480,12 @@ symbol legend,

## [nextcloud](https://github.com/nextcloud/server)
* php, mariadb
* tested locally, [linuxserver/nextcloud](https://hub.docker.com/r/linuxserver/nextcloud) v30.0.2 (sqlite)
* ⚠️ [isolated on-disk file hierarchy] in per-user folders
  * not that bad, can probably be remedied with bindmounts or maybe symlinks
* ⚠️ uploads not resumable / accelerated / integrity-checked
* ⚠️ on cloudflare: max upload size 100 MiB
* ⚠️ uploading small files is slow; `2.2` files per sec (copyparty does `87`/sec), tested locally with [linuxserver/nextcloud](https://hub.docker.com/r/linuxserver/nextcloud) (sqlite)
* ⚠️ uploading small files is slow; `4` files per sec (copyparty does `670`/sec, 160x faster)
* ⚠️ no write-only / upload-only folders
* ⚠️ http/webdav only; no ftp, zeroconf
* ⚠️ less awesome music player

@@ -491,11 +501,12 @@ symbol legend,

## [seafile](https://github.com/haiwen/seafile)
* c, mariadb
* tested locally, [official container](https://manual.seafile.com/latest/docker/deploy_seafile_with_docker/) v11.0.13
* ⚠️ [isolated on-disk file hierarchy](https://manual.seafile.com/maintain/seafile_fsck/), incompatible with other software
  * *much worse than nextcloud* in that regard
* ⚠️ uploads not resumable / accelerated / integrity-checked
* ⚠️ on cloudflare: max upload size 100 MiB
* ⚠️ uploading small files is slow; `2.7` files per sec (copyparty does `87`/sec), tested locally with [official container](https://manual.seafile.com/docker/deploy_seafile_with_docker/)
* ⚠️ uploading small files is slow; `4.7` files per sec (copyparty does `670`/sec, 140x faster)
* ⚠️ no write-only / upload-only folders
* ⚠️ big folders cannot be zip-downloaded
* ⚠️ http/webdav only; no ftp, zeroconf

@@ -519,9 +530,11 @@ symbol legend,

## [dufs](https://github.com/sigoden/dufs)
* rust; cross-platform (windows, linux, macos)
* tested locally, v0.43.0 on archlinux (plain binary)
* ⚠️ uploads not resumable / accelerated / integrity-checked
* ⚠️ on cloudflare: max upload size 100 MiB
* ⚠️ across the atlantic, copyparty is 3x faster
* ⚠️ uploading small files is decent; `97` files per sec (copyparty does `670`/sec, 7x faster)
* ⚠️ doesn't support crazy filenames
* ✅ per-url access control (copyparty is per-volume)
* 🔵 basic but really snappy ui

@@ -564,10 +577,12 @@ symbol legend,

## [filebrowser](https://github.com/filebrowser/filebrowser)
* go; cross-platform (windows, linux, mac)
* tested locally, v2.31.2 on archlinux (plain binary)
* 🔵 uploads are resumable and segmented
* 🔵 multiple files are uploaded in parallel, but...
  * ⚠️ big files are not accelerated (copyparty is 5x faster across the atlantic)
* ⚠️ uploads are not integrity-checked
* ⚠️ uploading small files is decent; `69` files per sec (copyparty does `670`/sec, 9x faster)
* ⚠️ http only; no webdav / ftp / zeroconf
* ⚠️ doesn't support crazy filenames
* ⚠️ no directory tree nav

@@ -605,6 +620,7 @@ symbol legend,
* ⚠️ no zeroconf (mdns/ssdp)
* ⚠️ impractical directory URLs
* ⚠️ AGPL licensed
* 🔵 uploading small files is fast; `340` files per sec (copyparty does `670`/sec)
* 🔵 ftp, ftps, webdav
* ✅ sftp server
* ✅ settings gui

@@ -719,7 +735,31 @@ symbol legend,
* 🔵 upload, tar/zip download, qr-code
* ✅ faster at loading huge folders

## [pingvin-share](https://github.com/stonith404/pingvin-share)
* node; linux (docker)
* mainly for uploads, not a general file server
* 🔵 uploads are segmented (avoids cloudflare size limit)
* 🔵 segments are written directly to target file (HDD-friendly)
* ⚠️ uploads not resumable after a browser or laptop crash
* ⚠️ uploads are not accelerated / integrity-checked
* ⚠️ across the atlantic, copyparty is 3x faster
  * measured with chunksize 96 MiB; pingvin's default 10 MiB is much slower
* ⚠️ can't upload folders with subfolders
* ⚠️ no upload ETA
* 🔵 expiration times, shares, upload-undo
* ✅ config + user-registration gui
* ✅ built-in OpenID and LDAP support
  * 💾 [IdP middleware](https://github.com/9001/copyparty#identity-providers) and config-files
* ✅ probably more than one person who understands the code


# briefly considered
* [pydio](https://github.com/pydio/cells): python/agpl3, looks great, fantastic ux -- but needs mariadb, systemwide install
* [gossa](https://github.com/pldubouilh/gossa): go/mit, minimalistic, basic file upload, text editor, mkdir and rename (no delete/move)


# notes

* high-latency connections (cross-atlantic uploads) can be accurately simulated with `tc qdisc add dev eth0 root netem delay 100ms`
@@ -25,6 +25,7 @@ classifiers = [
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11",
    "Programming Language :: Python :: 3.12",
    "Programming Language :: Python :: 3.13",
    "Programming Language :: Python :: Implementation :: CPython",
    "Programming Language :: Python :: Implementation :: Jython",
    "Programming Language :: Python :: Implementation :: PyPy",

@@ -94,6 +94,7 @@ copyparty/web/deps/prismd.css,
copyparty/web/deps/scp.woff2,
copyparty/web/deps/sha512.ac.js,
copyparty/web/deps/sha512.hw.js,
copyparty/web/iiam.gif,
copyparty/web/md.css,
copyparty/web/md.html,
copyparty/web/md.js,
@@ -54,7 +54,7 @@ var tl_cpanel = {
"cc1": "other stuff:",
"h1": "disable k304", // TLNote: "j1" explains what k304 is
"i1": "enable k304",
"j1": "enabling this will disconnect your client on every HTTP 304, which can prevent some buggy proxies from getting stuck (suddenly not loading pages), <em>but</em> it will also make things slower in general",
"j1": "enabling k304 will disconnect your client on every HTTP 304, which can prevent some buggy proxies from getting stuck (suddenly not loading pages), <em>but</em> it will also make things slower in general",
"k1": "reset client settings",
"l1": "login for more:",
"m1": "welcome back,", // TLNote: "welcome back, USERNAME"

@@ -76,6 +76,10 @@ var tl_cpanel = {
"ta2": "repeat to confirm new password:",
"ta3": "found a typo; please try again",
"aa1": "incoming files:",
"ab1": "disable no304",
"ac1": "enable no304",
"ad1": "enabling no304 will disable all caching; try this if k304 wasn't enough. This will waste a huge amount of network traffic!",
"ae1": "active downloads:",
},
};

@@ -118,8 +122,9 @@ var tl_browser = {
["T", "toggle thumbnails / icons"],
["🡅 A/D", "thumbnail size"],
["ctrl-K", "delete selected"],
["ctrl-X", "cut selected"],
["ctrl-V", "paste into folder"],
["ctrl-X", "cut selection to clipboard"],
["ctrl-C", "copy selection to clipboard"],
["ctrl-V", "paste (move/copy) here"],
["Y", "download selected"],
["F2", "rename selected"],

@@ -164,7 +169,7 @@ var tl_browser = {
["I/K", "prev/next file"],
["M", "close textfile"],
["E", "edit textfile"],
["S", "select file (for cut/rename)"],
["S", "select file (for cut/copy/rename)"],
]
],

@@ -214,6 +219,7 @@ var tl_browser = {
"wt_ren": "rename selected items$NHotkey: F2",
"wt_del": "delete selected items$NHotkey: ctrl-K",
"wt_cut": "cut selected items <small>(then paste somewhere else)</small>$NHotkey: ctrl-X",
"wt_cpy": "copy selected items to clipboard$N(to paste them somewhere else)$NHotkey: ctrl-C",
"wt_pst": "paste a previously cut / copied selection$NHotkey: ctrl-V",
"wt_selall": "select all files$NHotkey: ctrl-A (when file focused)",
"wt_selinv": "invert selection",

@@ -408,6 +414,7 @@ var tl_browser = {
"fr_emore": "select at least one item to rename",
"fd_emore": "select at least one item to delete",
"fc_emore": "select at least one item to cut",
"fcp_emore": "select at least one item to copy",

"fs_sc": "share the folder you're in",
"fs_ss": "share the selected files",

@@ -460,16 +467,26 @@ var tl_browser = {
"fc_ok": "cut {0} items",
"fc_warn": 'cut {0} items\n\nbut: only <b>this</b> browser-tab can paste them\n(since the selection is so absolutely massive)',

"fp_ecut": "first cut some files / folders to paste / move\n\nnote: you can cut / paste across different browser tabs",
"fcc_ok": "copied {0} items to clipboard",
"fcc_warn": 'copied {0} items to clipboard\n\nbut: only <b>this</b> browser-tab can paste them\n(since the selection is so absolutely massive)',

"fp_ecut": "first cut or copy some files / folders to paste / move\n\nnote: you can cut / paste across different browser tabs",
"fp_ename": "these {0} items cannot be moved here (names already exist):",
"fcp_ename": "these {0} items cannot be copied here (names already exist):",
"fp_ok": "move OK",
"fcp_ok": "copy OK",
"fp_busy": "moving {0} items...\n\n{1}",
"fcp_busy": "copying {0} items...\n\n{1}",
"fp_err": "move failed:\n",
"fcp_err": "copy failed:\n",
"fp_confirm": "move these {0} items here?",
"fcp_confirm": "copy these {0} items here?",
"fp_etab": 'failed to read clipboard from other browser tab',
"fp_name": "uploading a file from your device. Give it a name:",
"fp_both_m": '<h6>choose what to paste</h6><code>Enter</code> = Move {0} files from «{1}»\n<code>ESC</code> = Upload {2} files from your device',
"fcp_both_m": '<h6>choose what to paste</h6><code>Enter</code> = Copy {0} files from «{1}»\n<code>ESC</code> = Upload {2} files from your device',
"fp_both_b": '<a href="#" id="modal-ok">Move</a><a href="#" id="modal-ng">Upload</a>',
"fcp_both_b": '<a href="#" id="modal-ok">Copy</a><a href="#" id="modal-ng">Upload</a>',

"mk_noname": "type a name into the text field on the left before you do that :p",

@@ -481,7 +498,7 @@ var tl_browser = {
"tvt_dl": "download this file$NHotkey: Y\">💾 download",
"tvt_prev": "show previous document$NHotkey: i\">⬆ prev",
"tvt_next": "show next document$NHotkey: K\">⬇ next",
"tvt_sel": "select file ( for cut / delete / ... )$NHotkey: S\">sel",
"tvt_sel": "select file ( for cut / copy / delete / ... )$NHotkey: S\">sel",
"tvt_edit": "open file in text editor$NHotkey: E\">✏️ edit",

"gt_vau": "don't show videos, just play the audio\">🎧",

@@ -89,7 +89,7 @@ var tl_cpanel = {{
"cc1": "other stuff:",
"h1": "disable k304", // TLNote: "j1" explains what k304 is
"i1": "enable k304",
"j1": "enabling this will disconnect your client on every HTTP 304, which can prevent some buggy proxies from getting stuck (suddenly not loading pages), <em>but</em> it will also make things slower in general",
"j1": "enabling k304 will disconnect your client on every HTTP 304, which can prevent some buggy proxies from getting stuck (suddenly not loading pages), <em>but</em> it will also make things slower in general",
"k1": "reset client settings",
"l1": "login for more:",
"m1": "welcome back,", // TLNote: "welcome back, USERNAME"

@@ -111,6 +111,9 @@ var tl_cpanel = {{
"ta2": "repeat to confirm new password:",
"ta3": "found a typo; please try again",
"aa1": "incoming files:",
"ab1": "disable no304",
"ac1": "enable no304",
"ad1": "enabling no304 will disable all caching; try this if k304 wasn't enough. This will waste a huge amount of network traffic!",
}},
}};
1 setup.py
@@ -108,6 +108,7 @@ args = {
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11",
    "Programming Language :: Python :: 3.12",
    "Programming Language :: Python :: 3.13",
    "Programming Language :: Python :: Implementation :: CPython",
    "Programming Language :: Python :: Implementation :: Jython",
    "Programming Language :: Python :: Implementation :: PyPy",

110 tests/test_cp.py (new file)
@@ -0,0 +1,110 @@
#!/usr/bin/env python3
# coding: utf-8
from __future__ import print_function, unicode_literals

import os
import shutil
import tempfile
import unittest
from itertools import product

from copyparty.authsrv import AuthSrv
from copyparty.httpcli import HttpCli
from tests import util as tu
from tests.util import Cfg


class TestDedup(unittest.TestCase):
    def setUp(self):
        self.td = tu.get_ramdisk()

    def tearDown(self):
        if self.conn:
            self.conn.shutdown()
        os.chdir(tempfile.gettempdir())
        shutil.rmtree(self.td)

    def reset(self):
        td = os.path.join(self.td, "vfs")
        if os.path.exists(td):
            shutil.rmtree(td)
        os.mkdir(td)
        os.chdir(td)
        for a in "abc":
            os.mkdir(a)
            for b in "fg":
                d = "%s/%s%s" % (a, a, b)
                os.mkdir(d)
                for fn in "x":
                    fp = "%s/%s%s%s" % (d, a, b, fn)
                    with open(fp, "wb") as f:
                        f.write(fp.encode("utf-8"))
        return td

    def cinit(self):
        if self.conn:
            self.fstab = self.conn.hsrv.hub.up2k.fstab
            self.conn.hsrv.hub.up2k.shutdown()
        self.asrv = AuthSrv(self.args, self.log)
        self.conn = tu.VHttpConn(self.args, self.asrv, self.log, b"", True)
        if self.fstab:
            self.conn.hsrv.hub.up2k.fstab = self.fstab

    def test(self):
        tc_dedup = ["sym", "no"]
        vols = [".::A", "a/af:a/af:r", "b:a/b:r"]
        tcs = [
            "/a?copy=/c/a /a/af/afx /a/ag/agx /a/b/bf/bfx /a/b/bg/bgx /b/bf/bfx /b/bg/bgx /c/a/af/afx /c/a/ag/agx /c/a/b/bf/bfx /c/a/b/bg/bgx /c/cf/cfx /c/cg/cgx",
            "/b?copy=/d /a/af/afx /a/ag/agx /a/b/bf/bfx /a/b/bg/bgx /b/bf/bfx /b/bg/bgx /c/cf/cfx /c/cg/cgx /d/bf/bfx /d/bg/bgx",
            "/b/bf?copy=/d /a/af/afx /a/ag/agx /a/b/bf/bfx /a/b/bg/bgx /b/bf/bfx /b/bg/bgx /c/cf/cfx /c/cg/cgx /d/bfx",
            "/a/af?copy=/d /a/af/afx /a/ag/agx /a/b/bf/bfx /a/b/bg/bgx /b/bf/bfx /b/bg/bgx /c/cf/cfx /c/cg/cgx /d/afx",
            "/a/af?copy=/ /a/af/afx /a/ag/agx /a/b/bf/bfx /a/b/bg/bgx /afx /b/bf/bfx /b/bg/bgx /c/cf/cfx /c/cg/cgx",
            "/a/af/afx?copy=/afx /a/af/afx /a/ag/agx /a/b/bf/bfx /a/b/bg/bgx /afx /b/bf/bfx /b/bg/bgx /c/cf/cfx /c/cg/cgx",
        ]

        self.conn = None
        self.fstab = None
        for dedup, act_exp in product(tc_dedup, tcs):
            action, expect = act_exp.split(" ", 1)
            t = "dedup:%s action:%s" % (dedup, action)
            print("\n\n\033[0;7m# ", t, "\033[0m")

            ka = {"dav_inf": True}
            if dedup == "hard":
                ka["hardlink"] = True
            elif dedup == "no":
                ka["no_dedup"] = True

            self.args = Cfg(v=vols, a=[], **ka)
            self.reset()
            self.cinit()

            self.do_cp(action)
            zs = self.propfind()

            fns = " ".join(zs[1])
            self.assertEqual(expect, fns)

    def do_cp(self, action):
        hdr = "POST %s HTTP/1.1\r\nConnection: close\r\nContent-Length: 0\r\n\r\n"
        buf = (hdr % (action,)).encode("utf-8")
        print("CP [%s]" % (action,))
        HttpCli(self.conn.setbuf(buf)).run()
        ret = self.conn.s._reply.decode("utf-8").split("\r\n\r\n", 1)
        print("CP <-- ", ret)
        self.assertIn(" 201 Created", ret[0])
        self.assertEqual("k\r\n", ret[1])
        return ret

    def propfind(self):
        h = "PROPFIND / HTTP/1.1\r\nConnection: close\r\n\r\n"
        HttpCli(self.conn.setbuf(h.encode("utf-8"))).run()
        h, t = self.conn.s._reply.decode("utf-8").split("\r\n\r\n", 1)
        fns = t.split("<D:response><D:href>")[1:]
        fns = [x.split("</D", 1)[0] for x in fns]
        fns = [x for x in fns if not x.endswith("/")]
        fns.sort()
        return h, fns

    def log(self, src, msg, c=0):
        print(msg)
@@ -34,6 +34,8 @@ class TestDedup(unittest.TestCase):
|
||||
]
|
||||
|
||||
def tearDown(self):
|
||||
if self.conn:
|
||||
self.conn.shutdown()
|
||||
os.chdir(tempfile.gettempdir())
|
||||
shutil.rmtree(self.td)
|
||||
|
||||
|
||||
@@ -17,6 +17,11 @@ from copyparty.up2k import Up2k
|
||||
from tests import util as tu
|
||||
from tests.util import Cfg
|
||||
|
||||
try:
|
||||
from typing import Optional
|
||||
except:
|
||||
pass
|
||||
|
||||
|
||||
def hdr(query, uname):
|
||||
h = "GET /%s HTTP/1.1\r\nPW: %s\r\nConnection: close\r\n\r\n"
|
||||
@@ -29,12 +34,21 @@ class TestDots(unittest.TestCase):
|
||||
self.is_dut = True
|
||||
|
||||
def setUp(self):
|
||||
self.conn: Optional[tu.VHttpConn] = None
|
||||
self.td = tu.get_ramdisk()
|
||||
|
||||
def tearDown(self):
|
||||
if self.conn:
|
||||
self.conn.shutdown()
|
||||
os.chdir(tempfile.gettempdir())
|
||||
shutil.rmtree(self.td)
|
||||
|
||||
def cinit(self):
|
||||
if self.conn:
|
||||
self.conn.shutdown()
|
||||
self.conn = None
|
||||
self.conn = tu.VHttpConn(self.args, self.asrv, self.log, b"")
|
||||
|
||||
def test_dots(self):
|
||||
td = os.path.join(self.td, "vfs")
|
||||
os.mkdir(td)
|
||||
@@ -57,6 +71,7 @@ class TestDots(unittest.TestCase):
|
||||
vcfg = [".::r,u1:r.,u2", "a:a:r,u1:r,u2", ".b:.b:r.,u1:r,u2"]
|
||||
self.args = Cfg(v=vcfg, a=["u1:u1", "u2:u2"], e2dsa=True)
|
||||
self.asrv = AuthSrv(self.args, self.log)
|
||||
self.cinit()
|
||||
|
||||
self.assertEqual(self.tardir("", "u1"), "f0 t/f1 a/f3 a/da/f4")
|
||||
self.assertEqual(self.tardir(".t", "u1"), "f2")
|
||||
@@ -88,6 +103,7 @@ class TestDots(unittest.TestCase):
|
||||
self.args = Cfg(v=vcfg, a=["u1:u1", "u2:u2"], dotsrch=False, e2d=True)
|
||||
self.asrv = AuthSrv(self.args, self.log)
|
||||
u2idx = U2idx(self)
|
||||
self.cinit()
|
||||
|
||||
x = u2idx.search("u1", self.asrv.vfs.all_vols.values(), "", 999)
|
||||
x = " ".join(sorted([x["rp"] for x in x[0]]))
|
||||
@@ -113,6 +129,8 @@ class TestDots(unittest.TestCase):
|
||||
]
|
||||
self.args = Cfg(v=vcfg, a=["u1:u1", "u2:u2"])
|
||||
self.asrv = AuthSrv(self.args, self.log)
|
||||
self.cinit()
|
||||
|
||||
zj = json.loads(self.curl("?ls", "u1")[1])
|
||||
url = "?k=" + zj["dk"]
|
||||
# should descend into folders, but not other volumes:
|
||||
@@ -148,6 +166,7 @@ class TestDots(unittest.TestCase):
|
||||
|
||||
self.args = Cfg(v=vcfg, a=["u1:u1", "u2:u2"])
|
||||
self.asrv = AuthSrv(self.args, self.log)
|
||||
self.cinit()
|
||||
|
||||
dk = {}
|
||||
for d in "dk dks dk,fk dks,fk".split():
|
||||
@@ -353,7 +372,7 @@ class TestDots(unittest.TestCase):
|
||||
|
||||
def curl(self, url, uname, binary=False, req=b""):
|
||||
req = req or hdr(url, uname)
|
||||
conn = tu.VHttpConn(self.args, self.asrv, self.log, req)
|
||||
conn = self.conn.setbuf(req)
|
||||
HttpCli(conn).run()
|
||||
if binary:
|
||||
h, b = conn.s._reply.split(b"\r\n\r\n", 1)
|
||||
|
||||
@@ -12,6 +12,11 @@ from copyparty.httpcli import HttpCli
|
||||
from tests import util as tu
|
||||
from tests.util import Cfg
|
||||
|
||||
try:
|
||||
from typing import Optional
|
||||
except:
|
||||
pass
|
||||
|
||||
|
||||
def hdr(query):
|
||||
h = "GET /{} HTTP/1.1\r\nPW: o\r\nConnection: close\r\n\r\n"
|
||||
@@ -20,6 +25,7 @@ def hdr(query):
|
||||
|
||||
class TestHooks(unittest.TestCase):
|
||||
def setUp(self):
|
||||
self.conn: Optional[tu.VHttpConn] = None
|
||||
self.td = tu.get_ramdisk()
|
||||
|
||||
def tearDown(self):
|
||||
@@ -34,6 +40,12 @@ class TestHooks(unittest.TestCase):
|
||||
os.chdir(td)
|
||||
return td
|
||||
|
||||
def cinit(self):
|
||||
if self.conn:
|
||||
self.conn.shutdown()
|
||||
self.conn = None
|
||||
self.conn = tu.VHttpConn(self.args, self.asrv, self.log, b"")
|
||||
|
||||
def test(self):
|
||||
vcfg = ["a/b/c/d:c/d:A", "a:a:r"]
|
||||
|
||||
@@ -59,6 +71,7 @@ class TestHooks(unittest.TestCase):
|
||||
ka = {hooktype: ["j,c1,h.py"]}
|
||||
self.args = Cfg(v=vcfg, a=["o:o"], e2d=True, **ka)
|
||||
self.asrv = AuthSrv(self.args, self.log)
|
||||
self.cinit()
|
||||
|
||||
h, b = upfun(url_up)
|
||||
self.assertIn("201 Created", h)
|
||||
@@ -73,7 +86,7 @@ class TestHooks(unittest.TestCase):
|
||||
buf = "PUT /{0} HTTP/1.1\r\nPW: o\r\nConnection: close\r\nContent-Length: {1}\r\n\r\nok {0}\n"
|
||||
buf = buf.format(url, len(url) + 4).encode("utf-8")
|
||||
print("PUT -->", buf)
|
||||
conn = tu.VHttpConn(self.args, self.asrv, self.log, buf)
|
||||
conn = self.conn.setbuf(buf)
|
||||
HttpCli(conn).run()
|
||||
ret = conn.s._reply.decode("utf-8").split("\r\n\r\n", 1)
|
||||
print("PUT <--", ret)
|
||||
@@ -92,14 +105,14 @@ class TestHooks(unittest.TestCase):
|
||||
buf = (bdy % (fn,) + "ok %s/%s\n" % (url, fn) + ftr).encode("utf-8")
|
||||
buf = (hdr % (url, len(buf))).encode("utf-8") + buf
|
||||
print("PoST -->", buf)
|
||||
conn = tu.VHttpConn(self.args, self.asrv, self.log, buf)
|
||||
conn = self.conn.setbuf(buf)
|
||||
HttpCli(conn).run()
|
||||
ret = conn.s._reply.decode("utf-8").split("\r\n\r\n", 1)
|
||||
print("POST <--", ret)
|
||||
return ret
|
||||
|
||||
def curl(self, url, binary=False):
|
||||
conn = tu.VHttpConn(self.args, self.asrv, self.log, hdr(url))
|
||||
conn = self.conn.setbuf(hdr(url))
|
||||
HttpCli(conn).run()
|
||||
if binary:
|
||||
h, b = conn.s._reply.split(b"\r\n\r\n", 1)
|
||||
|
||||
@@ -178,6 +178,8 @@ class TestHttpCli(unittest.TestCase):
|
||||
ap = os.path.join(vn.realpath, rem)
|
||||
os.unlink(ap)
|
||||
|
||||
self.conn.shutdown()
|
||||
|
||||
def can_rw(self, fp):
|
||||
# lowest non-neutral folder declares permissions
|
||||
expect = fp.split("/")[:-1]
|
||||
|
||||
```diff
@@ -27,6 +27,8 @@ class TestMetrics(unittest.TestCase):
         os.chdir(self.td)
 
     def tearDown(self):
+        if self.conn:
+            self.conn.shutdown()
         os.chdir(tempfile.gettempdir())
         shutil.rmtree(self.td)
 
```
```diff
@@ -57,6 +59,7 @@ class TestMetrics(unittest.TestCase):
         ptns = r"""
 cpp_uptime_seconds [0-9]\.[0-9]{3}$
 cpp_boot_unixtime_seconds [0-9]{7,10}\.[0-9]{3}$
+cpp_active_dl 0$
 cpp_http_reqs_created [0-9]{7,10}$
 cpp_http_reqs_total -1$
 cpp_http_conns 9$
```
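`ptns` holds one regex per line, each matched against the metrics endpoint's output; the added `cpp_active_dl 0$` line asserts that the active-download gauge introduced in this changeset starts at zero. A sketch of how such per-line patterns can be checked — the loop is illustrative, not necessarily the test's exact code:

```python
import re

# one pattern per line, matched in MULTILINE mode so that
# ^ and $ anchor to individual lines of the metrics body
ptns = r"""
cpp_uptime_seconds [0-9]\.[0-9]{3}$
cpp_active_dl 0$
"""

body = "cpp_uptime_seconds 1.234\ncpp_active_dl 0\n"
for ptn in ptns.strip().split("\n"):
    assert re.search("^" + ptn, body, re.MULTILINE), ptn
```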
```diff
@@ -25,6 +25,8 @@ class TestDedup(unittest.TestCase):
         self.td = tu.get_ramdisk()
 
     def tearDown(self):
+        if not PY2 and self.conn:
+            self.conn.shutdown()
         os.chdir(tempfile.gettempdir())
         shutil.rmtree(self.td)
 
```
```diff
@@ -122,22 +122,22 @@ class Cfg(Namespace):
     def __init__(self, a=None, v=None, c=None, **ka0):
         ka = {}
 
-        ex = "chpw daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp early_ban ed emp exp force_js getmod grid gsel hardlink ih ihead magic hardlink_only nid nih no_acode no_athumb no_clone no_dav no_db_ip no_del no_dirsz no_dupe no_lifetime no_logues no_mv no_pipe no_poll no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw og og_no_head og_s_title q rand re_dirsz smb srch_dbg stats uqe vague_403 vc ver write_uplog xdev xlink xvol zs"
+        ex = "chpw daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp early_ban ed emp exp force_js getmod grid gsel hardlink ih ihead magic hardlink_only nid nih no_acode no_athumb no_clone no_cp no_dav no_db_ip no_del no_dirsz no_dupe no_lifetime no_logues no_mv no_pipe no_poll no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nsort nw og og_no_head og_s_title ohead q rand re_dirsz rss smb srch_dbg stats uqe vague_403 vc ver write_uplog xdev xlink xvol zs"
         ka.update(**{k: False for k in ex.split()})
 
         ex = "dedup dotpart dotsrch hook_v no_dhash no_fastboot no_fpool no_htp no_rescan no_sendfile no_ses no_snap no_up_list no_voldump re_dhash plain_ip"
         ka.update(**{k: True for k in ex.split()})
 
-        ex = "ah_cli ah_gen css_browser hist js_browser js_other mime mimes no_forget no_hash no_idx nonsus_urls og_tpl og_ua"
+        ex = "ah_cli ah_gen css_browser hist ipu js_browser js_other mime mimes no_forget no_hash no_idx nonsus_urls og_tpl og_ua"
         ka.update(**{k: None for k in ex.split()})
 
         ex = "hash_mt safe_dedup srch_time u2abort u2j u2sz"
         ka.update(**{k: 1 for k in ex.split()})
 
-        ex = "au_vol mtab_age reg_cap s_thead s_tbody th_convt"
+        ex = "au_vol dl_list mtab_age reg_cap s_thead s_tbody th_convt"
         ka.update(**{k: 9 for k in ex.split()})
 
-        ex = "db_act k304 loris re_maxage rproxy rsp_jtr rsp_slp s_wr_slp snap_wri theme themes turbo"
+        ex = "db_act k304 loris no304 re_maxage rproxy rsp_jtr rsp_slp s_wr_slp snap_wri theme themes turbo"
         ka.update(**{k: 0 for k in ex.split()})
 
         ex = "ah_alg bname chpw_db doctitle df exit favico idp_h_usr ipa html_head lg_sbf log_fk md_sbf name og_desc og_site og_th og_title og_title_a og_title_v og_title_i shr tcolor textfiles unlist vname xff_src R RS SR"
```
```diff
@@ -146,7 +146,7 @@ class Cfg(Namespace):
         ex = "ban_403 ban_404 ban_422 ban_pw ban_url"
         ka.update(**{k: "no" for k in ex.split()})
 
-        ex = "grp on403 on404 xad xar xau xban xbd xbr xbu xiu xm"
+        ex = "grp on403 on404 xac xad xar xau xban xbc xbd xbr xbu xiu xm"
         ka.update(**{k: [] for k in ex.split()})
 
         ex = "exp_lg exp_md"
```
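`Cfg` fakes copyparty's parsed command-line arguments by bulk-assigning a shared default to each batch of option names, so supporting a new CLI flag such as `rss`, `no_cp`, `ipu`, `dl_list` or `no304` costs exactly one word in the string with the right default. A reduced sketch of the idiom:

```python
from argparse import Namespace


class MiniCfg(Namespace):
    # reduced sketch of tests.util.Cfg; the flag names are examples
    def __init__(self, **ka0):
        ka = {}

        ex = "rss no_cp ohead"  # boolean flags, default off
        ka.update({k: False for k in ex.split()})

        ex = "dl_list s_thead"  # numeric knobs, default 9
        ka.update({k: 9 for k in ex.split()})

        ka.update(ka0)  # explicit kwargs override the defaults
        super(MiniCfg, self).__init__(**ka)


c = MiniCfg(rss=True)
assert c.rss and not c.no_cp and c.dl_list == 9
```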
```diff
@@ -254,6 +254,8 @@ class VHttpSrv(object):
         self.broker = NullBroker(args, asrv)
         self.prism = None
         self.bans = {}
+        self.tdls = self.dls = {}
+        self.tdli = self.dli = {}
         self.nreq = 0
         self.nsus = 0
 
```
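Note that the chained assignment `self.tdls = self.dls = {}` binds both attributes to the *same* dict, so code reading either name sees the same state; whether the real server keeps them as separate dicts is not visible from this diff. A two-line illustration of the aliasing:

```python
# chained assignment: one dict, two names
tdls = dls = {}
dls["job1"] = "active"
assert tdls["job1"] == "active"  # visible through either name
```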
```diff
@@ -276,6 +278,10 @@ class VHttpSrv(object):
         self.u2idx = self.u2idx or U2idx(self)
         return self.u2idx
 
+    def shutdown(self):
+        if self.u2idx:
+            self.u2idx.shutdown()
+
 
 class VHttpSrvUp2k(VHttpSrv):
     def __init__(self, args, asrv, log):
```
```diff
@@ -283,6 +289,11 @@ class VHttpSrvUp2k(VHttpSrv):
         self.hub = VHub(args, asrv, log)
         self.broker = VBrokerThr(self.hub)
 
+    def shutdown(self):
+        self.hub.up2k.shutdown()
+        if self.u2idx:
+            self.u2idx.shutdown()
+
 
 class VHttpConn(object):
     def __init__(self, args, asrv, log, buf, use_up2k=False):
```
```diff
@@ -292,6 +303,8 @@ class VHttpConn(object):
         self.args = args
         self.asrv = asrv
         self.bans = {}
+        self.tdls = self.dls = {}
+        self.tdli = self.dli = {}
         self.freshen_pwd = 0.0
 
         Ctor = VHttpSrvUp2k if use_up2k else VHttpSrv
```
```diff
@@ -318,6 +331,9 @@ class VHttpConn(object):
         self.sr = Unrecv(self.s, None) # type: ignore
         return self
 
+    def shutdown(self):
+        self.hsrv.shutdown()
+
 
 if WINDOWS:
     os.system("rem")
```
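These `shutdown()` methods give the tests a way to stop any worker threads (`u2idx`, up2k) that the virtual server spawned, which is what the new `self.conn.shutdown()` calls in the tearDown hunks above ultimately reach. A minimal sketch of the chain, with illustrative class names:

```python
# illustrative sketch of the shutdown chain added above:
# conn.shutdown() -> hsrv.shutdown() -> u2idx.shutdown() (if any)
class FakeU2idx(object):
    def shutdown(self):
        print("u2idx stopped")


class FakeHsrv(object):
    def __init__(self):
        self.u2idx = None  # created lazily; may never exist

    def shutdown(self):
        if self.u2idx:
            self.u2idx.shutdown()


class FakeConn(object):
    def __init__(self):
        self.hsrv = FakeHsrv()

    def shutdown(self):
        self.hsrv.shutdown()


conn = FakeConn()
conn.shutdown()  # safe even if u2idx was never created
```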