Compare commits

16 commits:

b90e1200d7
4493a0a804
58835b2b42
427597b603
7d64879ba8
bb715704b7
d67e9cc507
2927bbb2d6
0527b59180
a5ce1032d3
1c2acdc985
4e75534ef8
7a573cafd1
844194ee29
609c5921d4
c79eaa089a
README.md (16 lines changed)
@@ -1,4 +1,4 @@
-<img src="docs/logo.svg" width="250" align="right"/>
+<img src="https://github.com/9001/copyparty/raw/hovudstraum/docs/logo.svg" width="250" align="right"/>

 ### 💾🎉 copyparty

@@ -43,6 +43,7 @@ turn almost any device into a file server with resumable uploads/downloads using
 * [unpost](#unpost) - undo/delete accidental uploads
 * [self-destruct](#self-destruct) - uploads can be given a lifetime
 * [race the beam](#race-the-beam) - download files while they're still uploading ([demo video](http://a.ocv.me/pub/g/nerd-stuff/cpp/2024-0418-race-the-beam.webm))
+* [incoming files](#incoming-files) - the control-panel shows the ETA for all incoming files
 * [file manager](#file-manager) - cut/paste, rename, and delete files/folders (if you have permission)
 * [shares](#shares) - share a file or folder by creating a temporary link
 * [batch rename](#batch-rename) - select some files and press `F2` to bring up the rename UI

@@ -240,7 +241,7 @@ also see [comparison to similar software](./docs/versus.md)
 * ☑ ...of videos using FFmpeg
 * ☑ ...of audio (spectrograms) using FFmpeg
 * ☑ cache eviction (max-age; maybe max-size eventually)
-* ☑ multilingual UI (english, norwegian, [add your own](./docs/rice/#translations)))
+* ☑ multilingual UI (english, norwegian, chinese, [add your own](./docs/rice/#translations)))
 * ☑ SPA (browse while uploading)
 * server indexing
 * ☑ [locate files by contents](#file-search)
@@ -731,6 +732,13 @@ download files while they're still uploading ([demo video](http://a.ocv.me/pub/g/nerd-stuff/cpp/2024-0418-race-the-beam.webm))
 requires the file to be uploaded using up2k (which is the default drag-and-drop uploader), alternatively the command-line program

+
+### incoming files
+
+the control-panel shows the ETA for all incoming files, but only for files being uploaded into volumes where you have read-access
+
+
+
 ## file manager

 cut/paste, rename, and delete files/folders (if you have permission)
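The ETA shown in that control-panel listing is computed in the httpcli.py hunk further down; pulled out here as a standalone sketch (the function name and wiring are hypothetical, the arithmetic is the diff's own):

```python
import time

def upload_eta(rem, sz, t0, now):
    # rem = fraction of chunks still missing, sz = filesize in bytes,
    # t0 = when the upload started; same math as the httpcli.py hunk below
    fdone = max(0.001, 1 - rem)  # fraction completed so far
    td = max(0.1, now - t0)      # elapsed seconds
    spd = sz * fdone / td        # average upload speed, bytes/sec
    eta = (td / fdone) - td      # seconds remaining at that speed
    return int(100 * fdone), spd, eta

pct, spd, eta = upload_eta(0.25, 8 << 30, time.time() - 120, time.time())
print("%d%% done, %.1f MiB/s, ~%.0fs left" % (pct, spd / 2 ** 20, eta))
```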
@@ -1549,6 +1557,8 @@ you can either:
 * or do location-based proxying, using `--rp-loc=/stuff` to tell copyparty where it is mounted -- has a slight performance cost and higher chance of bugs
   * if copyparty says `incorrect --rp-loc or webserver config; expected vpath starting with [...]` it's likely because the webserver is stripping away the proxy location from the request URLs -- see the `ProxyPass` in the apache example below

 when running behind a reverse-proxy (this includes services like cloudflare), it is important to configure real-ip correctly, as many features rely on knowing the client's IP. Look out for red and yellow log messages which explain how to do this. But basically, set `--xff-hdr` to the name of the http header to read the IP from (usually `x-forwarded-for`, but cloudflare uses `cf-connecting-ip`), and then `--xff-src` to the IP of the reverse-proxy so copyparty will trust the xff-hdr. Note that `--rp-loc` in particular will not work at all unless you do this

 some reverse proxies (such as [Caddy](https://caddyserver.com/)) can automatically obtain a valid https/tls certificate for you, and some support HTTP/2 and QUIC which *could* be a nice speed boost, depending on a lot of factors
+* **warning:** nginx-QUIC (HTTP/3) is still experimental and can make uploads much slower, so HTTP/1.1 is recommended for now
+* depending on server/client, HTTP/1.1 can also be 5x faster than HTTP/2
@@ -1879,6 +1889,7 @@ interact with copyparty using non-browser clients
 * [rclone](https://rclone.org/) as client can give ~5x performance, see [./docs/rclone.md](docs/rclone.md)

 * sharex (screenshot utility): see [./contrib/sharex.sxcu](contrib/#sharexsxcu)
+  * and for screenshots on linux, see [./contrib/flameshot.sh](./contrib/flameshot.sh)

 * contextlet (web browser integration); see [contrib contextlet](contrib/#send-to-cppcontextletjson)
@@ -1957,6 +1968,7 @@ below are some tweaks roughly ordered by usefulness:
   * and also makes thumbnails load faster, regardless of e2d/e2t
 * `--dedup` enables deduplication and thus avoids writing to the HDD if someone uploads a dupe
 * `--safe-dedup 1` makes deduplication much faster during upload by skipping verification of file contents; safe if there is no other software editing/moving the files in the volumes
+* `--no-dirsz` shows the size of folder inodes instead of the total size of the contents, giving about 30% faster folder listings
 * `--no-hash .` when indexing a network-disk if you don't care about the actual filehashes and only want the names/tags searchable
 * if your volumes are on a network-disk such as NFS / SMB / s3, specifying larger values for `--iobuf` and/or `--s-rd-sz` and/or `--s-wr-sz` may help; try setting all of them to `524288` or `1048576` or `4194304`
 * `--no-htp --hash-mt=0 --mtag-mt=1 --th-mt=1` minimizes the number of threads; can help in some eccentric environments (like the vscode debugger)
@@ -19,6 +19,9 @@
   * the `act:bput` thing is optional since copyparty v1.9.29
   * using an older sharex version, maybe sharex v12.1.1 for example? dw fam i got your back 👉😎👉 [`sharex12.sxcu`](sharex12.sxcu)

+### [`flameshot.sh`](flameshot.sh)
+* takes a screenshot with [flameshot](https://flameshot.org/) on Linux, uploads it, and writes the URL to clipboard
+
 ### [`send-to-cpp.contextlet.json`](send-to-cpp.contextlet.json)
 * browser integration, kind of? custom rightclick actions and stuff
 * rightclick a pic and send it to copyparty straight from your browser
contrib/flameshot.sh (new executable file, 14 lines)
@@ -0,0 +1,14 @@
+#!/bin/bash
+set -e
+
+# take a screenshot with flameshot and send it to copyparty;
+# the image url will be placed on your clipboard
+
+password=wark
+url=https://a.ocv.me/up/
+filename=$(date +%Y-%m%d-%H%M%S).png
+
+flameshot gui -s -r |
+curl -T- $url$filename?pw=$password |
+tail -n 1 |
+xsel -ib
@@ -1,14 +1,10 @@
 # when running copyparty behind a reverse proxy,
 # the following arguments are recommended:
 #
 # -i 127.0.0.1    only accept connections from nginx
 #
-# -nc must match or exceed the webserver's max number of concurrent clients;
-#    copyparty default is 1024 if OS permits it (see "max clients:" on startup),
+# look for "max clients:" when starting copyparty, as nginx should
+# not accept more consecutive clients than what copyparty is able to;
+# nginx default is 512  (worker_processes 1, worker_connections 512)
 #
-# you may also consider adding -j0 for CPU-intensive configurations
-# (5'000 requests per second, or 20gbps upload/download in parallel)
+# rarely, in some extreme usecases, it can be good to add -j0
+# (40'000 requests per second, or 20gbps upload/download in parallel)
+# but this is usually counterproductive and slightly buggy
 #
 # on fedora/rhel, remember to setsebool -P httpd_can_network_connect 1
 #
@@ -20,10 +16,33 @@
 #
 # and then enable it below by uncomenting the cloudflare-only.conf line

-upstream cpp {
+upstream cpp_tcp {
+    # alternative 1: connect to copyparty using tcp;
+    # cpp_uds is slightly faster and more secure, but
+    # cpp_tcp is easier to setup and "just works"
+    # ...you should however restrict copyparty to only
+    # accept connections from nginx by adding these args:
+    #    -i 127.0.0.1
+
     server 127.0.0.1:3923 fail_timeout=1s;
     keepalive 1;
 }

+upstream cpp_uds {
+    # alternative 2: unix-socket, aka. "unix domain socket";
+    # 5-10% faster, and better isolation from other software,
+    # but there must be at least one unix-group which both
+    # nginx and copyparty is a member of; if that group is
+    # "www" then run copyparty with the following args:
+    #    -i unix:770:www:/tmp/party.sock
+
+    server unix:/tmp/party.sock fail_timeout=1s;
+    keepalive 1;
+}

 server {
     listen 443 ssl;
     listen [::]:443 ssl;
@@ -34,7 +53,8 @@ server {
     #include /etc/nginx/cloudflare-only.conf;

     location / {
-        proxy_pass http://cpp;
+        # recommendation: replace cpp_tcp with cpp_uds below
+        proxy_pass http://cpp_tcp;
         proxy_redirect off;
         # disable buffering (next 4 lines)
         proxy_http_version 1.1;

@@ -52,6 +72,7 @@ server {
     }
 }

 # default client_max_body_size (1M) blocks uploads larger than 256 MiB
 client_max_body_size 1024M;
 client_header_timeout 610m;
@@ -1,6 +1,6 @@
 # Maintainer: icxes <dev.null@need.moe>
 pkgname=copyparty
-pkgver="1.15.0"
+pkgver="1.15.1"
 pkgrel=1
 pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
 arch=("any")

@@ -21,7 +21,7 @@ optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tag
 )
 source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
 backup=("etc/${pkgname}.d/init" )
-sha256sums=("cd082e1dc93ef0bd8b6115155f467e14bf450874d0a822567416f5e30fc55618")
+sha256sums=("5fb048fe7e2aa5ad18c9cdb333af3ee5e51c338efa74b34aa8aa444675eac913")

 build() {
     cd "${srcdir}/${pkgname}-${pkgver}"
@@ -1,5 +1,5 @@
 {
-    "url": "https://github.com/9001/copyparty/releases/download/v1.15.0/copyparty-sfx.py",
-    "version": "1.15.0",
-    "hash": "sha256-4W7GMdukwG6CNaVrCCOF12tdQ/12XZz/orHAoB/3G8U="
+    "url": "https://github.com/9001/copyparty/releases/download/v1.15.1/copyparty-sfx.py",
+    "version": "1.15.1",
+    "hash": "sha256-i4S/TmuAphv/wbndfoSUYztNqO+o+qh/v8GcslxWWUk="
 }
@@ -19,6 +19,7 @@ if True:
     from typing import Any, Callable

 PY2 = sys.version_info < (3,)
+PY36 = sys.version_info > (3, 6)
 if not PY2:
     unicode: Callable[[Any], str] = str
 else:
@@ -27,6 +27,7 @@ from .__init__ import (
     EXE,
     MACOS,
     PY2,
+    PY36,
     VT100,
     WINDOWS,
     E,

@@ -54,6 +55,7 @@ from .util import (
     Daemon,
     align_tab,
     ansi_re,
+    b64enc,
     dedent,
     min_ex,
     pybin,
@@ -204,7 +206,7 @@ def init_E(EE: EnvParams) -> None:
                errs.append("Using [%s] instead" % (p,))

            if errs:
-                print("WARNING: " + ". ".join(errs))
+                warn(". ".join(errs))

            return p  # type: ignore
        except Exception as ex:
@@ -234,7 +236,7 @@ def init_E(EE: EnvParams) -> None:
            raise


-def get_srvname() -> str:
+def get_srvname(verbose) -> str:
    try:
        ret: str = unicode(socket.gethostname()).split(".")[0]
    except:

@@ -244,7 +246,8 @@ def get_srvname() -> str:
        return ret

    fp = os.path.join(E.cfg, "name.txt")
-    lprint("using hostname from {}\n".format(fp))
+    if verbose:
+        lprint("using hostname from {}\n".format(fp))
    try:
        with open(fp, "rb") as f:
            ret = f.read().decode("utf-8", "replace").strip()
@@ -266,7 +269,7 @@ def get_fk_salt() -> str:
        with open(fp, "rb") as f:
            ret = f.read().strip()
    except:
-        ret = base64.b64encode(os.urandom(18))
+        ret = b64enc(os.urandom(18))
        with open(fp, "wb") as f:
            f.write(ret + b"\n")

@@ -279,7 +282,7 @@ def get_dk_salt() -> str:
        with open(fp, "rb") as f:
            ret = f.read().strip()
    except:
-        ret = base64.b64encode(os.urandom(30))
+        ret = b64enc(os.urandom(30))
        with open(fp, "wb") as f:
            f.write(ret + b"\n")

@@ -292,7 +295,7 @@ def get_ah_salt() -> str:
        with open(fp, "rb") as f:
            ret = f.read().strip()
    except:
-        ret = base64.b64encode(os.urandom(18))
+        ret = b64enc(os.urandom(18))
        with open(fp, "wb") as f:
            f.write(ret + b"\n")
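b64enc (and the ub64enc seen in later hunks) come from copyparty's util module; judging by the base64 calls they replace, they are presumably thin wrappers along these lines (an assumption, not the actual util code):

```python
import base64

def b64enc(zb: bytes) -> bytes:
    # plain base64; used for the salt files above
    return base64.b64encode(zb)

def ub64enc(zb: bytes) -> bytes:
    # urlsafe base64 ("-_" instead of "+/"); used for ids that end up in URLs
    return base64.urlsafe_b64encode(zb)
```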
@@ -350,7 +353,7 @@ def configure_ssl_ver(al: argparse.Namespace) -> None:
    # oh man i love openssl
    # check this out
    # hold my beer
-    assert ssl  # type: ignore
+    assert ssl  # type: ignore # !rm
    ptn = re.compile(r"^OP_NO_(TLS|SSL)v")
    sslver = terse_sslver(al.ssl_ver).split(",")
    flags = [k for k in ssl.__dict__ if ptn.match(k)]

@@ -384,7 +387,7 @@ def configure_ssl_ver(al: argparse.Namespace) -> None:


def configure_ssl_ciphers(al: argparse.Namespace) -> None:
-    assert ssl  # type: ignore
+    assert ssl  # type: ignore # !rm
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    if al.ssl_ver:
        ctx.options &= ~al.ssl_flags_en
@@ -1230,6 +1233,7 @@ def add_optouts(ap):
    ap2.add_argument("--no-zip", action="store_true", help="disable download as zip/tar")
    ap2.add_argument("--no-tarcmp", action="store_true", help="disable download as compressed tar (?tar=gz, ?tar=bz2, ?tar=xz, ?tar=gz:9, ...)")
    ap2.add_argument("--no-lifetime", action="store_true", help="do not allow clients (or server config) to schedule an upload to be deleted after a given time")
+    ap2.add_argument("--no-up-list", action="store_true", help="don't show list of incoming files in controlpanel")
    ap2.add_argument("--no-pipe", action="store_true", help="disable race-the-beam (lockstep download of files which are currently being uploaded) (volflag=nopipe)")
    ap2.add_argument("--no-db-ip", action="store_true", help="do not write uploader IPs into the database")
@@ -1358,6 +1362,8 @@ def add_db_general(ap, hcores):
    ap2.add_argument("--hist", metavar="PATH", type=u, default="", help="where to store volume data (db, thumbs); default is a folder named \".hist\" inside each volume (volflag=hist)")
    ap2.add_argument("--no-hash", metavar="PTN", type=u, default="", help="regex: disable hashing of matching absolute-filesystem-paths during e2ds folder scans (volflag=nohash)")
    ap2.add_argument("--no-idx", metavar="PTN", type=u, default=noidx, help="regex: disable indexing of matching absolute-filesystem-paths during e2ds folder scans (volflag=noidx)")
+    ap2.add_argument("--no-dirsz", action="store_true", help="do not show total recursive size of folders in listings, show inode size instead; slightly faster (volflag=nodirsz)")
+    ap2.add_argument("--re-dirsz", action="store_true", help="if the directory-sizes in the UI are bonkers, use this along with \033[33m-e2dsa\033[0m to rebuild the index from scratch")
    ap2.add_argument("--no-dhash", action="store_true", help="disable rescan acceleration; do full database integrity check -- makes the db ~5%% smaller and bootup/rescans 3~10x slower")
    ap2.add_argument("--re-dhash", action="store_true", help="force a cache rebuild on startup; enable this once if it gets out of sync (should never be necessary)")
    ap2.add_argument("--no-forget", action="store_true", help="never forget indexed files, even when deleted from disk -- makes it impossible to ever upload the same file twice -- only useful for offloading uploads to a cloud service or something (volflag=noforget)")
@@ -1471,7 +1477,7 @@ def add_debug(ap):


 def run_argparse(
-    argv: list[str], formatter: Any, retry: bool, nc: int
+    argv: list[str], formatter: Any, retry: bool, nc: int, verbose=True
 ) -> argparse.Namespace:
    ap = argparse.ArgumentParser(
        formatter_class=formatter,

@@ -1493,7 +1499,7 @@ def run_argparse(

    tty = os.environ.get("TERM", "").lower() == "linux"

-    srvname = get_srvname()
+    srvname = get_srvname(verbose)

    add_general(ap, nc, srvname)
    add_network(ap)
@@ -1673,7 +1679,7 @@ def main(argv: Optional[list[str]] = None, rsrc: Optional[str] = None) -> None:
    for fmtr in [RiceFormatter, RiceFormatter, Dodge11874, BasicDodge11874]:
        try:
            al = run_argparse(argv, fmtr, retry, nc)
-            dal = run_argparse([], fmtr, retry, nc)
+            dal = run_argparse([], fmtr, retry, nc, False)
            break
        except SystemExit:
            raise

@@ -1757,7 +1763,7 @@ def main(argv: Optional[list[str]] = None, rsrc: Optional[str] = None) -> None:
        print("error: python2 cannot --smb")
        return

-    if sys.version_info < (3, 6):
+    if not PY36:
        al.no_scandir = True

    if not hasattr(os, "sendfile"):
@@ -1,8 +1,8 @@
 # coding: utf-8

-VERSION = (1, 15, 1)
+VERSION = (1, 15, 2)
 CODENAME = "fill the drives"
-BUILD_DT = (2024, 9, 9)
+BUILD_DT = (2024, 9, 16)

 S_VERSION = ".".join(map(str, VERSION))
 S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)
@@ -855,6 +855,7 @@ class AuthSrv(object):
        self.idp_accs: dict[str, list[str]] = {}  # username->groupnames
        self.idp_usr_gh: dict[str, str] = {}  # username->group-header-value (cache)

+        self.hid_cache: dict[str, str] = {}
        self.mutex = threading.Lock()
        self.reload()

@@ -1531,7 +1532,7 @@ class AuthSrv(object):
        if enshare:
            import sqlite3

-            shv = VFS(self.log_func, "", shr, AXS(), {"d2d": True})
+            shv = VFS(self.log_func, "", shr, AXS(), {})

            db_path = self.args.shr_db
            db = sqlite3.connect(db_path)

@@ -1550,8 +1551,8 @@ class AuthSrv(object):
                if s_pw:
                    # gotta reuse the "account" for all shares with this pw,
                    # so do a light scramble as this appears in the web-ui
-                    zs = ub64enc(hashlib.sha512(s_pw.encode("utf-8")).digest())[4:16]
-                    sun = "s_%s" % (zs.decode("utf-8"),)
+                    zb = hashlib.sha512(s_pw.encode("utf-8")).digest()
+                    sun = "s_%s" % (ub64enc(zb)[4:16].decode("ascii"),)
                    acct[sun] = s_pw
                else:
                    sun = "*"

@@ -1656,8 +1657,12 @@ class AuthSrv(object):
        promote = []
        demote = []
        for vol in vfs.all_vols.values():
-            zb = hashlib.sha512(afsenc(vol.realpath)).digest()
-            hid = base64.b32encode(zb).decode("ascii").lower()
+            hid = self.hid_cache.get(vol.realpath)
+            if not hid:
+                zb = hashlib.sha512(afsenc(vol.realpath)).digest()
+                hid = base64.b32encode(zb).decode("ascii").lower()
+                self.hid_cache[vol.realpath] = hid

            vflag = vol.flags.get("hist")
            if vflag == "-":
                pass

@@ -2286,7 +2291,7 @@ class AuthSrv(object):
        q = "insert into us values (?,?,?)"
        for uname in self.acct:
            if uname not in ases:
-                sid = ub64enc(os.urandom(blen)).decode("utf-8")
+                sid = ub64enc(os.urandom(blen)).decode("ascii")
                cur.execute(q, (uname, sid, int(time.time())))
                ases[uname] = sid
                n.append(uname)
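The hid_cache added above memoizes the volume-id that was previously recomputed (sha512 plus base32 of the volume's realpath) on every config reload. The same pattern as a self-contained sketch (afsenc is approximated with utf-8 encoding here):

```python
import base64
import hashlib

_hid_cache: dict[str, str] = {}

def vol_hid(realpath: str) -> str:
    # stable id for a volume's history-folder; cached because the
    # hash never changes at runtime and reloads can be frequent
    hid = _hid_cache.get(realpath)
    if not hid:
        zb = hashlib.sha512(realpath.encode("utf-8")).digest()
        hid = base64.b32encode(zb).decode("ascii").lower()
        _hid_cache[realpath] = hid
    return hid
```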
@@ -9,14 +9,14 @@ import queue

 from .__init__ import CORES, TYPE_CHECKING
 from .broker_mpw import MpWorker
-from .broker_util import ExceptionalQueue, try_exec
+from .broker_util import ExceptionalQueue, NotExQueue, try_exec
 from .util import Daemon, mp

 if TYPE_CHECKING:
     from .svchub import SvcHub

 if True:  # pylint: disable=using-constant-test
-    from typing import Any
+    from typing import Any, Union


 class MProcess(mp.Process):

@@ -108,7 +108,7 @@ class BrokerMp(object):
            if retq_id:
                proc.q_pend.put((retq_id, "retq", rv))

-    def ask(self, dest: str, *args: Any) -> ExceptionalQueue:
+    def ask(self, dest: str, *args: Any) -> Union[ExceptionalQueue, NotExQueue]:

        # new non-ipc invoking managed service in hub
        obj = self.hub
@@ -11,7 +11,7 @@ import queue

 from .__init__ import ANYWIN
 from .authsrv import AuthSrv
-from .broker_util import BrokerCli, ExceptionalQueue
+from .broker_util import BrokerCli, ExceptionalQueue, NotExQueue
 from .httpsrv import HttpSrv
 from .util import FAKE_MP, Daemon, HMaccas

@@ -114,7 +114,7 @@ class MpWorker(BrokerCli):
        else:
            raise Exception("what is " + str(dest))

-    def ask(self, dest: str, *args: Any) -> ExceptionalQueue:
+    def ask(self, dest: str, *args: Any) -> Union[ExceptionalQueue, NotExQueue]:
        retq = ExceptionalQueue(1)
        retq_id = id(retq)
        with self.retpend_mutex:
@@ -5,7 +5,7 @@ import os
 import threading

 from .__init__ import TYPE_CHECKING
-from .broker_util import BrokerCli, ExceptionalQueue, try_exec
+from .broker_util import BrokerCli, ExceptionalQueue, NotExQueue
 from .httpsrv import HttpSrv
 from .util import HMaccas

@@ -13,7 +13,7 @@ if TYPE_CHECKING:
     from .svchub import SvcHub

 if True:  # pylint: disable=using-constant-test
-    from typing import Any
+    from typing import Any, Union


 class BrokerThr(BrokerCli):

@@ -43,19 +43,14 @@ class BrokerThr(BrokerCli):
    def noop(self) -> None:
        pass

-    def ask(self, dest: str, *args: Any) -> ExceptionalQueue:
+    def ask(self, dest: str, *args: Any) -> Union[ExceptionalQueue, NotExQueue]:

        # new ipc invoking managed service in hub
        obj = self.hub
        for node in dest.split("."):
            obj = getattr(obj, node)

-        rv = try_exec(True, obj, *args)
-
-        # pretend we're broker_mp
-        retq = ExceptionalQueue(1)
-        retq.put(rv)
-        return retq
+        return NotExQueue(obj(*args))  # type: ignore

    def say(self, dest: str, *args: Any) -> None:
        if dest == "listen":

@@ -71,4 +66,4 @@ class BrokerThr(BrokerCli):
        for node in dest.split("."):
            obj = getattr(obj, node)

-        try_exec(False, obj, *args)
+        obj(*args)  # type: ignore
@@ -33,6 +33,18 @@ class ExceptionalQueue(Queue, object):
        return rv


+class NotExQueue(object):
+    """
+    BrokerThr uses this instead of ExceptionalQueue; 7x faster
+    """
+
+    def __init__(self, rv: Any) -> None:
+        self.rv = rv
+
+    def get(self) -> Any:
+        return self.rv
+
+
 class BrokerCli(object):
    """
    helps mypy understand httpsrv.broker but still fails a few levels deeper,

@@ -48,7 +60,7 @@ class BrokerCli(object):
    def __init__(self) -> None:
        pass

-    def ask(self, dest: str, *args: Any) -> ExceptionalQueue:
+    def ask(self, dest: str, *args: Any) -> Union[ExceptionalQueue, NotExQueue]:
        return ExceptionalQueue(1)

    def say(self, dest: str, *args: Any) -> None:
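NotExQueue is the heart of this changeset: when the broker runs in-process (BrokerThr), the result is already computed by the time ask() returns, so a real queue.Queue round-trip (lock plus condition-variable) is pure overhead. A standalone benchmark sketch of the two approaches (the timing harness is hypothetical; the "7x" figure is the docstring's claim, actual numbers vary):

```python
import queue
import time

class NotExQueue:
    # duck-types Queue.get(); the answer is already known
    def __init__(self, rv):
        self.rv = rv

    def get(self):
        return self.rv

def ask_via_queue(fun, *args):
    # old approach: wrap the result in a real Queue
    q = queue.Queue(1)
    q.put(fun(*args))
    return q

def ask_direct(fun, *args):
    # new approach: call directly, return the value in a shim
    return NotExQueue(fun(*args))

for ask in (ask_via_queue, ask_direct):
    t0 = time.time()
    for _ in range(100000):
        ask(int, "42").get()
    print("%s: %.3fs" % (ask.__name__, time.time() - t0))
```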
@@ -13,6 +13,7 @@ def vf_bmap() -> dict[str, str]:
        "dav_rt": "davrt",
        "ed": "dots",
        "hardlink_only": "hardlinkonly",
+        "no_dirsz": "nodirsz",
        "no_dupe": "nodupe",
        "no_forget": "noforget",
        "no_pipe": "nopipe",
@@ -119,7 +119,7 @@ class Fstab(object):
        self.srctab = srctab

    def relabel(self, path: str, nval: str) -> None:
-        assert self.tab
+        assert self.tab  # !rm
        self.cache = {}
        if ANYWIN:
            path = self._winpath(path)

@@ -156,7 +156,7 @@ class Fstab(object):
            self.log("failed to build tab:\n{}".format(min_ex()), 3)
            self.build_fallback()

-        assert self.tab
+        assert self.tab  # !rm
        ret = self.tab._find(path)[0]
        if self.trusted or path == ret.vpath:
            return ret.realpath.split("/")[0]

@@ -167,6 +167,6 @@ class Fstab(object):
        if not self.tab:
            self.build_fallback()

-        assert self.tab
+        assert self.tab  # !rm
        ret = self.tab._find(path)[0]
        return ret.realpath
@@ -2,7 +2,6 @@
 from __future__ import print_function, unicode_literals

 import argparse  # typechk
-import base64
 import calendar
 import copy
 import errno

@@ -58,6 +57,7 @@ from .util import (
     absreal,
     alltrace,
     atomic_move,
+    b64dec,
     exclude_dotfiles,
     formatdate,
     fsenc,

@@ -87,6 +87,7 @@ from .util import (
     relchk,
     ren_open,
     runhook,
+    s2hms,
     s3enc,
     sanitize_fn,
     sanitize_vpath,

@@ -127,7 +128,7 @@ class HttpCli(object):
    """

    def __init__(self, conn: "HttpConn") -> None:
-        assert conn.sr
+        assert conn.sr  # !rm

        self.t0 = time.time()
        self.conn = conn
@@ -502,7 +503,7 @@ class HttpCli(object):
        ):
            try:
                zb = zso.split(" ")[1].encode("ascii")
-                zs = base64.b64decode(zb).decode("utf-8")
+                zs = b64dec(zb).decode("utf-8")
                # try "pwd", "x:pwd", "pwd:x"
                for bauth in [zs] + zs.split(":", 1)[::-1]:
                    if bauth in self.asrv.sesa:

@@ -1395,7 +1396,7 @@ class HttpCli(object):
        xroot = mkenod("D:orz")
        xroot.insert(0, parse_xml(txt))
        xprop = xroot.find(r"./{DAV:}propertyupdate/{DAV:}set/{DAV:}prop")
-        assert xprop
+        assert xprop  # !rm
        for ze in xprop:
            ze.clear()

@@ -1403,12 +1404,12 @@ class HttpCli(object):
        xroot = parse_xml(txt)

        el = xroot.find(r"./{DAV:}response")
-        assert el
+        assert el  # !rm
        e2 = mktnod("D:href", quotep(self.args.SRS + self.vpath))
        el.insert(0, e2)

        el = xroot.find(r"./{DAV:}response/{DAV:}propstat")
-        assert el
+        assert el  # !rm
        el.insert(0, xprop)

        ret = '<?xml version="1.0" encoding="{}"?>\n'.format(uenc)
@@ -1792,7 +1793,7 @@ class HttpCli(object):
            fn = os.devnull

        params.update(open_ka)
-        assert fn
+        assert fn  # !rm

        if not self.args.nw:
            if rnd:

@@ -1864,10 +1865,12 @@ class HttpCli(object):
                # small toctou, but better than clobbering a hardlink
                wunlink(self.log, path, vfs.flags)

-            with ren_open(fn, *open_a, **params) as zfw:
-                f, fn = zfw["orz"]
+            f, fn = ren_open(fn, *open_a, **params)
+            try:
                path = os.path.join(fdir, fn)
                post_sz, sha_hex, sha_b64 = hashcopy(reader, f, self.args.s_wr_slp)
+            finally:
+                f.close()

            if lim:
                lim.nup(self.ip)

@@ -1906,8 +1909,8 @@ class HttpCli(object):
                fn2 = fn.rsplit(".", 1)[0] + "." + ext

                params["suffix"] = suffix[:-4]
-                with ren_open(fn, *open_a, **params) as zfw:
-                    f, fn = zfw["orz"]
+                f, fn2 = ren_open(fn2, *open_a, **params)
+                f.close()

                path2 = os.path.join(fdir, fn2)
                atomic_move(self.log, path, path2, vfs.flags)
@@ -2101,7 +2104,7 @@ class HttpCli(object):
            raise Pebkac(422, 'invalid action "{}"'.format(act))

    def handle_zip_post(self) -> bool:
-        assert self.parser
+        assert self.parser  # !rm
        try:
            k = next(x for x in self.uparam if x in ("zip", "tar"))
        except:

@@ -2301,11 +2304,16 @@ class HttpCli(object):
        vfs, _ = self.asrv.vfs.get(self.vpath, self.uname, False, True)
        ptop = (vfs.dbv or vfs).realpath

-        x = self.conn.hsrv.broker.ask("up2k.handle_chunks", ptop, wark, chashes)
+        broker = self.conn.hsrv.broker
+        x = broker.ask("up2k.handle_chunks", ptop, wark, chashes)
        response = x.get()
        chashes, chunksize, cstarts, path, lastmod, sprs = response
        maxsize = chunksize * len(chashes)
        cstart0 = cstarts[0]
+        locked = chashes  # remaining chunks to be received in this request
+        written = []  # chunks written to disk, but not yet released by up2k
+        num_left = -1  # num chunks left according to most recent up2k release
+        treport = time.time()  # ratelimit up2k reporting to reduce overhead

        try:
            if self.args.nw:
@@ -2351,11 +2359,8 @@ class HttpCli(object):
                remains -= chunksize

                if len(cstart) > 1 and path != os.devnull:
-                    self.log(
-                        "clone {} to {}".format(
-                            cstart[0], " & ".join(unicode(x) for x in cstart[1:])
-                        )
-                    )
+                    t = " & ".join(unicode(x) for x in cstart[1:])
+                    self.log("clone %s to %s" % (cstart[0], t))
                    ofs = 0
                    while ofs < chunksize:
                        bufsz = max(4 * 1024 * 1024, self.args.iobuf)
@@ -2370,6 +2375,25 @@ class HttpCli(object):

                    self.log("clone {} done".format(cstart[0]))

+                # be quick to keep the tcp winsize scale;
+                # if we can't confirm rn then that's fine
+                written.append(chash)
+                now = time.time()
+                if now - treport < 1:
+                    continue
+                treport = now
+                x = broker.ask("up2k.fast_confirm_chunks", ptop, wark, written)
+                num_left, t = x.get()
+                if num_left < -1:
+                    self.loud_reply(t, status=500)
+                    locked = written = []
+                    return False
+                elif num_left >= 0:
+                    t = "got %d more chunks, %d left"
+                    self.log(t % (len(written), num_left), 6)
+                    locked = locked[len(written) :]
+                    written = []
+
            if not fpool:
                f.close()
            else:
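The added block batches chunk confirmations: completed chunks are reported to up2k at most once per second, so the confirm round-trip never stalls the receive loop (and thus the TCP window). A schematic of that pattern, detached from the actual broker API:

```python
import time

def stream_chunks(chunks, write, confirm):
    # confirm(batch) -> chunks still missing server-side (hypothetical)
    written = []
    treport = time.time()
    for c in chunks:
        write(c)
        written.append(c)
        now = time.time()
        if now - treport < 1:
            continue  # confirmed recently; keep streaming
        treport = now
        print("confirmed %d, %d left" % (len(written), confirm(written)))
        written = []
    if written:
        # final blocking confirm for the tail batch
        print("confirmed %d, %d left" % (len(written), confirm(written)))

state = {"left": 5}
def fake_confirm(batch):
    state["left"] -= len(batch)
    return state["left"]

stream_chunks(range(5), lambda c: None, fake_confirm)
```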
@@ -2380,25 +2404,25 @@ class HttpCli(object):
                f.close()
                raise
        finally:
-            x = self.conn.hsrv.broker.ask("up2k.release_chunks", ptop, wark, chashes)
-            x.get()  # block client until released
+            if locked:
+                # now block until all chunks released+confirmed
+                x = broker.ask("up2k.confirm_chunks", ptop, wark, locked)
+                num_left, t = x.get()
+                if num_left < 0:
+                    self.loud_reply(t, status=500)
+                    return False
+                t = "got %d more chunks, %d left"
+                self.log(t % (len(locked), num_left), 6)

-        x = self.conn.hsrv.broker.ask("up2k.confirm_chunks", ptop, wark, chashes)
-        ztis = x.get()
-        try:
-            num_left, fin_path = ztis
-        except:
-            self.loud_reply(ztis, status=500)
-            return False
+        if num_left < 0:
+            raise Pebkac(500, "unconfirmed; see serverlog")

        if not num_left and fpool:
            with self.u2mutex:
                self.u2fh.close(path)

        if not num_left and not self.args.nw:
-            self.conn.hsrv.broker.ask(
-                "up2k.finish_upload", ptop, wark, self.u2fh.aps
-            ).get()
+            broker.ask("up2k.finish_upload", ptop, wark, self.u2fh.aps).get()

        cinf = self.headers.get("x-up2k-stat", "")
@@ -2408,7 +2432,7 @@ class HttpCli(object):
        return True

    def handle_chpw(self) -> bool:
-        assert self.parser
+        assert self.parser  # !rm
        pwd = self.parser.require("pw", 64)
        self.parser.drop()

@@ -2425,7 +2449,7 @@ class HttpCli(object):
        return True

    def handle_login(self) -> bool:
-        assert self.parser
+        assert self.parser  # !rm
        pwd = self.parser.require("cppwd", 64)
        try:
            uhash = self.parser.require("uhash", 256)

@@ -2453,7 +2477,7 @@ class HttpCli(object):
        return True

    def handle_logout(self) -> bool:
-        assert self.parser
+        assert self.parser  # !rm
        self.parser.drop()

        self.log("logout " + self.uname)

@@ -2482,7 +2506,7 @@ class HttpCli(object):
            logpwd = ""
        elif self.args.log_badpwd == 2:
            zb = hashlib.sha512(pwd.encode("utf-8", "replace")).digest()
-            logpwd = "%" + base64.b64encode(zb[:12]).decode("utf-8")
+            logpwd = "%" + ub64enc(zb[:12]).decode("ascii")

        if pwd != "x":
            self.log("invalid password: {}".format(logpwd), 3)

@@ -2507,7 +2531,7 @@ class HttpCli(object):
        return dur > 0, msg

    def handle_mkdir(self) -> bool:
-        assert self.parser
+        assert self.parser  # !rm
        new_dir = self.parser.require("name", 512)
        self.parser.drop()

@@ -2553,7 +2577,7 @@ class HttpCli(object):
        return True

    def handle_new_md(self) -> bool:
-        assert self.parser
+        assert self.parser  # !rm
        new_file = self.parser.require("name", 512)
        self.parser.drop()
@@ -2719,8 +2743,8 @@ class HttpCli(object):
            bos.makedirs(fdir)

            # reserve destination filename
-            with ren_open(fname, "wb", fdir=fdir, suffix=suffix) as zfw:
-                fname = zfw["orz"][1]
+            f, fname = ren_open(fname, "wb", fdir=fdir, suffix=suffix)
+            f.close()

            tnam = fname + ".PARTIAL"
            if self.args.dotpart:

@@ -2743,8 +2767,8 @@ class HttpCli(object):
                v2 = lim.dfv - lim.dfl
                max_sz = min(v1, v2) if v1 and v2 else v1 or v2

-            with ren_open(tnam, "wb", self.args.iobuf, **open_args) as zfw:
-                f, tnam = zfw["orz"]
+            f, tnam = ren_open(tnam, "wb", self.args.iobuf, **open_args)
+            try:
                tabspath = os.path.join(fdir, tnam)
                self.log("writing to {}".format(tabspath))
                sz, sha_hex, sha_b64 = hashcopy(

@@ -2752,6 +2776,8 @@ class HttpCli(object):
                )
                if sz == 0:
                    raise Pebkac(400, "empty files in post")
+            finally:
+                f.close()

            if lim:
                lim.nup(self.ip)
@@ -2961,7 +2987,7 @@ class HttpCli(object):
        return True

    def handle_text_upload(self) -> bool:
-        assert self.parser
+        assert self.parser  # !rm
        try:
            cli_lastmod3 = int(self.parser.require("lastmod", 16))
        except:

@@ -3046,7 +3072,7 @@ class HttpCli(object):
            pass
        wrename(self.log, fp, os.path.join(mdir, ".hist", mfile2), vfs.flags)

-        assert self.parser.gen
+        assert self.parser.gen  # !rm
        p_field, _, p_data = next(self.parser.gen)
        if p_field != "body":
            raise Pebkac(400, "expected body, got {}".format(p_field))

@@ -3147,7 +3173,7 @@ class HttpCli(object):
            # some browser append "; length=573"
            cli_lastmod = cli_lastmod.split(";")[0].strip()
            cli_dt = parsedate(cli_lastmod)
-            assert cli_dt
+            assert cli_dt  # !rm
            cli_ts = calendar.timegm(cli_dt)
            return file_lastmod, int(file_ts) > int(cli_ts)
        except Exception as ex:
@@ -3915,6 +3941,9 @@ class HttpCli(object):
        vp = re.sub(r"[<>&$?`\"']", "_", self.uparam["hc"] or "").lstrip("/")
+        pw = pw.replace(" ", "%20")
+        vp = vp.replace(" ", "%20")
        if pw in self.asrv.sesa:
            pw = "pwd"

        html = self.j2s(
            "svcs",
            args=self.args,
@@ -3939,11 +3968,30 @@ class HttpCli(object):
            for y in [self.rvol, self.wvol, self.avol]
        ]

-        if self.avol and not self.args.no_rescan:
-            x = self.conn.hsrv.broker.ask("up2k.get_state")
+        ups = []
+        now = time.time()
+        get_vst = self.avol and not self.args.no_rescan
+        get_ups = self.rvol and not self.args.no_up_list and self.uname or ""
+        if get_vst or get_ups:
+            x = self.conn.hsrv.broker.ask("up2k.get_state", get_vst, get_ups)
            vs = json.loads(x.get())
            vstate = {("/" + k).rstrip("/") + "/": v for k, v in vs["volstate"].items()}
-        else:
+            try:
+                for rem, sz, t0, poke, vp in vs["ups"]:
+                    fdone = max(0.001, 1 - rem)
+                    td = max(0.1, now - t0)
+                    rd, fn = vsplit(vp.replace(os.sep, "/"))
+                    if not rd:
+                        rd = "/"
+                    erd = quotep(rd)
+                    rds = rd.replace("/", " / ")
+                    spd = humansize(sz * fdone / td, True) + "/s"
+                    eta = s2hms((td / fdone) - td, True)
+                    idle = s2hms(now - poke, True)
+                    ups.append((int(100 * fdone), spd, eta, idle, erd, rds, fn))
+            except Exception as ex:
+                self.log("failed to list upload progress: %r" % (ex,), 1)
+        if not get_vst:
            vstate = {}
            vs = {
                "scanning": None,

@@ -3968,6 +4016,12 @@ class HttpCli(object):
            for k in ["scanning", "hashq", "tagq", "mtpq", "dbwt"]:
                txt += " {}({})".format(k, vs[k])

+            if ups:
+                txt += "\n\nincoming files:"
+                for zt in ups:
+                    txt += "\n%s" % (", ".join((str(x) for x in zt)),)
+                txt += "\n"
+
            if rvol:
                txt += "\nyou can browse:"
                for v in rvol:

@@ -3991,6 +4045,7 @@ class HttpCli(object):
            avol=avol,
            in_shr=self.args.shr and self.vpath.startswith(self.args.shr[1:]),
            vstate=vstate,
+            ups=ups,
            scanning=vs["scanning"],
            hashq=vs["hashq"],
            tagq=vs["tagq"],
@@ -5095,7 +5150,6 @@ class HttpCli(object):
                    dirs.append(item)
                else:
                    files.append(item)
                    item["rd"] = rem

        if is_dk and not vf.get("dks"):
            dirs = []

@@ -5118,16 +5172,10 @@ class HttpCli(object):
        add_up_at = ".up_at" in mte
        is_admin = self.can_admin
        tagset: set[str] = set()
-        for fe in files:
-            rd = vrem
+        for fe in files if icur else []:
+            assert icur  # !rm
            fn = fe["name"]
            rd = fe["rd"]
            del fe["rd"]
-            if not icur:
-                continue

            if vn != dbv:
                _, rd = vn.get_dbv(rd)

            erd_efn = (rd, fn)
            q = "select mt.k, mt.v from up inner join mt on mt.w = substr(up.w,1,16) where up.rd = ? and up.fn = ? and +mt.k != 'x'"
            try:

@@ -5169,13 +5217,25 @@ class HttpCli(object):
            fe["tags"] = tags

        if icur:
+            for fe in dirs:
+                fe["tags"] = ODict()
+
            lmte = list(mte)
            if self.can_admin:
                lmte.extend(("up_ip", ".up_at"))

+            if "nodirsz" not in vf:
+                tagset.add(".files")
+                vdir = "%s/" % (rd,) if rd else ""
+                q = "select sz, nf from ds where rd=? limit 1"
+                for fe in dirs:
+                    try:
+                        hit = icur.execute(q, (vdir + fe["name"],)).fetchone()
+                        (fe["sz"], fe["tags"][".files"]) = hit
+                    except:
+                        pass  # 404 or mojibake
+
            taglist = [k for k in lmte if k in tagset]
-            for fe in dirs:
-                fe["tags"] = ODict()
        else:
            taglist = list(tagset)
@@ -5319,7 +5379,7 @@ class HttpCli(object):
            fmt = vn.flags.get("og_th", "j")
            th_base = ujoin(url_base, quotep(thumb))
            query = "th=%s&cache" % (fmt,)
-            query = ub64enc(query.encode("utf-8")).decode("utf-8")
+            query = ub64enc(query.encode("utf-8")).decode("ascii")
            # discord looks at file extension, not content-type...
            query += "/th.jpg" if "j" in fmt else "/th.webp"
            j2a["og_thumb"] = "%s/.uqe/%s" % (th_base, query)

@@ -5328,7 +5388,7 @@ class HttpCli(object):
        j2a["og_file"] = file
        if og_fn:
            og_fn_q = quotep(og_fn)
-            query = ub64enc(b"raw").decode("utf-8")
+            query = ub64enc(b"raw").decode("ascii")
            query += "/%s" % (og_fn_q,)
            j2a["og_url"] = ujoin(url_base, og_fn_q)
            j2a["og_raw"] = j2a["og_url"] + "/.uqe/" + query
@@ -190,7 +190,7 @@ class HttpConn(object):

        if self.args.ssl_dbg and hasattr(self.s, "shared_ciphers"):
            ciphers = self.s.shared_ciphers()
-            assert ciphers
+            assert ciphers  # !rm
            overlap = [str(y[::-1]) for y in ciphers]
            self.log("TLS cipher overlap:" + "\n".join(overlap))
            for k, v in [
@@ -1,7 +1,6 @@
 # coding: utf-8
 from __future__ import print_function, unicode_literals

-import base64
 import math
 import os
 import re

@@ -75,6 +74,7 @@ from .util import (
     spack,
     start_log_thrs,
     start_stackmon,
+    ub64enc,
 )

 if TYPE_CHECKING:

@@ -237,7 +237,7 @@ class HttpSrv(object):
        if self.args.log_htp:
            self.log(self.name, "workers -= {} = {}".format(n, self.tp_nthr), 6)

-        assert self.tp_q
+        assert self.tp_q  # !rm
        for _ in range(n):
            self.tp_q.put(None)

@@ -431,7 +431,7 @@ class HttpSrv(object):
        )

    def thr_poolw(self) -> None:
-        assert self.tp_q
+        assert self.tp_q  # !rm
        while True:
            task = self.tp_q.get()
            if not task:

@@ -543,8 +543,8 @@ class HttpSrv(object):
        except:
            pass

-        v = base64.urlsafe_b64encode(spack(b">xxL", int(v)))
-        self.cb_v = v.decode("ascii")[-4:]
+        # spack gives 4 lsb, take 3 lsb, get 4 ch
+        self.cb_v = ub64enc(spack(b">L", int(v))[1:]).decode("ascii")
        self.cb_ts = time.time()
        return self.cb_v
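The cache-buster rewrite packs the value into 4 big-endian bytes, drops the high byte, and base64-encodes the remaining 3 bytes, which yields exactly 4 characters with no padding (spack is presumably a struct.pack wrapper; the stdlib is used here):

```python
import base64
import struct

v = 1726500000  # example value
zb = struct.pack(">L", v)[1:]  # 3 least-significant bytes
cb = base64.urlsafe_b64encode(zb).decode("ascii")
print(cb, len(cb))  # always 4 chars, no "=" padding
```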
@@ -2,7 +2,6 @@
 from __future__ import print_function, unicode_literals

 import argparse
-import base64
 import errno
 import gzip
 import logging

@@ -67,6 +66,7 @@ from .util import (
     pybin,
     start_log_thrs,
     start_stackmon,
+    ub64enc,
 )

 if TYPE_CHECKING:

@@ -419,8 +419,8 @@ class SvcHub(object):
            r"insert into kv values ('sver', 1)",
        ]

-        assert db  # type: ignore
-        assert cur  # type: ignore
+        assert db  # type: ignore # !rm
+        assert cur  # type: ignore # !rm
        if create:
            for cmd in sch:
                cur.execute(cmd)

@@ -488,8 +488,8 @@ class SvcHub(object):
            r"create index sh_t1 on sh(t1)",
        ]

-        assert db  # type: ignore
-        assert cur  # type: ignore
+        assert db  # type: ignore # !rm
+        assert cur  # type: ignore # !rm
        if create:
            dver = 2
            modified = True

@@ -1297,5 +1297,5 @@ class SvcHub(object):
        zs = "{}\n{}".format(VERSIONS, alltrace())
        zb = zs.encode("utf-8", "replace")
        zb = gzip.compress(zb)
-        zs = base64.b64encode(zb).decode("ascii")
+        zs = ub64enc(zb).decode("ascii")
        self.log("stacks", zs)
@@ -1,7 +1,6 @@
 # coding: utf-8
 from __future__ import print_function, unicode_literals

-import base64
 import hashlib
 import logging
 import os

@@ -27,6 +26,7 @@ from .util import (
     min_ex,
     runcmd,
     statdir,
+    ub64enc,
     vsplit,
     wrename,
     wunlink,

@@ -109,6 +109,9 @@ except:
     HAVE_VIPS = False


+th_dir_cache = {}
+
+
 def thumb_path(histpath: str, rem: str, mtime: float, fmt: str, ffa: set[str]) -> str:
    # base16 = 16 = 256
    # b64-lc = 38 = 1444

@@ -122,14 +125,20 @@ def thumb_path(histpath: str, rem: str, mtime: float, fmt: str, ffa: set[str]) -
    if ext in ffa and fmt[:2] in ("wf", "jf"):
        fmt = fmt.replace("f", "")

-    rd += "\n" + fmt
-    h = hashlib.sha512(afsenc(rd)).digest()
-    b64 = base64.urlsafe_b64encode(h).decode("ascii")[:24]
-    rd = ("%s/%s/" % (b64[:2], b64[2:4])).lower() + b64
+    dcache = th_dir_cache
+    rd_key = rd + "\n" + fmt
+    rd = dcache.get(rd_key)
+    if not rd:
+        h = hashlib.sha512(afsenc(rd_key)).digest()
+        b64 = ub64enc(h).decode("ascii")[:24]
+        rd = ("%s/%s/" % (b64[:2], b64[2:4])).lower() + b64
+        if len(dcache) > 9001:
+            dcache.clear()
+        dcache[rd_key] = rd

    # could keep original filenames but this is safer re pathlen
    h = hashlib.sha512(afsenc(fn)).digest()
-    fn = base64.urlsafe_b64encode(h).decode("ascii")[:24]
+    fn = ub64enc(h).decode("ascii")[:24]

    if fmt in ("opus", "caf", "mp3"):
        cat = "ac"

@@ -479,7 +488,7 @@ class ThumbSrv(object):
                if c == crops[-1]:
                    raise

-        assert img  # type: ignore
+        assert img  # type: ignore # !rm
        img.write_to_file(tpath, Q=40)

    def conv_ffmpeg(self, abspath: str, tpath: str, fmt: str, vn: VFS) -> None:
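The thumb_path change only adds the cache; the naming scheme itself is unchanged: hash the source directory plus format, then fan the thumbnail out into a two-level directory prefix. The scheme in isolation (afsenc approximated with utf-8, ub64enc with the stdlib urlsafe codec):

```python
import base64
import hashlib

def thumb_dir(rd: str, fmt: str) -> str:
    # two-level fan-out keeps any single thumbnail folder small
    h = hashlib.sha512((rd + "\n" + fmt).encode("utf-8")).digest()
    b64 = base64.urlsafe_b64encode(h).decode("ascii")[:24]
    return ("%s/%s/" % (b64[:2], b64[2:4])).lower() + b64

print(thumb_dir("music/album", "w"))
```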
@@ -104,7 +104,7 @@ class U2idx(object):
        if not HAVE_SQLITE3 or not self.args.shr:
            return None

-        assert sqlite3  # type: ignore
+        assert sqlite3  # type: ignore # !rm

        db = sqlite3.connect(self.args.shr_db, timeout=2, check_same_thread=False)
        cur = db.cursor()

@@ -120,7 +120,7 @@ class U2idx(object):
        if not HAVE_SQLITE3 or "e2d" not in vn.flags:
            return None

-        assert sqlite3  # type: ignore
+        assert sqlite3  # type: ignore # !rm

        ptop = vn.realpath
        histpath = self.asrv.vfs.histtab.get(ptop)

@@ -467,5 +467,5 @@ class U2idx(object):
            return

        if identifier == self.active_id:
-            assert self.active_cur
+            assert self.active_cur  # !rm
            self.active_cur.connection.interrupt()
@@ -1,7 +1,6 @@
 # coding: utf-8
 from __future__ import print_function, unicode_literals

-import base64
 import errno
 import gzip
 import hashlib

@@ -61,6 +60,7 @@ from .util import (
     sfsenc,
     spack,
     statdir,
+    ub64enc,
     unhumanize,
     vjoin,
     vsplit,
@@ -268,19 +268,29 @@ class Up2k(object):
        if not self.stop:
            self.log("uploads are now possible", 2)

-    def get_state(self) -> str:
+    def get_state(self, get_q: bool, uname: str) -> str:
        mtpq: Union[int, str] = 0
+        ups = []
+        up_en = not self.args.no_up_list
        q = "select count(w) from mt where k = 't:mtp'"
        got_lock = False if PY2 else self.mutex.acquire(timeout=0.5)
        if got_lock:
-            for cur in self.cur.values():
-                try:
-                    mtpq += cur.execute(q).fetchone()[0]
-                except:
-                    pass
-            self.mutex.release()
+            try:
+                for cur in self.cur.values() if get_q else []:
+                    try:
+                        mtpq += cur.execute(q).fetchone()[0]
+                    except:
+                        pass
+                if uname and up_en:
+                    ups = self._active_uploads(uname)
+            finally:
+                self.mutex.release()
        else:
            mtpq = "(?)"
+            if up_en:
+                ups = [(0, 0, time.time(), "cannot show list (server too busy)")]
+
+        ups.sort(reverse=True)

        ret = {
            "volstate": self.volstate,
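Two things changed in get_state: the lock-protected work now sits in try/finally so the mutex is released even if a cursor raises, and the existing acquire-timeout degrades the reply instead of blocking the control-panel. The pattern on its own (query is a stand-in for the real db work):

```python
import threading

mutex = threading.Lock()

def snapshot(query):
    got_lock = mutex.acquire(timeout=0.5)
    if not got_lock:
        return "(?)"  # server too busy; degrade instead of blocking
    try:
        return query()  # may raise; the finally still releases
    finally:
        mutex.release()

print(snapshot(lambda: 42))
```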
@@ -288,6 +298,7 @@ class Up2k(object):
            "hashq": self.n_hashq,
            "tagq": self.n_tagq,
            "mtpq": mtpq,
+            "ups": ups,
            "dbwu": "{:.2f}".format(self.db_act),
            "dbwt": "{:.2f}".format(
                min(1000 * 24 * 60 * 60 - 1, time.time() - self.db_act)

@@ -295,6 +306,32 @@ class Up2k(object):
        }
        return json.dumps(ret, separators=(",\n", ": "))

+    def _active_uploads(self, uname: str) -> list[tuple[float, int, int, str]]:
+        ret = []
+        for vtop in self.asrv.vfs.aread[uname]:
+            vfs = self.asrv.vfs.all_vols.get(vtop)
+            if not vfs:  # dbv only
+                continue
+            ptop = vfs.realpath
+            tab = self.registry.get(ptop)
+            if not tab:
+                continue
+            for job in tab.values():
+                ineed = len(job["need"])
+                ihash = len(job["hash"])
+                if ineed == ihash or not ineed:
+                    continue
+
+                zt = (
+                    ineed / ihash,
+                    job["size"],
+                    int(job["t0"]),
+                    int(job["poke"]),
+                    djoin(vtop, job["prel"], job["name"]),
+                )
+                ret.append(zt)
+        return ret
+
    def find_job_by_ap(self, ptop: str, ap: str) -> str:
        try:
            if ANYWIN:
@@ -575,7 +612,7 @@ class Up2k(object):
        return timeout

    def _check_shares(self) -> float:
-        assert sqlite3  # type: ignore
+        assert sqlite3  # type: ignore # !rm

        now = time.time()
        timeout = now + 9001

@@ -896,7 +933,7 @@ class Up2k(object):
        with self.mutex, self.reg_mutex:
            reg = self.register_vpath(vol.realpath, vol.flags)

-        assert reg
+        assert reg  # !rm
        cur, _ = reg
        with self.mutex:
            cur.connection.commit()

@@ -913,7 +950,7 @@ class Up2k(object):
            reg = self.register_vpath(vol.realpath, vol.flags)

            try:
-                assert reg
+                assert reg  # !rm
                cur, db_path = reg
                if bos.path.getsize(db_path + "-wal") < 1024 * 1024 * 5:
                    continue
@@ -1119,7 +1156,7 @@ class Up2k(object):
        zsl = [x[len(prefix) :] for x in zsl]
        zsl.sort()
        zb = hashlib.sha1("\n".join(zsl).encode("utf-8", "replace")).digest()
-        vcfg = base64.urlsafe_b64encode(zb[:18]).decode("ascii")
+        vcfg = ub64enc(zb[:18]).decode("ascii")

        c = cur.execute("select v from kv where k = 'volcfg'")
        try:

@@ -1148,7 +1185,7 @@ class Up2k(object):
        with self.reg_mutex:
            reg = self.register_vpath(top, vol.flags)

-        assert reg and self.pp
+        assert reg and self.pp  # !rm
        cur, db_path = reg

        db = Dbw(cur, 0, time.time())

@@ -1167,6 +1204,10 @@ class Up2k(object):
            # ~/.wine/dosdevices/z:/ and such
            excl.extend(("/dev", "/proc", "/run", "/sys"))

+        if self.args.re_dirsz:
+            db.c.execute("delete from ds")
+            db.n += 1
+
        rtop = absreal(top)
        n_add = n_rm = 0
        try:
@@ -1175,7 +1216,7 @@ class Up2k(object):
                self.log(t % (vol.vpath, rtop), 6)
                return True, False

-            n_add = self._build_dir(
+            n_add, _, _ = self._build_dir(
                db,
                top,
                set(excl),

@@ -1249,17 +1290,18 @@ class Up2k(object):
        cst: os.stat_result,
        dev: int,
        xvol: bool,
-    ) -> int:
+    ) -> tuple[int, int, int]:
        if xvol and not rcdir.startswith(top):
            self.log("skip xvol: [{}] -> [{}]".format(cdir, rcdir), 6)
-            return 0
+            return 0, 0, 0

        if rcdir in seen:
            t = "bailing from symlink loop,\n  prev: {}\n  curr: {}\n  from: {}"
            self.log(t.format(seen[-1], rcdir, cdir), 3)
-            return 0
+            return 0, 0, 0

-        ret = 0
+        # total-files-added, total-num-files, recursive-size
+        tfa = tnf = rsz = 0
        seen = seen + [rcdir]
        unreg: list[str] = []
        files: list[tuple[int, int, str]] = []

@@ -1269,22 +1311,25 @@ class Up2k(object):
        th_cvd = self.args.th_coversd
        th_cvds = self.args.th_coversd_set

-        assert self.pp and self.mem_cur
+        assert self.pp and self.mem_cur  # !rm
        self.pp.msg = "a%d %s" % (self.pp.n, cdir)

        rd = cdir[len(top) :].strip("/")
        if WINDOWS:
            rd = rd.replace("\\", "/").strip("/")

+        rds = rd + "/" if rd else ""
+        cdirs = cdir + os.sep
+
        g = statdir(self.log_func, not self.args.no_scandir, True, cdir)
        gl = sorted(g)
        partials = set([x[0] for x in gl if "PARTIAL" in x[0]])
        for iname, inf in gl:
            if self.stop:
-                return -1
+                return -1, 0, 0

-            rp = vjoin(rd, iname)
-            abspath = os.path.join(cdir, iname)
+            rp = rds + iname
+            abspath = cdirs + iname

            if rei and rei.search(abspath):
                unreg.append(rp)

@@ -1318,7 +1363,7 @@ class Up2k(object):
                    continue
                # self.log(" dir: {}".format(abspath))
                try:
-                    ret += self._build_dir(
+                    i1, i2, i3 = self._build_dir(
                        db,
                        top,
                        excl,

@@ -1333,6 +1378,9 @@ class Up2k(object):
                        dev,
                        xvol,
                    )
+                    tfa += i1
+                    tnf += i2
+                    rsz += i3
                except:
                    t = "failed to index subdir [{}]:\n{}"
                    self.log(t.format(abspath, min_ex()), c=1)

@@ -1351,6 +1399,7 @@ class Up2k(object):
                # placeholder for unfinished upload
                continue

+            rsz += sz
            files.append((sz, lmod, iname))
            liname = iname.lower()
            if (

@@ -1372,6 +1421,18 @@ class Up2k(object):
            ):
                cv = iname

+        if not self.args.no_dirsz:
+            tnf += len(files)
+            q = "select sz, nf from ds where rd=? limit 1"
+            try:
+                db_sz, db_nf = db.c.execute(q, (rd,)).fetchone() or (-1, -1)
+                if rsz != db_sz or tnf != db_nf:
+                    db.c.execute("delete from ds where rd=?", (rd,))
+                    db.c.execute("insert into ds values (?,?,?)", (rd, rsz, tnf))
+                    db.n += 1
+            except:
+                pass  # mojibake rd
+
        # folder of 1000 files = ~1 MiB RAM best-case (tiny filenames);
        # free up stuff we're done with before dhashing
        gl = []
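_build_dir now threads three counters through the recursion (files added to the db, total files, recursive size) so each folder's totals can be written into the new ds table. The accumulation, reduced to a toy walker without the database:

```python
import os

def dir_sizes(top, out):
    # returns (num_files, total_bytes) under top, filling out[path]
    nf = sz = 0
    for name in os.listdir(top):
        ap = os.path.join(top, name)
        if os.path.isdir(ap):
            i1, i2 = dir_sizes(ap, out)
            nf += i1
            sz += i2
        else:
            nf += 1
            sz += os.path.getsize(ap)
    out[top] = (nf, sz)
    return nf, sz

totals = {}
dir_sizes(".", totals)
print(totals["."])
```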
@@ -1385,7 +1446,7 @@ class Up2k(object):

        zh.update(cv.encode("utf-8", "replace"))
        zh.update(spack(b"<d", cst.st_mtime))
-        dhash = base64.urlsafe_b64encode(zh.digest()[:12]).decode("ascii")
+        dhash = ub64enc(zh.digest()[:12]).decode("ascii")
        sql = "select d from dh where d=? and +h=?"
        try:
            c = db.c.execute(sql, (rd, dhash))

@@ -1395,7 +1456,7 @@ class Up2k(object):
            c = db.c.execute(sql, (drd, dhash))

        if c.fetchone():
-            return ret
+            return tfa, tnf, rsz

        if cv and rd:
            # mojibake not supported (for performance / simplicity):

@@ -1412,10 +1473,10 @@ class Up2k(object):
        seen_files = set([x[2] for x in files])  # for dropcheck
        for sz, lmod, fn in files:
            if self.stop:
-                return -1
+                return -1, 0, 0

-            rp = vjoin(rd, fn)
-            abspath = os.path.join(cdir, fn)
+            rp = rds + fn
+            abspath = cdirs + fn
            nohash = reh.search(abspath) if reh else False

            sql = "select w, mt, sz, ip, at from up where rd = ? and fn = ?"

@@ -1445,7 +1506,7 @@ class Up2k(object):
                    )
                    self.log(t)
                    self.db_rm(db.c, rd, fn, 0)
-                    ret += 1
+                    tfa += 1
                    db.n += 1
                    in_db = []
                else:

@@ -1470,7 +1531,7 @@ class Up2k(object):
                continue

            if not hashes:
-                return -1
+                return -1, 0, 0

            wark = up2k_wark_from_hashlist(self.salt, sz, hashes)

@@ -1481,7 +1542,7 @@ class Up2k(object):
            # skip upload hooks by not providing vflags
            self.db_add(db.c, {}, rd, fn, lmod, sz, "", "", wark, "", "", ip, at)
            db.n += 1
-            ret += 1
+            tfa += 1
            td = time.time() - db.t
            if db.n >= 4096 or td >= 60:
                self.log("commit {} new files".format(db.n))
@@ -1494,33 +1555,38 @@ class Up2k(object):
            db.c.execute("insert into dh values (?,?)", (drd, dhash))  # type: ignore

        if self.stop:
-            return -1
+            return -1, 0, 0

        # drop shadowed folders
        for sh_rd in unreg:
            n = 0
-            q = "select count(w) from up where (rd = ? or rd like ?||'%') and at == 0"
+            q = "select count(w) from up where (rd=? or rd like ?||'/%') and +at == 0"
            for sh_erd in [sh_rd, "//" + w8b64enc(sh_rd)]:
                try:
-                    n = db.c.execute(q, (sh_erd, sh_erd + "/")).fetchone()[0]
+                    erd_erd = (sh_erd, sh_erd)
+                    n = db.c.execute(q, erd_erd).fetchone()[0]
                    break
                except:
                    pass

+            assert erd_erd  # type: ignore # !rm
+
            if n:
                t = "forgetting {} shadowed autoindexed files in [{}] > [{}]"
                self.log(t.format(n, top, sh_rd))
-                assert sh_erd  # type: ignore

-                q = "delete from dh where (d = ? or d like ?||'%')"
-                db.c.execute(q, (sh_erd, sh_erd + "/"))
+                q = "delete from dh where (d = ? or d like ?||'/%')"
+                db.c.execute(q, erd_erd)

-                q = "delete from up where (rd = ? or rd like ?||'%') and at == 0"
-                db.c.execute(q, (sh_erd, sh_erd + "/"))
-                ret += n
+                q = "delete from up where (rd=? or rd like ?||'/%') and +at == 0"
+                db.c.execute(q, erd_erd)
+                tfa += n

+            q = "delete from ds where (rd=? or rd like ?||'/%')"
+            db.c.execute(q, erd_erd)
+
        if n4g:
-            return ret
+            return tfa, tnf, rsz

        # drop missing files
        q = "select fn from up where rd = ?"
@@ -1538,7 +1604,7 @@ class Up2k(object):
        if n_rm:
            self.log("forgot {} deleted files".format(n_rm))

-        return ret
+        return tfa, tnf, rsz

    def _drop_lost(self, cur: "sqlite3.Cursor", top: str, excl: list[str]) -> int:
        rm = []

@@ -1613,7 +1679,7 @@ class Up2k(object):

        # then covers
        n_rm3 = 0
-        qu = "select 1 from up where rd=? and +fn=? limit 1"
+        qu = "select 1 from up where rd=? and fn=? limit 1"
        q = "delete from cv where rd=? and dn=? and +fn=?"
        for crd, cdn, fn in cur.execute("select * from cv"):
            urd = vjoin(crd, cdn)

@@ -1756,13 +1822,13 @@ class Up2k(object):
            return 0

        with self.mutex:
-            q = "update up set w=?, sz=?, mt=? where rd=? and fn=?"
            for rd, fn, w, sz, mt in rewark:
+                q = "update up set w = ?, sz = ?, mt = ? where rd = ? and fn = ? limit 1"
                cur.execute(q, (w, sz, int(mt), rd, fn))

-            for _, _, w in f404:
-                q = "delete from up where w = ? limit 1"
-                cur.execute(q, (w,))
+            if f404:
+                q = "delete from up where rd=? and fn=? and +w=?"
+                cur.executemany(q, f404)

            cur.connection.commit()
@@ -2229,7 +2295,7 @@ class Up2k(object):
|
||||
# mp.pool.ThreadPool and concurrent.futures.ThreadPoolExecutor
|
||||
# both do crazy runahead so lets reinvent another wheel
|
||||
nw = max(1, self.args.mtag_mt)
|
||||
assert self.mtag
|
||||
assert self.mtag # !rm
|
||||
if not self.mpool_used:
|
||||
self.mpool_used = True
|
||||
self.log("using {}x {}".format(nw, self.mtag.backend))
|
||||
@@ -2303,7 +2369,7 @@ class Up2k(object):
|
||||
at: float,
|
||||
) -> int:
|
||||
"""will mutex(main)"""
|
||||
assert self.mtag
|
||||
assert self.mtag # !rm
|
||||
|
||||
try:
|
||||
st = bos.stat(abspath)
|
||||
@@ -2335,7 +2401,7 @@ class Up2k(object):
|
||||
tags: dict[str, Union[str, float]],
|
||||
) -> int:
|
||||
"""mutex(main) me"""
|
||||
assert self.mtag
|
||||
assert self.mtag # !rm
|
||||
|
||||
if not bos.path.isfile(abspath):
|
||||
return 0
|
||||
@@ -2391,7 +2457,7 @@ class Up2k(object):
|
||||
def _log_sqlite_incompat(self, db_path, t0) -> None:
|
||||
txt = t0 or ""
|
||||
digest = hashlib.sha512(db_path.encode("utf-8", "replace")).digest()
|
||||
stackname = base64.urlsafe_b64encode(digest[:9]).decode("utf-8")
|
||||
stackname = ub64enc(digest[:9]).decode("ascii")
|
||||
stackpath = os.path.join(E.cfg, "stack-%s.txt" % (stackname,))
|
||||
|
||||
t = " the filesystem at %s may not support locking, or is otherwise incompatible with sqlite\n\n %s\n\n"
|
||||
@@ -2434,12 +2500,11 @@ class Up2k(object):
|
||||
self.log("WARN: failed to upgrade from v4", 3)
|
||||
|
||||
if ver == DB_VER:
|
||||
try:
|
||||
self._add_cv_tab(cur)
|
||||
self._add_xiu_tab(cur)
|
||||
self._add_dhash_tab(cur)
|
||||
except:
|
||||
pass
|
||||
self._add_dhash_tab(cur)
|
||||
self._add_xiu_tab(cur)
|
||||
self._add_cv_tab(cur)
|
||||
self._add_idx_up_vp(cur, db_path)
|
||||
self._add_ds_tab(cur)
|
||||
|
||||
try:
|
||||
nfiles = next(cur.execute("select count(w) from up"))[0]
|
||||
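The upgrade path also got simpler to reason about: instead of one try/except wrapped around three `_add_*_tab` calls (where the first failure silently skipped the rest), each helper is now called unconditionally and is individually idempotent -- it probes for its table or index and returns early when nothing needs doing. A generic reduction of that probe-then-create pattern (the real helpers are the methods further down in this diff):

```python
# hedged sketch of the idempotent-migration pattern used by the _add_*_tab
# helpers; safe to call on every startup, no shared try/except needed
import sqlite3

def add_ds_tab(cur: sqlite3.Cursor) -> None:
    try:
        cur.execute("select rd, sz from ds limit 1").fetchone()
        return  # table already there; this migration is done
    except sqlite3.OperationalError:
        pass  # probe failed, so create it

    for cmd in (
        "create table ds (rd text, sz int, nf int)",
        "create index ds_rd on ds(rd)",
    ):
        cur.execute(cmd)
    cur.connection.commit()

db = sqlite3.connect(":memory:")
add_ds_tab(db.cursor())
add_ds_tab(db.cursor())  # second call is a no-op
```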
@@ -2536,9 +2601,10 @@ class Up2k(object):

 for cmd in [
 r"create table up (w text, mt int, sz int, rd text, fn text, ip text, at int)",
-r"create index up_rd on up(rd)",
+r"create index up_vp on up(rd, fn)",
 r"create index up_fn on up(fn)",
 r"create index up_ip on up(ip)",
+r"create index up_at on up(at)",
 idx,
 r"create table mt (w text, k text, v int)",
 r"create index mt_w on mt(w)",

@@ -2552,6 +2618,7 @@ class Up2k(object):
 self._add_dhash_tab(cur)
 self._add_xiu_tab(cur)
 self._add_cv_tab(cur)
+self._add_ds_tab(cur)
 self.log("created DB at {}".format(db_path))
 return cur

@@ -2568,6 +2635,12 @@ class Up2k(object):

 def _add_dhash_tab(self, cur: "sqlite3.Cursor") -> None:
 # v5 -> v5a
 try:
 cur.execute("select d, h from dh limit 1").fetchone()
 return
 except:
 pass

 for cmd in [
 r"create table dh (d text, h text)",
 r"create index dh_d on dh(d)",

@@ -2621,6 +2694,40 @@ class Up2k(object):

 cur.connection.commit()

+def _add_idx_up_vp(self, cur: "sqlite3.Cursor", db_path: str) -> None:
+# v5c -> v5d
+try:
+cur.execute("drop index up_rd")
+except:
+return

+for cmd in [
+r"create index up_vp on up(rd, fn)",
+r"create index up_at on up(at)",
+]:
+self.log("upgrading db [%s]: %s" % (db_path, cmd[:18]))
+cur.execute(cmd)

+self.log("upgrading db [%s]: writing to disk..." % (db_path,))
+cur.connection.commit()
+cur.execute("vacuum")

+def _add_ds_tab(self, cur: "sqlite3.Cursor") -> None:
+# v5d -> v5e
+try:
+cur.execute("select rd, sz from ds limit 1").fetchone()
+return
+except:
+pass

+for cmd in [
+r"create table ds (rd text, sz int, nf int)",
+r"create index ds_rd on ds(rd)",
+]:
+cur.execute(cmd)

+cur.connection.commit()

 def wake_rescanner(self):
 with self.rescan_cond:
 self.rescan_cond.notify_all()
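The `_add_idx_up_vp` upgrade swaps the single-column `up_rd` index for a composite `up(rd, fn)`: the hottest query is the exact `(directory, filename)` existence check, which a composite index answers in one probe, while rd-only lookups can still use its leading column; `up_at` covers the timestamp scans separately. A hedged stdlib-sqlite sketch of the plan you would expect:

```python
# hedged sketch: the composite up_vp index serves the (rd, fn) point lookup
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table up (w text, mt int, sz int, rd text, fn text, at int)")
db.execute("create index up_vp on up(rd, fn)")

q = "select w from up where rd = ? and fn = ?"
for row in db.execute("explain query plan " + q, ("some/dir", "file.bin")):
    print(row)  # expect a SEARCH ... USING INDEX up_vp (rd=? AND fn=?)
```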
@@ -2805,7 +2912,7 @@ class Up2k(object):

 c2 = cur

-assert c2
+assert c2 # !rm
 c2.connection.commit()

 cur = jcur

@@ -2910,9 +3017,12 @@ class Up2k(object):
 job = deepcopy(job)
 job["wark"] = wark
+job["at"] = cj.get("at") or time.time()
-zs = "lmod ptop vtop prel name host user addr poke"
+zs = "vtop ptop prel name lmod host user addr poke"
 for k in zs.split():
 job[k] = cj.get(k) or ""
+for k in ("life", "replace"):
+if k in cj:
+job[k] = cj[k]

 pdir = djoin(cj["ptop"], cj["prel"])
 if rand:

@@ -3013,18 +3123,8 @@ class Up2k(object):
 "busy": {},
 }
 # client-provided, sanitized by _get_wark: name, size, lmod
-for k in [
-"host",
-"user",
-"addr",
-"vtop",
-"ptop",
-"prel",
-"name",
-"size",
-"lmod",
-"poke",
-]:
+zs = "vtop ptop prel name size lmod host user addr poke"
+for k in zs.split():
 job[k] = cj[k]

 for k in ["life", "replace"]:

@@ -3130,8 +3230,9 @@ class Up2k(object):
 dip = self.hub.iphash.s(ip)

 suffix = "-%.6f-%s" % (ts, dip)
-with ren_open(fname, "wb", fdir=fdir, suffix=suffix) as zfw:
-return zfw["orz"][1]
+f, ret = ren_open(fname, "wb", fdir=fdir, suffix=suffix)
+f.close()
+return ret

 def _symlink(
 self,

@@ -3278,7 +3379,7 @@ class Up2k(object):
 t = "that chunk is already being written to:\n {}\n {} {}/{}\n {}"
 raise Pebkac(400, t.format(wark, chash, idx, nh, job["name"]))

-assert chash # type: ignore
+assert chash # type: ignore # !rm
 chunksize = up2k_chunksize(job["size"])

 coffsets = []

@@ -3313,19 +3414,30 @@ class Up2k(object):

 return chashes, chunksize, coffsets, path, job["lmod"], job["sprs"]

 def release_chunks(self, ptop: str, wark: str, chashes: list[str]) -> bool:
 with self.reg_mutex:
 job = self.registry[ptop].get(wark)
 if job:
 for chash in chashes:
 job["busy"].pop(chash, None)

 return True

+def fast_confirm_chunks(
+self, ptop: str, wark: str, chashes: list[str]
+) -> tuple[int, str]:
+if not self.mutex.acquire(False):
+return -1, ""
+if not self.reg_mutex.acquire(False):
+self.mutex.release()
+return -1, ""
+try:
+return self._confirm_chunks(ptop, wark, chashes)
+finally:
+self.reg_mutex.release()
+self.mutex.release()
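`fast_confirm_chunks` is the optimistic twin of `confirm_chunks`: `Lock.acquire(False)` never blocks, so under contention the function backs out (releasing in reverse acquisition order to stay deadlock-safe) and returns `-1, ""`, leaving the caller to fall back on the blocking path. The same try-lock shape in isolation (names are illustrative, not the real class):

```python
# hedged sketch of the non-blocking dual-lock pattern
import threading

mutex = threading.Lock()
reg_mutex = threading.Lock()

def fast_path() -> str:
    if not mutex.acquire(False):
        return "busy"  # contended; caller retries via the blocking variant
    if not reg_mutex.acquire(False):
        mutex.release()  # back out the first lock before bailing
        return "busy"
    try:
        return "work done under both locks"
    finally:
        reg_mutex.release()
        mutex.release()

print(fast_path())
```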
 def confirm_chunks(
 self, ptop: str, wark: str, chashes: list[str]
 ) -> tuple[int, str]:
 with self.mutex, self.reg_mutex:
 return self._confirm_chunks(ptop, wark, chashes)

 def _confirm_chunks(
 self, ptop: str, wark: str, chashes: list[str]
 ) -> tuple[int, str]:
 if True:
 self.db_act = self.vol_act[ptop] = time.time()
 try:
 job = self.registry[ptop][wark]

@@ -3333,7 +3445,7 @@ class Up2k(object):
 src = djoin(pdir, job["tnam"])
 dst = djoin(pdir, job["name"])
 except Exception as ex:
-return "confirm_chunk, wark(%r)" % (ex,) # type: ignore
+return -2, "confirm_chunk, wark(%r)" % (ex,) # type: ignore

 for chash in chashes:
 job["busy"].pop(chash, None)

@@ -3342,7 +3454,7 @@ class Up2k(object):
 for chash in chashes:
 job["need"].remove(chash)
 except Exception as ex:
-return "confirm_chunk, chash(%s) %r" % (chash, ex) # type: ignore
+return -2, "confirm_chunk, chash(%s) %r" % (chash, ex) # type: ignore

 ret = len(job["need"])
 if ret > 0:

@@ -3491,7 +3603,7 @@ class Up2k(object):
 cur.connection.commit()
 except Exception as ex:
 x = self.register_vpath(ptop, {})
-assert x
+assert x # !rm
 db_ex_chk(self.log, ex, x[1])
 raise

@@ -3506,7 +3618,7 @@ class Up2k(object):
 try:
 r = db.execute(sql, (rd, fn))
 except:
-assert self.mem_cur
+assert self.mem_cur # !rm
 r = db.execute(sql, s3enc(self.mem_cur, rd, fn))

 if r.rowcount:

@@ -3544,7 +3656,7 @@ class Up2k(object):
 try:
 db.execute(sql, v)
 except:
-assert self.mem_cur
+assert self.mem_cur # !rm
 rd, fn = s3enc(self.mem_cur, rd, fn)
 v = (wark, int(ts), sz, rd, fn, db_ip, int(at or 0))
 db.execute(sql, v)

@@ -3592,7 +3704,7 @@ class Up2k(object):
 try:
 db.execute(q, (cd, wark[:16], rd, fn))
 except:
-assert self.mem_cur
+assert self.mem_cur # !rm
 rd, fn = s3enc(self.mem_cur, rd, fn)
 db.execute(q, (cd, wark[:16], rd, fn))

@@ -3625,6 +3737,19 @@ class Up2k(object):
 except:
 pass

+if "nodirsz" not in vflags:
+try:
+q = "update ds set nf=nf+1, sz=sz+? where rd=?"
+q2 = "insert into ds values(?,?,1)"
+while True:
+if not db.execute(q, (sz, rd)).rowcount:
+db.execute(q2, (rd, sz))
+if not rd:
+break
+rd = rd.rsplit("/", 1)[0] if "/" in rd else ""
+except:
+pass
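The `nodirsz` block is what keeps the new `ds` (directory size) table live at upload time: one UPDATE-or-INSERT per path component, walking from the file's own folder up to the volume root (the empty string). A hedged sketch of that ancestor walk against a throwaway in-memory db:

```python
# hedged sketch of the ds-table rollup: bump (sz, nf) for a folder
# and every ancestor, inserting rows on first sight
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table ds (rd text, sz int, nf int)")

def bump(rd: str, sz: int) -> None:
    q = "update ds set nf=nf+1, sz=sz+? where rd=?"
    q2 = "insert into ds values(?,?,1)"
    while True:
        if not db.execute(q, (sz, rd)).rowcount:
            db.execute(q2, (rd, sz))
        if not rd:
            break  # "" is the volume root
        rd = rd.rsplit("/", 1)[0] if "/" in rd else ""

bump("music/live/2024", 1024)
print(db.execute("select * from ds order by rd").fetchall())
# [('', 1024, 1), ('music', 1024, 1), ('music/live', 1024, 1), ('music/live/2024', 1024, 1)]
```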
 def handle_rm(
 self,
 uname: str,

@@ -4009,7 +4134,7 @@ class Up2k(object):

 has_dupes = False
 if w:
-assert c1
+assert c1 # !rm
 if c2 and c2 != c1:
 self._copy_tags(c1, c2, w)

@@ -4147,7 +4272,7 @@ class Up2k(object):
 try:
 c = cur.execute(q, (rd, fn))
 except:
-assert self.mem_cur
+assert self.mem_cur # !rm
 c = cur.execute(q, s3enc(self.mem_cur, rd, fn))

 hit = c.fetchone()

@@ -4390,8 +4515,7 @@ class Up2k(object):
 rem -= len(buf)

 digest = hashobj.digest()[:33]
-digest = base64.urlsafe_b64encode(digest)
-ret.append(digest.decode("utf-8"))
+ret.append(ub64enc(digest).decode("ascii"))

 return ret, st

@@ -4463,8 +4587,8 @@ class Up2k(object):
 dip = self.hub.iphash.s(job["addr"])

 suffix = "-%.6f-%s" % (job["t0"], dip)
-with ren_open(tnam, "wb", fdir=pdir, suffix=suffix) as zfw:
-f, job["tnam"] = zfw["orz"]
+f, job["tnam"] = ren_open(tnam, "wb", fdir=pdir, suffix=suffix)
+try:
 abspath = djoin(pdir, job["tnam"])
 sprs = job["sprs"]
 sz = job["size"]

@@ -4511,6 +4635,8 @@ class Up2k(object):
 if job["hash"] and sprs:
 f.seek(sz - 1)
 f.write(b"e")
+finally:
+f.close()

 if not job["hash"]:
 self._finish_upload(job["ptop"], job["wark"])

@@ -4853,11 +4979,10 @@ def up2k_wark_from_hashlist(salt: str, filesize: int, hashes: list[str]) -> str:
 vstr = "\n".join(values)

 wark = hashlib.sha512(vstr.encode("utf-8")).digest()[:33]
-wark = base64.urlsafe_b64encode(wark)
-return wark.decode("ascii")
+return ub64enc(wark).decode("ascii")


 def up2k_wark_from_metadata(salt: str, sz: int, lastmod: int, rd: str, fn: str) -> str:
 ret = sfsenc("%s\n%d\n%d\n%s\n%s" % (salt, lastmod, sz, rd, fn))
-ret = base64.urlsafe_b64encode(hashlib.sha512(ret).digest())
+ret = ub64enc(hashlib.sha512(ret).digest())
 return ("#%s" % (ret.decode("ascii"),))[:44]
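Both wark helpers keep producing the same ids as before, only via `ub64enc`: sha512 over the salted field string, truncated to 33 bytes so the urlsafe-b64 form is exactly 44 characters with no `=` padding (33 bytes = 11 groups of 3 = 44 chars). A hedged reconstruction of the hashlist variant; the exact field list is an assumption inferred from the tail shown above:

```python
# hedged sketch of the wark derivation; field order is assumed
import base64
import hashlib

def wark_from_hashlist(salt: str, filesize: int, hashes: "list[str]") -> str:
    values = [salt, str(filesize)] + hashes  # assumption: salt, size, chunk-hashes
    vstr = "\n".join(values)
    wark = hashlib.sha512(vstr.encode("utf-8")).digest()[:33]
    return base64.urlsafe_b64encode(wark).decode("ascii")

w = wark_from_hashlist("salt", 3, ["chunkhash1"])
assert len(w) == 44 and "=" not in w
print(w)
```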
@@ -3,7 +3,7 @@ from __future__ import print_function, unicode_literals

 import argparse
 import base64
-import contextlib
+import binascii
 import errno
 import hashlib
 import hmac

@@ -30,13 +30,10 @@ from collections import Counter
 from ipaddress import IPv4Address, IPv4Network, IPv6Address, IPv6Network
 from queue import Queue

-from .__init__ import ANYWIN, EXE, MACOS, PY2, TYPE_CHECKING, VT100, WINDOWS
+from .__init__ import ANYWIN, EXE, MACOS, PY2, PY36, TYPE_CHECKING, VT100, WINDOWS
 from .__version__ import S_BUILD_DT, S_VERSION
 from .stolen import surrogateescape

-ub64dec = base64.urlsafe_b64decode
-ub64enc = base64.urlsafe_b64encode

 try:
 from datetime import datetime, timezone

@@ -64,7 +61,7 @@ if PY2:


 if sys.version_info >= (3, 7) or (
-sys.version_info >= (3, 6) and platform.python_implementation() == "CPython"
+PY36 and platform.python_implementation() == "CPython"
 ):
 ODict = dict
 else:

@@ -164,12 +161,8 @@ except ImportError:

 if not PY2:
 from io import BytesIO
-from urllib.parse import quote_from_bytes as quote
-from urllib.parse import unquote_to_bytes as unquote
 else:
 from StringIO import StringIO as BytesIO # type: ignore
-from urllib import quote # type: ignore # pylint: disable=no-name-in-module
-from urllib import unquote # type: ignore # pylint: disable=no-name-in-module


 try:

@@ -216,7 +209,7 @@ else:
 FS_ENCODING = sys.getfilesystemencoding()


-SYMTIME = sys.version_info > (3, 6) and os.utime in os.supports_follow_symlinks
+SYMTIME = PY36 and os.utime in os.supports_follow_symlinks

 META_NOBOTS = '<meta name="robots" content="noindex, nofollow">\n'

@@ -338,7 +331,7 @@ MAGIC_MAP = {"jpeg": "jpg"}

 DEF_EXP = "self.ip self.ua self.uname self.host cfg.name cfg.logout vf.scan vf.thsize hdr.cf_ipcountry srv.itime srv.htime"

-DEF_MTE = "circle,album,.tn,artist,title,.bpm,key,.dur,.q,.vq,.aq,vc,ac,fmt,res,.fps,ahash,vhash"
+DEF_MTE = ".files,circle,album,.tn,artist,title,.bpm,key,.dur,.q,.vq,.aq,vc,ac,fmt,res,.fps,ahash,vhash"

 DEF_MTH = ".vq,.aq,vc,ac,fmt,res,.fps"

@@ -440,7 +433,7 @@ def py_desc() -> str:


 def _sqlite_ver() -> str:
-assert sqlite3 # type: ignore
+assert sqlite3 # type: ignore # !rm
 try:
 co = sqlite3.connect(":memory:")
 cur = co.cursor()

@@ -488,17 +481,36 @@ VERSIONS = (
 )


-_: Any = (mp, BytesIO, quote, unquote, SQLITE_VER, JINJA_VER, PYFTPD_VER, PARTFTPY_VER)
-__all__ = [
-"mp",
-"BytesIO",
-"quote",
-"unquote",
-"SQLITE_VER",
-"JINJA_VER",
-"PYFTPD_VER",
-"PARTFTPY_VER",
-]
+try:
+_b64_enc_tl = bytes.maketrans(b"+/", b"-_")
+_b64_dec_tl = bytes.maketrans(b"-_", b"+/")

+def ub64enc(bs: bytes) -> bytes:
+x = binascii.b2a_base64(bs, newline=False)
+return x.translate(_b64_enc_tl)

+def ub64dec(bs: bytes) -> bytes:
+bs = bs.translate(_b64_dec_tl)
+return binascii.a2b_base64(bs)

+def b64enc(bs: bytes) -> bytes:
+return binascii.b2a_base64(bs, newline=False)

+def b64dec(bs: bytes) -> bytes:
+return binascii.a2b_base64(bs)

+zb = b">>>????"
+zb2 = base64.urlsafe_b64encode(zb)
+if zb2 != ub64enc(zb) or zb != ub64dec(zb2):
+raise Exception("bad smoke")

+except Exception as ex:
+ub64enc = base64.urlsafe_b64encode # type: ignore
+ub64dec = base64.urlsafe_b64decode # type: ignore
+b64enc = base64.b64encode # type: ignore
+b64dec = base64.b64decode # type: ignore
+if not PY36:
+print("using fallback base64 codec due to %r" % (ex,))
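The homegrown codec above works because `base64.urlsafe_b64encode` is itself just `b64encode` plus a `bytes.translate`; calling `binascii.b2a_base64(..., newline=False)` with a precomputed `maketrans` table skips the wrapper layers, and the `zb` smoke-test demotes to the stdlib codec if anything disagrees (`newline=False` only exists on python 3.6+, hence the `PY36`-gated fallback notice). A hedged micro-benchmark sketch; absolute timings vary by platform, the equivalence assert is the real point:

```python
# hedged micro-benchmark of the translate-based urlsafe-b64 fast path
import base64
import binascii
import timeit

_tl = bytes.maketrans(b"+/", b"-_")

def ub64enc(bs: bytes) -> bytes:
    return binascii.b2a_base64(bs, newline=False).translate(_tl)

buf = bytes(range(256)) * 4
assert ub64enc(buf) == base64.urlsafe_b64encode(buf)

print("binascii:", timeit.timeit(lambda: ub64enc(buf), number=100000))
print("stdlib:  ", timeit.timeit(lambda: base64.urlsafe_b64encode(buf), number=100000))
```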
 class Daemon(threading.Thread):

@@ -1030,7 +1042,7 @@ class MTHash(object):
 if self.stop:
 return nch, "", ofs0, chunk_sz

-assert f
+assert f # !rm
 hashobj = hashlib.sha512()
 while chunk_rem > 0:
 with self.imutex:

@@ -1045,7 +1057,7 @@ class MTHash(object):
 ofs += len(buf)

 bdig = hashobj.digest()[:33]
-udig = base64.urlsafe_b64encode(bdig).decode("utf-8")
+udig = ub64enc(bdig).decode("ascii")
 return nch, udig, ofs0, chunk_sz


@@ -1071,7 +1083,7 @@ class HMaccas(object):
 self.cache = {}

 zb = hmac.new(self.key, msg, hashlib.sha512).digest()
-zs = base64.urlsafe_b64encode(zb)[: self.retlen].decode("utf-8")
+zs = ub64enc(zb)[: self.retlen].decode("ascii")
 self.cache[msg] = zs
 return zs

@@ -1402,18 +1414,13 @@ def min_ex(max_lines: int = 8, reverse: bool = False) -> str:
 return "\n".join(ex[-max_lines:][:: -1 if reverse else 1])


-@contextlib.contextmanager
-def ren_open(
-fname: str, *args: Any, **kwargs: Any
-) -> Generator[dict[str, tuple[typing.IO[Any], str]], None, None]:
+def ren_open(fname: str, *args: Any, **kwargs: Any) -> tuple[typing.IO[Any], str]:
 fun = kwargs.pop("fun", open)
 fdir = kwargs.pop("fdir", None)
 suffix = kwargs.pop("suffix", None)

 if fname == os.devnull:
-with fun(fname, *args, **kwargs) as f:
-yield {"orz": (f, fname)}
-return
+return fun(fname, *args, **kwargs), fname

 if suffix:
 ext = fname.split(".")[-1]

@@ -1435,6 +1442,7 @@ def ren_open(
 asciified = False
 b64 = ""
 while True:
+f = None
 try:
 if fdir:
 fpath = os.path.join(fdir, fname)

@@ -1446,19 +1454,20 @@ def ren_open(
 fname += suffix
 ext += suffix

-with fun(fsenc(fpath), *args, **kwargs) as f:
-if b64:
-assert fdir
-fp2 = "fn-trunc.%s.txt" % (b64,)
-fp2 = os.path.join(fdir, fp2)
-with open(fsenc(fp2), "wb") as f2:
-f2.write(orig_name.encode("utf-8"))
+f = fun(fsenc(fpath), *args, **kwargs)
+if b64:
+assert fdir # !rm
+fp2 = "fn-trunc.%s.txt" % (b64,)
+fp2 = os.path.join(fdir, fp2)
+with open(fsenc(fp2), "wb") as f2:
+f2.write(orig_name.encode("utf-8"))

-yield {"orz": (f, fname)}
-return
+return f, fname

 except OSError as ex_:
 ex = ex_
+if f:
+f.close()

 # EPERM: android13
 if ex.errno in (errno.EINVAL, errno.EPERM) and not asciified:

@@ -1479,8 +1488,7 @@ def ren_open(

 if not b64:
 zs = ("%s\n%s" % (orig_name, suffix)).encode("utf-8", "replace")
-zs = hashlib.sha512(zs).digest()[:12]
-b64 = base64.urlsafe_b64encode(zs).decode("utf-8")
+b64 = ub64enc(hashlib.sha512(zs).digest()[:12]).decode("ascii")

 badlen = len(fname)
 while len(fname) >= badlen:
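The `ren_open` rewrite above is the source of all the call-site churn earlier in this diff: it is no longer a `@contextlib.contextmanager` yielding a `{"orz": (f, fname)}` dict, but a plain function returning the `(filehandle, final_name)` tuple, so callers now own the close (hence the new try/finally blocks) and the retry loop tracks `f = None` to close half-opened handles on OSError. A hedged usage sketch, assuming copyparty is importable; tempdir and suffix are arbitrary:

```python
# hedged sketch of the new ren_open calling convention
import os
import tempfile

from copyparty.util import ren_open

fdir = tempfile.mkdtemp()
f, final_name = ren_open("hello.txt", "wb", fdir=fdir, suffix="-demo")
try:
    f.write(b"hi")  # caller owns the handle now; the old ctxmgr closed it for you
finally:
    f.close()
print(os.path.join(fdir, final_name))
```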
@@ -1713,7 +1721,7 @@ class MultipartParser(object):
 returns the value of the next field in the multipart body,
 raises if the field name is not as expected
 """
-assert self.gen
+assert self.gen # !rm
 p_field, p_fname, p_data = next(self.gen)
 if p_field != field_name:
 raise WrongPostKey(field_name, p_field, p_fname, p_data)

@@ -1722,7 +1730,7 @@ class MultipartParser(object):

 def drop(self) -> None:
 """discards the remaining multipart body"""
-assert self.gen
+assert self.gen # !rm
 for _, _, data in self.gen:
 for _ in data:
 pass

@@ -1786,9 +1794,8 @@ def rand_name(fdir: str, fn: str, rnd: int) -> str:

 nc = rnd + extra
 nb = (6 + 6 * nc) // 8
-zb = os.urandom(nb)
-zb = base64.urlsafe_b64encode(zb)
-fn = zb[:nc].decode("utf-8") + ext
+zb = ub64enc(os.urandom(nb))
+fn = zb[:nc].decode("ascii") + ext
 ok = not os.path.exists(fsenc(os.path.join(fdir, fn)))

 return fn
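A small arithmetic note on `rand_name`: each urlsafe-b64 character carries 6 bits, so `nc` characters need `ceil(6*nc/8)` bytes of entropy; because `6*nc` is always even, `(6 + 6*nc) // 8` computes exactly that ceiling. Quick check:

```python
# verifying the byte-count formula used by rand_name
import base64
import os

for nc in range(1, 12):
    nb = (6 + 6 * nc) // 8
    assert nb == -(-6 * nc // 8)  # matches ceil(6*nc/8)
    zb = base64.urlsafe_b64encode(os.urandom(nb))
    assert len(zb) >= nc  # always enough characters to slice zb[:nc]
```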
@@ -1801,7 +1808,7 @@ def gen_filekey(alg: int, salt: str, fspath: str, fsize: int, inode: int) -> str
 zs = "%s %s" % (salt, fspath)

 zb = zs.encode("utf-8", "replace")
-return base64.urlsafe_b64encode(hashlib.sha512(zb).digest()).decode("ascii")
+return ub64enc(hashlib.sha512(zb).digest()).decode("ascii")


 def gen_filekey_dbg(

@@ -1815,7 +1822,7 @@ def gen_filekey_dbg(
 ) -> str:
 ret = gen_filekey(alg, salt, fspath, fsize, inode)

-assert log_ptn
+assert log_ptn # !rm
 if log_ptn.search(fspath):
 try:
 import inspect

@@ -2074,6 +2081,8 @@ def html_bescape(s: bytes, quot: bool = False, crlf: bool = False) -> bytes:

 def _quotep2(txt: str) -> str:
 """url quoter which deals with bytes correctly"""
+if not txt:
+return ""
 btxt = w8enc(txt)
 quot = quote(btxt, safe=b"/")
 return w8dec(quot.replace(b" ", b"+")) # type: ignore

@@ -2081,18 +2090,61 @@ def _quotep2(txt: str) -> str:

 def _quotep3(txt: str) -> str:
 """url quoter which deals with bytes correctly"""
+if not txt:
+return ""
 btxt = w8enc(txt)
 quot = quote(btxt, safe=b"/").encode("utf-8")
 return w8dec(quot.replace(b" ", b"+"))


-quotep = _quotep3 if not PY2 else _quotep2
+if not PY2:
+_uqsb = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_.-~/"
+_uqtl = {
+n: ("%%%02X" % (n,) if n not in _uqsb else chr(n)).encode("utf-8")
+for n in range(256)
+}
+_uqtl[b" "] = b"+"

+def _quotep3b(txt: str) -> str:
+"""url quoter which deals with bytes correctly"""
+if not txt:
+return ""
+btxt = w8enc(txt)
+if btxt.rstrip(_uqsb):
+lut = _uqtl
+btxt = b"".join([lut[ch] for ch in btxt])
+return w8dec(btxt)

+quotep = _quotep3b

+_hexd = "0123456789ABCDEFabcdef"
+_hex2b = {(a + b).encode(): bytes.fromhex(a + b) for a in _hexd for b in _hexd}

+def unquote(btxt: bytes) -> bytes:
+h2b = _hex2b
+parts = iter(btxt.split(b"%"))
+ret = [next(parts)]
+for item in parts:
+c = h2b.get(item[:2])
+if c is None:
+ret.append(b"%")
+ret.append(item)
+else:
+ret.append(c)
+ret.append(item[2:])
+return b"".join(ret)

+from urllib.parse import quote_from_bytes as quote
+else:
+from urllib import quote # type: ignore # pylint: disable=no-name-in-module
+from urllib import unquote # type: ignore # pylint: disable=no-name-in-module

+quotep = _quotep2


 def unquotep(txt: str) -> str:
 """url unquoter which deals with bytes correctly"""
 btxt = w8enc(txt)
 # btxt = btxt.replace(b"+", b" ")
 unq2 = unquote(btxt)
 return w8dec(unq2)
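The table-driven `unquote` above splits on `%`, maps each candidate two-hex-digit prefix through the precomputed `_hex2b` dict (22×22 = 484 entries), and passes malformed escapes through unchanged, which is exactly the `urllib.parse.unquote_to_bytes` contract that the new `test_unquote` pins it against. A hedged spot-check in the same spirit, assuming copyparty is importable:

```python
# hedged spot-check: copyparty's unquote vs the stdlib reference
from urllib.parse import unquote_to_bytes

from copyparty.util import unquote

for bs in (b"a%20b", b"100%25", b"bad%zzescape", b"%e4%b8%ad"):
    assert unquote(bs) == unquote_to_bytes(bs)
    print(bs, "->", unquote(bs))
```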
@@ -2238,12 +2290,12 @@ w8enc = _w8enc3 if not PY2 else _w8enc2

 def w8b64dec(txt: str) -> str:
 """decodes base64(filesystem-bytes) to wtf8"""
-return w8dec(base64.urlsafe_b64decode(txt.encode("ascii")))
+return w8dec(ub64dec(txt.encode("ascii")))


 def w8b64enc(txt: str) -> str:
 """encodes wtf8 to base64(filesystem-bytes)"""
-return base64.urlsafe_b64encode(w8enc(txt)).decode("ascii")
+return ub64enc(w8enc(txt)).decode("ascii")


 if not PY2 and WINDOWS:

@@ -2401,7 +2453,7 @@ def wunlink(log: "NamedLogger", abspath: str, flags: dict[str, Any]) -> bool:
 def get_df(abspath: str) -> tuple[Optional[int], Optional[int]]:
 try:
 # some fuses misbehave
-assert ctypes
+assert ctypes # type: ignore # !rm
 if ANYWIN:
 bfree = ctypes.c_ulonglong(0)
 ctypes.windll.kernel32.GetDiskFreeSpaceExW( # type: ignore

@@ -2619,8 +2671,7 @@ def hashcopy(
 if slp:
 time.sleep(slp)

-digest = hashobj.digest()[:33]
-digest_b64 = base64.urlsafe_b64encode(digest).decode("utf-8")
+digest_b64 = ub64enc(hashobj.digest()[:33]).decode("ascii")

 return tlen, hashobj.hexdigest(), digest_b64

@@ -2860,7 +2911,7 @@ def getalive(pids: list[int], pgid: int) -> list[int]:
 alive.append(pid)
 else:
 # windows doesn't have pgroups; assume
-assert psutil
+assert psutil # type: ignore # !rm
 psutil.Process(pid)
 alive.append(pid)
 except:

@@ -2878,7 +2929,7 @@ def killtree(root: int) -> None:
 pgid = 0

 if HAVE_PSUTIL:
-assert psutil
+assert psutil # type: ignore # !rm
 pids = [root]
 parent = psutil.Process(root)
 for child in parent.children(recursive=True):

@@ -3291,7 +3342,7 @@ def runhook(
 at: float,
 txt: str,
 ) -> dict[str, Any]:
-assert broker or up2k
+assert broker or up2k # !rm
 asrv = (broker or up2k).asrv
 args = (broker or up2k).args
 vp = vp.replace("\\", "/")

@@ -3485,7 +3536,7 @@ def termsize() -> tuple[int, int]:
 def hidedir(dp) -> None:
 if ANYWIN:
 try:
-assert ctypes
+assert ctypes # type: ignore # !rm
 k32 = ctypes.WinDLL("kernel32")
 attrs = k32.GetFileAttributesW(dp)
 if attrs >= 0:

@@ -3521,3 +3572,16 @@ class WrongPostKey(Pebkac):
 self.got = got
 self.fname = fname
 self.datagen = datagen


+_: Any = (mp, BytesIO, quote, unquote, SQLITE_VER, JINJA_VER, PYFTPD_VER, PARTFTPY_VER)
+__all__ = [
+"mp",
+"BytesIO",
+"quote",
+"unquote",
+"SQLITE_VER",
+"JINJA_VER",
+"PYFTPD_VER",
+"PARTFTPY_VER",
+]
@@ -1218,9 +1218,9 @@ var Ls = {
 "m_ok": "确定",
 "m_ng": "取消",

-"enable": "启用", //m
-"danger": "危险", //m
-"clipped": "已复制到剪贴板", //m
+"enable": "启用",
+"danger": "危险",
+"clipped": "已复制到剪贴板",

 "ht_s": "秒",
 "ht_m": "分",

@@ -1303,12 +1303,12 @@ var Ls = {
 "utl_prog": "进度",

 // 保持简短:
-"utl_404": "404", //m
-"utl_err": "故障", //m
-"utl_oserr": "OS故障", //m
-"utl_found": "已找到", //m
-"utl_defer": "延期", //m
-"utl_yolo": "加速", //m
+"utl_404": "404",
+"utl_err": "错误",
+"utl_oserr": "OS错误",
+"utl_found": "已找到",
+"utl_defer": "延期",
+"utl_yolo": "加速",
 "utl_done": "完成",

 "ul_flagblk": "文件已添加到队列</b><br>但另一个浏览器标签中有一个繁忙的 up2k,<br>因此等待它完成",

@@ -1336,7 +1336,7 @@ var Ls = {
 "cl_hcancel": "列隐藏已取消",

 "ct_grid": '网格视图',
-"ct_ttips": '◔ ◡ ◔">ℹ️ 工具提示', //m
+"ct_ttips": '◔ ◡ ◔">ℹ️ 工具提示',
 "ct_thumb": '在网格视图中,切换图标或缩略图$N快捷键: T">🖼️ 缩略图',
 "ct_csel": '在网格视图中使用 CTRL 和 SHIFT 进行文件选择">CTRL',
 "ct_ihop": '当图像查看器关闭时,滚动到最后查看的文件">滚动',

@@ -1468,7 +1468,7 @@ var Ls = {
 "fs_pname": "链接名称可选;如果为空则随机",
 "fs_tsrc": "共享的文件或文件夹",
 "fs_ppwd": "密码可选",
-"fs_w8": "正在创建文件共享...", //m
+"fs_w8": "正在创建文件共享...",
 "fs_ok": "<h6>分享链接已创建</h6>\n按 <code>Enter/OK</code> 复制到剪贴板\n按 <code>ESC/Cancel</code> 关闭\n\n",

 "frt_dec": "可能修复一些损坏的文件名\">url-decode",

@@ -1479,8 +1479,8 @@ var Ls = {
 "fr_case": "区分大小写的正则表达式\">case",
 "fr_win": "Windows 安全名称;将 <code><>:&quot;\\|?*</code> 替换为日文全角字符\">win",
 "fr_slash": "将 <code>/</code> 替换为不会导致新文件夹创建的字符\">不使用 /",
-"fr_re": "正则表达式搜索模式应用于原始文件名;$N可以在下面的格式字段中引用捕获组,如<code>(1)</code>和<code>(2)</code>等等。", //m
-"fr_fmt": "受到 foobar2000 的启发:$N<code>(title)</code> 被歌曲名称替换,$N<code>[(artist) - ](title)</code> 仅当歌曲艺术家不为空时才包含<code>[此]</code>部分$N<code>$lpad((tn),2,0)</code> 将曲目编号填充为 2 位数字", //m
+"fr_re": "正则表达式搜索模式应用于原始文件名;$N可以在下面的格式字段中引用捕获组,如<code>(1)</code>和<code>(2)</code>等等。",
+"fr_fmt": "受到 foobar2000 的启发:$N<code>(title)</code> 被歌曲名称替换,$N<code>[(artist) - ](title)</code> 仅当歌曲艺术家不为空时才包含<code>[此]</code>部分$N<code>$lpad((tn),2,0)</code> 将曲目编号填充为 2 位数字",
 "fr_pdel": "删除",
 "fr_pnew": "另存为",
 "fr_pname": "为你的新预设提供一个名称",

@@ -1540,8 +1540,8 @@ var Ls = {
 "gt_c1": "截断文件名更多(显示更少)",
 "gt_c2": "截断文件名更少(显示更多)",

-"sm_w8": "正在搜寻匹配...", //m
-"sm_prev": "以下是来自先前查询的搜索结果:\n ",
+"sm_w8": "正在搜索...",
+"sm_prev": "上次查询的搜索结果:\n ",
 "sl_close": "关闭搜索结果",
 "sl_hits": "显示 {0} 个结果",
 "sl_moar": "加载更多",

@@ -1613,20 +1613,20 @@ var Ls = {
 "un_del": "删除",
 "un_m3": "正在加载你的近期上传...",
 "un_busy": "正在删除 {0} 个文件...",
-"un_clip": "{0} 个链接已复制到剪贴板", //m
+"un_clip": "{0} 个链接已复制到剪贴板",

 "u_https1": "你应该",
 "u_https2": "切换到 https",
 "u_https3": "以获得更好的性能",
 "u_ancient": '你的浏览器非常古老 -- 也许你应该 <a href="#" onclick="goto(\'bup\')">改用 bup</a>',
 "u_nowork": "需要 Firefox 53+ 或 Chrome 57+ 或 iOS 11+",
-"u_nodrop": '您的浏览器太旧,不支持通过拖动文件到窗口来上传文件', //m
-"u_notdir": "未收到文件夹!\n\n您的浏览器太旧;\n请尝试将文件夹拖入窗口", //m
+"u_nodrop": '浏览器版本低,不支持通过拖动文件到窗口来上传文件',
+"u_notdir": "不是文件夹!\n\n您的浏览器太旧;\n请尝试将文件夹拖入窗口",
 "u_uri": "要从其他浏览器窗口拖放图片,\n请将其拖放到大的上传按钮上",
 "u_enpot": '切换到 <a href="#">简约 UI</a>(可能提高上传速度)',
 "u_depot": '切换到 <a href="#">精美 UI</a>(可能降低上传速度)',
-"u_gotpot": '切换到土豆 UI 以提高上传速度,\n\n随时可以不同意并切换回去!',
-"u_pott": "<p>个文件: <b>{0}</b> 已完成, <b>{1}</b> 失败, <b>{2}</b> 正在处理, <b>{3}</b> 已排队</p>", //m
+"u_gotpot": '切换到简化UI以提高上传速度,\n\n随时可以不同意并切换回去!',
+"u_pott": "<p>个文件: <b>{0}</b> 已完成, <b>{1}</b> 失败, <b>{2}</b> 正在处理, <b>{3}</b> 排队中</p>",
 "u_ever": "这是基本的上传工具; up2k 需要至少<br>chrome 21 // firefox 13 // edge 12 // opera 12 // safari 5.1",
 "u_su2k": '这是基本的上传工具;<a href="#" id="u2yea">up2k</a> 更好',
 "u_ewrite": '你对这个文件夹没有写入权限',

@@ -1639,16 +1639,16 @@ var Ls = {
 "u_up_life": "此上传将在 {0} 后从服务器删除",
 "u_asku": '将这些 {0} 个文件上传到 <code>{1}</code>',
 "u_unpt": "你可以使用左上角的 🧯 撤销/删除此上传",
-"u_bigtab": '即将显示 {0} 个文件。这可能会导致您的浏览器崩溃。您确定吗?', //m
-"u_scan": '正在扫描文件...', //m
-"u_dirstuck": '您的浏览器无法访问以下 {0} 个文件/文件夹,因此它们将被跳过:', //m
+"u_bigtab": '将显示 {0} 个文件,可能会导致您的浏览器崩溃。您确定吗?',
+"u_scan": '正在扫描文件...',
+"u_dirstuck": '您的浏览器无法访问以下 {0} 个文件/文件夹,它们将被跳过:',
 "u_etadone": '完成 ({0}, {1} 个文件)',
 "u_etaprep": '(准备上传)',
 "u_hashdone": '哈希完成',
 "u_hashing": '哈希',
-"u_hs": '正在等待服务器...', //m
-"u_dupdefer": "这是一个重复文件。它将在所有其他文件上传后进行处理", //m
-"u_actx": "单击此文本以防止切换到其他窗口/选项卡时性能下降", //m
+"u_hs": '正在等待服务器...',
+"u_dupdefer": "这是一个重复文件。它将在所有其他文件上传后进行处理",
+"u_actx": "单击此文本以防止切换到其他窗口/选项卡时性能下降",
 "u_fixed": "好! 已修复 👍",
 "u_cuerr": "上传块 {0} 的 {1} 失败;\n可能无害,继续中\n\n文件:{2}",
 "u_cuerr2": "服务器拒绝上传(块 {0} 的 {1});\n稍后重试\n\n文件:{2}\n\n错误 ",

@@ -5301,8 +5301,12 @@ var showfile = (function () {

 r.files.push({ 'id': link.id, 'name': uricom_dec(fn) });

-var td = ebi(link.id).closest('tr').getElementsByTagName('td')[0];
+var ah = ebi(link.id),
+td = ah.closest('tr').getElementsByTagName('td')[0];

+if (ah.textContent.endsWith('/'))
+continue;

 if (lang == 'ts' || (lang == 'md' && td.textContent != '-'))
 continue;
@@ -53,7 +53,7 @@ a.r {
 border-color: #c7a;
 }
 a.g {
-color: #2b0;
+color: #0a0;
 border-color: #3a0;
 box-shadow: 0 .3em 1em #4c0;
 }

@@ -152,11 +152,13 @@ pre b,
 code b {
 color: #000;
 font-weight: normal;
-text-shadow: 0 0 .2em #0f0;
+text-shadow: 0 0 .2em #3f3;
+border-bottom: 1px solid #090;
 }
 html.z pre b,
 html.z code b {
 color: #fff;
+border-bottom: 1px solid #9f9;
 }
@@ -60,6 +60,18 @@
 </div>
 {%- endif %}

+{%- if ups %}
+<h1 id="aa">incoming files:</h1>
+<table class="vols">
+<thead><tr><th>%</th><th>speed</th><th>eta</th><th>idle</th><th>dir</th><th>file</th></tr></thead>
+<tbody>
+{% for u in ups %}
+<tr><td>{{ u[0] }}</td><td>{{ u[1] }}</td><td>{{ u[2] }}</td><td>{{ u[3] }}</td><td><a href="{{ u[4] }}">{{ u[5]|e }}</a></td><td>{{ u[6]|e }}</td></tr>
+{% endfor %}
+</tbody>
+</table>
+{%- endif %}

 {%- if rvol %}
 <h1 id="f">you can browse:</h1>
 <ul>

@@ -33,6 +33,7 @@ var Ls = {
 "ta1": "du må skrive et nytt passord først",
 "ta2": "gjenta for å bekrefte nytt passord:",
 "ta3": "fant en skrivefeil; vennligst prøv igjen",
+"aa1": "innkommende:",
 },
 "eng": {
 "d2": "shows the state of all active threads",

@@ -78,6 +79,7 @@ var Ls = {
 "ta1": "请先输入新密码",
 "ta2": "重复以确认新密码:",
 "ta3": "发现拼写错误;请重试",
+"aa1": "正在接收的文件:", //m
 }
 };
@@ -1,3 +1,25 @@
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0909-2343 `v1.15.1` session

<img src="https://github.com/9001/copyparty/raw/hovudstraum/docs/logo.svg" width="250" align="right"/>

blessed by ⑨, this release is [certified strong](https://github.com/user-attachments/assets/05459032-736c-4b9a-9ade-a0044461194a) ([artist](https://x.com/hcnone))

## new features

* login sessions b5405174
  * a random session cookie is generated for each known user, replacing the previous plaintext login cookie
  * the logout button will nuke the session on all clients where that user is logged in
  * the sessions are stored in the database at `--ses-db`, default `~/.config/copyparty/sessions.db` (docker uses `/cfg/sessions.db` similar to the other runtime configs)
  * if you run multiple copyparty instances, much like [shares](https://github.com/9001/copyparty#shares) and [user-changeable passwords](https://github.com/9001/copyparty#user-changeable-passwords) you'll want to keep a separate db for each instance
  * can be mostly disabled with `--no-ses` when it turns out to be buggy

## bugfixes

* v1.13.8 broke the u2c `--ow` option to replace/overwrite files on the server during upload 6eee6015



▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0908-1925 `v1.15.0` fill the drives
@@ -255,6 +255,9 @@ cat copyparty/httpcli.py | awk '/^[^a-zA-Z0-9]+def / {printf "%s\n%s\n\n", f, pl
 # create a folder with symlinks to big files
 for d in /usr /var; do find $d -type f -size +30M 2>/dev/null; done | while IFS= read -r x; do ln -s "$x" big/; done

+# up2k worst-case testfiles: create 64 GiB (256 x 256 MiB) of sparse files; each file takes 1 MiB disk space; each 1 MiB chunk is globally unique
+for f in {0..255}; do echo $f; truncate -s 256M $f; b1=$(printf '%02x' $f); for o in {0..255}; do b2=$(printf '%02x' $o); printf "\x$b1\x$b2" | dd of=$f bs=2 seek=$((o*1024*1024)) conv=notrunc 2>/dev/null; done; done

 # py2 on osx
 brew install python@2
 pip install virtualenv

@@ -63,6 +63,8 @@ add your own translations by using the english or norwegian one from `browser.js
 the easy way is to open up and modify `browser.js` in your own installation; depending on how you installed copyparty it might be named `browser.js.gz` instead, in which case just decompress it, restart copyparty, and start editing it anyways

+you will be delighted to see inline html in the translation strings; to help prevent syntax errors, there is [a very jank linux script](https://github.com/9001/copyparty/blob/hovudstraum/scripts/tlcheck.sh) which is slightly better than nothing -- just beware the false-positives, so even if it complains it's not necessarily wrong/bad

 if you're running `copyparty-sfx.py` then you'll find it at `/tmp/pe-copyparty.1000/copyparty/web` (on linux) or `%TEMP%\pe-copyparty\copyparty\web` (on windows)
 * make sure to keep backups of your work religiously! since that location is volatile af

@@ -70,6 +70,10 @@ def uh2(fp):
 continue

 on = True

+if " # !rm" in ln:
+continue

 lns.append(ln)

 cs = "\n".join(lns)
tests/test_utils.py (new file, 38 lines)
@@ -0,0 +1,38 @@
+#!/usr/bin/env python3
+# coding: utf-8
+from __future__ import print_function, unicode_literals

+import unittest

+from copyparty.__main__ import PY2
+from copyparty.util import w8enc
+from tests import util as tu


+class TestUtils(unittest.TestCase):
+def cmp(self, orig, t1, t2):
+if t1 != t2:
+raise Exception("\n%r\n%r\n%r\n" % (w8enc(orig), t1, t2))

+def test_quotep(self):
+if PY2:
+raise unittest.SkipTest()

+from copyparty.util import _quotep3, _quotep3b, w8dec

+txt = w8dec(tu.randbytes(8192))
+self.cmp(txt, _quotep3(txt), _quotep3b(txt))

+def test_unquote(self):
+if PY2:
+raise unittest.SkipTest()

+from urllib.parse import unquote_to_bytes as u2b

+from copyparty.util import unquote

+for btxt in (
+tu.randbytes(8192),
+br"%ed%91qw,er;ty%20as df?gh+jkl%zxc&vbn <qwe>\"rty'uio&asd fgh",
+):
+self.cmp(btxt, unquote(btxt), u2b(btxt))
@@ -3,6 +3,7 @@
 from __future__ import print_function, unicode_literals

 import os
+import random
 import re
 import shutil
 import socket

@@ -49,6 +50,10 @@ from copyparty.util import FHC, CachedDict, Garda, Unrecv
 init_E(E)


+def randbytes(n):
+return random.getrandbits(n * 8).to_bytes(n, "little")


 def runcmd(argv):
 p = sp.Popen(argv, stdout=sp.PIPE, stderr=sp.PIPE)
 stdout, stderr = p.communicate()

@@ -117,10 +122,10 @@ class Cfg(Namespace):
 def __init__(self, a=None, v=None, c=None, **ka0):
 ka = {}

-ex = "chpw daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp early_ban ed emp exp force_js getmod grid gsel hardlink ih ihead magic hardlink_only nid nih no_acode no_athumb no_dav no_db_ip no_del no_dupe no_lifetime no_logues no_mv no_pipe no_poll no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw og og_no_head og_s_title q rand smb srch_dbg stats uqe vague_403 vc ver write_uplog xdev xlink xvol zs"
+ex = "chpw daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp early_ban ed emp exp force_js getmod grid gsel hardlink ih ihead magic hardlink_only nid nih no_acode no_athumb no_dav no_db_ip no_del no_dirsz no_dupe no_lifetime no_logues no_mv no_pipe no_poll no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw og og_no_head og_s_title q rand re_dirsz smb srch_dbg stats uqe vague_403 vc ver write_uplog xdev xlink xvol zs"
 ka.update(**{k: False for k in ex.split()})

-ex = "dedup dotpart dotsrch hook_v no_dhash no_fastboot no_fpool no_htp no_rescan no_sendfile no_ses no_snap no_voldump re_dhash plain_ip"
+ex = "dedup dotpart dotsrch hook_v no_dhash no_fastboot no_fpool no_htp no_rescan no_sendfile no_ses no_snap no_up_list no_voldump re_dhash plain_ip"
 ka.update(**{k: True for k in ex.split()})

 ex = "ah_cli ah_gen css_browser hist js_browser js_other mime mimes no_forget no_hash no_idx nonsus_urls og_tpl og_ua"