Compare commits

...

22 Commits

Author  SHA1        Message                                             Date
ed      dac2fad48e  v1.3.8                                              2022-07-27 16:07:26 +02:00
ed      77f624b01e  improve shumantime + use it everywhere              2022-07-27 15:07:04 +02:00
ed      e24ffebfc8  indicate write-activity on splashpage               2022-07-27 14:53:15 +02:00
ed      70d07d1609  perf                                                2022-07-27 14:01:30 +02:00
ed      bfb3303d87  include client total ETA in upload logs             2022-07-27 12:07:51 +02:00
ed      660705a436  defer volume reindexing on db activity              2022-07-27 11:48:47 +02:00
ed      74a3f97671  cleanup + bump deps                                 2022-07-27 00:15:49 +02:00
ed      b3e35bb494  async lsof w/ timeout                               2022-07-26 22:38:13 +02:00
ed      76adac7c72  up2k-hook-ytid: add mp4/webm/mkv metadata scanner   2022-07-26 22:09:18 +02:00
ed      5dc75ebb67  async e2ts / e2v + forget deleted shadowed          2022-07-26 12:47:40 +02:00
ed      d686ce12b6  lsof db on stuck transaction                        2022-07-25 02:07:59 +02:00
ed      d3c40a423e  mutagen: support nullduration tags                  2022-07-25 01:21:34 +02:00
ed      2fb1e6dab8  mute exception on zip abort                         2022-07-25 01:20:38 +02:00
ed      10430b347f  fix dumb prisonparty bug                            2022-07-22 20:49:35 +02:00
ed      e0e3f6ac3e  up2k-hook-ytid: add override                        2022-07-22 10:47:10 +02:00
ed      c694cbffdc  a11y: improve skip-to-files                         2022-07-20 23:44:57 +02:00
ed      bdd0e5d771  a11y: enter = onclick                               2022-07-20 23:32:02 +02:00
ed      aa98e427f0  audio-eq: add crossfeed                             2022-07-20 01:54:59 +02:00
ed      daa6f4c94c  add video hotkeys for digit-seeking                 2022-07-17 23:45:02 +02:00
ed      4a76663fb2  ensure free disk space                              2022-07-17 22:33:08 +02:00
ed      cebda5028a  v1.3.7                                              2022-07-16 20:48:23 +02:00
ed      3fa377a580  sqlite diag                                         2022-07-16 20:43:26 +02:00
32 changed files with 1043 additions and 405 deletions

.gitignore

@@ -8,8 +8,9 @@ copyparty.egg-info/
buildenv/
build/
dist/
sfx/
py2/
sfx/
unt/
.venv/
# ide


@@ -57,6 +57,8 @@ try the **[read-only demo server](https://a.ocv.me/pub/demo/)** 👀 running fro
* [server config](#server-config) - using arguments or config files, or a mix of both
* [ftp-server](#ftp-server) - an FTP server can be started using `--ftp 3921`
* [file indexing](#file-indexing)
* [exclude-patterns](#exclude-patterns)
* [periodic rescan](#periodic-rescan) - filesystem monitoring
* [upload rules](#upload-rules) - set upload rules using volume flags
* [compress uploads](#compress-uploads) - files can be autocompressed on upload
* [database location](#database-location) - in-volume (`.hist/up2k.db`, default) or somewhere else
@@ -373,6 +375,7 @@ the browser has the following hotkeys (always qwerty)
* `Esc` close viewer
* videos:
* `U/O` skip 10sec back/forward
* `0..9` jump to 0%..90%
* `P/K/Space` play/pause
* `M` mute
* `C` continue playing next video
@@ -680,6 +683,8 @@ note:
* `e2tsr` is probably always overkill, since `e2ds`/`e2dsa` would pick up any file modifications and `e2ts` would then reindex those, unless there is a new copyparty version with new parsers and the release note says otherwise
* the rescan button in the admin panel has no effect unless the volume has `-e2ds` or higher
### exclude-patterns
to save some time, you can provide a regex pattern for filepaths to only index by filename/path/size/last-modified (and not the hash of the file contents) by setting `--no-hash \.iso$` or the volume-flag `:c,nohash=\.iso$`; this has the following consequences:
* initial indexing is way faster, especially when the volume is on a network disk
* makes it impossible to [file-search](#file-search)
@@ -689,12 +694,21 @@ similarly, you can fully ignore files/folders using `--no-idx [...]` and `:c,noi
if you set `--no-hash [...]` globally, you can enable hashing for specific volumes using flag `:c,nohash=`
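for example (a minimal sketch; the paths and patterns here are made up), a volume on a network disk where iso-images get indexed without content-hashes, and temp-files are not indexed at all:
```
python3 copyparty-sfx.py --no-hash '\.iso$' -v '/mnt/nas/pub::rw:c,noidx=\.(part|tmp)$'
```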
### periodic rescan
filesystem monitoring; if copyparty is not the only software doing stuff on your filesystem, you may want to enable periodic rescans to keep the index up to date
argument `--re-maxage 60` will rescan all volumes every 60 sec, same as volflag `:c,scan=60` to specify it per-volume
uploads are disabled while a rescan is happening, so rescans will be delayed by `--db-act` (default 10 sec) when there is write-activity going on (uploads, renames, ...)
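putting the two together (volume path is hypothetical): rescan this volume every 10 minutes, but only once the database has been idle for 30 sec:
```
python3 copyparty-sfx.py --db-act 30 -v '/srv/pub::rw:c,scan=600'
```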
## upload rules
set upload rules using volume flags, some examples (a combined sketch follows the list):
* `:c,sz=1k-3m` sets allowed filesize between 1 KiB and 3 MiB inclusive (suffixes: `b`, `k`, `m`, `g`)
* `:c,df=4g` block uploads if there would be less than 4 GiB free disk space afterwards
* `:c,nosub` disallow uploading into subdirectories; goes well with `rotn` and `rotf`:
* `:c,rotn=1000,2` moves uploads into subfolders, up to 1000 files in each folder before making a new one, two levels deep (must be at least 1)
* `:c,rotf=%Y/%m/%d/%H` enforces files to be uploaded into a structure of subfolders according to that date format
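for instance, a write-only inbox that accepts files between 1 KiB and 3 MiB, keeps 4 GiB of disk free, and sorts uploads into per-day folders (the path and limits are just examples):
```
python3 copyparty-sfx.py -v '/srv/inbox::w:c,sz=1k-3m:c,df=4g:c,rotf=%Y/%m/%d'
```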
@@ -969,10 +983,10 @@ quick outline of the up2k protocol, see [uploading](#uploading) for the web-clie
up2k has saved a few uploads from becoming corrupted in-transfer already; caught an android phone on wifi red-handed in wireshark with a bitflip; however, bup with https would *probably* have noticed as well (thanks to tls also functioning as an integrity check)
regarding the frequent server log message during uploads;
`6.0M 106M/s 2.77G 102.9M/s n948 thank 4/0/3/1 10042/7198`
`6.0M 106M/s 2.77G 102.9M/s n948 thank 4/0/3/1 10042/7198 00:01:09`
* this chunk was `6 MiB`, uploaded at `106 MiB/s`
* on this http connection, `2.77 GiB` transferred, `102.9 MiB/s` average, `948` chunks handled
* client says `4` uploads OK, `0` failed, `3` busy, `1` queued, `10042 MiB` total size, `7198 MiB` left
* client says `4` uploads OK, `0` failed, `3` busy, `1` queued, `10042 MiB` total size, `7198 MiB` and `00:01:09` left
## why chunk-hashes
@@ -1238,11 +1252,15 @@ if you want thumbnails, `apt -y install ffmpeg`
ideas for context to include in bug reports
in general, commandline arguments (and config file if any)
if something broke during an upload (replacing FILENAME with a part of the filename that broke):
```
journalctl -aS '48 hour ago' -u copyparty | grep -C10 FILENAME | tee bug.log
```
if there's a wall of base64 in the log (thread stacks) then please include that, especially if you run into something freezing up or getting stuck, for example `OperationalError('database is locked')` -- alternatively you can visit `/?stack` to see the stacks live, so http://127.0.0.1:3923/?stack for example
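to grab the live stacks from a shell instead (assuming the default port, and that your account is permitted to view them):
```
curl 'http://127.0.0.1:3923/?stack' | tee stack.txt
```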
# building


@@ -11,13 +11,13 @@ sysdirs=( /bin /lib /lib32 /lib64 /sbin /usr )
help() { cat <<'EOF'
usage:
./prisonparty.sh <ROOTDIR> <UID> <GID> [VOLDIR [VOLDIR...]] -- python3 copyparty-sfx.py [...]"
./prisonparty.sh <ROOTDIR> <UID> <GID> [VOLDIR [VOLDIR...]] -- python3 copyparty-sfx.py [...]
example:
./prisonparty.sh /var/lib/copyparty-jail 1000 1000 /mnt/nas/music -- python3 copyparty-sfx.py -v /mnt/nas/music::rwmd"
./prisonparty.sh /var/lib/copyparty-jail 1000 1000 /mnt/nas/music -- python3 copyparty-sfx.py -v /mnt/nas/music::rwmd
example for running straight from source (instead of using an sfx):
PYTHONPATH=$PWD ./prisonparty.sh /var/lib/copyparty-jail 1000 1000 /mnt/nas/music -- python3 -um copyparty -v /mnt/nas/music::rwmd"
PYTHONPATH=$PWD ./prisonparty.sh /var/lib/copyparty-jail 1000 1000 /mnt/nas/music -- python3 -um copyparty -v /mnt/nas/music::rwmd
note that if you have python modules installed as --user (such as bpm/key detectors),
you should add /home/foo/.local as a VOLDIR


@@ -2,30 +2,141 @@
// assumes all files dropped into the uploader have a youtube-id somewhere in the filename,
// locates the youtube-ids and passes them to an API which returns a list of IDs which should be uploaded
//
// also tries to find the youtube-id in the embedded metadata
//
// assumes copyparty is behind nginx as /ytq is a standalone service which must be rproxied in place
function up2k_namefilter(good_files, nil_files, bad_files, hooks) {
var filenames = [],
file_lists = [good_files, nil_files, bad_files];
for (var lst of file_lists)
for (var ent of lst)
filenames.push(ent[1]);
var yt_ids = new Set();
for (var lst of file_lists)
for (var ent of lst) {
var m, name = ent[1];
while (true) {
// some ytdl fork did %(title)-%(id).%(ext) ...
m = /(?:^|[^\w])([\w-]{11})(?:$|[^\w-])/.exec(name);
if (!m)
break;
yt_ids.add(m[1]);
name = name.replace(m[1], '');
var passthru = up2k.uc.fsearch;
if (passthru)
return hooks[0](good_files, nil_files, bad_files, hooks.slice(1));
a_up2k_namefilter(good_files, nil_files, bad_files, hooks).then(() => { });
}
function bstrpos(buf, ptn) {
var ofs = 0,
ch0 = ptn[0],
sz = buf.byteLength;
while (true) {
ofs = buf.indexOf(ch0, ofs);
if (ofs < 0 || ofs >= sz)
return -1;
for (var a = 1; a < ptn.length; a++)
if (buf[ofs + a] !== ptn[a])
break;
if (a === ptn.length)
return ofs;
++ofs;
}
}
async function a_up2k_namefilter(good_files, nil_files, bad_files, hooks) {
var t0 = Date.now(),
yt_ids = new Set(),
textdec = new TextDecoder('latin1'),
md_ptn = new TextEncoder().encode('youtube.com/watch?v='),
file_ids = [], // all IDs found for each good_files
mofs = 0,
mnchk = 0,
mfile = '';
for (var a = 0; a < good_files.length; a++) {
var [fobj, name] = good_files[a],
sz = fobj.size,
ids = [],
id_ok = false,
m;
// all IDs found in this file
file_ids.push(ids);
// look for ID in filename; reduce the
// metadata-scan intensity if the id looks safe
m = /[\[(-]([\w-]{11})[\])]?\.(?:mp4|webm|mkv)$/i.exec(name);
id_ok = !!m;
while (true) {
// fuzzy catch-all;
// some ytdl fork did %(title)-%(id).%(ext) ...
m = /(?:^|[^\w])([\w-]{11})(?:$|[^\w-])/.exec(name);
if (!m)
break;
name = name.replace(m[1], '');
yt_ids.add(m[1]);
ids.push(m[1]);
}
// look for IDs in video metadata,
if (/\.(mp4|webm|mkv)$/i.exec(name)) {
toast.show('inf r', 0, `analyzing file ${a + 1} / ${good_files.length} :\n${name}\n\nhave analysed ${++mnchk} files in ${(Date.now() - t0) / 1000} seconds, ${humantime((good_files.length - (a + 1)) * (((Date.now() - t0) / 1000) / mnchk))} remaining,\n\nbiggest offset so far is ${mofs}, in this file:\n\n${mfile}`);
// check first and last 128 MiB;
// pWxOroN5WCo.mkv @ 6edb98 (6.92M)
// Nf-nN1wF5Xo.mp4 @ 4a98034 (74.6M)
var chunksz = 1024 * 1024 * 2, // byte
aspan = id_ok ? 128 : 512; // MiB
aspan = parseInt(Math.min(sz / 2, aspan * 1024 * 1024) / chunksz) * chunksz;
for (var side = 0; side < 2; side++) {
var ofs = side ? Math.max(0, sz - aspan) : 0,
nchunks = aspan / chunksz;
for (var chunk = 0; chunk < nchunks; chunk++) {
var bchunk = await fobj.slice(ofs, ofs + chunksz + 16).arrayBuffer(),
uchunk = new Uint8Array(bchunk, 0, bchunk.byteLength),
bofs = bstrpos(uchunk, md_ptn),
absofs = Math.min(ofs + bofs, (sz - ofs) + bofs),
txt = bofs < 0 ? '' : textdec.decode(uchunk.subarray(bofs)),
m;
//console.log(`side ${ side }, chunk ${ chunk }, ofs ${ ofs }, bchunk ${ bchunk.byteLength }, txt ${ txt.length }`);
while (true) {
// mkv/webm have [a-z] immediately after url
m = /(youtube\.com\/watch\?v=[\w-]{11})/.exec(txt);
if (!m)
break;
txt = txt.replace(m[1], '');
m = m[1].slice(-11);
console.log(`found ${m} @${bofs}, ${name} `);
yt_ids.add(m);
if (!has(ids, m))
ids.push(m);
// bail after next iteration
chunk = nchunks - 1;
side = 9;
if (mofs < absofs) {
mofs = absofs;
mfile = name;
}
}
ofs += chunksz;
if (ofs >= sz)
break;
}
}
}
}
if (false) {
var msg = `finished analysing ${mnchk} files in ${(Date.now() - t0) / 1000} seconds,\n\nbiggest offset was ${mofs} in this file:\n\n${mfile}`,
mfun = function () { toast.ok(0, msg); };
mfun();
setTimeout(mfun, 200);
return hooks[0]([], [], [], hooks.slice(1));
}
toast.inf(5, `running query for ${yt_ids.size} videos...`);
@@ -34,36 +145,65 @@ function up2k_namefilter(good_files, nil_files, bad_files, hooks) {
xhr.setRequestHeader('Content-Type', 'text/plain');
xhr.onload = xhr.onerror = function () {
if (this.status != 200)
return toast.err(0, `sorry, database query failed ;_;\n\nplease let us know so we can look at it, thx!!\n\nerror ${this.status}: ${(this.response && this.response.err) || this.responseText}`);
return toast.err(0, `sorry, database query failed; _; \n\nplease let us know so we can look at it, thx!!\n\nerror ${this.status}: ${(this.response && this.response.err) || this.responseText} `);
var new_lists = [],
ptn = new RegExp(this.responseText.trim().split('\n').join('|') || '\n'),
nothing_to_do = true,
n_skip = 0;
for (var lst of file_lists) {
var keep = [];
new_lists.push(keep);
for (var ent of lst)
if (ptn.exec(ent[1]))
keep.push(ent);
else
n_skip++;
if (keep.length)
nothing_to_do = false;
}
if (nothing_to_do)
return modal.alert('Good news -- turns out we already have all those videos.\n\nBut thank you for checking in!');
else if (n_skip)
toast.inf(0, `skipped ${n_skip} files which already exist on the server`);
[good_files, nil_files, bad_files] = new_lists;
hooks[0](good_files, nil_files, bad_files, hooks.slice(1));
process_id_list(this.responseText);
};
xhr.send(Array.from(yt_ids).join('\n'));
setTimeout(function () { process_id_list('Nf-nN1wF5Xo\n'); }, 500);
function process_id_list(txt) {
var wanted_ids = new Set(txt.trim().split('\n')),
wanted_names = new Set(), // basenames with a wanted ID
wanted_files = new Set(); // filedrops
for (var a = 0; a < good_files.length; a++) {
var name = good_files[a][1];
for (var b = 0; b < file_ids[a].length; b++)
if (wanted_ids.has(file_ids[a][b])) {
wanted_files.add(good_files[a]);
var m = /(.*)\.(mp4|webm|mkv)$/i.exec(name);
if (m)
wanted_names.add(m[1]);
break;
}
}
// add all files with the same basename as each explicitly wanted file
// (infojson/chatlog/etc when ID was discovered from metadata)
for (var a = 0; a < good_files.length; a++) {
var name = good_files[a][1];
for (var b = 0; b < 3; b++) {
name = name.replace(/\.[^\.]+$/, '');
if (wanted_names.has(name)) {
wanted_files.add(good_files[a]);
break;
}
}
}
function upload_filtered() {
if (!wanted_files.size)
return modal.alert('Good news -- turns out we already have all those.\n\nBut thank you for checking in!');
hooks[0](Array.from(wanted_files), nil_files, bad_files, hooks.slice(1));
}
function upload_all() {
hooks[0](good_files, nil_files, bad_files, hooks.slice(1));
}
var n_skip = good_files.length - wanted_files.size,
msg = `you added ${good_files.length} files; ${n_skip} of them were skipped --\neither because we already have them,\nor because there is no youtube-ID in your filename.\n\n<code>OK</code> / <code>Enter</code> = continue uploading just the ${wanted_files.size} files we definitely need\n\n<code>Cancel</code> / <code>ESC</code> = override the filter; upload ALL the files you added`;
if (!n_skip)
upload_filtered();
else
modal.confirm(msg, upload_filtered, upload_all);
};
}
up2k_hooks.push(function () {


@@ -382,6 +382,7 @@ def run_argparse(argv: list[str], formatter: Any, retry: bool) -> argparse.Names
\033[36mmaxn=250,600\033[35m max 250 uploads over 15min
\033[36mmaxb=1g,300\033[35m max 1 GiB over 5min (suffixes: b, k, m, g)
\033[36msz=1k-3m\033[35m allow filesizes between 1 KiB and 3MiB
\033[36mdf=1g\033[35m ensure 1 GiB free disk space
\033[0mupload rotation:
(moves all uploads into the specified folder structure)
@@ -482,6 +483,7 @@ def run_argparse(argv: list[str], formatter: Any, retry: bool) -> argparse.Names
ap2.add_argument("--hardlink", action="store_true", help="prefer hardlinks instead of symlinks when possible (within same filesystem)")
ap2.add_argument("--never-symlink", action="store_true", help="do not fallback to symlinks when a hardlink cannot be made")
ap2.add_argument("--no-dedup", action="store_true", help="disable symlink/hardlink creation; copy file contents instead")
ap2.add_argument("--df", metavar="GiB", type=float, default=0, help="ensure GiB free disk space by rejecting upload requests")
ap2.add_argument("--sparse", metavar="MiB", type=int, default=4, help="windows-only: minimum size of incoming uploads through up2k before they are made into sparse files")
ap2.add_argument("--turbo", metavar="LVL", type=int, default=0, help="configure turbo-mode in up2k client; 0 = off and warn if enabled, 1 = off, 2 = on, 3 = on and disable datecheck")
ap2.add_argument("--u2sort", metavar="TXT", type=u, default="s", help="upload order; s=smallest-first, n=alphabetical, fs=force-s, fn=force-n -- alphabetical is a bit slower on fiber/LAN but makes it easier to eyeball if everything went fine")
@@ -594,6 +596,7 @@ def run_argparse(argv: list[str], formatter: Any, retry: bool) -> argparse.Names
ap2.add_argument("--no-hash", metavar="PTN", type=u, help="regex: disable hashing of matching paths during e2ds folder scans")
ap2.add_argument("--no-idx", metavar="PTN", type=u, help="regex: disable indexing of matching paths during e2ds folder scans")
ap2.add_argument("--re-maxage", metavar="SEC", type=int, default=0, help="disk rescan volume interval, 0=off, can be set per-volume with the 'scan' volflag")
ap2.add_argument("--db-act", metavar="SEC", type=float, default=10, help="defer any scheduled volume reindexing until SEC seconds after last db write (uploads, renames, ...)")
ap2.add_argument("--srch-time", metavar="SEC", type=int, default=30, help="search deadline -- terminate searches running for more than SEC seconds")
ap2.add_argument("--srch-hits", metavar="N", type=int, default=7999, help="max search results to allow clients to fetch; 125 results will be shown initially")
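tying the new arguments together, a hypothetical invocation (not from the diff): keep 4 GiB free on every volume unless overridden, and defer rescans until the db has been idle for 30 sec -- per the authsrv hunk further down, the global `--df 4` is translated into a `df=4g` volflag on volumes that don't set their own:
```
python3 -m copyparty --df 4 --db-act 30 -v '/srv/pub::rw' -v '/srv/big::rw:c,df=16g'
```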


@@ -1,8 +1,8 @@
# coding: utf-8
VERSION = (1, 3, 6)
VERSION = (1, 3, 8)
CODENAME = "god dag"
BUILD_DT = (2022, 7, 16)
BUILD_DT = (2022, 7, 27)
S_VERSION = ".".join(map(str, VERSION))
S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)


@@ -20,6 +20,8 @@ from .util import (
Pebkac,
absreal,
fsenc,
get_df,
humansize,
relchk,
statdir,
uncyg,
@@ -72,15 +74,23 @@ class AXS(object):
class Lim(object):
def __init__(self) -> None:
def __init__(self, log_func: Optional["RootLogger"]) -> None:
self.log_func = log_func
self.reg: Optional[dict[str, dict[str, Any]]] = None # up2k registry
self.nups: dict[str, list[float]] = {} # num tracker
self.bups: dict[str, list[tuple[float, int]]] = {} # byte tracker list
self.bupc: dict[str, int] = {} # byte tracker cache
self.nosub = False # disallow subdirectories
self.smin = -1 # filesize min
self.smax = -1 # filesize max
self.dfl = 0 # free disk space limit
self.dft = 0 # last-measured time
self.dfv = 0 # currently free
self.smin = 0 # filesize min
self.smax = 0 # filesize max
self.bwin = 0 # bytes window
self.bmax = 0 # bytes max
@@ -92,18 +102,34 @@ class Lim(object):
self.rotf = "" # rot datefmt
self.rot_re = re.compile("") # rotf check
def log(self, msg: str, c: Union[int, str] = 0) -> None:
if self.log_func:
self.log_func("up-lim", msg, c)
def set_rotf(self, fmt: str) -> None:
self.rotf = fmt
r = re.escape(fmt).replace("%Y", "[0-9]{4}").replace("%j", "[0-9]{3}")
r = re.sub("%[mdHMSWU]", "[0-9]{2}", r)
self.rot_re = re.compile("(^|/)" + r + "$")
def all(self, ip: str, rem: str, sz: float, abspath: str) -> tuple[str, str]:
def all(
self,
ip: str,
rem: str,
sz: int,
abspath: str,
reg: Optional[dict[str, dict[str, Any]]] = None,
) -> tuple[str, str]:
if reg is not None and self.reg is None:
self.reg = reg
self.dft = 0
self.chk_nup(ip)
self.chk_bup(ip)
self.chk_rem(rem)
if sz != -1:
self.chk_sz(sz)
self.chk_df(abspath, sz) # side effects; keep last-ish
ap2, vp2 = self.rot(abspath)
if abspath == ap2:
@@ -111,13 +137,33 @@ class Lim(object):
return ap2, ("{}/{}".format(rem, vp2) if rem else vp2)
def chk_sz(self, sz: float) -> None:
if self.smin != -1 and sz < self.smin:
def chk_sz(self, sz: int) -> None:
if sz < self.smin:
raise Pebkac(400, "file too small")
if self.smax != -1 and sz > self.smax:
if self.smax and sz > self.smax:
raise Pebkac(400, "file too big")
def chk_df(self, abspath: str, sz: int, already_written: bool = False) -> None:
if not self.dfl:
return
if self.dft < time.time():
self.dft = int(time.time()) + 300
self.dfv = get_df(abspath)[0] or 0
for j in list(self.reg.values()) if self.reg else []:
self.dfv -= int(j["size"] / len(j["hash"]) * len(j["need"]))
if already_written:
sz = 0
if self.dfv - sz < self.dfl:
self.dft = min(self.dft, int(time.time()) + 10)
t = "server HDD is full; {} free, need {}"
raise Pebkac(500, t.format(humansize(self.dfv - self.dfl), humansize(sz)))
self.dfv -= int(sz)
def chk_rem(self, rem: str) -> None:
if self.nosub and rem:
raise Pebkac(500, "no subdirectories allowed")
@@ -226,7 +272,7 @@ class VFS(object):
def __init__(
self,
log: Optional[RootLogger],
log: Optional["RootLogger"],
realpath: str,
vpath: str,
axs: AXS,
@@ -569,7 +615,7 @@ class AuthSrv(object):
def __init__(
self,
args: argparse.Namespace,
log_func: Optional[RootLogger],
log_func: Optional["RootLogger"],
warn_anonwrite: bool = True,
) -> None:
self.args = args
@@ -917,13 +963,20 @@ class AuthSrv(object):
vfs.histtab = {zv.realpath: zv.histpath for zv in vfs.all_vols.values()}
for vol in vfs.all_vols.values():
lim = Lim()
lim = Lim(self.log_func)
use = False
if vol.flags.get("nosub"):
use = True
lim.nosub = True
zs = vol.flags.get("df") or (
"{}g".format(self.args.df) if self.args.df else ""
)
if zs:
use = True
lim.dfl = unhumanize(zs)
zs = vol.flags.get("sz")
if zs:
use = True
@@ -1107,6 +1160,7 @@ class AuthSrv(object):
vfs.bubble_flags()
e2vs = []
t = "volumes and permissions:\n"
for zv in vfs.all_vols.values():
if not self.warn_anonwrite:
@@ -1124,8 +1178,16 @@ class AuthSrv(object):
u = ", ".join("\033[35meverybody\033[0m" if x == "*" else x for x in u)
u = u if u else "\033[36m--none--\033[0m"
t += "\n| {}: {}".format(txt, u)
if "e2v" in zv.flags:
e2vs.append(zv.vpath or "/")
t += "\n"
if e2vs:
t += "\n\033[33me2v enabled for the following volumes;\nuploads will be blocked until scan has finished:\n \033[0m"
t += " ".join(e2vs) + "\n"
if self.warn_anonwrite and not self.args.no_voldump:
self.log(t)
@@ -1133,7 +1195,7 @@ class AuthSrv(object):
zv, _ = vfs.get("/", "*", False, True)
if self.warn_anonwrite and os.getcwd() == zv.realpath:
self.warn_anonwrite = False
t = "anyone can read/write the current directory: {}\n"
t = "anyone can write to the current directory: {}\n"
self.log(t.format(zv.realpath), c=1)
except Pebkac:
self.warn_anonwrite = True


@@ -42,7 +42,7 @@ class BrokerCli(object):
"""
def __init__(self) -> None:
self.log: RootLogger = None
self.log: "RootLogger" = None
self.args: argparse.Namespace = None
self.asrv: AuthSrv = None
self.httpsrv: "HttpSrv" = None


@@ -1,7 +1,11 @@
# coding: utf-8
from __future__ import print_function, unicode_literals
import ctypes
try:
import ctypes
except:
pass
import os
import re
import time
@@ -19,7 +23,7 @@ except:
class Fstab(object):
def __init__(self, log: RootLogger):
def __init__(self, log: "RootLogger"):
self.log_func = log
self.trusted = False
@@ -136,7 +140,7 @@ class Fstab(object):
def get_w32(self, path: str) -> str:
# list mountpoints: fsutil fsinfo drives
assert ctypes
from ctypes.wintypes import BOOL, DWORD, LPCWSTR, LPDWORD, LPWSTR, MAX_PATH
def echk(rc: int, fun: Any, args: Any) -> None:


@@ -24,12 +24,7 @@ try:
except:
pass
try:
import ctypes
except:
pass
from .__init__ import ANYWIN, PY2, TYPE_CHECKING, WINDOWS, E, unicode
from .__init__ import ANYWIN, PY2, TYPE_CHECKING, E, unicode
from .authsrv import VFS # typechk
from .bos import bos
from .star import StreamTar
@@ -48,6 +43,7 @@ from .util import (
fsenc,
gen_filekey,
gencookie,
get_df,
get_spd,
guess_mime,
gzip_orig_sz,
@@ -380,13 +376,21 @@ class HttpCli(object):
if not self._check_nonfatal(pex, post):
self.keepalive = False
msg = str(ex) if pex == ex else min_ex()
self.log("{}\033[0m, {}".format(msg, self.vpath), 3)
em = str(ex)
msg = em if pex == ex else min_ex()
self.log(
"{}\033[0m, {}".format(msg, self.vpath),
6 if em.startswith("client d/c ") else 3,
)
msg = "{}\r\nURL: {}\r\n".format(str(ex), self.vpath)
msg = "{}\r\nURL: {}\r\n".format(em, self.vpath)
if self.hint:
msg += "hint: {}\r\n".format(self.hint)
if "database is locked" in em:
self.conn.hsrv.broker.say("log_stacks")
msg += "hint: important info in the server log\r\n"
msg = "<pre>" + html_escape(msg)
self.reply(msg.encode("utf-8", "replace"), status=pex.code, volsan=True)
return self.keepalive
@@ -1289,7 +1293,12 @@ class HttpCli(object):
lim.chk_nup(self.ip)
try:
max_sz = lim.smax if lim else 0
max_sz = 0
if lim:
v1 = lim.smax
v2 = lim.dfv - lim.dfl
max_sz = min(v1, v2) if v1 and v2 else v1 or v2
with ren_open(tnam, "wb", 512 * 1024, **open_args) as zfw:
f, tnam = zfw["orz"]
tabspath = os.path.join(fdir, tnam)
@@ -1304,6 +1313,7 @@ class HttpCli(object):
lim.nup(self.ip)
lim.bup(self.ip, sz)
try:
lim.chk_df(tabspath, sz, True)
lim.chk_sz(sz)
lim.chk_bup(self.ip)
lim.chk_nup(self.ip)
@@ -1917,7 +1927,13 @@ class HttpCli(object):
vstate = {("/" + k).rstrip("/") + "/": v for k, v in vs["volstate"].items()}
else:
vstate = {}
vs = {"scanning": None, "hashq": None, "tagq": None, "mtpq": None}
vs = {
"scanning": None,
"hashq": None,
"tagq": None,
"mtpq": None,
"dbwt": None,
}
if self.uparam.get("ls") in ["v", "t", "txt"]:
if self.uname == "*":
@@ -1927,7 +1943,7 @@ class HttpCli(object):
if vstate:
txt += "\nstatus:"
for k in ["scanning", "hashq", "tagq", "mtpq"]:
for k in ["scanning", "hashq", "tagq", "mtpq", "dbwt"]:
txt += " {}({})".format(k, vs[k])
if rvol:
@@ -1956,6 +1972,7 @@ class HttpCli(object):
hashq=vs["hashq"],
tagq=vs["tagq"],
mtpq=vs["mtpq"],
dbwt=vs["dbwt"],
url_suf=suf,
k304=self.k304(),
)
@@ -2317,26 +2334,14 @@ class HttpCli(object):
except:
self.log("#wow #whoa")
try:
# some fuses misbehave
if not self.args.nid:
if WINDOWS:
try:
bfree = ctypes.c_ulonglong(0)
ctypes.windll.kernel32.GetDiskFreeSpaceExW( # type: ignore
ctypes.c_wchar_p(abspath), None, None, ctypes.pointer(bfree)
)
srv_info.append(humansize(bfree.value) + " free")
except:
pass
else:
sv = os.statvfs(fsenc(abspath))
free = humansize(sv.f_frsize * sv.f_bfree, True)
total = humansize(sv.f_frsize * sv.f_blocks, True)
srv_info.append("{} free of {}".format(free, total))
except:
pass
if not self.args.nid:
free, total = get_df(abspath)
if total is not None:
h1 = humansize(free or 0)
h2 = humansize(total)
srv_info.append("{} free of {}".format(h1, h2))
elif free is not None:
srv_info.append(humansize(free, True) + " free")
srv_infot = "</span> // <span>".join(srv_info)


@@ -62,7 +62,7 @@ class HttpConn(object):
self.nreq: int = 0 # mypy404
self.nbyte: int = 0 # mypy404
self.u2idx: Optional[U2idx] = None
self.log_func: Util.RootLogger = hsrv.log # mypy404
self.log_func: "Util.RootLogger" = hsrv.log # mypy404
self.log_src: str = "httpconn" # mypy404
self.lf_url: Optional[Pattern[str]] = (
re.compile(self.args.lf_url) if self.args.lf_url else None


@@ -261,8 +261,11 @@ class HttpSrv(object):
)
self.thr_client(sck, addr)
me.name = self.name + "-poolw"
except:
self.log(self.name, "thr_client: " + min_ex(), 3)
except Exception as ex:
if str(ex).startswith("client d/c "):
self.log(self.name, "thr_client: " + str(ex), 6)
else:
self.log(self.name, "thr_client: " + min_ex(), 3)
def shutdown(self) -> None:
self.stopping = True


@@ -248,7 +248,7 @@ def parse_ffprobe(txt: str) -> tuple[dict[str, tuple[int, Any]], dict[str, list[
class MTag(object):
def __init__(self, log_func: RootLogger, args: argparse.Namespace) -> None:
def __init__(self, log_func: "RootLogger", args: argparse.Namespace) -> None:
self.log_func = log_func
self.args = args
self.usable = True
@@ -437,6 +437,8 @@ class MTag(object):
return r1
def get_mutagen(self, abspath: str) -> dict[str, Union[str, float]]:
ret: dict[str, tuple[int, Any]] = {}
if not bos.path.isfile(abspath):
return {}
@@ -450,7 +452,10 @@ class MTag(object):
return self.get_ffprobe(abspath) if self.can_ffprobe else {}
sz = bos.path.getsize(abspath)
ret = {".q": (0, int((sz / md.info.length) / 128))}
try:
ret[".q"] = (0, int((sz / md.info.length) / 128))
except:
pass
for attr, k, norm in [
["codec", "ac", unicode],


@@ -44,7 +44,7 @@ class StreamTar(StreamArc):
def __init__(
self,
log: NamedLogger,
log: "NamedLogger",
fgen: Generator[dict[str, Any], None, None],
**kwargs: Any
):
@@ -65,17 +65,19 @@ class StreamTar(StreamArc):
w.start()
def gen(self) -> Generator[Optional[bytes], None, None]:
while True:
buf = self.qfile.q.get()
if not buf:
break
try:
while True:
buf = self.qfile.q.get()
if not buf:
break
self.co += len(buf)
yield buf
self.co += len(buf)
yield buf
yield None
if self.errf:
bos.unlink(self.errf["ap"])
yield None
finally:
if self.errf:
bos.unlink(self.errf["ap"])
def ser(self, f: dict[str, Any]) -> None:
name = f["vp"]


@@ -17,7 +17,7 @@ except:
class StreamArc(object):
def __init__(
self,
log: NamedLogger,
log: "NamedLogger",
fgen: Generator[dict[str, Any], None, None],
**kwargs: Any
):


@@ -2,7 +2,9 @@
from __future__ import print_function, unicode_literals
import argparse
import base64
import calendar
import gzip
import os
import shlex
import signal
@@ -27,7 +29,7 @@ from .mtag import HAVE_FFMPEG, HAVE_FFPROBE
from .tcpsrv import TcpSrv
from .th_srv import HAVE_PIL, HAVE_VIPS, HAVE_WEBP, ThumbSrv
from .up2k import Up2k
from .util import ansi_re, min_ex, mp, start_log_thrs, start_stackmon
from .util import ansi_re, min_ex, mp, start_log_thrs, start_stackmon, alltrace
class SvcHub(object):
@@ -56,6 +58,7 @@ class SvcHub(object):
self.log_mutex = threading.Lock()
self.next_day = 0
self.tstack = 0.0
if args.sss or args.s >= 3:
args.ss = True
@@ -500,3 +503,15 @@ class SvcHub(object):
sck.sendall(b"READY=1")
except:
self.log("sd_notify", min_ex())
def log_stacks(self) -> None:
td = time.time() - self.tstack
if td < 300:
self.log("stacks", "cooldown {}".format(td))
return
self.tstack = time.time()
zb = alltrace().encode("utf-8", "replace")
zb = gzip.compress(zb)
zs = base64.b64encode(zb).decode("ascii")
self.log("stacks", zs)


@@ -218,7 +218,7 @@ def gen_ecdr64_loc(ecdr64_pos: int) -> bytes:
class StreamZip(StreamArc):
def __init__(
self,
log: NamedLogger,
log: "NamedLogger",
fgen: Generator[dict[str, Any], None, None],
utf8: bool = False,
pre_crc: bool = False,
@@ -272,41 +272,44 @@ class StreamZip(StreamArc):
def gen(self) -> Generator[bytes, None, None]:
errors = []
for f in self.fgen:
if "err" in f:
errors.append((f["vp"], f["err"]))
continue
try:
for f in self.fgen:
if "err" in f:
errors.append((f["vp"], f["err"]))
continue
try:
for x in self.ser(f):
try:
for x in self.ser(f):
yield x
except GeneratorExit:
raise
except:
ex = min_ex(5, True).replace("\n", "\n-- ")
errors.append((f["vp"], ex))
if errors:
errf, txt = errdesc(errors)
self.log("\n".join(([repr(errf)] + txt[1:])))
for x in self.ser(errf):
yield x
except:
ex = min_ex(5, True).replace("\n", "\n-- ")
errors.append((f["vp"], ex))
if errors:
errf, txt = errdesc(errors)
self.log("\n".join(([repr(errf)] + txt[1:])))
for x in self.ser(errf):
yield x
cdir_pos = self.pos
for name, sz, ts, crc, h_pos in self.items:
buf = gen_hdr(h_pos, name, sz, ts, self.utf8, crc, self.pre_crc)
yield self._ct(buf)
cdir_end = self.pos
cdir_pos = self.pos
for name, sz, ts, crc, h_pos in self.items:
buf = gen_hdr(h_pos, name, sz, ts, self.utf8, crc, self.pre_crc)
yield self._ct(buf)
cdir_end = self.pos
_, need_64 = gen_ecdr(self.items, cdir_pos, cdir_end)
if need_64:
ecdir64_pos = self.pos
buf = gen_ecdr64(self.items, cdir_pos, cdir_end)
yield self._ct(buf)
_, need_64 = gen_ecdr(self.items, cdir_pos, cdir_end)
if need_64:
ecdir64_pos = self.pos
buf = gen_ecdr64(self.items, cdir_pos, cdir_end)
yield self._ct(buf)
buf = gen_ecdr64_loc(ecdir64_pos)
yield self._ct(buf)
buf = gen_ecdr64_loc(ecdir64_pos)
yield self._ct(buf)
ecdr, _ = gen_ecdr(self.items, cdir_pos, cdir_end)
yield self._ct(ecdr)
if errors:
bos.unlink(errf["ap"])
ecdr, _ = gen_ecdr(self.items, cdir_pos, cdir_end)
yield self._ct(ecdr)
finally:
if errors:
bos.unlink(errf["ap"])


@@ -12,6 +12,7 @@ import shutil
import signal
import stat
import subprocess as sp
import tempfile
import threading
import time
import traceback
@@ -31,6 +32,8 @@ from .util import (
ProgressPrinter,
absreal,
atomic_move,
djoin,
db_ex_chk,
fsenc,
min_ex,
quotep,
@@ -69,6 +72,8 @@ class Dbw(object):
class Mpqe(object):
"""pending files to tag-scan"""
def __init__(
self,
mtp: dict[str, MParser],
@@ -98,9 +103,11 @@ class Up2k(object):
self.gid = 0
self.stop = False
self.mutex = threading.Lock()
self.blocked: Optional[str] = None
self.pp: Optional[ProgressPrinter] = None
self.rescan_cond = threading.Condition()
self.need_rescan: set[str] = set()
self.db_act = 0.0
self.registry: dict[str, dict[str, dict[str, Any]]] = {}
self.flags: dict[str, dict[str, Any]] = {}
@@ -126,6 +133,8 @@ class Up2k(object):
self.mem_cur = None
self.sqlite_ver = None
self.no_expr_idx = False
self.timeout = int(max(self.args.srch_time, 5) * 1.2) + 1
self.spools: set[tempfile.SpooledTemporaryFile[bytes]] = set()
if HAVE_SQLITE3:
# mojibake detector
self.mem_cur = self._orz(":memory:")
@@ -193,6 +202,15 @@ class Up2k(object):
def log(self, msg: str, c: Union[int, str] = 0) -> None:
self.log_func("up2k", msg + "\033[K", c)
def _block(self, why: str) -> None:
self.blocked = why
self.log("uploads temporarily blocked due to " + why, 3)
def _unblock(self) -> None:
if self.blocked is not None:
self.blocked = None
self.log("uploads are now possible", 2)
def get_state(self) -> str:
mtpq: Union[int, str] = 0
q = "select count(w) from mt where k = 't:mtp'"
@@ -213,6 +231,9 @@ class Up2k(object):
"hashq": self.n_hashq,
"tagq": self.n_tagq,
"mtpq": mtpq,
"dbwt": "{:.2f}".format(
min(1000 * 24 * 60 * 60 - 1, time.time() - self.db_act)
),
}
return json.dumps(ret, indent=4)
@@ -245,10 +266,15 @@ class Up2k(object):
continue
if self.pp:
cooldown = now + 5
cooldown = now + 1
continue
timeout = now + 9001
if self.args.no_lifetime:
timeout = now + 9001
else:
# important; not deferred by db_act
timeout = self._check_lifetimes()
with self.mutex:
for vp, vol in sorted(self.asrv.vfs.all_vols.items()):
maxage = vol.flags.get("scan")
@@ -264,6 +290,20 @@ class Up2k(object):
timeout = min(timeout, deadline)
if self.db_act > now - self.args.db_act:
# recent db activity; defer volume rescan
act_timeout = self.db_act + self.args.db_act
if self.need_rescan:
timeout = now
if timeout < act_timeout:
timeout = act_timeout
t = "volume rescan deferred {:.1f} sec, due to database activity"
self.log(t.format(timeout - now))
continue
with self.mutex:
vols = list(sorted(self.need_rescan))
self.need_rescan.clear()
@@ -279,9 +319,10 @@ class Up2k(object):
for v in vols:
volage[v] = now
if self.args.no_lifetime:
continue
def _check_lifetimes(self) -> float:
now = time.time()
timeout = now + 9001
if now: # diff-golf
for vp, vol in sorted(self.asrv.vfs.all_vols.items()):
lifetime = vol.flags.get("lifetime")
if not lifetime:
@@ -328,6 +369,8 @@ class Up2k(object):
if hits:
timeout = min(timeout, now + lifetime - (now - hits[0]))
return timeout
def _vis_job_progress(self, job: dict[str, Any]) -> str:
perc = 100 - (len(job["need"]) * 100.0 / len(job["hash"]))
path = os.path.join(job["ptop"], job["prel"], job["name"])
@@ -416,6 +459,9 @@ class Up2k(object):
self.mtag = None
# e2ds(a) volumes first
if next((zv for zv in vols if "e2ds" in zv.flags), None):
self._block("indexing")
for vol in vols:
if self.stop:
break
@@ -444,6 +490,8 @@ class Up2k(object):
self.volstate[vol.vpath] = t
self._unblock()
# file contents verification
for vol in vols:
if self.stop:
@@ -626,7 +674,7 @@ class Up2k(object):
with self.mutex:
reg = self.register_vpath(top, vol.flags)
assert reg and self.pp
cur, _ = reg
cur, db_path = reg
db = Dbw(cur, 0, time.time())
self.pp.n = next(db.c.execute("select count(w) from up"))[0]
@@ -645,8 +693,9 @@ class Up2k(object):
n_add = n_rm = 0
try:
n_add = self._build_dir(db, top, set(excl), top, rtop, rei, reh, [])
n_rm = self._drop_lost(db.c, top)
except:
n_rm = self._drop_lost(db.c, top, excl)
except Exception as ex:
db_ex_chk(self.log, ex, db_path)
t = "failed to index volume [{}]:\n{}"
self.log(t.format(top, min_ex()), c=1)
@@ -677,6 +726,7 @@ class Up2k(object):
assert self.pp and self.mem_cur
self.pp.msg = "a{} {}".format(self.pp.n, cdir)
ret = 0
unreg: list[str] = []
seen_files = {} # != inames; files-only for dropcheck
g = statdir(self.log_func, not self.args.no_scandir, False, cdir)
gl = sorted(g)
@@ -686,7 +736,12 @@ class Up2k(object):
return -1
abspath = os.path.join(cdir, iname)
rp = abspath[len(top) :].lstrip("/")
if WINDOWS:
rp = rp.replace("\\", "/").strip("/")
if rei and rei.search(abspath):
unreg.append(rp)
continue
nohash = reh.search(abspath) if reh else False
@@ -695,6 +750,7 @@ class Up2k(object):
if stat.S_ISDIR(inf.st_mode):
rap = absreal(abspath)
if abspath in excl or rap in excl:
unreg.append(rp)
continue
if iname == ".th" and bos.path.isdir(os.path.join(abspath, "top")):
# abandoned or foreign, skip
@@ -710,10 +766,6 @@ class Up2k(object):
else:
# self.log("file: {}".format(abspath))
seen_files[iname] = 1
rp = abspath[len(top) :].lstrip("/")
if WINDOWS:
rp = rp.replace("\\", "/").strip("/")
if rp.endswith(".PARTIAL") and time.time() - lmod < 60:
# rescan during upload
continue
@@ -785,6 +837,25 @@ class Up2k(object):
if self.stop:
return -1
# drop shadowed folders
for rd in unreg:
n = 0
q = "select count(w) from up where (rd = ? or rd like ?||'%') and at == 0"
for erd in [rd, "//" + w8b64enc(rd)]:
try:
n = db.c.execute(q, (erd, erd + "/")).fetchone()[0]
break
except:
pass
if n:
t = "forgetting {} shadowed autoindexed files in [{}] > [{}]"
self.log(t.format(n, top, rd))
q = "delete from up where (rd = ? or rd like ?||'%') and at == 0"
db.c.execute(q, (erd, erd + "/"))
ret += n
# drop missing files
rd = cdir[len(top) + 1 :].strip("/")
if WINDOWS:
@@ -807,12 +878,13 @@ class Up2k(object):
return ret
def _drop_lost(self, cur: "sqlite3.Cursor", top: str) -> int:
def _drop_lost(self, cur: "sqlite3.Cursor", top: str, excl: list[str]) -> int:
rm = []
n_rm = 0
nchecked = 0
assert self.pp
# `_build_dir` did all the files, now do dirs
# `_build_dir` did all unshadowed files; first do dirs:
ndirs = next(cur.execute("select count(distinct rd) from up"))[0]
c = cur.execute("select distinct rd from up order by rd desc")
for (drd,) in c:
@@ -832,23 +904,55 @@ class Up2k(object):
rm.append(drd)
if not rm:
return 0
if rm:
q = "select count(w) from up where rd = ?"
for rd in rm:
n_rm += next(cur.execute(q, (rd,)))[0]
q = "select count(w) from up where rd = ?"
for rd in rm:
n_rm += next(cur.execute(q, (rd,)))[0]
self.log("forgetting {} deleted dirs, {} files".format(len(rm), n_rm))
for rd in rm:
cur.execute("delete from up where rd = ?", (rd,))
self.log("forgetting {} deleted dirs, {} files".format(len(rm), n_rm))
for rd in rm:
cur.execute("delete from up where rd = ?", (rd,))
# then shadowed deleted files
n_rm2 = 0
c2 = cur.connection.cursor()
excl = [x[len(top) + 1 :] for x in excl if x.startswith(top + "/")]
q = "select rd, fn from up where (rd = ? or rd like ?||'%') order by rd"
for rd in excl:
for erd in [rd, "//" + w8b64enc(rd)]:
try:
c = cur.execute(q, (erd, erd + "/"))
break
except:
pass
return n_rm
crd = "///"
cdc: set[str] = set()
for drd, dfn in c:
rd, fn = s3dec(drd, dfn)
if crd != rd:
crd = rd
try:
cdc = set(os.listdir(os.path.join(top, rd)))
except:
cdc.clear()
if fn not in cdc:
q = "delete from up where rd = ? and fn = ?"
c2.execute(q, (drd, dfn))
n_rm2 += 1
if n_rm2:
self.log("forgetting {} shadowed deleted files".format(n_rm2))
c2.close()
return n_rm + n_rm2
def _verify_integrity(self, vol: VFS) -> int:
"""expensive; blocks database access until finished"""
ptop = vol.realpath
assert self.pp and self.mtag
assert self.pp
cur = self.cur[ptop]
rei = vol.flags.get("noidx")
@@ -867,7 +971,7 @@ class Up2k(object):
qexa.append("up.rd != ? and not up.rd like ?||'%'")
pexa.extend([vpath, vpath])
pex = tuple(pexa)
pex: tuple[Any, ...] = tuple(pexa)
qex = " and ".join(qexa)
if qex:
qex = " where " + qex
@@ -882,11 +986,22 @@ class Up2k(object):
b_left += sz # sum() can overflow according to docs
n_left += 1
q = "select w, mt, sz, rd, fn from up" + qex
for w, mt, sz, drd, dfn in cur.execute(q, pex):
tf, _ = self._spool_warks(cur, "select w, rd, fn from up" + qex, pex, 0)
with gzip.GzipFile(mode="rb", fileobj=tf) as gf:
for zb in gf:
if self.stop:
return -1
w, drd, dfn = zb[:-1].decode("utf-8").split("\x00")
with self.mutex:
q = "select mt, sz from up where w = ? and rd = ? and fn = ?"
try:
mt, sz = cur.execute(q, (w, drd, dfn)).fetchone()
except:
# file moved/deleted since spooling
continue
n_left -= 1
b_left -= sz
if drd.startswith("//") or dfn.startswith("//"):
@@ -931,18 +1046,21 @@ class Up2k(object):
t = t.format(abspath, w, sz, mt, w2, sz2, mt2)
self.log(t, 1)
if e2vp and rewark:
self.hub.retcode = 1
os.kill(os.getpid(), signal.SIGTERM)
raise Exception("{} files have incorrect hashes".format(len(rewark)))
if e2vp and rewark:
self.hub.retcode = 1
os.kill(os.getpid(), signal.SIGTERM)
raise Exception("{} files have incorrect hashes".format(len(rewark)))
if not e2vu:
return 0
if not e2vu or not rewark:
return 0
with self.mutex:
for rd, fn, w, sz, mt in rewark:
q = "update up set w = ?, sz = ?, mt = ? where rd = ? and fn = ? limit 1"
cur.execute(q, (w, sz, int(mt), rd, fn))
cur.connection.commit()
return len(rewark)
def _build_tags_index(self, vol: VFS) -> tuple[int, int, bool]:
@@ -950,17 +1068,12 @@ class Up2k(object):
with self.mutex:
reg = self.register_vpath(ptop, vol.flags)
assert reg and self.pp and self.mtag
_, db_path = reg
assert reg and self.pp
entags = self.entags[ptop]
flags = self.flags[ptop]
cur = self.cur[ptop]
n_add = 0
n_rm = 0
n_buf = 0
last_write = time.time()
if "e2tsr" in flags:
with self.mutex:
n_rm = cur.execute("select count(w) from mt").fetchone()[0]
@@ -971,93 +1084,160 @@ class Up2k(object):
# integrity: drop tags for tracks that were deleted
if "e2t" in flags:
with self.mutex:
drops = []
n = 0
c2 = cur.connection.cursor()
up_q = "select w from up where substr(w,1,16) = ?"
rm_q = "delete from mt where w = ?"
for (w,) in cur.execute("select w from mt"):
if not c2.execute(up_q, (w,)).fetchone():
drops.append(w[:16])
c2.close()
c2.execute(rm_q, (w[:16],))
n += 1
if drops:
msg = "discarding media tags for {} deleted files"
self.log(msg.format(len(drops)))
n_rm += len(drops)
for w in drops:
cur.execute("delete from mt where w = ?", (w,))
c2.close()
if n:
t = "discarded media tags for {} deleted files"
self.log(t.format(n))
n_rm += n
with self.mutex:
cur.connection.commit()
# bail if a volume flag disables indexing
if "d2t" in flags or "d2d" in flags:
return n_add, n_rm, True
return 0, n_rm, True
# add tags for new files
gcur = cur
with self.mutex:
gcur.connection.commit()
if "e2ts" in flags:
if not self.mtag:
return n_add, n_rm, False
return 0, n_rm, False
mpool: Optional[Queue[Mpqe]] = None
nq = 0
with self.mutex:
tf, nq = self._spool_warks(
cur, "select w from up order by rd, fn", (), 1
)
if self.mtag.prefer_mt and self.args.mtag_mt > 1:
mpool = self._start_mpool()
if not nq:
# self.log("tags ok")
self._unspool(tf)
return 0, n_rm, True
# TODO blocks writes to registry cursor; do chunks instead
conn = sqlite3.connect(db_path, timeout=15)
cur = conn.cursor()
c2 = conn.cursor()
c3 = conn.cursor()
n_left = cur.execute("select count(w) from up").fetchone()[0]
for w, rd, fn in cur.execute("select w, rd, fn from up order by rd, fn"):
if self.stop:
return -1, -1, False
if nq == -1:
return -1, -1, True
n_left -= 1
q = "select w from mt where w = ?"
if c2.execute(q, (w[:16],)).fetchone():
continue
with gzip.GzipFile(mode="rb", fileobj=tf) as gf:
n_add = self._e2ts_q(gf, nq, cur, ptop, entags)
if "mtp" in flags:
q = "insert into mt values (?,'t:mtp','a')"
c2.execute(q, (w[:16],))
if rd.startswith("//") or fn.startswith("//"):
rd, fn = s3dec(rd, fn)
abspath = os.path.join(ptop, rd, fn)
self.pp.msg = "c{} {}".format(n_left, abspath)
if not mpool:
n_tags = self._tag_file(c3, entags, w, abspath)
else:
mpool.put(Mpqe({}, entags, w, abspath, {}))
# not registry cursor; do not self.mutex:
n_tags = len(self._flush_mpool(c3))
n_add += n_tags
n_buf += n_tags
td = time.time() - last_write
if n_buf >= 4096 or td >= 60:
self.log("commit {} new tags".format(n_buf))
cur.connection.commit()
last_write = time.time()
n_buf = 0
if mpool:
self._stop_mpool(mpool)
with self.mutex:
n_add += len(self._flush_mpool(c3))
conn.commit()
c3.close()
c2.close()
cur.close()
conn.close()
self._unspool(tf)
return n_add, n_rm, True
def _e2ts_q(
self,
qf: gzip.GzipFile,
nq: int,
cur: "sqlite3.Cursor",
ptop: str,
entags: set[str],
) -> int:
assert self.pp and self.mtag
flags = self.flags[ptop]
mpool: Optional[Queue[Mpqe]] = None
if self.mtag.prefer_mt and self.args.mtag_mt > 1:
mpool = self._start_mpool()
n_add = 0
n_buf = 0
last_write = time.time()
for bw in qf:
if self.stop:
return -1
w = bw[:-1].decode("ascii")
q = "select rd, fn from up where w = ?"
try:
rd, fn = cur.execute(q, (w,)).fetchone()
except:
# file modified/deleted since spooling
continue
if rd.startswith("//") or fn.startswith("//"):
rd, fn = s3dec(rd, fn)
if "mtp" in flags:
q = "insert into mt values (?,'t:mtp','a')"
with self.mutex:
cur.execute(q, (w[:16],))
abspath = os.path.join(ptop, rd, fn)
self.pp.msg = "c{} {}".format(nq, abspath)
if not mpool:
n_tags = self._tagscan_file(cur, entags, w, abspath)
else:
mpool.put(Mpqe({}, entags, w, abspath, {}))
with self.mutex:
n_tags = len(self._flush_mpool(cur))
n_add += n_tags
n_buf += n_tags
td = time.time() - last_write
if n_buf >= 4096 or td >= max(1, self.timeout - 1):
self.log("commit {} new tags".format(n_buf))
with self.mutex:
cur.connection.commit()
last_write = time.time()
n_buf = 0
if mpool:
self._stop_mpool(mpool)
with self.mutex:
n_add += len(self._flush_mpool(cur))
with self.mutex:
cur.connection.commit()
return n_add
def _spool_warks(
self,
cur: "sqlite3.Cursor",
q: str,
params: tuple[Any, ...],
flt: int,
) -> tuple[tempfile.SpooledTemporaryFile[bytes], int]:
"""mutex me"""
n = 0
c2 = cur.connection.cursor()
tf = tempfile.SpooledTemporaryFile(1024 * 1024 * 8, "w+b", prefix="cpp-tq-")
with gzip.GzipFile(mode="wb", fileobj=tf) as gf:
for row in cur.execute(q, params):
if self.stop:
return tf, -1
if flt == 1:
q = "select w from mt where w = ?"
if c2.execute(q, (row[0][:16],)).fetchone():
continue
gf.write("{}\n".format("\x00".join(row)).encode("utf-8"))
n += 1
c2.close()
tf.seek(0)
self.spools.add(tf)
return tf, n
def _unspool(self, tf: tempfile.SpooledTemporaryFile[bytes]) -> None:
self.spools.remove(tf)
try:
tf.close()
except Exception as ex:
self.log("failed to delete spool: {}".format(ex), 3)
def _flush_mpool(self, wcur: "sqlite3.Cursor") -> list[str]:
ret = []
for x in self.pending_tags:
@@ -1317,21 +1497,38 @@ class Up2k(object):
msg = "{} failed to read tags from {}:\n{}".format(parser, abspath, ex)
self.log(msg.lstrip(), c=1 if "<Signals.SIG" in msg else 3)
def _tagscan_file(
self,
write_cur: "sqlite3.Cursor",
entags: set[str],
wark: str,
abspath: str,
) -> int:
"""will mutex"""
assert self.mtag
if not bos.path.isfile(abspath):
return 0
try:
tags = self.mtag.get(abspath)
except Exception as ex:
self._log_tag_err("", abspath, ex)
return 0
with self.mutex:
return self._tag_file(write_cur, entags, wark, abspath, tags)
def _tag_file(
self,
write_cur: "sqlite3.Cursor",
entags: set[str],
wark: str,
abspath: str,
tags: Optional[dict[str, Union[str, float]]] = None,
tags: dict[str, Union[str, float]],
) -> int:
"""mutex me"""
assert self.mtag
if tags is None:
try:
tags = self.mtag.get(abspath)
except Exception as ex:
self._log_tag_err("", abspath, ex)
return 0
if not bos.path.isfile(abspath):
return 0
@@ -1361,8 +1558,7 @@ class Up2k(object):
return ret
def _orz(self, db_path: str) -> "sqlite3.Cursor":
timeout = int(max(self.args.srch_time, 5) * 1.2)
return sqlite3.connect(db_path, timeout, check_same_thread=False).cursor()
return sqlite3.connect(db_path, self.timeout, check_same_thread=False).cursor()
# x.set_trace_callback(trace)
def _open_db(self, db_path: str) -> "sqlite3.Cursor":
@@ -1485,18 +1681,30 @@ class Up2k(object):
cur.connection.commit()
def _job_volchk(self, cj: dict[str, Any]) -> None:
if not self.register_vpath(cj["ptop"], cj["vcfg"]):
if cj["ptop"] not in self.registry:
raise Pebkac(410, "location unavailable")
def handle_json(self, cj: dict[str, Any]) -> dict[str, Any]:
with self.mutex:
if not self.register_vpath(cj["ptop"], cj["vcfg"]):
if cj["ptop"] not in self.registry:
raise Pebkac(410, "location unavailable")
try:
# bit expensive; 3.9=10x 3.11=2x
if self.mutex.acquire(timeout=10):
self._job_volchk(cj)
self.mutex.release()
else:
t = "cannot receive uploads right now;\nserver busy with {}.\nPlease wait; the client will retry..."
raise Pebkac(503, t.format(self.blocked or "[unknown]"))
except TypeError:
# py2
with self.mutex:
self._job_volchk(cj)
cj["name"] = sanitize_fn(cj["name"], "", [".prologue.html", ".epilogue.html"])
cj["poke"] = time.time()
cj["poke"] = now = self.db_act = time.time()
wark = self._get_wark(cj)
now = time.time()
job = None
pdir = os.path.join(cj["ptop"], cj["prel"])
pdir = djoin(cj["ptop"], cj["prel"])
try:
dev = bos.stat(pdir).st_dev
except:
@@ -1608,7 +1816,7 @@ class Up2k(object):
for k in ["ptop", "vtop", "prel"]:
job[k] = cj[k]
pdir = os.path.join(cj["ptop"], cj["prel"])
pdir = djoin(cj["ptop"], cj["prel"])
job["name"] = self._untaken(pdir, cj["name"], now, cj["addr"])
dst = os.path.join(job["ptop"], job["prel"], job["name"])
if not self.args.nw:
@@ -1624,9 +1832,9 @@ class Up2k(object):
if not job:
vfs = self.asrv.vfs.all_vols[cj["vtop"]]
if vfs.lim:
ap1 = os.path.join(cj["ptop"], cj["prel"])
ap1 = djoin(cj["ptop"], cj["prel"])
ap2, cj["prel"] = vfs.lim.all(
cj["addr"], cj["prel"], cj["size"], ap1
cj["addr"], cj["prel"], cj["size"], ap1, reg
)
bos.makedirs(ap2)
vfs.lim.nup(cj["addr"])
@@ -1754,6 +1962,7 @@ class Up2k(object):
self, ptop: str, wark: str, chash: str
) -> tuple[int, list[int], str, float, bool]:
with self.mutex:
self.db_act = time.time()
job = self.registry[ptop].get(wark)
if not job:
known = " ".join([x for x in self.registry[ptop].keys()])
@@ -1804,6 +2013,7 @@ class Up2k(object):
def confirm_chunk(self, ptop: str, wark: str, chash: str) -> tuple[int, str]:
with self.mutex:
self.db_act = time.time()
try:
job = self.registry[ptop][wark]
pdir = os.path.join(job["ptop"], job["prel"])
@@ -1838,6 +2048,7 @@ class Up2k(object):
self._finish_upload(ptop, wark)
def _finish_upload(self, ptop: str, wark: str) -> None:
self.db_act = time.time()
try:
job = self.registry[ptop][wark]
pdir = os.path.join(job["ptop"], job["prel"])
@@ -1914,9 +2125,15 @@ class Up2k(object):
if not cur:
return False
self.db_rm(cur, rd, fn)
self.db_add(cur, wark, rd, fn, lmod, sz, ip, at)
cur.connection.commit()
try:
self.db_rm(cur, rd, fn)
self.db_add(cur, wark, rd, fn, lmod, sz, ip, at)
cur.connection.commit()
except Exception as ex:
x = self.register_vpath(ptop, {})
assert x
db_ex_chk(self.log, ex, x[1])
raise
if "e2t" in self.flags[ptop]:
self.tagq.put((ptop, wark, rd, fn))
@@ -1974,6 +2191,7 @@ class Up2k(object):
def _handle_rm(
self, uname: str, ip: str, vpath: str
) -> tuple[int, list[str], list[str]]:
self.db_act = time.time()
try:
permsets = [[True, False, False, True]]
vn, rem = self.asrv.vfs.get(vpath, uname, *permsets[0])
@@ -2058,6 +2276,7 @@ class Up2k(object):
return n_files, ok + ok2, ng + ng2
def handle_mv(self, uname: str, svp: str, dvp: str) -> str:
self.db_act = time.time()
svn, srem = self.asrv.vfs.get(svp, uname, True, False, True)
svn, srem = svn.get_dbv(srem)
sabs = svn.canonical(srem, False)
@@ -2382,7 +2601,7 @@ class Up2k(object):
return ret
def _new_upload(self, job: dict[str, Any]) -> None:
pdir = os.path.join(job["ptop"], job["prel"])
pdir = djoin(job["ptop"], job["prel"])
if not job["size"] and bos.path.isfile(os.path.join(pdir, job["name"])):
return
@@ -2619,6 +2838,10 @@ class Up2k(object):
def shutdown(self) -> None:
self.stop = True
for x in list(self.spools):
self._unspool(x)
self.log("writing snapshot")
self.do_snapshot()


@@ -24,6 +24,11 @@ from datetime import datetime
from .__init__ import ANYWIN, PY2, TYPE_CHECKING, VT100, WINDOWS
from .stolen import surrogateescape
try:
import ctypes
except:
pass
try:
HAVE_SQLITE3 = True
import sqlite3 # pylint: disable=unused-import # typechk
@@ -243,7 +248,7 @@ class _Unrecv(object):
undo any number of socket recv ops
"""
def __init__(self, s: socket.socket, log: Optional[NamedLogger]) -> None:
def __init__(self, s: socket.socket, log: Optional["NamedLogger"]) -> None:
self.s = s
self.log = log
self.buf: bytes = b""
@@ -287,7 +292,7 @@ class _LUnrecv(object):
with expensive debug logging
"""
def __init__(self, s: socket.socket, log: Optional[NamedLogger]) -> None:
def __init__(self, s: socket.socket, log: Optional["NamedLogger"]) -> None:
self.s = s
self.log = log
self.buf = b""
@@ -662,7 +667,9 @@ def ren_open(
class MultipartParser(object):
def __init__(self, log_func: NamedLogger, sr: Unrecv, http_headers: dict[str, str]):
def __init__(
self, log_func: "NamedLogger", sr: Unrecv, http_headers: dict[str, str]
):
self.sr = sr
self.log = log_func
self.headers = http_headers
@@ -982,6 +989,11 @@ def s2hms(s: float, optional_h: bool = False) -> str:
return "{}:{:02}:{:02}".format(h, m, s)
def djoin(*paths: str) -> str:
"""joins without adding a trailing slash on blank args"""
return os.path.join(*[x for x in paths if x])
def uncyg(path: str) -> str:
if len(path) < 2 or not path.startswith("/"):
return path
@@ -1184,15 +1196,30 @@ def s3enc(mem_cur: "sqlite3.Cursor", rd: str, fn: str) -> tuple[str, str]:
def s3dec(rd: str, fn: str) -> tuple[str, str]:
ret = []
for v in [rd, fn]:
if v.startswith("//"):
ret.append(w8b64dec(v[2:]))
# self.log("mojide [{}] {}".format(ret[-1], v[2:]))
else:
ret.append(v)
return (
w8b64dec(rd[2:]) if rd.startswith("//") else rd,
w8b64dec(fn[2:]) if fn.startswith("//") else fn,
)
return ret[0], ret[1]
def db_ex_chk(log: "NamedLogger", ex: Exception, db_path: str) -> bool:
if str(ex) != "database is locked":
return False
thr = threading.Thread(target=lsof, args=(log, db_path))
thr.daemon = True
thr.start()
return True
def lsof(log: "NamedLogger", abspath: str) -> None:
try:
rc, so, se = runcmd([b"lsof", b"-R", fsenc(abspath)], timeout=5)
zs = (so.strip() + "\n" + se.strip()).strip()
log("lsof {} = {}\n{}".format(abspath, rc, zs), 3)
except:
log("lsof failed; " + min_ex(), 3)
def atomic_move(usrc: str, udst: str) -> None:
@@ -1207,6 +1234,24 @@ def atomic_move(usrc: str, udst: str) -> None:
os.rename(src, dst)
def get_df(abspath: str) -> tuple[Optional[int], Optional[int]]:
try:
# some fuses misbehave
if ANYWIN:
bfree = ctypes.c_ulonglong(0)
ctypes.windll.kernel32.GetDiskFreeSpaceExW( # type: ignore
ctypes.c_wchar_p(abspath), None, None, ctypes.pointer(bfree)
)
return (bfree.value, None)
else:
sv = os.statvfs(fsenc(abspath))
free = sv.f_frsize * sv.f_bfree
total = sv.f_frsize * sv.f_blocks
return (free, total)
except:
return (None, None)
def read_socket(sr: Unrecv, total_size: int) -> Generator[bytes, None, None]:
remains = total_size
while remains > 0:
@@ -1233,7 +1278,7 @@ def read_socket_unbounded(sr: Unrecv) -> Generator[bytes, None, None]:
def read_socket_chunked(
sr: Unrecv, log: Optional[NamedLogger] = None
sr: Unrecv, log: Optional["NamedLogger"] = None
) -> Generator[bytes, None, None]:
err = "upload aborted: expected chunk length, got [{}] |{}| instead"
while True:
@@ -1311,7 +1356,7 @@ def hashcopy(
def sendfile_py(
log: NamedLogger,
log: "NamedLogger",
lower: int,
upper: int,
f: typing.BinaryIO,
@@ -1339,7 +1384,7 @@ def sendfile_py(
def sendfile_kern(
log: NamedLogger,
log: "NamedLogger",
lower: int,
upper: int,
f: typing.BinaryIO,
@@ -1380,7 +1425,7 @@ def sendfile_kern(
def statdir(
logger: Optional[RootLogger], scandir: bool, lstat: bool, top: str
logger: Optional["RootLogger"], scandir: bool, lstat: bool, top: str
) -> Generator[tuple[str, os.stat_result], None, None]:
if lstat and ANYWIN:
lstat = False
@@ -1423,7 +1468,7 @@ def statdir(
def rmdirs(
logger: RootLogger, scandir: bool, lstat: bool, top: str, depth: int
logger: "RootLogger", scandir: bool, lstat: bool, top: str, depth: int
) -> tuple[list[str], list[str]]:
"""rmdir all descendants, then self"""
if not os.path.isdir(fsenc(top)):
@@ -1644,7 +1689,7 @@ def retchk(
rc: int,
cmd: Union[list[bytes], list[str]],
serr: str,
logger: Optional[NamedLogger] = None,
logger: Optional["NamedLogger"] = None,
color: Union[int, str] = 0,
verbose: bool = False,
) -> None:


@@ -224,6 +224,7 @@ window.baguetteBox = (function () {
['space, P, K', 'video: play / pause'],
['U', 'video: seek 10sec back'],
['P', 'video: seek 10sec ahead'],
['0..9', 'video: seek 0%..90%'],
['M', 'video: toggle mute'],
['V', 'video: toggle loop'],
['C', 'video: toggle auto-next'],
@@ -248,7 +249,7 @@ window.baguetteBox = (function () {
if (e.ctrlKey || e.altKey || e.metaKey || e.isComposing || modal.busy)
return;
var k = e.code + '', v = vid();
var k = e.code + '', v = vid(), pos = -1;
if (k == "ArrowLeft" || k == "KeyJ")
showPreviousImage();
@@ -264,6 +265,8 @@ window.baguetteBox = (function () {
playpause();
else if (k == "KeyU" || k == "KeyO")
relseek(k == "KeyU" ? -10 : 10);
else if (k.indexOf('Digit') === 0)
vid().currentTime = vid().duration * parseInt(k.slice(-1)) * 0.1;
else if (k == "KeyM" && v) {
v.muted = vmute = !vmute;
mp_ctl();
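the new Digit branch maps key N straight to N×10% of the clip duration; spelled out as a throwaway python rendition (not project code):

```python
def seek_target(duration_s: float, digit: int) -> float:
    # Digit0..Digit9 jump to 0%..90% of the video
    return duration_s * digit * 0.1

print(seek_target(300, 0))  # 0.0   -> Digit0, start of clip
print(seek_target(300, 5))  # 150.0 -> Digit5, halfway
```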


@@ -2244,6 +2244,7 @@ html.y #bbox-overlay figcaption a {
max-width: none;
}
#u2tab td {
word-wrap: break-word;
border: 1px solid rgba(128,128,128,0.8);
border-width: 0 0px 1px 0;
padding: .2em .3em;
@@ -2258,7 +2259,19 @@ html.y #bbox-overlay figcaption a {
#u2tab.up.ok td:nth-child(3),
#u2tab.up.bz td:nth-child(3),
#u2tab.up.q td:nth-child(3) {
-    width: 19em;
+    width: 18em;
}
@media (max-width: 65em) {
#u2tab {
font-size: .9em;
}
}
@media (max-width: 50em) {
#u2tab.up.ok td:nth-child(3),
#u2tab.up.bz td:nth-child(3),
#u2tab.up.q td:nth-child(3) {
width: 16em;
}
}
#op_up2k.srch td.prog {
font-family: sans-serif;


@@ -146,7 +146,7 @@ var Ls = {
"mt_caac": "convert aac / m4a to opus\">aac",
"mt_coth": "convert all others (not mp3) to opus\">oth",
"mt_tint": "background level (0-100) on the seekbar$Nto make buffering less distracting",
"mt_eq": "enables the equalizer and gain control;$Nboost 0 = unmodified 100% volume$N$Nenabling the equalizer makes gapless albums fully gapless, so leave it on with all the values at zero if you care about that",
"mt_eq": "enables the equalizer and gain control;$N$Nboost &lt;code&gt;0&lt;/code&gt; = standard 100% volume (unmodified)$N$Nwidth &lt;code&gt;1 &nbsp;&lt;/code&gt; = standard stereo (unmodified)$Nwidth &lt;code&gt;0.5&lt;/code&gt; = 50% left-right crossfeed$Nwidth &lt;code&gt;0 &nbsp;&lt;/code&gt; = mono$N$Nboost &lt;code&gt;-0.8&lt;/code&gt; &amp; width &lt;code&gt;10&lt;/code&gt; = vocal removal :^)$N$Nenabling the equalizer makes gapless albums fully gapless, so leave it on with all the values at zero (except width = 1) if you care about that",
"mb_play": "play",
"mm_hashplay": "play this audio file?",
@@ -311,6 +311,7 @@ var Ls = {
"u_ehsfin": "server rejected the request to finalize upload",
"u_ehssrch": "server rejected the request to perform search",
"u_ehsinit": "server rejected the request to initiate upload",
"u_ehsdf": "server ran out of disk space!\n\nwill keep retrying, in case someone\nfrees up enough space to continue",
"u_s404": "not found on server",
"u_expl": "explain",
"u_tu": '<p class="warn">WARNING: turbo enabled, <span>&nbsp;client may not detect and resume incomplete uploads; see turbo-button tooltip</span></p>',
@@ -477,7 +478,7 @@ var Ls = {
"mt_caac": "konverter aac / m4a-filer til to opus\">aac",
"mt_coth": "konverter alt annet (men ikke mp3) til opus\">andre",
"mt_tint": "nivå av bakgrunnsfarge på søkestripa (0-100),$Ngjør oppdateringer mindre distraherende",
"mt_eq": "aktiver tonekontroll og forsterker;$Nboost 0 = normal volumskala$N$Nreduserer også dødtid imellom sangfiler",
"mt_eq": "aktiver tonekontroll og forsterker;$N$Nboost &lt;code&gt;0&lt;/code&gt; = normal volumskala$N$Nwidth &lt;code&gt;1 &nbsp;&lt;/code&gt; = normal stereo$Nwidth &lt;code&gt;0.5&lt;/code&gt; = 50% blanding venstre-høyre$Nwidth &lt;code&gt;0 &nbsp;&lt;/code&gt; = mono$N$Nboost &lt;code&gt;-0.8&lt;/code&gt; &amp; width &lt;code&gt;10&lt;/code&gt; = instrumental :^)$N$Nreduserer også dødtid imellom sangfiler",
"mb_play": "lytt",
"mm_hashplay": "spill denne sangen?",
@@ -642,6 +643,7 @@ var Ls = {
"u_ehsfin": "server nektet forespørselen om å ferdigstille filen",
"u_ehssrch": "server nektet forespørselen om å utføre søk",
"u_ehsinit": "server nektet forespørselen om å begynne en ny opplastning",
"u_ehsdf": "serveren er full!\n\nprøver igjen regelmessig,\ni tilfelle noen rydder litt...",
"u_s404": "ikke funnet på serveren",
"u_expl": "forklar",
"u_tu": '<p class="warn">ADVARSEL: turbo er på, <span>&nbsp;avbrutte opplastninger vil muligens ikke oppdages og gjenopptas; hold musepekeren over turbo-knappen for mer info</span></p>',
@@ -1890,6 +1892,7 @@ var audio_eq = (function () {
"gains": [4, 3, 2, 1, 0, 0, 1, 2, 3, 4],
"filters": [],
"amp": 0,
"chw": 1,
"last_au": null,
"acst": {}
};
@@ -1941,6 +1944,7 @@ var audio_eq = (function () {
try {
r.amp = fcfg_get('au_eq_amp', r.amp);
r.chw = fcfg_get('au_eq_chw', r.chw);
var gains = jread('au_eq_gain', r.gains);
if (r.gains.length == gains.length)
r.gains = gains;
@@ -1950,12 +1954,14 @@ var audio_eq = (function () {
r.draw = function () {
jwrite('au_eq_gain', r.gains);
swrite('au_eq_amp', r.amp);
swrite('au_eq_chw', r.chw);
var txt = QSA('input.eq_gain');
for (var a = 0; a < r.bands.length; a++)
txt[a].value = r.gains[a];
QS('input.eq_gain[band="amp"]').value = r.amp;
QS('input.eq_gain[band="chw"]').value = r.chw;
};
r.stop = function () {
@@ -2025,16 +2031,47 @@ var audio_eq = (function () {
for (var a = r.filters.length - 1; a >= 0; a--)
r.filters[a].connect(a > 0 ? r.filters[a - 1] : actx.destination);
if (Math.round(r.chw * 25) != 25) {
var split = actx.createChannelSplitter(2),
merge = actx.createChannelMerger(2),
lg1 = actx.createGain(),
lg2 = actx.createGain(),
rg1 = actx.createGain(),
rg2 = actx.createGain(),
vg1 = 1 - (1 - r.chw) / 2,
vg2 = 1 - vg1;
console.log('chw', vg1, vg2);
merge.connect(r.filters[r.filters.length - 1]);
lg1.gain.value = rg2.gain.value = vg1;
lg2.gain.value = rg1.gain.value = vg2;
lg1.connect(merge, 0, 0);
rg1.connect(merge, 0, 0);
lg2.connect(merge, 0, 1);
rg2.connect(merge, 0, 1);
split.connect(lg1, 0);
split.connect(lg2, 0);
split.connect(rg1, 1);
split.connect(rg2, 1);
r.filters.push(split);
mp.acs.channelCountMode = 'explicit';
}
mp.acs.connect(r.filters[r.filters.length - 1]);
}
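the splitter/merger graph above mixes each output channel from both inputs, using a same-side gain vg1 and an opposite-side bleed vg2 derived from the width setting chw; redoing that arithmetic in python (throwaway illustration, not project code) shows how the tooltip's width presets all fall out of the one formula:

```python
def xfeed_gains(chw: float) -> tuple[float, float]:
    vg1 = 1 - (1 - chw) / 2  # gain kept on the original channel
    vg2 = 1 - vg1            # gain bled into the opposite channel
    return vg1, vg2

print(xfeed_gains(1.0))   # (1.0, 0.0)   normal stereo, no bleed
print(xfeed_gains(0.5))   # (0.75, 0.25) the 50% crossfeed preset
print(xfeed_gains(0.0))   # (0.5, 0.5)   both channels equal = mono
print(xfeed_gains(10.0))  # (5.5, -4.5)  inverted bleed cancels center-panned vocals
```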
function eq_step(e) {
ev(e);
-        var band = parseInt(this.getAttribute('band')),
+        var sb = this.getAttribute('band'),
+            band = parseInt(sb),
step = parseFloat(this.getAttribute('step'));
-        if (isNaN(band))
+        if (sb == 'amp')
            r.amp = Math.round((r.amp + step * 0.2) * 100) / 100;
+        else if (sb == 'chw')
+            r.chw = Math.round((r.chw + step * 0.2) * 100) / 100;
else
r.gains[band] += step;
@@ -2044,15 +2081,18 @@ var audio_eq = (function () {
function adj_band(that, step) {
var err = false;
try {
-            var band = parseInt(that.getAttribute('band')),
+            var sb = that.getAttribute('band'),
+                band = parseInt(sb),
vs = that.value,
v = parseFloat(vs);
if (isNaN(v) || v + '' != vs)
throw new Error('inval band');
-            if (sb == 'amp')
+            if (sb == 'amp')
                r.amp = Math.round((v + step * 0.2) * 100) / 100;
+            else if (sb == 'chw')
+                r.chw = Math.round((v + step * 0.2) * 100) / 100;
else
r.gains[band] = v + step;
@@ -2089,6 +2129,7 @@ var audio_eq = (function () {
vs.push([a, hz, r.gains[a]]);
}
vs.push(["amp", "boost", r.amp]);
vs.push(["chw", "width", r.chw]);
for (var a = 0; a < vs.length; a++) {
var b = vs[a][0];
@@ -2423,7 +2464,7 @@ function eval_hash() {
if (a)
QS(treectl.hidden ? '#path a:nth-last-child(2)' : '#treeul a.hl').focus();
else
-            QS(thegrid.en ? '#ggrid a' : '#files tbody a').focus();
+            QS(thegrid.en ? '#ggrid a' : '#files tbody tr[tabindex]').focus();
};
})(a);
@@ -3955,6 +3996,9 @@ document.onkeydown = function (e) {
}
}
if (k == 'Enter' && ae && (ae.onclick || ae.hasAttribute('tabIndex')))
return ev(e) && ae.click() || true;
if (aet && aet != 'a' && aet != 'tr' && aet != 'pre')
return;


@@ -36,6 +36,7 @@
<tr><td>hash-q</td><td>{{ hashq }}</td></tr>
<tr><td>tag-q</td><td>{{ tagq }}</td></tr>
<tr><td>mtp-q</td><td>{{ mtpq }}</td></tr>
<tr><td>db-act</td><td id="u">{{ dbwt }}</td></tr>
</table>
</td><td>
<table class="vols">
@@ -50,8 +51,8 @@
</table>
</td></tr></table>
<div class="btns">
<a id="d" href="/?stack" tt="shows the state of all active threads">dump stack</a>
<a id="e" href="/?reload=cfg" tt="reload config files (accounts/volumes/volflags),$Nand rescan all e2ds volumes">reload cfg</a>
<a id="d" href="/?stack">dump stack</a>
<a id="e" href="/?reload=cfg">reload cfg</a>
</div>
{%- endif %}


@@ -23,6 +23,12 @@ var Ls = {
"r1": "gå hjem",
".s1": "kartlegg",
"t1": "handling",
"u2": "tid siden noen sist skrev til serveren$N( opplastning / navneendring / ... )$N$N17d = 17 dager$N1h23 = 1 time 23 minutter$N4m56 = 4 minuter 56 sekunder",
},
"eng": {
"d2": "shows the state of all active threads",
"e2": "reload config files (accounts/volumes/volflags),$Nand rescan all e2ds volumes",
"u2": "time since the last server write$N( upload / rename / ... )$N$N17d = 17 days$N1h23 = 1 hour 23 minutes$N4m56 = 4 minutes 56 seconds",
}
},
d = Ls[sread("lang") || lang];
@@ -40,5 +46,10 @@ for (var k in (d || {})) {
}
tt.init();
-if (!ebi('c'))
-    QS('input[name="cppwd"]').focus();
+var o = QS('input[name="cppwd"]');
+if (!ebi('c') && o.offsetTop + o.offsetHeight < window.innerHeight)
+    o.focus();
+o = ebi('u');
+if (o && /[0-9]+$/.exec(o.innerHTML))
+    o.innerHTML = shumantime(o.innerHTML);


@@ -205,7 +205,7 @@ function U2pvis(act, btns, uc, st) {
if (!r.is_act(fo.in))
return;
-        var k = 'f{0}{1}'.format(nfile, field.slice(1)),
+        var k = 'f' + nfile + '' + field.slice(1),
obj = ebi(k);
obj.innerHTML = field == 'ht' ? (markup[html] || html) : html;
@@ -250,9 +250,7 @@ function U2pvis(act, btns, uc, st) {
nb = fo.bt * (++fo.nh / fo.cb.length),
p = r.perc(nb, 0, fobj.size, fobj.t_hashing);
-        fo.hp = '{0}%, {1}, {2} MB/s'.format(
-            f2f(p[0], 2), p[1], f2f(p[2], 2)
-        );
+        fo.hp = f2f(p[0], 2) + '%, ' + p[1] + ', ' + f2f(p[2], 2) + ' MB/s';
if (!r.is_act(fo.in))
return;
@@ -269,14 +267,12 @@ function U2pvis(act, btns, uc, st) {
fo.bd += delta;
var p = r.perc(fo.bd, fo.bd0, fo.bt, fobj.t_uploading);
-        fo.hp = '{0}%, {1}, {2} MB/s'.format(
-            f2f(p[0], 2), p[1], f2f(p[2], 2)
-        );
+        fo.hp = f2f(p[0], 2) + '%, ' + p[1] + ', ' + f2f(p[2], 2) + ' MB/s';
if (!r.is_act(fo.in))
return;
-        var obj = ebi('f{0}p'.format(fobj.n)),
+        var obj = ebi('f' + fobj.n + 'p'),
o1 = p[0] - 2, o2 = p[0] - 0.1, o3 = p[0];
if (!obj) {
@@ -446,8 +442,8 @@ function U2pvis(act, btns, uc, st) {
r.npotato = 0;
var html = [
"<p>files: &nbsp; <b>{0}</b> finished, &nbsp; <b>{1}</b> failed, &nbsp; <b>{2}</b> busy, &nbsp; <b>{3}</b> queued</p>".format(r.ctr.ok, r.ctr.ng, r.ctr.bz, r.ctr.q),
];
"<p>files: &nbsp; <b>{0}</b> finished, &nbsp; <b>{1}</b> failed, &nbsp; <b>{2}</b> busy, &nbsp; <b>{3}</b> queued</p>".format(
r.ctr.ok, r.ctr.ng, r.ctr.bz, r.ctr.q)];
while (r.head < r.tab.length && has(["ok", "ng"], r.tab[r.head].in))
r.head++;
@@ -457,7 +453,8 @@ function U2pvis(act, btns, uc, st) {
act = r.tab[r.head];
if (act)
html.push("<p>file {0} of {1} : &nbsp; {2} &nbsp; <code>{3}</code></p>\n<div>{4}</div>".format(r.head + 1, r.tab.length, act.ht, act.hp, act.hn));
html.push("<p>file {0} of {1} : &nbsp; {2} &nbsp; <code>{3}</code></p>\n<div>{4}</div>".format(
r.head + 1, r.tab.length, act.ht, act.hp, act.hn));
html = html.join('\n');
if (r.hpotato == html)
@@ -470,7 +467,7 @@ function U2pvis(act, btns, uc, st) {
function apply_html() {
var oq = {}, n = 0;
for (var k in r.hq) {
-            var o = ebi('f{0}p'.format(k));
+            var o = ebi('f' + k + 'p');
if (!o)
continue;
@@ -682,8 +679,8 @@ function Donut(uc, st) {
}
if (++r.tc >= 10) {
wintitle("{0}%, {1}s, #{2}, ".format(
f2f(v * 100 / t, 1), r.eta, st.files.length - st.nfile.upload), true);
wintitle("{0}%, {1}, #{2}, ".format(
f2f(v * 100 / t, 1), shumantime(r.eta), st.files.length - st.nfile.upload), true);
r.tc = 0;
}
@@ -835,6 +832,11 @@ function up2k_init(subtle) {
"uploading": 0,
"busy": 0
},
"eta": {
"h": "",
"u": "",
"t": ""
},
"car": 0,
"modn": 0,
"modv": 0,
@@ -919,8 +921,14 @@ function up2k_init(subtle) {
catch (ex) { }
ev(e);
-        e.dataTransfer.dropEffect = 'copy';
-        e.dataTransfer.effectAllowed = 'copy';
+        try {
+            e.dataTransfer.dropEffect = 'copy';
+            e.dataTransfer.effectAllowed = 'copy';
+        }
+        catch (ex) {
+            document.body.ondragenter = document.body.ondragleave = document.body.ondragover = null;
+            return modal.alert('your browser does not support drag-and-drop uploading');
+        }
clmod(ebi('drops'), 'vis', 1);
var v = this.getAttribute('v');
if (v)
@@ -1278,12 +1286,21 @@ function up2k_init(subtle) {
ebi('u2tabw').style.minHeight = utw_minh + 'px';
}
-        if (!nhash)
-            ebi('u2etah').innerHTML = L.u_etadone.format(humansize(st.bytes.hashed), pvis.ctr.ok + pvis.ctr.ng);
+        if (!nhash) {
+            var h = L.u_etadone.format(humansize(st.bytes.hashed), pvis.ctr.ok + pvis.ctr.ng);
+            if (st.eta.h !== h)
+                st.eta.h = ebi('u2etah').innerHTML = h;
+        }
-        if (!nsend && !nhash)
-            ebi('u2etau').innerHTML = ebi('u2etat').innerHTML = (
-                L.u_etadone.format(humansize(st.bytes.uploaded), pvis.ctr.ok + pvis.ctr.ng));
+        if (!nsend && !nhash) {
+            var h = L.u_etadone.format(humansize(st.bytes.uploaded), pvis.ctr.ok + pvis.ctr.ng);
+            if (st.eta.u !== h)
+                st.eta.u = ebi('u2etau').innerHTML = h;
+            if (st.eta.t !== h)
+                st.eta.t = ebi('u2etat').innerHTML = h;
+        }
if (!st.busy.hash.length && !hashing_permitted())
nhash = 0;
@@ -1314,19 +1331,21 @@ function up2k_init(subtle) {
for (var a = 0; a < t.length; a++) {
var rem = st.bytes.total - t[a][2],
bps = t[a][1] / t[a][3],
+                hid = t[a][0],
+                eid = hid.slice(-1),
                eta = Math.floor(rem / bps);
            if (t[a][1] < 1024 || t[a][3] < 0.1) {
-                ebi(t[a][0]).innerHTML = L.u_etaprep;
+                ebi(hid).innerHTML = L.u_etaprep;
                continue;
            }
            donut.eta = eta;
-            if (etaskip)
-                continue;
-            ebi(t[a][0]).innerHTML = '{0}, {1}/s, {2}'.format(
+            st.eta[eid] = '{0}, {1}/s, {2}'.format(
                humansize(rem), humansize(bps, 1), humantime(eta));
+            if (!etaskip)
+                ebi(hid).innerHTML = st.eta[eid];
}
if (++etaskip > 2)
etaskip = 0;
@@ -1356,6 +1375,10 @@ function up2k_init(subtle) {
st.busy.handshake.length)
return false;
if (t.n - st.car > 8)
// prevent runahead from a stuck upload (slow server hdd)
return false;
if ((uc.multitask ? 1 : 0) <
st.todo.upload.length +
st.busy.upload.length)
@@ -2011,6 +2034,9 @@ function up2k_init(subtle) {
t.want_recheck = true;
}
}
if (rsp.indexOf('server HDD is full') + 1)
return toast.err(0, L.u_ehsdf + "\n\n" + rsp.replace(/.*; /, ''));
if (err != "") {
pvis.seth(t.n, 1, "ERROR");
pvis.seth(t.n, 2, err);
@@ -2135,8 +2161,9 @@ function up2k_init(subtle) {
xhr.open('POST', t.purl, true);
xhr.setRequestHeader("X-Up2k-Hash", t.hash[npart]);
xhr.setRequestHeader("X-Up2k-Wark", t.wark);
xhr.setRequestHeader("X-Up2k-Stat", "{0}/{1}/{2}/{3} {4}/{5}".format(
pvis.ctr.ok, pvis.ctr.ng, pvis.ctr.bz, pvis.ctr.q, btot, btot - bfin));
xhr.setRequestHeader("X-Up2k-Stat", "{0}/{1}/{2}/{3} {4}/{5} {6}".format(
pvis.ctr.ok, pvis.ctr.ng, pvis.ctr.bz, pvis.ctr.q, btot, btot - bfin,
st.eta.t.split(' ').pop()));
xhr.setRequestHeader('Content-Type', 'application/octet-stream');
if (xhr.overrideMimeType)
xhr.overrideMimeType('Content-Type', 'application/octet-stream');
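the header now carries a third space-separated field: the client's total-ETA estimate (the last token of st.eta.t, e.g. `4m56`), which the server can include in its upload log lines. A hedged sketch of unpacking it, with guessed field names since the server-side code is not shown in this diff:

```python
def parse_up2k_stat(hdr: str) -> dict:
    # wire format: "ok/ng/bz/q total/remain eta" -- eta is optional
    parts = hdr.split(" ")
    ok, ng, bz, q = (int(x) for x in parts[0].split("/"))
    total, remain = (int(x) for x in parts[1].split("/"))
    eta = parts[2] if len(parts) > 2 else ""
    return {"ok": ok, "ng": ng, "bz": bz, "q": q,
            "total": total, "remain": remain, "eta": eta}

print(parse_up2k_stat("3/0/1/9 104857600/31457280 4m56")["eta"])  # 4m56
```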


@@ -642,7 +642,7 @@ function humansize(b, terse) {
function humantime(v) {
if (v >= 60 * 60 * 24)
-        return v;
+        return shumantime(v);
try {
return /.*(..:..:..).*/.exec(new Date(v * 1000).toUTCString())[1];
@@ -653,12 +653,39 @@ function humantime(v) {
}
function shumantime(v) {
if (v < 10)
return f2f(v, 2) + 's';
if (v < 60)
return f2f(v, 1) + 's';
v = parseInt(v);
var st = [[60 * 60 * 24, 60 * 60, 'd'], [60 * 60, 60, 'h'], [60, 1, 'm']];
for (var a = 0; a < st.length; a++) {
var m1 = st[a][0],
m2 = st[a][1],
ch = st[a][2];
if (v < m1)
continue;
var v1 = parseInt(v / m1),
v2 = ('0' + parseInt((v % m1) / m2)).slice(-2);
return v1 + ch + (v1 >= 10 ? '' : v2);
}
}
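shumantime keeps timestamps short by printing at most two units and dropping the second unit once the first reaches double digits, which is exactly the 17d / 1h23 / 4m56 notation the splash tooltips describe; a rough python port for reference (the port itself is not part of the codebase):

```python
def shumantime(v: float) -> str:
    # terse duration: at most two units, second unit dropped at >= 10
    if v < 10:
        return "%.2fs" % (v,)
    if v < 60:
        return "%.1fs" % (v,)
    v = int(v)
    for m1, m2, ch in ((86400, 3600, "d"), (3600, 60, "h"), (60, 1, "m")):
        if v < m1:
            continue
        v1 = v // m1
        v2 = "%02d" % ((v % m1) // m2,)
        return "%d%s%s" % (v1, ch, "" if v1 >= 10 else v2)

print(shumantime(296))      # 4m56
print(shumantime(4980))     # 1h23
print(shumantime(1468800))  # 17d
```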
function clamp(v, a, b) {
return Math.min(Math.max(v, a), b);
}
function has(haystack, needle) {
try { return haystack.includes(needle); } catch (ex) { }
for (var a = 0; a < haystack.length; a++)
if (haystack[a] == needle)
return true;

docs/notes.md Normal file

@@ -0,0 +1,10 @@
# up2k.js
## potato detection
* tsk 0.25/8.4/31.5 resp. 1.27/22.9/18 = 77% (38.4s, 49.7s)
  * 4c locale #1313, ff-102,deb-11 @ ryzen4500u wifi -> win10
  * profiling shows 2sec heavy gc every 2sec
* tsk 0.41/4.1/10 resp. 1.41/9.9/7 = 73% (13.3s, 18.2s)
  * 4c locale #1313, ch-103,deb-11 @ ryzen4500u wifi -> win10


@@ -185,7 +185,7 @@ brew install python@2
pip install virtualenv
# readme toc
-cat README.md | awk 'function pr() { if (!h) {return}; if (/^ *[*!#|]/||!s) {printf "%s\n",h;h=0;return}; if (/.../) {printf "%s - %s\n",h,$0;h=0}; }; /^#/{s=1;pr()} /^#* *(file indexing|install on android|dev env setup|just the sfx|complete release|optional gpl stuff)|`$/{s=0} /^#/{lv=length($1);sub(/[^ ]+ /,"");bab=$0;gsub(/ /,"-",bab); h=sprintf("%" ((lv-1)*4+1) "s [%s](#%s)", "*",$0,bab);next} !h{next} {sub(/ .*/,"");sub(/[:,]$/,"")} {pr()}' > toc; grep -E '^## readme toc' -B1000 -A2 <README.md >p1; grep -E '^## quickstart' -B2 -A999999 <README.md >p2; (cat p1; grep quickstart -A1000 <toc; cat p2) >README.md; rm p1 p2 toc
+cat README.md | awk 'function pr() { if (!h) {return}; if (/^ *[*!#|]/||!s) {printf "%s\n",h;h=0;return}; if (/.../) {printf "%s - %s\n",h,$0;h=0}; }; /^#/{s=1;pr()} /^#* *(file indexing|exclude-patterns|install on android|dev env setup|just the sfx|complete release|optional gpl stuff)|`$/{s=0} /^#/{lv=length($1);sub(/[^ ]+ /,"");bab=$0;gsub(/ /,"-",bab); h=sprintf("%" ((lv-1)*4+1) "s [%s](#%s)", "*",$0,bab);next} !h{next} {sub(/ .*/,"");sub(/[:,]$/,"")} {pr()}' > toc; grep -E '^## readme toc' -B1000 -A2 <README.md >p1; grep -E '^## quickstart' -B2 -A999999 <README.md >p2; (cat p1; grep quickstart -A1000 <toc; cat p2) >README.md; rm p1 p2 toc
# fix firefox phantom breakpoints,
# suggestions from bugtracker, doesnt work (debugger is not attachable)


@@ -2,9 +2,9 @@ FROM alpine:3.16
WORKDIR /z
ENV ver_asmcrypto=5b994303a9d3e27e0915f72a10b6c2c51535a4dc \
ver_hashwasm=4.9.0 \
-    ver_marked=4.0.17 \
+    ver_marked=4.0.18 \
    ver_mde=2.16.1 \
-    ver_codemirror=5.65.6 \
+    ver_codemirror=5.65.7 \
ver_fontawesome=5.13.0 \
ver_zopfli=1.0.3


@@ -10,9 +10,10 @@ import pprint
import tarfile
import tempfile
import unittest
-from argparse import Namespace
from tests import util as tu
+from tests.util import Cfg
from copyparty.authsrv import AuthSrv
from copyparty.httpcli import HttpCli
@@ -22,39 +23,6 @@ def hdr(query):
return h.format(query).encode("utf-8")
-class Cfg(Namespace):
-    def __init__(self, a=None, v=None, c=None):
-        ka = {}
-        ex = "e2d e2ds e2dsa e2t e2ts e2tsr ed emp force_js ihead no_acode no_athumb no_del no_logues no_mv no_readme no_robots no_scandir no_thumb no_vthumb no_zip nw"
-        ka.update(**{k: False for k in ex.split()})
-        ex = "nih no_rescan no_sendfile no_voldump"
-        ka.update(**{k: True for k in ex.split()})
-        ex = "css_browser hist js_browser no_hash no_idx"
-        ka.update(**{k: None for k in ex.split()})
-        ex = "re_maxage rproxy rsp_slp s_wr_slp theme themes turbo"
-        ka.update(**{k: 0 for k in ex.split()})
-        ex = "doctitle favico html_head mth textfiles"
-        ka.update(**{k: "" for k in ex.split()})
-        super(Cfg, self).__init__(
-            a=a or [],
-            v=v or [],
-            c=c,
-            s_wr_sz=512 * 1024,
-            unpost=600,
-            mtp=[],
-            mte="a",
-            lang="eng",
-            logout=573,
-            **ka
-        )
class TestHttpCli(unittest.TestCase):
def setUp(self):
self.td = tu.get_ramdisk()


@@ -8,44 +8,14 @@ import shutil
import tempfile
import unittest
from textwrap import dedent
-from argparse import Namespace
from tests import util as tu
+from tests.util import Cfg
from copyparty.authsrv import AuthSrv, VFS
from copyparty import util
-class Cfg(Namespace):
-    def __init__(self, a=None, v=None, c=None):
-        ex = "nw e2d e2ds e2dsa e2t e2ts e2tsr no_logues no_readme no_acode force_js no_robots no_thumb no_athumb no_vthumb"
-        ex = {k: False for k in ex.split()}
-        ex2 = {
-            "mtp": [],
-            "mte": "a",
-            "mth": "",
-            "doctitle": "",
-            "html_head": "",
-            "hist": None,
-            "no_idx": None,
-            "no_hash": None,
-            "js_browser": None,
-            "css_browser": None,
-            "no_voldump": True,
-            "re_maxage": 0,
-            "rproxy": 0,
-            "rsp_slp": 0,
-            "s_wr_slp": 0,
-            "s_wr_sz": 512 * 1024,
-            "lang": "eng",
-            "theme": 0,
-            "themes": 0,
-            "turbo": 0,
-            "logout": 573,
-        }
-        ex.update(ex2)
-        super(Cfg, self).__init__(a=a or [], v=v or [], c=c, **ex)
class TestVFS(unittest.TestCase):
def setUp(self):
self.td = tu.get_ramdisk()


@@ -7,6 +7,7 @@ import threading
import tempfile
import platform
import subprocess as sp
+from argparse import Namespace
WINDOWS = platform.system() == "Windows"
@@ -89,6 +90,40 @@ def get_ramdisk():
return subdir(ret)
+class Cfg(Namespace):
+    def __init__(self, a=None, v=None, c=None):
+        ka = {}
+        ex = "e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp ed emp force_js ihead no_acode no_athumb no_del no_logues no_mv no_readme no_robots no_scandir no_thumb no_vthumb no_zip nid nih nw"
+        ka.update(**{k: False for k in ex.split()})
+        ex = "no_rescan no_sendfile no_voldump"
+        ka.update(**{k: True for k in ex.split()})
+        ex = "css_browser hist js_browser no_hash no_idx"
+        ka.update(**{k: None for k in ex.split()})
+        ex = "re_maxage rproxy rsp_slp s_wr_slp theme themes turbo df"
+        ka.update(**{k: 0 for k in ex.split()})
+        ex = "doctitle favico html_head mth textfiles"
+        ka.update(**{k: "" for k in ex.split()})
+        super(Cfg, self).__init__(
+            a=a or [],
+            v=v or [],
+            c=c,
+            s_wr_sz=512 * 1024,
+            unpost=600,
+            u2sort="s",
+            mtp=[],
+            mte="a",
+            lang="eng",
+            logout=573,
+            **ka
+        )
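with the Namespace boilerplate consolidated here, both test modules now import Cfg from tests.util; a minimal sketch of the resulting usage -- the volume/account specs below are illustrative, not copied from the suite:

```python
from tests.util import Cfg
from copyparty.authsrv import AuthSrv

def log(src, msg, c=0):
    print(msg)

# share ./srv/music read-only at /music, with one account "ed"
cfg = Cfg(v=["srv/music:music:r"], a=["ed:hunter2"])
asrv = AuthSrv(cfg, log)
```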
class NullBroker(object):
def say(*args):
pass