Compare commits — 42 commits:

`38c2dcce3e` `5b3a5fe76b` `d5a9bd80b2` `71c5565949` `db33d68d42` `e1c20c7a18` `d3f1b45ce3` `c7aa1a3558` `7b2bd6da83` `2bd955ba9f` `98dcaee210` `361aebf877` `ffc1610980` `233075aee7` `d1a4d335df` `96acbd3593` `4b876dd133` `a06c5eb048` `c9cdc3e1c1` `c0becc6418` `b17ccc38ee` `acfaacbd46` `8e0364efad` `e3043004ba` `b2aaf40a3e` `21db8833dc` `ec14c3944e` `20920e844f` `f9954bc4e5` `d450f61534` `2b50fc2010` `c2034f7bc5` `cec3bee020` `e1b9ac631f` `19ee64e5e3` `4f397b9b5b` `71775dcccb` `b383c08cc3` `fc88341820` `43bbd566d7` `e1dea7ef3e` `de2fedd2cd`
@@ -1,8 +1,21 @@

* do something cool
* **found a bug?** [create an issue!](https://github.com/9001/copyparty/issues) or let me know in the [discord](https://discord.gg/25J8CdTT6G) :>
* **fixed a bug?** create a PR or post a patch! big thx in advance :>
* **have a cool idea?** let's discuss it! anywhere's fine, you choose.

really tho, send a PR or an issue or whatever, all appreciated, anything goes, just behave aight 👍👍

but please:


# do not use AI / LLM when writing code

copyparty is 100% organic, free-range, human-written software!

> ⚠ you are now entering a no-copilot zone

the *only* place where LLM/AI *may* be accepted is for [localization](https://github.com/9001/copyparty/tree/hovudstraum/docs/rice#translations) if you are fluent and have confirmed that the translation is accurate.

sorry for the harsh tone, but this is important to me 🙏

but to be more specific,


# contribution ideas
README.md (21 lines changed)

@@ -147,6 +147,7 @@ just run **[copyparty-sfx.py](https://github.com/9001/copyparty/releases/latest/
* or if you are on android, [install copyparty in termux](#install-on-android)
* or maybe you have a [synology nas / dsm](./docs/synology-dsm.md)
* or if your computer is messed up and nothing else works, [try the pyz](#zipapp)
* or if your OS is dead, give the [bootable flashdrive / cd-rom](https://a.ocv.me/pub/stuff/edcd001/enterprise-edition/) a spin
* or if you don't trust copyparty yet and want to isolate it a little, then...
  * ...maybe [prisonparty](./bin/prisonparty.sh) to create a tiny [chroot](https://wiki.archlinux.org/title/Chroot) (very portable),
  * ...or [bubbleparty](./bin/bubbleparty.sh) to wrap it in [bubblewrap](https://github.com/containers/bubblewrap) (much better)
@@ -281,6 +282,8 @@ small collection of user feedback

`good enough`, `surprisingly correct`, `certified good software`, `just works`, `why`, `wow this is better than nextcloud`

* the UI is simply awful. if I were to describe it in detail, I could not stay within the bounds of decency


# motivations
@@ -300,6 +303,8 @@ project goals / philosophy

* adaptable, malleable, hackable
* no build steps; modify the js/python without needing node.js or anything like that

becoming rich is specifically *not* a motivation, but if you wanna donate then see my [github profile](https://github.com/9001) regarding donations for my FOSS stuff in general (also THANKS!)


## notes
@@ -329,7 +334,8 @@ roughly sorted by chance of encounter

* `--th-ff-jpg` may fix video thumbnails on some FFmpeg versions (macos, some linux)
* `--th-ff-swr` may fix audio thumbnails on some FFmpeg versions
* if the `up2k.db` (filesystem index) is on a samba-share or network disk, you'll get unpredictable behavior if the share is disconnected for a bit
-  * use `--hist` or the `hist` volflag (`-v [...]:c,hist=/tmp/foo`) to place the db on a local disk instead
+  * use `--hist` or the `hist` volflag (`-v [...]:c,hist=/tmp/foo`) to place the db and thumbnails on a local disk instead
+  * or, if you only want to move the db (and not the thumbnails), then use `--dbpath` or the `dbpath` volflag
* all volumes must exist / be available on startup; up2k (mtp especially) gets funky otherwise
* probably more, pls let me know
@@ -382,7 +388,8 @@ same order here too

* this is an msys2 bug, the regular windows edition of python is fine

* VirtualBox: sqlite throws `Disk I/O Error` when running in a VM and the up2k database is in a vboxsf
-  * use `--hist` or the `hist` volflag (`-v [...]:c,hist=/tmp/foo`) to place the db inside the vm instead
+  * use `--hist` or the `hist` volflag (`-v [...]:c,hist=/tmp/foo`) to place the db and thumbnails inside the vm instead
+  * or, if you only want to move the db (and not the thumbnails), then use `--dbpath` or the `dbpath` volflag
  * also happens on mergerfs, so put the db elsewhere

* Ubuntu: dragging files from certain folders into firefox or chrome is impossible
@@ -726,6 +733,7 @@ select which type of archive you want in the `[⚙️] config` tab:

* `up2k.db` and `dir.txt` are always excluded
* bsdtar supports streaming unzipping: `curl foo?zip | bsdtar -xv`
  * good, because copyparty's zip is faster than tar on small files
* but `?tar` is better for large files, especially if the total exceeds 4 GiB
* `zip_crc` will take longer to download since the server has to read each file twice
  * this is only to support MS-DOS PKZIP v2.04g (october 1993) and older
* how are you accessing copyparty actually
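since `?tar` streams the archive, a client can unpack it on the fly without ever holding the whole thing in memory; a minimal sketch of the same trick in python (server URL and folder are hypothetical, assuming copyparty's default port 3923):

```py
# stream a folder as an uncompressed tar and list its contents on the fly;
# mode "r|" keeps tarfile in pure streaming mode (no seeking)
import tarfile
import urllib.request

resp = urllib.request.urlopen("http://127.0.0.1:3923/music/?tar")
with tarfile.open(fileobj=resp, mode="r|") as tf:
    for ti in tf:
        print(ti.name, ti.size)
```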
@@ -804,6 +812,8 @@ if you are resuming a massive upload and want to skip hashing the files which al

if the server is behind a proxy which imposes a request-size limit, you can configure up2k to sneak below the limit with server-option `--u2sz` (the default is 96 MiB to support Cloudflare)

if you want to replace existing files on the server with new uploads by default, run with `--u2ow 2` (only works if users have the delete-permission, and can still be disabled with `🛡️` in the UI)


### file-search
@@ -1028,6 +1038,7 @@ click the `play` link next to an audio file, or copy the link target to [share i

open the `[🎺]` media-player-settings tab to configure it,
* "switches":
  * `[🔁]` repeats one single song forever
  * `[🔀]` shuffles the files inside each folder
  * `[preload]` starts loading the next track when it's about to end, reduces the silence between songs
    * `[full]` does a full preload by downloading the entire next file; good for unreliable connections, bad for slow connections
@@ -1592,6 +1603,8 @@ copyparty creates a subfolder named `.hist` inside each volume where it stores t

this can instead be kept in a single place using the `--hist` argument, or the `hist=` volflag, or a mix of both:
* `--hist ~/.cache/copyparty -v ~/music::r:c,hist=-` sets `~/.cache/copyparty` as the default place to put volume info, but `~/music` gets the regular `.hist` subfolder (`-` restores default behavior)

by default, the per-volume `up2k.db` sqlite3-database for `-e2d` and `-e2t` is stored next to the thumbnails according to the `--hist` option, but the global-option `--dbpath` and/or volflag `dbpath` can be used to put the database somewhere else

note:
* putting the hist-folders on an SSD is strongly recommended for performance
* markdown edits are always stored in a local `.hist` subdirectory
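a hedged sketch of how the two options could be combined in a config-file volume, using the same config syntax as the hook example further down (paths are made up):

```
[/music]
  ~/music
  accs:
    r: *
  flags:
    hist: /ssd/cache/copyparty  # thumbnails (and, unless overridden, the db)
    dbpath: /ssd/db/copyparty   # move only the sqlite3 databases here
```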
@@ -2127,7 +2140,9 @@ buggy feature? rip it out by setting any of the following environment variables

| env-var              | what it does |
| -------------------- | ------------ |
| `PRTY_NO_DB_LOCK`    | do not lock session/shares-databases for exclusive access |
| `PRTY_NO_IFADDR`     | disable ip/nic discovery by poking into your OS with ctypes |
| `PRTY_NO_IMPRESO`    | do not try to load js/css files using `importlib.resources` |
| `PRTY_NO_IPV6`       | disable some ipv6 support (should not be necessary since windows 2000) |
| `PRTY_NO_LZMA`       | disable streaming xz compression of incoming uploads |
| `PRTY_NO_MP`         | disable all use of the python `multiprocessing` module (actual multithreading, cpu-count for parsers/thumbnailers) |
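these kill-switches are plain environment variables which only need to be non-empty; a minimal sketch of the pattern, mirroring the `PRTY_NO_PIL` check visible in `th_srv.py` further down (the `HAVE_LZMA` name here is illustrative, not necessarily copyparty's actual variable):

```py
import os

# feature stays enabled unless its kill-switch env-var is set to anything
if os.environ.get("PRTY_NO_LZMA"):
    HAVE_LZMA = False
else:
    import lzma  # noqa: F401
    HAVE_LZMA = True
```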
@@ -2445,6 +2460,8 @@ below are some tweaks roughly ordered by usefulness:

* `--no-hash .` when indexing a network-disk if you don't care about the actual filehashes and only want the names/tags searchable
* if your volumes are on a network-disk such as NFS / SMB / s3, specifying larger values for `--iobuf` and/or `--s-rd-sz` and/or `--s-wr-sz` may help; try setting all of them to `524288` or `1048576` or `4194304`
* `--no-htp --hash-mt=0 --mtag-mt=1 --th-mt=1` minimizes the number of threads; can help in some eccentric environments (like the vscode debugger)
* when running on AlpineLinux or other musl-based distro, try mimalloc for higher performance (and twice as much RAM usage); `apk add mimalloc2` and run copyparty with env-var `LD_PRELOAD=/usr/lib/libmimalloc-secure.so.2`
  * note that mimalloc requires special care when combined with prisonparty and/or bubbleparty/bubblewrap; you must give it access to `/proc` and `/sys`, otherwise you'll encounter issues with FFmpeg (audio transcoding, thumbnails)
* `-j0` enables multiprocessing (actual multithreading), can reduce latency to `20+80/numCores` percent and generally improve performance in cpu-intensive workloads, for example:
  * lots of connections (many users or heavy clients)
  * simultaneous downloads and uploads saturating a 20gbps connection
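to put the `20+80/numCores` estimate into numbers:

```py
# latency with -j0 as a percentage of the single-process baseline,
# using the 20+80/numCores estimate quoted above
for cores in (2, 4, 8, 16):
    print(f"{cores:2d} cores: ~{20 + 80 / cores:.0f}% of baseline latency")
```

so an 8-core box could see roundtrips drop to roughly a third of the single-process latency.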
@@ -14,6 +14,8 @@ run copyparty with `--help-hooks` for usage details / hook type explanations (xm

* [discord-announce.py](discord-announce.py) announces new uploads on discord using webhooks ([example](https://user-images.githubusercontent.com/241032/215304439-1c1cb3c8-ec6f-4c17-9f27-81f969b1811a.png))
* [reject-mimetype.py](reject-mimetype.py) rejects uploads unless the mimetype is acceptable
* [into-the-cache-it-goes.py](into-the-cache-it-goes.py) avoids bugs in caching proxies by immediately downloading each file that is uploaded
* [podcast-normalizer.py](podcast-normalizer.py) creates a second file with dynamic-range-compression whenever an audio file is uploaded
  * good example of the `idx` [hook effect](https://github.com/9001/copyparty/blob/hovudstraum/docs/devnotes.md#hook-effects) to tell copyparty about additional files to scan/index


# upload batches
@@ -25,6 +27,7 @@ these are `--xiu` hooks; unlike `xbu` and `xau` (which get executed on every sin

# before upload

* [reject-extension.py](reject-extension.py) rejects uploads if they match a list of file extensions
* [reloc-by-ext.py](reloc-by-ext.py) redirects an upload to another destination based on the file extension
  * good example of the `reloc` [hook effect](https://github.com/9001/copyparty/blob/hovudstraum/docs/devnotes.md#hook-effects)

a minimal sketch of this family of hooks follows below.
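this is a hedged sketch of the simplest possible before-upload hook, assuming the same convention as reject-extension.py: the upload's filesystem path arrives in `argv[1]` when the `j` flag is not used, and a non-zero exit status rejects the upload (the extension list is made up):

```py
#!/usr/bin/env python3
import sys

BLOCKED = ("exe", "scr", "bat")  # hypothetical blocklist

def main():
    path = sys.argv[1]
    if path.lower().rsplit(".", 1)[-1] in BLOCKED:
        sys.exit(1)  # non-zero exit = reject this upload

if __name__ == "__main__":
    main()
```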

# on message
bin/hooks/podcast-normalizer.py (new executable file, 121 lines)

@@ -0,0 +1,121 @@
#!/usr/bin/env python3

import json
import os
import sys
import subprocess as sp


_ = r"""
sends all uploaded audio files through an aggressive
dynamic-range-compressor to even out the volume levels

dependencies:
  ffmpeg

being an xau hook, this gets eXecuted After Upload completion
but before copyparty has started hashing/indexing the file, so
we'll create a second normalized copy in a subfolder and tell
copyparty to hash/index that additional file as well

example usage as global config:
    -e2d -e2t --xau j,c1,bin/hooks/podcast-normalizer.py

parameters explained,
    e2d/e2t = enable database and metadata indexing
    xau = execute after upload
    j = this hook needs upload information as json (not just the filename)
    c1 = this hook returns json on stdout, so tell copyparty to read that

example usage as a volflag (per-volume config):
    -v srv/inc/pods:inc/pods:r:rw,ed:c,xau=j,c1,bin/hooks/podcast-normalizer.py
                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    (share fs-path srv/inc/pods at URL /inc/pods,
     readable by all, read-write for user ed,
     running this xau (exec-after-upload) plugin for all uploaded files)

example usage as a volflag in a copyparty config file:
    [/inc/pods]
      srv/inc/pods
      accs:
        r: *
        rw: ed
      flags:
        e2d  # enables file indexing
        e2t  # metadata tags too
        xau: j,c1,bin/hooks/podcast-normalizer.py
"""

########################################################################
### CONFIG

# filetypes to process; ignores everything else
EXTS = "mp3 flac ogg opus m4a aac wav wma"

# the name of the subdir to put the normalized files in
SUBDIR = "normalized"

########################################################################


# try to enable support for crazy filenames
try:
    from copyparty.util import fsenc
except:

    def fsenc(p):
        return p.encode("utf-8")


def main():
    # read info from copyparty
    inf = json.loads(sys.argv[1])
    vpath = inf["vp"]
    abspath = inf["ap"]

    # check if the file-extension is on the to-be-processed list
    ext = abspath.lower().split(".")[-1]
    if ext not in EXTS.split():
        return

    # jump into the folder where the file was uploaded
    # and create the subfolder to place the normalized copy inside
    dirpath, filename = os.path.split(abspath)
    os.chdir(fsenc(dirpath))
    os.makedirs(SUBDIR, exist_ok=True)

    # the input and output filenames to give ffmpeg
    fname_in = fsenc(f"./{filename}")
    fname_out = fsenc(f"{SUBDIR}/{filename}.opus")

    # fmt: off
    # create and run the ffmpeg command
    cmd = [
        b"ffmpeg",
        b"-nostdin",
        b"-hide_banner",
        b"-i", fname_in,
        b"-af", b"dynaudnorm=f=100:g=9",  # the normalizer config
        b"-c:a", b"libopus",
        b"-b:a", b"128k",
        fname_out,
    ]
    # fmt: on
    sp.check_output(cmd)

    # and finally, tell copyparty about the new file
    # so it appears in the database and rss-feed:
    vpath = f"{SUBDIR}/{filename}.opus"
    print(json.dumps({"idx": {"vp": [vpath]}}))

    # (it's fine to give it a relative path like that; it gets
    #  resolved relative to the folder the file was uploaded into)


if __name__ == "__main__":
    try:
        main()
    except Exception as ex:
        print("podcast-normalizer failed; %r" % (ex,))
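since the hook only reads `vp` and `ap` from the json, it can be smoke-tested outside copyparty by handing it a blob of the same shape (paths are made up; ffmpeg must be installed and the audio file must exist):

```py
# invoke the hook the way copyparty would: upload-info as json in argv[1]
import json
import subprocess

info = {"ap": "/tmp/demo/episode.mp3", "vp": "pods/episode.mp3"}
subprocess.run(["python3", "bin/hooks/podcast-normalizer.py", json.dumps(info)])
```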
@@ -50,6 +50,9 @@

* give a 3rd argument to install it to your copyparty config
* systemd service at [`systemd/cfssl.service`](systemd/cfssl.service)

### [`zfs-tune.py`](zfs-tune.py)
* tunes databases for better performance when stored on a zfs filesystem; also see the [openzfs docs](https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Workload%20Tuning.html#database-workloads) and specifically the SQLite subsection

# OS integration
init-scripts to start copyparty as a service
* [`systemd/copyparty.service`](systemd/copyparty.service) runs the sfx normally
@@ -1,6 +1,6 @@
 # Maintainer: icxes <dev.null@need.moe>
 pkgname=copyparty
-pkgver="1.16.17"
+pkgver="1.16.20"
 pkgrel=1
 pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
 arch=("any")

@@ -22,7 +22,7 @@ optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tag
 )
 source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
 backup=("etc/${pkgname}.d/init" )
-sha256sums=("6dba0df650bfa6c47ebffcd0c9ef450b49dd998b87265778470799f7cdcd6b00")
+sha256sums=("eec5c16bca8251467b19e1d1baff5524b03479439604c0568eacedf57dbe2a78")

 build() {
     cd "${srcdir}/${pkgname}-${pkgver}"
@@ -1,5 +1,5 @@
 {
-  "url": "https://github.com/9001/copyparty/releases/download/v1.16.17/copyparty-sfx.py",
-  "version": "1.16.17",
-  "hash": "sha256-D3hz4tr0/Qb8ySZvhI/eKTUvONbmb8RbwzTEHMWpA6o="
+  "url": "https://github.com/9001/copyparty/releases/download/v1.16.20/copyparty-sfx.py",
+  "version": "1.16.20",
+  "hash": "sha256-4qeJO9P7VjchSMcShRADJBIYgmCnPV7hqDhIqmvrweU="
 }
contrib/zfs-tune.py (new executable file, 107 lines)

@@ -0,0 +1,107 @@
#!/usr/bin/env python3

import os
import sqlite3
import sys
import traceback


"""
when the up2k-database is stored on a zfs volume, this may give
slightly higher performance (actual gains not measured yet)

NOTE: must be applied in combination with the related advice in the openzfs documentation;
https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Workload%20Tuning.html#database-workloads
and see specifically the SQLite subsection

it is assumed that all databases are stored in a single location,
for example with `--hist /var/store/hists`

three alternatives for running this script:

1. copy it into /var/store/hists and run "python3 zfs-tune.py s"
   (s = modify all databases below folder containing script)

2. cd into /var/store/hists and run "python3 ~/zfs-tune.py w"
   (w = modify all databases below current working directory)

3. python3 ~/zfs-tune.py /var/store/hists

if you use docker, run copyparty with `--hist /cfg/hists`, copy this script into /cfg, and run this:
podman run --rm -it --entrypoint /usr/bin/python3 ghcr.io/9001/copyparty-ac /cfg/zfs-tune.py s

"""


PAGESIZE = 65536


# borrowed from copyparty; short efficient stacktrace for errors
def min_ex(max_lines: int = 8, reverse: bool = False) -> str:
    et, ev, tb = sys.exc_info()
    stb = traceback.extract_tb(tb) if tb else traceback.extract_stack()[:-1]
    fmt = "%s:%d <%s>: %s"
    ex = [fmt % (fp.split(os.sep)[-1], ln, fun, txt) for fp, ln, fun, txt in stb]
    if et or ev or tb:
        ex.append("[%s] %s" % (et.__name__ if et else "(anonymous)", ev))
    return "\n".join(ex[-max_lines:][:: -1 if reverse else 1])


def set_pagesize(db_path):
    try:
        # check current page_size
        with sqlite3.connect(db_path) as db:
            v = db.execute("pragma page_size").fetchone()[0]
            if v == PAGESIZE:
                print(" `-- OK")
                return

        # https://www.sqlite.org/pragma.html#pragma_page_size
        # `- disable wal; set pagesize; vacuum
        # (copyparty will reenable wal if necessary)

        with sqlite3.connect(db_path) as db:
            db.execute("pragma journal_mode=delete")
            db.commit()

        with sqlite3.connect(db_path) as db:
            db.execute(f"pragma page_size = {PAGESIZE}")
            db.execute("vacuum")

        print(" `-- new pagesize OK")

    except Exception:
        err = min_ex().replace("\n", "\n -- ")
        print(f"FAILED: {db_path}\n -- {err}")


def main():
    top = os.path.dirname(os.path.abspath(__file__))
    cwd = os.path.abspath(os.getcwd())
    try:
        x = sys.argv[1]
    except:
        print(f"""
this script takes one mandatory argument:
specify 's' to start recursing from folder containing this script file ({top})
specify 'w' to start recursing from the current working directory ({cwd})
specify a path to start recursing from there
""")
        sys.exit(1)

    if x.lower() == "w":
        top = cwd
    elif x.lower() != "s":
        top = x

    for dirpath, dirs, files in os.walk(top):
        for fname in files:
            if not fname.endswith(".db"):
                continue
            db_path = os.path.join(dirpath, fname)
            print(db_path)
            set_pagesize(db_path)


if __name__ == "__main__":
    main()
@@ -228,7 +228,23 @@ def init_E(EE: EnvParams) -> None:
     if E.mod.endswith("__init__"):
         E.mod = os.path.dirname(E.mod)

+    if sys.platform == "win32":
+        try:
+            p = os.environ.get("XDG_CONFIG_HOME")
+            if not p:
+                raise Exception()
+            if p.startswith("~"):
+                p = os.path.expanduser(p)
+            p = os.path.abspath(os.path.realpath(p))
+            p = os.path.join(p, "copyparty")
+            if not os.path.isdir(p):
+                os.mkdir(p)
+            os.listdir(p)
+        except:
+            p = ""
+
+    if p:
+        E.cfg = p
+    elif sys.platform == "win32":
-    if sys.platform == "win32":
         bdir = os.environ.get("APPDATA") or os.environ.get("TEMP") or "."
         E.cfg = os.path.normpath(bdir + "/copyparty")
     elif sys.platform == "darwin":
@@ -1011,7 +1027,7 @@ def add_upload(ap):
     ap2.add_argument("--turbo", metavar="LVL", type=int, default=0, help="configure turbo-mode in up2k client; [\033[32m-1\033[0m] = forbidden/always-off, [\033[32m0\033[0m] = default-off and warn if enabled, [\033[32m1\033[0m] = default-off, [\033[32m2\033[0m] = on, [\033[32m3\033[0m] = on and disable datecheck")
     ap2.add_argument("--u2j", metavar="JOBS", type=int, default=2, help="web-client: number of file chunks to upload in parallel; 1 or 2 is good for low-latency (same-country) connections, 4-8 for android clients, 16 for cross-atlantic (max=64)")
     ap2.add_argument("--u2sz", metavar="N,N,N", type=u, default="1,64,96", help="web-client: default upload chunksize (MiB); sets \033[33mmin,default,max\033[0m in the settings gui. Each HTTP POST will aim for \033[33mdefault\033[0m, and never exceed \033[33mmax\033[0m. Cloudflare max is 96. Big values are good for cross-atlantic but may increase HDD fragmentation on some FS. Disable this optimization with [\033[32m1,1,1\033[0m]")
-    ap2.add_argument("--u2ow", metavar="NUM", type=int, default=0, help="web-client: default setting for when to overwrite existing files; [\033[32m0\033[0m]=never, [\033[32m1\033[0m]=if-client-newer, [\033[32m2\033[0m]=always (volflag=u2ow)")
+    ap2.add_argument("--u2ow", metavar="NUM", type=int, default=0, help="web-client: default setting for when to replace/overwrite existing files; [\033[32m0\033[0m]=never, [\033[32m1\033[0m]=if-client-newer, [\033[32m2\033[0m]=always (volflag=u2ow)")
     ap2.add_argument("--u2sort", metavar="TXT", type=u, default="s", help="upload order; [\033[32ms\033[0m]=smallest-first, [\033[32mn\033[0m]=alphabetical, [\033[32mfs\033[0m]=force-s, [\033[32mfn\033[0m]=force-n -- alphabetical is a bit slower on fiber/LAN but makes it easier to eyeball if everything went fine")
     ap2.add_argument("--write-uplog", action="store_true", help="write POST reports to textfiles in working-directory")
@@ -1361,6 +1377,7 @@ def add_thumbnail(ap):
     ap2.add_argument("--th-r-ffi", metavar="T,T", type=u, default="apng,avif,avifs,bmp,cbz,dds,dib,fit,fits,fts,gif,hdr,heic,heics,heif,heifs,icns,ico,jp2,jpeg,jpg,jpx,jxl,pbm,pcx,pfm,pgm,png,pnm,ppm,psd,qoi,sgi,tga,tif,tiff,webp,xbm,xpm", help="image formats to decode using ffmpeg")
     ap2.add_argument("--th-r-ffv", metavar="T,T", type=u, default="3gp,asf,av1,avc,avi,flv,h264,h265,hevc,m4v,mjpeg,mjpg,mkv,mov,mp4,mpeg,mpeg2,mpegts,mpg,mpg2,mts,nut,ogm,ogv,rm,ts,vob,webm,wmv", help="video formats to decode using ffmpeg")
     ap2.add_argument("--th-r-ffa", metavar="T,T", type=u, default="aac,ac3,aif,aiff,alac,alaw,amr,apac,ape,au,bonk,dfpwm,dts,flac,gsm,ilbc,it,itgz,itxz,itz,m4a,mdgz,mdxz,mdz,mo3,mod,mp2,mp3,mpc,mptm,mt2,mulaw,ogg,okt,opus,ra,s3m,s3gz,s3xz,s3z,tak,tta,ulaw,wav,wma,wv,xm,xmgz,xmxz,xmz,xpk", help="audio formats to decode using ffmpeg")
+    ap2.add_argument("--th-spec-cnv", metavar="T,T", type=u, default="it,itgz,itxz,itz,mdgz,mdxz,mdz,mo3,mod,s3m,s3gz,s3xz,s3z,xm,xmgz,xmxz,xmz,xpk", help="audio formats which provoke https://trac.ffmpeg.org/ticket/10797 (huge ram usage for s3xmodit spectrograms)")
     ap2.add_argument("--au-unpk", metavar="E=F.C", type=u, default="mdz=mod.zip, mdgz=mod.gz, mdxz=mod.xz, s3z=s3m.zip, s3gz=s3m.gz, s3xz=s3m.xz, xmz=xm.zip, xmgz=xm.gz, xmxz=xm.xz, itz=it.zip, itgz=it.gz, itxz=it.xz, cbz=jpg.cbz", help="audio/image formats to decompress before passing to ffmpeg")
@@ -1393,6 +1410,7 @@ def add_db_general(ap, hcores):
     ap2.add_argument("-e2vu", action="store_true", help="on hash mismatch: update the database with the new hash")
     ap2.add_argument("-e2vp", action="store_true", help="on hash mismatch: panic and quit copyparty")
     ap2.add_argument("--hist", metavar="PATH", type=u, default="", help="where to store volume data (db, thumbs); default is a folder named \".hist\" inside each volume (volflag=hist)")
+    ap2.add_argument("--dbpath", metavar="PATH", type=u, default="", help="override where the volume databases are to be placed; default is the same as \033[33m--hist\033[0m (volflag=dbpath)")
     ap2.add_argument("--no-hash", metavar="PTN", type=u, default="", help="regex: disable hashing of matching absolute-filesystem-paths during e2ds folder scans (volflag=nohash)")
     ap2.add_argument("--no-idx", metavar="PTN", type=u, default=noidx, help="regex: disable indexing of matching absolute-filesystem-paths during e2ds folder scans (volflag=noidx)")
     ap2.add_argument("--no-dirsz", action="store_true", help="do not show total recursive size of folders in listings, show inode size instead; slightly faster (volflag=nodirsz)")
@@ -1431,6 +1449,7 @@ def add_db_metadata(ap):

 def add_txt(ap):
     ap2 = ap.add_argument_group('textfile options')
+    ap2.add_argument("--md-hist", metavar="TXT", type=u, default="s", help="where to store old version of markdown files; [\033[32ms\033[0m]=subfolder, [\033[32mv\033[0m]=volume-histpath, [\033[32mn\033[0m]=nope/disabled (volflag=md_hist)")
     ap2.add_argument("-mcr", metavar="SEC", type=int, default=60, help="the textfile editor will check for serverside changes every \033[33mSEC\033[0m seconds")
     ap2.add_argument("-emp", action="store_true", help="enable markdown plugins -- neat but dangerous, big XSS risk")
     ap2.add_argument("--exp", action="store_true", help="enable textfile expansion -- replace {{self.ip}} and such; see \033[33m--help-exp\033[0m (volflag=exp)")
@@ -1,8 +1,8 @@
 # coding: utf-8

-VERSION = (1, 16, 18)
+VERSION = (1, 16, 21)
 CODENAME = "COPYparty"
-BUILD_DT = (2025, 3, 23)
+BUILD_DT = (2025, 4, 20)

 S_VERSION = ".".join(map(str, VERSION))
 S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)
@@ -360,6 +360,7 @@ class VFS(object):
         self.badcfg1 = False
         self.nodes: dict[str, VFS] = {}  # child nodes
         self.histtab: dict[str, str] = {}  # all realpath->histpath
+        self.dbpaths: dict[str, str] = {}  # all realpath->dbpath
         self.dbv: Optional[VFS] = None  # closest full/non-jump parent
         self.lim: Optional[Lim] = None  # upload limits; only set for dbv
         self.shr_src: Optional[tuple[VFS, str]] = None  # source vfs+rem of a share
@@ -381,12 +382,13 @@ class VFS(object):
             rp = realpath + ("" if realpath.endswith(os.sep) else os.sep)
             vp = vpath + ("/" if vpath else "")
             self.histpath = os.path.join(realpath, ".hist")  # db / thumbcache
+            self.dbpath = self.histpath
             self.all_vols = {vpath: self}  # flattened recursive
             self.all_nodes = {vpath: self}  # also jumpvols/shares
             self.all_aps = [(rp, self)]
             self.all_vps = [(vp, self)]
         else:
-            self.histpath = ""
+            self.histpath = self.dbpath = ""
             self.all_vols = {}
             self.all_nodes = {}
             self.all_aps = []
@@ -461,17 +463,23 @@ class VFS(object):

     def _copy_flags(self, name: str) -> dict[str, Any]:
         flags = {k: v for k, v in self.flags.items()}

         hist = flags.get("hist")
         if hist and hist != "-":
             zs = "{}/{}".format(hist.rstrip("/"), name)
             flags["hist"] = os.path.expandvars(os.path.expanduser(zs))

+        dbp = flags.get("dbpath")
+        if dbp and dbp != "-":
+            zs = "{}/{}".format(dbp.rstrip("/"), name)
+            flags["dbpath"] = os.path.expandvars(os.path.expanduser(zs))

         return flags

     def bubble_flags(self) -> None:
         if self.dbv:
             for k, v in self.dbv.flags.items():
-                if k not in ["hist"]:
+                if k not in ("hist", "dbpath"):
                     self.flags[k] = v

         for n in self.nodes.values():
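in other words, a parent volume's `hist`/`dbpath` flag is not inherited verbatim; each child volume gets the parent's path with its own name appended. a quick illustration of the expansion above (paths made up):

```py
# what _copy_flags does to a parent volflag for a child named "music":
hist = "/tmp/cdb"
name = "music"
print("{}/{}".format(hist.rstrip("/"), name))  # -> /tmp/cdb/music
```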
@@ -1759,7 +1767,7 @@ class AuthSrv(object):
                 pass
             elif vflag:
                 vflag = os.path.expandvars(os.path.expanduser(vflag))
-                vol.histpath = uncyg(vflag) if WINDOWS else vflag
+                vol.histpath = vol.dbpath = uncyg(vflag) if WINDOWS else vflag
             elif self.args.hist:
                 for nch in range(len(hid)):
                     hpath = os.path.join(self.args.hist, hid[: nch + 1])
@@ -1780,12 +1788,45 @@ class AuthSrv(object):
                     with open(powner, "wb") as f:
                         f.write(me)

-                    vol.histpath = hpath
+                    vol.histpath = vol.dbpath = hpath
                     break

             vol.histpath = absreal(vol.histpath)

+        for vol in vfs.all_vols.values():
+            hid = self.hid_cache[vol.realpath]
+            vflag = vol.flags.get("dbpath")
+            if vflag == "-":
+                pass
+            elif vflag:
+                vflag = os.path.expandvars(os.path.expanduser(vflag))
+                vol.dbpath = uncyg(vflag) if WINDOWS else vflag
+            elif self.args.dbpath:
+                for nch in range(len(hid)):
+                    hpath = os.path.join(self.args.dbpath, hid[: nch + 1])
+                    bos.makedirs(hpath)
+
+                    powner = os.path.join(hpath, "owner.txt")
+                    try:
+                        with open(powner, "rb") as f:
+                            owner = f.read().rstrip()
+                    except:
+                        owner = None
+
+                    me = afsenc(vol.realpath).rstrip()
+                    if owner not in [None, me]:
+                        continue
+
+                    if owner is None:
+                        with open(powner, "wb") as f:
+                            f.write(me)
+
+                    vol.dbpath = hpath
+                    break
+
+            vol.dbpath = absreal(vol.dbpath)
             if vol.dbv:
-                if bos.path.exists(os.path.join(vol.histpath, "up2k.db")):
+                if bos.path.exists(os.path.join(vol.dbpath, "up2k.db")):
                     promote.append(vol)
                     vol.dbv = None
                 else:
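the `hid[: nch + 1]` loop gives each volume a folder named by the shortest prefix of its hash-id that is either unclaimed or already owned by that volume (ownership is recorded in `owner.txt`); a small illustration with a made-up hash-id:

```py
# candidate folder names tried in order, shortest first
hid = "k5h2w9c4"  # hypothetical volume hash-id
for nch in range(len(hid)):
    print(hid[: nch + 1])  # k, k5, k5h, ... stop at the first usable one
```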
@@ -1800,9 +1841,7 @@ class AuthSrv(object):
                 "\n the following jump-volumes were generated to assist the vfs.\n As they contain a database (probably from v0.11.11 or older),\n they are promoted to full volumes:"
             ]
             for vol in promote:
-                ta.append(
-                    " /{} ({}) ({})".format(vol.vpath, vol.realpath, vol.histpath)
-                )
+                ta.append(" /%s (%s) (%s)" % (vol.vpath, vol.realpath, vol.dbpath))

             self.log("\n\n".join(ta) + "\n", c=3)
@@ -1813,13 +1852,27 @@ class AuthSrv(object):
             is_shr = shr and zv.vpath.split("/")[0] == shr
             if histp and not is_shr and histp in rhisttab:
                 zv2 = rhisttab[histp]
-                t = "invalid config; multiple volumes share the same histpath (database location):\n histpath: %s\n volume 1: /%s [%s]\n volume 2: %s [%s]"
+                t = "invalid config; multiple volumes share the same histpath (database+thumbnails location):\n histpath: %s\n volume 1: /%s [%s]\n volume 2: %s [%s]"
                 t = t % (histp, zv2.vpath, zv2.realpath, zv.vpath, zv.realpath)
                 self.log(t, 1)
                 raise Exception(t)
             rhisttab[histp] = zv
             vfs.histtab[zv.realpath] = histp

+        rdbpaths = {}
+        vfs.dbpaths = {}
+        for zv in vfs.all_vols.values():
+            dbp = zv.dbpath
+            is_shr = shr and zv.vpath.split("/")[0] == shr
+            if dbp and not is_shr and dbp in rdbpaths:
+                zv2 = rdbpaths[dbp]
+                t = "invalid config; multiple volumes share the same dbpath (database location):\n dbpath: %s\n volume 1: /%s [%s]\n volume 2: %s [%s]"
+                t = t % (dbp, zv2.vpath, zv2.realpath, zv.vpath, zv.realpath)
+                self.log(t, 1)
+                raise Exception(t)
+            rdbpaths[dbp] = zv
+            vfs.dbpaths[zv.realpath] = dbp

         for vol in vfs.all_vols.values():
             use = False
             for k in ["zipmaxn", "zipmaxs"]:
@@ -83,6 +83,7 @@ def vf_vmap() -> dict[str, str]:
         "md_sbf",
         "lg_sba",
         "md_sba",
+        "md_hist",
         "nrand",
         "u2ow",
         "og_desc",
@@ -204,6 +205,7 @@ flagcats = {
         "d2v": "disables file verification, overrides -e2v*",
         "d2d": "disables all database stuff, overrides -e2*",
         "hist=/tmp/cdb": "puts thumbnails and indexes at that location",
+        "dbpath=/tmp/cdb": "puts indexes at that location",
         "scan=60": "scan for new files every 60sec, same as --re-maxage",
         "nohash=\\.iso$": "skips hashing file contents if path matches *.iso",
         "noidx=\\.iso$": "fully ignores the contents at paths matching *.iso",
@@ -291,6 +293,7 @@ flagcats = {
         "og_ua": "if defined: only send OG html if useragent matches this regex",
     },
     "textfiles": {
+        "md_hist": "where to put markdown backups; s=subfolder, v=volHist, n=nope",
         "exp": "enable textfile expansion; see --help-exp",
         "exp_md": "placeholders to expand in markdown files; see --help",
         "exp_lg": "placeholders to expand in prologue/epilogue; see --help",
@@ -57,6 +57,7 @@ from .util import (
     UnrecvEOF,
     WrongPostKey,
     absreal,
+    afsenc,
     alltrace,
     atomic_move,
     b64dec,
@@ -1205,11 +1206,6 @@ class HttpCli(object):
             else:
                 return self.tx_res(res_path)

-            if res_path != undot(res_path):
-                t = "malicious user; attempted path traversal; req(%r) vp(%r) => %r"
-                self.log(t % (self.req, "/" + self.vpath, res_path), 1)
-                self.cbonk(self.conn.hsrv.gmal, self.req, "trav", "path traversal")
-
         self.tx_404()
         return False
@@ -2998,9 +2994,6 @@ class HttpCli(object):
        vfs, rem = self.asrv.vfs.get(vpath, self.uname, False, True)
        rem = sanitize_vpath(rem, "/")
        fn = vfs.canonical(rem)
        if not fn.startswith(vfs.realpath):
            self.log("invalid mkdir %r %r" % (self.gctx, vpath), 1)
            raise Pebkac(422)

        if not nullwrite:
            fdir = os.path.dirname(fn)
@@ -3501,6 +3494,7 @@ class HttpCli(object):

         fp = os.path.join(fp, fn)
         rem = "{}/{}".format(rp, fn).strip("/")
+        dbv, vrem = vfs.get_dbv(rem)

         if not rem.endswith(".md") and not self.can_delete:
             raise Pebkac(400, "only markdown pls")
@@ -3555,13 +3549,27 @@ class HttpCli(object):
         mdir, mfile = os.path.split(fp)
         fname, fext = mfile.rsplit(".", 1) if "." in mfile else (mfile, "md")
         mfile2 = "{}.{:.3f}.{}".format(fname, srv_lastmod, fext)
-        try:
-            dp = os.path.join(mdir, ".hist")
-            bos.mkdir(dp)
-            hidedir(dp)
-        except:
-            pass
-        wrename(self.log, fp, os.path.join(mdir, ".hist", mfile2), vfs.flags)
+
+        dp = ""
+        hist_cfg = dbv.flags["md_hist"]
+        if hist_cfg == "v":
+            vrd = vsplit(vrem)[0]
+            zb = hashlib.sha512(afsenc(vrd)).digest()
+            zs = ub64enc(zb).decode("ascii")[:24].lower()
+            dp = "%s/md/%s/%s/%s" % (dbv.histpath, zs[:2], zs[2:4], zs)
+            self.log("moving old version to %s/%s" % (dp, mfile2))
+            if bos.makedirs(dp):
+                with open(os.path.join(dp, "dir.txt"), "wb") as f:
+                    f.write(afsenc(vrd))
+        elif hist_cfg == "s":
+            dp = os.path.join(mdir, ".hist")
+        try:
+            bos.mkdir(dp)
+            hidedir(dp)
+        except:
+            pass
+        if dp:
+            wrename(self.log, fp, os.path.join(dp, mfile2), vfs.flags)

         assert self.parser.gen  # !rm
         p_field, _, p_data = next(self.parser.gen)
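the `v` mode shards old versions under the volume's histpath using a hash of the markdown file's folder; a worked example of the layout computed above, assuming `ub64enc`/`afsenc` behave like urlsafe-base64 and utf-8 encoding (the vpath is made up):

```py
import base64
import hashlib

vrd = "docs/notes"  # folder containing the edited markdown file
zb = hashlib.sha512(vrd.encode("utf-8")).digest()
zs = base64.urlsafe_b64encode(zb).decode("ascii")[:24].lower()
print("md/%s/%s/%s" % (zs[:2], zs[2:4], zs))  # e.g. md/1a/9x/1a9x...
```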
@@ -3634,13 +3642,12 @@ class HttpCli(object):
                 wunlink(self.log, fp, vfs.flags)
                 raise Pebkac(403, t)

-        vfs, rem = vfs.get_dbv(rem)
         self.conn.hsrv.broker.say(
             "up2k.hash_file",
-            vfs.realpath,
-            vfs.vpath,
-            vfs.flags,
-            vsplit(rem)[0],
+            dbv.realpath,
+            dbv.vpath,
+            dbv.flags,
+            vsplit(vrem)[0],
             fn,
             self.ip,
             new_lastmod,
@@ -4239,6 +4246,7 @@ class HttpCli(object):
                 self.log(t % (data_end / M, lower / M, upper / M), 6)
                 with self.u2mutex:
                     if data_end > self.u2fh.aps.get(ap_data, data_end):
+                        fhs: Optional[set[typing.BinaryIO]] = None
                         try:
                             fhs = self.u2fh.cache[ap_data].all_fhs
                             for fh in fhs:
@@ -4246,7 +4254,11 @@ class HttpCli(object):
                             self.u2fh.aps[ap_data] = data_end
                             self.log("pipe: flushed %d up2k-FDs" % (len(fhs),))
                         except Exception as ex:
-                            self.log("pipe: u2fh flush failed: %r" % (ex,))
+                            if fhs is None:
+                                err = "file is not being written to right now"
+                            else:
+                                err = repr(ex)
+                            self.log("pipe: u2fh flush failed: " + err)

             if lower >= data_end:
                 if data_end:
@@ -4870,7 +4882,7 @@ class HttpCli(object):
             self.reply(pt.encode("utf-8"), status=rc)
             return True

-        if "th" in self.ouparam:
+        if "th" in self.ouparam and str(self.ouparam["th"])[:1] in "jw":
             return self.tx_svg("e" + pt[:3])

         # most webdav clients will not send credentials until they
@@ -5798,7 +5810,13 @@ class HttpCli(object):

         thp = None
         if self.thumbcli and not nothumb:
-            thp = self.thumbcli.get(dbv, vrem, int(st.st_mtime), th_fmt)
+            try:
+                thp = self.thumbcli.get(dbv, vrem, int(st.st_mtime), th_fmt)
+            except Pebkac as ex:
+                if ex.code == 500 and th_fmt[:1] in "jw":
+                    self.log("failed to convert [%s]:\n%s" % (abspath, ex), 3)
+                    return self.tx_svg("--error--\ncheck\nserver\nlog")
+                raise

         if thp:
             return self.tx_file(thp)
@@ -6020,9 +6038,11 @@ class HttpCli(object):
         # check for old versions of files,
         # [num-backups, most-recent, hist-path]
         hist: dict[str, tuple[int, float, str]] = {}
-        histdir = os.path.join(fsroot, ".hist")
-        ptn = RE_MDV
         try:
+            if vf["md_hist"] != "s":
+                raise Exception()
+            histdir = os.path.join(fsroot, ".hist")
+            ptn = RE_MDV
             for hfn in bos.listdir(histdir):
                 m = ptn.match(hfn)
                 if not m:
@@ -94,10 +94,21 @@ class Ico(object):
        <?xml version="1.0" encoding="UTF-8"?>
        <svg version="1.1" viewBox="0 0 100 {}" xmlns="http://www.w3.org/2000/svg"><g>
        <rect width="100%" height="100%" fill="#{}" />
-       <text x="50%" y="50%" dominant-baseline="middle" text-anchor="middle" xml:space="preserve"
+       <text x="50%" y="{}" dominant-baseline="middle" text-anchor="middle" xml:space="preserve"
         fill="#{}" font-family="monospace" font-size="14px" style="letter-spacing:.5px">{}</text>
        </g></svg>
        """
-        svg = svg.format(h, c[:6], c[6:], html_escape(ext, True))
+
+        txt = html_escape(ext, True)
+        if "\n" in txt:
+            lines = txt.split("\n")
+            n = len(lines)
+            y = "20%" if n == 2 else "10%" if n == 3 else "0"
+            zs = '<tspan x="50%%" dy="1.2em">%s</tspan>'
+            txt = "".join([zs % (x,) for x in lines])
+        else:
+            y = "50%"
+
+        svg = svg.format(h, c[:6], y, c[6:], txt)

         return "image/svg+xml", svg.encode("utf-8")
@@ -15,7 +15,7 @@ try:
         raise Exception()

     HAVE_ARGON2 = True
-    from argon2 import __version__ as argon2ver
+    from argon2 import exceptions as argon2ex
 except:
     HAVE_ARGON2 = False
@@ -64,6 +64,7 @@ from .util import (
     expat_ver,
     gzip,
     load_ipu,
+    lock_file,
     min_ex,
     mp,
     odfusion,
@@ -73,6 +74,9 @@ from .util import (
     ub64enc,
 )

+if HAVE_SQLITE3:
+    import sqlite3
+
 if TYPE_CHECKING:
     try:
         from .mdns import MDNS
@@ -84,6 +88,10 @@ if PY2:
     range = xrange  # type: ignore


+VER_SESSION_DB = 1
+VER_SHARES_DB = 2
+
+
 class SvcHub(object):
     """
     Hosts all services which cannot be parallelized due to reliance on monolithic resources.
@@ -186,8 +194,14 @@ class SvcHub(object):

         if not args.use_fpool and args.j != 1:
             args.no_fpool = True
-            t = "multithreading enabled with -j {}, so disabling fpool -- this can reduce upload performance on some filesystems"
-            self.log("root", t.format(args.j))
+            t = "multithreading enabled with -j {}, so disabling fpool -- this can reduce upload performance on some filesystems, and make some antivirus-softwares "
+            c = 0
+            if ANYWIN:
+                t += "(especially Microsoft Defender) stress your CPU and HDD severely during big uploads"
+                c = 3
+            else:
+                t += "consume more resources (CPU/HDD) than normal"
+            self.log("root", t.format(args.j), c)

         if not args.no_fpool and args.j != 1:
             t = "WARNING: ignoring --use-fpool because multithreading (-j{}) is enabled"
@@ -406,25 +420,49 @@ class SvcHub(object):
             self.log("root", t, 3)
             return

-        import sqlite3
+        assert sqlite3  # type: ignore # !rm

         # policy:
         # the sessions-db is whatever, if something looks broken then just nuke it

-        create = True
         db_path = self.args.ses_db
-        self.log("root", "opening sessions-db %s" % (db_path,))
-        for n in range(2):
+        db_lock = db_path + ".lock"
+        try:
+            create = not os.path.getsize(db_path)
+        except:
+            create = True
+        zs = "creating new" if create else "opening"
+        self.log("root", "%s sessions-db %s" % (zs, db_path))
+
+        for tries in range(2):
+            sver = 0
             try:
                 db = sqlite3.connect(db_path)
                 cur = db.cursor()
                 try:
+                    zs = "select v from kv where k='sver'"
+                    sver = cur.execute(zs).fetchall()[0][0]
+                    if sver > VER_SESSION_DB:
+                        zs = "this version of copyparty only understands session-db v%d and older; the db is v%d"
+                        raise Exception(zs % (VER_SESSION_DB, sver))
+
                     cur.execute("select count(*) from us").fetchone()
-                    create = False
-                    break
                 except:
-                    pass
+                    if sver:
+                        raise
+                    sver = 1
+                    self._create_session_db(cur)
+
+                err = self._verify_session_db(cur, sver, db_path)
+                if err:
+                    tries = 99
+                    self.args.no_ses = True
+                    self.log("root", err, 3)
+                break
+
             except Exception as ex:
-                if n:
+                if tries or sver > VER_SESSION_DB:
                     raise
-                t = "sessions-db corrupt; deleting and recreating: %r"
+                t = "sessions-db is unusable; deleting and recreating: %r"
                 self.log("root", t % (ex,), 3)
                 try:
                     cur.close()  # type: ignore
@@ -434,8 +472,13 @@ class SvcHub(object):
                 try:
                     db.close()  # type: ignore
                 except:
                     pass
+                try:
+                    os.unlink(db_lock)
+                except:
+                    pass
                 os.unlink(db_path)

+    def _create_session_db(self, cur: "sqlite3.Cursor") -> None:
         sch = [
             r"create table kv (k text, v int)",
             r"create table us (un text, si text, t0 int)",
@@ -445,17 +488,44 @@ class SvcHub(object):
             r"create index us_t0 on us(t0)",
             r"insert into kv values ('sver', 1)",
         ]
+        for cmd in sch:
+            cur.execute(cmd)
+        self.log("root", "created new sessions-db")

-        assert db  # type: ignore # !rm
-        assert cur  # type: ignore # !rm
-        if create:
-            for cmd in sch:
-                cur.execute(cmd)
-            self.log("root", "created new sessions-db")
-        db.commit()
+    def _verify_session_db(self, cur: "sqlite3.Cursor", sver: int, db_path: str) -> str:
+        # ensure writable (maybe owned by other user)
+        db = cur.connection
+
+        try:
+            zil = cur.execute("select v from kv where k='pid'").fetchall()
+            if len(zil) > 1:
+                raise Exception()
+            owner = zil[0][0]
+        except:
+            owner = 0
+
+        if not lock_file(db_path + ".lock"):
+            t = "the sessions-db [%s] is already in use by another copyparty instance (pid:%d). This is not supported; please provide another database with --ses-db or give this copyparty-instance its entirely separate config-folder by setting another path in the XDG_CONFIG_HOME env-var. You can also disable this safeguard by setting env-var PRTY_NO_DB_LOCK=1. Will now disable sessions and instead use plaintext passwords in cookies."
+            return t % (db_path, owner)
+
+        vars = (("pid", os.getpid()), ("ts", int(time.time() * 1000)))
+        if owner:
+            # wear-estimate: 2 cells; offsets 0x10, 0x50, 0x19720
+            for k, v in vars:
+                cur.execute("update kv set v=? where k=?", (v, k))
+        else:
+            # wear-estimate: 3~4 cells; offsets 0x10, 0x50, 0x19180, 0x19710, 0x36000, 0x360b0, 0x36b90
+            for k, v in vars:
+                cur.execute("insert into kv values(?, ?)", (k, v))
+
+        if sver < VER_SESSION_DB:
+            cur.execute("delete from kv where k='sver'")
+            cur.execute("insert into kv values('sver',?)", (VER_SESSION_DB,))
+
+        db.commit()
+        cur.close()
+        db.close()
+        return ""

     def setup_share_db(self) -> None:
         al = self.args
@@ -464,7 +534,7 @@ class SvcHub(object):
             al.shr = ""
             return

-        import sqlite3
+        assert sqlite3  # type: ignore # !rm

         al.shr = al.shr.strip("/")
         if "/" in al.shr or not al.shr:
@@ -475,34 +545,48 @@ class SvcHub(object):
         al.shr = "/%s/" % (al.shr,)
         al.shr1 = al.shr[1:]

-        create = True
-        modified = False
         # policy:
         # the shares-db is important, so panic if something is wrong

         db_path = self.args.shr_db
-        self.log("root", "opening shares-db %s" % (db_path,))
-        for n in range(2):
-            try:
-                db = sqlite3.connect(db_path)
-                cur = db.cursor()
-                try:
-                    cur.execute("select count(*) from sh").fetchone()
-                    create = False
-                    break
-                except:
-                    pass
-            except Exception as ex:
-                if n:
-                    raise
-                t = "shares-db corrupt; deleting and recreating: %r"
-                self.log("root", t % (ex,), 3)
-                try:
-                    cur.close()  # type: ignore
-                except:
-                    pass
-                try:
-                    db.close()  # type: ignore
-                except:
-                    pass
-                os.unlink(db_path)
+        db_lock = db_path + ".lock"
+        try:
+            create = not os.path.getsize(db_path)
+        except:
+            create = True
+        zs = "creating new" if create else "opening"
+        self.log("root", "%s shares-db %s" % (zs, db_path))
+
+        sver = 0
+        try:
+            db = sqlite3.connect(db_path)
+            cur = db.cursor()
+            if not create:
+                zs = "select v from kv where k='sver'"
+                sver = cur.execute(zs).fetchall()[0][0]
+                if sver > VER_SHARES_DB:
+                    zs = "this version of copyparty only understands shares-db v%d and older; the db is v%d"
+                    raise Exception(zs % (VER_SHARES_DB, sver))
+
+                cur.execute("select count(*) from sh").fetchone()
+        except Exception as ex:
+            t = "could not open shares-db; will now panic...\nthe following database must be repaired or deleted before you can launch copyparty:\n%s\n\nERROR: %s\n\nadditional details:\n%s\n"
+            self.log("root", t % (db_path, ex, min_ex()), 1)
+            raise
+
+        try:
+            zil = cur.execute("select v from kv where k='pid'").fetchall()
+            if len(zil) > 1:
+                raise Exception()
+            owner = zil[0][0]
+        except:
+            owner = 0
+
+        if not lock_file(db_lock):
+            t = "the shares-db [%s] is already in use by another copyparty instance (pid:%d). This is not supported; please provide another database with --shr-db or give this copyparty-instance its entirely separate config-folder by setting another path in the XDG_CONFIG_HOME env-var. You can also disable this safeguard by setting env-var PRTY_NO_DB_LOCK=1. Will now panic."
+            t = t % (db_path, owner)
+            self.log("root", t, 1)
+            raise Exception(t)

         sch1 = [
             r"create table kv (k text, v int)",
@@ -514,34 +598,37 @@ class SvcHub(object):
             r"create index sf_k on sf(k)",
             r"create index sh_k on sh(k)",
             r"create index sh_t1 on sh(t1)",
             r"insert into kv values ('sver', 2)",
         ]

         assert db  # type: ignore # !rm
         assert cur  # type: ignore # !rm
         if create:
-            dver = 2
-            modified = True
+            if not sver:
+                sver = VER_SHARES_DB
             for cmd in sch1 + sch2:
                 cur.execute(cmd)
             self.log("root", "created new shares-db")
-        else:
-            (dver,) = cur.execute("select v from kv where k = 'sver'").fetchall()[0]

-        if dver == 1:
-            modified = True
+        if sver == 1:
             for cmd in sch2:
                 cur.execute(cmd)
             cur.execute("update sh set st = 0")
             self.log("root", "shares-db schema upgrade ok")

-        if modified:
-            for cmd in [
-                r"delete from kv where k = 'sver'",
-                r"insert into kv values ('sver', %d)" % (2,),
-            ]:
-                cur.execute(cmd)
-            db.commit()
+        if sver < VER_SHARES_DB:
+            cur.execute("delete from kv where k='sver'")
+            cur.execute("insert into kv values('sver',?)", (VER_SHARES_DB,))
+
+        vars = (("pid", os.getpid()), ("ts", int(time.time() * 1000)))
+        if owner:
+            # wear-estimate: same as sessions-db
+            for k, v in vars:
+                cur.execute("update kv set v=? where k=?", (v, k))
+        else:
+            for k, v in vars:
+                cur.execute("insert into kv values(?, ?)", (k, v))

+        db.commit()
         cur.close()
         db.close()
@@ -679,10 +766,11 @@ class SvcHub(object):
                 t += ", "
             t += "\033[0mNG: \033[35m" + sng

-        t += "\033[0m, see --deps"
-        self.log("dependencies", t, 6)
+        t += "\033[0m, see --deps (this is fine btw)"
+        self.log("optional-dependencies", t, 6)

     def _check_env(self) -> None:
+        al = self.args
         try:
             files = os.listdir(E.cfg)
         except:
@@ -699,6 +787,21 @@ class SvcHub(object):
         if self.args.bauth_last:
             self.log("root", "WARNING: ignoring --bauth-last due to --no-bauth", 3)

+        have_tcp = False
+        for zs in al.i:
+            if not zs.startswith("unix:"):
+                have_tcp = True
+        if not have_tcp:
+            zb = False
+            zs = "z zm zm4 zm6 zmv zmvv zs zsv zv"
+            for zs in zs.split():
+                if getattr(al, zs, False):
+                    setattr(al, zs, False)
+                    zb = True
+            if zb:
+                t = "only listening on unix-sockets; cannot enable zeroconf/mdns/ssdp as requested"
+                self.log("root", t, 3)

         if not self.args.no_dav:
             from .dxml import DXML_OK
@@ -763,7 +866,7 @@ class SvcHub(object):
             vl = [os.path.expandvars(os.path.expanduser(x)) for x in vl]
             setattr(al, k, vl)

-        for k in "lo hist ssl_log".split(" "):
+        for k in "lo hist dbpath ssl_log".split(" "):
             vs = getattr(al, k)
             if vs:
                 vs = os.path.expandvars(os.path.expanduser(vs))
@@ -54,6 +54,7 @@ def gen_fdesc(sz: int, crc32: int, z64: bool) -> bytes:

 def gen_hdr(
     h_pos: Optional[int],
+    z64: bool,
     fn: str,
     sz: int,
     lastmod: int,

@@ -70,7 +71,6 @@ def gen_hdr(
     # appnote 4.5 / zip 3.0 (2008) / unzip 6.0 (2009) says to add z64
     # extinfo for values which exceed H, but that becomes an off-by-one
     # (can't tell if it was clamped or exactly maxval), make it obvious
-    z64 = sz >= 0xFFFFFFFF
     z64v = [sz, sz] if z64 else []
     if h_pos and h_pos >= 0xFFFFFFFF:
         # central, also consider ptr to original header
@@ -244,6 +244,7 @@ class StreamZip(StreamArc):

         sz = st.st_size
         ts = st.st_mtime
+        h_pos = self.pos

         crc = 0
         if self.pre_crc:

@@ -252,8 +253,12 @@ class StreamZip(StreamArc):

         crc &= 0xFFFFFFFF

-        h_pos = self.pos
-        buf = gen_hdr(None, name, sz, ts, self.utf8, crc, self.pre_crc)
+        # some unzip-programs expect a 64bit data-descriptor
+        # even if the only 32bit-exceeding value is the offset,
+        # so force that by placeholdering the filesize too
+        z64 = h_pos >= 0xFFFFFFFF or sz >= 0xFFFFFFFF
+
+        buf = gen_hdr(None, z64, name, sz, ts, self.utf8, crc, self.pre_crc)
         yield self._ct(buf)

         for buf in yieldfile(src, self.args.iobuf):

@@ -266,8 +271,6 @@ class StreamZip(StreamArc):

         self.items.append((name, sz, ts, crc, h_pos))

-        z64 = sz >= 4 * 1024 * 1024 * 1024
-
         if z64 or not self.pre_crc:
             buf = gen_fdesc(sz, crc, z64)
             yield self._ct(buf)
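the reason `>=` is used with `0xFFFFFFFF` rather than `>`: classic 32-bit zip fields use that exact value as the "consult the zip64 extinfo" sentinel, so a real size or offset at the boundary would be ambiguous; a quick illustration:

```py
# 0xFFFFFFFF (4 GiB - 1) is both the largest value a 32-bit zip field can
# hold and the conventional "see zip64" sentinel, hence >= instead of >
sz = 0xFFFFFFFF
print(sz == 4 * 1024 * 1024 * 1024 - 1)  # True
print(sz >= 0xFFFFFFFF)                  # True -> gets zip64 treatment
```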
@@ -306,7 +309,8 @@ class StreamZip(StreamArc):

         cdir_pos = self.pos
         for name, sz, ts, crc, h_pos in self.items:
-            buf = gen_hdr(h_pos, name, sz, ts, self.utf8, crc, self.pre_crc)
+            z64 = h_pos >= 0xFFFFFFFF or sz >= 0xFFFFFFFF
+            buf = gen_hdr(h_pos, z64, name, sz, ts, self.utf8, crc, self.pre_crc)
             mbuf += self._ct(buf)
             if len(mbuf) >= 16384:
                 yield mbuf
@@ -566,7 +566,7 @@ class TcpSrv(object):
             ip = None
         ips = list(t1) + list(t2)
         qri = self.args.qri
-        if self.args.zm and not qri:
+        if self.args.zm and not qri and ips:
             name = self.args.name + ".local"
             t1[name] = next(v for v in (t1 or t2).values())
             ips = [name] + ips
@@ -1,13 +1,15 @@
 # coding: utf-8
 from __future__ import print_function, unicode_literals

+import errno
 import os
+import stat

 from .__init__ import TYPE_CHECKING
 from .authsrv import VFS
 from .bos import bos
 from .th_srv import EXTS_AC, HAVE_WEBP, thumb_path
-from .util import Cooldown
+from .util import Cooldown, Pebkac

 if True:  # pylint: disable=using-constant-test
     from typing import Optional, Union

@@ -16,6 +18,9 @@ if TYPE_CHECKING:
     from .httpsrv import HttpSrv


+IOERROR = "reading the file was denied by the server os; either due to filesystem permissions, selinux, apparmor, or similar:\n%r"
+
+
 class ThumbCli(object):
     def __init__(self, hsrv: "HttpSrv") -> None:
         self.broker = hsrv.broker
@@ -124,7 +129,7 @@ class ThumbCli(object):

         tpath = thumb_path(histpath, rem, mtime, fmt, self.fmt_ffa)
         tpaths = [tpath]
-        if fmt == "w":
+        if fmt[:1] == "w":
             # also check for jpg (maybe webp is unavailable)
             tpaths.append(tpath.rsplit(".", 1)[0] + ".jpg")
@@ -157,8 +162,22 @@ class ThumbCli(object):
         if abort:
             return None

-        if not bos.path.getsize(os.path.join(ptop, rem)):
-            return None
+        ap = os.path.join(ptop, rem)
+        try:
+            st = bos.stat(ap)
+            if not st.st_size or not stat.S_ISREG(st.st_mode):
+                return None
+
+            with open(ap, "rb", 4) as f:
+                if not f.read(4):
+                    raise Exception()
+        except OSError as ex:
+            if ex.errno == errno.ENOENT:
+                raise Pebkac(404)
+            else:
+                raise Pebkac(500, IOERROR % (ex,))
+        except Exception as ex:
+            raise Pebkac(500, IOERROR % (ex,))

         x = self.broker.ask("thumbsrv.get", ptop, rem, mtime, fmt)
         return x.get()  # type: ignore
@@ -4,8 +4,10 @@ from __future__ import print_function, unicode_literals
 import hashlib
 import logging
 import os
+import re
 import shutil
 import subprocess as sp
+import tempfile
 import threading
 import time
@@ -18,6 +20,7 @@ from .mtag import HAVE_FFMPEG, HAVE_FFPROBE, au_unpk, ffprobe
|
||||
from .util import BytesIO # type: ignore
|
||||
from .util import (
|
||||
FFMPEG_URL,
|
||||
VF_CAREFUL,
|
||||
Cooldown,
|
||||
Daemon,
|
||||
afsenc,
|
||||
@@ -48,6 +51,10 @@ HAVE_WEBP = False
|
||||
|
||||
EXTS_TH = set(["jpg", "webp", "png"])
|
||||
EXTS_AC = set(["opus", "owa", "caf", "mp3"])
|
||||
EXTS_SPEC_SAFE = set("aif aiff flac mp3 opus wav".split())
|
||||
|
||||
PTN_TS = re.compile("^-?[0-9a-f]{8,10}$")
|
||||
|
||||
|
||||
try:
|
||||
if os.environ.get("PRTY_NO_PIL"):
|
||||
@@ -163,12 +170,15 @@ class ThumbSrv(object):
|
||||
|
||||
self.mutex = threading.Lock()
|
||||
self.busy: dict[str, list[threading.Condition]] = {}
|
||||
self.untemp: dict[str, list[str]] = {}
|
||||
self.ram: dict[str, float] = {}
|
||||
self.memcond = threading.Condition(self.mutex)
|
||||
self.stopping = False
|
||||
self.rm_nullthumbs = True # forget failed conversions on startup
|
||||
self.nthr = max(1, self.args.th_mt)
|
||||
|
||||
self.exts_spec_unsafe = set(self.args.th_spec_cnv.split(","))
|
||||
|
||||
self.q: Queue[Optional[tuple[str, str, str, VFS]]] = Queue(self.nthr * 4)
|
||||
for n in range(self.nthr):
|
||||
Daemon(self.worker, "thumb-{}-{}".format(n, self.nthr))
|
||||
@@ -385,8 +395,12 @@ class ThumbSrv(object):
|
||||
self.log(msg, c)
|
||||
if getattr(ex, "returncode", 0) != 321:
|
||||
if fun == funs[-1]:
|
||||
with open(ttpath, "wb") as _:
|
||||
pass
|
||||
try:
|
||||
with open(ttpath, "wb") as _:
|
||||
pass
|
||||
except Exception as ex:
|
||||
t = "failed to create the file [%s]: %r"
|
||||
self.log(t % (ttpath, ex), 3)
|
||||
else:
|
||||
# ffmpeg may spawn empty files on windows
|
||||
try:
|
||||
@@ -399,13 +413,24 @@ class ThumbSrv(object):
|
||||
|
||||
try:
|
||||
wrename(self.log, ttpath, tpath, vn.flags)
|
||||
except:
|
||||
except Exception as ex:
|
||||
if not os.path.exists(tpath):
|
||||
t = "failed to move [%s] to [%s]: %r"
|
||||
self.log(t % (ttpath, tpath, ex), 3)
|
||||
pass
|
||||
|
||||
untemp = []
|
||||
with self.mutex:
|
||||
subs = self.busy[tpath]
|
||||
del self.busy[tpath]
|
||||
self.ram.pop(ttpath, None)
|
||||
untemp = self.untemp.pop(ttpath, None) or []
|
||||
|
||||
for ap in untemp:
|
||||
try:
|
||||
wunlink(self.log, ap, VF_CAREFUL)
|
||||
except:
|
||||
pass
|
||||
|
||||
for x in subs:
|
||||
with x:
|
||||
@@ -659,15 +684,43 @@ class ThumbSrv(object):
|
||||
if "ac" not in ret:
|
||||
raise Exception("not audio")
|
||||
|
||||
fext = abspath.split(".")[-1].lower()
|
||||
|
||||
# https://trac.ffmpeg.org/ticket/10797
|
||||
# expect 1 GiB every 600 seconds when duration is tricky;
|
||||
# simple filetypes are generally safer so let's special-case those
|
||||
safe = ("flac", "wav", "aif", "aiff", "opus")
|
||||
coeff = 1800 if abspath.split(".")[-1].lower() in safe else 600
|
||||
dur = ret[".dur"][1] if ".dur" in ret else 300
|
||||
coeff = 1800 if fext in EXTS_SPEC_SAFE else 600
|
||||
dur = ret[".dur"][1] if ".dur" in ret else 900
|
||||
need = 0.2 + dur / coeff
|
||||
self.wait4ram(need, tpath)
|
||||
|
||||
infile = abspath
|
||||
if dur >= 900 or fext in self.exts_spec_unsafe:
|
||||
with tempfile.NamedTemporaryFile(suffix=".spec.flac", delete=False) as f:
|
||||
f.write(b"h")
|
||||
infile = f.name
|
||||
try:
|
||||
self.untemp[tpath].append(infile)
|
||||
except:
|
||||
self.untemp[tpath] = [infile]
|
||||
|
||||
# fmt: off
|
||||
cmd = [
|
||||
b"ffmpeg",
|
||||
b"-nostdin",
|
||||
b"-v", b"error",
|
||||
b"-hide_banner",
|
||||
b"-i", fsenc(abspath),
|
||||
b"-map", b"0:a:0",
|
||||
b"-ac", b"1",
|
||||
b"-ar", b"48000",
|
||||
b"-sample_fmt", b"s16",
|
||||
b"-t", b"900",
|
||||
b"-y", fsenc(infile),
|
||||
]
|
||||
# fmt: on
|
||||
self._run_ff(cmd, vn)
|
||||
|
||||
fc = "[0:a:0]aresample=48000{},showspectrumpic=s="
|
||||
if "3" in fmt:
|
||||
fc += "1280x1024,crop=1420:1056:70:48[o]"
|
||||
@@ -687,7 +740,7 @@ class ThumbSrv(object):
|
||||
b"-nostdin",
|
||||
b"-v", b"error",
|
||||
b"-hide_banner",
|
||||
b"-i", fsenc(abspath),
|
||||
b"-i", fsenc(infile),
|
||||
b"-filter_complex", fc.encode("utf-8"),
|
||||
b"-map", b"[o]",
|
||||
b"-frames:v", b"1",
|
||||
@@ -991,6 +1044,8 @@ class ThumbSrv(object):
|
||||
# thumb file
|
||||
try:
|
||||
b64, ts, ext = f.split(".")
|
||||
if len(ts) > 8 and PTN_TS.match(ts):
|
||||
ts = "yeahokay"
|
||||
if len(b64) != 24 or len(ts) != 8 or ext not in exts:
|
||||
raise Exception()
|
||||
except:
|
||||
|
||||
@@ -134,9 +134,9 @@ class U2idx(object):
         assert sqlite3  # type: ignore  # !rm

         ptop = vn.realpath
-        histpath = self.asrv.vfs.histtab.get(ptop)
+        histpath = self.asrv.vfs.dbpaths.get(ptop)
         if not histpath:
-            self.log("no histpath for %r" % (ptop,))
+            self.log("no dbpath for %r" % (ptop,))
             return None

         db_path = os.path.join(histpath, "up2k.db")
@@ -94,7 +94,7 @@ VF_AFFECTS_INDEXING = set(zsg.split(" "))

 SBUSY = "cannot receive uploads right now;\nserver busy with %s.\nPlease wait; the client will retry..."

-HINT_HISTPATH = "you could try moving the database to another location (preferably an SSD or NVME drive) using either the --hist argument (global option for all volumes), or the hist volflag (just for this volume)"
+HINT_HISTPATH = "you could try moving the database to another location (preferably an SSD or NVME drive) using either the --hist argument (global option for all volumes), or the hist volflag (just for this volume), or, if you want to keep the thumbnails in the current location and only move the database itself, then use --dbpath or volflag dbpath"


 NULLSTAT = os.stat_result((0, -1, -1, 0, 0, 0, 0, 0, 0, 0))
@@ -1096,9 +1096,9 @@ class Up2k(object):
         self, ptop: str, flags: dict[str, Any]
     ) -> Optional[tuple["sqlite3.Cursor", str]]:
         """mutex(main,reg) me"""
-        histpath = self.vfs.histtab.get(ptop)
+        histpath = self.vfs.dbpaths.get(ptop)
         if not histpath:
-            self.log("no histpath for %r" % (ptop,))
+            self.log("no dbpath for %r" % (ptop,))
             return None

         db_path = os.path.join(histpath, "up2k.db")
@@ -1344,12 +1344,15 @@ class Up2k(object):
         ]
         excl += [absreal(x) for x in excl]
         excl += list(self.vfs.histtab.values())
+        excl += list(self.vfs.dbpaths.values())
         if WINDOWS:
             excl = [x.replace("/", "\\") for x in excl]
         else:
             # ~/.wine/dosdevices/z:/ and such
             excl.extend(("/dev", "/proc", "/run", "/sys"))

+        excl = list({k: 1 for k in excl})
+
         if self.args.re_dirsz:
             db.c.execute("delete from ds")
             db.n += 1
@@ -5102,7 +5105,7 @@ class Up2k(object):

     def _snap_reg(self, ptop: str, reg: dict[str, dict[str, Any]]) -> None:
         now = time.time()
-        histpath = self.vfs.histtab.get(ptop)
+        histpath = self.vfs.dbpaths.get(ptop)
         if not histpath:
             return
@@ -114,8 +114,14 @@ IP6ALL = "0:0:0:0:0:0:0:0"


+try:
+    import ctypes
+    import fcntl
+
+    HAVE_FCNTL = True
+except:
+    HAVE_FCNTL = False
+
 try:
     import ctypes
     import termios
 except:
     pass
@@ -246,7 +252,7 @@ SYMTIME = PY36 and os.utime in os.supports_follow_symlinks
 META_NOBOTS = '<meta name="robots" content="noindex, nofollow">\n'

 # smart enough to understand javascript while also ignoring rel="nofollow"
-BAD_BOTS = r"Barkrowler|bingbot|BLEXBot|Googlebot|GPTBot|PetalBot|SeekportBot|SemrushBot|YandexBot"
+BAD_BOTS = r"Barkrowler|bingbot|BLEXBot|Googlebot|GoogleOther|GPTBot|PetalBot|SeekportBot|SemrushBot|YandexBot"

 FFMPEG_URL = "https://www.gyan.dev/ffmpeg/builds/ffmpeg-git-full.7z"

@@ -1546,6 +1552,12 @@ def vol_san(vols: list["VFS"], txt: bytes) -> bytes:
         txt = txt.replace(bap.replace(b"\\", b"\\\\"), bvp)
         txt = txt.replace(bhp.replace(b"\\", b"\\\\"), bvph)

+        if vol.histpath != vol.dbpath:
+            bdp = vol.dbpath.encode("utf-8")
+            bdph = b"$db(/" + bvp + b")"
+            txt = txt.replace(bdp, bdph)
+            txt = txt.replace(bdp.replace(b"\\", b"\\\\"), bdph)
+
     if txt != txt0:
         txt += b"\r\nNOTE: filepaths sanitized; see serverlog for correct values"

@@ -3934,8 +3946,75 @@ def hidedir(dp) -> None:
         pass


+_flocks = {}
+
+
+def _lock_file_noop(ap: str) -> bool:
+    return True
+
+
+def _lock_file_ioctl(ap: str) -> bool:
+    assert fcntl  # type: ignore  # !rm
+    try:
+        fd = _flocks.pop(ap)
+        os.close(fd)
+    except:
+        pass
+
+    fd = os.open(ap, os.O_RDWR | os.O_CREAT, 438)
+    # NOTE: the fcntl.lockf identifier is (pid,node);
+    # the lock will be dropped if os.close(os.open(ap))
+    # is performed anywhere else in this thread
+
+    try:
+        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
+        _flocks[ap] = fd
+        return True
+    except Exception as ex:
+        eno = getattr(ex, "errno", -1)
+        try:
+            os.close(fd)
+        except:
+            pass
+        if eno in (errno.EAGAIN, errno.EACCES):
+            return False
+        print("WARNING: unexpected errno %d from fcntl.lockf; %r" % (eno, ex))
+        return True
+
+
+def _lock_file_windows(ap: str) -> bool:
+    try:
+        import msvcrt
+
+        try:
+            fd = _flocks.pop(ap)
+            os.close(fd)
+        except:
+            pass
+
+        fd = os.open(ap, os.O_RDWR | os.O_CREAT, 438)
+        msvcrt.locking(fd, msvcrt.LK_NBLCK, 1)
+        return True
+    except Exception as ex:
+        eno = getattr(ex, "errno", -1)
+        if eno == errno.EACCES:
+            return False
+        print("WARNING: unexpected errno %d from msvcrt.locking; %r" % (eno, ex))
+        return True
+
+
+if os.environ.get("PRTY_NO_DB_LOCK"):
+    lock_file = _lock_file_noop
+elif ANYWIN:
+    lock_file = _lock_file_windows
+elif HAVE_FCNTL:
+    lock_file = _lock_file_ioctl
+else:
+    lock_file = _lock_file_noop
+
+
 try:
-    if sys.version_info < (3, 10):
+    if sys.version_info < (3, 10) or os.environ.get("PRTY_NO_IMPRESO"):
         # py3.8 doesn't have .files
         # py3.9 has broken .is_file
         raise ImportError()
@@ -271,6 +271,7 @@ var Ls = {
     "ml_eq": "audio equalizer",
     "ml_drc": "dynamic range compressor",

+    "mt_loop": "loop/repeat one song\">🔁",
     "mt_shuf": "shuffle the songs in each folder\">🔀",
     "mt_aplay": "autoplay if there is a song-ID in the link you clicked to access the server$N$Ndisabling this will also stop the page URL from being updated with song-IDs when playing music, to prevent autoplay if these settings are lost but the URL remains\">a▶",
     "mt_preload": "start loading the next song near the end for gapless playback\">preload",
@@ -316,6 +317,7 @@ var Ls = {
     "mm_eunk": "Unknown Errol",
     "mm_e404": "Could not play audio; error 404: File not found.",
     "mm_e403": "Could not play audio; error 403: Access denied.\n\nTry pressing F5 to reload, maybe you got logged out",
+    "mm_e500": "Could not play audio; error 500: Check server logs.",
     "mm_e5xx": "Could not play audio; server error ",
     "mm_nof": "not finding any more audio files nearby",
     "mm_prescan": "Looking for music to play next...",
@@ -330,6 +332,7 @@ var Ls = {
     "f_bigtxt": "this file is {0} MiB large -- really view as text?",
     "fbd_more": '<div id="blazy">showing <code>{0}</code> of <code>{1}</code> files; <a href="#" id="bd_more">show {2}</a> or <a href="#" id="bd_all">show all</a></div>',
     "fbd_all": '<div id="blazy">showing <code>{0}</code> of <code>{1}</code> files; <a href="#" id="bd_all">show all</a></div>',
+    "f_anota": "only {0} of the {1} items were selected;\nto select the full folder, first scroll to the bottom",

     "f_dls": 'the file links in the current folder have\nbeen changed into download links',
@@ -874,6 +877,7 @@ var Ls = {
     "ml_eq": "audio equalizer (tonejustering)",
     "ml_drc": "compressor (volum-utjevning)",

+    "mt_loop": "spill den samme sangen om og om igjen\">🔁",
     "mt_shuf": "sangene i hver mappe$Nspilles i tilfeldig rekkefølge\">🔀",
     "mt_aplay": "forsøk å starte avspilling hvis linken du klikket på for å åpne nettsiden inneholder en sang-ID$N$Nhvis denne deaktiveres så vil heller ikke nettside-URLen bli oppdatert med sang-ID'er når musikk spilles, i tilfelle innstillingene skulle gå tapt og nettsiden lastes på ny\">a▶",
     "mt_preload": "hent ned litt av neste sang i forkant,$Nslik at pausen i overgangen blir mindre\">forles",
@@ -919,6 +923,7 @@ var Ls = {
     "mm_eunk": "Ukjent feil",
     "mm_e404": "Avspilling feilet: Fil ikke funnet.",
     "mm_e403": "Avspilling feilet: Tilgang nektet.\n\nKanskje du ble logget ut?\nPrøv å trykk F5 for å laste siden på nytt.",
+    "mm_e500": "Avspilling feilet: Rusk i maskineriet, sjekk serverloggen.",
     "mm_e5xx": "Avspilling feilet: ",
     "mm_nof": "finner ikke flere sanger i nærheten",
     "mm_prescan": "Leter etter neste sang...",
@@ -933,6 +938,7 @@ var Ls = {
     "f_bigtxt": "denne filen er hele {0} MiB -- vis som tekst?",
     "fbd_more": '<div id="blazy">viser <code>{0}</code> av <code>{1}</code> filer; <a href="#" id="bd_more">vis {2}</a> eller <a href="#" id="bd_all">vis alle</a></div>',
     "fbd_all": '<div id="blazy">viser <code>{0}</code> av <code>{1}</code> filer; <a href="#" id="bd_all">vis alle</a></div>',
+    "f_anota": "kun {0} av totalt {1} elementer ble markert;\nfor å velge alt må du bla til bunnen av mappen først",

     "f_dls": 'linkene i denne mappen er nå\nomgjort til nedlastningsknapper',
@@ -1477,6 +1483,7 @@ var Ls = {
     "ml_eq": "音频均衡器",
     "ml_drc": "动态范围压缩器",

+    "mt_loop": "循环播放当前的歌曲\">🔁", //m
     "mt_shuf": "在每个文件夹中随机播放歌曲\">🔀",
     "mt_aplay": "如果链接中有歌曲 ID,则自动播放,禁用此选项将停止在播放音乐时更新页面 URL 中的歌曲 ID,以防止在设置丢失但 URL 保留时自动播放\">自动播放▶",
     "mt_preload": "在歌曲快结束时开始加载下一首歌,以实现无缝播放\">预加载",
@@ -1522,6 +1529,7 @@ var Ls = {
     "mm_eunk": "未知错误",
     "mm_e404": "无法播放音频;错误 404:文件未找到。",
     "mm_e403": "无法播放音频;错误 403:访问被拒绝。\n\n尝试按 F5 重新加载,也许你已被注销",
+    "mm_e500": "无法播放音频;错误 500:检查服务器日志。", //m
     "mm_e5xx": "无法播放音频;服务器错误",
     "mm_nof": "附近找不到更多音频文件",
     "mm_prescan": "正在寻找下一首音乐...",
@@ -2289,6 +2297,7 @@ var mpl = (function () {

     ebi('op_player').innerHTML = (
         '<div><h3>' + L.cl_opts + '</h3><div>' +
+        '<a href="#" class="tgl btn" id="au_loop" tt="' + L.mt_loop + '</a>' +
         '<a href="#" class="tgl btn" id="au_shuf" tt="' + L.mt_shuf + '</a>' +
         '<a href="#" class="tgl btn" id="au_aplay" tt="' + L.mt_aplay + '</a>' +
         '<a href="#" class="tgl btn" id="au_preload" tt="' + L.mt_preload + '</a>' +
@@ -2340,6 +2349,10 @@ var mpl = (function () {
        "os_ctl": bcfg_get('au_os_ctl', have_mctl) && have_mctl,
        'traversals': 0,
     };
+    bcfg_bind(r, 'loop', 'au_loop', false, function (v) {
+        if (mp.au)
+            mp.au.loop = v;
+    });
     bcfg_bind(r, 'shuf', 'au_shuf', false, function () {
         mp.read_order(); // don't bind
     });
@@ -2552,7 +2565,7 @@ var mpl = (function () {
     ebi('np_artist').textContent = np.artist || (fns.length > 1 ? fns[0] : '');
     ebi('np_title').textContent = np.title || '';
     ebi('np_dur').textContent = np['.dur'] || '';
-    ebi('np_url').textContent = get_vpath() + np.file.split('?')[0];
+    ebi('np_url').textContent = uricom_dec(get_evpath()) + np.file.split('?')[0];
     if (!MOBILE && cover)
         ebi('np_img').setAttribute('src', cover);
     else
@@ -2818,6 +2831,14 @@ function MPlayer() {
         r.fau.loop = true;
         r.fau.play();
     };
+
+    r.set_ev = function () {
+        mp.au.onended = evau_end;
+        mp.au.onerror = evau_error;
+        mp.au.onprogress = pbar.drawpos;
+        mp.au.onplaying = mpui.progress_updater;
+        mp.au.onloadeddata = mp.au.onloadedmetadata = mp.nopause;
+    };
 }
@@ -4055,10 +4076,7 @@ function play(tid, is_ev, seek) {
     else {
         mp.au = new Audio();
         mp.au2 = new Audio();
-        mp.au.onerror = evau_error;
-        mp.au.onprogress = pbar.drawpos;
-        mp.au.onplaying = mpui.progress_updater;
-        mp.au.onended = next_song;
+        mp.set_ev();
         widget.open();
     }
     mp.init_fau();
@@ -4071,13 +4089,9 @@ function play(tid, is_ev, seek) {
         var t = mp.au;
         mp.au = mp.au2;
         mp.au2 = t;
-        t.onerror = t.onprogress = t.onended = null;
+        t.onerror = t.onprogress = t.onended = t.loop = null;
         t.ld = 0; //owa
-        mp.au.onerror = evau_error;
-        mp.au.onprogress = pbar.drawpos;
-        mp.au.onplaying = mpui.progress_updater;
-        mp.au.onloadeddata = mp.au.onloadedmetadata = mp.nopause;
-        mp.au.onended = next_song;
+        mp.set_ev();
         t = mp.au.currentTime;
         if (isNum(t) && t > 0.1)
             mp.au.currentTime = 0;
@@ -4115,6 +4129,7 @@ function play(tid, is_ev, seek) {

     try {
         mp.nopause();
+        mp.au.loop = mpl.loop;
         if (mpl.aplay || is_ev !== -1)
             mp.au.play();

@@ -4159,6 +4174,15 @@ function scroll2playing() {
 }


+function evau_end(e) {
+    if (!mpl.loop)
+        return next_song(e);
+    ev(e);
+    mp.au.currentTime = 0;
+    mp.au.play();
+}
+
+
 // event from the audio object if something breaks
 function evau_error(e) {
     var err = '',
@@ -4202,6 +4226,7 @@ function evau_error(e) {
     }
     var em = '' + eplaya.error.message,
         mfile = '\n\nFile: «' + uricom_dec(eplaya.src.split('/').pop()) + '»',
+        e500 = L.mm_e500,
         e404 = L.mm_e404,
         e403 = L.mm_e403;

@@ -4214,6 +4239,9 @@ function evau_error(e) {
     if (em.startsWith('404: '))
         err = e404;

+    if (em.startsWith('500: '))
+        err = e500;
+
     toast.warn(15, esc(basenames(err + mfile)));
     console.log(basenames(err + mfile));

@@ -4225,7 +4253,9 @@ function evau_error(e) {
         if (this.status < 400)
             return;

-        err = this.status == 403 ? e403 : this.status == 404 ? e404 :
+        err = this.status == 403 ? e403 :
+            this.status == 404 ? e404 :
+            this.status == 500 ? e500 :
             L.mm_e5xx + this.status;

         toast.warn(15, esc(basenames(err + mfile)));
@@ -4424,7 +4454,8 @@ function eval_hash() {

 function read_dsort(txt) {
     dnsort = dnsort ? 1 : 0;
-    clmod(ebi('nsort'), 'on', (sread('nsort') || dnsort) == 1);
+    ENATSORT = NATSORT && (sread('nsort') || dnsort) == 1;
+    clmod(ebi('nsort'), 'on', ENATSORT);
     try {
         var zt = (('' + txt).trim() || 'href').split(/,+/g);
         dsort = [];
@@ -4470,9 +4501,6 @@ function sortfiles(nodes) {

     sopts = sopts && sopts.length ? sopts : jcp(dsort);

-    var collator = !clgot(ebi('nsort'), 'on') ? null :
-        new Intl.Collator([], {numeric: true});
-
     try {
         var is_srch = false;
         if (nodes[0]['rp']) {
@@ -4524,8 +4552,9 @@ function sortfiles(nodes) {
                 }
                 if (v2 === undefined) return 1 * rev;

-                var ret = rev * (typ == 'int' ? (v1 - v2) : collator ?
-                    collator.compare(v1, v2) : v1.localeCompare(v2));
+                var ret = rev * (typ == 'int' ? (v1 - v2) :
+                    ENATSORT ? NATSORT.compare(v1, v2) :
+                    v1.localeCompare(v2));

                 if (ret === 0)
                     ret = onodes.indexOf(n1) - onodes.indexOf(n2);
@@ -5963,7 +5992,8 @@ var showfile = (function () {
     };

     r.mktree = function () {
-        var html = ['<li class="bn">' + L.tv_lst + '<br />' + linksplit(get_vpath()).join('<span>/</span>') + '</li>'];
+        var crumbs = linksplit(get_evpath()).join('<span>/</span>'),
+            html = ['<li class="bn">' + L.tv_lst + '<br />' + crumbs + '</li>'];
         for (var a = 0; a < r.files.length; a++) {
             var file = r.files[a];
             html.push('<li><a href="?doc=' +
@@ -6538,8 +6568,8 @@ function tree_scrolltoo(q) {
     var ctr = ebi('tree'),
         em = parseFloat(getComputedStyle(act).fontSize),
         top = act.offsetTop + ul.offsetTop,
-        min = top - 11 * em,
-        max = top - (ctr.offsetHeight - 10 * em);
+        min = top - 20 * em,
+        max = top - (ctr.offsetHeight - 16 * em);

     if (ctr.scrollTop > min)
         ctr.scrollTop = Math.floor(min);
@@ -6710,7 +6740,8 @@ var ahotkeys = function (e) {
             return ebi('griden').click();
     }

-    if ((aet == 'tr' || aet == 'td') && ae.closest('#files')) {
+    var in_ftab = (aet == 'tr' || aet == 'td') && ae.closest('#files');
+    if (in_ftab) {
         var d = '', rem = 0;
         if (aet == 'td') ae = ae.closest('tr'); //ie11
         if (k == 'ArrowUp' || k == 'Up') d = 'previous';
@@ -6727,12 +6758,19 @@ var ahotkeys = function (e) {
             msel.selui();
             return ev(e);
         }
     }
-    if (!aet || (ae && ae.closest('#ggrid'))) {
+    if (in_ftab || !aet || (ae && ae.closest('#ggrid'))) {
         if ((k == 'KeyA' || k == 'a') && ctrl(e)) {
-            var sel = msel.getsel(),
+            var ntot = treectl.lsc.files.length + treectl.lsc.dirs.length,
+                sel = msel.getsel(),
                 all = msel.getall();

             msel.evsel(e, sel.length < all.length);
             msel.origin_id(null);
+            if (ntot > all.length)
+                toast.warn(10, L.f_anota.format(all.length, ntot), L.f_anota);
+            else if (toast.tag == L.f_anota)
+                toast.hide();
             return ev(e);
         }
     }
@@ -7258,6 +7296,7 @@ var treectl = (function () {
     treesz = clamp(icfg_get('treesz', 16), 10, 50);

     var resort = function () {
+        ENATSORT = NATSORT && clgot(ebi('nsort'), 'on');
         treectl.gentab(get_evpath(), treectl.lsc);
     };
     bcfg_bind(r, 'ireadme', 'ireadme', true);
@@ -7586,8 +7625,8 @@ var treectl = (function () {
     };

     function reload_tree() {
-        var cdir = r.nextdir || get_vpath(),
-            cevp = get_evpath(),
+        var cevp = get_evpath(),
+            cdir = r.nextdir || uricom_dec(cevp),
             links = QSA('#treeul a+a'),
             nowrap = QS('#tree.nowrap') && QS('#hovertree.on'),
             act = null;
@@ -8130,9 +8169,16 @@ var treectl = (function () {
         }
         delete res['a'];
         var keys = Object.keys(res);
-        keys.sort(function (a, b) { return a.localeCompare(b); });
+        for (var a = 0; a < keys.length; a++)
+            keys[a] = [uricom_dec(keys[a]), keys[a]];
+
+        if (ENATSORT)
+            keys.sort(function (a, b) { return NATSORT.compare(a[0], b[0]); });
+        else
+            keys.sort(function (a, b) { return a[0].localeCompare(b[0]); });
+
         for (var a = 0; a < keys.length; a++) {
-            var kk = keys[a],
+            var kk = keys[a][1],
                 m = /(\?k=[^\n]+)/.exec(kk),
                 kdk = m ? m[1] : '',
                 ks = kk.replace(kdk, '').slice(1),
@@ -9772,7 +9818,7 @@ function wintitle(txt, noname) {
     if (s_name && !noname)
         txt = s_name + ' ' + txt;

-    txt += get_vpath().slice(1, -1).split('/').pop();
+    txt += uricom_dec(get_evpath()).slice(1, -1).split('/').pop();

     document.title = txt;
 }
@@ -122,7 +122,7 @@
        <input type="hidden" id="la" name="act" value="login" />
        <input type="password" id="lp" name="cppwd" placeholder="  password" />
        <input type="hidden" name="uhash" id="uhash" value="x" />
-       <input type="submit" id="ls" value="Login" />
+       <input type="submit" id="ls" value="login" />
        {% if chpw %}
        <a id="x" href="#">change password</a>
        {% endif %}
@@ -381,6 +381,9 @@ html.y .btn:focus {
     box-shadow: 0 .1em .2em #037 inset;
     outline: #037 solid .1em;
 }
+input, button {
+    font-family: var(--font-main), sans-serif;
+}
 input[type="submit"] {
     cursor: pointer;
 }
@@ -1415,7 +1415,7 @@ function up2k_init(subtle) {
     if (FIREFOX && good_files.length > 3000)
         msg.push(L.u_ff_many + "\n\n");

-    msg.push(L.u_asku.format(good_files.length, esc(get_vpath())) + '<ul>');
+    msg.push(L.u_asku.format(good_files.length, esc(uricom_dec(get_evpath()))) + '<ul>');
     for (var a = 0, aa = Math.min(20, good_files.length); a < aa; a++)
         msg.push('<li>' + esc(good_files[a][1]) + '</li>');
@@ -461,6 +461,13 @@ function namesan(txt, win, fslash) {
 }


+var NATSORT, ENATSORT;
+try {
+    NATSORT = new Intl.Collator([], {numeric: true});
+}
+catch (ex) { }
+
+
 var crctab = (function () {
     var c, tab = [];
     for (var n = 0; n < 256; n++) {
@@ -614,6 +621,33 @@ function showsort(tab) {
         }
     }
 }
+function st_cmp_num(a, b) {
+    a = a[0];
+    b = b[0];
+    return (
+        a === null ? -1 :
+        b === null ? 1 :
+        (a - b)
+    );
+}
+function st_cmp_nat(a, b) {
+    a = a[0];
+    b = b[0];
+    return (
+        a === null ? -1 :
+        b === null ? 1 :
+        NATSORT.compare(a, b)
+    );
+}
+function st_cmp_gen(a, b) {
+    a = a[0];
+    b = b[0];
+    return (
+        a === null ? -1 :
+        b === null ? 1 :
+        a.localeCompare(b)
+    );
+}
 function sortTable(table, col, cb) {
     var tb = table.tBodies[0],
         th = table.tHead.rows[0].cells,
@@ -659,19 +693,17 @@ function sortTable(table, col, cb) {
         }
         vl.push([v, a]);
     }
-    vl.sort(function (a, b) {
-        a = a[0];
-        b = b[0];
-        if (a === null)
-            return -1;
-        if (b === null)
-            return 1;
-
-        if (stype == 'int') {
-            return reverse * (a - b);
-        }
-        return reverse * (a.localeCompare(b));
-    });
+    if (stype == 'int')
+        vl.sort(st_cmp_num);
+    else if (ENATSORT)
+        vl.sort(st_cmp_nat);
+    else
+        vl.sort(st_cmp_gen);
+
+    if (reverse < 0)
+        vl.reverse();
+
     if (sread('dir1st') !== '0') {
         var r1 = [], r2 = [];
         for (var i = 0; i < tr.length; i++) {
@@ -857,11 +889,6 @@ function get_evpath() {
 }


-function get_vpath() {
-    return uricom_dec(get_evpath());
-}
-
-
 function noq_href(el) {
     return el.getAttribute('href').split('?')[0];
 }
@@ -1,3 +1,97 @@
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2025-0413-2151 `v1.16.20` all sorted

## 🧪 new features

* when enabled, natural-sort will now also apply to tags, not just filenames 7b2bd6da

## 🩹 bugfixes

* some sorting-related fixes 7b2bd6da
  * folders with non-ascii names would sort incorrectly in the navpane/sidebar
  * natural-sort didn't apply correctly after changing the sort order
* work around [ffmpeg bug 10797](https://trac.ffmpeg.org/ticket/10797) 98dcaee2
  * reduces ram usage from 1534 to 230 MiB when generating spectrograms of s3xmodit songs (amiga chiptunes); the note after this list works through the numbers
* disable mdns if only listening on uds (unix-sockets) ffc16109 361aebf8
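
> about the ram numbers above: going by the th_srv diff earlier in this changeset, the spectrogram converter budgets `need = 0.2 + dur / coeff` GiB, where `coeff` is 1800 for simple formats and 600 for tricky ones; a 10-minute tracker module is thus budgeted 0.2 + 600/600 = 1.2 GiB, and anything 15 minutes or longer (or with an extension listed in `th_spec_cnv`) is first flattened into a capped 900-second flac before ffmpeg draws the picture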

## 🔧 other changes

* hotkey CTRL-A will now select all files in gridview 233075ae
  * and it toggles (just like in list-view) so try pressing it again
* copyparty.exe: upgrade to pillow v11.2.1 c7aa1a35



▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2025-0408-2132 `v1.16.19` GHOST

did you know that every song named `GHOST` is a banger? it's true! [ghost](https://www.youtube.com/watch?v=NoUAwC4yiAw) // [ghost](https://www.youtube.com/watch?v=IKKar5SS29E) // [ghost](https://www.youtube.com/watch?v=tFSFlgm_tsw)

## 🧪 new features

* option to store markdown backups out-of-volume fc883418
  * the default is still a subfolder named `.hist` next to the markdown file
  * `--md-hist v` puts them in the volume's hist-folder instead
  * `--md-hist n` disables markdown-backups entirely
* #149 option to store the volume sqlite databases at a custom location outside the hist-folder e1b9ac63
  * new option `--dbpath` works like `--hist` but it only moves the database file, not the thumbnails
  * they can be combined, in which case `--hist` is applied to thumbnails, `--dbpath` to the db; see the example after this list
  * useful when you're squeezing every last drop of performance out of your filesystem (see the issue)
* actively prevent sharing certain databases (sessions/shares) between multiple copyparty instances acfaacbd
  * an error message was added to explain some different alternatives for doing this safely
  * for example by setting `XDG_CONFIG_HOME`, which now works on all platforms b17ccc38
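
> as a combined example (paths made up for illustration): `--hist /mnt/ssd/cpp-hist --dbpath /mnt/nvme/cpp-db` keeps thumbnails on the ssd while the up2k databases land on the nvme drive; the `hist` and `dbpath` volflags do the same for a single volume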
## 🩹 bugfixes

* #151 mkdir did not work in locations outside the volume root (via symlinks) 2b50fc20
* improve the ui feedback when trying to play an audio file which failed to transcode f9954bc4
  * also helps with server-filesystem issues, including image-thumbs

## 🔧 other changes

* #152 custom fonts are also applied to textboxes and buttons (thx @thaddeuskkr) d450f615
* be more careful with the shares-db 8e0364ef
* be less careful with the sessions-db 8e0364ef
* update deps c0becc64
  * web: dompurify
  * copyparty.exe: python 3.12.10
* rephrase the `-j0` warning on windows to also mention that Microsoft Defender will freak out c0becc64
* #149 add [a script](https://github.com/9001/copyparty/tree/hovudstraum/contrib#zfs-tunepy) to optimize the sqlite databases for storage on zfs 4f397b9b
* block `GoogleOther` (another recalcitrant bot) from zip-downloads c2034f7b
* update [contributing.md](https://github.com/9001/copyparty/blob/hovudstraum/CONTRIBUTING.md) with a section regarding LLM/AI-written code cec3bee0
* the [helptext](https://ocv.me/copyparty/helptext.html) will also be uploaded to each github release from now on, [permalink](https://github.com/9001/copyparty/releases/latest/download/helptext.html)
* add review from ixbt forums b383c08c



▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2025-0323-2216 `v1.16.18` zlib-ng

## 🧪 new features

* prefer zlib-ng when available 57a56073
  * download-as-tar-gz becomes 2.5x faster
  * default-enabled in docker-images
  * not enabled in copyparty.exe yet; coming in a future python version
  * a rough sketch of the idea follows this list
* docker: add mimalloc (optional, default-disabled) de2c9788
  * gives twice the speed, and twice the ram usage
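
roughly how the zlib-ng preference works, as an illustrative python sketch; this is not copyparty's actual import code, but the module names match the `zlib_ng` wheel referenced in the docker scripts further down:

```python
# try the optional zlib-ng bindings first; they mirror the stdlib zlib API,
# so the caller can use whichever module got bound without caring which
try:
    from zlib_ng import zlib_ng as zlib
except ImportError:
    import zlib

# gzip-compatible stream: level 6, wbits 16+15 selects the gzip wrapper
gz = zlib.compressobj(6, zlib.DEFLATED, 16 + 15)
data = gz.compress(b"tarball contents") + gz.flush()
```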
## 🩹 bugfixes

* small up2k glitch 3c90cec0

## 🔧 other changes

* rename logues/readmes when uploaded with write-only access 2525d594
  * since they are used as helptext when viewing the page
* try to block google and other bad bots from `?doc` and `?zip` 99f63adf
  * apparently `rel="nofollow"` means nothing these days

### the docker images for this release were built from e1dea7ef



▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2025-0316-2002 `v1.16.17` boot2party

@@ -281,8 +281,11 @@ on writing your own [hooks](../README.md#event-hooks)

hooks can cause intentional side-effects, such as redirecting an upload into another location, or creating+indexing additional files, or deleting existing files, by returning json on stdout

* `reloc` can redirect uploads before/after uploading has finished, based on filename, extension, file contents, uploader ip/name etc.
  * example: [reloc-by-ext](https://github.com/9001/copyparty/blob/hovudstraum/bin/hooks/reloc-by-ext.py)
* `idx` informs copyparty about a new file to index as a consequence of this upload
  * example: [podcast-normalizer.py](https://github.com/9001/copyparty/blob/hovudstraum/bin/hooks/podcast-normalizer.py)
* `del` tells copyparty to delete an unrelated file by vpath
  * example: ( ´・ω・) nyoro~n

for these to take effect, the hook must be defined with the `c1` flag; see example [reloc-by-ext](https://github.com/9001/copyparty/blob/hovudstraum/bin/hooks/reloc-by-ext.py), and the minimal sketch below
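
as a rough illustration only; the argv handling and the inner json fields below are assumptions, so treat [reloc-by-ext](https://github.com/9001/copyparty/blob/hovudstraum/bin/hooks/reloc-by-ext.py) as the authoritative reference for the real contract:

```python
#!/usr/bin/env python3
# hypothetical c1 hook: divert uploaded iso-images into a subfolder.
# assumes the upload's path arrives as argv[1], and that printing a
# json object with a "reloc" key requests the redirect;
# see reloc-by-ext.py for the real field names
import json
import sys


def main():
    path = sys.argv[1]
    if path.lower().endswith(".iso"):
        print(json.dumps({"reloc": {"vp": "isos"}}))


main()
```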
@@ -11,9 +11,14 @@ services:
      - ./:/cfg:z
      - /path/to/your/fileshare/top/folder:/w:z

+    # enabling mimalloc by replacing "NOPE" with "2" will make some stuff twice as fast, but everything will use twice as much ram:
+    environment:
+      LD_PRELOAD: /usr/lib/libmimalloc-secure.so.NOPE
+
     stop_grace_period: 15s  # thumbnailer is allowed to continue finishing up for 10s after the shutdown signal
     healthcheck:
-      test: ["CMD-SHELL", "wget --spider -q 127.0.0.1:3923/?reset"]
+      # hide it from logs with "/._" so it matches the default --lf-url filter
+      test: ["CMD-SHELL", "wget --spider -q 127.0.0.1:3923/?reset=/._"]
      interval: 1m
      timeout: 2s
      retries: 5
@@ -23,6 +23,9 @@ services:
      - 'traefik.http.routers.copyparty.tls=true'
      - 'traefik.http.routers.copyparty.middlewares=authelia@docker'
     stop_grace_period: 15s  # thumbnailer is allowed to continue finishing up for 10s after the shutdown signal
+    environment:
+      LD_PRELOAD: /usr/lib/libmimalloc-secure.so.NOPE
+      # enable mimalloc by replacing "NOPE" with "2" for a nice speed-boost (will use twice as much ram)

   authelia:
     image: authelia/authelia:v4.38.0-beta3  # the config files in the authelia folder use the new syntax
@@ -22,13 +22,10 @@ services:
      - 'traefik.http.routers.fs.rule=Host(`fs.example.com`)'
      - 'traefik.http.routers.fs.entrypoints=http'
      #- 'traefik.http.routers.fs.middlewares=authelia@docker'  # TODO: ???
-    healthcheck:
-      test: ["CMD-SHELL", "wget --spider -q 127.0.0.1:3923/?reset"]
-      interval: 1m
-      timeout: 2s
-      retries: 5
-      start_period: 15s
     stop_grace_period: 15s  # thumbnailer is allowed to continue finishing up for 10s after the shutdown signal
+    environment:
+      LD_PRELOAD: /usr/lib/libmimalloc-secure.so.NOPE
+      # enable mimalloc by replacing "NOPE" with "2" for a nice speed-boost (will use twice as much ram)

   traefik:
     image: traefik:v2.11
@@ -33,12 +33,6 @@ if you are introducing a new ttf/woff font, don't forget to declare the font itself:
 }
 ```

-and because textboxes don't inherit fonts by default, you can force it like this:
-
-```css
-input[type=text], input[type=submit], input[type=button] { font-family: var(--font-main) }
-```
-
 and if you want to have a monospace font in the fancy markdown editor, do this:

 ```css
@@ -3,7 +3,7 @@ WORKDIR /z
 ENV ver_asmcrypto=c72492f4a66e17a0e5dd8ad7874de354f3ccdaa5 \
     ver_hashwasm=4.12.0 \
     ver_marked=4.3.0 \
-    ver_dompf=3.2.4 \
+    ver_dompf=3.2.5 \
     ver_mde=2.18.0 \
     ver_codemirror=5.65.18 \
     ver_fontawesome=5.13.0 \
@@ -2,7 +2,7 @@
 set -ex

 # use zlib-ng if available
-f=/z/base/zlib_ng-0.5.1-cp312-cp312-linux_$(uname -m).whl
+f=/z/base/zlib_ng-0.5.1-cp312-cp312-linux_$(cat /etc/apk/arch).whl
 [ "$1" != min ] && [ -e $f ] && {
     apk add -t .bd !pyc py3-pip
     rm -f /usr/lib/python3*/EXTERNALLY-MANAGED
@@ -32,6 +32,9 @@ rm -rf \
   /tmp/pe-* /z/copyparty-sfx.py \
   ensurepip pydoc_data turtle.py turtledemo lib2to3

+# speedhack
+sed -ri 's/os.environ.get\("PRTY_NO_IMPRESO"\)/"1"/' /usr/lib/python3.*/site-packages/copyparty/util.py
+
 # drop bytecode
 find / -xdev -name __pycache__ -print0 | xargs -0 rm -rf
@@ -65,7 +68,11 @@ for n in $(seq 1 200); do sleep 0.2
 done
 [ -z "$v" ] && echo SNAAAAAKE && exit 1

-wget -O- http://${v/ /:}/?tar=gz:1 | tar -xzO top/innvikler.sh | cmp innvikler.sh
+for n in $(seq 1 200); do sleep 0.2
+    wget -O- http://${v/ /:}/?tar=gz:1 >tf && break
+done
+tar -xzO top/innvikler.sh <tf | cmp innvikler.sh
+rm tf

 kill $pid; wait $pid
@@ -237,6 +237,8 @@ necho() {
     tar -zxf $f
     mv partftpy-*/partftpy .
     rm -rf partftpy-* partftpy/bin
+    #(cd partftpy && "$pybin" ../../scripts/strip_hints/a.py; rm uh)  # dont need the full thing, just this:
+    sed -ri 's/from typing import TYPE_CHECKING$/TYPE_CHECKING = False/' partftpy/TftpShared.py

     necho collecting python-magic
     v=0.4.27
@@ -27,7 +27,7 @@ ac96786e5d35882e0c5b724794329c9125c2b86ae7847f17acfc49f0d294312c6afc1c3f248655de
 6df21f0da408a89f6504417c7cdf9aaafe4ed88cfa13e9b8fa8414f604c0401f885a04bbad0484dc51a29284af5d1548e33c6cc6bfb9896d9992c1b1074f332d MarkupSafe-3.0.2-cp312-cp312-win_amd64.whl
 8a6e2b13a2ec4ef914a5d62aad3db6464d45e525a82e07f6051ed10474eae959069e165dba011aefb8207cdfd55391d73d6f06362c7eb247b08763106709526e mutagen-1.47.0-py3-none-any.whl
 0203ec2551c4836696cfab0b2c9fff603352f03fa36e7476e2e1ca7ec57a3a0c24bd791fcd92f342bf817f0887854d9f072e0271c643de4b313d8c9569ba8813 packaging-24.1-py3-none-any.whl
-12d7921dc7dfd8a4b0ea0fa2bae8f1354fcdd59ece3d7f4e075aed631f9ba791dc142c70b1ccd1e6287c43139df1db26bd57a7a217c8da3a77326036495cdb57 pillow-11.1.0-cp312-cp312-win_amd64.whl
+c9051daaf34ec934962c743a5ac2dbe55a9b0cababb693a8cde0001d24d4a50b67bd534d714d935def6ca7b898ec0a352e58bd9ccdce01c54eaf2281b18e478d pillow-11.2.1-cp312-cp312-win_amd64.whl
 f0463895e9aee97f31a2003323de235fed1b26289766dc0837261e3f4a594a31162b69e9adbb0e9a31e2e2d4b5f25c762ed1669553df7dc89a8ba4f85d297873 pyinstaller-6.11.1-py3-none-win_amd64.whl
 d550a0a14428386945533de2220c4c2e37c0c890fc51a600f626c6ca90a32d39572c121ec04c157ba3a8d6601cb021f8433d871b5c562a3d342c804fffec90c1 pyinstaller_hooks_contrib-2024.11-py3-none-any.whl
-17b64ff6744004a05d475c8f6de3e48286db4069afad4cae690f83b3555f8e35ceafb210eeba69a11e983d0da3001099de284b6696ed0f1bf9cd791938a7f2cd python-3.12.9-amd64.exe
+4f9a4d9f65c93e2d851e2674057343a9599f30f5dc582ffca485522237d4fcf43653b3d393ed5eb11e518c4ba93714a07134bbb13a97d421cce211e1da34682e python-3.12.10-amd64.exe
@@ -38,10 +38,10 @@ fns=(
     MarkupSafe-2.1.5-cp312-cp312-win_amd64.whl
     mutagen-1.47.0-py3-none-any.whl
     packaging-24.1-py3-none-any.whl
-    pillow-11.1.0-cp312-cp312-win_amd64.whl
+    pillow-11.2.1-cp312-cp312-win_amd64.whl
     pyinstaller-6.10.0-py3-none-win_amd64.whl
     pyinstaller_hooks_contrib-2024.8-py3-none-any.whl
-    python-3.12.9-amd64.exe
+    python-3.12.10-amd64.exe
 )
 [ $w7 ] && fns+=(
     future-1.0.0-py3-none-any.whl
@@ -413,6 +413,9 @@ def run_i(ld):
     for x in ld:
         sys.path.insert(0, x)

+    e = os.environ
+    e["PRTY_NO_IMPRESO"] = "1"
+
     from copyparty.__main__ import main as p

     p()
@@ -84,6 +84,9 @@ def uh2(fp):
         if " # !rm" in ln:
             continue

+        if ln.endswith("TYPE_CHECKING"):
+            ln = ln.replace("from typing import TYPE_CHECKING", "TYPE_CHECKING = False")
+
         lns.append(ln)

     cs = "\n".join(lns)
@@ -357,6 +357,7 @@ var tl_browser = {
     "ml_eq": "audio equalizer",
     "ml_drc": "dynamic range compressor",

+    "mt_loop": "loop/repeat one song\">🔁",
     "mt_shuf": "shuffle the songs in each folder\">🔀",
     "mt_aplay": "autoplay if there is a song-ID in the link you clicked to access the server$N$Ndisabling this will also stop the page URL from being updated with song-IDs when playing music, to prevent autoplay if these settings are lost but the URL remains\">a▶",
     "mt_preload": "start loading the next song near the end for gapless playback\">preload",
@@ -402,6 +403,7 @@ var tl_browser = {
     "mm_eunk": "Unknown Errol",
     "mm_e404": "Could not play audio; error 404: File not found.",
     "mm_e403": "Could not play audio; error 403: Access denied.\n\nTry pressing F5 to reload, maybe you got logged out",
+    "mm_e500": "Could not play audio; error 500: Check server logs.",
     "mm_e5xx": "Could not play audio; server error ",
     "mm_nof": "not finding any more audio files nearby",
     "mm_prescan": "Looking for music to play next...",
@@ -416,6 +418,7 @@ var tl_browser = {
     "f_bigtxt": "this file is {0} MiB large -- really view as text?",
     "fbd_more": '<div id="blazy">showing <code>{0}</code> of <code>{1}</code> files; <a href="#" id="bd_more">show {2}</a> or <a href="#" id="bd_all">show all</a></div>',
     "fbd_all": '<div id="blazy">showing <code>{0}</code> of <code>{1}</code> files; <a href="#" id="bd_all">show all</a></div>',
+    "f_anota": "only {0} of the {1} items were selected;\nto select the full folder, first scroll to the bottom",

     "f_dls": 'the file links in the current folder have\nbeen changed into download links',
@@ -371,7 +371,7 @@ class TestDots(unittest.TestCase):
         return " ".join(tar)

     def curl(self, url, uname, binary=False, req=b""):
-        req = req or hdr(url, uname)
+        req = req or hdr(url.replace("th=x", "th=j"), uname)
         conn = self.conn.setbuf(req)
         HttpCli(conn).run()
         if binary:
@@ -135,7 +135,7 @@ class Cfg(Namespace):
         ex = "dav_inf dedup dotpart dotsrch hook_v no_dhash no_fastboot no_fpool no_htp no_rescan no_sendfile no_ses no_snap no_up_list no_voldump re_dhash plain_ip"
         ka.update(**{k: True for k in ex.split()})

-        ex = "ah_cli ah_gen css_browser hist ipu js_browser js_other mime mimes no_forget no_hash no_idx nonsus_urls og_tpl og_ua ua_nodoc ua_nozip"
+        ex = "ah_cli ah_gen css_browser dbpath hist ipu js_browser js_other mime mimes no_forget no_hash no_idx nonsus_urls og_tpl og_ua ua_nodoc ua_nozip"
         ka.update(**{k: None for k in ex.split()})

         ex = "hash_mt hsortn safe_dedup srch_time u2abort u2j u2sz"
@@ -174,6 +174,7 @@ class Cfg(Namespace):
             lang="eng",
             log_badpwd=1,
             logout=573,
+            md_hist="s",
             mte={"a": True},
             mth={},
             mtp=[],