# Compare commits

47 commits:

`accd003d15`, `9c2c423761`, `999789c742`, `14bb299918`, `0a33336dd4`, `6a2644fece`, `5ab09769e1`, `782084056d`, `494179bd1c`, `29a17ae2b7`, `815d46f2c4`, `8417098c68`, `25974d660d`, `12fcb42201`, `16462ee573`, `540664e0c2`, `b5cb763ab1`, `c24a0ec364`, `4accef00fb`, `d779525500`, `65a7706f77`, `5e12abbb9b`, `e0fe2b97be`, `bd33863f9f`, `a011139894`, `36866f1d36`, `407531bcb1`, `3adbb2ff41`, `499ae1c7a1`, `438ea6ccb0`, `598a29a733`, `6d102fc826`, `fca07fbb62`, `cdedcc24b8`, `60d5f27140`, `cb413bae49`, `e9f78ea70c`, `6858cb066f`, `4be0d426f4`, `7d7d5d6c3c`, `0422387e90`, `2ed5fd9ac4`, `2beb2acc24`, `56ce591908`, `b190e676b4`, `19520b2ec9`, `eeb96ae8b5`
## .github/ISSUE_TEMPLATE/bug_report.md (1 change, vendored)

```diff
@@ -8,6 +8,7 @@ assignees: '9001'
---

NOTE:
**please use english, or include an english translation.** aside from that,
all of the below are optional, consider them as inspiration, delete and rewrite at will, thx md
```
## .github/ISSUE_TEMPLATE/feature_request.md (2 changes, vendored)

```diff
@@ -7,6 +7,8 @@ assignees: '9001'

---

NOTE:
**please use english, or include an english translation.** aside from that,
all of the below are optional, consider them as inspiration, delete and rewrite at will

**is your feature request related to a problem? Please describe.**
```
## README.md (73 changes)

```diff
@@ -94,10 +94,13 @@ turn almost any device into a file server with resumable uploads/downloads using
 * [reverse-proxy](#reverse-proxy) - running copyparty next to other websites
   * [real-ip](#real-ip) - teaching copyparty how to see client IPs
   * [reverse-proxy performance](#reverse-proxy-performance)
+  * [permanent cloudflare tunnel](#permanent-cloudflare-tunnel) - if you have a domain and want to get your copyparty online real quick
 * [prometheus](#prometheus) - metrics/stats can be enabled
 * [other extremely specific features](#other-extremely-specific-features) - you'll never find a use for these
   * [custom mimetypes](#custom-mimetypes) - change the association of a file extension
+  * [GDPR compliance](#GDPR-compliance) - imagine using copyparty professionally...
   * [feature chickenbits](#feature-chickenbits) - buggy feature? rip it out
+  * [feature beefybits](#feature-beefybits) - force-enable incompatible features
 * [packages](#packages) - the party might be closer than you think
   * [arch package](#arch-package) - now [available on aur](https://aur.archlinux.org/packages/copyparty) maintained by [@icxes](https://github.com/icxes)
   * [fedora package](#fedora-package) - does not exist yet
```
```diff
@@ -159,8 +162,8 @@ enable thumbnails (images/audio/video), media indexing, and audio transcoding by
 * **MacOS:** `port install py-Pillow ffmpeg`
 * **MacOS** (alternative): `brew install pillow ffmpeg`
 * **Windows:** `python -m pip install --user -U Pillow`
-  * install python and ffmpeg manually; do not use `winget` or `Microsoft Store` (it breaks $PATH)
-  * copyparty.exe comes with `Pillow` and only needs `ffmpeg`
+  * install [python](https://www.python.org/downloads/windows/) and [ffmpeg](#optional-dependencies) manually; do not use `winget` or `Microsoft Store` (it breaks $PATH)
+  * copyparty.exe comes with `Pillow` and only needs [ffmpeg](#optional-dependencies) for mediatags/videothumbs
 * see [optional dependencies](#optional-dependencies) to enable even more features

 running copyparty without arguments (for example doubleclicking it on Windows) will give everyone read/write access to the current folder; you may want [accounts and volumes](#accounts-and-volumes)
```
```diff
@@ -183,6 +186,8 @@ first download [cloudflared](https://developers.cloudflare.com/cloudflare-one/co
 as the tunnel starts, it will show a URL which you can share to let anyone browse your stash or upload files to you

+but if you have a domain, then you probably want to skip the random autogenerated URL and instead make a [permanent cloudflare tunnel](#permanent-cloudflare-tunnel)
+
 since people will be connecting through cloudflare, run copyparty with `--xff-hdr cf-connecting-ip` to detect client IPs correctly
```
```diff
@@ -224,6 +229,7 @@ also see [comparison to similar software](./docs/versus.md)
 * ☑ [upnp / zeroconf / mdns / ssdp](#zeroconf)
 * ☑ [event hooks](#event-hooks) / script runner
+* ☑ [reverse-proxy support](https://github.com/9001/copyparty#reverse-proxy)
 * ☑ cross-platform (Windows, Linux, Macos, Android, FreeBSD, arm32/arm64, ppc64le, s390x, risc-v/riscv64)
 * upload
   * ☑ basic: plain multipart, ie6 support
   * ☑ [up2k](#uploading): js, resumable, multithreaded
```
```diff
@@ -401,6 +407,9 @@ upgrade notes

 "frequently" asked questions

+* can I change the 🌲 spinning pine-tree loading animation?
+  * [yeah...](https://github.com/9001/copyparty/tree/hovudstraum/docs/rice#boring-loader-spinner) :-(
+
 * is it possible to block read-access to folders unless you know the exact URL for a particular file inside?
   * yes, using the [`g` permission](#accounts-and-volumes), see the examples there
   * you can also do this with linux filesystem permissions; `chmod 111 music` will make it possible to access files and folders inside the `music` folder but not list the immediate contents -- also works with other software, not just copyparty
```
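The `chmod 111` trick from the FAQ above is easy to verify from a script; a minimal sketch (POSIX-only; the folder name is just an example, and note that root can still list the directory regardless of mode bits):

```python
import os
import stat
import tempfile

# create a scratch "music" folder and make it traversable but not listable:
# mode 0o111 = --x--x--x (search/execute permission only, no read)
root = tempfile.mkdtemp()
music = os.path.join(root, "music")
os.mkdir(music)
os.chmod(music, 0o111)

mode = stat.S_IMODE(os.stat(music).st_mode)
print(oct(mode))  # 0o111: files inside stay reachable by exact path,
                  # but the directory contents cannot be enumerated

os.chmod(music, 0o755)  # restore so cleanup can delete it
```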
```diff
@@ -423,6 +432,14 @@ upgrade notes
 * copyparty seems to think I am using http, even though the URL is https
   * your reverse-proxy is not sending the `X-Forwarded-Proto: https` header; this could be because your reverse-proxy itself is confused. Ensure that none of the intermediates (such as cloudflare) are terminating https before the traffic hits your entrypoint

+* thumbnails are broken (you get a colorful square which says the filetype instead)
+  * you need to install `FFmpeg` or `Pillow`; see [thumbnails](#thumbnails)
+
+* thumbnails are broken (some images appear, but other files just get a blank box, and/or the broken-image placeholder)
+  * probably due to a reverse-proxy messing with the request URLs and stripping the query parameters (`?th=w`), so check your URL rewrite rules
+  * could also be due to incorrect caching settings in reverse-proxies and/or CDNs, so make sure that nothing is set to ignore the query string
+  * could also be due to misbehaving privacy-related browser extensions, so try to disable those
+
 * i want to learn python and/or programming and am considering looking at the copyparty source code in that occasion
   * ```bash
     _| _ __ _ _|_
```
```diff
@@ -653,6 +670,7 @@ press `g` or `田` to toggle grid-view instead of the file listing and `t` togg
 it does static images with Pillow / pyvips / FFmpeg, and uses FFmpeg for video files, so you may want to `--no-thumb` or maybe just `--no-vthumb` depending on how dangerous your users are
 * pyvips is 3x faster than Pillow, Pillow is 3x faster than FFmpeg
 * disable thumbnails for specific volumes with volflag `dthumb` for all, or `dvthumb` / `dathumb` / `dithumb` for video/audio/images only
+* for installing FFmpeg on windows, see [optional dependencies](#optional-dependencies)

 audio files are converted into spectrograms using FFmpeg unless you `--no-athumb` (and some FFmpeg builds may need `--th-ff-swr`)
```
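FFmpeg can render such a one-shot spectrogram with its `showspectrumpic` filter; the exact flags copyparty passes are not shown in this excerpt, so the command below is only an illustrative sketch (filenames and filter options are placeholders):

```python
# build (but do not run) an illustrative spectrogram command;
# showspectrumpic is a real FFmpeg filter, but these options are just
# one reasonable choice -- not necessarily what copyparty uses
argv = [
    "ffmpeg",
    "-nostdin",
    "-i", "song.mp3",                       # input audio (placeholder)
    "-lavfi", "showspectrumpic=s=640x512",  # single spectrogram image
    "spec.png",                             # output image (placeholder)
]
print(" ".join(argv))
```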
```diff
@@ -764,8 +782,11 @@ the up2k UI is the epitome of polished intuitive experiences:
* "parallel uploads" specifies how many chunks to upload at the same time
* `[🏃]` analysis of other files should continue while one is uploading
* `[🥔]` shows a simpler UI for faster uploads from slow devices
* `[🛡️]` decides when to overwrite existing files on the server
  * `🛡️` = never (generate a new filename instead)
  * `🕒` = overwrite if the server-file is older
  * `♻️` = always overwrite if the files are different
* `[🎲]` generate random filenames during upload
* `[📅]` preserve last-modified timestamps; server times will match yours
* `[🔎]` switch between upload and [file-search](#file-search) mode
  * ignore `[🔎]` if you add files by dragging them into the browser
```
```diff
@@ -1816,7 +1837,7 @@ tell search engines you don't wanna be indexed, either using the good old [robo
 * volflag `[...]:c,norobots` does the same thing for that single volume
 * volflag `[...]:c,robots` ALLOWS search-engine crawling for that volume, even if `--no-robots` is set globally

-also, `--force-js` disables the plain HTML folder listing, making things harder to parse for search engines
+also, `--force-js` disables the plain HTML folder listing, making things harder to parse for *some* search engines -- note that crawlers which understand javascript (such as google) will not be affected


 ## themes
```
```diff
@@ -1982,6 +2003,26 @@ in summary, `haproxy > caddy > traefik > nginx > apache > lighttpd`, and use uds
 * if these results are bullshit because my config examples are bad, please submit corrections!


+## permanent cloudflare tunnel
+
+if you have a domain and want to get your copyparty online real quick, either from your home-PC behind a CGNAT or from a server without an existing [reverse-proxy](#reverse-proxy) setup, one approach is to create a [Cloudflare Tunnel](https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/get-started/) (formerly "Argo Tunnel")
+
+I'd recommend making a `Locally-managed tunnel` for more control, but if you prefer to make a `Remotely-managed tunnel` then this is currently how:
+
+* `cloudflare dashboard` » `zero trust` » `networks` » `tunnels` » `create a tunnel` » `cloudflared` » choose a cool `subdomain` and leave the `path` blank, and use `service type` = `http` and `URL` = `127.0.0.1:3923`
+
+* and if you want to just run the tunnel without installing it, skip the `cloudflared service install BASE64` step and instead do `cloudflared --no-autoupdate tunnel run --token BASE64`
+
+NOTE: since people will be connecting through cloudflare, as mentioned in [real-ip](#real-ip) you should run copyparty with `--xff-hdr cf-connecting-ip` to detect client IPs correctly
+
+config file example:
+
+```yaml
+[global]
+  xff-hdr: cf-connecting-ip
+```
+
+
 ## prometheus

 metrics/stats can be enabled at URL `/.cpr/metrics` for grafana / prometheus / etc (openmetrics 1.0.0)
```
```diff
@@ -2068,6 +2109,18 @@ in a config file, this is the same as:
 run copyparty with `--mimes` to list all the default mappings


+### GDPR compliance
+
+imagine using copyparty professionally... **TINLA/IANAL; EU laws are hella confusing**
+
+* remember to disable logging, or configure logrotation to an acceptable timeframe with `-lo cpp-%Y-%m%d.txt.xz` or similar
+
+* if running with the database enabled (recommended), then have it forget uploader-IPs after some time using `--forget-ip 43200`
+  * don't set it too low; [unposting](#unpost) a file is no longer possible after this takes effect
+
+* if you actually *are* a lawyer then I'm open for feedback, would be fun
+
+
```
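Combining those flags into a config file could look like the following; this is only a sketch (the config-file spellings are assumed to mirror the CLI flags, and the retention values are examples to tune for your own policy):

```yaml
[global]
  lo: cpp-%Y-%m%d.txt.xz   # rotated logfiles instead of unbounded logging
  forget-ip: 43200         # scrub uploader-IPs after one month
```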
### feature chickenbits

buggy feature? rip it out by setting any of the following environment variables to disable its associated bell or whistle,
```diff
@@ -2085,6 +2138,15 @@ buggy feature? rip it out by setting any of the following environment variables
 example: `PRTY_NO_IFADDR=1 python3 copyparty-sfx.py`


+### feature beefybits
+
+force-enable features with known issues on your OS/env by setting any of the following environment variables, also affectionately known as `fuckitbits` or `hail-mary-bits`
+
+| env-var | what it does |
+| ------------------------ | ------------ |
+| `PRTY_FORCE_MP` | force-enable multiprocessing (real multithreading) on MacOS and other broken platforms |
+
+
 # packages

 the party might be closer than you think
```
```diff
@@ -2255,6 +2317,7 @@ quick summary of more eccentric web-browsers trying to view a directory index:
| **ie4** and **netscape** 4.0 | can browse, upload with `?b=u`, auth with `&pw=wark` |
| **ncsa mosaic** 2.7 | does not get a pass, [pic1](https://user-images.githubusercontent.com/241032/174189227-ae816026-cf6f-4be5-a26e-1b3b072c1b2f.png) - [pic2](https://user-images.githubusercontent.com/241032/174189225-5651c059-5152-46e9-ac26-7e98e497901b.png) |
| **SerenityOS** (7e98457) | hits a page fault, works with `?b=u`, file upload not-impl |
| **sony psp** 5.50 | can browse, upload/mkdir/msg (thx dwarf) [screenshot](https://github.com/user-attachments/assets/9d21f020-1110-4652-abeb-6fc09c533d4f) |
| **nintendo 3ds** | can browse, upload, view thumbnails (thx bnjmn) |

<p align="center"><img src="https://github.com/user-attachments/assets/88deab3d-6cad-4017-8841-2f041472b853" /></p>
```
```diff
@@ -2585,6 +2648,8 @@ enable [smb](#smb-server) support (**not** recommended): `impacket==0.12.0`

 `pyvips` gives higher quality thumbnails than `Pillow` and is 320% faster, using 270% more ram: `sudo apt install libvips42 && python3 -m pip install --user -U pyvips`

+to install FFmpeg on Windows, grab [a recent build](https://www.gyan.dev/ffmpeg/builds/ffmpeg-git-full.7z) -- you need `ffmpeg.exe` and `ffprobe.exe` from inside the `bin` folder; copy them into `C:\Windows\System32` or any other folder that's in your `%PATH%`
+

 ### dependency chickenbits
```
```diff
@@ -21,6 +21,7 @@ each plugin must define a `main()` which takes 3 arguments;

 ## on404

 * [redirect.py](redirect.py) sends an HTTP 301 or 302, redirecting the client to another page/file
+* [randpic.py](randpic.py) redirects `/foo/bar/randpic.jpg` to a random pic in `/foo/bar/`
 * [sorry.py](answer.py) replies with a custom message instead of the usual 404
 * [nooo.py](nooo.py) replies with an endless noooooooooooooo
 * [never404.py](never404.py) 100% guarantee that 404 will never be a thing again as it automatically creates dummy files whenever necessary
```
## bin/handlers/randpic.py (new file, 35 lines)

```python
import os
import random
from urllib.parse import quote


# assuming /foo/bar/ is a valid URL but /foo/bar/randpic.png does not exist,
# hijack the 404 with a redirect to a random pic in that folder
#
# thx to lia & kipu for the idea


def main(cli, vn, rem):
    req_fn = rem.split("/")[-1]
    if not cli.can_read or not req_fn.startswith("randpic"):
        return

    req_abspath = vn.canonical(rem)
    req_ap_dir = os.path.dirname(req_abspath)
    files_in_dir = os.listdir(req_ap_dir)

    if "." in req_fn:
        file_ext = "." + req_fn.split(".")[-1]
        files_in_dir = [x for x in files_in_dir if x.lower().endswith(file_ext)]

    if not files_in_dir:
        return

    selected_file = random.choice(files_in_dir)

    req_url = "/".join([vn.vpath, rem]).strip("/")
    req_dir = req_url.rsplit("/", 1)[0]
    new_url = "/".join([req_dir, quote(selected_file)]).strip("/")

    cli.reply(b"redirecting...", 302, headers={"Location": "/" + new_url})
    return "true"
```
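The extension filter inside `main()` can be tried in isolation; this repeats the same two lines from the plugin against a made-up directory listing:

```python
# the same case-insensitive extension filter randpic.py applies
# when the requested filename is e.g. "randpic.jpg"
req_fn = "randpic.jpg"
files_in_dir = ["a.JPG", "b.png", "c.jpg", "notes.txt"]

if "." in req_fn:
    file_ext = "." + req_fn.split(".")[-1]
    files_in_dir = [x for x in files_in_dir if x.lower().endswith(file_ext)]

print(files_in_dir)  # ['a.JPG', 'c.jpg']
```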
```diff
@@ -1,11 +1,13 @@
 // see usb-eject.py for usage

 function usbclick() {
-    QS('#treeul a[href="/usb/"]').click();
+    var o = QS('#treeul a[dst="/usb/"]') || QS('#treepar a[dst="/usb/"]');
+    if (o)
+        o.click();
 }

 function eject_cb() {
-    var t = this.responseText;
+    var t = ('' + this.responseText).trim();
     if (t.indexOf('can be safely unplugged') < 0 && t.indexOf('Device can be removed') < 0)
         return toast.err(30, 'usb eject failed:\n\n' + t);

@@ -19,11 +21,14 @@ function add_eject_2(a) {
         return;

     var v = aw[2],
-        k = 'umount_' + v,
-        o = ebi(k);
+        k = 'umount_' + v;

-    if (o)
+    for (var b = 0; b < 9; b++) {
+        var o = ebi(k);
+        if (!o)
+            break;
         o.parentNode.removeChild(o);
+    }

     a.appendChild(mknod('span', k, '⏏'), a);
     o = ebi(k);

@@ -40,7 +45,7 @@ function add_eject_2(a) {
 };

 function add_eject() {
-    var o = QSA('#treeul a[href^="/usb/"]');
+    var o = QSA('#treeul a[href^="/usb/"]') || QSA('#treepar a[href^="/usb/"]');
     for (var a = o.length - 1; a > 0; a--)
         add_eject_2(o[a]);
 };
```
```diff
@@ -1,8 +1,8 @@
 #!/usr/bin/env python3
 from __future__ import print_function, unicode_literals

-S_VERSION = "2.9"
-S_BUILD_DT = "2025-01-27"
+S_VERSION = "2.10"
+S_BUILD_DT = "2025-02-19"

 """
 u2c.py: upload to copyparty

@@ -807,7 +807,9 @@ def handshake(ar, file, search):
     else:
         if ar.touch:
             req["umod"] = True
-        if ar.ow:
+        if ar.owo:
+            req["replace"] = "mt"
+        elif ar.ow:
             req["replace"] = True

     file.recheck = False

@@ -1538,6 +1540,7 @@ source file/folder selection uses rsync syntax, meaning that:
     ap.add_argument("--ok", action="store_true", help="continue even if some local files are inaccessible")
     ap.add_argument("--touch", action="store_true", help="if last-modified timestamps differ, push local to server (need write+delete perms)")
     ap.add_argument("--ow", action="store_true", help="overwrite existing files instead of autorenaming")
+    ap.add_argument("--owo", action="store_true", help="overwrite existing files if server-file is older")
     ap.add_argument("--spd", action="store_true", help="print speeds for each file")
     ap.add_argument("--version", action="store_true", help="show version and exit")
```
```diff
@@ -1,6 +1,6 @@
 # Maintainer: icxes <dev.null@need.moe>
 pkgname=copyparty
-pkgver="1.16.12"
+pkgver="1.16.16"
 pkgrel=1
 pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
 arch=("any")

@@ -22,7 +22,7 @@ optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tag
 )
 source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
 backup=("etc/${pkgname}.d/init" )
-sha256sums=("b5b65103198a3dd8a3f9b15c3d6aff6c21147bf87627ceacc64205493c248997")
+sha256sums=("5ad7a70c4369633a297fb6904710a1e429d2d17ee46e8e8e5e40c3edeee229d7")

 build() {
     cd "${srcdir}/${pkgname}-${pkgver}"
```
```diff
@@ -64,4 +64,5 @@ in stdenv.mkDerivation {
     --set PATH '${lib.makeBinPath ([ utillinux ] ++ lib.optional withMediaProcessing ffmpeg)}:$PATH' \
     --add-flags "$out/share/copyparty-sfx.py"
   '';
+  meta.mainProgram = "copyparty";
 }
```
```diff
@@ -1,5 +1,5 @@
 {
-  "url": "https://github.com/9001/copyparty/releases/download/v1.16.12/copyparty-sfx.py",
-  "version": "1.16.12",
-  "hash": "sha256-gZZqd88/8PEseVtWspocqrWV7Ck8YQAhcsa4ED3F4JU="
+  "url": "https://github.com/9001/copyparty/releases/download/v1.16.16/copyparty-sfx.py",
+  "version": "1.16.16",
+  "hash": "sha256-Fyz2QEM6xEMXfFAODgg71XfvpZ0b6cY4JidfsnKHIEg="
 }
```
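Note that the `hash` field is in SRI form (`sha256-` followed by the base64 of the raw digest), not the hex digest used by the PKGBUILD; for reference, this is how such a value is computed (on a dummy payload, not the actual release file):

```python
import base64
import hashlib

# compute an SRI-style "sha256-..." string like the one in the pin above;
# the payload here is a stand-in, not the real copyparty-sfx.py
payload = b"example file contents\n"
digest = hashlib.sha256(payload).digest()
sri = "sha256-" + base64.b64encode(digest).decode("ascii")
print(sri)
```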
```diff
@@ -65,6 +65,7 @@ from .util import (
     load_resource,
     min_ex,
     pybin,
+    read_utf8,
     termsize,
     wrap,
 )
```
```diff
@@ -255,8 +256,7 @@ def get_srvname(verbose) -> str:
         if verbose:
             lprint("using hostname from {}\n".format(fp))
         try:
-            with open(fp, "rb") as f:
-                ret = f.read().decode("utf-8", "replace").strip()
+            return read_utf8(None, fp, True).strip()
         except:
             ret = ""
     namelen = 5
```
```diff
@@ -265,47 +265,18 @@ def get_srvname(verbose) -> str:
     ret = re.sub("[234567=]", "", ret)[:namelen]
     with open(fp, "wb") as f:
         f.write(ret.encode("utf-8") + b"\n")

     return ret


-def get_fk_salt() -> str:
-    fp = os.path.join(E.cfg, "fk-salt.txt")
+def get_salt(name: str, nbytes: int) -> str:
+    fp = os.path.join(E.cfg, "%s-salt.txt" % (name,))
     try:
-        with open(fp, "rb") as f:
-            ret = f.read().strip()
+        return read_utf8(None, fp, True).strip()
     except:
-        ret = b64enc(os.urandom(18))
+        ret = b64enc(os.urandom(nbytes))
         with open(fp, "wb") as f:
             f.write(ret + b"\n")

     return ret.decode("utf-8")


-def get_dk_salt() -> str:
-    fp = os.path.join(E.cfg, "dk-salt.txt")
-    try:
-        with open(fp, "rb") as f:
-            ret = f.read().strip()
-    except:
-        ret = b64enc(os.urandom(30))
-        with open(fp, "wb") as f:
-            f.write(ret + b"\n")
-
-    return ret.decode("utf-8")
-
-
-def get_ah_salt() -> str:
-    fp = os.path.join(E.cfg, "ah-salt.txt")
-    try:
-        with open(fp, "rb") as f:
-            ret = f.read().strip()
-    except:
-        ret = b64enc(os.urandom(18))
-        with open(fp, "wb") as f:
-            f.write(ret + b"\n")
-
-    return ret.decode("utf-8")


 def ensure_locale() -> None:
```
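The consolidated `get_salt()` above is self-contained enough to exercise standalone; this sketch reimplements the same persist-or-generate logic (a plain `open()` stands in for copyparty's `read_utf8` helper, and a temp dir stands in for `E.cfg`):

```python
import base64
import os
import tempfile


def get_salt(cfg_dir, name, nbytes):
    # same idea as copyparty's get_salt(): reuse the cached salt file if
    # it exists, otherwise generate a fresh one and persist it
    fp = os.path.join(cfg_dir, "%s-salt.txt" % (name,))
    try:
        with open(fp, "rb") as f:
            return f.read().decode("utf-8").strip()
    except OSError:
        ret = base64.b64encode(os.urandom(nbytes))
        with open(fp, "wb") as f:
            f.write(ret + b"\n")
        return ret.decode("utf-8")


cfg = tempfile.mkdtemp()
a = get_salt(cfg, "fk", 18)
b = get_salt(cfg, "fk", 18)
print(a == b)  # True: the second call reads the cached value back
```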
```diff
@@ -1039,6 +1010,7 @@ def add_upload(ap):
     ap2.add_argument("--turbo", metavar="LVL", type=int, default=0, help="configure turbo-mode in up2k client; [\033[32m-1\033[0m] = forbidden/always-off, [\033[32m0\033[0m] = default-off and warn if enabled, [\033[32m1\033[0m] = default-off, [\033[32m2\033[0m] = on, [\033[32m3\033[0m] = on and disable datecheck")
     ap2.add_argument("--u2j", metavar="JOBS", type=int, default=2, help="web-client: number of file chunks to upload in parallel; 1 or 2 is good for low-latency (same-country) connections, 4-8 for android clients, 16 for cross-atlantic (max=64)")
     ap2.add_argument("--u2sz", metavar="N,N,N", type=u, default="1,64,96", help="web-client: default upload chunksize (MiB); sets \033[33mmin,default,max\033[0m in the settings gui. Each HTTP POST will aim for \033[33mdefault\033[0m, and never exceed \033[33mmax\033[0m. Cloudflare max is 96. Big values are good for cross-atlantic but may increase HDD fragmentation on some FS. Disable this optimization with [\033[32m1,1,1\033[0m]")
+    ap2.add_argument("--u2ow", metavar="NUM", type=int, default=0, help="web-client: default setting for when to overwrite existing files; [\033[32m0\033[0m]=never, [\033[32m1\033[0m]=if-client-newer, [\033[32m2\033[0m]=always (volflag=u2ow)")
     ap2.add_argument("--u2sort", metavar="TXT", type=u, default="s", help="upload order; [\033[32ms\033[0m]=smallest-first, [\033[32mn\033[0m]=alphabetical, [\033[32mfs\033[0m]=force-s, [\033[32mfn\033[0m]=force-n -- alphabetical is a bit slower on fiber/LAN but makes it easier to eyeball if everything went fine")
     ap2.add_argument("--write-uplog", action="store_true", help="write POST reports to textfiles in working-directory")
```
```diff
@@ -1264,12 +1236,16 @@ def add_optouts(ap):
     ap2.add_argument("-nih", action="store_true", help="no info hostname -- don't show in UI")
     ap2.add_argument("-nid", action="store_true", help="no info disk-usage -- don't show in UI")
     ap2.add_argument("-nb", action="store_true", help="no powered-by-copyparty branding in UI")
+    ap2.add_argument("--zipmaxn", metavar="N", type=u, default="0", help="reject download-as-zip if more than \033[33mN\033[0m files in total; optionally takes a unit suffix: [\033[32m256\033[0m], [\033[32m9K\033[0m], [\033[32m4G\033[0m] (volflag=zipmaxn)")
+    ap2.add_argument("--zipmaxs", metavar="SZ", type=u, default="0", help="reject download-as-zip if total download size exceeds \033[33mSZ\033[0m bytes; optionally takes a unit suffix: [\033[32m256M\033[0m], [\033[32m4G\033[0m], [\033[32m2T\033[0m] (volflag=zipmaxs)")
+    ap2.add_argument("--zipmaxt", metavar="TXT", type=u, default="", help="custom errormessage when download size exceeds max (volflag=zipmaxt)")
+    ap2.add_argument("--zipmaxu", action="store_true", help="authenticated users bypass the zip size limit (volflag=zipmaxu)")
     ap2.add_argument("--zip-who", metavar="LVL", type=int, default=3, help="who can download as zip/tar? [\033[32m0\033[0m]=nobody, [\033[32m1\033[0m]=admins, [\033[32m2\033[0m]=authenticated-with-read-access, [\033[32m3\033[0m]=everyone-with-read-access (volflag=zip_who)\n\033[1;31mWARNING:\033[0m if a nested volume has a more restrictive value than a parent volume, then this will be \033[33mignored\033[0m if the download is initiated from the parent, more lenient volume")
     ap2.add_argument("--no-zip", action="store_true", help="disable download as zip/tar; same as \033[33m--zip-who=0\033[0m")
     ap2.add_argument("--no-tarcmp", action="store_true", help="disable download as compressed tar (?tar=gz, ?tar=bz2, ?tar=xz, ?tar=gz:9, ...)")
     ap2.add_argument("--no-lifetime", action="store_true", help="do not allow clients (or server config) to schedule an upload to be deleted after a given time")
     ap2.add_argument("--no-pipe", action="store_true", help="disable race-the-beam (lockstep download of files which are currently being uploaded) (volflag=nopipe)")
-    ap2.add_argument("--no-db-ip", action="store_true", help="do not write uploader IPs into the database")
+    ap2.add_argument("--no-db-ip", action="store_true", help="do not write uploader-IP into the database; will also disable unpost, you may want \033[32m--forget-ip\033[0m instead (volflag=no_db_ip)")


 def add_safety(ap):
```
```diff
@@ -1419,6 +1395,7 @@ def add_db_general(ap, hcores):
     ap2.add_argument("--no-dhash", action="store_true", help="disable rescan acceleration; do full database integrity check -- makes the db ~5%% smaller and bootup/rescans 3~10x slower")
     ap2.add_argument("--re-dhash", action="store_true", help="force a cache rebuild on startup; enable this once if it gets out of sync (should never be necessary)")
     ap2.add_argument("--no-forget", action="store_true", help="never forget indexed files, even when deleted from disk -- makes it impossible to ever upload the same file twice -- only useful for offloading uploads to a cloud service or something (volflag=noforget)")
+    ap2.add_argument("--forget-ip", metavar="MIN", type=int, default=0, help="remove uploader-IP from database (and make unpost impossible) \033[33mMIN\033[0m minutes after upload, for GDPR reasons. Default [\033[32m0\033[0m] is never-forget. [\033[32m1440\033[0m]=day, [\033[32m10080\033[0m]=week, [\033[32m43200\033[0m]=month. (volflag=forget_ip)")
     ap2.add_argument("--dbd", metavar="PROFILE", default="wal", help="database durability profile; sets the tradeoff between robustness and speed, see \033[33m--help-dbd\033[0m (volflag=dbd)")
     ap2.add_argument("--xlink", action="store_true", help="on upload: check all volumes for dupes, not just the target volume (probably buggy, not recommended) (volflag=xlink)")
     ap2.add_argument("--hash-mt", metavar="CORES", type=int, default=hcores, help="num cpu cores to use for file hashing; set 0 or 1 for single-core hashing")
```
```diff
@@ -1550,9 +1527,9 @@ def run_argparse(
     cert_path = os.path.join(E.cfg, "cert.pem")

-    fk_salt = get_fk_salt()
-    dk_salt = get_dk_salt()
-    ah_salt = get_ah_salt()
+    fk_salt = get_salt("fk", 18)
+    dk_salt = get_salt("dk", 30)
+    ah_salt = get_salt("ah", 18)

     # alpine peaks at 5 threads for some reason,
     # all others scale past that (but try to avoid SMT),
```
```diff
@@ -1,8 +1,8 @@
 # coding: utf-8

-VERSION = (1, 16, 13)
+VERSION = (1, 16, 17)
 CODENAME = "COPYparty"
-BUILD_DT = (2025, 2, 13)
+BUILD_DT = (2025, 3, 16)

 S_VERSION = ".".join(map(str, VERSION))
 S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)
```
```diff
@@ -33,6 +33,7 @@ from .util import (
     get_df,
     humansize,
     odfusion,
+    read_utf8,
     relchk,
     statdir,
     ub64enc,
```
```diff
@@ -46,7 +47,7 @@ from .util import (
 if True:  # pylint: disable=using-constant-test
     from collections.abc import Iterable

-    from typing import Any, Generator, Optional, Union
+    from typing import Any, Generator, Optional, Sequence, Union

     from .util import NamedLogger, RootLogger
```
```diff
@@ -71,6 +72,8 @@ SSEELOG = " ({})".format(SEE_LOG)
 BAD_CFG = "invalid config; {}".format(SEE_LOG)
 SBADCFG = " ({})".format(BAD_CFG)

+PTN_U_GRP = re.compile(r"\$\{u%([+-])([^}]+)\}")
+

 class CfgEx(Exception):
     pass
```
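The new `PTN_U_GRP` pattern appears to match `${u%+group}` / `${u%-group}` placeholders; a quick demonstration of what the two capture groups return (the sample paths are made up):

```python
import re

# the same pattern as authsrv.py's PTN_U_GRP
ptn = re.compile(r"\$\{u%([+-])([^}]+)\}")

m = ptn.search("/home/${u%+admins}/files")
print(m.groups())  # ('+', 'admins')

m = ptn.search("/home/${u%-guests}/files")
print(m.groups())  # ('-', 'guests')
```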
```diff
@@ -342,22 +345,26 @@ class VFS(object):
         log: Optional["RootLogger"],
         realpath: str,
         vpath: str,
+        vpath0: str,
         axs: AXS,
         flags: dict[str, Any],
     ) -> None:
         self.log = log
         self.realpath = realpath  # absolute path on host filesystem
         self.vpath = vpath  # absolute path in the virtual filesystem
+        self.vpath0 = vpath0  # original vpath (before idp expansion)
         self.axs = axs
         self.flags = flags  # config options
         self.root = self
         self.dev = 0  # st_dev
+        self.badcfg1 = False
         self.nodes: dict[str, VFS] = {}  # child nodes
         self.histtab: dict[str, str] = {}  # all realpath->histpath
         self.dbv: Optional[VFS] = None  # closest full/non-jump parent
         self.lim: Optional[Lim] = None  # upload limits; only set for dbv
         self.shr_src: Optional[tuple[VFS, str]] = None  # source vfs+rem of a share
         self.shr_files: set[str] = set()  # filenames to include from shr_src
+        self.shr_owner: str = ""  # uname
         self.aread: dict[str, list[str]] = {}
         self.awrite: dict[str, list[str]] = {}
         self.amove: dict[str, list[str]] = {}
```
```diff
@@ -375,7 +382,7 @@ class VFS(object):
             vp = vpath + ("/" if vpath else "")
             self.histpath = os.path.join(realpath, ".hist")  # db / thumbcache
             self.all_vols = {vpath: self}  # flattened recursive
-            self.all_nodes = {vpath: self}  # also jumpvols
+            self.all_nodes = {vpath: self}  # also jumpvols/shares
             self.all_aps = [(rp, self)]
             self.all_vps = [(vp, self)]
         else:
```
```diff
@@ -415,7 +422,7 @@ class VFS(object):
         for v in self.nodes.values():
             v.get_all_vols(vols, nodes, aps, vps)

-    def add(self, src: str, dst: str) -> "VFS":
+    def add(self, src: str, dst: str, dst0: str) -> "VFS":
         """get existing, or add new path to the vfs"""
         assert src == "/" or not src.endswith("/")  # nosec
         assert not dst.endswith("/")  # nosec
```
```diff
@@ -423,20 +430,22 @@ class VFS(object):
         if "/" in dst:
             # requires breadth-first population (permissions trickle down)
             name, dst = dst.split("/", 1)
+            name0, dst0 = dst0.split("/", 1)
             if name in self.nodes:
                 # exists; do not manipulate permissions
-                return self.nodes[name].add(src, dst)
+                return self.nodes[name].add(src, dst, dst0)

             vn = VFS(
                 self.log,
                 os.path.join(self.realpath, name) if self.realpath else "",
                 "{}/{}".format(self.vpath, name).lstrip("/"),
+                "{}/{}".format(self.vpath0, name0).lstrip("/"),
                 self.axs,
                 self._copy_flags(name),
             )
             vn.dbv = self.dbv or self
             self.nodes[name] = vn
-            return vn.add(src, dst)
+            return vn.add(src, dst, dst0)

         if dst in self.nodes:
             # leaf exists; return as-is
```
@@ -444,7 +453,8 @@ class VFS(object):
|
||||
|
||||
# leaf does not exist; create and keep permissions blank
|
||||
vp = "{}/{}".format(self.vpath, dst).lstrip("/")
|
||||
vn = VFS(self.log, src, vp, AXS(), {})
|
||||
vp0 = "{}/{}".format(self.vpath0, dst0).lstrip("/")
|
||||
vn = VFS(self.log, src, vp, vp0, AXS(), {})
|
||||
vn.dbv = self.dbv or self
|
||||
self.nodes[dst] = vn
|
||||
return vn
|
||||
@@ -861,7 +871,7 @@ class AuthSrv(object):
|
||||
self.indent = ""
|
||||
|
||||
# fwd-decl
|
||||
self.vfs = VFS(log_func, "", "", AXS(), {})
|
||||
self.vfs = VFS(log_func, "", "", "", AXS(), {})
|
||||
self.acct: dict[str, str] = {} # uname->pw
|
||||
self.iacct: dict[str, str] = {} # pw->uname
|
||||
self.ases: dict[str, str] = {} # uname->session
|
||||
@@ -929,7 +939,7 @@ class AuthSrv(object):
|
||||
self,
|
||||
src: str,
|
||||
dst: str,
|
||||
mount: dict[str, str],
|
||||
mount: dict[str, tuple[str, str]],
|
||||
daxs: dict[str, AXS],
|
||||
mflags: dict[str, dict[str, Any]],
|
||||
un_gns: dict[str, list[str]],
|
||||
@@ -945,12 +955,24 @@ class AuthSrv(object):
|
||||
un_gn = [("", "")]
|
||||
|
||||
for un, gn in un_gn:
|
||||
m = PTN_U_GRP.search(dst0)
|
||||
if m:
|
||||
req, gnc = m.groups()
|
||||
hit = gnc in (un_gns.get(un) or [])
|
||||
if req == "+":
|
||||
if not hit:
|
||||
continue
|
||||
elif hit:
|
||||
continue
|
||||
|
||||
# if ap/vp has a user/group placeholder, make sure to keep
|
||||
# track so the same user/group is mapped when setting perms;
|
||||
# otherwise clear un/gn to indicate it's a regular volume
|
||||
|
||||
src1 = src0.replace("${u}", un or "\n")
|
||||
dst1 = dst0.replace("${u}", un or "\n")
|
||||
src1 = PTN_U_GRP.sub(un or "\n", src1)
|
||||
dst1 = PTN_U_GRP.sub(un or "\n", dst1)
|
||||
if src0 == src1 and dst0 == dst1:
|
||||
un = ""
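The group-conditional branch above reads as: a `+` placeholder requires group membership, a `-` placeholder requires non-membership, and a volume with no placeholder applies to everyone. A standalone sketch of that logic; the regex below is an assumption standing in for copyparty's real `PTN_U_GRP`:

```python
import re

# hypothetical stand-in for PTN_U_GRP: captures the +/- requirement
# and the group name from placeholders like "${g+admins}" / "${g-admins}"
PTN_U_GRP = re.compile(r"\$\{g([-+])([^}]+)\}")

def volume_applies(dst: str, user_groups: list) -> bool:
    m = PTN_U_GRP.search(dst)
    if not m:
        return True  # no group condition; volume applies to everyone
    req, gname = m.groups()
    hit = gname in user_groups
    return hit if req == "+" else not hit
```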

@@ -967,7 +989,7 @@ class AuthSrv(object):
            continue
        visited.add(label)

        src, dst = self._map_volume(src, dst, mount, daxs, mflags)
        src, dst = self._map_volume(src, dst, dst0, mount, daxs, mflags)
        if src:
            ret.append((src, dst, un, gn))
            if un or gn:
@@ -979,7 +1001,8 @@ class AuthSrv(object):
        self,
        src: str,
        dst: str,
        mount: dict[str, str],
        dst0: str,
        mount: dict[str, tuple[str, str]],
        daxs: dict[str, AXS],
        mflags: dict[str, dict[str, Any]],
    ) -> tuple[str, str]:
@@ -989,13 +1012,13 @@ class AuthSrv(object):

        if dst in mount:
            t = "multiple filesystem-paths mounted at [/{}]:\n [{}]\n [{}]"
            self.log(t.format(dst, mount[dst], src), c=1)
            self.log(t.format(dst, mount[dst][0], src), c=1)
            raise Exception(BAD_CFG)

        if src in mount.values():
            t = "filesystem-path [{}] mounted in multiple locations:"
            t = t.format(src)
            for v in [k for k, v in mount.items() if v == src] + [dst]:
            for v in [k for k, v in mount.items() if v[0] == src] + [dst]:
                t += "\n /{}".format(v)

            self.log(t, c=3)
@@ -1004,7 +1027,7 @@ class AuthSrv(object):
        if not bos.path.isdir(src):
            self.log("warning: filesystem-path does not exist: {}".format(src), 3)

        mount[dst] = src
        mount[dst] = (src, dst0)
        daxs[dst] = AXS()
        mflags[dst] = {}
        return (src, dst)
@@ -1065,7 +1088,7 @@ class AuthSrv(object):
        grps: dict[str, list[str]],
        daxs: dict[str, AXS],
        mflags: dict[str, dict[str, Any]],
        mount: dict[str, str],
        mount: dict[str, tuple[str, str]],
    ) -> None:
        self.line_ctr = 0

@@ -1090,7 +1113,7 @@ class AuthSrv(object):
        grps: dict[str, list[str]],
        daxs: dict[str, AXS],
        mflags: dict[str, dict[str, Any]],
        mount: dict[str, str],
        mount: dict[str, tuple[str, str]],
        npass: int,
    ) -> None:
        self.line_ctr = 0
@@ -1389,8 +1412,16 @@ class AuthSrv(object):
            name = name.lower()

            # volflags are snake_case, but a leading dash is the removal operator
            if name not in flagdescs and "-" in name[1:]:
                name = name[:1] + name[1:].replace("-", "_")
            stripped = name.lstrip("-")
            zi = len(name) - len(stripped)
            if zi > 1:
                t = "WARNING: the config for volume [/%s] specified a volflag with multiple leading hyphens (%s); use one hyphen to remove, or zero hyphens to add a flag. Will now enable flag [%s]"
                self.log(t % (vpath, name, stripped), 3)
                name = stripped
                zi = 0

            if stripped not in flagdescs and "-" in stripped:
                name = ("-" * zi) + stripped.replace("-", "_")

            desc = flagdescs.get(name.lstrip("-"), "?").replace("\n", " ")
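The new normalization boils down to: strip leading hyphens, treat two or more as a typo for zero (add the flag, with a warning), keep exactly one (the removal operator), and convert inner hyphens to underscores when the hyphenated name is unknown. A minimal standalone sketch of that flow, with a toy `flagdescs`:

```python
def norm_volflag(name: str, flagdescs: dict) -> str:
    stripped = name.lstrip("-")
    zi = len(name) - len(stripped)  # number of leading hyphens
    if zi > 1:
        # "--flag" is treated as a plain add (the real code also logs a warning)
        name, zi = stripped, 0
    if stripped not in flagdescs and "-" in stripped:
        # volflags are snake_case; map "no-dupe" -> "no_dupe"
        name = ("-" * zi) + stripped.replace("-", "_")
    return name
```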

@@ -1441,8 +1472,8 @@ class AuthSrv(object):
        acct: dict[str, str] = {}  # username:password
        grps: dict[str, list[str]] = {}  # groupname:usernames
        daxs: dict[str, AXS] = {}
        mflags: dict[str, dict[str, Any]] = {}  # moutpoint:flags
        mount: dict[str, str] = {}  # dst:src (mountpoint:realpath)
        mflags: dict[str, dict[str, Any]] = {}  # vpath:flags
        mount: dict[str, tuple[str, str]] = {}  # dst:src (vp:(ap,vp0))

        self.idp_vols = {}  # yolo

@@ -1521,8 +1552,8 @@ class AuthSrv(object):
        # case-insensitive; normalize
        if WINDOWS:
            cased = {}
            for k, v in mount.items():
                cased[k] = absreal(v)
            for vp, (ap, vp0) in mount.items():
                cased[vp] = (absreal(ap), vp0)

            mount = cased

@@ -1537,25 +1568,28 @@ class AuthSrv(object):
            t = "Read-access has been disabled due to failsafe: No volumes were defined by the config-file. This failsafe is to prevent unintended access if this is due to accidental loss of config. You can override this safeguard and allow read/write to the working-directory by adding the following arguments: -v .::rw"
            self.log(t, 1)
            axs = AXS()
            vfs = VFS(self.log_func, absreal("."), "", axs, {})
            vfs = VFS(self.log_func, absreal("."), "", "", axs, {})
            if not axs.uread:
                vfs.badcfg1 = True
        elif "" not in mount:
            # there's volumes but no root; make root inaccessible
            zsd = {"d2d": True, "tcolor": self.args.tcolor}
            vfs = VFS(self.log_func, "", "", AXS(), zsd)
            vfs = VFS(self.log_func, "", "", "", AXS(), zsd)

        maxdepth = 0
        for dst in sorted(mount.keys(), key=lambda x: (x.count("/"), len(x))):
            depth = dst.count("/")
            assert maxdepth <= depth  # nosec
            maxdepth = depth
            src, dst0 = mount[dst]

            if dst == "":
                # rootfs was mapped; fully replaces the default CWD vfs
                vfs = VFS(self.log_func, mount[dst], dst, daxs[dst], mflags[dst])
                vfs = VFS(self.log_func, src, dst, dst0, daxs[dst], mflags[dst])
                continue

            assert vfs  # type: ignore
            zv = vfs.add(mount[dst], dst)
            zv = vfs.add(src, dst, dst0)
            zv.axs = daxs[dst]
            zv.flags = mflags[dst]
            zv.dbv = None
@@ -1576,7 +1610,8 @@ class AuthSrv(object):
        for vol in vfs.all_vols.values():
            unknown_flags = set()
            for k, v in vol.flags.items():
                if k not in flagdescs and k not in k_ign:
                ks = k.lstrip("-")
                if ks not in flagdescs and ks not in k_ign:
                    unknown_flags.add(k)
            if unknown_flags:
                t = "WARNING: the config for volume [/%s] has unrecognized volflags; will ignore: '%s'"
@@ -1588,7 +1623,8 @@ class AuthSrv(object):
        if enshare:
            import sqlite3

            shv = VFS(self.log_func, "", shr, AXS(), {})
            zsd = {"d2d": True, "tcolor": self.args.tcolor}
            shv = VFS(self.log_func, "", shr, shr, AXS(), zsd)

            db_path = self.args.shr_db
            db = sqlite3.connect(db_path)
@@ -1622,9 +1658,8 @@ class AuthSrv(object):

                # don't know the abspath yet + wanna ensure the user
                # still has the privs they granted, so nullmap it
                shv.nodes[s_k] = VFS(
                    self.log_func, "", "%s/%s" % (shr, s_k), s_axs, shv.flags.copy()
                )
                vp = "%s/%s" % (shr, s_k)
                shv.nodes[s_k] = VFS(self.log_func, "", vp, vp, s_axs, shv.flags.copy())

            vfs.nodes[shr] = vfs.all_vols[shr] = shv
            for vol in shv.nodes.values():
@@ -1785,6 +1820,24 @@ class AuthSrv(object):
                rhisttab[histp] = zv
                vfs.histtab[zv.realpath] = histp

        for vol in vfs.all_vols.values():
            use = False
            for k in ["zipmaxn", "zipmaxs"]:
                try:
                    zs = vol.flags[k]
                except:
                    zs = getattr(self.args, k)
                if zs in ("", "0"):
                    vol.flags[k] = 0
                    continue

                zf = unhumanize(zs)
                vol.flags[k + "_v"] = zf
                if zf:
                    use = True
            if use:
                vol.flags["zipmax"] = True
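`unhumanize` (imported from copyparty's util module) turns human-friendly size strings like `9k` or `2g` into raw numbers before they are cached in the `_v` volflags. The real implementation is not shown here; a rough sketch of the assumed behavior:

```python
def unhumanize_sketch(zs: str) -> int:
    # assumption: "9k" -> 9 KiB, "2g" -> 2 GiB; bare numbers pass through
    mul = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3, "t": 1024 ** 4}
    zs = zs.lower()
    if zs and zs[-1] in mul:
        return int(float(zs[:-1]) * mul[zs[-1]])
    return int(zs)
```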

        for vol in vfs.all_vols.values():
            lim = Lim(self.log_func)
            use = False
@@ -1943,11 +1996,8 @@ class AuthSrv(object):
                if vf not in vol.flags:
                    vol.flags[vf] = getattr(self.args, ga)

            for k in ("nrand",):
                if k not in vol.flags:
                    vol.flags[k] = getattr(self.args, k)

            for k in ("nrand", "u2abort", "ups_who", "zip_who"):
            zs = "forget_ip nrand u2abort u2ow ups_who zip_who"
            for k in zs.split():
                if k in vol.flags:
                    vol.flags[k] = int(vol.flags[k])

@@ -2171,8 +2221,13 @@ class AuthSrv(object):
        for vol in vfs.all_nodes.values():
            for k in list(vol.flags.keys()):
                if re.match("^-[^-]+$", k):
                    vol.flags.pop(k[1:], None)
                    vol.flags.pop(k)
                    zs = k[1:]
                    if zs in vol.flags:
                        vol.flags.pop(k[1:])
                    else:
                        t = "WARNING: the config for volume [/%s] tried to remove volflag [%s] by specifying [%s] but that volflag was not already set"
                        self.log(t % (vol.vpath, zs, k), 3)

            if vol.flags.get("dots"):
                for name in vol.axs.uread:
@@ -2265,22 +2320,56 @@ class AuthSrv(object):
        except Pebkac:
            self.warn_anonwrite = True

        idp_err = "WARNING! The following IdP volumes are mounted directly below another volume where anonymous users can read and/or write files. This is a SECURITY HAZARD!! When copyparty is restarted, it will not know about these IdP volumes yet. These volumes will then be accessible by anonymous users UNTIL one of the users associated with their volume sends a request to the server. RECOMMENDATION: You should create a restricted volume where nobody can read/write files, and make sure that all IdP volumes are configured to appear somewhere below that volume."
        self.idp_warn = []
        self.idp_err = []
        for idp_vp in self.idp_vols:
            parent_vp = vsplit(idp_vp)[0]
            vn, _ = vfs.get(parent_vp, "*", False, False)
            zs = (
                "READABLE"
                if "*" in vn.axs.uread
                else "WRITABLE"
                if "*" in vn.axs.uwrite
                else ""
            )
            if zs:
                t = '\nWARNING: Volume "/%s" appears below "/%s" and would be WORLD-%s'
                idp_err += t % (idp_vp, vn.vpath, zs)
        if "\n" in idp_err:
            self.log(idp_err, 1)
            idp_vn, _ = vfs.get(idp_vp, "*", False, False)
            idp_vp0 = idp_vn.vpath0

            sigils = set(re.findall(r"(\${[ug][}%])", idp_vp0))
            if len(sigils) > 1:
                t = '\nWARNING: IdP-volume "/%s" created by "/%s" has multiple IdP placeholders: %s'
                self.idp_warn.append(t % (idp_vp, idp_vp0, list(sigils)))
                continue

            sigil = sigils.pop()
            par_vp = idp_vp
            while par_vp:
                par_vp = vsplit(par_vp)[0]
                par_vn, _ = vfs.get(par_vp, "*", False, False)
                if sigil in par_vn.vpath0:
                    continue  # parent was spawned for and by same user

                oth_read = []
                oth_write = []
                for usr in par_vn.axs.uread:
                    if usr not in idp_vn.axs.uread:
                        oth_read.append(usr)
                for usr in par_vn.axs.uwrite:
                    if usr not in idp_vn.axs.uwrite:
                        oth_write.append(usr)

                if "*" in oth_read:
                    taxs = "WORLD-READABLE"
                elif "*" in oth_write:
                    taxs = "WORLD-WRITABLE"
                elif oth_read:
                    taxs = "READABLE BY %r" % (oth_read,)
                elif oth_write:
                    taxs = "WRITABLE BY %r" % (oth_write,)
                else:
                    break  # no sigil; not idp; safe to stop

                t = '\nWARNING: IdP-volume "/%s" created by "/%s" has parent/grandparent "/%s" and would be %s'
                self.idp_err.append(t % (idp_vp, idp_vp0, par_vn.vpath, taxs))

        if self.idp_warn:
            t = "WARNING! Some IdP volumes include multiple IdP placeholders; this is too complex to automatically determine if safe or not. To ensure that no users gain unintended access, please use only a single placeholder for each IdP volume."
            self.log(t + "".join(self.idp_warn), 1)

        if self.idp_err:
            t = "WARNING! The following IdP volumes are mounted below another volume where other users can read and/or write files. This is a SECURITY HAZARD!! When copyparty is restarted, it will not know about these IdP volumes yet. These volumes will then be accessible by an unexpected set of permissions UNTIL one of the users associated with their volume sends a request to the server. RECOMMENDATION: You should create a restricted volume where nobody can read/write files, and make sure that all IdP volumes are configured to appear somewhere below that volume."
            self.log(t + "".join(self.idp_err), 1)

        self.vfs = vfs
        self.acct = acct
@@ -2315,11 +2404,6 @@ class AuthSrv(object):
                for x, y in vfs.all_vols.items()
                if x != shr and not x.startswith(shrs)
            }
            vfs.all_nodes = {
                x: y
                for x, y in vfs.all_nodes.items()
                if x != shr and not x.startswith(shrs)
            }

            assert db and cur and cur2 and shv  # type: ignore
            for row in cur.execute("select * from sh"):
@@ -2349,6 +2433,7 @@ class AuthSrv(object):
                else:
                    shn.ls = shn._ls

                shn.shr_owner = s_un
                shn.shr_src = (s_vfs, s_rem)
                shn.realpath = s_vfs.canonical(s_rem)

@@ -2366,7 +2451,7 @@ class AuthSrv(object):
                    continue  # also fine
                for zs in svn.nodes.keys():
                    # hide subvolume
                    vn.nodes[zs] = VFS(self.log_func, "", "", AXS(), {})
                    vn.nodes[zs] = VFS(self.log_func, "", "", "", AXS(), {})

            cur2.close()
            cur.close()
@@ -2374,7 +2459,9 @@ class AuthSrv(object):

        self.js_ls = {}
        self.js_htm = {}
        for vn in self.vfs.all_nodes.values():
        for vp, vn in self.vfs.all_nodes.items():
            if enshare and vp.startswith(shrs):
                continue  # propagates later in this func
            vf = vn.flags
            vn.js_ls = {
                "idx": "e2d" in vf,
@@ -2422,6 +2509,7 @@ class AuthSrv(object):
                "u2j": self.args.u2j,
                "u2sz": self.args.u2sz,
                "u2ts": vf["u2ts"],
                "u2ow": vf["u2ow"],
                "frand": bool(vf.get("rand")),
                "lifetime": vn.js_ls["lifetime"],
                "u2sort": self.args.u2sort,
@@ -2431,8 +2519,12 @@ class AuthSrv(object):
        vols = list(vfs.all_nodes.values())
        if enshare:
            assert shv  # type: ignore # !rm
            vols.append(shv)
            vols.extend(list(shv.nodes.values()))
            for vol in shv.nodes.values():
                if vol.vpath not in vfs.all_nodes:
                    self.log("BUG: /%s not in all_nodes" % (vol.vpath,), 1)
                vols.append(vol)
            if shr in vfs.all_nodes:
                self.log("BUG: %s found in all_nodes" % (shr,), 1)

        for vol in vols:
            dbv = vol.get_dbv("")[0]
@@ -2535,8 +2627,8 @@ class AuthSrv(object):
        if not bos.path.exists(ap):
            pwdb = {}
        else:
            with open(ap, "r", encoding="utf-8") as f:
                pwdb = json.load(f)
            jtxt = read_utf8(self.log, ap, True)
            pwdb = json.loads(jtxt)

        pwdb = [x for x in pwdb if x[0] != uname]
        pwdb.append((uname, self.defpw[uname], hpw))
@@ -2559,8 +2651,8 @@ class AuthSrv(object):
        if not self.args.chpw or not bos.path.exists(ap):
            return

        with open(ap, "r", encoding="utf-8") as f:
            pwdb = json.load(f)
        jtxt = read_utf8(self.log, ap, True)
        pwdb = json.loads(jtxt)

        useen = set()
        urst = set()
@@ -2674,7 +2766,7 @@ class AuthSrv(object):
    def dbg_ls(self) -> None:
        users = self.args.ls
        vol = "*"
        flags: list[str] = []
        flags: Sequence[str] = []

        try:
            users, vol = users.split(",", 1)
@@ -3056,8 +3148,9 @@ def expand_config_file(
        ipath += " -> " + fp
        ret.append("#\033[36m opening cfg file{}\033[0m".format(ipath))

    with open(fp, "rb") as f:
        for oln in [x.decode("utf-8").rstrip() for x in f]:
    cfg_lines = read_utf8(log, fp, True).split("\n")
    if True:  # diff-golf
        for oln in [x.rstrip() for x in cfg_lines]:
            ln = oln.split(" #")[0].strip()
            if ln.startswith("% "):
                pad = " " * len(oln.split("%")[0])

@@ -43,6 +43,7 @@ def vf_bmap() -> dict[str, str]:
        "gsel",
        "hardlink",
        "magic",
        "no_db_ip",
        "no_sb_md",
        "no_sb_lg",
        "nsort",
@@ -54,6 +55,7 @@ def vf_bmap() -> dict[str, str]:
        "xdev",
        "xlink",
        "xvol",
        "zipmaxu",
    ):
        ret[k] = k
    return ret
@@ -73,6 +75,7 @@ def vf_vmap() -> dict[str, str]:
    }
    for k in (
        "dbd",
        "forget_ip",
        "hsortn",
        "html_head",
        "lg_sbf",
@@ -80,6 +83,7 @@ def vf_vmap() -> dict[str, str]:
        "lg_sba",
        "md_sba",
        "nrand",
        "u2ow",
        "og_desc",
        "og_site",
        "og_th",
@@ -98,6 +102,9 @@ def vf_vmap() -> dict[str, str]:
        "u2ts",
        "ups_who",
        "zip_who",
        "zipmaxn",
        "zipmaxs",
        "zipmaxt",
    ):
        ret[k] = k
    return ret
@@ -156,7 +163,8 @@ flagcats = {
        "daw": "enable full WebDAV write support (dangerous);\nPUT-operations will now \033[1;31mOVERWRITE\033[0;35m existing files",
        "nosub": "forces all uploads into the top folder of the vfs",
        "magic": "enables filetype detection for nameless uploads",
        "gz": "allows server-side gzip of uploads with ?gz (also c,xz)",
        "gz": "allows server-side gzip compression of uploads with ?gz",
        "xz": "allows server-side lzma compression of uploads with ?xz",
        "pk": "forces server-side compression, optional arg: xz,9",
    },
    "upload rules": {
@@ -167,6 +175,7 @@ flagcats = {
        "medialinks": "return medialinks for non-up2k uploads (not hotlinks)",
        "rand": "force randomized filenames, 9 chars long by default",
        "nrand=N": "randomized filenames are N chars long",
        "u2ow=N": "overwrite existing files? 0=no 1=if-older 2=always",
        "u2ts=fc": "[f]orce [c]lient-last-modified or [u]pload-time",
        "u2abort=1": "allow aborting unfinished uploads? 0=no 1=strict 2=ip-chk 3=acct-chk",
        "sz=1k-3m": "allow filesizes between 1 KiB and 3MiB",
@@ -197,6 +206,8 @@ flagcats = {
        "nohash=\\.iso$": "skips hashing file contents if path matches *.iso",
        "noidx=\\.iso$": "fully ignores the contents at paths matching *.iso",
        "noforget": "don't forget files when deleted from disk",
        "forget_ip=43200": "forget uploader-IP after 30 days (GDPR)",
        "no_db_ip": "never store uploader-IP in the db; disables unpost",
        "fat32": "avoid excessive reindexing on android sdcardfs",
        "dbd=[acid|swal|wal|yolo]": "database speed-durability tradeoff",
        "xlink": "cross-volume dupe detection / linking (dangerous)",
@@ -286,9 +297,16 @@ flagcats = {
        "dots": "allow all users with read-access to\nenable the option to show dotfiles in listings",
        "fk=8": 'generates per-file accesskeys,\nwhich are then required at the "g" permission;\nkeys are invalidated if filesize or inode changes',
        "fka=8": 'generates slightly weaker per-file accesskeys,\nwhich are then required at the "g" permission;\nnot affected by filesize or inode numbers',
        "dk=8": 'generates per-directory accesskeys,\nwhich are then required at the "g" permission;\nkeys are invalidated if filesize or inode changes',
        "dks": "per-directory accesskeys allow browsing into subdirs",
        "dky": 'allow seeing files (not folders) inside a specific folder\nwith "g" perm, and does not require a valid dirkey to do so',
        "rss": "allow '?rss' URL suffix (experimental)",
        "ups_who=2": "restrict viewing the list of recent uploads",
        "zip_who=2": "restrict access to download-as-zip/tar",
        "zipmaxn=9k": "reject download-as-zip if more than 9000 files",
        "zipmaxs=2g": "reject download-as-zip if size over 2 GiB",
        "zipmaxt=no": "reply with 'no' if download-as-zip exceeds max",
        "zipmaxu": "zip-size-limit does not apply to authenticated users",
        "nopipe": "disable race-the-beam (download unfinished uploads)",
        "mv_retry": "ms-windows: timeout for renaming busy files",
        "rm_retry": "ms-windows: timeout for deleting busy files",

@@ -78,7 +78,7 @@ class Fstab(object):
        return vid

    def build_fallback(self) -> None:
        self.tab = VFS(self.log_func, "idk", "/", AXS(), {})
        self.tab = VFS(self.log_func, "idk", "/", "/", AXS(), {})
        self.trusted = False

    def build_tab(self) -> None:
@@ -111,9 +111,10 @@ class Fstab(object):

        tab1.sort(key=lambda x: (len(x[0]), x[0]))
        path1, fs1 = tab1[0]
        tab = VFS(self.log_func, fs1, path1, AXS(), {})
        tab = VFS(self.log_func, fs1, path1, path1, AXS(), {})
        for path, fs in tab1[1:]:
            tab.add(fs, path.lstrip("/"))
            zs = path.lstrip("/")
            tab.add(fs, zs, zs)

        self.tab = tab
        self.srctab = srctab
@@ -130,9 +131,10 @@ class Fstab(object):
        if not self.trusted:
            # no mtab access; have to build as we go
            if "/" in rem:
                self.tab.add("idk", os.path.join(vn.vpath, rem.split("/")[0]))
                zs = os.path.join(vn.vpath, rem.split("/")[0])
                self.tab.add("idk", zs, zs)
            if rem:
                self.tab.add(nval, path)
                self.tab.add(nval, path, path)
            else:
                vn.realpath = nval

@@ -22,6 +22,7 @@ from datetime import datetime
from operator import itemgetter

import jinja2  # typechk
from ipaddress import IPv6Network

try:
    if os.environ.get("PRTY_NO_LZMA"):
@@ -89,6 +90,7 @@ from .util import (
    read_socket,
    read_socket_chunked,
    read_socket_unbounded,
    read_utf8,
    relchk,
    ren_open,
    runhook,
@@ -152,6 +154,8 @@ RE_HSAFE = re.compile(r"[\x00-\x1f<>\"'&]")  # search always much faster
RE_HOST = re.compile(r"[^][0-9a-zA-Z.:_-]")  # search faster <=17ch
RE_MHOST = re.compile(r"^[][0-9a-zA-Z.:_-]+$")  # match faster >=18ch
RE_K = re.compile(r"[^0-9a-zA-Z_-]")  # search faster <=17ch
RE_HR = re.compile(r"[<>\"'&]")
RE_MDV = re.compile(r"(.*)\.([0-9]+\.[0-9]{3})(\.[Mm][Dd])$")

UPARAM_CC_OK = set("doc move tree".split())

@@ -385,11 +389,12 @@ class HttpCli(object):
                t += ' Note: if you are behind cloudflare, then this default header is not a good choice; please first make sure your local reverse-proxy (if any) does not allow non-cloudflare IPs from providing cf-* headers, and then add this additional global setting: "--xff-hdr=cf-connecting-ip"'
            else:
                t += ' Note: depending on your reverse-proxy, and/or WAF, and/or other intermediates, you may want to read the true client IP from another header by also specifying "--xff-hdr=SomeOtherHeader"'
            zs = (
                ".".join(pip.split(".")[:2]) + "."
                if "." in pip
                else ":".join(pip.split(":")[:4]) + ":"
            ) + "0.0/16"

            if "." in pip:
                zs = ".".join(pip.split(".")[:2]) + ".0.0/16"
            else:
                zs = IPv6Network(pip + "/64", False).compressed

            zs2 = ' or "--xff-src=lan"' if self.conn.xff_lan.map(pip) else ""
            self.log(t % (self.args.xff_hdr, pip, cli_ip, zso, zs, zs2), 3)
            self.bad_xff = True
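The replacement delegates IPv6 handling to `ipaddress.IPv6Network` with `strict=False`, which masks out the host bits instead of raising `ValueError`, and which (unlike the old hextet-joining approach) handles compressed addresses correctly. For example:

```python
from ipaddress import IPv6Network

# strict=False zeroes the host bits rather than rejecting the address
net = IPv6Network("2001:db8::1234/64", False).compressed
print(net)  # 2001:db8::/64
```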
@@ -866,8 +871,7 @@ class HttpCli(object):
            html = html.replace("%", "", 1)

        if html.startswith("@"):
            with open(html[1:], "rb") as f:
                html = f.read().decode("utf-8")
            html = read_utf8(self.log, html[1:], True)

        if html.startswith("%"):
            html = html[1:]
@@ -1234,14 +1238,7 @@ class HttpCli(object):
                return self.tx_404(True)
        else:
            vfs = self.asrv.vfs
            if (
                not vfs.nodes
                and not vfs.axs.uread
                and not vfs.axs.uwrite
                and not vfs.axs.uget
                and not vfs.axs.uhtml
                and not vfs.axs.uadmin
            ):
            if vfs.badcfg1:
                t = "<h2>access denied due to failsafe; check server log</h2>"
                html = self.j2s("splash", this=self, msg=t)
                self.reply(html.encode("utf-8", "replace"), 500)
@@ -1811,7 +1808,8 @@ class HttpCli(object):
        dst = unquotep(dst)

        # overwrite=True is default; rfc4918 9.8.4
        overwrite = self.headers.get("overwrite", "").lower() != "f"
        zs = self.headers.get("overwrite", "").lower()
        overwrite = zs not in ["f", "false"]

        try:
            fun = self._cp if self.mode == "COPY" else self._mv
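Per RFC 4918 section 9.8.4, the WebDAV `Overwrite` header defaults to `T`, so an absent header means overwrite; the fix additionally accepts the non-standard `false` spelling that some clients send. The check, as a standalone function:

```python
def want_overwrite(headers: dict) -> bool:
    # absent header means overwrite=True (rfc4918 9.8.4);
    # "F" is the spec value, "false" is a client quirk
    return headers.get("overwrite", "").lower() not in ("f", "false")
```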
@@ -3735,8 +3733,7 @@ class HttpCli(object):
                continue
            fn = "%s/%s" % (abspath, fn)
            if bos.path.isfile(fn):
                with open(fsenc(fn), "rb") as f:
                    logues[n] = f.read().decode("utf-8")
                logues[n] = read_utf8(self.log, fsenc(fn), False)
                if "exp" in vn.flags:
                    logues[n] = self._expand(
                        logues[n], vn.flags.get("exp_lg") or []
@@ -3757,9 +3754,8 @@ class HttpCli(object):
            for fn in fns:
                fn = "%s/%s" % (abspath, fn)
                if bos.path.isfile(fn):
                    with open(fsenc(fn), "rb") as f:
                        txt = f.read().decode("utf-8")
                        break
                    txt = read_utf8(self.log, fsenc(fn), False)
                    break

        if txt and "exp" in vn.flags:
            txt = self._expand(txt, vn.flags.get("exp_md") or [])
@@ -3792,6 +3788,16 @@ class HttpCli(object):

        return txt

    def _can_zip(self, volflags: dict[str, Any]) -> str:
        lvl = volflags["zip_who"]
        if self.args.no_zip or not lvl:
            return "download-as-zip/tar is disabled in server config"
        elif lvl <= 1 and not self.can_admin:
            return "download-as-zip/tar is admin-only on this server"
        elif lvl <= 2 and self.uname in ("", "*"):
            return "you must be authenticated to download-as-zip/tar on this server"
        return ""

    def tx_res(self, req_path: str) -> bool:
        status = 200
        logmsg = "{:4} {} ".format("", self.req)
@@ -4324,13 +4330,8 @@ class HttpCli(object):
        rem: str,
        items: list[str],
    ) -> bool:
        lvl = vn.flags["zip_who"]
        if self.args.no_zip or not lvl:
            raise Pebkac(400, "download-as-zip/tar is disabled in server config")
        elif lvl <= 1 and not self.can_admin:
            raise Pebkac(400, "download-as-zip/tar is admin-only on this server")
        elif lvl <= 2 and self.uname in ("", "*"):
            t = "you must be authenticated to download-as-zip/tar on this server"
        t = self._can_zip(vn.flags)
        if t:
            raise Pebkac(400, t)

        logmsg = "{:4} {} ".format("", self.req)
@@ -4363,6 +4364,33 @@ class HttpCli(object):
        else:
            fn = self.host.split(":")[0]

        if vn.flags.get("zipmax") and (not self.uname or not "zipmaxu" in vn.flags):
            maxs = vn.flags.get("zipmaxs_v") or 0
            maxn = vn.flags.get("zipmaxn_v") or 0
            nf = 0
            nb = 0
            fgen = vn.zipgen(
                vpath, rem, set(items), self.uname, False, not self.args.no_scandir
            )
            t = "total size exceeds a limit specified in server config"
            t = vn.flags.get("zipmaxt") or t
            if maxs and maxn:
                for zd in fgen:
                    nf += 1
                    nb += zd["st"].st_size
                    if maxs < nb or maxn < nf:
                        raise Pebkac(400, t)
            elif maxs:
                for zd in fgen:
                    nb += zd["st"].st_size
                    if maxs < nb:
                        raise Pebkac(400, t)
            elif maxn:
                for zd in fgen:
                    nf += 1
                    if maxn < nf:
                        raise Pebkac(400, t)

        safe = (string.ascii_letters + string.digits).replace("%", "")
        afn = "".join([x if x in safe.replace('"', "") else "_" for x in fn])
        bascii = unicode(safe).encode("utf-8")
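The three loops above are specialized per limit so the hot path does the minimum of counting; functionally they amount to one walk over the zip generator that bails as soon as either limit is exceeded. A condensed sketch, with a plain list of sizes standing in for the entries `vn.zipgen` would yield:

```python
def zip_too_big(sizes, maxs=0, maxn=0):
    # sizes stands in for zd["st"].st_size of each file from zipgen;
    # maxs = byte limit, maxn = file-count limit, 0 = unlimited
    nb = nf = 0
    for sz in sizes:
        nf += 1
        nb += sz
        if (maxs and maxs < nb) or (maxn and maxn < nf):
            return True
    return False
```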
@@ -5009,6 +5037,8 @@ class HttpCli(object):
    def get_dls(self) -> list[list[Any]]:
        ret = []
        dls = self.conn.hsrv.tdls
        enshare = self.args.shr
        shrs = enshare[1:]
        for dl_id, (t0, sz, vn, vp, uname) in self.conn.hsrv.tdli.items():
            t1, sent = dls[dl_id]
            if sent > 0x100000:  # 1m; buffers 2~4
@@ -5017,6 +5047,15 @@ class HttpCli(object):
                vp = ""
            elif self.uname not in vn.axs.udot and (vp.startswith(".") or "/." in vp):
                vp = ""
            elif (
                enshare
                and vp.startswith(shrs)
                and self.uname != vn.shr_owner
                and self.uname not in vn.axs.uadmin
                and self.uname not in self.args.shr_adm
                and not dl_id.startswith(self.ip + ":")
            ):
                vp = ""
            if self.uname not in vn.axs.uadmin:
                dl_id = uname = ""

@@ -5968,7 +6007,7 @@ class HttpCli(object):
        # [num-backups, most-recent, hist-path]
        hist: dict[str, tuple[int, float, str]] = {}
        histdir = os.path.join(fsroot, ".hist")
        ptn = re.compile(r"(.*)\.([0-9]+\.[0-9]{3})(\.[^\.]+)$")
        ptn = RE_MDV
        try:
            for hfn in bos.listdir(histdir):
                m = ptn.match(hfn)
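Hoisting the pattern to module scope as `RE_MDV` means it is compiled once instead of per directory listing (it also narrows the old `(\.[^\.]+)$` suffix to markdown only). What it matches, with a hypothetical filename for illustration:

```python
import re

RE_MDV = re.compile(r"(.*)\.([0-9]+\.[0-9]{3})(\.[Mm][Dd])$")

# group 1: base name, group 2: timestamp with millis, group 3: .md suffix
m = RE_MDV.match("notes.1737000000.123.md")
```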
@@ -5999,8 +6038,11 @@ class HttpCli(object):
                zs = self.gen_fk(2, self.args.dk_salt, abspath, 0, 0)[:add_dk]
                ls_ret["dk"] = cgv["dk"] = zs

        no_zip = bool(self._can_zip(vf))

        dirs = []
        files = []
        ptn_hr = RE_HR
        for fn in ls_names:
            base = ""
            href = fn
@@ -6023,7 +6065,7 @@ class HttpCli(object):
            is_dir = stat.S_ISDIR(inf.st_mode)
            if is_dir:
                href += "/"
                if self.args.no_zip:
                if no_zip:
                    margin = "DIR"
                elif add_dk:
                    zs = absreal(fspath)
@@ -6036,7 +6078,7 @@ class HttpCli(object):
                        quotep(href),
                    )
            elif fn in hist:
                margin = '<a href="%s.hist/%s">#%s</a>' % (
                margin = '<a href="%s.hist/%s" rel="nofollow">#%s</a>' % (
                    base,
                    html_escape(hist[fn][2], quot=True, crlf=True),
                    hist[fn][0],
@@ -6055,11 +6097,13 @@ class HttpCli(object):
                zd.second,
            )

            try:
                ext = "---" if is_dir else fn.rsplit(".", 1)[1]
                if is_dir:
                    ext = "---"
                elif "." in fn:
                    ext = ptn_hr.sub("@", fn.rsplit(".", 1)[1])
                    if len(ext) > 16:
                        ext = ext[:16]
            except:
                else:
                    ext = "%"

            if add_fk and not is_dir:
|
||||
@@ -6246,9 +6290,7 @@ class HttpCli(object):
docpath = os.path.join(abspath, doc)
sz = bos.path.getsize(docpath)
if sz < 1024 * self.args.txt_max:
with open(fsenc(docpath), "rb") as f:
doctxt = f.read().decode("utf-8", "replace")

doctxt = read_utf8(self.log, fsenc(docpath), False)
if doc.lower().endswith(".md") and "exp" in vn.flags:
doctxt = self._expand(doctxt, vn.flags.get("exp_md") or [])
else:
@@ -1260,7 +1260,7 @@ class SvcHub(object):
raise

def check_mp_support(self) -> str:
if MACOS:
if MACOS and not os.environ.get("PRTY_FORCE_MP"):
return "multiprocessing is wonky on mac osx;"
elif sys.version_info < (3, 3):
return "need python 3.3 or newer for multiprocessing;"

@@ -1280,7 +1280,7 @@
return False

try:
if mp.cpu_count() <= 1:
if mp.cpu_count() <= 1 and not os.environ.get("PRTY_FORCE_MP"):
raise Exception()
except:
self.log("svchub", "only one CPU detected; multiprocessing disabled")
@@ -557,6 +557,7 @@ class Up2k(object):
else:
# important; not deferred by db_act
timeout = self._check_lifetimes()
timeout = min(self._check_forget_ip(), timeout)
try:
if self.args.shr:
timeout = min(self._check_shares(), timeout)

@@ -617,6 +618,43 @@
for v in vols:
volage[v] = now

def _check_forget_ip(self) -> float:
now = time.time()
timeout = now + 9001
for vp, vol in sorted(self.vfs.all_vols.items()):
maxage = vol.flags["forget_ip"]
if not maxage:
continue

cur = self.cur.get(vol.realpath)
if not cur:
continue

cutoff = now - maxage * 60

for _ in range(2):
q = "select ip, at from up where ip > '' order by +at limit 1"
hits = cur.execute(q).fetchall()
if not hits:
break

remains = hits[0][1] - cutoff
if remains > 0:
timeout = min(timeout, now + remains)
break

q = "update up set ip = '' where ip > '' and at <= %d"
cur.execute(q % (cutoff,))
zi = cur.rowcount
cur.connection.commit()

t = "forget-ip(%d) removed %d IPs from db [/%s]"
self.log(t % (maxage, zi, vol.vpath))

timeout = min(timeout, now + 900)

return timeout
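The two-query pattern in `_check_forget_ip` (probe the oldest row that still has an IP, then blank out everything past the cutoff) can be exercised against a throwaway sqlite database; this is a sketch, and the `up` table here is a stripped-down stand-in for the real schema:

```python
import sqlite3

def scrub_ips(db, cutoff):
    # probe the oldest row that still has an IP; if it is newer than
    # the cutoff there is nothing to scrub yet
    q = "select ip, at from up where ip > '' order by +at limit 1"
    hits = db.execute(q).fetchall()
    if not hits or hits[0][1] > cutoff:
        return 0
    # blank out every IP that is old enough
    cur = db.execute("update up set ip = '' where ip > '' and at <= ?", (cutoff,))
    db.commit()
    return cur.rowcount

db = sqlite3.connect(":memory:")
db.execute("create table up (ip text, at int)")
db.executemany("insert into up values (?,?)", [("1.2.3.4", 100), ("5.6.7.8", 900)])
print(scrub_ips(db, 500))  # only the at=100 row is old enough
```

keeping the rows while emptying the `ip` column preserves the upload index itself, which is why the scrub is an `update` rather than a `delete`.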

def _check_lifetimes(self) -> float:
now = time.time()
timeout = now + 9001

@@ -1081,7 +1119,7 @@ class Up2k(object):
ft = "\033[0;32m{}{:.0}"
ff = "\033[0;35m{}{:.0}"
fv = "\033[0;36m{}:\033[90m{}"
zs = "html_head mv_re_r mv_re_t rm_re_r rm_re_t srch_re_dots srch_re_nodot"
zs = "ext_th_d html_head mv_re_r mv_re_t rm_re_r rm_re_t srch_re_dots srch_re_nodot zipmax zipmaxn_v zipmaxs_v"
fx = set(zs.split())
fd = vf_bmap()
fd.update(vf_cmap())
@@ -2916,9 +2954,14 @@ class Up2k(object):
self.salt, cj["size"], cj["lmod"], cj["prel"], cj["name"]
)

if vfs.flags.get("up_ts", "") == "fu" or not cj["lmod"]:
zi = cj["lmod"]
bad_mt = zi <= 0 or zi > 0xAAAAAAAA
if bad_mt or vfs.flags.get("up_ts", "") == "fu":
# force upload time rather than last-modified
cj["lmod"] = int(time.time())
if zi and bad_mt:
t = "ignoring impossible last-modified time from client: %s"
self.log(t % (zi,), 6)
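the `bad_mt` guard above rejects client timestamps that are non-positive or past `0xAAAAAAAA` (roughly year 2060) and substitutes the upload time instead; a minimal sketch of that guard (the function name is illustrative, not from the codebase):

```python
import time

def sane_lmod(lmod, force_upload_time=False):
    # bad_mt: zero/negative, or implausibly far in the future (> 0xAAAAAAAA)
    bad_mt = lmod <= 0 or lmod > 0xAAAAAAAA
    if bad_mt or force_upload_time:
        return int(time.time())  # force upload time rather than last-modified
    return lmod

# the android-chrome bug uploaded files dated year 1601 (negative unix time)
print(sane_lmod(-11644473600))  # replaced with the current time
```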

alts: list[tuple[int, int, dict[str, Any], "sqlite3.Cursor", str, str]] = []
for ptop, cur in vols:

@@ -3335,7 +3378,17 @@
return fname

fp = djoin(fdir, fname)
if job.get("replace") and bos.path.exists(fp):

ow = job.get("replace") and bos.path.exists(fp)
if ow and "mt" in str(job["replace"]).lower():
mts = bos.stat(fp).st_mtime
mtc = job["lmod"]
if mtc < mts:
t = "will not overwrite; server %d sec newer than client; %d > %d %r"
self.log(t % (mts - mtc, mts, mtc, fp))
ow = False

if ow:
self.log("replacing existing file at %r" % (fp,))
cur = None
ptop = job["ptop"]
@@ -3373,6 +3426,7 @@
rm: bool = False,
lmod: float = 0,
fsrc: Optional[str] = None,
is_mv: bool = False,
) -> None:
if src == dst or (fsrc and fsrc == dst):
t = "symlinking a file to itself?? orig(%s) fsrc(%s) link(%s)"

@@ -3389,7 +3443,7 @@

linked = False
try:
if not flags.get("dedup"):
if not is_mv and not flags.get("dedup"):
raise Exception("dedup is disabled in config")

lsrc = src

@@ -3789,7 +3843,7 @@
db_ip = ""
else:
# plugins may expect this to look like an actual IP
db_ip = "1.1.1.1" if self.args.no_db_ip else ip
db_ip = "1.1.1.1" if "no_db_ip" in vflags else ip

sql = "insert into up values (?,?,?,?,?,?,?)"
v = (dwark, int(ts), sz, rd, fn, db_ip, int(at or 0))
@@ -4548,7 +4602,7 @@
dlink = bos.readlink(sabs)
dlink = os.path.join(os.path.dirname(sabs), dlink)
dlink = bos.path.abspath(dlink)
self._symlink(dlink, dabs, dvn.flags, lmod=ftime)
self._symlink(dlink, dabs, dvn.flags, lmod=ftime, is_mv=True)
wunlink(self.log, sabs, svn.flags)
else:
atomic_move(self.log, sabs, dabs, svn.flags)

@@ -4767,7 +4821,7 @@
flags = self.flags.get(ptop) or {}
atomic_move(self.log, sabs, slabs, flags)
bos.utime(slabs, (int(time.time()), int(mt)), False)
self._symlink(slabs, sabs, flags, False)
self._symlink(slabs, sabs, flags, False, is_mv=True)
full[slabs] = (ptop, rem)
sabs = slabs

@@ -4826,7 +4880,9 @@
# (for example a volume with symlinked dupes but no --dedup);
# fsrc=sabs is then a source that currently resolves to copy

self._symlink(dabs, alink, flags, False, lmod=lmod or 0, fsrc=sabs)
self._symlink(
dabs, alink, flags, False, lmod=lmod or 0, fsrc=sabs, is_mv=True
)

return len(full) + len(links)

@@ -594,6 +594,38 @@ except Exception as ex:
print("using fallback base64 codec due to %r" % (ex,))


class NotUTF8(Exception):
pass


def read_utf8(log: Optional["NamedLogger"], ap: Union[str, bytes], strict: bool) -> str:
with open(ap, "rb") as f:
buf = f.read()

try:
return buf.decode("utf-8", "strict")
except UnicodeDecodeError as ex:
eo = ex.start
eb = buf[eo : eo + 1]

if not strict:
t = "WARNING: The file [%s] is not using the UTF-8 character encoding; some characters in the file will be skipped/ignored. The first unreadable character was byte %r at offset %d. Please convert this file to UTF-8 by opening the file in your text-editor and saving it as UTF-8."
t = t % (ap, eb, eo)
if log:
log(t, 3)
else:
print(t)
return buf.decode("utf-8", "replace")

t = "ERROR: The file [%s] is not using the UTF-8 character encoding, and cannot be loaded. The first unreadable character was byte %r at offset %d. Please convert this file to UTF-8 by opening the file in your text-editor and saving it as UTF-8."
t = t % (ap, eb, eo)
if log:
log(t, 3)
else:
print(t)
raise NotUTF8(t)
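in miniature, the decode behavior of `read_utf8`: strict mode refuses the file, lenient mode substitutes U+FFFD for the unreadable bytes, and `UnicodeDecodeError.start` supplies the "first unreadable byte" offset used in the messages above:

```python
def decode_utf8(buf, strict):
    # mirrors read_utf8's two decode passes, on an in-memory buffer
    try:
        return buf.decode("utf-8", "strict")
    except UnicodeDecodeError as ex:
        # ex.start is the offset of the first unreadable byte,
        # which is what the warning text above reports
        if strict:
            raise
        return buf.decode("utf-8", "replace")

print(decode_utf8(b"ok\xff!", False))  # 'ok\ufffd!'
```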


class Daemon(threading.Thread):
def __init__(
self,

@@ -1695,7 +1695,7 @@ html.y #tree.nowrap .ntree a+a:hover {
line-height: 0;
}
.dumb_loader_thing {
display: inline-block;
display: block;
margin: 1em .3em 1em 1em;
padding: 0 1.2em 0 0;
font-size: 4em;
@@ -1703,9 +1703,16 @@
min-height: 1em;
opacity: 0;
animation: 1s linear .15s infinite forwards spin, .2s ease .15s 1 forwards fadein;
position: absolute;
position: fixed;
top: .3em;
z-index: 9;
}
#dlt_t {
left: 0;
}
#dlt_f {
right: .5em;
}
#files .cfg {
display: none;
font-size: 2em;

@@ -151,7 +151,8 @@ var Ls = {

"ul_par": "parallel uploads:",
"ut_rand": "randomize filenames",
"ut_u2ts": "copy the last-modified timestamp$Nfrom your filesystem to the server",
"ut_u2ts": "copy the last-modified timestamp$Nfrom your filesystem to the server\">📅",
"ut_ow": "overwrite existing files on the server?$N🛡️: never (will generate a new filename instead)$N🕒: overwrite if server-file is older than yours$N♻️: always overwrite if the files are different",
"ut_mt": "continue hashing other files while uploading$N$Nmaybe disable if your CPU or HDD is a bottleneck",
"ut_ask": 'ask for confirmation before upload starts">💭',
"ut_pot": "improve upload speed on slow devices$Nby making the UI less complex",

@@ -538,8 +539,10 @@ var Ls = {
"u_ewrite": 'you do not have write-access to this folder',
"u_eread": 'you do not have read-access to this folder',
"u_enoi": 'file-search is not enabled in server config',
"u_enoow": "overwrite will not work here; need Delete-permission",
"u_badf": 'These {0} files (of {1} total) were skipped, possibly due to filesystem permissions:\n\n',
"u_blankf": 'These {0} files (of {1} total) are blank / empty; upload them anyways?\n\n',
"u_applef": 'These {0} files (of {1} total) are probably undesirable;\nPress <code>OK/Enter</code> to SKIP the following files,\nPress <code>Cancel/ESC</code> to NOT exclude, and UPLOAD those as well:\n\n',
"u_just1": '\nMaybe it works better if you select just one file',
"u_ff_many": "if you're using <b>Linux / MacOS / Android,</b> then this amount of files <a href=\"https://bugzilla.mozilla.org/show_bug.cgi?id=1790500\" target=\"_blank\"><em>may</em> crash Firefox!</a>\nif that happens, please try again (or use Chrome).",
"u_up_life": "This upload will be deleted from the server\n{0} after it completes",
@@ -751,7 +754,8 @@ var Ls = {

"ul_par": "samtidige handl.:",
"ut_rand": "finn opp nye tilfeldige filnavn",
"ut_u2ts": "gi filen på serveren samme$Ntidsstempel som lokalt hos deg",
"ut_u2ts": "gi filen på serveren samme$Ntidsstempel som lokalt hos deg\">📅",
"ut_ow": "overskrive eksisterende filer på serveren?$N🛡️: aldri (finner på et nytt filnavn istedenfor)$N🕒: overskriv hvis serverens fil er eldre$N♻️: alltid, gitt at innholdet er forskjellig",
"ut_mt": "fortsett å befare køen mens opplastning foregår$N$Nskru denne av dersom du har en$Ntreg prosessor eller harddisk",
"ut_ask": 'bekreft filutvalg før opplastning starter">💭',
"ut_pot": "forbedre ytelsen på trege enheter ved å$Nforenkle brukergrensesnittet",

@@ -1138,8 +1142,10 @@ var Ls = {
"u_ewrite": 'du har ikke skrivetilgang i denne mappen',
"u_eread": 'du har ikke lesetilgang i denne mappen',
"u_enoi": 'filsøk er deaktivert i serverkonfigurasjonen',
"u_enoow": "kan ikke overskrive filer her (Delete-rettigheten er nødvendig)",
"u_badf": 'Disse {0} filene (av totalt {1}) kan ikke leses, kanskje pga rettighetsproblemer i filsystemet på datamaskinen din:\n\n',
"u_blankf": 'Disse {0} filene (av totalt {1}) er blanke / uten innhold; ønsker du å laste dem opp uansett?\n\n',
"u_applef": 'Disse {0} filene (av totalt {1}) er antagelig uønskede;\nTrykk <code>OK/Enter</code> for å HOPPE OVER disse filene,\nTrykk <code>Avbryt/ESC</code> for å LASTE OPP disse filene også:\n\n',
"u_just1": '\nFunker kanskje bedre hvis du bare tar én fil om gangen',
"u_ff_many": 'Hvis du bruker <b>Linux / MacOS / Android,</b> så kan dette antallet filer<br /><a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1790500" target="_blank"><em>kanskje</em> krasje Firefox!</a> Hvis det skjer, så prøv igjen (eller bruk Chrome).',
"u_up_life": "Filene slettes fra serveren {0}\netter at opplastningen er fullført",
@@ -1351,7 +1357,8 @@ var Ls = {

"ul_par": "并行上传:",
"ut_rand": "随机化文件名",
"ut_u2ts": "将最后修改的时间戳$N从你的文件系统复制到服务器",
"ut_u2ts": "将最后修改的时间戳$N从你的文件系统复制到服务器\">📅",
"ut_ow": "覆盖服务器上的现有文件?$N🛡️: 从不(会生成一个新文件名)$N🕒: 服务器文件较旧则覆盖$N♻️: 总是覆盖,如果文件内容不同", //m
"ut_mt": "在上传时继续哈希其他文件$N$N如果你的 CPU 或硬盘是瓶颈,可能需要禁用",
"ut_ask": '上传开始前询问确认">💭',
"ut_pot": "通过简化 UI 来$N提高慢设备上的上传速度",

@@ -1738,8 +1745,10 @@ var Ls = {
"u_ewrite": '你对这个文件夹没有写入权限',
"u_eread": '你对这个文件夹没有读取权限',
"u_enoi": '文件搜索在服务器配置中未启用',
"u_enoow": "无法覆盖此处的文件;需要删除权限", //m
"u_badf": '这些 {0} 个文件(共 {1} 个)被跳过,可能是由于文件系统权限:\n\n',
"u_blankf": '这些 {0} 个文件(共 {1} 个)是空白的;是否仍然上传?\n\n',
"u_applef": "这些 {0} 个文件(共 {1} 个)可能是不需要的;\n按 <code>确定/Enter</code> 跳过以下文件,\n按 <code>取消/ESC</code> 取消排除,并上传这些文件:\n\n", //m
"u_just1": '\n也许如果你只选择一个文件会更好',
"u_ff_many": "如果你使用的是 <b>Linux / MacOS / Android,</b> 那么这个文件数量 <a href=\"https://bugzilla.mozilla.org/show_bug.cgi?id=1790500\" target=\"_blank\"><em>可能</em> 崩溃 Firefox!</a>\n如果发生这种情况,请再试一次(或使用 Chrome)。",
"u_up_life": "此上传将在 {0} 后从服务器删除",
@@ -1918,8 +1927,8 @@ ebi('op_up2k').innerHTML = (
' <label for="u2rand" tt="' + L.ut_rand + '">🎲</label>\n' +
' </td>\n' +
' <td class="c" rowspan="2">\n' +
' <input type="checkbox" id="u2ts" />\n' +
' <label for="u2ts" tt="' + L.ut_u2ts + '">📅</a>\n' +
' <input type="checkbox" id="u2ow" />\n' +
' <label for="u2ow" tt="' + L.ut_ow + '">?</a>\n' +
' </td>\n' +
' <td class="c" data-perm="read" data-dep="idx" rowspan="2">\n' +
' <input type="checkbox" id="fsearch" />\n' +

@@ -2037,6 +2046,7 @@ ebi('op_cfg').innerHTML = (
' <h3>' + L.cl_uopts + '</h3>\n' +
' <div>\n' +
' <a id="ask_up" class="tgl btn" href="#" tt="' + L.ut_ask + '</a>\n' +
' <a id="u2ts" class="tgl btn" href="#" tt="' + L.ut_u2ts + '</a>\n' +
' <a id="umod" class="tgl btn" href="#" tt="' + L.cut_umod + '</a>\n' +
' <a id="hashw" class="tgl btn" href="#" tt="' + L.cut_mt + '</a>\n' +
' <a id="u2turbo" class="tgl btn ttb" href="#" tt="' + L.cut_turbo + '</a>\n' +
@@ -2173,6 +2183,11 @@ function goto(dest) {
}


var m = SPINNER.split(','),
SPINNER_CSS = SPINNER.slice(1 + m[0].length);
SPINNER = m[0];


var SBW, SBH; // scrollbar size
(function () {
var el = mknod('div');

@@ -4426,7 +4441,7 @@ function read_dsort(txt) {
}
}
catch (ex) {
toast.warn(10, 'failed to apply default sort order [' + txt + ']:\n' + ex);
toast.warn(10, 'failed to apply default sort order [' + esc('' + txt) + ']:\n' + ex);
dsort = [['href', 1, '']];
}
}

@@ -5759,7 +5774,7 @@ var showfile = (function () {

td.innerHTML = '<a href="#" id="t' +
link.id + '" class="doc bri" hl="' +
link.id + '">-txt-</a>';
link.id + '" rel="nofollow">-txt-</a>';

td.getElementsByTagName('a')[0].setAttribute('href', '?doc=' + fn);
}
@@ -7498,7 +7513,7 @@ var treectl = (function () {
xhr.open('GET', addq(dst, 'tree=' + top + (r.dots ? '&dots' : '') + k), true);
xhr.onload = xhr.onerror = r.recvtree;
xhr.send();
enspin('#tree');
enspin('t');
}

r.recvtree = function () {

@@ -7546,7 +7561,7 @@
}
}
}
despin('#tree');
qsr('#dlt_t');

try {
QS('#treeul>li>a+a').textContent = '[root]';

@@ -7706,8 +7721,8 @@
r.sb_msg = false;
r.nextdir = xhr.top;
clearTimeout(mpl.t_eplay);
enspin('#tree');
enspin(thegrid.en ? '#gfiles' : '#files');
enspin('t');
enspin('f');
window.removeEventListener('scroll', r.tscroll);
}

@@ -7802,9 +7817,8 @@
}

r.gentab(this.top, res);
despin('#tree');
despin('#files');
despin('#gfiles');
qsr('#dlt_t');
qsr('#dlt_f');

var lg0 = res.logues ? res.logues[0] || "" : "",
lg1 = res.logues ? res.logues[1] || "" : "",

@@ -7922,7 +7936,7 @@

if (tn.lead == '-')
tn.lead = '<a href="?doc=' + bhref + '" id="t' + id +
'" class="doc' + (lang ? ' bri' : '') +
'" rel="nofollow" class="doc' + (lang ? ' bri' : '') +
'" hl="' + id + '" name="' + hname + '">-txt-</a>';

var cl = /\.PARTIAL$/.exec(fname) ? ' class="fade"' : '',
@@ -8193,27 +8207,15 @@
})();


var m = SPINNER.split(','),
SPINNER_CSS = m.length < 2 ? '' : SPINNER.slice(m[0].length + 1);
SPINNER = m[0];


function enspin(sel) {
despin(sel);
var d = mknod('div');
function enspin(i) {
i = 'dlt_' + i;
if (ebi(i))
return;
var d = mknod('div', i, SPINNER);
d.className = 'dumb_loader_thing';
d.innerHTML = SPINNER;
if (SPINNER_CSS)
d.style.cssText = SPINNER_CSS;
var tgt = QS(sel);
tgt.insertBefore(d, tgt.childNodes[0]);
}


function despin(sel) {
var o = QSA(sel + '>.dumb_loader_thing');
for (var a = o.length - 1; a >= 0; a--)
o[a].parentNode.removeChild(o[a]);
document.body.appendChild(d);
}


@@ -8380,7 +8382,7 @@ function mk_files_header(taglist) {
var tag = taglist[a],
c1 = tag.slice(0, 1).toUpperCase();

tag = c1 + tag.slice(1);
tag = esc(c1 + tag.slice(1));
if (c1 == '.')
tag = '<th name="tags/' + tag + '" sort="int"><span>' + tag.slice(1);
else
@@ -8697,7 +8699,17 @@ var mukey = (function () {

var light, theme, themen;
var settheme = (function () {
var ax = 'abcdefghijklmnopqrstuvwx';
var r = {},
ax = 'abcdefghijklmnopqrstuvwx',
tre = '🌲',
chldr = !SPINNER_CSS && SPINNER == tre;

r.ldr = {
'4':['🌴'],
'5':['🌭', 'padding:0 0 .7em .7em;filter:saturate(3)'],
'6':['📞', 'padding:0;filter:brightness(2) sepia(1) saturate(3) hue-rotate(60deg)'],
'7':['▲', 'font-size:3em'], //cp437
};

theme = sread('cpp_thm') || 'a';
if (!/^[a-x][yz]/.exec(theme))

@@ -8727,13 +8739,19 @@ var settheme = (function () {
ebi('themes').innerHTML = html.join('');
var btns = QSA('#themes a');
for (var a = 0; a < themes; a++)
btns[a].onclick = settheme;
btns[a].onclick = r.go;

if (chldr) {
var x = r.ldr[itheme] || [tre];
SPINNER = x[0];
SPINNER_CSS = x[1];
}

bcfg_set('light', light);
tt.att(ebi('themes'));
}

function settheme(e) {
r.go = function (e) {
var i = e;
try { ev(e); i = e.target.textContent; } catch (ex) { }
light = i % 2 == 1;

@@ -8746,7 +8764,7 @@ var settheme = (function () {
}

freshen();
return settheme;
return r;
})();

@@ -1078,26 +1078,28 @@ action_stack = (function () {
var p1 = from.length,
p2 = to.length;

while (p1-- > 0 && p2-- > 0)
while (p1 --> 0 && p2 --> 0)
if (from[p1] != to[p2])
break;

if (car > ++p1) {
if (car > ++p1)
car = p1;
}

var txt = from.substring(car, p1)
return {
car: car,
cdr: ++p2,
cdr: p2 + (car && 1),
txt: txt,
cpos: cpos
};
}

var undiff = function (from, change) {
var t1 = from.substring(0, change.car),
t2 = from.substring(change.cdr);

return {
txt: from.substring(0, change.car) + change.txt + from.substring(change.cdr),
txt: t1 + change.txt + t2,
cpos: change.cpos
};
}
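the undo-stack stores only the changed middle of the document: the diff trims the prefix (`car`) and suffix shared by the two strings, and `undiff` splices the stored text back in. A rough Python sketch of the same idea (not a line-for-line port of the JS above):

```python
def getdiff(src, dst):
    # car = length of the common prefix
    car = 0
    while car < min(len(src), len(dst)) and src[car] == dst[car]:
        car += 1
    # trim the common suffix, but never past the prefix
    p1, p2 = len(src), len(dst)
    while p1 > car and p2 > car and src[p1 - 1] == dst[p2 - 1]:
        p1 -= 1
        p2 -= 1
    # txt restores src when spliced into dst at [car:cdr]
    return {"car": car, "cdr": p2, "txt": src[car:p1]}

def undiff(cur, ch):
    return cur[: ch["car"]] + ch["txt"] + cur[ch["cdr"] :]

src, dst = "hello world", "hello brave world"
ch = getdiff(src, dst)
print(undiff(dst, ch))  # 'hello world'
```

storing only `{car, cdr, txt}` per edit keeps the undo stack proportional to the size of each change rather than the size of the document.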

@@ -885,6 +885,25 @@ function up2k_init(subtle) {
bcfg_bind(uc, 'upnag', 'upnag', false, set_upnag);
bcfg_bind(uc, 'upsfx', 'upsfx', false, set_upsfx);

uc.ow = parseInt(sread('u2ow', ['0', '1', '2']) || u2ow);
uc.owt = ['🛡️', '🕒', '♻️'];
function set_ow() {
QS('label[for="u2ow"]').innerHTML = uc.owt[uc.ow];
ebi('u2ow').checked = true; //cosmetic
}
ebi('u2ow').onclick = function (e) {
ev(e);
if (++uc.ow > 2)
uc.ow = 0;
swrite('u2ow', uc.ow);
set_ow();
if (uc.ow && !has(perms, 'delete'))
toast.warn(10, L.u_enoow, 'noow');
else if (toast.tag == 'noow')
toast.hide();
};
set_ow();

var st = {
"files": [],
"nfile": {
@@ -1300,7 +1319,7 @@ function up2k_init(subtle) {
if (bad_files.length) {
var msg = L.u_badf.format(bad_files.length, ntot);
for (var a = 0, aa = Math.min(20, bad_files.length); a < aa; a++)
msg += '-- ' + bad_files[a][1] + '\n';
msg += '-- ' + esc(bad_files[a][1]) + '\n';

msg += L.u_just1;
return modal.alert(msg, function () {

@@ -1312,7 +1331,7 @@ function up2k_init(subtle) {
if (nil_files.length) {
var msg = L.u_blankf.format(nil_files.length, ntot);
for (var a = 0, aa = Math.min(20, nil_files.length); a < aa; a++)
msg += '-- ' + nil_files[a][1] + '\n';
msg += '-- ' + esc(nil_files[a][1]) + '\n';

msg += L.u_just1;
return modal.confirm(msg, function () {

@@ -1324,10 +1343,68 @@ function up2k_init(subtle) {
});
}

var fps = new Set(), pdp = '';
for (var a = 0; a < good_files.length; a++) {
var fp = good_files[a][1],
dp = vsplit(fp)[0];
fps.add(fp);
if (pdp != dp) {
pdp = dp;
dp = dp.slice(0, -1);
while (dp) {
fps.add(dp);
dp = vsplit(dp)[0].slice(0, -1);
}
}
}

var junk = [], rmi = [];
for (var a = 0; a < good_files.length; a++) {
var fn = good_files[a][1];
if (fn.indexOf("/.") < 0 && fn.indexOf("/__MACOS") < 0)
continue;

if (/\/__MACOS|\/\.(DS_Store|AppleDouble|LSOverride|DocumentRevisions-|fseventsd|Spotlight-V[0-9]|TemporaryItems|Trashes|VolumeIcon\.icns|com\.apple\.timemachine\.donotpresent|AppleDB|AppleDesktop|apdisk)/.exec(fn)) {
junk.push(good_files[a]);
rmi.push(a);
continue;
}

if (fn.indexOf("/._") + 1 &&
fps.has(fn.replace("/._", "/")) &&
fn.split("/").pop().startsWith("._") &&
!has(rmi, a)
) {
junk.push(good_files[a]);
rmi.push(a);
}
}

if (!junk.length)
return gotallfiles2(good_files);

junk.sort();
rmi.sort(function (a, b) { return a - b; });

var msg = L.u_applef.format(junk.length, good_files.length);
for (var a = 0, aa = Math.min(1000, junk.length); a < aa; a++)
msg += '-- ' + esc(junk[a][1]) + '\n';

return modal.confirm(msg, function () {
for (var a = rmi.length - 1; a >= 0; a--)
good_files.splice(rmi[a], 1);

start_actx();
gotallfiles2(good_files);
}, function () {
start_actx();
gotallfiles2(good_files);
});
}
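the junk screen above drops well-known macOS metadata files, plus `._*` AppleDouble twins of files that are also present in the upload set; a condensed Python version (the name pattern is abbreviated from the full regex above):

```python
import re

# abbreviated version of the macOS-junk regex in the JS above
JUNK = re.compile(r"/__MACOS|/\.(DS_Store|AppleDouble|Spotlight-V[0-9]|Trashes)")

def is_junk(path, all_paths):
    if JUNK.search(path):
        return True
    # "._x" AppleDouble sidecar whose primary file "x" is also being uploaded
    name = path.rsplit("/", 1)[-1]
    return name.startswith("._") and path.replace("/._", "/") in all_paths

paths = {"a/pic.jpg", "a/._pic.jpg", "a/.DS_Store", "a/doc.txt"}
print(sorted(p for p in paths if is_junk(p, paths)))  # ['a/.DS_Store', 'a/._pic.jpg']
```

note that an orphaned `._*` file (one whose primary file is not in the upload) is kept, matching the `fps.has(...)` check above.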

function gotallfiles2(good_files) {
good_files.sort(function (a, b) {
a = a[1];
b = b[1];
return a < b ? -1 : a > b ? 1 : 0;
return a[1] < b[1] ? -1 : 1;
});

var msg = [];

@@ -1380,9 +1457,7 @@ function up2k_init(subtle) {

if (!uc.az)
good_files.sort(function (a, b) {
a = a[0].size;
b = b[0].size;
return a < b ? -1 : a > b ? 1 : 0;
return a[0].size - b[0].size;
});

for (var a = 0; a < good_files.length; a++) {

@@ -1390,7 +1465,7 @@ function up2k_init(subtle) {
name = good_files[a][1],
fdir = evpath,
now = Date.now(),
lmod = uc.u2ts ? (fobj.lastModified || now) : 0,
lmod = (uc.u2ts && fobj.lastModified) || 0,
ofs = name.lastIndexOf('/') + 1;

if (ofs) {

@@ -2054,8 +2129,8 @@ function up2k_init(subtle) {
try { orz(e); } catch (ex) { vis_exh(ex + '', 'up2k.js', '', '', ex); }
};
reader.onerror = function () {
var err = reader.error + '';
var handled = false;
var err = esc('' + reader.error),
handled = false;

if (err.indexOf('NotReadableError') !== -1 || // win10-chrome defender
err.indexOf('NotFoundError') !== -1 // macos-firefox permissions
@@ -2279,7 +2354,7 @@
xhr.onerror = xhr.ontimeout = function () {
console.log('head onerror, retrying', t.name, t);
if (!toast.visible)
toast.warn(9.98, L.u_enethd + "\n\nfile: " + t.name, t);
toast.warn(9.98, L.u_enethd + "\n\nfile: " + esc(t.name), t);

apop(st.busy.head, t);
st.todo.head.unshift(t);

@@ -2354,7 +2429,7 @@
return console.log('zombie handshake onerror', t.name, t);

if (!toast.visible)
toast.warn(9.98, L.u_eneths + "\n\nfile: " + t.name, t);
toast.warn(9.98, L.u_eneths + "\n\nfile: " + esc(t.name), t);

console.log('handshake onerror, retrying', t.name, t);
apop(st.busy.handshake, t);

@@ -2459,7 +2534,7 @@
var idx = t.hash.indexOf(missing[a]);
if (idx < 0)
return modal.alert('wtf negative index for hash "{0}" in task:\n{1}'.format(
missing[a], JSON.stringify(t)));
missing[a], esc(JSON.stringify(t))));

t.postlist.push(idx);
cbd[idx] = 0;

@@ -2613,7 +2688,7 @@
return toast.err(0, L.u_ehsdf + "\n\n" + rsp.replace(/.*; /, ''));

err = t.t_uploading ? L.u_ehsfin : t.srch ? L.u_ehssrch : L.u_ehsinit;
xhrchk(xhr, err + "\n\nfile: " + t.name + "\n\nerror ", "404, target folder not found", "warn", t);
xhrchk(xhr, err + "\n\nfile: " + esc(t.name) + "\n\nerror ", "404, target folder not found", "warn", t);
}
}
xhr.onload = function (e) {
@@ -2634,6 +2709,13 @@
else if (t.umod)
req.umod = true;

if (!t.srch) {
if (uc.ow == 1)
req.replace = 'mt';
if (uc.ow == 2)
req.replace = true;
}

xhr.open('POST', t.purl, true);
xhr.responseType = 'text';
xhr.timeout = 42000 + (t.srch || t.t_uploaded ? 0 :

@@ -2763,7 +2845,7 @@
toast.inf(10, L.u_cbusy);
}
else {
xhrchk(xhr, L.u_cuerr2.format(snpart, Math.ceil(t.size / chunksize), t.name), "404, target folder not found (???)", "warn", t);
xhrchk(xhr, L.u_cuerr2.format(snpart, Math.ceil(t.size / chunksize), esc(t.name)), "404, target folder not found (???)", "warn", t);
chill(t);
}
orz2(xhr);

@@ -2807,7 +2889,7 @@
xhr.bsent = 0;

if (!toast.visible)
toast.warn(9.98, L.u_cuerr.format(snpart, Math.ceil(t.size / chunksize), t.name), t);
toast.warn(9.98, L.u_cuerr.format(snpart, Math.ceil(t.size / chunksize), esc(t.name)), t);

t.nojoin = t.nojoin || t.postlist.length; // maybe rproxy postsize limit
console.log('chunkpit onerror,', t.name, t);

@@ -64,7 +64,7 @@ onmessage = (d) => {
};
reader.onerror = function () {
busy = false;
var err = reader.error + '';
var err = esc('' + reader.error);

if (err.indexOf('NotReadableError') !== -1 || // win10-chrome defender
err.indexOf('NotFoundError') !== -1 // macos-firefox permissions

@@ -1,3 +1,163 @@
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2025-0228-1846 `v1.16.16` lemon melon cookie

<img src="https://github.com/9001/copyparty/raw/hovudstraum/docs/logo.svg" width="250" align="right"/>

webdev is [like a lemon](https://youtu.be/HPURbfKb7to) sometimes

* read-only demo server at https://a.ocv.me/pub/demo/
* [docker image](https://github.com/9001/copyparty/tree/hovudstraum/scripts/docker) ╱ [similar software](https://github.com/9001/copyparty/blob/hovudstraum/docs/versus.md) ╱ [client testbed](https://cd.ocv.me/b/)

there is a [discord server](https://discord.gg/25J8CdTT6G) with an `@everyone` in case of future important updates, such as [vulnerabilities](https://github.com/9001/copyparty/security) (most recently 2025-02-25)

## recent important news

* [v1.16.15 (2025-02-25)](https://github.com/9001/copyparty/releases/tag/v1.16.15) fixed low-severity xss when uploading maliciously-named files
* [v1.15.0 (2024-09-08)](https://github.com/9001/copyparty/releases/tag/v1.15.0) changed upload deduplication to be default-disabled
* [v1.14.3 (2024-08-30)](https://github.com/9001/copyparty/releases/tag/v1.14.3) fixed a bug that was introduced in v1.13.8 (2024-08-13); this bug could lead to **data loss** -- see the v1.14.3 release-notes for details

## 🧪 new features

* #142 workaround android-chrome timestamp bug 5e12abbb
  * all files were uploaded with last-modified year 1601 in specific recent versions of chrome
  * https://issues.chromium.org/issues/393149335 has the actual fix; will be out soon

## 🩹 bugfixes

* add helptext for volflags `dk`, `dks`, `dky` 65a7706f
  * fix false-positive warning when disabling a global option per-volume by unsetting the volflag

## 🔧 other changes

* #140 nixos: @daimond113 fixed a warning in the nixpkg (thx!) e0fe2b97


▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
|
||||
# 2025-0225-0017 `v1.16.15` fix low-severity vuln
|
||||
|
||||
<img src="https://github.com/9001/copyparty/raw/hovudstraum/docs/logo.svg" width="250" align="right"/>
|
||||
|
||||
* read-only demo server at https://a.ocv.me/pub/demo/
|
||||
* [docker image](https://github.com/9001/copyparty/tree/hovudstraum/scripts/docker) ╱ [similar software](https://github.com/9001/copyparty/blob/hovudstraum/docs/versus.md) ╱ [client testbed](https://cd.ocv.me/b/)
|
||||
|
||||
## ⚠️ this fixes a minor vulnerability; CVE-score `3.6`/`10`
|
||||
|
||||
[GHSA-m2jw-cj8v-937r](https://github.com/9001/copyparty/security/advisories/GHSA-m2jw-cj8v-937r) aka [CVE-2025-27145](https://www.cve.org/CVERecord?id=CVE-2025-27145) could let an attacker run arbitrary javascript by tricking an authenticated user into uploading files with malicious filenames
|
||||
|
||||
* ...but it required some clever social engineering, and is **not likely** to be a cause for concern... ah, better safe than sorry
|
||||
|
||||
there is a [discord server](https://discord.gg/25J8CdTT6G) with an `@everyone` in case of future important updates, such as [vulnerabilities](https://github.com/9001/copyparty/security) (most recently 2025-02-25)

there is a [discord server](https://discord.gg/25J8CdTT6G) with an `@everyone` in case of future important updates, such as [vulnerabilities](https://github.com/9001/copyparty/security) (most recently 2025-02-25)

## recent important news

* [v1.15.0 (2024-09-08)](https://github.com/9001/copyparty/releases/tag/v1.15.0) changed upload deduplication to be default-disabled
* [v1.14.3 (2024-08-30)](https://github.com/9001/copyparty/releases/tag/v1.14.3) fixed a bug that was introduced in v1.13.8 (2024-08-13); this bug could lead to **data loss** -- see the v1.14.3 release-notes for details

## 🧪 new features

* nothing this time

## 🩹 bugfixes

* fix [GHSA-m2jw-cj8v-937r](https://github.com/9001/copyparty/security/advisories/GHSA-m2jw-cj8v-937r) / [CVE-2025-27145](https://www.cve.org/CVERecord?id=CVE-2025-27145) in 438ea6cc
  * when trying to upload an empty file by dragging it into the browser, the filename would be rendered as HTML, allowing javascript injection if the filename was malicious
  * issue discovered and reported by @JayPatel48 (thx!)
* related issues in error-handling of uploads 499ae1c7 36866f1d
  * these all had the same consequences as the GHSA above, but a network outage was necessary to trigger them
    * which would probably have the lucky side-effect of blocking the javascript download, nice
* paranoid fixing of probably-not-even-issues 3adbb2ff
* fix some markdown / texteditor bugs 407531bc
  * only indicate file-versions for markdown files in listings, since it's tricky to edit non-textfiles otherwise
  * CTRL-C followed by CTRL-V and CTRL-Z in a single-line file would make a character fall off
  * ensure safety of extensions

## 🔧 other changes

* readme:
  * mention support for running the server on risc-v 6d102fc8
  * mention that the [sony psp](https://github.com/user-attachments/assets/9d21f020-1110-4652-abeb-6fc09c533d4f) can browse and upload 598a29a7

----

# 💾 what to download?

| download link | is it good? | description |
| -- | -- | -- |
| **[copyparty-sfx.py](https://github.com/9001/copyparty/releases/latest/download/copyparty-sfx.py)** | ✅ the best 👍 | runs anywhere! only needs python |
| [a docker image](https://github.com/9001/copyparty/blob/hovudstraum/scripts/docker/README.md) | it's ok | good if you prefer docker 🐋 |
| [copyparty.exe](https://github.com/9001/copyparty/releases/latest/download/copyparty.exe) | ⚠️ [acceptable](https://github.com/9001/copyparty#copypartyexe) | for [win8](https://user-images.githubusercontent.com/241032/221445946-1e328e56-8c5b-44a9-8b9f-dee84d942535.png) or later; built-in thumbnailer |
| [u2c.exe](https://github.com/9001/copyparty/releases/download/v1.16.14/u2c.exe) | ⚠️ acceptable | [CLI uploader](https://github.com/9001/copyparty/blob/hovudstraum/bin/u2c.py) as a win7+ exe ([video](https://a.ocv.me/pub/demo/pics-vids/u2cli.webm)) |
| [copyparty.pyz](https://github.com/9001/copyparty/releases/latest/download/copyparty.pyz) | ⚠️ acceptable | similar to the regular sfx, [mostly worse](https://github.com/9001/copyparty#zipapp) |
| [copyparty32.exe](https://github.com/9001/copyparty/releases/latest/download/copyparty32.exe) | ⛔️ [dangerous](https://github.com/9001/copyparty#copypartyexe) | for [win7](https://user-images.githubusercontent.com/241032/221445944-ae85d1f4-d351-4837-b130-82cab57d6cca.png) -- never expose to the internet! |
| [cpp-winpe64.exe](https://github.com/9001/copyparty/releases/download/v1.16.5/copyparty-winpe64.exe) | ⛔️ dangerous | runs on [64bit WinPE](https://user-images.githubusercontent.com/241032/205454984-e6b550df-3c49-486d-9267-1614078dd0dd.png), otherwise useless |

* except for [u2c.exe](https://github.com/9001/copyparty/releases/download/v1.16.14/u2c.exe), all of the options above are mostly equivalent
* the zip and tar.gz files below are just source code
* python packages are available at [PyPI](https://pypi.org/project/copyparty/#files)

▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

# 2025-0219-2309 `v1.16.14` overwrite by upload

## 🧪 new features

* #139 overwrite existing files by uploading over them e9f78ea7
  * default-disabled; a new togglebutton in the upload-UI configures it
  * can optionally compare last-modified-time and only overwrite older files
* [GDPR compliance](https://github.com/9001/copyparty#GDPR-compliance) (maybe/probably) 4be0d426
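the toggle's three behaviors (never / only-if-older / always) reduce to a small mtime comparison; a hypothetical sketch of the decision, not copyparty's actual implementation:

```python
import os

def should_overwrite(mode: str, dst: str, src_mtime: float) -> bool:
    """mode: 'never', 'older' (only replace older server files), 'always'"""
    if not os.path.exists(dst):
        return True  # nothing to overwrite; plain upload
    if mode == "never":
        return False  # server picks a new filename instead
    if mode == "older":
        # overwrite only if the server copy is older than the client's
        return os.path.getmtime(dst) < src_mtime
    return True  # 'always': overwrite if the files differ
```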
## 🩹 bugfixes

* some cosmetic volflag stuff, all harmless b190e676
  * disabling a volflag `foo` with `-foo` shows a warning that `-foo` was not a recognized volflag, but it still does the right thing
  * some volflags give the *"unrecognized volflag, will ignore"* warning, but not to worry, they still work just fine:
    * `xz` to allow serverside xz-compression of uploaded files
* the option to customize the loader-spinner would glitch out during the initial page load 7d7d5d6c

## 🔧 other changes

* [randpic.py](https://github.com/9001/copyparty/blob/hovudstraum/bin/handlers/randpic.py), new 404-handler example, returns a random pic from a folder 60d5f271
* readme: [howto permanent cloudflare tunnel](https://github.com/9001/copyparty#permanent-cloudflare-tunnel) for easy hosting from home 2beb2acc
* [synology-dsm](https://github.com/9001/copyparty/blob/hovudstraum/docs/synology-dsm.md): mention how to update the docker image 56ce5919
* spinner improvements 6858cb06

▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

# 2025-0213-2057 `v1.16.13` configure with confidence

## 🧪 new features

* make the config-parser more helpful regarding volflags a255db70
  * if an unrecognized volflag is specified, print a warning instead of silently ignoring it
  * understand volflag-names with Uppercase and/or kebab-case (dashes), and not just snake_case (underscores)
  * improve `--help-flags` to mention and explain all available flags
* #136 WebDAV: support COPY 62ee7f69
  * also support overwrite of existing target files (default-enabled according to the spec)
  * the user must have the delete-permission to actually replace files
* option to specify custom icons for certain file extensions 7e4702cf
  * see `--ext-th` mentioned briefly in the [thumbnails section](https://github.com/9001/copyparty/#thumbnails)
* option to replace the loading-spinner animation 685f0869
  * including how to [make it exceptionally normal-looking](https://github.com/9001/copyparty/tree/hovudstraum/docs/rice#boring-loader-spinner)
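accepting Uppercase and kebab-case on top of snake_case boils down to normalizing the name before lookup; a hypothetical sketch of the idea (the flag subset and function names are illustrative, not copyparty's actual code):

```python
KNOWN_VOLFLAGS = {"no_dupe", "hardlink_only"}  # tiny illustrative subset

def normalize_volflag(name: str) -> str:
    # "No-Dupe", "no-dupe" and "no_dupe" all become "no_dupe"
    return name.lower().replace("-", "_")

def parse_volflag(name: str) -> str:
    k = normalize_volflag(name)
    if k not in KNOWN_VOLFLAGS:
        # warn instead of silently ignoring, as of v1.16.13
        print("WARNING: unrecognized volflag %r, will ignore" % (name,))
    return k
```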
## 🩹 bugfixes

* #136 WebDAV fixes 62ee7f69
  * COPY/MOVE/MKCOL: challenge clients to provide the password as necessary
    * most clients only need this in PROPFIND, but KDE-Dolphin is more picky
  * MOVE: support `webdav://` Destination prefix as used by Dolphin, probably others
* #136 WebDAV: improve support for KDE-Dolphin as client 9d769027
  * it masquerades as a graphical browser yet still expects 401, so special-case it with a useragent scan
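handling Dolphin's `webdav://` Destination prefix is a matter of stripping the scheme and host before resolving the target path; a hypothetical sketch (the accepted-scheme list is an assumption, and this is not copyparty's actual code):

```python
from urllib.parse import urlsplit

def dest_to_vpath(destination: str) -> str:
    """map a WebDAV Destination header to a server path;
    dolphin sends webdav:// URLs where most clients send http(s)://"""
    for scheme in ("http", "https", "webdav", "webdavs"):
        if destination.startswith(scheme + "://"):
            return urlsplit(destination).path
    return destination  # already a bare path
```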
## 🔧 other changes

## 🔧 other changes

* Docker-only: quick hacky fix for the [musl CVE](https://www.openwall.com/lists/musl/2025/02/13/1) until the official fix is out 4d6626b0
  * the docker images will be rebuilt when `musl-1.2.5-r9.apk` is released, in 6~24h or so
  * until then, there is no support for reading korean XML files when running in docker

▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

# 2025-0209-2331 `v1.16.12` RTT