Compare commits


23 Commits

Author SHA1 Message Date
ed a011139894 v1.16.15 2025-02-25 00:17:58 +00:00
ed 36866f1d36 dangit.wav 2025-02-25 00:11:57 +00:00
ed 407531bcb1 fix markdown / text-editor jank
* only indicate file-history for markdown files since
   other files won't load into the editor which makes
   that entirely pointless; do file extension instead

* text-editor: in files containing one single line,
   ^C followed by ^V ^Z would accidentally a letter

and fix unhydrated extensions
2025-02-25 00:03:22 +00:00
ed 3adbb2ff41 https://youtu.be/WyXebd3I3Vo 2025-02-24 23:32:03 +00:00
ed 499ae1c7a1 other minor html-escaping fixes
mostly related to error-handling for uploads, network-loss etc,
nothing worse than the dom-xss just now
2025-02-24 22:42:05 +00:00
ed 438ea6ccb0 fix GHSA-m2jw-cj8v-937r ;
this fixes a DOM-Based XSS when preparing files for upload;
empty files would have their filenames rendered as HTML in
a messagebox, making it possible to trick users into running
arbitrary javascript by giving them maliciously-named files

note that, being a general-purpose webserver, it is still
intentionally possible to upload and execute arbitrary
javascript, just not in this unexpected manner
2025-02-24 21:23:13 +00:00
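The general shape of this fix can be sketched in Python; `safe_msg` and its message template are hypothetical stand-ins for illustration, not copyparty's actual code:

```python
import html

def safe_msg(filename):
    # escape the user-controlled filename before it goes into an HTML
    # messagebox, so a name like <img src=x onerror=...> renders as
    # plain text instead of executing
    return "<p>this file is empty: %s</p>" % html.escape(filename)
```

`html.escape` turns `<`, `>`, `&` (and quotes) into entities, which is enough to stop markup injection inside element content.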
ed 598a29a733 mention sony psp support (thx dwarf) 2025-02-23 21:37:21 +00:00
ed 6d102fc826 mention risc-v support 2025-02-20 04:51:04 +00:00
ed fca07fbb62 update pkgs to 1.16.14 2025-02-19 23:35:05 +00:00
ed cdedcc24b8 v1.16.14 2025-02-19 23:09:14 +00:00
ed 60d5f27140 new example: randpic.py 2025-02-19 22:41:30 +00:00
ed cb413bae49 webdav: a healthy dash of paranoia
there's probably at least one client sending `Overwrite: False`
instead of the spec-correct `Overwrite: F`
2025-02-19 22:07:26 +00:00
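RFC 4918 defines the `Overwrite` header's value as a single `T` or `F`; a lenient parse in the spirit of this commit could look like the following sketch (not copyparty's actual implementation):

```python
def parse_overwrite(value):
    # spec-correct values are "T" and "F", but tolerate the
    # "True"/"False" (and similar) that some sloppy clients send;
    # anything not recognizably false is treated as true
    return value.strip().lower() not in ("f", "false", "0", "no")
```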
ed e9f78ea70c up2k: tristate option for overwriting files; closes #139
adds a third possible value for the `replace` property in handshakes:

* absent or False: never overwrite an existing file on the server,
   and instead generate a new filename to avoid collision

* True: always overwrite existing files on the server

* "mt": only overwrite if client's last-modified is more recent
   (this is the new option)

the new UI button toggles between all three options,
defaulting to never-overwrite
2025-02-19 21:58:56 +00:00
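In handshake terms, the three states map onto the `replace` property like so; a hypothetical client-side sketch where only `replace` and its values come from the commit, and `make_handshake` plus the other field names are illustrative:

```python
def make_handshake(name, size, mode):
    req = {"name": name, "size": size}
    if mode == "always":
        req["replace"] = True   # always overwrite existing files
    elif mode == "if-newer":
        req["replace"] = "mt"   # overwrite only if client mtime is more recent
    # mode == "never": leave `replace` absent -> server autorenames on collision
    return req
```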
ed 6858cb066f spinner: themes + improve positioning
loading-spinner is either `#dlt_t` or `#dlt_f`
(tree or files), appearing top-left or top-right,
regardless of page/tree scroll (position:fixed)
2025-02-19 18:55:33 +00:00
ed 4be0d426f4 option to forget uploader-IP from db after some time
does this mean copyparty is GDPR-compliant now? idklol
2025-02-17 23:47:59 +00:00
ed 7d7d5d6c3c fix custom spinner css on initial page load 2025-02-17 23:26:21 +00:00
ed 0422387e90 readme: changing the loading spinner (#138) 2025-02-16 19:28:57 +00:00
ed 2ed5fd9ac4 readme: diagnosing broken thumbnails (#137) 2025-02-16 19:22:17 +00:00
ed 2beb2acc24 readme: permanent cloudflare tunnel (#137) 2025-02-16 18:59:18 +00:00
ed 56ce591908 synology dsm: add updating 2025-02-16 18:12:35 +00:00
ed b190e676b4 fix cosmetic volflag stuff:
* `xz` would show the "unrecognized volflag" warning,
   but it still applied correctly

* removing volflags with `-foo` would also show the warning
   but it would still get removed correctly

* hide `ext_th_d` in the startup volume-listing
2025-02-14 20:54:13 +00:00
ed 19520b2ec9 remove patch for musl cve (no longer necessary) 2025-02-14 09:15:52 +00:00
ed eeb96ae8b5 update pkgs to 1.16.13 2025-02-13 21:43:32 +00:00
22 changed files with 375 additions and 92 deletions


@@ -94,9 +94,11 @@ turn almost any device into a file server with resumable uploads/downloads using
 * [reverse-proxy](#reverse-proxy) - running copyparty next to other websites
 * [real-ip](#real-ip) - teaching copyparty how to see client IPs
 * [reverse-proxy performance](#reverse-proxy-performance)
+* [permanent cloudflare tunnel](#permanent-cloudflare-tunnel) - if you have a domain and want to get your copyparty online real quick
 * [prometheus](#prometheus) - metrics/stats can be enabled
 * [other extremely specific features](#other-extremely-specific-features) - you'll never find a use for these
 * [custom mimetypes](#custom-mimetypes) - change the association of a file extension
+* [GDPR compliance](#GDPR-compliance) - imagine using copyparty professionally...
 * [feature chickenbits](#feature-chickenbits) - buggy feature? rip it out
 * [packages](#packages) - the party might be closer than you think
 * [arch package](#arch-package) - now [available on aur](https://aur.archlinux.org/packages/copyparty) maintained by [@icxes](https://github.com/icxes)
@@ -159,8 +161,8 @@ enable thumbnails (images/audio/video), media indexing, and audio transcoding by
 * **MacOS:** `port install py-Pillow ffmpeg`
 * **MacOS** (alternative): `brew install pillow ffmpeg`
 * **Windows:** `python -m pip install --user -U Pillow`
-* install python and ffmpeg manually; do not use `winget` or `Microsoft Store` (it breaks $PATH)
+* install [python](https://www.python.org/downloads/windows/) and [ffmpeg](#optional-dependencies) manually; do not use `winget` or `Microsoft Store` (it breaks $PATH)
-* copyparty.exe comes with `Pillow` and only needs `ffmpeg`
+* copyparty.exe comes with `Pillow` and only needs [ffmpeg](#optional-dependencies) for mediatags/videothumbs
 * see [optional dependencies](#optional-dependencies) to enable even more features

 running copyparty without arguments (for example doubleclicking it on Windows) will give everyone read/write access to the current folder; you may want [accounts and volumes](#accounts-and-volumes)
@@ -183,6 +185,8 @@ first download [cloudflared](https://developers.cloudflare.com/cloudflare-one/co
 as the tunnel starts, it will show a URL which you can share to let anyone browse your stash or upload files to you
+but if you have a domain, then you probably want to skip the random autogenerated URL and instead make a [permanent cloudflare tunnel](#permanent-cloudflare-tunnel)
 since people will be connecting through cloudflare, run copyparty with `--xff-hdr cf-connecting-ip` to detect client IPs correctly
@@ -224,6 +228,7 @@ also see [comparison to similar software](./docs/versus.md)
 * ☑ [upnp / zeroconf / mdns / ssdp](#zeroconf)
 * ☑ [event hooks](#event-hooks) / script runner
 * ☑ [reverse-proxy support](https://github.com/9001/copyparty#reverse-proxy)
+* ☑ cross-platform (Windows, Linux, Macos, Android, FreeBSD, arm32/arm64, ppc64le, s390x, risc-v/riscv64)
 * upload
 * ☑ basic: plain multipart, ie6 support
 * ☑ [up2k](#uploading): js, resumable, multithreaded
@@ -401,6 +406,9 @@ upgrade notes
 "frequently" asked questions
+* can I change the 🌲 spinning pine-tree loading animation?
+* [yeah...](https://github.com/9001/copyparty/tree/hovudstraum/docs/rice#boring-loader-spinner) :-(
 * is it possible to block read-access to folders unless you know the exact URL for a particular file inside?
 * yes, using the [`g` permission](#accounts-and-volumes), see the examples there
 * you can also do this with linux filesystem permissions; `chmod 111 music` will make it possible to access files and folders inside the `music` folder but not list the immediate contents -- also works with other software, not just copyparty
@@ -423,6 +431,14 @@ upgrade notes
 * copyparty seems to think I am using http, even though the URL is https
 * your reverse-proxy is not sending the `X-Forwarded-Proto: https` header; this could be because your reverse-proxy itself is confused. Ensure that none of the intermediates (such as cloudflare) are terminating https before the traffic hits your entrypoint
+* thumbnails are broken (you get a colorful square which says the filetype instead)
+* you need to install `FFmpeg` or `Pillow`; see [thumbnails](#thumbnails)
+* thumbnails are broken (some images appear, but other files just get a blank box, and/or the broken-image placeholder)
+* probably due to a reverse-proxy messing with the request URLs and stripping the query parameters (`?th=w`), so check your URL rewrite rules
+* could also be due to incorrect caching settings in reverse-proxies and/or CDNs, so make sure that nothing is set to ignore the query string
+* could also be due to misbehaving privacy-related browser extensions, so try to disable those
 * i want to learn python and/or programming and am considering looking at the copyparty source code in that occasion
 * ```bash
   _| _ __ _ _|_
@@ -653,6 +669,7 @@ press `g` or `田` to toggle grid-view instead of the file listing and `t` togg
 it does static images with Pillow / pyvips / FFmpeg, and uses FFmpeg for video files, so you may want to `--no-thumb` or maybe just `--no-vthumb` depending on how dangerous your users are
 * pyvips is 3x faster than Pillow, Pillow is 3x faster than FFmpeg
 * disable thumbnails for specific volumes with volflag `dthumb` for all, or `dvthumb` / `dathumb` / `dithumb` for video/audio/images only
+* for installing FFmpeg on windows, see [optional dependencies](#optional-dependencies)

 audio files are converted into spectrograms using FFmpeg unless you `--no-athumb` (and some FFmpeg builds may need `--th-ff-swr`)
@@ -764,8 +781,11 @@ the up2k UI is the epitome of polished intuitive experiences:
 * "parallel uploads" specifies how many chunks to upload at the same time
 * `[🏃]` analysis of other files should continue while one is uploading
 * `[🥔]` shows a simpler UI for faster uploads from slow devices
+* `[🛡️]` decides when to overwrite existing files on the server
+* `🛡️` = never (generate a new filename instead)
+* `🕒` = overwrite if the server-file is older
+* `♻️` = always overwrite if the files are different
 * `[🎲]` generate random filenames during upload
+* `[📅]` preserve last-modified timestamps; server times will match yours
 * `[🔎]` switch between upload and [file-search](#file-search) mode
 * ignore `[🔎]` if you add files by dragging them into the browser
@@ -1982,6 +2002,26 @@ in summary, `haproxy > caddy > traefik > nginx > apache > lighttpd`, and use uds
 * if these results are bullshit because my config exampels are bad, please submit corrections!

+## permanent cloudflare tunnel
+
+if you have a domain and want to get your copyparty online real quick, either from your home-PC behind a CGNAT or from a server without an existing [reverse-proxy](#reverse-proxy) setup, one approach is to create a [Cloudflare Tunnel](https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/get-started/) (formerly "Argo Tunnel")
+
+I'd recommend making a `Locally-managed tunnel` for more control, but if you prefer to make a `Remotely-managed tunnel` then this is currently how:
+
+* `cloudflare dashboard` » `zero trust` » `networks` » `tunnels` » `create a tunnel` » `cloudflared` » choose a cool `subdomain` and leave the `path` blank, and use `service type` = `http` and `URL` = `127.0.0.1:3923`
+* and if you want to just run the tunnel without installing it, skip the `cloudflared service install BASE64` step and instead do `cloudflared --no-autoupdate tunnel run --token BASE64`
+
+NOTE: since people will be connecting through cloudflare, as mentioned in [real-ip](#real-ip) you should run copyparty with `--xff-hdr cf-connecting-ip` to detect client IPs correctly
+
+config file example:
+
+```yaml
+[global]
+xff-hdr: cf-connecting-ip
+```

 ## prometheus

 metrics/stats can be enabled at URL `/.cpr/metrics` for grafana / prometheus / etc (openmetrics 1.0.0)
@@ -2068,6 +2108,18 @@ in a config file, this is the same as:
 run copyparty with `--mimes` to list all the default mappings

+### GDPR compliance
+
+imagine using copyparty professionally... **TINLA/IANAL; EU laws are hella confusing**
+
+* remember to disable logging, or configure logrotation to an acceptable timeframe with `-lo cpp-%Y-%m%d.txt.xz` or similar
+* if running with the database enabled (recommended), then have it forget uploader-IPs after some time using `--forget-ip 43200`
+* don't set it too low; [unposting](#unpost) a file is no longer possible after this takes effect
+* if you actually *are* a lawyer then I'm open for feedback, would be fun
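The two knobs above can be combined in a config file, in the same style as the tunnel example elsewhere in this diff; a sketch, assuming the config-file keys match the CLI flag names:

```yaml
[global]
lo: cpp-%Y-%m%d.txt.xz
forget-ip: 43200
```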
 ### feature chickenbits

 buggy feature? rip it out by setting any of the following environment variables to disable its associated bell or whistle,
@@ -2255,6 +2307,7 @@ quick summary of more eccentric web-browsers trying to view a directory index:
 | **ie4** and **netscape** 4.0 | can browse, upload with `?b=u`, auth with `&pw=wark` |
 | **ncsa mosaic** 2.7 | does not get a pass, [pic1](https://user-images.githubusercontent.com/241032/174189227-ae816026-cf6f-4be5-a26e-1b3b072c1b2f.png) - [pic2](https://user-images.githubusercontent.com/241032/174189225-5651c059-5152-46e9-ac26-7e98e497901b.png) |
 | **SerenityOS** (7e98457) | hits a page fault, works with `?b=u`, file upload not-impl |
+| **sony psp** 5.50 | can browse, upload/mkdir/msg (thx dwarf) [screenshot](https://github.com/user-attachments/assets/9d21f020-1110-4652-abeb-6fc09c533d4f) |
 | **nintendo 3ds** | can browse, upload, view thumbnails (thx bnjmn) |

 <p align="center"><img src="https://github.com/user-attachments/assets/88deab3d-6cad-4017-8841-2f041472b853" /></p>
@@ -2585,6 +2638,8 @@ enable [smb](#smb-server) support (**not** recommended): `impacket==0.12.0`
 `pyvips` gives higher quality thumbnails than `Pillow` and is 320% faster, using 270% more ram: `sudo apt install libvips42 && python3 -m pip install --user -U pyvips`

+to install FFmpeg on Windows, grab [a recent build](https://www.gyan.dev/ffmpeg/builds/ffmpeg-git-full.7z) -- you need `ffmpeg.exe` and `ffprobe.exe` from inside the `bin` folder; copy them into `C:\Windows\System32` or any other folder that's in your `%PATH%`
 ### dependency chickenbits


@@ -21,6 +21,7 @@ each plugin must define a `main()` which takes 3 arguments;
 ## on404

 * [redirect.py](redirect.py) sends an HTTP 301 or 302, redirecting the client to another page/file
+* [randpic.py](randpic.py) redirects `/foo/bar/randpic.jpg` to a random pic in `/foo/bar/`
 * [sorry.py](answer.py) replies with a custom message instead of the usual 404
 * [nooo.py](nooo.py) replies with an endless noooooooooooooo
 * [never404.py](never404.py) 100% guarantee that 404 will never be a thing again as it automatically creates dummy files whenever necessary

bin/handlers/randpic.py (new file, 35 lines)

@@ -0,0 +1,35 @@
import os
import random

from urllib.parse import quote


# assuming /foo/bar/ is a valid URL but /foo/bar/randpic.png does not exist,
# hijack the 404 with a redirect to a random pic in that folder
#
# thx to lia & kipu for the idea


def main(cli, vn, rem):
    req_fn = rem.split("/")[-1]
    if not cli.can_read or not req_fn.startswith("randpic"):
        return

    req_abspath = vn.canonical(rem)
    req_ap_dir = os.path.dirname(req_abspath)
    files_in_dir = os.listdir(req_ap_dir)

    if "." in req_fn:
        file_ext = "." + req_fn.split(".")[-1]
        files_in_dir = [x for x in files_in_dir if x.lower().endswith(file_ext)]

    if not files_in_dir:
        return

    selected_file = random.choice(files_in_dir)

    req_url = "/".join([vn.vpath, rem]).strip("/")
    req_dir = req_url.rsplit("/", 1)[0]
    new_url = "/".join([req_dir, quote(selected_file)]).strip("/")

    cli.reply(b"redirecting...", 302, headers={"Location": "/" + new_url})
    return "true"


@@ -1,8 +1,8 @@
 #!/usr/bin/env python3
 from __future__ import print_function, unicode_literals

-S_VERSION = "2.9"
-S_BUILD_DT = "2025-01-27"
+S_VERSION = "2.10"
+S_BUILD_DT = "2025-02-19"

 """
 u2c.py: upload to copyparty
@@ -807,7 +807,9 @@ def handshake(ar, file, search):
 else:
     if ar.touch:
         req["umod"] = True
-    if ar.ow:
+    if ar.owo:
+        req["replace"] = "mt"
+    elif ar.ow:
         req["replace"] = True

 file.recheck = False
@@ -1538,6 +1540,7 @@ source file/folder selection uses rsync syntax, meaning that:
 ap.add_argument("--ok", action="store_true", help="continue even if some local files are inaccessible")
 ap.add_argument("--touch", action="store_true", help="if last-modified timestamps differ, push local to server (need write+delete perms)")
 ap.add_argument("--ow", action="store_true", help="overwrite existing files instead of autorenaming")
+ap.add_argument("--owo", action="store_true", help="overwrite existing files if server-file is older")
 ap.add_argument("--spd", action="store_true", help="print speeds for each file")
 ap.add_argument("--version", action="store_true", help="show version and exit")


@@ -1,6 +1,6 @@
 # Maintainer: icxes <dev.null@need.moe>
 pkgname=copyparty
-pkgver="1.16.12"
+pkgver="1.16.14"
 pkgrel=1
 pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
 arch=("any")
@@ -22,7 +22,7 @@ optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tag
 )
 source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
 backup=("etc/${pkgname}.d/init" )
-sha256sums=("b5b65103198a3dd8a3f9b15c3d6aff6c21147bf87627ceacc64205493c248997")
+sha256sums=("62ecebf89ebd30e8537e06d0ed533542fe8bbb15147e02714131d7412ea60425")

 build() {
     cd "${srcdir}/${pkgname}-${pkgver}"


@@ -1,5 +1,5 @@
 {
-  "url": "https://github.com/9001/copyparty/releases/download/v1.16.12/copyparty-sfx.py",
-  "version": "1.16.12",
-  "hash": "sha256-gZZqd88/8PEseVtWspocqrWV7Ck8YQAhcsa4ED3F4JU="
+  "url": "https://github.com/9001/copyparty/releases/download/v1.16.14/copyparty-sfx.py",
+  "version": "1.16.14",
+  "hash": "sha256-hFIJdIOt1n2Raw9VdvTmX+C/xr8tgLMa956lhJ7DZvo="
 }


@@ -1039,6 +1039,7 @@ def add_upload(ap):
 ap2.add_argument("--turbo", metavar="LVL", type=int, default=0, help="configure turbo-mode in up2k client; [\033[32m-1\033[0m] = forbidden/always-off, [\033[32m0\033[0m] = default-off and warn if enabled, [\033[32m1\033[0m] = default-off, [\033[32m2\033[0m] = on, [\033[32m3\033[0m] = on and disable datecheck")
 ap2.add_argument("--u2j", metavar="JOBS", type=int, default=2, help="web-client: number of file chunks to upload in parallel; 1 or 2 is good for low-latency (same-country) connections, 4-8 for android clients, 16 for cross-atlantic (max=64)")
 ap2.add_argument("--u2sz", metavar="N,N,N", type=u, default="1,64,96", help="web-client: default upload chunksize (MiB); sets \033[33mmin,default,max\033[0m in the settings gui. Each HTTP POST will aim for \033[33mdefault\033[0m, and never exceed \033[33mmax\033[0m. Cloudflare max is 96. Big values are good for cross-atlantic but may increase HDD fragmentation on some FS. Disable this optimization with [\033[32m1,1,1\033[0m]")
+ap2.add_argument("--u2ow", metavar="NUM", type=int, default=0, help="web-client: default setting for when to overwrite existing files; [\033[32m0\033[0m]=never, [\033[32m1\033[0m]=if-client-newer, [\033[32m2\033[0m]=always (volflag=u2ow)")
 ap2.add_argument("--u2sort", metavar="TXT", type=u, default="s", help="upload order; [\033[32ms\033[0m]=smallest-first, [\033[32mn\033[0m]=alphabetical, [\033[32mfs\033[0m]=force-s, [\033[32mfn\033[0m]=force-n -- alphabetical is a bit slower on fiber/LAN but makes it easier to eyeball if everything went fine")
 ap2.add_argument("--write-uplog", action="store_true", help="write POST reports to textfiles in working-directory")
@@ -1269,7 +1270,7 @@ def add_optouts(ap):
 ap2.add_argument("--no-tarcmp", action="store_true", help="disable download as compressed tar (?tar=gz, ?tar=bz2, ?tar=xz, ?tar=gz:9, ...)")
 ap2.add_argument("--no-lifetime", action="store_true", help="do not allow clients (or server config) to schedule an upload to be deleted after a given time")
 ap2.add_argument("--no-pipe", action="store_true", help="disable race-the-beam (lockstep download of files which are currently being uploaded) (volflag=nopipe)")
-ap2.add_argument("--no-db-ip", action="store_true", help="do not write uploader IPs into the database")
+ap2.add_argument("--no-db-ip", action="store_true", help="do not write uploader-IP into the database; will also disable unpost, you may want \033[32m--forget-ip\033[0m instead (volflag=no_db_ip)")

 def add_safety(ap):
@@ -1419,6 +1420,7 @@ def add_db_general(ap, hcores):
 ap2.add_argument("--no-dhash", action="store_true", help="disable rescan acceleration; do full database integrity check -- makes the db ~5%% smaller and bootup/rescans 3~10x slower")
 ap2.add_argument("--re-dhash", action="store_true", help="force a cache rebuild on startup; enable this once if it gets out of sync (should never be necessary)")
 ap2.add_argument("--no-forget", action="store_true", help="never forget indexed files, even when deleted from disk -- makes it impossible to ever upload the same file twice -- only useful for offloading uploads to a cloud service or something (volflag=noforget)")
+ap2.add_argument("--forget-ip", metavar="MIN", type=int, default=0, help="remove uploader-IP from database (and make unpost impossible) \033[33mMIN\033[0m minutes after upload, for GDPR reasons. Default [\033[32m0\033[0m] is never-forget. [\033[32m1440\033[0m]=day, [\033[32m10080\033[0m]=week, [\033[32m43200\033[0m]=month. (volflag=forget_ip)")
 ap2.add_argument("--dbd", metavar="PROFILE", default="wal", help="database durability profile; sets the tradeoff between robustness and speed, see \033[33m--help-dbd\033[0m (volflag=dbd)")
 ap2.add_argument("--xlink", action="store_true", help="on upload: check all volumes for dupes, not just the target volume (probably buggy, not recommended) (volflag=xlink)")
 ap2.add_argument("--hash-mt", metavar="CORES", type=int, default=hcores, help="num cpu cores to use for file hashing; set 0 or 1 for single-core hashing")


@@ -1,8 +1,8 @@
 # coding: utf-8

-VERSION = (1, 16, 13)
+VERSION = (1, 16, 15)
 CODENAME = "COPYparty"
-BUILD_DT = (2025, 2, 13)
+BUILD_DT = (2025, 2, 25)

 S_VERSION = ".".join(map(str, VERSION))
 S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)


@@ -1389,8 +1389,16 @@ class AuthSrv(object):
 name = name.lower()
 # volflags are snake_case, but a leading dash is the removal operator
-if name not in flagdescs and "-" in name[1:]:
-    name = name[:1] + name[1:].replace("-", "_")
+stripped = name.lstrip("-")
+zi = len(name) - len(stripped)
+if zi > 1:
+    t = "WARNING: the config for volume [/%s] specified a volflag with multiple leading hyphens (%s); use one hyphen to remove, or zero hyphens to add a flag. Will now enable flag [%s]"
+    self.log(t % (vpath, name, stripped), 3)
+    name = stripped
+    zi = 0
+if stripped not in flagdescs and "-" in stripped:
+    name = ("-" * zi) + stripped.replace("-", "_")

 desc = flagdescs.get(name.lstrip("-"), "?").replace("\n", " ")
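Stripped of the logging, the normalization in this hunk boils down to the following; a simplified sketch that skips the `flagdescs` lookup, so every inner hyphen gets converted:

```python
def normalize_volflag(name):
    # volflags are snake_case; one leading "-" is the removal operator,
    # two or more leading "-" are treated as a plain add (warned about upstream)
    stripped = name.lstrip("-")
    dashes = len(name) - len(stripped)
    if dashes > 1:
        dashes = 0
    return ("-" * dashes) + stripped.replace("-", "_")
```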
@@ -1576,6 +1584,11 @@ class AuthSrv(object):
         for vol in vfs.all_vols.values():
             unknown_flags = set()
             for k, v in vol.flags.items():
+                stripped = k.lstrip("-")
+                if k != stripped and stripped not in vol.flags:
+                    t = "WARNING: the config for volume [/%s] tried to remove volflag [%s] by specifying [%s] but that volflag was not already set"
+                    self.log(t % (vol.vpath, stripped, k), 3)
+                k = stripped
                 if k not in flagdescs and k not in k_ign:
                     unknown_flags.add(k)
             if unknown_flags:
@@ -1943,11 +1956,8 @@ class AuthSrv(object):
             if vf not in vol.flags:
                 vol.flags[vf] = getattr(self.args, ga)
-        for k in ("nrand",):
-            if k not in vol.flags:
-                vol.flags[k] = getattr(self.args, k)
-        for k in ("nrand", "u2abort", "ups_who", "zip_who"):
+        zs = "forget_ip nrand u2abort u2ow ups_who zip_who"
+        for k in zs.split():
             if k in vol.flags:
                 vol.flags[k] = int(vol.flags[k])
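A minimal standalone sketch of the coercion loop above, without the AuthSrv plumbing:

```python
def coerce_volflags(flags):
    # these volflags may arrive as strings from a config file,
    # so any that are present are forced to int; other flags
    # are left untouched
    zs = "forget_ip nrand u2abort u2ow ups_who zip_who"
    for k in zs.split():
        if k in flags:
            flags[k] = int(flags[k])
    return flags
```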
@@ -2422,6 +2432,7 @@ class AuthSrv(object):
             "u2j": self.args.u2j,
             "u2sz": self.args.u2sz,
             "u2ts": vf["u2ts"],
+            "u2ow": vf["u2ow"],
             "frand": bool(vf.get("rand")),
             "lifetime": vn.js_ls["lifetime"],
             "u2sort": self.args.u2sort,


@@ -43,6 +43,7 @@ def vf_bmap() -> dict[str, str]:
         "gsel",
         "hardlink",
         "magic",
+        "no_db_ip",
         "no_sb_md",
         "no_sb_lg",
         "nsort",
@@ -73,6 +74,7 @@ def vf_vmap() -> dict[str, str]:
     }
     for k in (
         "dbd",
+        "forget_ip",
         "hsortn",
         "html_head",
         "lg_sbf",
@@ -80,6 +82,7 @@ def vf_vmap() -> dict[str, str]:
         "lg_sba",
         "md_sba",
         "nrand",
+        "u2ow",
         "og_desc",
         "og_site",
         "og_th",
@@ -156,7 +159,8 @@ flagcats = {
         "daw": "enable full WebDAV write support (dangerous);\nPUT-operations will now \033[1;31mOVERWRITE\033[0;35m existing files",
         "nosub": "forces all uploads into the top folder of the vfs",
         "magic": "enables filetype detection for nameless uploads",
-        "gz": "allows server-side gzip of uploads with ?gz (also c,xz)",
+        "gz": "allows server-side gzip compression of uploads with ?gz",
+        "xz": "allows server-side lzma compression of uploads with ?xz",
         "pk": "forces server-side compression, optional arg: xz,9",
     },
     "upload rules": {
"upload rules": { "upload rules": {
@@ -167,6 +171,7 @@ flagcats = {
         "medialinks": "return medialinks for non-up2k uploads (not hotlinks)",
         "rand": "force randomized filenames, 9 chars long by default",
         "nrand=N": "randomized filenames are N chars long",
+        "u2ow=N": "overwrite existing files? 0=no 1=if-older 2=always",
         "u2ts=fc": "[f]orce [c]lient-last-modified or [u]pload-time",
         "u2abort=1": "allow aborting unfinished uploads? 0=no 1=strict 2=ip-chk 3=acct-chk",
         "sz=1k-3m": "allow filesizes between 1 KiB and 3MiB",
@@ -197,6 +202,8 @@ flagcats = {
         "nohash=\\.iso$": "skips hashing file contents if path matches *.iso",
         "noidx=\\.iso$": "fully ignores the contents at paths matching *.iso",
         "noforget": "don't forget files when deleted from disk",
+        "forget_ip=43200": "forget uploader-IP after 30 days (GDPR)",
+        "no_db_ip": "never store uploader-IP in the db; disables unpost",
         "fat32": "avoid excessive reindexing on android sdcardfs",
         "dbd=[acid|swal|wal|yolo]": "database speed-durability tradeoff",
         "xlink": "cross-volume dupe detection / linking (dangerous)",


@@ -152,6 +152,8 @@
 RE_HSAFE = re.compile(r"[\x00-\x1f<>\"'&]") # search always much faster
 RE_HOST = re.compile(r"[^][0-9a-zA-Z.:_-]") # search faster <=17ch
 RE_MHOST = re.compile(r"^[][0-9a-zA-Z.:_-]+$") # match faster >=18ch
 RE_K = re.compile(r"[^0-9a-zA-Z_-]") # search faster <=17ch
+RE_HR = re.compile(r"[<>\"'&]")
+RE_MDV = re.compile(r"(.*)\.([0-9]+\.[0-9]{3})(\.[Mm][Dd])$")
 UPARAM_CC_OK = set("doc move tree".split())
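For reference, the new `RE_MDV` pattern splits a markdown file-history name into stem, "unixtime.milliseconds" tag, and a case-insensitive `.md` extension; a quick check (the filename below is made up for demonstration):

```python
import re

# the RE_MDV pattern from the hunk above, verbatim
RE_MDV = re.compile(r"(.*)\.([0-9]+\.[0-9]{3})(\.[Mm][Dd])$")

# hypothetical history-backup filename
m = RE_MDV.match("notes.1740441600.123.md")
stem, ts, ext = m.groups()
```

Note that non-markdown backups (e.g. `.txt`) no longer match, which is the point of the v1.16.15 fix: only markdown files can load into the editor, so only they get a file-history indicator.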
@@ -1811,7 +1813,8 @@ class HttpCli(object):
         dst = unquotep(dst)
         # overwrite=True is default; rfc4918 9.8.4
-        overwrite = self.headers.get("overwrite", "").lower() != "f"
+        zs = self.headers.get("overwrite", "").lower()
+        overwrite = zs not in ["f", "false"]
         try:
             fun = self._cp if self.mode == "COPY" else self._mv
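The header-parsing change can be sketched as a standalone helper; the dict argument stands in for the request headers. RFC 4918 only defines the values `T` and `F`, but some clients apparently send `Overwrite: False`, so that spelling is tolerated too:

```python
def parse_overwrite(headers):
    # rfc4918 9.8.4: Overwrite defaults to true when absent;
    # legal values are "T" and "F", but accept "false" as well
    zs = headers.get("overwrite", "").lower()
    return zs not in ["f", "false"]
```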
@@ -5968,7 +5971,7 @@ class HttpCli(object):
         # [num-backups, most-recent, hist-path]
         hist: dict[str, tuple[int, float, str]] = {}
         histdir = os.path.join(fsroot, ".hist")
-        ptn = re.compile(r"(.*)\.([0-9]+\.[0-9]{3})(\.[^\.]+)$")
+        ptn = RE_MDV
         try:
             for hfn in bos.listdir(histdir):
                 m = ptn.match(hfn)
@@ -6001,6 +6004,7 @@ class HttpCli(object):
         dirs = []
         files = []
+        ptn_hr = RE_HR
         for fn in ls_names:
             base = ""
             href = fn
@@ -6055,11 +6059,13 @@ class HttpCli(object):
                 zd.second,
             )

-            try:
-                ext = "---" if is_dir else fn.rsplit(".", 1)[1]
+            if is_dir:
+                ext = "---"
+            elif "." in fn:
+                ext = ptn_hr.sub("@", fn.rsplit(".", 1)[1])
                 if len(ext) > 16:
                     ext = ext[:16]
-            except:
+            else:
                 ext = "%"

             if add_fk and not is_dir:
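The replacement logic, as a pure-function sketch (the function name is made up): directories get `---`, extensionless files get `%`, and any html-risky character in the extension is swapped for `@` before the value is capped at 16 chars.

```python
import re

RE_HR = re.compile(r"[<>\"'&]")  # same html-risky chars as above

def safe_ext(fn, is_dir):
    # hypothetical standalone version of the listing logic
    if is_dir:
        return "---"
    if "." in fn:
        return RE_HR.sub("@", fn.rsplit(".", 1)[1])[:16]
    return "%"
```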


@@ -557,6 +557,7 @@ class Up2k(object):
         else:
             # important; not deferred by db_act
             timeout = self._check_lifetimes()
+            timeout = min(self._check_forget_ip(), timeout)
             try:
                 if self.args.shr:
                     timeout = min(self._check_shares(), timeout)
@@ -617,6 +618,43 @@ class Up2k(object):
         for v in vols:
             volage[v] = now

+    def _check_forget_ip(self) -> float:
+        now = time.time()
+        timeout = now + 9001
+        for vp, vol in sorted(self.vfs.all_vols.items()):
+            maxage = vol.flags["forget_ip"]
+            if not maxage:
+                continue
+
+            cur = self.cur.get(vol.realpath)
+            if not cur:
+                continue
+
+            cutoff = now - maxage * 60
+            for _ in range(2):
+                q = "select ip, at from up where ip > '' order by +at limit 1"
+                hits = cur.execute(q).fetchall()
+                if not hits:
+                    break
+
+                remains = hits[0][1] - cutoff
+                if remains > 0:
+                    timeout = min(timeout, now + remains)
+                    break
+
+                q = "update up set ip = '' where ip > '' and at <= %d"
+                cur.execute(q % (cutoff,))
+                zi = cur.rowcount
+                cur.connection.commit()
+                t = "forget-ip(%d) removed %d IPs from db [/%s]"
+                self.log(t % (maxage, zi, vol.vpath))
+                timeout = min(timeout, now + 900)
+
+        return timeout
+
     def _check_lifetimes(self) -> float:
         now = time.time()
         timeout = now + 9001
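The scheduling math of `_check_forget_ip` boils down to: rows at or past the cutoff get scrubbed now; otherwise the next wakeup is when the oldest row will cross the cutoff. A standalone sketch with a hypothetical helper name and no database:

```python
import time

def forget_ip_deadline(maxage_min, oldest_at, now=None):
    # maxage_min: the forget_ip volflag, in minutes
    # oldest_at: upload-time of the oldest row still holding an IP
    # returns the absolute time to sleep until, or None = scrub now
    if now is None:
        now = time.time()
    cutoff = now - maxage_min * 60
    remains = oldest_at - cutoff
    if remains > 0:
        return now + remains
    return None
```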
@@ -1081,7 +1119,7 @@ class Up2k(object):
         ft = "\033[0;32m{}{:.0}"
         ff = "\033[0;35m{}{:.0}"
         fv = "\033[0;36m{}:\033[90m{}"
-        zs = "html_head mv_re_r mv_re_t rm_re_r rm_re_t srch_re_dots srch_re_nodot"
+        zs = "ext_th_d html_head mv_re_r mv_re_t rm_re_r rm_re_t srch_re_dots srch_re_nodot"
         fx = set(zs.split())
         fd = vf_bmap()
         fd.update(vf_cmap())
@@ -3335,7 +3373,17 @@ class Up2k(object):
             return fname

         fp = djoin(fdir, fname)
-        if job.get("replace") and bos.path.exists(fp):
+        ow = job.get("replace") and bos.path.exists(fp)
+        if ow and "mt" in str(job["replace"]).lower():
+            mts = bos.stat(fp).st_mtime
+            mtc = job["lmod"]
+            if mtc < mts:
+                t = "will not overwrite; server %d sec newer than client; %d > %d %r"
+                self.log(t % (mts - mtc, mts, mtc, fp))
+                ow = False
+
+        if ow:
             self.log("replacing existing file at %r" % (fp,))
             cur = None
             ptop = job["ptop"]
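The tristate decision above, as a pure-function sketch (the function and argument names are made up): a falsy `replace` never overwrites, `"mt"` only overwrites when the client's file is at least as new as the server's, and any other truthy value always overwrites.

```python
def should_overwrite(replace, file_exists, srv_mtime, cli_mtime):
    # mirrors the handshake "replace" property handling above
    ow = bool(replace) and file_exists
    if ow and "mt" in str(replace).lower():
        ow = cli_mtime >= srv_mtime  # keep the newer server copy
    return ow
```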
@@ -3789,7 +3837,7 @@ class Up2k(object):
             db_ip = ""
         else:
             # plugins may expect this to look like an actual IP
-            db_ip = "1.1.1.1" if self.args.no_db_ip else ip
+            db_ip = "1.1.1.1" if "no_db_ip" in vflags else ip

         sql = "insert into up values (?,?,?,?,?,?,?)"
         v = (dwark, int(ts), sz, rd, fn, db_ip, int(at or 0))


@@ -1695,7 +1695,7 @@ html.y #tree.nowrap .ntree a+a:hover {
 	line-height: 0;
 }
 .dumb_loader_thing {
-	display: inline-block;
+	display: block;
 	margin: 1em .3em 1em 1em;
 	padding: 0 1.2em 0 0;
 	font-size: 4em;
@@ -1703,9 +1703,16 @@ html.y #tree.nowrap .ntree a+a:hover {
 	min-height: 1em;
 	opacity: 0;
 	animation: 1s linear .15s infinite forwards spin, .2s ease .15s 1 forwards fadein;
-	position: absolute;
+	position: fixed;
+	top: .3em;
 	z-index: 9;
 }
+#dlt_t {
+	left: 0;
+}
+#dlt_f {
+	right: .5em;
+}
 #files .cfg {
 	display: none;
 	font-size: 2em;


@@ -151,7 +151,8 @@ var Ls = {
 		"ul_par": "parallel uploads:",
 		"ut_rand": "randomize filenames",
-		"ut_u2ts": "copy the last-modified timestamp$Nfrom your filesystem to the server",
+		"ut_u2ts": "copy the last-modified timestamp$Nfrom your filesystem to the server\">📅",
+		"ut_ow": "overwrite existing files on the server?$N🛡: never (will generate a new filename instead)$N🕒: overwrite if server-file is older than yours$N♻: always overwrite if the files are different",
 		"ut_mt": "continue hashing other files while uploading$N$Nmaybe disable if your CPU or HDD is a bottleneck",
 		"ut_ask": 'ask for confirmation before upload starts">💭',
 		"ut_pot": "improve upload speed on slow devices$Nby making the UI less complex",
@@ -538,6 +539,7 @@ var Ls = {
 		"u_ewrite": 'you do not have write-access to this folder',
 		"u_eread": 'you do not have read-access to this folder',
 		"u_enoi": 'file-search is not enabled in server config',
+		"u_enoow": "overwrite will not work here; need Delete-permission",
 		"u_badf": 'These {0} files (of {1} total) were skipped, possibly due to filesystem permissions:\n\n',
 		"u_blankf": 'These {0} files (of {1} total) are blank / empty; upload them anyways?\n\n',
 		"u_just1": '\nMaybe it works better if you select just one file',
@@ -751,7 +753,8 @@ var Ls = {
 		"ul_par": "samtidige handl.:",
 		"ut_rand": "finn opp nye tilfeldige filnavn",
-		"ut_u2ts": "gi filen på serveren samme$Ntidsstempel som lokalt hos deg",
+		"ut_u2ts": "gi filen på serveren samme$Ntidsstempel som lokalt hos deg\">📅",
+		"ut_ow": "overskrive eksisterende filer på serveren?$N🛡: aldri (finner på et nytt filnavn istedenfor)$N🕒: overskriv hvis serverens fil er eldre$N♻: alltid, gitt at innholdet er forskjellig",
 		"ut_mt": "fortsett å befare køen mens opplastning foregår$N$Nskru denne av dersom du har en$Ntreg prosessor eller harddisk",
 		"ut_ask": 'bekreft filutvalg før opplastning starter">💭',
 		"ut_pot": "forbedre ytelsen på trege enheter ved å$Nforenkle brukergrensesnittet",
@@ -1138,6 +1141,7 @@ var Ls = {
 		"u_ewrite": 'du har ikke skrivetilgang i denne mappen',
 		"u_eread": 'du har ikke lesetilgang i denne mappen',
 		"u_enoi": 'filsøk er deaktivert i serverkonfigurasjonen',
+		"u_enoow": "kan ikke overskrive filer her (Delete-rettigheten er nødvendig)",
 		"u_badf": 'Disse {0} filene (av totalt {1}) kan ikke leses, kanskje pga rettighetsproblemer i filsystemet på datamaskinen din:\n\n',
 		"u_blankf": 'Disse {0} filene (av totalt {1}) er blanke / uten innhold; ønsker du å laste dem opp uansett?\n\n',
 		"u_just1": '\nFunker kanskje bedre hvis du bare tar én fil om gangen',
@@ -1351,7 +1355,8 @@ var Ls = {
 		"ul_par": "并行上传:",
 		"ut_rand": "随机化文件名",
-		"ut_u2ts": "将最后修改的时间戳$N从你的文件系统复制到服务器",
+		"ut_u2ts": "将最后修改的时间戳$N从你的文件系统复制到服务器\">📅",
+		"ut_ow": "覆盖服务器上的现有文件?$N🛡: 从不(会生成一个新文件名)$N🕒: 服务器文件较旧则覆盖$N♻: 总是覆盖,如果文件内容不同", //m
 		"ut_mt": "在上传时继续哈希其他文件$N$N如果你的 CPU 或硬盘是瓶颈,可能需要禁用",
 		"ut_ask": '上传开始前询问确认">💭',
 		"ut_pot": "通过简化 UI 来$N提高慢设备上的上传速度",
@@ -1738,6 +1743,7 @@ var Ls = {
 		"u_ewrite": '你对这个文件夹没有写入权限',
 		"u_eread": '你对这个文件夹没有读取权限',
 		"u_enoi": '文件搜索在服务器配置中未启用',
+		"u_enoow": "无法覆盖此处的文件;需要删除权限", //m
 		"u_badf": '这些 {0} 个文件(共 {1} 个)被跳过,可能是由于文件系统权限:\n\n',
 		"u_blankf": '这些 {0} 个文件(共 {1} 个)是空白的;是否仍然上传?\n\n',
 		"u_just1": '\n也许如果你只选择一个文件会更好',
@@ -1918,8 +1924,8 @@ ebi('op_up2k').innerHTML = (
 	' <label for="u2rand" tt="' + L.ut_rand + '">🎲</label>\n' +
 	' </td>\n' +
 	' <td class="c" rowspan="2">\n' +
-	' <input type="checkbox" id="u2ts" />\n' +
-	' <label for="u2ts" tt="' + L.ut_u2ts + '">📅</a>\n' +
+	' <input type="checkbox" id="u2ow" />\n' +
+	' <label for="u2ow" tt="' + L.ut_ow + '">?</a>\n' +
 	' </td>\n' +
 	' <td class="c" data-perm="read" data-dep="idx" rowspan="2">\n' +
 	' <input type="checkbox" id="fsearch" />\n' +
@@ -2037,6 +2043,7 @@ ebi('op_cfg').innerHTML = (
 	' <h3>' + L.cl_uopts + '</h3>\n' +
 	' <div>\n' +
 	' <a id="ask_up" class="tgl btn" href="#" tt="' + L.ut_ask + '</a>\n' +
+	' <a id="u2ts" class="tgl btn" href="#" tt="' + L.ut_u2ts + '</a>\n' +
 	' <a id="umod" class="tgl btn" href="#" tt="' + L.cut_umod + '</a>\n' +
 	' <a id="hashw" class="tgl btn" href="#" tt="' + L.cut_mt + '</a>\n' +
 	' <a id="u2turbo" class="tgl btn ttb" href="#" tt="' + L.cut_turbo + '</a>\n' +
@@ -2173,6 +2180,11 @@ function goto(dest) {
 }

+var m = SPINNER.split(','),
+    SPINNER_CSS = SPINNER.slice(1 + m[0].length);
+SPINNER = m[0];
+
 var SBW, SBH; // scrollbar size
 (function () {
     var el = mknod('div');
@@ -4426,7 +4438,7 @@ function read_dsort(txt) {
         }
     }
     catch (ex) {
-        toast.warn(10, 'failed to apply default sort order [' + txt + ']:\n' + ex);
+        toast.warn(10, 'failed to apply default sort order [' + esc('' + txt) + ']:\n' + ex);
         dsort = [['href', 1, '']];
     }
 }
@@ -7498,7 +7510,7 @@ var treectl = (function () {
     xhr.open('GET', addq(dst, 'tree=' + top + (r.dots ? '&dots' : '') + k), true);
     xhr.onload = xhr.onerror = r.recvtree;
     xhr.send();
-    enspin('#tree');
+    enspin('t');
 }

 r.recvtree = function () {
@@ -7546,7 +7558,7 @@ var treectl = (function () {
             }
         }
     }
-    despin('#tree');
+    qsr('#dlt_t');

     try {
         QS('#treeul>li>a+a').textContent = '[root]';
@@ -7706,8 +7718,8 @@ var treectl = (function () {
     r.sb_msg = false;
     r.nextdir = xhr.top;
     clearTimeout(mpl.t_eplay);
-    enspin('#tree');
-    enspin(thegrid.en ? '#gfiles' : '#files');
+    enspin('t');
+    enspin('f');
     window.removeEventListener('scroll', r.tscroll);
 }
@@ -7802,9 +7814,8 @@ var treectl = (function () {
     }
     r.gentab(this.top, res);
-    despin('#tree');
-    despin('#files');
-    despin('#gfiles');
+    qsr('#dlt_t');
+    qsr('#dlt_f');

     var lg0 = res.logues ? res.logues[0] || "" : "",
         lg1 = res.logues ? res.logues[1] || "" : "",
@@ -8193,27 +8204,15 @@ var treectl = (function () {
 })();

-var m = SPINNER.split(','),
-    SPINNER_CSS = m.length < 2 ? '' : SPINNER.slice(m[0].length + 1);
-SPINNER = m[0];
-
-function enspin(sel) {
-    despin(sel);
-    var d = mknod('div');
+function enspin(i) {
+    i = 'dlt_' + i;
+    if (ebi(i))
+        return;
+
+    var d = mknod('div', i, SPINNER);
     d.className = 'dumb_loader_thing';
-    d.innerHTML = SPINNER;
     if (SPINNER_CSS)
         d.style.cssText = SPINNER_CSS;
-    var tgt = QS(sel);
-    tgt.insertBefore(d, tgt.childNodes[0]);
-}
-
-function despin(sel) {
-    var o = QSA(sel + '>.dumb_loader_thing');
-    for (var a = o.length - 1; a >= 0; a--)
-        o[a].parentNode.removeChild(o[a]);
+    document.body.appendChild(d);
 }
@@ -8380,7 +8379,7 @@ function mk_files_header(taglist) {
         var tag = taglist[a],
             c1 = tag.slice(0, 1).toUpperCase();
-        tag = c1 + tag.slice(1);
+        tag = esc(c1 + tag.slice(1));
         if (c1 == '.')
             tag = '<th name="tags/' + tag + '" sort="int"><span>' + tag.slice(1);
         else
@@ -8697,7 +8696,17 @@ var mukey = (function () {
 var light, theme, themen;
 var settheme = (function () {
-    var ax = 'abcdefghijklmnopqrstuvwx';
+    var r = {},
+        ax = 'abcdefghijklmnopqrstuvwx',
+        tre = '🌲',
+        chldr = !SPINNER_CSS && SPINNER == tre;
+
+    r.ldr = {
+        '4':['🌴'],
+        '5':['🌭', 'padding:0 0 .7em .7em;filter:saturate(3)'],
+        '6':['📞', 'padding:0;filter:brightness(2) sepia(1) saturate(3) hue-rotate(60deg)'],
+        '7':['▲', 'font-size:3em'], //cp437
+    };

     theme = sread('cpp_thm') || 'a';
     if (!/^[a-x][yz]/.exec(theme))
@@ -8727,13 +8736,19 @@ var settheme = (function () {
     ebi('themes').innerHTML = html.join('');
     var btns = QSA('#themes a');
     for (var a = 0; a < themes; a++)
-        btns[a].onclick = settheme;
+        btns[a].onclick = r.go;
+
+    if (chldr) {
+        var x = r.ldr[itheme] || [tre];
+        SPINNER = x[0];
+        SPINNER_CSS = x[1];
+    }

     bcfg_set('light', light);
     tt.att(ebi('themes'));
 }
-    function settheme(e) {
+    r.go = function (e) {
         var i = e;
         try { ev(e); i = e.target.textContent; } catch (ex) { }
         light = i % 2 == 1;
@@ -8746,7 +8761,7 @@ var settheme = (function () {
     }
     freshen();
-    return settheme;
+    return r;
 })();


@@ -1078,26 +1078,28 @@ action_stack = (function () {
     var p1 = from.length,
         p2 = to.length;

-    while (p1-- > 0 && p2-- > 0)
+    while (p1 --> 0 && p2 --> 0)
         if (from[p1] != to[p2])
             break;

-    if (car > ++p1) {
+    if (car > ++p1)
         car = p1;
-    }

     var txt = from.substring(car, p1)
     return {
         car: car,
-        cdr: ++p2,
+        cdr: p2 + (car && 1),
         txt: txt,
         cpos: cpos
     };
 }

 var undiff = function (from, change) {
+    var t1 = from.substring(0, change.car),
+        t2 = from.substring(change.cdr);
+
     return {
-        txt: from.substring(0, change.car) + change.txt + from.substring(change.cdr),
+        txt: t1 + change.txt + t2,
         cpos: change.cpos
     };
 }
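The undo-stack diff above stores only the changed middle of the string by trimming the common prefix (`car`) and common suffix. A simplified Python rendition of the same idea (the index bookkeeping differs slightly from the JS above, which is what the bugfix adjusts):

```python
def diff(old, new):
    # car = length of the common prefix;
    # cdr = where the common suffix starts in `new`
    car = 0
    while car < min(len(old), len(new)) and old[car] == new[car]:
        car += 1
    p1, p2 = len(old), len(new)
    while p1 > car and p2 > car and old[p1 - 1] == new[p2 - 1]:
        p1 -= 1
        p2 -= 1
    return {"car": car, "cdr": p2, "txt": old[car:p1]}

def undiff(new, change):
    # rebuild the old string from the new one plus the change
    return new[:change["car"]] + change["txt"] + new[change["cdr"]:]
```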


@@ -885,6 +885,25 @@ function up2k_init(subtle) {
     bcfg_bind(uc, 'upnag', 'upnag', false, set_upnag);
     bcfg_bind(uc, 'upsfx', 'upsfx', false, set_upsfx);

+    uc.ow = parseInt(sread('u2ow', ['0', '1', '2']) || u2ow);
+    uc.owt = ['🛡️', '🕒', '♻️'];
+
+    function set_ow() {
+        QS('label[for="u2ow"]').innerHTML = uc.owt[uc.ow];
+        ebi('u2ow').checked = true; //cosmetic
+    }
+    ebi('u2ow').onclick = function (e) {
+        ev(e);
+        if (++uc.ow > 2)
+            uc.ow = 0;
+        swrite('u2ow', uc.ow);
+        set_ow();
+        if (uc.ow && !has(perms, 'delete'))
+            toast.warn(10, L.u_enoow, 'noow');
+        else if (toast.tag == 'noow')
+            toast.hide();
+    };
+    set_ow();
+
     var st = {
         "files": [],
         "nfile": {
@@ -1300,7 +1319,7 @@ function up2k_init(subtle) {
     if (bad_files.length) {
         var msg = L.u_badf.format(bad_files.length, ntot);
         for (var a = 0, aa = Math.min(20, bad_files.length); a < aa; a++)
-            msg += '-- ' + bad_files[a][1] + '\n';
+            msg += '-- ' + esc(bad_files[a][1]) + '\n';

         msg += L.u_just1;
         return modal.alert(msg, function () {
@@ -1312,7 +1331,7 @@ function up2k_init(subtle) {
     if (nil_files.length) {
         var msg = L.u_blankf.format(nil_files.length, ntot);
         for (var a = 0, aa = Math.min(20, nil_files.length); a < aa; a++)
-            msg += '-- ' + nil_files[a][1] + '\n';
+            msg += '-- ' + esc(nil_files[a][1]) + '\n';

         msg += L.u_just1;
         return modal.confirm(msg, function () {
@@ -2054,8 +2073,8 @@ function up2k_init(subtle) {
         try { orz(e); } catch (ex) { vis_exh(ex + '', 'up2k.js', '', '', ex); }
     };
     reader.onerror = function () {
-        var err = reader.error + '';
-        var handled = false;
+        var err = esc('' + reader.error),
+            handled = false;

         if (err.indexOf('NotReadableError') !== -1 || // win10-chrome defender
             err.indexOf('NotFoundError') !== -1 // macos-firefox permissions
@@ -2279,7 +2298,7 @@ function up2k_init(subtle) {
     xhr.onerror = xhr.ontimeout = function () {
         console.log('head onerror, retrying', t.name, t);
         if (!toast.visible)
-            toast.warn(9.98, L.u_enethd + "\n\nfile: " + t.name, t);
+            toast.warn(9.98, L.u_enethd + "\n\nfile: " + esc(t.name), t);

         apop(st.busy.head, t);
         st.todo.head.unshift(t);
@@ -2354,7 +2373,7 @@ function up2k_init(subtle) {
             return console.log('zombie handshake onerror', t.name, t);

         if (!toast.visible)
-            toast.warn(9.98, L.u_eneths + "\n\nfile: " + t.name, t);
+            toast.warn(9.98, L.u_eneths + "\n\nfile: " + esc(t.name), t);

         console.log('handshake onerror, retrying', t.name, t);
         apop(st.busy.handshake, t);
@@ -2459,7 +2478,7 @@ function up2k_init(subtle) {
             var idx = t.hash.indexOf(missing[a]);
             if (idx < 0)
                 return modal.alert('wtf negative index for hash "{0}" in task:\n{1}'.format(
-                    missing[a], JSON.stringify(t)));
+                    missing[a], esc(JSON.stringify(t))));

             t.postlist.push(idx);
             cbd[idx] = 0;
@@ -2613,7 +2632,7 @@ function up2k_init(subtle) {
             return toast.err(0, L.u_ehsdf + "\n\n" + rsp.replace(/.*; /, ''));

         err = t.t_uploading ? L.u_ehsfin : t.srch ? L.u_ehssrch : L.u_ehsinit;
-        xhrchk(xhr, err + "\n\nfile: " + t.name + "\n\nerror ", "404, target folder not found", "warn", t);
+        xhrchk(xhr, err + "\n\nfile: " + esc(t.name) + "\n\nerror ", "404, target folder not found", "warn", t);
     }
 }
 xhr.onload = function (e) {
@@ -2634,6 +2653,13 @@ function up2k_init(subtle) {
         else if (t.umod)
             req.umod = true;

+        if (!t.srch) {
+            if (uc.ow == 1)
+                req.replace = 'mt';
+            if (uc.ow == 2)
+                req.replace = true;
+        }
+
         xhr.open('POST', t.purl, true);
         xhr.responseType = 'text';
         xhr.timeout = 42000 + (t.srch || t.t_uploaded ? 0 :
@@ -2763,7 +2789,7 @@ function up2k_init(subtle) {
             toast.inf(10, L.u_cbusy);
         }
         else {
-            xhrchk(xhr, L.u_cuerr2.format(snpart, Math.ceil(t.size / chunksize), t.name), "404, target folder not found (???)", "warn", t);
+            xhrchk(xhr, L.u_cuerr2.format(snpart, Math.ceil(t.size / chunksize), esc(t.name)), "404, target folder not found (???)", "warn", t);
             chill(t);
         }
         orz2(xhr);
@@ -2807,7 +2833,7 @@ function up2k_init(subtle) {
         xhr.bsent = 0;
         if (!toast.visible)
-            toast.warn(9.98, L.u_cuerr.format(snpart, Math.ceil(t.size / chunksize), t.name), t);
+            toast.warn(9.98, L.u_cuerr.format(snpart, Math.ceil(t.size / chunksize), esc(t.name)), t);

         t.nojoin = t.nojoin || t.postlist.length; // maybe rproxy postsize limit
         console.log('chunkpit onerror,', t.name, t);


@@ -64,7 +64,7 @@ onmessage = (d) => {
     };
     reader.onerror = function () {
         busy = false;
-        var err = reader.error + '';
+        var err = esc('' + reader.error);

         if (err.indexOf('NotReadableError') !== -1 || // win10-chrome defender
             err.indexOf('NotFoundError') !== -1 // macos-firefox permissions
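All of the GHSA-m2jw-cj8v-937r fixes funnel untrusted strings (filenames, error messages) through `esc()` before they reach `innerHTML`. A minimal escaper with the same purpose, sketched in Python rather than the project's JS, and not copyparty's exact implementation:

```python
def esc(s):
    # replace the five html-significant characters so the string
    # is inert when rendered as markup; "&" must go first
    return (str(s).replace("&", "&amp;")
                  .replace("<", "&lt;")
                  .replace(">", "&gt;")
                  .replace('"', "&quot;")
                  .replace("'", "&#39;"))
```

This is why a maliciously-named empty file could no longer run script: the name lands in the messagebox as text, not markup.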


@@ -1,3 +1,64 @@
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2025-0219-2309 `v1.16.14` overwrite by upload
## 🧪 new features
* #139 overwrite existing files by uploading over them e9f78ea7
* default-disabled; a new togglebutton in the upload-UI configures it
* can optionally compare last-modified-time and only overwrite older files
* [GDPR compliance](https://github.com/9001/copyparty#GDPR-compliance) (maybe/probably) 4be0d426
## 🩹 bugfixes
* some cosmetic volflag stuff, all harmless b190e676
* disabling a volflag `foo` with `-foo` shows a warning that `-foo` was not a recognized volflag, but it still does the right thing
* some volflags give the *"unrecognized volflag, will ignore"* warning, but not to worry, they still work just fine:
* `xz` to allow serverside xz-compression of uploaded files
* the option to customize the loader-spinner would glitch out during the initial page load 7d7d5d6c
## 🔧 other changes
* [randpic.py](https://github.com/9001/copyparty/blob/hovudstraum/bin/handlers/randpic.py), new 404-handler example, returns a random pic from a folder 60d5f271
* readme: [howto permanent cloudflare tunnel](https://github.com/9001/copyparty#permanent-cloudflare-tunnel) for easy hosting from home 2beb2acc
* [synology-dsm](https://github.com/9001/copyparty/blob/hovudstraum/docs/synology-dsm.md): mention how to update the docker image 56ce5919
* spinner improvements 6858cb06
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2025-0213-2057 `v1.16.13` configure with confidence
## 🧪 new features
* make the config-parser more helpful regarding volflags a255db70
* if an unrecognized volflag is specified, print a warning instead of silently ignoring it
* understand volflag-names with Uppercase and/or kebab-case (dashes), and not just snake_case (underscores)
* improve `--help-flags` to mention and explain all available flags
* #136 WebDAV: support COPY 62ee7f69
* also support overwrite of existing target files (default-enabled according to the spec)
* the user must have the delete-permission to actually replace files
* option to specify custom icons for certain file extensions 7e4702cf
* see `--ext-th` mentioned briefly in the [thumbnails section](https://github.com/9001/copyparty/#thumbnails)
* option to replace the loading-spinner animation 685f0869
* including how to [make it exceptionally normal-looking](https://github.com/9001/copyparty/tree/hovudstraum/docs/rice#boring-loader-spinner)
## 🩹 bugfixes
* #136 WebDAV fixes 62ee7f69
* COPY/MOVE/MKCOL: challenge clients to provide the password as necessary
* most clients only need this in PROPFIND, but KDE-Dolphin is more picky
* MOVE: support `webdav://` Destination prefix as used by Dolphin, probably others
* #136 WebDAV: improve support for KDE-Dolphin as client 9d769027
* it masquerades as a graphical browser yet still expects 401, so special-case it with a useragent scan
## 🔧 other changes
* Docker-only: quick hacky fix for the [musl CVE](https://www.openwall.com/lists/musl/2025/02/13/1) until the official fix is out 4d6626b0
* the docker images will be rebuilt when `musl-1.2.5-r9.apk` is released, in 6~24h or so
* until then, there is no support for reading Korean XML files when running in Docker
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2025-0209-2331 `v1.16.12` RTT


@@ -115,6 +115,16 @@ note that if you only want to share some folders inside your data volume, and no
## updating
to update to a new copyparty version: `Container Manager` » `Images` » `Update available` » `Update`
* DSM checks for updates every 12h; you can force a check with `sudo /var/packages/ContainerManager/target/tool/image_upgradable_checker`
* there is no auto-update feature, and beware that watchtower does not support DSM
## regarding ram usage
the ram usage indicator in both `Docker` and `Container Manager` is misleading because it also counts the kernel disk cache which makes the number insanely high -- the synology resource monitor shows the correct values, usually less than 100 MiB


@@ -131,6 +131,7 @@ symbol legend,
| runs on Linux | █ | | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ |
| runs on Macos | █ | | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ | |
| runs on FreeBSD | █ | | | • | █ | █ | █ | • | █ | █ | | █ | |
| runs on Risc-V | █ | | | █ | █ | █ | | • | | █ | | | |
| portable binary | █ | █ | █ | | | █ | █ | | | █ | | █ | █ |
| zero setup, just go | █ | █ | █ | | | | █ | | | █ | | | █ |
| android app | | | | █ | █ | | | | | | | | |


@@ -1,13 +1,6 @@
#!/bin/ash
set -ex
# patch musl cve https://www.openwall.com/lists/musl/2025/02/13/1
apk add -U grep
grep -aobRE 'euckr[^\w]ksc5601[^\w]ksx1001[^\w]cp949[^\w]' /lib/ | awk -F: '$2>999{printf "%d %s\n",$2,$1}' | while read ofs fn
do printf -- '-----\0-------\0-------\0-----\0' | dd bs=1 iflag=fullblock conv=notrunc seek=$ofs of=$fn; done 2>&1 |
tee /dev/stderr | grep -E copied, | wc -l | grep '^2$'
apk del grep
# cleanup for flavors with python build steps (dj/iv)
rm -rf /var/cache/apk/* /root/.cache


@@ -144,7 +144,7 @@ class Cfg(Namespace):
ex = "au_vol dl_list mtab_age reg_cap s_thead s_tbody th_convt ups_who zip_who"
ka.update(**{k: 9 for k in ex.split()})
ex = "db_act forget_ip k304 loris no304 re_maxage rproxy rsp_jtr rsp_slp s_wr_slp snap_wri theme themes turbo u2ow"
ka.update(**{k: 0 for k in ex.split()})
ex = "ah_alg bname chpw_db doctitle df exit favico idp_h_usr ipa html_head lg_sba lg_sbf log_fk md_sba md_sbf name og_desc og_site og_th og_title og_title_a og_title_v og_title_i shr tcolor textfiles unlist vname xff_src R RS SR" ex = "ah_alg bname chpw_db doctitle df exit favico idp_h_usr ipa html_head lg_sba lg_sbf log_fk md_sba md_sbf name og_desc og_site og_th og_title og_title_a og_title_v og_title_i shr tcolor textfiles unlist vname xff_src R RS SR"