18:17:26 &ed | what's wrong with it
18:17:38 +Mai | that you don't know it's the volume bar before you try it
18:17:46 &ed | oh
18:17:48 &ed | yeah i guess
18:17:54 +Mai | especially when it's at 100
18:18:00 &ed | how do i fix it tho
18:19:50 +Mai | you could add an icon that's also a mute button (to not make it a useless icon)
18:22:38 &ed | i'll make the volume text always visible and include a speaker icon before it
18:23:53 +Mai | that is better at least
when deleting a folder, any dotfiles/folders within would only
be deleted if the user had the dot-permission to see dotfiles;
this gave the confusing behavior of seemingly empty folders
remaining behind after being deleted
fix this to only require the delete-permission, and always
delete the entire folder, including any dotfiles within
similar behavior would also apply to moves, renames, and copies;
fix moves and renames to only require the move-permission in
the source volume; dotfiles will now always be included,
regardless of whether the user has the dot-permission
in the source and/or destination volumes
copying folders now also behaves more intuitively: if the user has
the dot-permission in the target volume, then dotfiles will only be
included from source folders where the user also has the dot-perm,
to prevent the user from seeing intentionally hidden files/folders
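roughly, the resulting rules look like the sketch below; the names are
made up for illustration (not copyparty internals), and the behavior for
copies into a volume where the user lacks the dot-perm is an assumption:

```py
def include_dotfiles(action: str, dot_in_src: bool, dot_in_dst: bool) -> bool:
    if action in ("delete", "move", "rename"):
        return True  # dotfiles always come along; only delete/move perm matters
    if action == "copy" and dot_in_dst:
        return dot_in_src  # don't surface files that were hidden in the source
    return True  # assumed: copied dotfiles simply stay hidden in the destination
```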
as processing of an HTTP request begins (GET, HEAD, PUT, POST, ...),
the original query line is printed in its encoded form. This makes
debugging easier, since there is no ambiguity in how the client
phrased its request.
however, this results in very opaque logs for non-ascii languages;
basically a wall of percent-encoded characters. Avoid this issue
by printing an additional, percent-decoded log-message if the URL contains `%`,
immediately below the original url-encoded entry.
also fix tests on macos, and an unrelated bad logmsg in up2k
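a minimal sketch of the logging idea (not copyparty's actual logger):

```py
from urllib.parse import unquote

# keep the raw request line for unambiguous debugging, and add a decoded
# copy right below it whenever percent-escapes are present
def log_request_line(log, raw_url: str) -> None:
    log(raw_url)
    if "%" in raw_url:
        log(unquote(raw_url, errors="replace"))
```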
chrome (and chromium-based browsers) can OOM when:
* the OS is Windows, MacOS, or Android (but not Linux?)
* the website is hosted on a remote IP (not localhost)
* webworkers are used to read files
unfortunately this also applies to Android, which heavily relies
on webworkers to make read-speeds anywhere close to acceptable
as for android, there are diminishing returns with more than 4
webworkers (1=1x, 2=2.3x, 3=3.8x, 4=4.2x, 6=4.5x, 8=5.3x), and
limiting the number of workers to ensure at least one idle core
appears to sufficiently reduce the OOM probability
on desktop, webworkers are only necessary for hashwasm, so
limit the number of workers to 2 if crypto.subtle is available
and otherwise use the nproc-1 rule for hashwasm in workers
bug report: https://issues.chromium.org/issues/383568268
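the resulting heuristic, sketched as a standalone function (the real logic
lives in the browser-side up2k js; the android cap of 4 is an assumption
based on the scaling numbers above):

```py
def num_hash_workers(nproc: int, is_android: bool, have_subtle: bool) -> int:
    if not is_android:
        # desktop: webworkers are only needed for hash-wasm
        return 2 if have_subtle else max(1, nproc - 1)
    # android: keep at least one core idle to dodge the chromium OOM,
    # and don't bother going past 4 workers (diminishing returns)
    return max(1, min(4, nproc - 1))
```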
if a NIC is brought up with several IPs,
it would only mention one of the new IPs in the logs
or if a PCIe bus crashes and all NICs drop dead,
it would only mention one of the IPs that disappeared
as both scenarios are oddly common, be more verbose
previously, when IdP was enabled, the password-based login would be
entirely disabled. This was a semi-conscious decision, based on the
assumption that you would always want to use IdP after enabling it.
it makes more sense to keep password-based login working as usual,
conditionally disengaging it for requests which contain a valid
IdP username header. This makes it possible to define fallback
users, or API-only users, and all similar escape hatches.
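conceptually something like the sketch below; the header name and the two
helper callables are stand-ins, not copyparty's actual internals:

```py
def pick_auth(headers: dict, idp_ok, password_ok):
    idp_user = headers.get("X-Forwarded-User")  # assumed IdP username header
    if idp_user and idp_ok(idp_user):
        return idp_user  # a valid IdP header takes over for this request
    return password_ok(headers)  # otherwise password login works as usual
```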
if someone accidentally starts uploading a file in the wrong folder,
it was not obvious that you can forget that upload in the unpost tab
this '(explain)' button in the upload-error hopefully explains that,
and upload immediately commences when the initial attempt is aborted
on the backend, cleanup the dupesched when an upload is
aborted, and save some cpu by adding unique entries only
url-param / header `ck` specifies hashing algo;
md5 sha1 sha256 sha512 b2 blake2 b2s blake2s
value 'no' or blank disables checksumming,
for when copyparty is running on ancient gear
and you don't really care about file integrity
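example client usage (the URL and filenames are made up, and any
http client works just as well):

```py
import requests

with open("movie.mkv", "rb") as f:
    requests.put("https://example.com/inc/movie.mkv", data=f,
                 headers={"ck": "no"})  # skip checksumming entirely
# or pick a specific algo with the url-param instead:
#   PUT https://example.com/inc/movie.mkv?ck=sha512
```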
shadowing is the act of intentionally blocking off access to
files in a volume by placing another volume atop a file/folder.
say you have volume '/' with a file '/a/b/c/d.txt'; if you create a
volume at '/a/b', then all files/folders inside the original folder
become inaccessible, and are replaced with the contents of the new vol
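a tiny illustration of the concept using the paths above
(not copyparty's actual volume-resolution code):

```py
def is_shadowed(vpath: str, child_volumes: list) -> bool:
    return any(vpath == v or vpath.startswith(v + "/") for v in child_volumes)

assert is_shadowed("/a/b/c/d.txt", ["/a/b"])  # hidden once '/a/b' is a volume
```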
the initial code for forgetting shadowed files from the parent vol
database would only forget files which were discovered during a
filesystem scan; any uploaded files would be intentionally preserved
in the parent volume's database, probably to avoid losing uploader
info in the event of a brief mistaken config change, where a volume
is shadowed by accident.
this precaution was a mistake, currently causing far more
issues than it solves (#61 and #120), so away it goes.
huge thanks to @Gremious for doing all the legwork on this!
* return 403 instead of 404 in the following situations:
* viewing an RSS feed without necessary auth
* accessing a file with the wrong filekey
* accessing a file/folder without necessary auth
(would previously 404 for intentional ambiguity)
* only allow PROPFIND if user has either read or write;
previously a blank response was returned if the user had
get-access, but this could confuse webdav clients into
skipping authentication (for example AuthPass)
* return 401 basic-challenge instead of 403 if the client
appears to be non-graphical, because many webdav clients
do not provide the credentials until they're challenged.
There is a heavy bias towards assuming the client is a
browser, because browsers must NEVER EVER get a 401
(tricky state that is near-impossible to deal with)
* return 401 basic-challenge instead of 403 if a PUT
is attempted without any credentials included; this
should be safe, as graphical browsers never do that
this fixes the interoperability issues mentioned in
https://github.com/authpass/authpass/issues/379
where AuthPass would GET files without providing the
password because it expected a 401 instead of a 403;
AuthPass is behaving correctly, this is not a bug
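a rough sketch of the resulting status-code policy; detecting a "graphical
browser" is reduced to a user-agent sniff here, which is a simplification:

```py
def deny_status(user_agent: str, has_creds: bool, is_put: bool) -> int:
    is_browser = "Mozilla/" in user_agent
    if is_put and not has_creds:
        return 401  # safe; graphical browsers never PUT without credentials
    if not is_browser:
        return 401  # webdav clients wait for a challenge before sending creds
    return 403  # browsers must never get a 401 (near-impossible state to clear)
```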
a better alternative to using `--no-idx` for this purpose since
this also excludes recent uploads, not just during fs-indexing,
and it doesn't prevent deduplication
also speeds up searches by a tiny amount due to building the
sanchecks into the exclude-filter while parsing the config,
instead of during each search query
* if free ram on startup is less than 2 GiB,
use smaller chunks for parallel file hashing
* if --th-max-ram is lower than 0.25 (256 MiB),
print a warning that thumbnails will not work
* make thumbnail cleaner immediately do a sweep on startup,
forgetting any failed conversions so they can be retried
in case the memory limit was increased since last run
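numbers-only sketch of the startup checks above; psutil is just for the
example, copyparty reads memory info differently:

```py
import psutil

def startup_memory_checks(th_max_ram_gib: float) -> bool:
    free_gib = psutil.virtual_memory().available / 2**30
    if th_max_ram_gib < 0.25:
        print("WARNING: thumbnails will not work (--th-max-ram < 256 MiB)")
    return free_gib < 2.0  # True => hash files with smaller parallel chunks
```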
whenever a new idp user is registered, up2k will continuously
reload in the background until all users have been processed
just like before, this blocks up2k uploads from each user
until said user makes it into a reload, but as of now,
reloads will batch and execute without interrupting read-access
needs further testing before next release,
probably some rough edges to sand down
show a spinning halfcircle around the +/- instead of
moving the focus to the selected folder in the sidebar,
since that could mess with keyboard scrolling
an extremely brutish workaround for issues such as #110 where
browsers receive an HTTP 304 and misinterpret it as HTTP 200
option `--no304=1` adds the button `no304` to the controlpanel
which can be enabled to force-disable caching in that browser
the button is default-disabled; by specifying `--no304=2`
instead of `--no304=1` the button becomes default-enabled
can also always be enabled by accessing `/?setck=no304=y`
these response headers are usually not included in 304 replies,
and their presence is suspected to confuse some clients (#110)
also strip `out_headerlist` (primarily cookie assignments)
add support for the `If-Range` header which is generally used to
prevent resuming a partial download after the source file on the
server has been modified, by returning HTTP 200 instead of a 206
also simplifies `If-Modified-Since` and `If-Range` handling;
previously this was a spec-compliant lexical comparison,
now it's a basic string-comparison instead. The server will now
also reply 200 when the server mtime is older than the client's.
This is technically not according to spec, but should be safer,
as it allows backdating timestamps without purging client cache
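the comparison boils down to something like this sketch:

```py
# a plain string match between the client's If-Range value and the file's
# Last-Modified string; any mismatch (even an OLDER server mtime)
# downgrades the 206 resume to a full 200
def resume_allowed(if_range: str, last_modified: str) -> bool:
    return if_range == last_modified
```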
previously, it only accepted the 3-tuple `min,default,max`
if given a single integer (or any other unexpected value),
the up2k js would enter an infinite loop, eat all the ram
and crash the browser (nice)
fix this by accepting a single integer (for example 96)
and translating it to `1,96,96`
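minimal sketch of the more lenient parsing:

```py
def parse_u2sz(v: str):
    parts = v.split(",")
    if len(parts) == 3:
        return [int(x) for x in parts]  # the usual min,default,max
    n = int(v)        # a single integer, for example "96" ...
    return [1, n, n]  # ... becomes 1,96,96
```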
PUT uploads, as used by webdav, would stat the absolute
path of the file to be created, which would throw ENOENT
strip components until the path is an existing directory
and also try to enforce disk space / volume size limits
even when the incoming file is of unknown size
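sketch of the path-stripping part of the fix:

```py
import os

# walk up from the requested filepath until an existing directory is found,
# so the disk-space / volume-size checks have a real path to stat
def nearest_existing_dir(abspath: str) -> str:
    path = os.path.dirname(abspath)
    while path and not os.path.isdir(path):
        path = os.path.dirname(path)
    return path or os.sep
```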
the first time a file is to be hashed after a website refresh,
a set of webworkers are launched for efficient parallelization
in the unlikely event of a network outage exactly at this point,
the workers will fail to start, and the hashing would never begin
add a ping/pong sequence to smoketest the workers, and
fallback to hashing on the main-thread when necessary
previously, the biggest file that could be uploaded through
cloudflare was 383 GiB, due to max num chunks being 4096
`--u2sz`, which takes three ints (min-size, target, max-size)
can now be used to enforce a max chunksize; chunks larger
than max-size get split into smaller subchunks / chunklets
subchunks cannot be stitched/joined, and subchunks of the
same chunk must be uploaded sequentially, one at a time
if a subchunk fails due to bitflips or connection-loss,
then the entire chunk must (and will) be reuploaded
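for reference, the old ceiling follows from 4096 chunks at roughly 96 MiB
per chunk; a minimal sketch of the splitting rule:

```py
# any chunk exceeding max-size is cut into chunklets, which must then be
# uploaded sequentially, one at a time (no stitching)
def split_chunk(chunk_len: int, max_sz: int):
    sizes = []
    while chunk_len > max_sz:
        sizes.append(max_sz)
        chunk_len -= max_sz
    sizes.append(chunk_len)
    return sizes
```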
previously and currently, as an upload completes, its "done" flag
is not set until all the data has been flushed to disk
however, the list of missing chunks becomes empty before the flush,
and that list was incorrectly used to determine completion state
in some dedup-related logic
as a result, duplicate uploads could initially fail, and would
succeed after the client automatically retried a handful of times
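an illustrative model of the race (field names are made up):

```py
from dataclasses import dataclass, field

@dataclass
class Upload:
    done: bool = False                         # set only after the final flush
    missing: set = field(default_factory=set)  # empties before the flush

def safe_dedup_source(up: Upload) -> bool:
    return up.done  # the old check, "not up.missing", fired too early
```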