Compare commits


1 Commit
v0.13.0 ... vcr

Author  SHA1        Message      Date
ed      ff8313d0fb  add mistake  2021-07-01 21:49:44 +02:00
65 changed files with 2432 additions and 6277 deletions

.vscode/tasks.json vendored

@@ -9,10 +9,7 @@
{
"label": "no_dbg",
"type": "shell",
"command": "${config:python.pythonPath}",
"args": [
".vscode/launch.py"
]
"command": "${config:python.pythonPath} .vscode/launch.py"
}
]
}

README.md

@@ -23,33 +23,28 @@ turn your phone or raspi into a portable file server with resumable uploads/down
* [on debian](#on-debian)
* [notes](#notes)
* [status](#status)
* [testimonials](#testimonials)
* [bugs](#bugs)
* [general bugs](#general-bugs)
* [not my bugs](#not-my-bugs)
* [the browser](#the-browser)
* [tabs](#tabs)
* [hotkeys](#hotkeys)
* [navpane](#navpane)
* [tree-mode](#tree-mode)
* [thumbnails](#thumbnails)
* [zip downloads](#zip-downloads)
* [uploading](#uploading)
* [file-search](#file-search)
* [file manager](#file-manager)
* [batch rename](#batch-rename)
* [markdown viewer](#markdown-viewer)
* [other tricks](#other-tricks)
* [searching](#searching)
* [server config](#server-config)
* [upload rules](#upload-rules)
    * [database location](#database-location)
* [searching](#searching)
* [search configuration](#search-configuration)
* [database location](#database-location)
* [metadata from audio files](#metadata-from-audio-files)
* [file parser plugins](#file-parser-plugins)
* [complete examples](#complete-examples)
* [browser support](#browser-support)
* [client examples](#client-examples)
* [up2k](#up2k)
* [performance](#performance)
* [dependencies](#dependencies)
* [optional dependencies](#optional-dependencies)
* [install recommended deps](#install-recommended-deps)
@@ -62,21 +57,21 @@ turn your phone or raspi into a portable file server with resumable uploads/down
* [just the sfx](#just-the-sfx)
* [complete release](#complete-release)
* [todo](#todo)
* [discarded ideas](#discarded-ideas)
## quickstart
download [copyparty-sfx.py](https://github.com/9001/copyparty/releases/latest/download/copyparty-sfx.py) and you're all set!
running the sfx without arguments (for example doubleclicking it on Windows) will give everyone full access to the current folder; see `-h` for help if you want [accounts and volumes](#accounts-and-volumes) etc
running the sfx without arguments (for example doubleclicking it on Windows) will give everyone full access to the current folder; see `-h` for help if you want accounts and volumes etc
some recommended options:
* `-e2dsa` enables general file indexing, see [search configuration](#search-configuration)
* `-e2ts` enables audio metadata indexing (needs either FFprobe or Mutagen), see [optional dependencies](#optional-dependencies)
* `-v /mnt/music:/music:r:rw,foo -a foo:bar` shares `/mnt/music` as `/music`, `r`eadable by anyone, and read-write for user `foo`, password `bar`
* replace `:r:rw,foo` with `:r,foo` to only make the folder readable by `foo` and nobody else
* see [accounts and volumes](#accounts-and-volumes) for the syntax and other access levels (`r`ead, `w`rite, `m`ove, `d`elete)
* `-e2ts` enables audio metadata indexing (needs either FFprobe or mutagen), see [optional dependencies](#optional-dependencies)
* `-v /mnt/music:/music:r:afoo -a foo:bar` shares `/mnt/music` as `/music`, `r`eadable by anyone, with user `foo` as `a`dmin (read/write), password `bar`
* the syntax is `-v src:dst:perm:perm:...` so local-path, url-path, and one or more permissions to set
* replace `:r:afoo` with `:rfoo` to only make the folder readable by `foo` and nobody else
* in addition to `r`ead and `a`dmin, `w`rite makes a folder write-only, so cannot list/access files in it
* `--ls '**,*,ln,p,r'` to crash on startup if any of the volumes contain a symlink which points outside the volume, as that could give users unintended access
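putting those options together, a full invocation could look something like this (just a sketch; `foo:bar` and the paths are placeholders, using the `r` / `rw,usr` permission spelling from the first variant above):

```
python3 copyparty-sfx.py -e2dsa -e2ts \
  -a foo:bar \
  -v /mnt/music:/music:r:rw,foo \
  --ls '**,*,ln,p,r'
```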
you may also want these, especially on servers:
@@ -117,57 +112,50 @@ summary: all planned features work! now please enjoy the bloatening
* backend stuff
* ☑ sanic multipart parser
* ☑ multiprocessing (actual multithreading)
* ☑ load balancer (multiprocessing)
* ☑ volumes (mountpoints)
* ☑ [accounts](#accounts-and-volumes)
* ☑ accounts
* upload
* ☑ basic: plain multipart, ie6 support
* ☑ [up2k](#uploading): js, resumable, multithreaded
* ☑ up2k: js, resumable, multithreaded
* ☑ stash: simple PUT filedropper
* ☑ unpost: undo/delete accidental uploads
* ☑ symlink/discard existing files (content-matching)
* download
* ☑ single files in browser
* ☑ [folders as zip / tar files](#zip-downloads)
* ☑ folders as zip / tar files
* ☑ FUSE client (read-only)
* browser
* ☑ navpane (directory tree sidebar)
* ☑ file manager (cut/paste, delete, [batch-rename](#batch-rename))
* ☑ tree-view
* ☑ audio player (with OS media controls)
* ☑ image gallery with webm player
* ☑ [thumbnails](#thumbnails)
* ☑ ...of images using Pillow
* ☑ ...of videos using FFmpeg
* ☑ thumbnails
* ☑ images using Pillow
* ☑ videos using FFmpeg
* ☑ cache eviction (max-age; maybe max-size eventually)
* ☑ image gallery
* ☑ SPA (browse while uploading)
* if you use the navpane to navigate, not folders in the file list
* if you use the file-tree on the left only, not folders in the file list
* server indexing
* ☑ [locate files by contents](#file-search)
* ☑ locate files by contents
* ☑ search by name/path/date/size
* ☑ [search by ID3-tags etc.](#searching)
* ☑ search by ID3-tags etc.
* markdown
* ☑ [viewer](#markdown-viewer)
* ☑ viewer
* ☑ editor (sure why not)
## testimonials
small collection of user feedback
`good enough`, `surprisingly correct`, `certified good software`, `just works`, `why`
# bugs
* Windows: python 3.7 and older cannot read tags with FFprobe, so use Mutagen or upgrade
* Windows: python 3.7 and older cannot read tags with ffprobe, so use mutagen or upgrade
* Windows: python 2.7 cannot index non-ascii filenames with `-e2d`
* Windows: python 2.7 cannot handle filenames with mojibake
* `--th-ff-jpg` may fix video thumbnails on some FFmpeg versions
* MacOS: `--th-ff-jpg` may fix thumbnails using macports-FFmpeg
## general bugs
* all volumes must exist / be available on startup; up2k (mtp especially) gets funky otherwise
* cannot mount something at `/d1/d2/d3` unless `d2` exists inside `d1`
* dupe files will not have metadata (audio tags etc) displayed in the file listing
* because they don't get `up` entries in the db (probably best fix) and `tx_browser` does not `lstat`
* probably more, pls let me know
## not my bugs
@@ -175,36 +163,9 @@ small collection of user feedback
* Windows: folders cannot be accessed if the name ends with `.`
* python or windows bug
* Windows: msys2-python 3.8.6 occasionally throws `RuntimeError: release unlocked lock` when leaving a scoped mutex in up2k
* Windows: msys2-python 3.8.6 occasionally throws "RuntimeError: release unlocked lock" when leaving a scoped mutex in up2k
* this is an msys2 bug, the regular windows edition of python is fine
* VirtualBox: sqlite throws `Disk I/O Error` when running in a VM and the up2k database is in a vboxsf
* use `--hist` or the `hist` volflag (`-v [...]:c,hist=/tmp/foo`) to place the db inside the vm instead
# accounts and volumes
* `-a usr:pwd` adds account `usr` with password `pwd`
* `-v .::r` adds current-folder `.` as the webroot, `r`eadable by anyone
* the syntax is `-v src:dst:perm:perm:...` so local-path, url-path, and one or more permissions to set
* when granting permissions to an account, the names are comma-separated: `-v .::r,usr1,usr2:rw,usr3,usr4`
permissions:
* `r` (read): browse folder contents, download files, download as zip/tar
* `w` (write): upload files, move files *into* folder
* `m` (move): move files/folders *from* folder
* `d` (delete): delete files/folders
example:
* add accounts named u1, u2, u3 with passwords p1, p2, p3: `-a u1:p1 -a u2:p2 -a u3:p3`
* make folder `/srv` the root of the filesystem, read-only by anyone: `-v /srv::r`
* make folder `/mnt/music` available at `/music`, read-only for u1 and u2, read-write for u3: `-v /mnt/music:music:r,u1,u2:rw,u3`
* unauthorized users accessing the webroot can see that the `music` folder exists, but cannot open it
* make folder `/mnt/incoming` available at `/inc`, write-only for u1, read-move for u2: `-v /mnt/incoming:inc:w,u1:rm,u2`
* unauthorized users accessing the webroot can see that the `inc` folder exists, but cannot open it
* `u1` can open the `inc` folder, but cannot see the contents, only upload new files to it
* `u2` can browse it and move files *from* `/inc` into any folder where `u2` has write-access
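the three bullets above as a single command line (a sketch; adjust paths and passwords to taste):

```
python3 copyparty-sfx.py \
  -a u1:p1 -a u2:p2 -a u3:p3 \
  -v /srv::r \
  -v /mnt/music:music:r,u1,u2:rw,u3 \
  -v /mnt/incoming:inc:w,u1:rm,u2
```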
# the browser
@@ -214,65 +175,36 @@ example:
## tabs
* `[🔎]` search by size, date, path/name, mp3-tags ... see [searching](#searching)
* `[🧯]` unpost: undo/delete accidental uploads
* `[🚀]` and `[🎈]` are the uploaders, see [uploading](#uploading)
* `[📂]` mkdir: create directories
* `[📝]` new-md: create a new markdown document
* `[📟]` send-msg: either to server-log or into textfiles if `--urlform save`
* `[🎺]` audio-player config options
* `[⚙️]` general client config options
* `[📂]` mkdir, create directories
* `[📝]` new-md, create a new markdown document
* `[📟]` send-msg, either to server-log or into textfiles if `--urlform save`
* `[⚙️]` client configuration options
## hotkeys
the browser has the following hotkeys (assumes qwerty, ignores actual layout)
* `B` toggle breadcrumbs / navpane
the browser has the following hotkeys
* `B` toggle breadcrumbs / directory tree
* `I/K` prev/next folder
* `M` parent folder (or unexpand current)
* `M` parent folder
* `G` toggle list / grid view
* `T` toggle thumbnails / icons
* `ctrl-X` cut selected files/folders
* `ctrl-V` paste
* `F2` rename selected file/folder
* when a file/folder is selected (in not-grid-view):
* `Up/Down` move cursor
* shift+`Up/Down` select and move cursor
* ctrl+`Up/Down` move cursor and scroll viewport
* `Space` toggle file selection
* `Ctrl-A` toggle select all
* when playing audio:
* `J/L` prev/next song
* `0..9` jump to 10%..90%
* `U/O` skip 10sec back/forward
* `0..9` jump to 0%..90%
* `J/L` prev/next song
* `P` play/pause (also starts playing the folder)
* when viewing images / playing videos:
* `J/L, Left/Right` prev/next file
* `Home/End` first/last file
* `Esc` close viewer
* videos:
* `U/O` skip 10sec back/forward
* `P/K/Space` play/pause
* `F` fullscreen
* `C` continue playing next video
* `R` loop
* `M` mute
* when the navpane is open:
* when tree-sidebar is open:
* `A/D` adjust tree width
* in the grid view:
* `S` toggle multiselect
* shift+`A/D` zoom
* in the markdown editor:
* `^s` save
* `^h` header
* `^k` autoformat table
* `^u` jump to next unicode character
* `^e` toggle editor / preview
* `^up, ^down` jump paragraphs
## navpane
## tree-mode
by default there's a breadcrumbs path; you can replace this with a navpane (tree-browser sidebar thing) by clicking the `🌲` or pressing the `B` hotkey
by default there's a breadcrumbs path; you can replace this with a tree-browser sidebar thing by clicking the `🌲` or pressing the `B` hotkey
click `[-]` and `[+]` (or hotkeys `A`/`D`) to adjust the size, and the `[a]` toggles if the tree should widen dynamically as you go deeper or stay fixed-size
@@ -281,9 +213,9 @@ click `[-]` and `[+]` (or hotkeys `A`/`D`) to adjust the size, and the `[a]` tog
![copyparty-thumbs-fs8](https://user-images.githubusercontent.com/241032/120070302-10836b00-c08a-11eb-8eb4-82004a34c342.png)
it does static images with Pillow and uses FFmpeg for video files, so you may want to `--no-thumb` or maybe just `--no-vthumb` depending on how dangerous your users are
it does static images with Pillow and uses FFmpeg for video files, so you may want to `--no-thumb` or maybe just `--no-vthumb` depending on how destructive your users are
images with the following names (see `--th-covers`) become the thumbnail of the folder they're in: `folder.png`, `folder.jpg`, `cover.png`, `cover.jpg`
images named `folder.jpg` and `folder.png` become the thumbnail of the folder they're in
in the grid/thumbnail view, if the audio player panel is open, songs will start playing when clicked
@@ -300,10 +232,9 @@ the `zip` link next to folders can produce various types of zip/tar files using
| `zip_crc` | `?zip=crc` | cp437 with crc32 computed early for truly ancient software |
* hidden files (dotfiles) are excluded unless `-ed`
* `up2k.db` and `dir.txt` are always excluded
* the up2k.db is always excluded
* `zip_crc` will take longer to download since the server has to read each file twice
* this is only to support MS-DOS PKZIP v2.04g (october 1993) and older
* how are you accessing copyparty actually
* please let me know if you find a program old enough to actually need this
you can also zip a selection of files or folders by clicking them in the browser, that brings up a selection editor and zip button in the bottom right
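the same thing works from the command line too, for example grabbing a folder as the cp437/crc32 flavor from the table above (a sketch; host and folder are placeholders):

```
curl -o music.zip 'http://127.0.0.1:3923/music/?zip=crc'
```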
@@ -312,19 +243,15 @@ you can also zip a selection of files or folders by clicking them in the browser
## uploading
two upload methods are available in the html client:
* `[🎈] bup`, the basic uploader, supports almost every browser since netscape 4.0
* `[🚀] up2k`, the fancy one
you can undo/delete uploads using `[🧯] unpost` if the server is running with `-e2d`
* `🎈 bup`, the basic uploader, supports almost every browser since netscape 4.0
* `🚀 up2k`, the fancy one
up2k has several advantages:
* you can drop folders into the browser (files are added recursively)
* files are processed in chunks, and each chunk is checksummed
* uploads autoresume if they are interrupted by network issues
* uploads resume if you reboot your browser or pc, just upload the same files again
* uploads resume if they are interrupted (for example by a reboot)
* server detects any corruption; the client reuploads affected chunks
* the client doesn't upload anything that already exists on the server
* much higher speeds than ftp/scp/tarpipe on some internet connections (mainly american ones) thanks to parallel connections
* the last-modified timestamp of the file is preserved
see [up2k](#up2k) for details on how it works
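the chunking idea itself is simple; roughly like this (not the actual up2k wire format, just an illustration of per-chunk checksums using stock tools and an arbitrary 1 MiB chunk size):

```
split -b 1M movie.mkv chunk.
sha512sum chunk.*   # one checksum per chunk; the server only needs
rm chunk.*          # the chunks whose checksums it doesn't have yet
```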
@@ -357,65 +284,11 @@ in the `[🚀 up2k]` tab, after toggling the `[🔎]` switch green, any files/fo
files go into `[ok]` if they exist (and you get a link to where it is), otherwise they land in `[ng]`
* the main reason filesearch is combined with the uploader is cause the code was too spaghetti to separate it out somewhere else, this is no longer the case but now i've warmed up to the idea too much
adding the same file multiple times is blocked, so if you first search for a file and then decide to upload it, you have to click the `[cleanup]` button to discard `[done]` files (or just refresh the page)
adding the same file multiple times is blocked, so if you first search for a file and then decide to upload it, you have to click the `[cleanup]` button to discard `[done]` files
note that since up2k has to read the file twice, `[🎈 bup]` can be up to 2x faster in extreme cases (if your internet connection is faster than the read-speed of your HDD)
up2k has saved a few uploads from becoming corrupted in-transfer already; caught an android phone on wifi redhanded in wireshark with a bitflip, however bup with https would *probably* have noticed as well (thanks to tls also functioning as an integrity check)
## file manager
if you have the required permissions, you can cut/paste, rename, and delete files/folders
you can move files across browser tabs (cut in one tab, paste in another)
## batch rename
![batch-rename-fs8](https://user-images.githubusercontent.com/241032/128434204-eb136680-3c07-4ec7-92e0-ae86af20c241.png)
select some files and press F2 to bring up the rename UI
quick explanation of the buttons,
* `[✅ apply rename]` confirms and begins renaming
* `[❌ cancel]` aborts and closes the rename window
* `[↺ reset]` reverts any filename changes back to the original name
* `[decode]` does a URL-decode on the filename, fixing stuff like `&` and `%20`
* `[advanced]` toggles advanced mode
advanced mode: rename files using rules that build the new name from the original filename (regex), from the tags collected from the file (artist/title/...), or a mix of both
in advanced mode,
* `[case]` toggles case-sensitive regex
* `regex` is the regex pattern to apply to the original filename; any files which don't match will be skipped
* `format` is the new filename, taking values from regex capturing groups and/or from file tags
* very loosely based on foobar2000 syntax
* `presets` lets you save rename rules for later
available functions:
* `$lpad(text, length, pad_char)`
* `$rpad(text, length, pad_char)`
so,
say you have a file named [`meganeko - Eclipse - 07 Sirius A.mp3`](https://www.youtube.com/watch?v=-dtb0vDPruI) (absolutely fantastic album btw) and the tags are: `Album:Eclipse`, `Artist:meganeko`, `Title:Sirius A`, `tn:7`
you could use just regex to rename it:
* `regex` = `(.*) - (.*) - ([0-9]{2}) (.*)`
* `format` = `(3). (1) - (4)`
* `output` = `07. meganeko - Sirius A.mp3`
or you could use just tags:
* `format` = `$lpad((tn),2,0). (artist) - (title).(ext)`
* `output` = `7. meganeko - Sirius A.mp3`
or a mix of both:
* `regex` = ` - ([0-9]{2}) `
* `format` = `(1). (artist) - (title).(ext)`
* `output` = `07. meganeko - Sirius A.mp3`
the metadata keys you can use in the format field are the ones in the file-browser table header (whatever is collected with `-mte` and `-mtp`)
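the first (regex-only) example can be reproduced with plain `sed` to see what the capture groups do (the rename UI does this in the browser; sed is only used here as an illustration):

```
echo 'meganeko - Eclipse - 07 Sirius A.mp3' |
  sed -E 's/(.*) - (.*) - ([0-9]{2}) (.*)/\3. \1 - \4/'
# prints: 07. meganeko - Sirius A.mp3
```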
up2k has saved a few uploads from becoming corrupted in-transfer already; caught an android phone on wifi redhanded in wireshark with a bitflip, however bup with https would *probably* have noticed as well thanks to tls also functioning as an integrity check
## markdown viewer
@@ -432,7 +305,7 @@ the metadata keys you can use in the format field are the ones in the file-brows
* if you are using media hotkeys to switch songs and are getting tired of seeing the OSD popup which Windows doesn't let you disable, consider https://ocv.me/dev/?media-osd-bgone.ps1
## searching
# searching
![copyparty-search-fs8](https://user-images.githubusercontent.com/241032/115978060-6772bd80-a57d-11eb-81d3-174e869b72c3.png)
@@ -444,75 +317,36 @@ path/name queries are space-separated, AND'ed together, and words are negated wi
* path: `shibayan -bossa` finds all files where one of the folders contain `shibayan` but filters out any results where `bossa` exists somewhere in the path
* name: `demetori styx` gives you [good stuff](https://www.youtube.com/watch?v=zGh0g14ZJ8I&list=PL3A147BD151EE5218&index=9)
add the argument `-e2ts` to also scan/index tags from music files, which brings us over to:
add `-e2ts` to also scan/index tags from music files:
# server config
## search configuration
file indexing relies on two databases, the up2k filetree (`-e2d`) and the metadata tags (`-e2t`). Configuration can be done through arguments, volume flags, or a mix of both.
searching relies on two databases, the up2k filetree (`-e2d`) and the metadata tags (`-e2t`). Configuration can be done through arguments, volume flags, or a mix of both.
through arguments:
* `-e2d` enables file indexing on upload
* `-e2ds` scans writable folders for new files on startup
* `-e2ds` scans writable folders on startup
* `-e2dsa` scans all mounted volumes (including readonly ones)
* `-e2t` enables metadata indexing on upload
* `-e2ts` scans for tags in all files that don't have tags yet
* `-e2tsr` deletes all existing tags, does a full reindex
* `-e2tsr` deletes all existing tags, so a full reindex
the same arguments can be set as volume flags, in addition to `d2d` and `d2t` for disabling:
* `-v ~/music::r:c,e2dsa:c,e2tsr` does a full reindex of everything on startup
* `-v ~/music::r:c,d2d` disables **all** indexing, even if any `-e2*` are on
* `-v ~/music::r:c,d2t` disables all `-e2t*` (tags), does not affect `-e2d*`
* `-v ~/music::r:ce2dsa:ce2tsr` does a full reindex of everything on startup
* `-v ~/music::r:cd2d` disables **all** indexing, even if any `-e2*` are on
* `-v ~/music::r:cd2t` disables all `-e2t*` (tags), does not affect `-e2d*`
note:
* `e2tsr` is probably always overkill, since `e2ds`/`e2dsa` would pick up any file modifications and `e2ts` would then reindex those, unless there is a new copyparty version with new parsers and the release note says otherwise
* `e2tsr` is probably always overkill, since `e2ds`/`e2dsa` would pick up any file modifications and `e2ts` would then reindex those
* the rescan button in the admin panel has no effect unless the volume has `-e2ds` or higher
you can choose to only index filename/path/size/last-modified (and not the hash of the file contents) by setting `--no-hash` or the volume-flag `:c,dhash`, this has the following consequences:
* initial indexing is way faster, especially when the volume is on a network disk
you can choose to only index filename/path/size/last-modified (and not the hash of the file contents) by setting `--no-hash` or the volume-flag `cdhash`, this has the following consequences:
* initial indexing is way faster, especially when the volume is on a networked disk
* makes it impossible to [file-search](#file-search)
* if someone uploads the same file contents, the upload will not be detected as a dupe, so it will not get symlinked or rejected
if you set `--no-hash`, you can enable hashing for specific volumes using flag `:c,ehash`
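putting it together, a sketch with one volume doing a full reindex, one skipping tags, and one skipping content-hashing (using the `:c,flag` spelling from the examples above; paths are placeholders):

```
python3 copyparty-sfx.py -e2ds -e2ts \
  -v ~/music::r:c,e2dsa:c,e2tsr \
  -v ~/podcasts::r:c,d2t \
  -v /mnt/nas/media::r:c,dhash
```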
## upload rules
you can set upload rules using volume flags, some examples:
* `:c,sz=1k-3m` sets allowed filesize between 1 KiB and 3 MiB inclusive (suffixes: b, k, m, g)
* `:c,nosub` disallows uploading into subdirectories; goes well with `rotn` and `rotf`:
* `:c,rotn=1000,2` moves uploads into subfolders, up to 1000 files in each folder before making a new one, two levels deep (must be at least 1)
* `:c,rotf=%Y/%m/%d/%H` enforces files to be uploaded into a structure of subfolders according to that date format
* if someone uploads to `/foo/bar` the path would be rewritten to `/foo/bar/2021/08/06/23` for example
* but the actual value is not verified, just the structure, so the uploader can choose any values which conform to the format string
* just to avoid additional complexity in up2k which is enough of a mess already
you can also set transaction limits which apply per-IP and per-volume, but these assume `-j 1` (the default); with other values the limits will be off, for example `-j 4` would allow anywhere between 1x and 4x the limits you set depending on which processing node the client gets routed to
* `:c,maxn=250,3600` allows 250 files over 1 hour from each IP (tracked per-volume)
* `:c,maxb=1g,300` allows 1 GiB total over 5 minutes from each IP (tracked per-volume)
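as one command, a public dropbox volume combining a few of the rules above (a sketch; the path is a placeholder):

```
python3 copyparty-sfx.py \
  -v /mnt/inc:inc:w:c,sz=1k-3m:c,nosub:c,rotf=%Y/%m/%d/%H:c,maxn=250,3600
```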
## compress uploads
files can be autocompressed on upload, either on user-request (if config allows) or forced by server-config
* volume flag `gz` allows gz compression
* volume flag `xz` allows lzma compression
* volume flag `pk` **forces** compression on all files
* url parameter `pk` requests compression with server-default algorithm
* url parameter `gz` or `xz` requests compression with a specific algorithm
* url parameter `xz` requests xz compression
things to note,
* the `gz` and `xz` arguments take a single optional argument, the compression level (range 0 to 9)
* the `pk` volume flag takes the optional argument `ALGORITHM,LEVEL` which will then be forced for all uploads, for example `gz,9` or `xz,0`
* default compression is gzip level 9
* all upload methods except up2k are supported
* the files will be indexed after compression, so dupe-detection and file-search will not work as expected
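for instance, a client requesting xz on a plain PUT (stash) upload; a sketch which assumes the volume has the `xz` volflag and that the level can be passed as the parameter value:

```
curl -T access.log 'http://127.0.0.1:3923/inc/?xz=9'
```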
some examples,
if you set `--no-hash`, you can enable hashing for specific volumes using flag `cehash`
## database location
@@ -520,7 +354,7 @@ some examples,
copyparty creates a subfolder named `.hist` inside each volume where it stores the database, thumbnails, and some other stuff
this can instead be kept in a single place using the `--hist` argument, or the `hist=` volume flag, or a mix of both:
* `--hist ~/.cache/copyparty -v ~/music::r:c,hist=-` sets `~/.cache/copyparty` as the default place to put volume info, but `~/music` gets the regular `.hist` subfolder (`-` restores default behavior)
* `--hist ~/.cache/copyparty -v ~/music::r:chist=-` sets `~/.cache/copyparty` as the default place to put volume info, but `~/music` gets the regular `.hist` subfolder (`-` restores default behavior)
note:
* markdown edits are always stored in a local `.hist` subdirectory
@@ -531,33 +365,31 @@ note:
## metadata from audio files
`-mte` decides which tags to index and display in the browser (and also the display order), this can be changed per-volume:
* `-v ~/music::r:c,mte=title,artist` indexes and displays *title* followed by *artist*
* `-v ~/music::r:cmte=title,artist` indexes and displays *title* followed by *artist*
if you add/remove a tag from `mte` you will need to run with `-e2tsr` once to rebuild the database, otherwise only new files will be affected
but instead of using `-mte`, `-mth` is a better way to hide tags in the browser: these tags will not be displayed by default, but they still get indexed and become searchable, and users can choose to unhide them in the settings pane
`-mtm` can be used to add or redefine a metadata mapping, say you have media files with `foo` and `bar` tags and you want them to display as `qux` in the browser (preferring `foo` if both are present), then do `-mtm qux=foo,bar` and now you can `-mte artist,title,qux`
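that mapping example as a command line (a sketch; `foo`, `bar`, `qux` are the hypothetical tags from the paragraph above, and `-e2tsr` forces the one-time rebuild mentioned earlier):

```
python3 copyparty-sfx.py -e2dsa -e2tsr -mtm qux=foo,bar -mte artist,title,qux
```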
tags that start with a `.` such as `.bpm` and `.dur`(ation) indicate numeric value
see the beautiful mess of a dictionary in [mtag.py](https://github.com/9001/copyparty/blob/master/copyparty/mtag.py) for the default mappings (should cover mp3,opus,flac,m4a,wav,aif,)
`--no-mutagen` disables Mutagen and uses FFprobe instead, which...
* is about 20x slower than Mutagen
* catches a few tags that Mutagen doesn't
`--no-mutagen` disables mutagen and uses ffprobe instead, which...
* is about 20x slower than mutagen
* catches a few tags that mutagen doesn't
* melodic key, video resolution, framerate, pixfmt
* avoids pulling any GPL code into copyparty
* more importantly runs FFprobe on incoming files which is bad if your FFmpeg has a cve
* more importantly runs ffprobe on incoming files which is bad if your ffmpeg has a cve
## file parser plugins
copyparty can invoke external programs to collect additional metadata for files using `mtp` (either as argument or volume flag), there is a default timeout of 30sec
copyparty can invoke external programs to collect additional metadata for files using `mtp` (as argument or volume flag), there is a default timeout of 30sec
* `-mtp .bpm=~/bin/audio-bpm.py` will execute `~/bin/audio-bpm.py` with the audio file as argument 1 to provide the `.bpm` tag, if that does not exist in the audio metadata
* `-mtp key=f,t5,~/bin/audio-key.py` uses `~/bin/audio-key.py` to get the `key` tag, replacing any existing metadata tag (`f,`), aborting if it takes longer than 5sec (`t5,`)
* `-v ~/music::r:c,mtp=.bpm=~/bin/audio-bpm.py:c,mtp=key=f,t5,~/bin/audio-key.py` both as a per-volume config wow this is getting ugly
* `-v ~/music::r:cmtp=.bpm=~/bin/audio-bpm.py:cmtp=key=f,t5,~/bin/audio-key.py` both as a per-volume config wow this is getting ugly
*but wait, there's more!* `-mtp` can be used for non-audio files as well using the `a` flag: `ay` only do audio files, `an` only do non-audio files, or `ad` do all files (d as in dontcare)
@@ -585,15 +417,13 @@ copyparty can invoke external programs to collect additional metadata for files
| send message | yep | yep | yep | yep | yep | yep | yep | yep |
| set sort order | - | yep | yep | yep | yep | yep | yep | yep |
| zip selection | - | yep | yep | yep | yep | yep | yep | yep |
| navpane | - | - | `*1` | yep | yep | yep | yep | yep |
| directory tree | - | - | `*1` | yep | yep | yep | yep | yep |
| up2k | - | - | yep | yep | yep | yep | yep | yep |
| icons work | - | - | yep | yep | yep | yep | yep | yep |
| markdown editor | - | - | yep | yep | yep | yep | yep | yep |
| markdown viewer | - | - | yep | yep | yep | yep | yep | yep |
| play mp3/m4a | - | yep | yep | yep | yep | yep | yep | yep |
| play ogg/opus | - | - | - | - | yep | yep | `*2` | yep |
| thumbnail view | - | - | - | - | yep | yep | yep | yep |
| image viewer | - | - | - | - | yep | yep | yep | yep |
| **= feature =** | ie6 | ie9 | ie10 | ie11 | ff 52 | c 49 | iOS | Andr |
* internet explorer 6 to 8 behave the same
* firefox 52 and chrome 49 are the last winxp versions
@@ -611,7 +441,7 @@ quick summary of more eccentric web-browsers trying to view a directory index:
| **w3m** (0.5.3/macports) | can browse, login, upload at 100kB/s, mkdir/msg |
| **netsurf** (3.10/arch) | is basically ie6 with much better css (javascript has almost no effect) |
| **ie4** and **netscape** 4.0 | can browse (text is yellow on white), upload with `?b=u` |
| **SerenityOS** (7e98457) | hits a page fault, works with `?b=u`, file upload not-impl |
| **SerenityOS** (22d13d8) | hits a page fault, works with `?b=u`, file input not-impl, url params are multiplying |
# client examples
@@ -621,11 +451,11 @@ quick summary of more eccentric web-browsers trying to view a directory index:
* `var xhr = new XMLHttpRequest(); xhr.open('POST', 'https://127.0.0.1:3923/msgs?raw'); xhr.send('foo');`
* curl/wget: upload some files (post=file, chunk=stdin)
* `post(){ curl -b cppwd=wark -F act=bput -F f=@"$1" http://127.0.0.1:3923/;}`
* `post(){ curl -b cppwd=wark http://127.0.0.1:3923/ -F act=bput -F f=@"$1";}`
`post movie.mkv`
* `post(){ wget --header='Cookie: cppwd=wark' --post-file="$1" -O- http://127.0.0.1:3923/?raw;}`
* `post(){ wget --header='Cookie: cppwd=wark' http://127.0.0.1:3923/?raw --post-file="$1" -O-;}`
`post movie.mkv`
* `chunk(){ curl -b cppwd=wark -T- http://127.0.0.1:3923/;}`
* `chunk(){ curl -b cppwd=wark http://127.0.0.1:3923/ -T-;}`
`chunk <movie.mkv`
* FUSE: mount a copyparty server as a local filesystem
@@ -656,23 +486,6 @@ quick outline of the up2k protocol, see [uploading](#uploading) for the web-clie
* client does another handshake with the hashlist; server replies with OK or a list of chunks to reupload
# performance
defaults are good for most cases, don't mind the `cannot efficiently use multiple CPU cores` message, it's very unlikely to be a problem
below are some tweaks roughly ordered by usefulness:
* `-q` disables logging and can help a bunch, even when combined with `-lo` to redirect logs to file
* `--http-only` or `--https-only` (unless you want to support both protocols) will reduce the delay before a new connection is established
* `--hist` pointing to a fast location (ssd) will make directory listings and searches faster when `-e2d` or `-e2t` is set
* `--no-hash` when indexing a network-disk if you don't care about the actual filehashes and only want the names/tags searchable
* `-j` enables multiprocessing (actual multithreading) and can make copyparty perform better in cpu-intensive workloads, for example:
* huge amount of short-lived connections
* really heavy traffic (downloads/uploads)
...however it adds an overhead to internal communication so it might be a net loss, see if it works 4 u
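for example, a quiet https-only setup with logs and volume data on an ssd (a sketch; paths are placeholders, flags as described above):

```
python3 copyparty-sfx.py -q -lo /ssd/logs/cpp-%Y-%m%d-%H%M%S.txt.xz \
  --https-only --hist /ssd/copyparty-hist -e2dsa -e2ts
```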
# dependencies
* `jinja2` (is built into the SFX)
@@ -682,18 +495,18 @@ below are some tweaks roughly ordered by usefulness:
enable music tags:
* either `mutagen` (fast, pure-python, skips a few tags, makes copyparty GPL? idk)
* or `ffprobe` (20x slower, more accurate, possibly dangerous depending on your distro and users)
* or `FFprobe` (20x slower, more accurate, possibly dangerous depending on your distro and users)
enable thumbnails of images:
enable image thumbnails:
* `Pillow` (requires py2.7 or py3.5+)
enable thumbnails of videos:
enable video thumbnails:
* `ffmpeg` and `ffprobe` somewhere in `$PATH`
enable thumbnails of HEIF pictures:
enable reading HEIF pictures:
* `pyheif-pillow-opener` (requires Linux or a C compiler)
enable thumbnails of AVIF pictures:
enable reading AVIF pictures:
* `pillow-avif-plugin`
@@ -707,7 +520,7 @@ python -m pip install --user -U jinja2 mutagen Pillow
some bundled tools have copyleft dependencies, see [./bin/#mtag](bin/#mtag)
these are standalone programs and will never be imported / evaluated by copyparty, and must be enabled through `-mtp` configs
these are standalone programs and will never be imported / evaluated by copyparty
# sfx
@@ -723,10 +536,10 @@ pls note that `copyparty-sfx.sh` will fail if you rename `copyparty-sfx.py` to `
## sfx repack
if you don't need all the features, you can repack the sfx and save a bunch of space; all you need is an sfx and a copy of this repo (nothing else to download or build, except if you're on windows then you need msys2 or WSL)
* `525k` size of original sfx.py as of v0.11.30
* `315k` after `./scripts/make-sfx.sh re no-ogv`
* `223k` after `./scripts/make-sfx.sh re no-ogv no-cm`
if you don't need all the features you can repack the sfx and save a bunch of space; all you need is an sfx and a copy of this repo (nothing else to download or build, except for either msys2 or WSL if you're on windows)
* `724K` original size as of v0.4.0
* `256K` after `./scripts/make-sfx.sh re no-ogv`
* `164K` after `./scripts/make-sfx.sh re no-ogv no-cm`
the features you can opt to drop are
* `ogv`.js, the opus/vorbis decoder which is needed by apple devices to play foss audio files
@@ -773,7 +586,7 @@ rm -rf copyparty/web/deps
curl -L https://github.com/9001/copyparty/releases/latest/download/copyparty-sfx.py >x.py
python3 x.py -h
rm x.py
mv /tmp/pe-copyparty/copyparty/web/deps/ copyparty/web/deps/
mv /tmp/pe-copyparty/copyparty/web/deps/ copyparty/web/
```
then build the sfx using any of the following examples:
@@ -801,16 +614,13 @@ in the `scripts` folder:
roughly sorted by priority
* hls framework for Someone Else to drop code into :^)
* readme.md as epilogue
## discarded ideas
* reduce up2k roundtrips
* start from a chunk index and just go
* terminate client on bad data
* not worth the effort, just throw enough connections at it
discarded ideas
* single sha512 across all up2k chunks?
* crypto.subtle cannot into streaming, would have to use hashwasm, expensive
* separate sqlite table per tag
@@ -827,6 +637,3 @@ roughly sorted by priority
* nah
* look into android thumbnail cache file format
* absolutely not
* indexedDB for hashes, cfg enable/clear/sz, 2gb avail, ~9k for 1g, ~4k for 100m, 500k items before autoeviction
* blank hashlist when up-ok to skip handshake
* too many confusing side-effects


@@ -345,7 +345,7 @@ class Gateway(object):
except:
pass
def sendreq(self, meth, path, headers, **kwargs):
def sendreq(self, *args, headers={}, **kwargs):
if self.password:
headers["Cookie"] = "=".join(["cppwd", self.password])
@@ -354,21 +354,21 @@ class Gateway(object):
if c.rx_path:
raise Exception()
c.request(meth, path, headers=headers, **kwargs)
c.request(*list(args), headers=headers, **kwargs)
c.rx = c.getresponse()
return c
except:
tid = threading.current_thread().ident
dbg(
"\033[1;37;44mbad conn {:x}\n {} {}\n {}\033[0m".format(
tid, meth, path, c.rx_path if c else "(null)"
"\033[1;37;44mbad conn {:x}\n {}\n {}\033[0m".format(
tid, " ".join(str(x) for x in args), c.rx_path if c else "(null)"
)
)
self.closeconn(c)
c = self.getconn()
try:
c.request(meth, path, headers=headers, **kwargs)
c.request(*list(args), headers=headers, **kwargs)
c.rx = c.getresponse()
return c
except:
@@ -386,7 +386,7 @@ class Gateway(object):
path = dewin(path)
web_path = self.quotep("/" + "/".join([self.web_root, path])) + "?dots"
c = self.sendreq("GET", web_path, {})
c = self.sendreq("GET", web_path)
if c.rx.status != 200:
self.closeconn(c)
log(
@@ -440,7 +440,7 @@ class Gateway(object):
)
)
c = self.sendreq("GET", web_path, {"Range": hdr_range})
c = self.sendreq("GET", web_path, headers={"Range": hdr_range})
if c.rx.status != http.client.PARTIAL_CONTENT:
self.closeconn(c)
raise Exception(


@@ -54,13 +54,10 @@ MACOS = platform.system() == "Darwin"
info = log = dbg = None
print(
"{} v{} @ {}".format(
platform.python_implementation(),
".".join([str(x) for x in sys.version_info]),
sys.executable,
)
)
print("{} v{} @ {}".format(
platform.python_implementation(),
".".join([str(x) for x in sys.version_info]),
sys.executable))
try:
@@ -302,14 +299,14 @@ class Gateway(object):
except:
pass
def sendreq(self, meth, path, headers, **kwargs):
def sendreq(self, *args, headers={}, **kwargs):
tid = get_tid()
if self.password:
headers["Cookie"] = "=".join(["cppwd", self.password])
try:
c = self.getconn(tid)
c.request(meth, path, headers=headers, **kwargs)
c.request(*list(args), headers=headers, **kwargs)
return c.getresponse()
except:
dbg("bad conn")
@@ -317,7 +314,7 @@ class Gateway(object):
self.closeconn(tid)
try:
c = self.getconn(tid)
c.request(meth, path, headers=headers, **kwargs)
c.request(*list(args), headers=headers, **kwargs)
return c.getresponse()
except:
info("http connection failed:\n" + traceback.format_exc())
@@ -334,7 +331,7 @@ class Gateway(object):
path = dewin(path)
web_path = self.quotep("/" + "/".join([self.web_root, path])) + "?dots&ls"
r = self.sendreq("GET", web_path, {})
r = self.sendreq("GET", web_path)
if r.status != 200:
self.closeconn()
log(
@@ -371,7 +368,7 @@ class Gateway(object):
)
)
r = self.sendreq("GET", web_path, {"Range": hdr_range})
r = self.sendreq("GET", web_path, headers={"Range": hdr_range})
if r.status != http.client.PARTIAL_CONTENT:
self.closeconn()
raise Exception(


@@ -4,7 +4,6 @@ some of these rely on libraries which are not MIT-compatible
* [audio-bpm.py](./audio-bpm.py) detects the BPM of music using the BeatRoot Vamp Plugin; imports GPL2
* [audio-key.py](./audio-key.py) detects the melodic key of music using the Mixxx fork of keyfinder; imports GPL3
* [media-hash.py](./media-hash.py) generates checksums for audio and video streams; uses FFmpeg (LGPL or GPL)
# dependencies
@@ -19,10 +18,7 @@ run [`install-deps.sh`](install-deps.sh) to build/install most dependencies requ
# usage from copyparty
`copyparty -e2dsa -e2ts` followed by any combination of these:
* `-mtp key=f,audio-key.py`
* `-mtp .bpm=f,audio-bpm.py`
* `-mtp ahash,vhash=f,media-hash.py`
`copyparty -e2dsa -e2ts -mtp key=f,audio-key.py -mtp .bpm=f,audio-bpm.py`
* `f,` makes the detected value replace any existing values
* the `.` in `.bpm` indicates numeric value
@@ -33,9 +29,6 @@ run [`install-deps.sh`](install-deps.sh) to build/install most dependencies requ
## usage with volume-flags
instead of affecting all volumes, you can set the options for just one volume like so:
`copyparty -v /mnt/nas/music:/music:r:c,e2dsa:c,e2ts` immediately followed by any combination of these:
* `:c,mtp=key=f,audio-key.py`
* `:c,mtp=.bpm=f,audio-bpm.py`
* `:c,mtp=ahash,vhash=f,media-hash.py`
```
copyparty -v /mnt/nas/music:/music:r:cmtp=key=f,audio-key.py:cmtp=.bpm=f,audio-bpm.py:ce2dsa:ce2ts
```


@@ -1,73 +0,0 @@
#!/usr/bin/env python

import re
import sys
import json
import time
import base64
import hashlib
import subprocess as sp

try:
    from copyparty.util import fsenc
except:

    def fsenc(p):
        return p


"""
dep: ffmpeg
"""


def det():
    # fmt: off
    cmd = [
        "ffmpeg",
        "-nostdin",
        "-hide_banner",
        "-v", "fatal",
        "-i", fsenc(sys.argv[1]),
        "-f", "framemd5",
        "-"
    ]
    # fmt: on

    p = sp.Popen(cmd, stdout=sp.PIPE)
    # ps = io.TextIOWrapper(p.stdout, encoding="utf-8")
    ps = p.stdout

    chans = {}
    for ln in ps:
        if ln.startswith(b"#stream#"):
            break

        m = re.match(r"^#media_type ([0-9]): ([a-zA-Z])", ln.decode("utf-8"))
        if m:
            chans[m.group(1)] = m.group(2)

    hashers = [hashlib.sha512(), hashlib.sha512()]
    for ln in ps:
        n = int(ln[:1])
        v = ln.rsplit(b",", 1)[-1].strip()
        hashers[n].update(v)

    r = {}
    for k, v in chans.items():
        dg = hashers[int(k)].digest()[:12]
        dg = base64.urlsafe_b64encode(dg).decode("ascii")
        r[v[0].lower() + "hash"] = dg

    print(json.dumps(r, indent=4))


def main():
    try:
        det()
    except:
        pass  # mute


if __name__ == "__main__":
    main()


@@ -1,32 +0,0 @@
// ==UserScript==
// @name youtube-playerdata-hub
// @match https://youtube.com/*
// @match https://*.youtube.com/*
// @version 1.0
// @grant GM_addStyle
// ==/UserScript==
function main() {
var sent = {};
function send(txt) {
if (sent[txt])
return;
fetch('https://127.0.0.1:3923/playerdata?_=' + Date.now(), { method: "PUT", body: txt });
console.log('[yt-ipr] yeet %d bytes', txt.length);
sent[txt] = 1;
}
function collect() {
setTimeout(collect, 60 * 1000);
var pd = document.querySelector('ytd-watch-flexy');
if (pd)
send(JSON.stringify(pd.playerData));
}
setTimeout(collect, 5000);
}
var scr = document.createElement('script');
scr.textContent = '(' + main.toString() + ')();';
(document.head || document.getElementsByTagName('head')[0]).appendChild(scr);
console.log('[yt-ipr] a');


@@ -1,61 +0,0 @@
#!/usr/bin/env python

import re
import sys
import gzip
import json
from datetime import datetime

"""
youtube initial player response

example usage:
  -v srv/playerdata:playerdata:w
     :c,e2tsr:c,e2dsa
     :c,mtp=yt-id,yt-title,yt-author,yt-channel,yt-views,yt-private,yt-expires=bin/mtag/yt-ipr.py
     :c,mte=yt-id,yt-title,yt-author,yt-channel,yt-views,yt-private,yt-expires

see res/yt-ipr.user.js for the example userscript to go with this
"""


def main():
    try:
        with gzip.open(sys.argv[1], "rt", encoding="utf-8", errors="replace") as f:
            txt = f.read()
    except:
        with open(sys.argv[1], "r", encoding="utf-8", errors="replace") as f:
            txt = f.read()

    txt = "{" + txt.split("{", 1)[1]
    try:
        obj = json.loads(txt)
    except json.decoder.JSONDecodeError as ex:
        obj = json.loads(txt[: ex.pos])

    # print(json.dumps(obj, indent=2))

    vd = obj["videoDetails"]
    sd = obj["streamingData"]

    et = sd["adaptiveFormats"][0]["url"]
    et = re.search(r"[?&]expire=([0-9]+)", et).group(1)
    et = datetime.utcfromtimestamp(int(et))
    et = et.strftime("%Y-%m-%d, %H:%M")

    r = {
        "yt-id": vd["videoId"],
        "yt-title": vd["title"],
        "yt-author": vd["author"],
        "yt-channel": vd["channelId"],
        "yt-views": vd["viewCount"],
        "yt-private": vd["isPrivate"],
        # "yt-expires": sd["expiresInSeconds"],
        "yt-expires": et,
    }
    print(json.dumps(r))


if __name__ == "__main__":
    main()


@@ -1,15 +1,7 @@
# when running copyparty behind a reverse proxy,
# the following arguments are recommended:
#
# -nc 512 important, see next paragraph
# --http-only lower latency on initial connection
# -i 127.0.0.1 only accept connections from nginx
#
# -nc must match or exceed the webserver's max number of concurrent clients;
# nginx default is 512 (worker_processes 1, worker_connections 512)
#
# you may also consider adding -j0 for CPU-intensive configurations
# (not that i can really think of any good examples)
# when running copyparty behind a reverse-proxy,
# make sure that copyparty allows at least as many clients as the proxy does,
# so run copyparty with -nc 512 if your nginx has the default limits
# (worker_processes 1, worker_connections 512)
upstream cpp {
server 127.0.0.1:3923;


@@ -7,26 +7,11 @@
# you may want to:
# change '/usr/bin/python' to another interpreter
# change '/mnt::a' to another location or permission-set
#
# with `Type=notify`, copyparty will signal systemd when it is ready to
# accept connections; correctly delaying units depending on copyparty.
# But note that journalctl will get the timestamps wrong due to
# python disabling line-buffering, so messages are out-of-order:
# https://user-images.githubusercontent.com/241032/126040249-cb535cc7-c599-4931-a796-a5d9af691bad.png
#
# enable line-buffering for realtime logging (slight performance cost):
# modify ExecStart and prefix it with `/usr/bin/stdbuf -oL` like so:
# ExecStart=/usr/bin/stdbuf -oL /usr/bin/python3 [...]
# but some systemd versions require this instead (higher performance cost):
# inside the [Service] block, add the following line:
# Environment=PYTHONUNBUFFERED=x
[Unit]
Description=copyparty file server
[Service]
Type=notify
SyslogIdentifier=copyparty
ExecStart=/usr/bin/python3 /usr/local/bin/copyparty-sfx.py -q -v /mnt::a
ExecStartPre=/bin/bash -c 'mkdir -p /run/tmpfiles.d/ && echo "x /tmp/pe-copyparty*" > /run/tmpfiles.d/copyparty.conf'


@@ -9,9 +9,6 @@ import os
PY2 = sys.version_info[0] == 2
if PY2:
sys.dont_write_bytecode = True
unicode = unicode
else:
unicode = str
WINDOWS = False
if platform.system() == "Windows":


@@ -20,11 +20,10 @@ import threading
import traceback
from textwrap import dedent
from .__init__ import E, WINDOWS, VT100, PY2, unicode
from .__init__ import E, WINDOWS, VT100, PY2
from .__version__ import S_VERSION, S_BUILD_DT, CODENAME
from .svchub import SvcHub
from .util import py_desc, align_tab, IMPLICATIONS, ansi_re
from .authsrv import re_vol
from .util import py_desc, align_tab, IMPLICATIONS, alltrace
HAVE_SSL = True
try:
@@ -32,8 +31,6 @@ try:
except:
HAVE_SSL = False
printed = ""
class RiceFormatter(argparse.HelpFormatter):
def _get_help_string(self, action):
@@ -64,19 +61,8 @@ class Dodge11874(RiceFormatter):
super(Dodge11874, self).__init__(*args, **kwargs)
def lprint(*a, **ka):
global printed
txt = " ".join(unicode(x) for x in a) + ka.get("end", "\n")
printed += txt
if not VT100:
txt = ansi_re.sub("", txt)
print(txt, **ka)
def warn(msg):
lprint("\033[1mwarning:\033[0;33m {}\033[0m\n".format(msg))
print("\033[1mwarning:\033[0;33m {}\033[0m\n".format(msg))
def ensure_locale():
@@ -87,7 +73,7 @@ def ensure_locale():
]:
try:
locale.setlocale(locale.LC_ALL, x)
lprint("Locale:", x)
print("Locale:", x)
break
except:
continue
@@ -108,7 +94,7 @@ def ensure_cert():
try:
if filecmp.cmp(cert_cfg, cert_insec):
lprint(
print(
"\033[33m using default TLS certificate; https will be insecure."
+ "\033[36m\n certificate location: {}\033[0m\n".format(cert_cfg)
)
@@ -137,7 +123,7 @@ def configure_ssl_ver(al):
if "help" in sslver:
avail = [terse_sslver(x[6:]) for x in flags]
avail = " ".join(sorted(avail) + ["all"])
lprint("\navailable ssl/tls versions:\n " + avail)
print("\navailable ssl/tls versions:\n " + avail)
sys.exit(0)
al.ssl_flags_en = 0
@@ -157,7 +143,7 @@ def configure_ssl_ver(al):
for k in ["ssl_flags_en", "ssl_flags_de"]:
num = getattr(al, k)
lprint("{}: {:8x} ({})".format(k, num, num))
print("{}: {:8x} ({})".format(k, num, num))
# think i need that beer now
@@ -174,13 +160,13 @@ def configure_ssl_ciphers(al):
try:
ctx.set_ciphers(al.ciphers)
except:
lprint("\n\033[1;31mfailed to set ciphers\033[0m\n")
print("\n\033[1;31mfailed to set ciphers\033[0m\n")
if not hasattr(ctx, "get_ciphers"):
lprint("cannot read cipher list: openssl or python too old")
print("cannot read cipher list: openssl or python too old")
else:
ciphers = [x["description"] for x in ctx.get_ciphers()]
lprint("\n ".join(["\nenabled ciphers:"] + align_tab(ciphers) + [""]))
print("\n ".join(["\nenabled ciphers:"] + align_tab(ciphers) + [""]))
if is_help:
sys.exit(0)
@@ -196,40 +182,42 @@ def sighandler(sig=None, frame=None):
print("\n".join(msg))
def stackmon(fp, ival):
ctr = 0
while True:
ctr += 1
time.sleep(ival)
st = "{}, {}\n{}".format(ctr, time.time(), alltrace())
with open(fp, "wb") as f:
f.write(st.encode("utf-8", "replace"))
def run_argparse(argv, formatter):
ap = argparse.ArgumentParser(
formatter_class=formatter,
prog="copyparty",
description="http file sharing hub v{} ({})".format(S_VERSION, S_BUILD_DT),
)
sects = [
[
"accounts",
"accounts and volumes",
dedent(
"""
epilog=dedent(
"""
-a takes username:password,
-v takes src:dst:perm1:perm2:permN:cflag1:cflag2:cflagN:...
where "perm" is "accesslevels,username1,username2,..."
-v takes src:dst:permset:permset:cflag:cflag:...
where "permset" is accesslevel followed by username (no separator)
and "cflag" is config flags to set on this volume
list of accesslevels:
"r" (read): list folder contents, download files
"w" (write): upload files; need "r" to see the uploads
"m" (move): move files and folders; need "w" at destination
"d" (delete): permanently delete files and folders
too many cflags to list here, see the other sections
list of cflags:
"cnodupe" rejects existing files (instead of symlinking them)
"ce2d" sets -e2d (all -e2* args can be set using ce2* cflags)
"cd2t" disables metadata collection, overrides -e2t*
"cd2d" disables all database stuff, overrides -e2*
example:\033[35m
-a ed:hunter2 -v .::r:rw,ed -v ../inc:dump:w:rw,ed:c,nodupe \033[36m
-a ed:hunter2 -v .::r:aed -v ../inc:dump:w:aed:cnodupe \033[36m
mount current directory at "/" with
* r (read-only) for everyone
* rw (read+write) for ed
* a (read+write) for ed
mount ../inc at "/dump" with
* w (write-only) for everyone
* rw (read+write) for ed
* a (read+write) for ed
* reject duplicate files \033[0m
if no accounts or volumes are configured,
@@ -237,135 +225,70 @@ def run_argparse(argv, formatter):
consider the config file for more flexible account/volume management,
including dynamic reload at runtime (and being more readable w)
"""
),
],
[
"cflags",
"list of cflags",
dedent(
"""
cflags are appended to volume definitions, for example,
to create a write-only volume with the \033[33mnodupe\033[0m and \033[32mnosub\033[0m flags:
\033[35m-v /mnt/inc:/inc:w\033[33m:c,nodupe\033[32m:c,nosub
\033[0muploads, general:
\033[36mnodupe\033[35m rejects existing files (instead of symlinking them)
\033[36mnosub\033[35m forces all uploads into the top folder of the vfs
\033[36mgz\033[35m allows server-side gzip of uploads with ?gz (also c,xz)
\033[36mpk\033[35m forces server-side compression, optional arg: xz,9
\033[0mupload rules:
\033[36mmaxn=250,600\033[35m max 250 uploads over 15min
\033[36mmaxb=1g,300\033[35m max 1 GiB over 5min (suffixes: b, k, m, g)
\033[36msz=1k-3m\033[35m allow filesizes between 1 KiB and 3MiB
\033[0mupload rotation:
(moves all uploads into the specified folder structure)
\033[36mrotn=100,3\033[35m 3 levels of subfolders with 100 entries in each
\033[36mrotf=%Y-%m/%d-%H\033[35m date-formatted organizing
\033[0mdatabase, general:
\033[36me2d\033[35m sets -e2d (all -e2* args can be set using ce2* cflags)
\033[36md2t\033[35m disables metadata collection, overrides -e2t*
\033[36md2d\033[35m disables all database stuff, overrides -e2*
\033[36mdhash\033[35m disables file hashing on initial scans, also ehash
\033[36mhist=/tmp/cdb\033[35m puts thumbnails and indexes at that location
\033[0mdatabase, audio tags:
"mte", "mth", "mtp", "mtm" all work the same as -mte, -mth, ...
\033[36mmtp=.bpm=f,audio-bpm.py\033[35m uses the "audio-bpm.py" program to
generate ".bpm" tags from uploads (f = overwrite tags)
\033[36mmtp=ahash,vhash=media-hash.py\033[35m collects two tags at once
\033[0m"""
),
],
[
"urlform",
"",
dedent(
"""
values for --urlform:
\033[36mstash\033[35m dumps the data to file and returns length + checksum
\033[36msave,get\033[35m dumps to file and returns the page like a GET
\033[36mprint,get\033[35m prints the data in the log and returns GET
"stash" dumps the data to file and returns length + checksum
"save,get" dumps to file and returns the page like a GET
"print,get" prints the data in the log and returns GET
(leave out the ",get" to return an error instead)
"""
),
],
[
"ls",
"volume inspection",
dedent(
"""
\033[35m--ls USR,VOL,FLAGS
\033[36mUSR\033[0m is a user to browse as; * is anonymous, ** is all users
\033[36mVOL\033[0m is a single volume to scan, default is * (all vols)
\033[36mFLAG\033[0m is flags;
\033[36mv\033[0m in addition to realpaths, print usernames and vpaths
\033[36mln\033[0m only prints symlinks leaving the volume mountpoint
\033[36mp\033[0m exits 1 if any such symlinks are found
\033[36mr\033[0m resumes startup after the listing
values for --ls:
"USR" is a user to browse as; * is anonymous, ** is all users
"VOL" is a single volume to scan, default is * (all vols)
"FLAG" is flags;
"v" in addition to realpaths, print usernames and vpaths
"ln" only prints symlinks leaving the volume mountpoint
"p" exits 1 if any such symlinks are found
"r" resumes startup after the listing
examples:
--ls '**' # list all files which are possible to read
--ls '**,*,ln' # check for dangerous symlinks
--ls '**,*,ln,p,r' # check, then start normally if safe
\033[0m
"""
),
],
]
),
)
# fmt: off
u = unicode
ap2 = ap.add_argument_group('general options')
ap2.add_argument("-c", metavar="PATH", type=u, action="append", help="add config file")
ap2.add_argument("-nc", metavar="NUM", type=int, default=64, help="max num clients")
ap2.add_argument("-j", metavar="CORES", type=int, default=1, help="max num cpu cores")
ap2.add_argument("-a", metavar="ACCT", type=u, action="append", help="add account, USER:PASS; example [ed:wark")
ap2.add_argument("-v", metavar="VOL", type=u, action="append", help="add volume, SRC:DST:FLAG; example [.::r], [/mnt/nas/music:/music:r:aed")
ap2.add_argument("-ed", action="store_true", help="enable ?dots")
ap2.add_argument("-emp", action="store_true", help="enable markdown plugins")
ap2.add_argument("-mcr", metavar="SEC", type=int, default=60, help="md-editor mod-chk rate")
ap2.add_argument("--urlform", metavar="MODE", type=u, default="print,get", help="how to handle url-forms; examples: [stash], [save,get]")
ap2 = ap.add_argument_group('upload options')
ap2.add_argument("--dotpart", action="store_true", help="dotfile incomplete uploads")
ap2.add_argument("--sparse", metavar="MiB", type=int, default=4, help="up2k min.size threshold (mswin-only)")
ap2.add_argument("--unpost", metavar="SEC", type=int, default=3600*12, help="grace period where uploads can be deleted by the uploader, even without delete permissions; 0=disabled")
ap.add_argument("-c", metavar="PATH", type=str, action="append", help="add config file")
ap.add_argument("-nc", metavar="NUM", type=int, default=64, help="max num clients")
ap.add_argument("-j", metavar="CORES", type=int, default=1, help="max num cpu cores")
ap.add_argument("-a", metavar="ACCT", type=str, action="append", help="add account, USER:PASS; example [ed:wark")
ap.add_argument("-v", metavar="VOL", type=str, action="append", help="add volume, SRC:DST:FLAG; example [.::r], [/mnt/nas/music:/music:r:aed")
ap.add_argument("-ed", action="store_true", help="enable ?dots")
ap.add_argument("-emp", action="store_true", help="enable markdown plugins")
ap.add_argument("-mcr", metavar="SEC", type=int, default=60, help="md-editor mod-chk rate")
ap.add_argument("--dotpart", action="store_true", help="dotfile incomplete uploads")
ap.add_argument("--sparse", metavar="MiB", type=int, default=4, help="up2k min.size threshold (mswin-only)")
ap.add_argument("--urlform", metavar="MODE", type=str, default="print,get", help="how to handle url-forms; examples: [stash], [save,get]")
ap2 = ap.add_argument_group('network options')
ap2.add_argument("-i", metavar="IP", type=u, default="0.0.0.0", help="ip to bind (comma-sep.)")
ap2.add_argument("-p", metavar="PORT", type=u, default="3923", help="ports to bind (comma/range)")
ap2.add_argument("-i", metavar="IP", type=str, default="0.0.0.0", help="ip to bind (comma-sep.)")
ap2.add_argument("-p", metavar="PORT", type=str, default="3923", help="ports to bind (comma/range)")
ap2.add_argument("--rproxy", metavar="DEPTH", type=int, default=1, help="which ip to keep; 0 = tcp, 1 = origin (first x-fwd), 2 = cloudflare, 3 = nginx, -1 = closest proxy")
ap2 = ap.add_argument_group('SSL/TLS options')
ap2.add_argument("--http-only", action="store_true", help="disable ssl/tls")
ap2.add_argument("--https-only", action="store_true", help="disable plaintext")
ap2.add_argument("--ssl-ver", metavar="LIST", type=u, help="set allowed ssl/tls versions; [help] shows available versions; default is what your python version considers safe")
ap2.add_argument("--ciphers", metavar="LIST", type=u, help="set allowed ssl/tls ciphers; [help] shows available ciphers")
ap2.add_argument("--ssl-ver", metavar="LIST", type=str, help="set allowed ssl/tls versions; [help] shows available versions; default is what your python version considers safe")
ap2.add_argument("--ciphers", metavar="LIST", help="set allowed ssl/tls ciphers; [help] shows available ciphers")
ap2.add_argument("--ssl-dbg", action="store_true", help="dump some tls info")
ap2.add_argument("--ssl-log", metavar="PATH", type=u, help="log master secrets")
ap2.add_argument("--ssl-log", metavar="PATH", help="log master secrets")
ap2 = ap.add_argument_group('opt-outs')
ap2.add_argument("-nw", action="store_true", help="disable writes (benchmark)")
ap2.add_argument("--no-del", action="store_true", help="disable delete operations")
ap2.add_argument("--no-mv", action="store_true", help="disable move/rename operations")
ap2.add_argument("-nih", action="store_true", help="no info hostname")
ap2.add_argument("-nid", action="store_true", help="no info disk-usage")
ap2.add_argument("--no-zip", action="store_true", help="disable download as zip/tar")
ap2 = ap.add_argument_group('safety options')
ap2.add_argument("--ls", metavar="U[,V[,F]]", type=u, help="scan all volumes; arguments USER,VOL,FLAGS; example [**,*,ln,p,r]")
ap2.add_argument("--salt", type=u, default="hunter2", help="up2k file-hash salt")
ap2.add_argument("--ls", metavar="U[,V[,F]]", help="scan all volumes; arguments USER,VOL,FLAGS; example [**,*,ln,p,r]")
ap2.add_argument("--salt", type=str, default="hunter2", help="up2k file-hash salt")
ap2 = ap.add_argument_group('logging options')
ap2.add_argument("-q", action="store_true", help="quiet")
ap2.add_argument("-lo", metavar="PATH", type=u, help="logfile, example: cpp-%%Y-%%m%%d-%%H%%M%%S.txt.xz")
ap2.add_argument("--no-voldump", action="store_true", help="do not list volumes and permissions on startup")
ap2.add_argument("--log-conn", action="store_true", help="print tcp-server msgs")
ap2.add_argument("--log-htp", action="store_true", help="print http-server threadpool scaling")
ap2.add_argument("--ihead", metavar="HEADER", type=u, action='append', help="dump incoming header")
ap2.add_argument("--lf-url", metavar="RE", type=u, default=r"^/\.cpr/|\?th=[wj]$", help="dont log URLs matching")
ap2.add_argument("--ihead", metavar="HEADER", action='append', help="dump incoming header")
ap2.add_argument("--lf-url", metavar="RE", type=str, default=r"^/\.cpr/|\?th=[wj]$", help="dont log URLs matching")
ap2 = ap.add_argument_group('admin panel options')
ap2.add_argument("--no-rescan", action="store_true", help="disable ?scan (volume reindexing)")
@@ -380,60 +303,42 @@ def run_argparse(argv, formatter):
ap2.add_argument("--th-no-webp", action="store_true", help="disable webp output")
ap2.add_argument("--th-ff-jpg", action="store_true", help="force jpg for video thumbs")
ap2.add_argument("--th-poke", metavar="SEC", type=int, default=300, help="activity labeling cooldown")
ap2.add_argument("--th-clean", metavar="SEC", type=int, default=43200, help="cleanup interval; 0=disabled")
ap2.add_argument("--th-clean", metavar="SEC", type=int, default=43200, help="cleanup interval")
ap2.add_argument("--th-maxage", metavar="SEC", type=int, default=604800, help="max folder age")
ap2.add_argument("--th-covers", metavar="N,N", type=u, default="folder.png,folder.jpg,cover.png,cover.jpg", help="folder thumbnails to stat for")
ap2.add_argument("--th-covers", metavar="N,N", type=str, default="folder.png,folder.jpg,cover.png,cover.jpg", help="folder thumbnails to stat for")
ap2 = ap.add_argument_group('general db options')
ap2 = ap.add_argument_group('database options')
ap2.add_argument("-e2d", action="store_true", help="enable up2k database")
ap2.add_argument("-e2ds", action="store_true", help="enable up2k db-scanner, sets -e2d")
ap2.add_argument("-e2dsa", action="store_true", help="scan all folders (for search), sets -e2ds")
ap2.add_argument("--hist", metavar="PATH", type=u, help="where to store volume data (db, thumbs)")
ap2.add_argument("--no-hash", action="store_true", help="disable hashing during e2ds folder scans")
ap2.add_argument("--re-int", metavar="SEC", type=int, default=30, help="disk rescan check interval")
ap2.add_argument("--re-maxage", metavar="SEC", type=int, default=0, help="disk rescan volume interval (0=off)")
ap2.add_argument("--srch-time", metavar="SEC", type=int, default=30, help="search deadline")
ap2 = ap.add_argument_group('metadata db options')
ap2.add_argument("-e2t", action="store_true", help="enable metadata indexing")
ap2.add_argument("-e2ts", action="store_true", help="enable metadata scanner, sets -e2t")
ap2.add_argument("-e2tsr", action="store_true", help="rescan all metadata, sets -e2ts")
ap2.add_argument("--no-mutagen", action="store_true", help="use FFprobe for tags instead")
ap2.add_argument("--hist", metavar="PATH", type=str, help="where to store volume state")
ap2.add_argument("--no-hash", action="store_true", help="disable hashing during e2ds folder scans")
ap2.add_argument("--no-mutagen", action="store_true", help="use ffprobe for tags instead")
ap2.add_argument("--no-mtag-mt", action="store_true", help="disable tag-read parallelism")
ap2.add_argument("--no-mtag-ff", action="store_true", help="never use FFprobe as tag reader")
ap2.add_argument("-mtm", metavar="M=t,t,t", type=u, action="append", help="add/replace metadata mapping")
ap2.add_argument("-mte", metavar="M,M,M", type=u, help="tags to index/display (comma-sep.)",
default="circle,album,.tn,artist,title,.bpm,key,.dur,.q,.vq,.aq,vc,ac,res,.fps,ahash,vhash")
ap2.add_argument("-mth", metavar="M,M,M", type=u, help="tags to hide by default (comma-sep.)",
default=".vq,.aq,vc,ac,res,.fps")
ap2.add_argument("-mtp", metavar="M=[f,]bin", type=u, action="append", help="read tag M using bin")
ap2.add_argument("-mtm", metavar="M=t,t,t", action="append", type=str, help="add/replace metadata mapping")
ap2.add_argument("-mte", metavar="M,M,M", type=str, help="tags to index/display (comma-sep.)",
default="circle,album,.tn,artist,title,.bpm,key,.dur,.q,.vq,.aq,ac,vc,res,.fps")
ap2.add_argument("-mtp", metavar="M=[f,]bin", action="append", type=str, help="read tag M using bin")
ap2.add_argument("--srch-time", metavar="SEC", type=int, default=30, help="search deadline")
ap2 = ap.add_argument_group('video streaming options')
ap2.add_argument("--vcr", action="store_true", help="enable video streaming")
ap2 = ap.add_argument_group('appearance options')
ap2.add_argument("--css-browser", metavar="L", type=u, help="URL to additional CSS to include")
ap2.add_argument("--css-browser", metavar="L", help="URL to additional CSS to include")
ap2 = ap.add_argument_group('debug options')
ap2.add_argument("--no-sendfile", action="store_true", help="disable sendfile")
ap2.add_argument("--no-scandir", action="store_true", help="disable scandir")
ap2.add_argument("--no-fastboot", action="store_true", help="wait for up2k indexing")
ap2.add_argument("--no-htp", action="store_true", help="disable httpserver threadpool, create threads as-needed instead")
ap2.add_argument("--stackmon", metavar="P,S", type=u, help="write stacktrace to Path every S second")
ap2.add_argument("--log-thrs", metavar="SEC", type=float, help="list active threads every SEC")
ap2.add_argument("--stackmon", metavar="P,S", help="write stacktrace to Path every S second")
return ap.parse_args(args=argv[1:])
# fmt: on
ap2 = ap.add_argument_group("help sections")
for k, h, _ in sects:
ap2.add_argument("--help-" + k, action="store_true", help=h)
ret = ap.parse_args(args=argv[1:])
for k, h, t in sects:
k2 = "help_" + k.replace("-", "_")
if vars(ret)[k2]:
lprint("# {} help page".format(k))
lprint(t + "\033[0m")
sys.exit(0)
return ret
def main(argv=None):
time.strptime("19970815", "%Y%m%d") # python#7980
@@ -446,7 +351,7 @@ def main(argv=None):
desc = py_desc().replace("[", "\033[1;30m[")
f = '\033[36mcopyparty v{} "\033[35m{}\033[36m" ({})\n{}\033[0m\n'
lprint(f.format(S_VERSION, CODENAME, S_BUILD_DT, desc))
print(f.format(S_VERSION, CODENAME, S_BUILD_DT, desc))
ensure_locale()
if HAVE_SSL:
@@ -460,7 +365,7 @@ def main(argv=None):
continue
msg = "\033[1;31mWARNING:\033[0;1m\n {} \033[0;33mwas replaced with\033[0;1m {} \033[0;33mand will be removed\n\033[0m"
lprint(msg.format(dk, nk))
print(msg.format(dk, nk))
argv[idx] = nk
time.sleep(2)
@@ -469,41 +374,15 @@ def main(argv=None):
except AssertionError:
al = run_argparse(argv, Dodge11874)
nstrs = []
anymod = False
for ostr in al.v or []:
m = re_vol.match(ostr)
if not m:
# not our problem
nstrs.append(ostr)
continue
src, dst, perms = m.groups()
na = [src, dst]
mod = False
for opt in perms.split(":"):
if re.match("c[^,]", opt):
mod = True
na.append("c," + opt[1:])
elif re.sub("^[rwmd]*", "", opt) and "," not in opt:
mod = True
perm = opt[0]
if perm == "a":
perm = "rw"
na.append(perm + "," + opt[1:])
else:
na.append(opt)
nstr = ":".join(na)
nstrs.append(nstr if mod else ostr)
if mod:
msg = "\033[1;31mWARNING:\033[0;1m\n -v {} \033[0;33mwas replaced with\033[0;1m\n -v {} \n\033[0m"
lprint(msg.format(ostr, nstr))
anymod = True
if anymod:
al.v = nstrs
time.sleep(2)
if al.stackmon:
fp, f = al.stackmon.rsplit(",", 1)
f = int(f)
t = threading.Thread(
target=stackmon,
args=(fp, f),
)
t.daemon = True
t.start()
# propagate implications
for k1, k2 in IMPLICATIONS:
@@ -540,7 +419,7 @@ def main(argv=None):
# signal.signal(signal.SIGINT, sighandler)
SvcHub(al, argv, printed).run()
SvcHub(al).run()
if __name__ == "__main__":


@@ -1,8 +1,8 @@
# coding: utf-8
VERSION = (0, 13, 0)
CODENAME = "future-proof"
BUILD_DT = (2021, 8, 8)
VERSION = (0, 11, 29)
CODENAME = "the grid"
BUILD_DT = (2021, 6, 30)
S_VERSION = ".".join(map(str, VERSION))
S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)


@@ -5,226 +5,40 @@ import re
import os
import sys
import stat
import time
import base64
import hashlib
import threading
from datetime import datetime
from .__init__ import WINDOWS
from .util import (
IMPLICATIONS,
uncyg,
undot,
unhumanize,
absreal,
Pebkac,
fsenc,
statdir,
)
from .bos import bos
class AXS(object):
def __init__(self, uread=None, uwrite=None, umove=None, udel=None):
self.uread = {} if uread is None else {k: 1 for k in uread}
self.uwrite = {} if uwrite is None else {k: 1 for k in uwrite}
self.umove = {} if umove is None else {k: 1 for k in umove}
self.udel = {} if udel is None else {k: 1 for k in udel}
def __repr__(self):
return "AXS({})".format(
", ".join(
"{}={!r}".format(k, self.__dict__[k])
for k in "uread uwrite umove udel".split()
)
)
class Lim(object):
def __init__(self):
self.nups = {} # num tracker
self.bups = {} # byte tracker list
self.bupc = {} # byte tracker cache
self.nosub = False # disallow subdirectories
self.smin = None # filesize min
self.smax = None # filesize max
self.bwin = None # bytes window
self.bmax = None # bytes max
self.nwin = None # num window
self.nmax = None # num max
self.rotn = None # rot num files
self.rotl = None # rot depth
self.rotf = None # rot datefmt
self.rot_re = None # rotf check
def set_rotf(self, fmt):
self.rotf = fmt
r = re.escape(fmt).replace("%Y", "[0-9]{4}").replace("%j", "[0-9]{3}")
r = re.sub("%[mdHMSWU]", "[0-9]{2}", r)
self.rot_re = re.compile("(^|/)" + r + "$")
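# note on rot_re (built above): a rough sketch of the transformation -- assuming
# a rotf volflag of "%Y/%m/%d", re.escape plus the replacements yield the pattern
# (^|/)[0-9]{4}/[0-9]{2}/[0-9]{2}$ (with py3.7+ re.escape), which rot() below uses
# to detect upload paths that already end in a dated folder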
def all(self, ip, rem, sz, abspath):
self.chk_nup(ip)
self.chk_bup(ip)
self.chk_rem(rem)
if sz != -1:
self.chk_sz(sz)
ap2, vp2 = self.rot(abspath)
if abspath == ap2:
return ap2, rem
return ap2, ("{}/{}".format(rem, vp2) if rem else vp2)
def chk_sz(self, sz):
if self.smin is not None and sz < self.smin:
raise Pebkac(400, "file too small")
if self.smax is not None and sz > self.smax:
raise Pebkac(400, "file too big")
def chk_rem(self, rem):
if self.nosub and rem:
raise Pebkac(500, "no subdirectories allowed")
def rot(self, path):
if not self.rotf and not self.rotn:
return path, ""
if self.rotf:
path = path.rstrip("/\\")
if self.rot_re.search(path.replace("\\", "/")):
return path, ""
suf = datetime.utcnow().strftime(self.rotf)
if path:
path += "/"
return path + suf, suf
ret = self.dive(path, self.rotl)
if not ret:
raise Pebkac(500, "no available slots in volume")
d = ret[len(path) :].strip("/\\").replace("\\", "/")
return ret, d
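# dive() below keeps a tree of numeric subfolders ("0", "1", ...) inside the
# volume; assuming for illustration rotn=100 and rotl=2, that is up to 100 files
# per leaf across up to 100x100 branches -- roughly rotn**(rotl+1) upload slots
# before "no available slots in volume" is raised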
def dive(self, path, lvs):
items = bos.listdir(path)
if not lvs:
# at leaf level
return None if len(items) >= self.rotn else ""
dirs = [int(x) for x in items if x and all(y in "1234567890" for y in x)]
dirs.sort()
if not dirs:
# no branches yet; make one
sub = os.path.join(path, "0")
bos.mkdir(sub)
else:
# try newest branch only
sub = os.path.join(path, str(dirs[-1]))
ret = self.dive(sub, lvs - 1)
if ret is not None:
return os.path.join(sub, ret)
if len(dirs) >= self.rotn:
# full branch or root
return None
# make a branch
sub = os.path.join(path, str(dirs[-1] + 1))
bos.mkdir(sub)
ret = self.dive(sub, lvs - 1)
if ret is None:
raise Pebkac(500, "rotation bug")
return os.path.join(sub, ret)
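# nup/bup below record an upload timestamp (and byte count) per client ip;
# chk_nup/chk_bup then discard entries older than the nwin/bwin window and
# raise 429 once the remaining count or byte total reaches nmax/bmax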
def nup(self, ip):
try:
self.nups[ip].append(time.time())
except:
self.nups[ip] = [time.time()]
def bup(self, ip, nbytes):
v = [time.time(), nbytes]
try:
self.bups[ip].append(v)
self.bupc[ip] += nbytes
except:
self.bups[ip] = [v]
self.bupc[ip] = nbytes
def chk_nup(self, ip):
if not self.nmax or ip not in self.nups:
return
nups = self.nups[ip]
cutoff = time.time() - self.nwin
while nups and nups[0] < cutoff:
nups.pop(0)
if len(nups) >= self.nmax:
raise Pebkac(429, "too many uploads")
def chk_bup(self, ip):
if not self.bmax or ip not in self.bups:
return
bups = self.bups[ip]
cutoff = time.time() - self.bwin
mark = self.bupc[ip]
while bups and bups[0][0] < cutoff:
mark -= bups.pop(0)[1]
self.bupc[ip] = mark
if mark >= self.bmax:
raise Pebkac(429, "ingress saturated")
from .util import IMPLICATIONS, uncyg, undot, Pebkac, fsdec, fsenc, statdir, nuprint
class VFS(object):
"""single level in the virtual fs"""
def __init__(self, log, realpath, vpath, axs, flags):
self.log = log
def __init__(self, realpath, vpath, uread=[], uwrite=[], uadm=[], flags={}):
self.realpath = realpath # absolute path on host filesystem
self.vpath = vpath # absolute path in the virtual filesystem
self.axs = axs # type: AXS
self.flags = flags # config options
self.uread = uread # users who can read this
self.uwrite = uwrite # users who can write this
self.uadm = uadm # users who are regular admins
self.flags = flags # config switches
self.nodes = {} # child nodes
self.histtab = None # all realpath->histpath
self.dbv = None # closest full/non-jump parent
self.lim = None # type: Lim # upload limits; only set for dbv
if realpath:
self.histpath = os.path.join(realpath, ".hist") # db / thumbcache
self.all_vols = {vpath: self} # flattened recursive
self.aread = {}
self.awrite = {}
self.amove = {}
self.adel = {}
else:
self.histpath = None
self.all_vols = None
self.aread = None
self.awrite = None
self.amove = None
self.adel = None
def __repr__(self):
return "VFS({})".format(
", ".join(
"{}={!r}".format(k, self.__dict__[k])
for k in "realpath vpath axs flags".split()
for k in "realpath vpath uread uwrite uadm flags".split()
)
)
@@ -248,10 +62,11 @@ class VFS(object):
return self.nodes[name].add(src, dst)
vn = VFS(
self.log,
os.path.join(self.realpath, name) if self.realpath else None,
"{}/{}".format(self.vpath, name).lstrip("/"),
self.axs,
self.uread,
self.uwrite,
self.uadm,
self._copy_flags(name),
)
vn.dbv = self.dbv or self
@@ -264,7 +79,7 @@ class VFS(object):
# leaf does not exist; create and keep permissions blank
vp = "{}/{}".format(self.vpath, dst).lstrip("/")
vn = VFS(self.log, src, vp, AXS(), {})
vn = VFS(src, vp)
vn.dbv = self.dbv or self
self.nodes[dst] = vn
return vn
@@ -304,37 +119,27 @@ class VFS(object):
return [self, vpath]
def can_access(self, vpath, uname):
# type: (str, str) -> tuple[bool, bool, bool, bool]
"""can Read,Write,Move,Delete"""
"""return [readable,writable]"""
vn, _ = self._find(vpath)
c = vn.axs
return [
uname in c.uread or "*" in c.uread,
uname in c.uwrite or "*" in c.uwrite,
uname in c.umove or "*" in c.umove,
uname in c.udel or "*" in c.udel,
uname in vn.uread or "*" in vn.uread,
uname in vn.uwrite or "*" in vn.uwrite,
]
def get(self, vpath, uname, will_read, will_write, will_move=False, will_del=False):
# type: (str, str, bool, bool, bool, bool) -> tuple[VFS, str]
def get(self, vpath, uname, will_read, will_write):
# type: (str, str, bool, bool) -> tuple[VFS, str]
"""returns [vfsnode,fs_remainder] if user has the requested permissions"""
vn, rem = self._find(vpath)
c = vn.axs
for req, d, msg in [
[will_read, c.uread, "read"],
[will_write, c.uwrite, "write"],
[will_move, c.umove, "move"],
[will_del, c.udel, "delete"],
]:
if req and (uname not in d and "*" not in d):
m = "you don't have {}-access for this location"
raise Pebkac(403, m.format(msg))
if will_read and (uname not in vn.uread and "*" not in vn.uread):
raise Pebkac(403, "you don't have read-access for this location")
if will_write and (uname not in vn.uwrite and "*" not in vn.uwrite):
raise Pebkac(403, "you don't have write-access for this location")
return vn, rem
def get_dbv(self, vrem):
# type: (str) -> tuple[VFS, str]
dbv = self.dbv
if not dbv:
return self, vrem
@@ -343,58 +148,68 @@ class VFS(object):
vrem = "/".join([x for x in vrem if x])
return dbv, vrem
def canonical(self, rem, resolve=True):
def canonical(self, rem):
"""returns the canonical path (fully-resolved absolute fs path)"""
rp = self.realpath
if rem:
rp += "/" + rem
return absreal(rp) if resolve else rp
try:
return fsdec(os.path.realpath(fsenc(rp)))
except:
if not WINDOWS:
raise
def ls(self, rem, uname, scandir, permsets, lstat=False):
# type: (str, str, bool, list[list[bool]], bool) -> tuple[str, str, dict[str, VFS]]
# cpython bug introduced in 3.8, still exists in 3.9.1;
# some win7sp1 and win10:20H2 boxes cannot realpath a
# networked drive letter such as b"n:" or b"n:\\"
#
# requirements to trigger:
# * bytestring (not unicode str)
# * just the drive letter (subfolders are ok)
# * networked drive (regular disks and vmhgfs are ok)
# * on an enterprise network (idk, cannot repro with samba)
#
# hits the following exceptions in succession:
# * access denied at L601: "path = _getfinalpathname(path)"
# * "cant concat str to bytes" at L621: "return path + tail"
#
return os.path.realpath(rp)
def ls(self, rem, uname, scandir, incl_wo=False, lstat=False):
# type: (str, str, bool, bool, bool) -> tuple[str, str, dict[str, VFS]]
"""return user-readable [fsdir,real,virt] items at vpath"""
virt_vis = {} # nodes readable by user
abspath = self.canonical(rem)
real = list(statdir(self.log, scandir, lstat, abspath))
real = list(statdir(nuprint, scandir, lstat, abspath))
real.sort()
if not rem:
# no vfs nodes in the list of real inodes
real = [x for x in real if x[0] not in self.nodes]
for name, vn2 in sorted(self.nodes.items()):
ok = False
axs = vn2.axs
axs = [axs.uread, axs.uwrite, axs.umove, axs.udel]
for pset in permsets:
ok = True
for req, lst in zip(pset, axs):
if req and uname not in lst and "*" not in lst:
ok = False
if ok:
break
ok = uname in vn2.uread or "*" in vn2.uread
if not ok and incl_wo:
ok = uname in vn2.uwrite or "*" in vn2.uwrite
if ok:
virt_vis[name] = vn2
# no vfs nodes in the list of real inodes
real = [x for x in real if x[0] not in self.nodes]
return [abspath, real, virt_vis]
def walk(self, rel, rem, seen, uname, permsets, dots, scandir, lstat):
def walk(self, rel, rem, seen, uname, dots, scandir, lstat):
"""
recursively yields from ./rem;
rel is a unix-style user-defined vpath (not vfs-related)
"""
fsroot, vfs_ls, vfs_virt = self.ls(rem, uname, scandir, permsets, lstat=lstat)
dbv, vrem = self.get_dbv(rem)
fsroot, vfs_ls, vfs_virt = self.ls(
rem, uname, scandir, incl_wo=False, lstat=lstat
)
if (
seen
and (not fsroot.startswith(seen[-1]) or fsroot == seen[-1])
and fsroot in seen
):
m = "bailing from symlink loop,\n prev: {}\n curr: {}\n from: {}/{}"
self.log("vfs.walk", m.format(seen[-1], fsroot, self.vpath, rem), 3)
if seen and not fsroot.startswith(seen[-1]) and fsroot in seen:
print("bailing from symlink loop,\n {}\n {}".format(seen[-1], fsroot))
return
seen = seen[:] + [fsroot]
@@ -404,7 +219,7 @@ class VFS(object):
rfiles.sort()
rdirs.sort()
yield dbv, vrem, rel, fsroot, rfiles, rdirs, vfs_virt
yield rel, fsroot, rfiles, rdirs, vfs_virt
for rdir, _ in rdirs:
if not dots and rdir.startswith("."):
@@ -412,7 +227,7 @@ class VFS(object):
wrel = (rel + "/" + rdir).lstrip("/")
wrem = (rem + "/" + rdir).lstrip("/")
for x in self.walk(wrel, wrem, seen, uname, permsets, dots, scandir, lstat):
for x in self.walk(wrel, wrem, seen, uname, dots, scandir, lstat):
yield x
for n, vfs in sorted(vfs_virt.items()):
@@ -420,19 +235,16 @@ class VFS(object):
continue
wrel = (rel + "/" + n).lstrip("/")
for x in vfs.walk(wrel, "", seen, uname, permsets, dots, scandir, lstat):
for x in vfs.walk(wrel, "", seen, uname, dots, scandir, lstat):
yield x
def zipgen(self, vrem, flt, uname, dots, scandir):
if flt:
flt = {k: True for k in flt}
f1 = "{0}.hist{0}up2k.".format(os.sep)
f2a = os.sep + "dir.txt"
f2b = "{0}.hist{0}".format(os.sep)
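# on posix these evaluate to f1="/.hist/up2k.", f2a="/dir.txt", f2b="/.hist/";
# matched below so the volume's internal .hist files (up2k db, dir.txt) stay
# out of zip/tar downloads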
g = self.walk("", vrem, [], uname, [[True]], dots, scandir, False)
for _, _, vpath, apath, files, rd, vd in g:
for vpath, apath, files, rd, vd in self.walk(
"", vrem, [], uname, dots, scandir, False
):
if flt:
files = [x for x in files if x[0] in flt]
@@ -463,20 +275,24 @@ class VFS(object):
del vd[x]
# up2k filtering based on actual abspath
files = [
x
for x in files
if f1 not in x[1] and (not x[1].endswith(f2a) or f2b not in x[1])
]
files = [x for x in files if "{0}.hist{0}up2k.".format(os.sep) not in x[1]]
for f in [{"vp": v, "ap": a, "st": n[1]} for v, a, n in files]:
yield f
def user_tree(self, uname, readable, writable, admin):
is_readable = False
if uname in self.uread or "*" in self.uread:
readable.append(self.vpath)
is_readable = True
if WINDOWS:
re_vol = re.compile(r"^([a-zA-Z]:[\\/][^:]*|[^:]*):([^:]*):(.*)$")
else:
re_vol = re.compile(r"^([^:]*):([^:]*):(.*)$")
if uname in self.uwrite or "*" in self.uwrite:
writable.append(self.vpath)
if is_readable:
admin.append(self.vpath)
for _, vn in sorted(self.nodes.items()):
vn.user_tree(uname, readable, writable, admin)
class AuthSrv(object):
@@ -488,6 +304,11 @@ class AuthSrv(object):
self.warn_anonwrite = warn_anonwrite
self.line_ctr = 0
if WINDOWS:
self.re_vol = re.compile(r"^([a-zA-Z]:[\\/][^:]*|[^:]*):([^:]*):(.*)$")
else:
self.re_vol = re.compile(r"^([^:]*):([^:]*):(.*)$")
self.mutex = threading.Lock()
self.reload()
@@ -505,8 +326,7 @@ class AuthSrv(object):
yield prev, True
def _parse_config_file(self, fd, acct, daxs, mflags, mount):
# type: (any, str, dict[str, AXS], any, str) -> None
def _parse_config_file(self, fd, user, mread, mwrite, madm, mflags, mount):
vol_src = None
vol_dst = None
self.line_ctr = 0
@@ -522,7 +342,7 @@ class AuthSrv(object):
if vol_src is None:
if ln.startswith("u "):
u, p = ln[2:].split(":", 1)
acct[u] = p
user[u] = p
else:
vol_src = ln
continue
@@ -533,50 +353,50 @@ class AuthSrv(object):
raise Exception('invalid mountpoint "{}"'.format(vol_dst))
# cfg files override arguments and previous files
vol_src = bos.path.abspath(vol_src)
vol_src = fsdec(os.path.abspath(fsenc(vol_src)))
vol_dst = vol_dst.strip("/")
mount[vol_dst] = vol_src
daxs[vol_dst] = AXS()
mread[vol_dst] = []
mwrite[vol_dst] = []
madm[vol_dst] = []
mflags[vol_dst] = {}
continue
try:
lvl, uname = ln.split(" ", 1)
except:
if len(ln) > 1:
lvl, uname = ln.split(" ")
else:
lvl = ln
uname = "*"
if lvl == "a":
m = "WARNING (config-file): permission flag 'a' is deprecated; please use 'rw' instead"
self.log(m, 1)
self._read_vol_str(
lvl,
uname,
mread[vol_dst],
mwrite[vol_dst],
madm[vol_dst],
mflags[vol_dst],
)
self._read_vol_str(lvl, uname, daxs[vol_dst], mflags[vol_dst])
def _read_vol_str(self, lvl, uname, axs, flags):
# type: (str, str, AXS, any) -> None
def _read_vol_str(self, lvl, uname, mr, mw, ma, mf):
if lvl == "c":
cval = True
if "=" in uname:
uname, cval = uname.split("=", 1)
self._read_volflag(flags, uname, cval, False)
self._read_volflag(mf, uname, cval, False)
return
if uname == "":
uname = "*"
for un in uname.split(","):
if "r" in lvl:
axs.uread[un] = 1
if lvl in "ra":
mr.append(uname)
if "w" in lvl:
axs.uwrite[un] = 1
if lvl in "wa":
mw.append(uname)
if "m" in lvl:
axs.umove[un] = 1
if "d" in lvl:
axs.udel[un] = 1
if lvl == "a":
ma.append(uname)
def _read_volflag(self, flags, name, value, is_list):
if name not in ["mtp"]:
@@ -598,26 +418,23 @@ class AuthSrv(object):
before finally building the VFS
"""
acct = {} # username:password
daxs = {} # type: dict[str, AXS]
user = {} # username:password
mread = {} # mountpoint:[username]
mwrite = {} # mountpoint:[username]
madm = {} # mountpoint:[username]
mflags = {} # mountpoint:[flag]
mount = {} # dst:src (mountpoint:realpath)
if self.args.a:
# list of username:password
for x in self.args.a:
try:
u, p = x.split(":", 1)
acct[u] = p
except:
m = '\n invalid value "{}" for argument -a, must be username:password'
raise Exception(m.format(x))
for u, p in [x.split(":", 1) for x in self.args.a]:
user[u] = p
if self.args.v:
# list of src:dst:permset:permset:...
# permset is <rwmd>[,username][,username] or <c>,<flag>[=args]
# permset is [rwa]username or [c]flag
for v_str in self.args.v:
m = re_vol.match(v_str)
m = self.re_vol.match(v_str)
if not m:
raise Exception("invalid -v argument: [{}]".format(v_str))
@@ -626,41 +443,49 @@ class AuthSrv(object):
src = uncyg(src)
# print("\n".join([src, dst, perms]))
src = bos.path.abspath(src)
src = fsdec(os.path.abspath(fsenc(src)))
dst = dst.strip("/")
mount[dst] = src
daxs[dst] = AXS()
mread[dst] = []
mwrite[dst] = []
madm[dst] = []
mflags[dst] = {}
for x in perms.split(":"):
lvl, uname = x.split(",", 1) if "," in x else [x, ""]
self._read_vol_str(lvl, uname, daxs[dst], mflags[dst])
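# illustration: with -v /mnt/nas/music:/music:rw,ed:c,e2ds the perms part is
# "rw,ed:c,e2ds", producing (lvl, uname) pairs ("rw", "ed") and ("c", "e2ds") --
# read/write for user ed plus the e2ds volflag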
perms = perms.split(":")
for (lvl, uname) in [[x[0], x[1:]] for x in perms]:
self._read_vol_str(
lvl, uname, mread[dst], mwrite[dst], madm[dst], mflags[dst]
)
if self.args.c:
for cfg_fn in self.args.c:
with open(cfg_fn, "rb") as f:
try:
self._parse_config_file(f, acct, daxs, mflags, mount)
self._parse_config_file(
f, user, mread, mwrite, madm, mflags, mount
)
except:
m = "\n\033[1;31m\nerror in config file {} on line {}:\n\033[0m"
self.log(m.format(cfg_fn, self.line_ctr), 1)
print(m.format(cfg_fn, self.line_ctr))
raise
# case-insensitive; normalize
if WINDOWS:
cased = {}
for k, v in mount.items():
cased[k] = absreal(v)
try:
cased[k] = fsdec(os.path.realpath(fsenc(v)))
except:
cased[k] = v
mount = cased
if not mount:
# -h says our defaults are CWD at root and read/write for everyone
axs = AXS(["*"], ["*"], None, None)
vfs = VFS(self.log_func, bos.path.abspath("."), "", axs, {})
vfs = VFS(os.path.abspath("."), "", ["*"], ["*"])
elif "" not in mount:
# there's volumes but no root; make root inaccessible
vfs = VFS(self.log_func, None, "", AXS(), {})
vfs = VFS(None, "")
vfs.flags["d2d"] = True
maxdepth = 0
@@ -671,34 +496,26 @@ class AuthSrv(object):
if dst == "":
# rootfs was mapped; fully replaces the default CWD vfs
vfs = VFS(self.log_func, mount[dst], dst, daxs[dst], mflags[dst])
vfs = VFS(
mount[dst], dst, mread[dst], mwrite[dst], madm[dst], mflags[dst]
)
continue
v = vfs.add(mount[dst], dst)
v.axs = daxs[dst]
v.uread = mread[dst]
v.uwrite = mwrite[dst]
v.uadm = madm[dst]
v.flags = mflags[dst]
v.dbv = None
vfs.all_vols = {}
vfs.get_all_vols(vfs.all_vols)
for perm in "read write move del".split():
axs_key = "u" + perm
unames = ["*"] + list(acct.keys())
umap = {x: [] for x in unames}
for usr in unames:
for mp, vol in vfs.all_vols.items():
if usr in getattr(vol.axs, axs_key):
umap[usr].append(mp)
setattr(vfs, "a" + perm, umap)
all_users = {}
missing_users = {}
for axs in daxs.values():
for d in [axs.uread, axs.uwrite, axs.umove, axs.udel]:
for usr in d.keys():
all_users[usr] = 1
if usr != "*" and usr not in acct:
for d in [mread, mwrite]:
for _, ul in d.items():
for usr in ul:
if usr != "*" and usr not in user:
missing_users[usr] = 1
if missing_users:
@@ -722,7 +539,10 @@ class AuthSrv(object):
elif self.args.hist:
for nch in range(len(hid)):
hpath = os.path.join(self.args.hist, hid[: nch + 1])
bos.makedirs(hpath)
try:
os.makedirs(hpath)
except:
pass
powner = os.path.join(hpath, "owner.txt")
try:
@@ -742,9 +562,9 @@ class AuthSrv(object):
vol.histpath = hpath
break
vol.histpath = absreal(vol.histpath)
vol.histpath = os.path.realpath(vol.histpath)
if vol.dbv:
if bos.path.exists(os.path.join(vol.histpath, "up2k.db")):
if os.path.exists(os.path.join(vol.histpath, "up2k.db")):
promote.append(vol)
vol.dbv = None
else:
@@ -767,50 +587,10 @@ class AuthSrv(object):
vfs.histtab = {v.realpath: v.histpath for v in vfs.all_vols.values()}
for vol in vfs.all_vols.values():
lim = Lim()
use = False
if vol.flags.get("nosub"):
use = True
lim.nosub = True
v = vol.flags.get("sz")
if v:
use = True
lim.smin, lim.smax = [unhumanize(x) for x in v.split("-")]
v = vol.flags.get("rotn")
if v:
use = True
lim.rotn, lim.rotl = [int(x) for x in v.split(",")]
v = vol.flags.get("rotf")
if v:
use = True
lim.set_rotf(v)
v = vol.flags.get("maxn")
if v:
use = True
lim.nmax, lim.nwin = [int(x) for x in v.split(",")]
v = vol.flags.get("maxb")
if v:
use = True
lim.bmax, lim.bwin = [unhumanize(x) for x in v.split(",")]
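# e.g. a maxb volflag of "1g,300" (assuming unhumanize reads "1g" as one GiB)
# would cap uploads at roughly 1 GiB per client ip within a 300-second window,
# enforced by Lim.chk_bup above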
if use:
vol.lim = lim
for vol in vfs.all_vols.values():
if "pk" in vol.flags and "gz" not in vol.flags and "xz" not in vol.flags:
vol.flags["gz"] = False # def.pk
all_mte = {}
errors = False
for vol in vfs.all_vols.values():
if (self.args.e2ds and vol.axs.uwrite) or self.args.e2dsa:
if (self.args.e2ds and vol.uwrite) or self.args.e2dsa:
vol.flags["e2ds"] = True
if self.args.e2d or "e2ds" in vol.flags:
@@ -828,11 +608,9 @@ class AuthSrv(object):
if k1 in vol.flags:
vol.flags[k2] = True
# default tag cfgs if unset
# default tag-list if unset
if "mte" not in vol.flags:
vol.flags["mte"] = self.args.mte
if "mth" not in vol.flags:
vol.flags["mth"] = self.args.mth
# append parsers from argv to volume-flags
self._read_volflag(vol.flags, "mtp", self.args.mtp, True)
@@ -901,27 +679,6 @@ class AuthSrv(object):
vfs.bubble_flags()
m = "volumes and permissions:\n"
for v in vfs.all_vols.values():
if not self.warn_anonwrite:
break
m += '\n\033[36m"/{}" \033[33m{}\033[0m'.format(v.vpath, v.realpath)
for txt, attr in [
[" read", "uread"],
[" write", "uwrite"],
[" move", "umove"],
["delete", "udel"],
]:
u = list(sorted(getattr(v.axs, attr).keys()))
u = ", ".join("\033[35meverybody\033[0m" if x == "*" else x for x in u)
u = u if u else "\033[36m--none--\033[0m"
m += "\n| {}: {}".format(txt, u)
m += "\n"
if self.warn_anonwrite and not self.args.no_voldump:
self.log(m)
try:
v, _ = vfs.get("/", "*", False, True)
if self.warn_anonwrite and os.getcwd() == v.realpath:
@@ -933,14 +690,17 @@ class AuthSrv(object):
with self.mutex:
self.vfs = vfs
self.acct = acct
self.iacct = {v: k for k, v in acct.items()}
self.user = user
self.iuser = {v: k for k, v in user.items()}
self.re_pwd = None
pwds = [re.escape(x) for x in self.iacct.keys()]
pwds = [re.escape(x) for x in self.iuser.keys()]
if pwds:
self.re_pwd = re.compile("=(" + "|".join(pwds) + ")([]&; ]|$)")
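# re_pwd is what unpwd() in httpcli uses to mask any known password appearing
# in a logged request line (e.g. "?pw=hunter2"), swapping it for the matching
# username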
# import pprint
# pprint.pprint({"usr": user, "rd": mread, "wr": mwrite, "mnt": mount})
def dbg_ls(self):
users = self.args.ls
vols = "*"
@@ -958,12 +718,12 @@ class AuthSrv(object):
pass
if users == "**":
users = list(self.acct.keys()) + ["*"]
users = list(self.user.keys()) + ["*"]
else:
users = [users]
for u in users:
if u not in self.acct and u != "*":
if u not in self.user and u != "*":
raise Exception("user not found: " + u)
if vols == "*":
@@ -979,10 +739,8 @@ class AuthSrv(object):
raise Exception("volume not found: " + v)
self.log({"users": users, "vols": vols, "flags": flags})
m = "/{}: read({}) write({}) move({}) del({})"
for k, v in self.vfs.all_vols.items():
vc = v.axs
self.log(m.format(k, vc.uread, vc.uwrite, vc.umove, vc.udel))
self.log("/{}: read({}) write({})".format(k, v.uread, v.uwrite))
flag_v = "v" in flags
flag_ln = "ln" in flags
@@ -996,15 +754,13 @@ class AuthSrv(object):
for u in users:
self.log("checking /{} as {}".format(v, u))
try:
vn, _ = self.vfs.get(v, u, True, False, False, False)
vn, _ = self.vfs.get(v, u, True, False)
except:
continue
atop = vn.realpath
g = vn.walk(
vn.vpath, "", [], u, [[True]], True, not self.args.no_scandir, False
)
for _, _, vpath, apath, files, _, _ in g:
g = vn.walk("", "", [], u, True, not self.args.no_scandir, False)
for vpath, apath, files, _, _ in g:
fnames = [n[0] for n in files]
vpaths = [vpath + "/" + n for n in fnames] if vpath else fnames
vpaths = [vtop + x for x in vpaths]
@@ -1024,7 +780,7 @@ class AuthSrv(object):
msg = [x[1] for x in files]
if msg:
self.log("\n" + "\n".join(msg))
nuprint("\n".join(msg))
if n_bads and flag_p:
raise Exception("found symlink leaving volume, and strict is set")


@@ -1,59 +0,0 @@
# coding: utf-8
from __future__ import print_function, unicode_literals
import os
from ..util import fsenc, fsdec
from . import path
# grep -hRiE '(^|[^a-zA-Z_\.-])os\.' . | gsed -r 's/ /\n/g;s/\(/(\n/g' | grep -hRiE '(^|[^a-zA-Z_\.-])os\.' | sort | uniq -c
# printf 'os\.(%s)' "$(grep ^def bos/__init__.py | gsed -r 's/^def //;s/\(.*//' | tr '\n' '|' | gsed -r 's/.$//')"
def chmod(p, mode):
return os.chmod(fsenc(p), mode)
def listdir(p="."):
return [fsdec(x) for x in os.listdir(fsenc(p))]
def lstat(p):
return os.lstat(fsenc(p))
def makedirs(name, mode=0o755, exist_ok=True):
bname = fsenc(name)
try:
os.makedirs(bname, mode=mode)
except:
if not exist_ok or not os.path.isdir(bname):
raise
def mkdir(p, mode=0o755):
return os.mkdir(fsenc(p), mode=mode)
def rename(src, dst):
return os.rename(fsenc(src), fsenc(dst))
def replace(src, dst):
return os.replace(fsenc(src), fsenc(dst))
def rmdir(p):
return os.rmdir(fsenc(p))
def stat(p):
return os.stat(fsenc(p))
def unlink(p):
return os.unlink(fsenc(p))
def utime(p, times=None):
return os.utime(fsenc(p), times)


@@ -1,33 +0,0 @@
# coding: utf-8
from __future__ import print_function, unicode_literals
import os
from ..util import fsenc, fsdec
def abspath(p):
return fsdec(os.path.abspath(fsenc(p)))
def exists(p):
return os.path.exists(fsenc(p))
def getmtime(p):
return os.path.getmtime(fsenc(p))
def getsize(p):
return os.path.getsize(fsenc(p))
def isdir(p):
return os.path.isdir(fsenc(p))
def islink(p):
return os.path.islink(fsenc(p))
def realpath(p):
return fsdec(os.path.realpath(fsenc(p)))


@@ -4,11 +4,17 @@ from __future__ import print_function, unicode_literals
import time
import threading
from .__init__ import PY2, WINDOWS, VT100
from .broker_util import try_exec
from .broker_mpw import MpWorker
from .util import mp
if PY2 and not WINDOWS:
from multiprocessing.reduction import ForkingPickler
from StringIO import StringIO as MemesIO # pylint: disable=import-error
class BrokerMp(object):
"""external api; manages MpWorkers"""
@@ -22,19 +28,24 @@ class BrokerMp(object):
self.retpend_mutex = threading.Lock()
self.mutex = threading.Lock()
self.num_workers = self.args.j or mp.cpu_count()
self.log("broker", "booting {} subprocesses".format(self.num_workers))
for n in range(1, self.num_workers + 1):
cores = self.args.j
if not cores:
cores = mp.cpu_count()
self.log("broker", "booting {} subprocesses".format(cores))
for n in range(cores):
q_pend = mp.Queue(1)
q_yield = mp.Queue(64)
proc = mp.Process(target=MpWorker, args=(q_pend, q_yield, self.args, n))
proc.q_pend = q_pend
proc.q_yield = q_yield
proc.nid = n
proc.clients = {}
proc.workload = 0
thr = threading.Thread(
target=self.collector, args=(proc,), name="mp-sink-{}".format(n)
target=self.collector, args=(proc,), name="mp-collector"
)
thr.daemon = True
thr.start()
@@ -42,6 +53,13 @@ class BrokerMp(object):
self.procs.append(proc)
proc.start()
if not self.args.q:
thr = threading.Thread(
target=self.debug_load_balancer, name="mp-dbg-loadbalancer"
)
thr.daemon = True
thr.start()
def shutdown(self):
self.log("broker", "shutting down")
for n, proc in enumerate(self.procs):
@@ -71,6 +89,20 @@ class BrokerMp(object):
if dest == "log":
self.log(*args)
elif dest == "workload":
with self.mutex:
proc.workload = args[0]
elif dest == "httpdrop":
addr = args[0]
with self.mutex:
del proc.clients[addr]
if not proc.clients:
proc.workload = 0
self.hub.tcpsrv.num_clients.add(-1)
elif dest == "retq":
# response from previous ipc call
with self.retpend_mutex:
@@ -96,12 +128,38 @@ class BrokerMp(object):
returns a Queue object which eventually contains the response if want_retval
(not-impl here since nothing uses it yet)
"""
if dest == "listen":
for p in self.procs:
p.q_pend.put([0, dest, [args[0], len(self.procs)]])
if dest == "httpconn":
sck, addr = args
sck2 = sck
if PY2:
buf = MemesIO()
ForkingPickler(buf).dump(sck)
sck2 = buf.getvalue()
elif dest == "cb_httpsrv_up":
self.hub.cb_httpsrv_up()
proc = sorted(self.procs, key=lambda x: x.workload)[0]
proc.q_pend.put([0, dest, [sck2, addr]])
with self.mutex:
proc.clients[addr] = 50
proc.workload += 50
else:
raise Exception("what is " + str(dest))
def debug_load_balancer(self):
fmt = "\033[1m{}\033[0;36m{:4}\033[0m "
if not VT100:
fmt = "({}{:4})"
last = ""
while self.procs:
msg = ""
for proc in self.procs:
msg += fmt.format(len(proc.clients), proc.workload)
if msg != last:
last = msg
with self.hub.log_mutex:
print(msg)
time.sleep(0.1)


@@ -1,14 +1,19 @@
# coding: utf-8
from __future__ import print_function, unicode_literals
from copyparty.authsrv import AuthSrv
import sys
import time
import signal
import threading
from .__init__ import PY2, WINDOWS
from .broker_util import ExceptionalQueue
from .httpsrv import HttpSrv
from .util import FAKE_MP
from copyparty.authsrv import AuthSrv
if PY2 and not WINDOWS:
import pickle # nosec
class MpWorker(object):
@@ -20,23 +25,22 @@ class MpWorker(object):
self.args = args
self.n = n
self.log = self._log_disabled if args.q and not args.lo else self._log_enabled
self.retpend = {}
self.retpend_mutex = threading.Lock()
self.mutex = threading.Lock()
self.workload_thr_alive = False
# we inherited signal_handler from parent,
# replace it with something harmless
if not FAKE_MP:
for sig in [signal.SIGINT, signal.SIGTERM]:
signal.signal(sig, self.signal_handler)
signal.signal(signal.SIGINT, self.signal_handler)
# starting to look like a good idea
self.asrv = AuthSrv(args, None, False)
# instantiate all services here (TODO: inheritance?)
self.httpsrv = HttpSrv(self, n)
self.httpsrv = HttpSrv(self, True)
self.httpsrv.disconnect_func = self.httpdrop
# on winxp and some other platforms,
# use thr.join() to block all signals
@@ -45,19 +49,19 @@ class MpWorker(object):
thr.start()
thr.join()
def signal_handler(self, sig, frame):
def signal_handler(self, signal, frame):
# print('k')
pass
def _log_enabled(self, src, msg, c=0):
def log(self, src, msg, c=0):
self.q_yield.put([0, "log", [src, msg, c]])
def _log_disabled(self, src, msg, c=0):
pass
def logw(self, msg, c=0):
self.log("mp{}".format(self.n), msg, c)
def httpdrop(self, addr):
self.q_yield.put([0, "httpdrop", [addr]])
def main(self):
while True:
retq_id, dest, args = self.q_pend.get()
@@ -69,8 +73,24 @@ class MpWorker(object):
sys.exit(0)
return
elif dest == "listen":
self.httpsrv.listen(args[0], args[1])
elif dest == "httpconn":
sck, addr = args
if PY2:
sck = pickle.loads(sck) # nosec
if self.args.log_conn:
self.log("%s %s" % addr, "|%sC-qpop" % ("-" * 4,), c="1;30")
self.httpsrv.accept(sck, addr)
with self.mutex:
if not self.workload_thr_alive:
self.workload_thr_alive = True
thr = threading.Thread(
target=self.thr_workload, name="mpw-workload"
)
thr.daemon = True
thr.start()
elif dest == "retq":
# response from previous ipc call
@@ -94,3 +114,16 @@ class MpWorker(object):
self.q_yield.put([retq_id, dest, args])
return retq
def thr_workload(self):
"""announce workloads to MpSrv (the mp controller / loadbalancer)"""
# avoid locking in extract_filedata by tracking difference here
while True:
time.sleep(0.2)
with self.mutex:
if self.httpsrv.num_clients() == 0:
# no clients rn, terminate thread
self.workload_thr_alive = False
return
self.q_yield.put([0, "workload", [self.httpsrv.workload]])


@@ -3,6 +3,7 @@ from __future__ import print_function, unicode_literals
import threading
from .authsrv import AuthSrv
from .httpsrv import HttpSrv
from .broker_util import ExceptionalQueue, try_exec
@@ -17,10 +18,10 @@ class BrokerThr(object):
self.asrv = hub.asrv
self.mutex = threading.Lock()
self.num_workers = 1
# instantiate all services here (TODO: inheritance?)
self.httpsrv = HttpSrv(self, None)
self.httpsrv = HttpSrv(self)
self.httpsrv.disconnect_func = self.httpdrop
def shutdown(self):
# self.log("broker", "shutting down")
@@ -28,8 +29,12 @@ class BrokerThr(object):
pass
def put(self, want_retval, dest, *args):
if dest == "listen":
self.httpsrv.listen(args[0], 1)
if dest == "httpconn":
sck, addr = args
if self.args.log_conn:
self.log("%s %s" % addr, "|%sC-qpop" % ("-" * 4,), c="1;30")
self.httpsrv.accept(sck, addr)
else:
# new ipc invoking managed service in hub
@@ -46,3 +51,6 @@ class BrokerThr(object):
retq = ExceptionalQueue(1)
retq.put(rv)
return retq
def httpdrop(self, addr):
self.hub.tcpsrv.num_clients.add(-1)


@@ -13,17 +13,16 @@ import ctypes
from datetime import datetime
import calendar
try:
import lzma
except:
pass
from .__init__ import E, PY2, WINDOWS, ANYWIN, unicode
from .__init__ import E, PY2, WINDOWS, ANYWIN
from .util import * # noqa # pylint: disable=unused-wildcard-import
from .bos import bos
from .authsrv import AuthSrv, Lim
from .authsrv import AuthSrv
from .szip import StreamZip
from .star import StreamTar
from .vcr import VCR_Direct
from .th_srv import FMT_FF
if not PY2:
unicode = str
NO_CACHE = {"Cache-Control": "no-cache"}
@@ -43,6 +42,7 @@ class HttpCli(object):
self.ip = conn.addr[0]
self.addr = conn.addr # type: tuple[str, int]
self.args = conn.args
self.is_mp = conn.is_mp
self.asrv = conn.asrv # type: AuthSrv
self.ico = conn.ico
self.thumbcli = conn.thumbcli
@@ -64,12 +64,9 @@ class HttpCli(object):
def unpwd(self, m):
a, b = m.groups()
return "=\033[7m {} \033[27m{}".format(self.asrv.iacct[a], b)
def _check_nonfatal(self, ex, post):
if post:
return ex.code < 300
return "=\033[7m {} \033[27m{}".format(self.asrv.iuser[a], b)
def _check_nonfatal(self, ex):
return ex.code < 400 or ex.code in [404, 429]
def _assert_safe_rem(self, rem):
@@ -103,15 +100,11 @@ class HttpCli(object):
try:
self.mode, self.req, self.http_ver = headerlines[0].split(" ")
except:
msg = " ]\n#[ ".join(headerlines)
raise Pebkac(400, "bad headers:\n#[ " + msg + " ]")
raise Pebkac(400, "bad headers:\n" + "\n".join(headerlines))
except Pebkac as ex:
self.mode = "GET"
self.req = "[junk]"
self.http_ver = "HTTP/1.1"
# self.log("pebkac at httpcli.run #1: " + repr(ex))
self.keepalive = False
self.keepalive = self._check_nonfatal(ex)
self.loud_reply(unicode(ex), status=ex.code)
return self.keepalive
@@ -185,19 +178,14 @@ class HttpCli(object):
if kc in cookies and ku not in uparam:
uparam[ku] = cookies[kc]
if len(uparam) > 10 or len(cookies) > 50:
raise Pebkac(400, "u wot m8")
self.uparam = uparam
self.cookies = cookies
self.vpath = unquotep(vpath) # not query, so + means +
self.vpath = unquotep(vpath)
pwd = uparam.get("pw")
self.uname = self.asrv.iacct.get(pwd, "*")
self.rvol = self.asrv.vfs.aread[self.uname]
self.wvol = self.asrv.vfs.awrite[self.uname]
self.mvol = self.asrv.vfs.amove[self.uname]
self.dvol = self.asrv.vfs.adel[self.uname]
self.uname = self.asrv.iuser.get(pwd, "*")
self.rvol, self.wvol, self.avol = [[], [], []]
self.asrv.vfs.user_tree(self.uname, self.rvol, self.wvol, self.avol)
if pwd and "pw" in self.ouparam and pwd != cookies.get("cppwd"):
self.out_headers["Set-Cookie"] = self.get_pwd_cookie(pwd)[0]
@@ -227,8 +215,7 @@ class HttpCli(object):
except Pebkac as ex:
try:
# self.log("pebkac at httpcli.run #2: " + repr(ex))
post = self.mode in ["POST", "PUT"] or "content-length" in self.headers
if not self._check_nonfatal(ex, post):
if not self._check_nonfatal(ex):
self.keepalive = False
self.log("{}\033[0m, {}".format(str(ex), self.vpath), 3)
@@ -241,18 +228,19 @@ class HttpCli(object):
except Pebkac:
return False
def send_headers(self, length, status=200, mime=None, headers=None):
def send_headers(self, length, status=200, mime=None, headers={}):
response = ["{} {} {}".format(self.http_ver, status, HTTPCODE[status])]
if length is not None:
if length is None:
self.keepalive = False
else:
response.append("Content-Length: " + unicode(length))
# close if unknown length, otherwise take client's preference
response.append("Connection: " + ("Keep-Alive" if self.keepalive else "Close"))
# headers{} overrides anything set previously
if headers:
self.out_headers.update(headers)
self.out_headers.update(headers)
# default to utf8 html if no content-type is set
if not mime:
@@ -269,7 +257,7 @@ class HttpCli(object):
except:
raise Pebkac(400, "client d/c while replying headers")
def reply(self, body, status=200, mime=None, headers=None):
def reply(self, body, status=200, mime=None, headers={}):
# TODO something to reply with user-supplied values safely
self.send_headers(len(body), status, mime, headers)
@@ -285,7 +273,7 @@ class HttpCli(object):
self.log(body.rstrip())
self.reply(b"<pre>" + body.encode("utf-8") + b"\r\n", *list(args), **kwargs)
def urlq(self, add, rm):
def urlq(self, add={}, rm=[]):
"""
generates url query based on uparam (b, pw, all others)
removing anything in rm, adding pairs in add
@@ -354,37 +342,9 @@ class HttpCli(object):
static_path = os.path.join(E.mod, "web/", self.vpath[5:])
return self.tx_file(static_path)
x = self.asrv.vfs.can_access(self.vpath, self.uname)
self.can_read, self.can_write, self.can_move, self.can_delete = x
if not self.can_read and not self.can_write:
if self.vpath:
self.log("inaccessible: [{}]".format(self.vpath))
raise Pebkac(404)
self.uparam["h"] = False
if "tree" in self.uparam:
return self.tx_tree()
if "delete" in self.uparam:
return self.handle_rm()
if "move" in self.uparam:
return self.handle_mv()
if "scan" in self.uparam:
return self.scanvol()
if not self.vpath:
if "stack" in self.uparam:
return self.tx_stack()
if "ups" in self.uparam:
return self.tx_ups()
if "h" in self.uparam:
return self.tx_mounts()
# conditional redirect to single volumes
if self.vpath == "" and not self.ouparam:
nread = len(self.rvol)
@@ -399,6 +359,24 @@ class HttpCli(object):
self.redirect(vpath, flavor="redirecting to", use302=True)
return True
self.readable, self.writable = self.asrv.vfs.can_access(self.vpath, self.uname)
if not self.readable and not self.writable:
if self.vpath:
self.log("inaccessible: [{}]".format(self.vpath))
raise Pebkac(404)
self.uparam = {"h": False}
if "h" in self.uparam:
self.vpath = None
return self.tx_mounts()
if "scan" in self.uparam:
return self.scanvol()
if "stack" in self.uparam:
return self.tx_stack()
return self.tx_browser()
def handle_options(self):
@@ -499,94 +477,20 @@ class HttpCli(object):
def dump_to_file(self):
reader, remains = self.get_body_reader()
vfs, rem = self.asrv.vfs.get(self.vpath, self.uname, False, True)
lim = vfs.get_dbv(rem)[0].lim
fdir = os.path.join(vfs.realpath, rem)
if lim:
fdir, rem = lim.all(self.ip, rem, remains, fdir)
bos.makedirs(fdir)
addr = self.ip.replace(":", ".")
fn = "put-{:.6f}-{}.bin".format(time.time(), addr)
path = os.path.join(fdir, fn)
if self.args.nw:
path = os.devnull
open_f = open
open_a = [fsenc(path), "wb", 512 * 1024]
open_ka = {}
with open(fsenc(path), "wb", 512 * 1024) as f:
post_sz, _, sha_b64 = hashcopy(self.conn, reader, f)
# user-request || config-force
if ("gz" in vfs.flags or "xz" in vfs.flags) and (
"pk" in vfs.flags
or "pk" in self.uparam
or "gz" in self.uparam
or "xz" in self.uparam
):
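# e.g. a client can request gzip with ?gz (or ?gz=9 to pick the level), xz with
# ?xz, or ?pk for the server default; the pk volflag forces compression on even
# when the client did not ask for it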
fb = {"gz": 9, "xz": 0} # default/fallback level
lv = {} # selected level
alg = None # selected algo (gz=preferred)
vfs, vrem = vfs.get_dbv(rem)
# user-prefs first
if "gz" in self.uparam or "pk" in self.uparam: # def.pk
alg = "gz"
if "xz" in self.uparam:
alg = "xz"
if alg:
v = self.uparam.get(alg)
lv[alg] = fb[alg] if v is None else int(v)
if alg not in vfs.flags:
alg = "gz" if "gz" in vfs.flags else "xz"
# then server overrides
pk = vfs.flags.get("pk")
if pk is not None:
# config-forced on
alg = alg or "gz" # def.pk
try:
# config-forced opts
alg, lv = pk.split(",")
lv[alg] = int(lv)
except:
pass
lv[alg] = lv.get(alg) or fb.get(alg)
self.log("compressing with {} level {}".format(alg, lv.get(alg)))
if alg == "gz":
open_f = gzip.GzipFile
open_a = [fsenc(path), "wb", lv[alg], None, 0x5FEE6600] # 2021-01-01
elif alg == "xz":
open_f = lzma.open
open_a = [fsenc(path), "wb"]
open_ka = {"preset": lv[alg]}
else:
self.log("fallthrough? thats a bug", 1)
with open_f(*open_a, **open_ka) as f:
post_sz, _, sha_b64 = hashcopy(reader, f)
if lim:
lim.nup(self.ip)
lim.bup(self.ip, post_sz)
try:
lim.chk_sz(post_sz)
except:
bos.unlink(path)
raise
if not self.args.nw:
vfs, vrem = vfs.get_dbv(rem)
self.conn.hsrv.broker.put(
False,
"up2k.hash_file",
vfs.realpath,
vfs.flags,
vrem,
fn,
self.ip,
time.time(),
)
self.conn.hsrv.broker.put(
False, "up2k.hash_file", vfs.realpath, vfs.flags, vrem, fn
)
return post_sz, sha_b64, remains, path
@@ -656,7 +560,7 @@ class HttpCli(object):
try:
remains = int(self.headers["content-length"])
except:
raise Pebkac(411)
raise Pebkac(400, "you must supply a content-length for JSON POST")
if remains > 1024 * 1024:
raise Pebkac(413, "json 2big")
@@ -679,17 +583,17 @@ class HttpCli(object):
if "srch" in self.uparam or "srch" in body:
return self.handle_search(body)
if "delete" in self.uparam:
return self.handle_rm(body)
# up2k-php compat
for k in "chunkpit.php", "handshake.php":
if self.vpath.endswith(k):
self.vpath = self.vpath[: -len(k)]
sub = None
name = undot(body["name"])
if "/" in name:
raise Pebkac(400, "your client is old; press CTRL-SHIFT-R and try again")
sub, name = name.rsplit("/", 1)
self.vpath = "/".join([self.vpath, sub]).strip("/")
body["name"] = name
vfs, rem = self.asrv.vfs.get(self.vpath, self.uname, False, True)
dbv, vrem = vfs.get_dbv(rem)
@@ -700,26 +604,28 @@ class HttpCli(object):
body["addr"] = self.ip
body["vcfg"] = dbv.flags
if rem:
if sub:
try:
dst = os.path.join(vfs.realpath, rem)
if not bos.path.isdir(dst):
bos.makedirs(dst)
if not os.path.isdir(fsenc(dst)):
os.makedirs(fsenc(dst))
except OSError as ex:
self.log("makedirs failed [{}]".format(dst))
if not bos.path.isdir(dst):
if ex.errno == 13:
raise Pebkac(500, "the server OS denied write-access")
if ex.errno == 13:
raise Pebkac(500, "the server OS denied write-access")
if ex.errno == 17:
raise Pebkac(400, "some file got your folder name")
if ex.errno == 17:
raise Pebkac(400, "some file got your folder name")
raise Pebkac(500, min_ex())
raise Pebkac(500, min_ex())
except:
raise Pebkac(500, min_ex())
x = self.conn.hsrv.broker.put(True, "up2k.handle_json", body)
ret = x.get()
if sub:
ret["name"] = "/".join([sub, ret["name"]])
ret = json.dumps(ret)
self.log(ret)
self.reply(ret.encode("utf-8"), mime="application/json")
@@ -809,7 +715,7 @@ class HttpCli(object):
with open(fsenc(path), "rb+", 512 * 1024) as f:
f.seek(cstart[0])
post_sz, _, sha_b64 = hashcopy(reader, f)
post_sz, _, sha_b64 = hashcopy(self.conn, reader, f)
if sha_b64 != chash:
raise Pebkac(
@@ -850,7 +756,7 @@ class HttpCli(object):
times = (int(time.time()), int(lastmod))
self.log("no more chunks, setting times {}".format(times))
try:
bos.utime(path, times)
os.utime(fsenc(path), times)
except:
self.log("failed to utime ({}, {})".format(path, times))
@@ -869,7 +775,7 @@ class HttpCli(object):
return True
def get_pwd_cookie(self, pwd):
if pwd in self.asrv.iacct:
if pwd in self.asrv.iuser:
msg = "login ok"
dt = datetime.utcfromtimestamp(time.time() + 60 * 60 * 24 * 365)
exp = dt.strftime("%a, %d %b %Y %H:%M:%S GMT")
@@ -889,20 +795,20 @@ class HttpCli(object):
vfs, rem = self.asrv.vfs.get(self.vpath, self.uname, False, True)
self._assert_safe_rem(rem)
sanitized = sanitize_fn(new_dir, "", [])
sanitized = sanitize_fn(new_dir)
if not nullwrite:
fdir = os.path.join(vfs.realpath, rem)
fn = os.path.join(fdir, sanitized)
if not bos.path.isdir(fdir):
if not os.path.isdir(fsenc(fdir)):
raise Pebkac(500, "parent folder does not exist")
if bos.path.isdir(fn):
if os.path.isdir(fsenc(fn)):
raise Pebkac(500, "that folder exists already")
try:
bos.mkdir(fn)
os.mkdir(fsenc(fn))
except OSError as ex:
if ex.errno == 13:
raise Pebkac(500, "the server OS denied write-access")
@@ -926,13 +832,13 @@ class HttpCli(object):
if not new_file.endswith(".md"):
new_file += ".md"
sanitized = sanitize_fn(new_file, "", [])
sanitized = sanitize_fn(new_file)
if not nullwrite:
fdir = os.path.join(vfs.realpath, rem)
fn = os.path.join(fdir, sanitized)
if bos.path.exists(fn):
if os.path.exists(fsenc(fn)):
raise Pebkac(500, "that file exists already")
with open(fsenc(fn), "wb") as f:
@@ -947,11 +853,6 @@ class HttpCli(object):
vfs, rem = self.asrv.vfs.get(self.vpath, self.uname, False, True)
self._assert_safe_rem(rem)
lim = vfs.get_dbv(rem)[0].lim
fdir_base = os.path.join(vfs.realpath, rem)
if lim:
fdir_base, rem = lim.all(self.ip, rem, -1, fdir_base)
files = []
errmsg = ""
t0 = time.time()
@@ -961,10 +862,13 @@ class HttpCli(object):
self.log("discarding incoming file without filename")
# fallthrough
fdir = fdir_base
fname = sanitize_fn(p_file, "", [".prologue.html", ".epilogue.html"])
if p_file and not nullwrite:
if not bos.path.isdir(fdir):
fdir = os.path.join(vfs.realpath, rem)
fname = sanitize_fn(
p_file, bad=[".prologue.html", ".epilogue.html"]
)
if not os.path.isdir(fsenc(fdir)):
raise Pebkac(404, "that folder does not exist")
suffix = ".{:.6f}-{}".format(time.time(), self.ip)
@@ -974,43 +878,25 @@ class HttpCli(object):
fname = os.devnull
fdir = ""
if lim:
lim.chk_bup(self.ip)
lim.chk_nup(self.ip)
if not nullwrite:
bos.makedirs(fdir)
try:
with ren_open(fname, "wb", 512 * 1024, **open_args) as f:
f, fname = f["orz"]
abspath = os.path.join(fdir, fname)
self.log("writing to {}".format(abspath))
sz, sha512_hex, _ = hashcopy(p_data, f)
self.log("writing to {}/{}".format(fdir, fname))
sz, sha512_hex, _ = hashcopy(self.conn, p_data, f)
if sz == 0:
raise Pebkac(400, "empty files in post")
if lim:
lim.nup(self.ip)
lim.bup(self.ip, sz)
try:
lim.chk_sz(sz)
except:
bos.unlink(abspath)
raise
files.append([sz, sha512_hex, p_file, fname])
dbv, vrem = vfs.get_dbv(rem)
self.conn.hsrv.broker.put(
False,
"up2k.hash_file",
dbv.realpath,
dbv.flags,
vrem,
fname,
self.ip,
time.time(),
)
self.conn.nbyte += sz
files.append([sz, sha512_hex, p_file, fname])
dbv, vrem = vfs.get_dbv(rem)
self.conn.hsrv.broker.put(
False,
"up2k.hash_file",
dbv.realpath,
dbv.flags,
vrem,
fname,
)
self.conn.nbyte += sz
except Pebkac:
if fname != os.devnull:
@@ -1021,10 +907,10 @@ class HttpCli(object):
suffix = ".PARTIAL"
try:
bos.rename(fp, fp2 + suffix)
os.rename(fsenc(fp), fsenc(fp2 + suffix))
except:
fp2 = fp2[: -len(suffix) - 1]
bos.rename(fp, fp2 + suffix)
os.rename(fsenc(fp), fsenc(fp2 + suffix))
raise
@@ -1108,20 +994,13 @@ class HttpCli(object):
vfs, rem = self.asrv.vfs.get(self.vpath, self.uname, False, True)
self._assert_safe_rem(rem)
clen = int(self.headers.get("content-length", -1))
if clen == -1:
raise Pebkac(411)
rp, fn = vsplit(rem)
fp = os.path.join(vfs.realpath, rp)
lim = vfs.get_dbv(rem)[0].lim
if lim:
fp, rp = lim.all(self.ip, rp, clen, fp)
bos.makedirs(fp)
fp = os.path.join(fp, fn)
rem = "{}/{}".format(rp, fn).strip("/")
# TODO:
# the per-volume read/write permissions must be replaced with permission flags
# which would decide how to handle uploads to filenames which are taken,
# current behavior of creating a new name is a good default for binary files
# but should also offer a flag to takeover the filename and rename the old one
#
# stopgap:
if not rem.endswith(".md"):
raise Pebkac(400, "only markdown pls")
@@ -1133,9 +1012,10 @@ class HttpCli(object):
self.reply(response.encode("utf-8"))
return True
fp = os.path.join(vfs.realpath, rem)
srv_lastmod = srv_lastmod3 = -1
try:
st = bos.stat(fp)
st = os.stat(fsenc(fp))
srv_lastmod = st.st_mtime
srv_lastmod3 = int(srv_lastmod * 1000)
except OSError as ex:
@@ -1171,31 +1051,23 @@ class HttpCli(object):
self.reply(response.encode("utf-8"))
return True
# TODO another hack re: pending permissions rework
mdir, mfile = os.path.split(fp)
mfile2 = "{}.{:.3f}.md".format(mfile[:-3], srv_lastmod)
try:
bos.mkdir(os.path.join(mdir, ".hist"))
os.mkdir(fsenc(os.path.join(mdir, ".hist")))
except:
pass
bos.rename(fp, os.path.join(mdir, ".hist", mfile2))
os.rename(fsenc(fp), fsenc(os.path.join(mdir, ".hist", mfile2)))
p_field, _, p_data = next(self.parser.gen)
if p_field != "body":
raise Pebkac(400, "expected body, got {}".format(p_field))
with open(fsenc(fp), "wb", 512 * 1024) as f:
sz, sha512, _ = hashcopy(p_data, f)
sz, sha512, _ = hashcopy(self.conn, p_data, f)
if lim:
lim.nup(self.ip)
lim.bup(self.ip, sz)
try:
lim.chk_sz(sz)
except:
bos.unlink(fp)
raise
new_lastmod = bos.stat(fp).st_mtime
new_lastmod = os.stat(fsenc(fp)).st_mtime
new_lastmod3 = int(new_lastmod * 1000)
sha512 = sha512[:56]
@@ -1240,7 +1112,7 @@ class HttpCli(object):
for ext in ["", ".gz", ".br"]:
try:
fs_path = req_path + ext
st = bos.stat(fs_path)
st = os.stat(fsenc(fs_path))
file_ts = max(file_ts, st.st_mtime)
editions[ext or "plain"] = [fs_path, st.st_size]
except:
@@ -1383,7 +1255,8 @@ class HttpCli(object):
if use_sendfile:
remains = sendfile_kern(lower, upper, f, self.s)
else:
remains = sendfile_py(lower, upper, f, self.s)
actor = self.conn if self.is_mp else None
remains = sendfile_py(lower, upper, f, self.s, actor)
if remains > 0:
logmsg += " \033[31m" + unicode(upper - remains) + "\033[0m"
@@ -1417,9 +1290,11 @@ class HttpCli(object):
else:
fn = self.headers.get("host", "hey")
safe = (string.ascii_letters + string.digits).replace("%", "")
afn = "".join([x if x in safe.replace('"', "") else "_" for x in fn])
bascii = unicode(safe).encode("utf-8")
afn = "".join(
[x if x in (string.ascii_letters + string.digits) else "_" for x in fn]
)
bascii = unicode(string.ascii_letters + string.digits).encode("utf-8")
ufn = fn.encode("utf-8", "xmlcharrefreplace")
if PY2:
ufn = [unicode(x) if x in bascii else "%{:02x}".format(ord(x)) for x in ufn]
@@ -1434,12 +1309,11 @@ class HttpCli(object):
cdis = "attachment; filename=\"{}.{}\"; filename*=UTF-8''{}.{}"
cdis = cdis.format(afn, fmt, ufn, fmt)
self.log(cdis)
self.send_headers(None, mime=mime, headers={"Content-Disposition": cdis})
fgen = vn.zipgen(rem, items, self.uname, dots, not self.args.no_scandir)
# for f in fgen: print(repr({k: f[k] for k in ["vp", "ap"]}))
bgen = packer(self.log, fgen, utf8="utf" in uarg, pre_crc="crc" in uarg)
bgen = packer(fgen, utf8="utf" in uarg, pre_crc="crc" in uarg)
bsent = 0
for buf in bgen.gen():
if not buf:
@@ -1491,10 +1365,10 @@ class HttpCli(object):
html_path = os.path.join(E.mod, "web", "{}.html".format(tpl))
template = self.j2(tpl)
st = bos.stat(fs_path)
st = os.stat(fsenc(fs_path))
ts_md = st.st_mtime
st = bos.stat(html_path)
st = os.stat(fsenc(html_path))
ts_html = st.st_mtime
sz_md = 0
@@ -1503,7 +1377,7 @@ class HttpCli(object):
for c, v in [[b"&", 4], [b"<", 3], [b">", 3]]:
sz_md += (len(buf) - len(buf.replace(c, b""))) * v
file_ts = max(ts_md, ts_html, E.t0)
file_ts = max(ts_md, ts_html)
file_lastmod, do_send = self._chk_lastmod(file_ts)
self.out_headers["Last-Modified"] = file_lastmod
self.out_headers.update(NO_CACHE)
@@ -1550,14 +1424,13 @@ class HttpCli(object):
return True
def tx_mounts(self):
suf = self.urlq({}, ["h"])
avol = [x for x in self.wvol if x in self.rvol]
suf = self.urlq(rm=["h"])
rvol, wvol, avol = [
[("/" + x).rstrip("/") + "/" for x in y]
for y in [self.rvol, self.wvol, avol]
for y in [self.rvol, self.wvol, self.avol]
]
if avol and not self.args.no_rescan:
if self.avol and not self.args.no_rescan:
x = self.conn.hsrv.broker.put(True, "up2k.get_state")
vs = json.loads(x.get())
vstate = {("/" + k).rstrip("/") + "/": v for k, v in vs["volstate"].items()}
@@ -1582,8 +1455,8 @@ class HttpCli(object):
return True
def scanvol(self):
if not self.can_read or not self.can_write:
raise Pebkac(403, "not allowed for user " + self.uname)
if not self.readable or not self.writable:
raise Pebkac(403, "not admin")
if self.args.no_rescan:
raise Pebkac(403, "disabled by argv")
@@ -1601,8 +1474,8 @@ class HttpCli(object):
raise Pebkac(500, x)
def tx_stack(self):
if not [x for x in self.wvol if x in self.rvol]:
raise Pebkac(403, "not allowed for user " + self.uname)
if not self.readable or not self.writable:
raise Pebkac(403, "not admin")
if self.args.no_stack:
raise Pebkac(403, "disabled by argv")
@@ -1640,7 +1513,7 @@ class HttpCli(object):
try:
vn, rem = self.asrv.vfs.get(top, self.uname, True, False)
fsroot, vfs_ls, vfs_virt = vn.ls(
rem, self.uname, not self.args.no_scandir, [[True], [False, True]]
rem, self.uname, not self.args.no_scandir, incl_wo=True
)
except:
vfs_ls = []
@@ -1667,74 +1540,6 @@ class HttpCli(object):
ret["a"] = dirs
return ret
def tx_ups(self):
if not self.args.unpost:
raise Pebkac(400, "the unpost feature was disabled by server config")
filt = self.uparam.get("filter")
lm = "ups [{}]".format(filt)
self.log(lm)
ret = []
t0 = time.time()
idx = self.conn.get_u2idx()
lim = time.time() - self.args.unpost
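# only this client's uploads, newer than the unpost window, are listed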
for vol in self.asrv.vfs.all_vols.values():
cur = idx.get_cur(vol.realpath)
if not cur:
continue
q = "select sz, rd, fn, at from up where ip=? and at>?"
for sz, rd, fn, at in cur.execute(q, (self.ip, lim)):
vp = "/" + "/".join([rd, fn]).strip("/")
if filt and filt not in vp:
continue
ret.append({"vp": vp, "sz": sz, "at": at})
if len(ret) > 3000:
ret.sort(key=lambda x: x["at"], reverse=True)
ret = ret[:2000]
ret.sort(key=lambda x: x["at"], reverse=True)
ret = ret[:2000]
jtxt = json.dumps(ret, indent=2, sort_keys=True).encode("utf-8", "replace")
self.log("{} #{} {:.2f}sec".format(lm, len(ret), time.time() - t0))
self.reply(jtxt, mime="application/json")
def handle_rm(self, req=None):
if not req and not self.can_delete:
raise Pebkac(403, "not allowed for user " + self.uname)
if self.args.no_del:
raise Pebkac(403, "disabled by argv")
if not req:
req = [self.vpath]
x = self.conn.hsrv.broker.put(True, "up2k.handle_rm", self.uname, self.ip, req)
self.loud_reply(x.get())
def handle_mv(self):
if not self.can_move:
raise Pebkac(403, "not allowed for user " + self.uname)
if self.args.no_mv:
raise Pebkac(403, "disabled by argv")
# full path of new loc (incl filename)
dst = self.uparam.get("move")
if not dst:
raise Pebkac(400, "need dst vpath")
# x-www-form-urlencoded (url query part) uses
# either + or %20 for 0x20 so handle both
dst = unquotep(dst.replace("+", " "))
x = self.conn.hsrv.broker.put(
True, "up2k.handle_mv", self.uname, self.vpath, dst
)
self.loud_reply(x.get())
def tx_browser(self):
vpath = ""
vpnodes = [["", "/"]]
@@ -1747,28 +1552,37 @@ class HttpCli(object):
vpnodes.append([quotep(vpath) + "/", html_escape(node, crlf=True)])
vn, rem = self.asrv.vfs.get(self.vpath, self.uname, False, False)
vn, rem = self.asrv.vfs.get(
self.vpath, self.uname, self.readable, self.writable
)
abspath = vn.canonical(rem)
dbv, vrem = vn.get_dbv(rem)
try:
st = bos.stat(abspath)
st = os.stat(fsenc(abspath))
except:
raise Pebkac(404)
if self.can_read:
if rem.startswith(".hist/up2k.") or (
rem.endswith("/dir.txt") and rem.startswith(".hist/th/")
):
if self.readable:
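# .hist/up2k.* is the volume's index database; never serve it directly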
if rem.startswith(".hist/up2k."):
raise Pebkac(403)
if "vcr" in self.uparam:
ext = abspath.rsplit(".")[-1]
if not self.args.vcr or ext not in FMT_FF:
raise Pebkac(403)
vcr = VCR_Direct(self, abspath)
vcr.run()
return False
is_dir = stat.S_ISDIR(st.st_mode)
th_fmt = self.uparam.get("th")
if th_fmt is not None:
if is_dir:
for fn in self.args.th_covers.split(","):
fp = os.path.join(abspath, fn)
if bos.path.exists(fp):
if os.path.exists(fp):
vrem = "{}/{}".format(vrem.rstrip("/"), fn)
is_dir = False
break
@@ -1823,16 +1637,12 @@ class HttpCli(object):
srv_info = "</span> /// <span>".join(srv_info)
perms = []
if self.can_read:
if self.readable:
perms.append("read")
if self.can_write:
if self.writable:
perms.append("write")
if self.can_move:
perms.append("move")
if self.can_delete:
perms.append("delete")
url_suf = self.urlq({}, [])
url_suf = self.urlq()
is_ls = "ls" in self.uparam
tpl = "browser"
@@ -1842,7 +1652,7 @@ class HttpCli(object):
logues = ["", ""]
for n, fn in enumerate([".prologue.html", ".epilogue.html"]):
fn = os.path.join(abspath, fn)
if bos.path.exists(fn):
if os.path.exists(fsenc(fn)):
with open(fsenc(fn), "rb") as f:
logues[n] = f.read().decode("utf-8")
@@ -1851,7 +1661,6 @@ class HttpCli(object):
"files": [],
"taglist": [],
"srvinf": srv_info,
"acct": self.uname,
"perms": perms,
"logues": logues,
}
@@ -1859,23 +1668,19 @@ class HttpCli(object):
"vdir": quotep(self.vpath),
"vpnodes": vpnodes,
"files": [],
"acct": self.uname,
"perms": json.dumps(perms),
"taglist": [],
"def_hcols": [],
"tag_order": [],
"have_up2k_idx": ("e2d" in vn.flags),
"have_tags_idx": ("e2t" in vn.flags),
"have_mv": (not self.args.no_mv),
"have_del": (not self.args.no_del),
"have_zip": (not self.args.no_zip),
"have_unpost": (self.args.unpost > 0),
"have_b_u": (self.can_write and self.uparam.get("b") == "u"),
"have_b_u": (self.writable and self.uparam.get("b") == "u"),
"url_suf": url_suf,
"logues": logues,
"title": html_escape(self.vpath, crlf=True),
"srv_info": srv_info,
}
if not self.can_read:
if not self.readable:
if is_ls:
ret = json.dumps(ls_ret)
self.reply(
@@ -1898,7 +1703,7 @@ class HttpCli(object):
return self.tx_zip(k, v, vn, rem, [], self.args.ed)
fsroot, vfs_ls, vfs_virt = vn.ls(
rem, self.uname, not self.args.no_scandir, [[True], [False, True]]
rem, self.uname, not self.args.no_scandir, incl_wo=True
)
stats = {k: v for k, v in vfs_ls}
vfs_ls = [x[0] for x in vfs_ls]
@@ -1909,7 +1714,7 @@ class HttpCli(object):
histdir = os.path.join(fsroot, ".hist")
ptn = re.compile(r"(.*)\.([0-9]+\.[0-9]{3})(\.[^\.]+)$")
try:
for hfn in bos.listdir(histdir):
for hfn in os.listdir(histdir):
m = ptn.match(hfn)
if not m:
continue
@@ -1950,7 +1755,7 @@ class HttpCli(object):
fspath = fsroot + "/" + fn
try:
inf = stats.get(fn) or bos.stat(fspath)
inf = stats.get(fn) or os.stat(fsenc(fspath))
except:
self.log("broken symlink: {}".format(repr(fspath)))
continue
@@ -2059,8 +1864,8 @@ class HttpCli(object):
j2a["logues"] = logues
j2a["taglist"] = taglist
if "mth" in vn.flags:
j2a["def_hcols"] = vn.flags["mth"].split(",")
if "mte" in vn.flags:
j2a["tag_order"] = json.dumps(vn.flags["mte"].split(","))
if self.args.css_browser:
j2a["css"] = self.args.css_browser

View File

@@ -34,6 +34,7 @@ class HttpConn(object):
self.args = hsrv.args
self.asrv = hsrv.asrv
self.is_mp = hsrv.is_mp
self.cert_path = hsrv.cert_path
enth = HAVE_PIL and not self.args.no_thumb
@@ -44,6 +45,7 @@ class HttpConn(object):
self.stopping = False
self.nreq = 0
self.nbyte = 0
self.workload = 0
self.u2idx = None
self.log_func = hsrv.log
self.lf_url = re.compile(self.args.lf_url) if self.args.lf_url else None
@@ -182,6 +184,11 @@ class HttpConn(object):
self.sr = Unrecv(self.s)
while not self.stopping:
if self.is_mp:
self.workload += 50
if self.workload >= 2 ** 31:
self.workload = 100
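# wrap long before a 32-bit overflow; HttpSrv.thr_workload treats the drop as a reset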
self.nreq += 1
cli = HttpCli(self)
if not cli.run():

View File

@@ -4,8 +4,8 @@ from __future__ import print_function, unicode_literals
import os
import sys
import time
import math
import base64
import struct
import socket
import threading
@@ -26,16 +26,9 @@ except ImportError:
)
sys.exit(1)
from .__init__ import E, PY2, MACOS
from .util import spack, min_ex, start_stackmon, start_log_thrs
from .bos import bos
from .__init__ import E, MACOS
from .httpconn import HttpConn
if PY2:
import Queue as queue
else:
import queue
class HttpSrv(object):
"""
@@ -43,26 +36,19 @@ class HttpSrv(object):
relying on MpSrv for performance (HttpSrv is just plain threads)
"""
def __init__(self, broker, nid):
def __init__(self, broker, is_mp=False):
self.broker = broker
self.nid = nid
self.is_mp = is_mp
self.args = broker.args
self.log = broker.log
self.asrv = broker.asrv
self.name = "httpsrv" + ("-n{}-i{:x}".format(nid, os.getpid()) if nid else "")
self.disconnect_func = None
self.mutex = threading.Lock()
self.stopping = False
self.tp_nthr = 0 # actual
self.tp_ncli = 0 # fading
self.tp_time = None # latest worker collect
self.tp_q = None if self.args.no_htp else queue.LifoQueue()
self.srvs = []
self.ncli = 0 # exact
self.clients = {} # laggy
self.nclimax = 0
self.clients = {}
self.workload = 0
self.workload_thr_alive = False
self.cb_ts = 0
self.cb_v = 0
@@ -74,162 +60,29 @@ class HttpSrv(object):
}
cert_path = os.path.join(E.cfg, "cert.pem")
if bos.path.exists(cert_path):
if os.path.exists(cert_path):
self.cert_path = cert_path
else:
self.cert_path = None
if self.tp_q:
self.start_threads(4)
name = "httpsrv-scaler" + ("-{}".format(nid) if nid else "")
t = threading.Thread(target=self.thr_scaler, name=name)
t.daemon = True
t.start()
if nid:
if self.args.stackmon:
start_stackmon(self.args.stackmon, nid)
if self.args.log_thrs:
start_log_thrs(self.log, self.args.log_thrs, nid)
def start_threads(self, n):
self.tp_nthr += n
if self.args.log_htp:
self.log(self.name, "workers += {} = {}".format(n, self.tp_nthr), 6)
for _ in range(n):
thr = threading.Thread(
target=self.thr_poolw,
name=self.name + "-poolw",
)
thr.daemon = True
thr.start()
def stop_threads(self, n):
self.tp_nthr -= n
if self.args.log_htp:
self.log(self.name, "workers -= {} = {}".format(n, self.tp_nthr), 6)
for _ in range(n):
self.tp_q.put(None)
def thr_scaler(self):
while True:
time.sleep(2 if self.tp_ncli else 30)
with self.mutex:
self.tp_ncli = max(self.ncli, self.tp_ncli - 2)
if self.tp_nthr > self.tp_ncli + 8:
self.stop_threads(4)
def listen(self, sck, nlisteners):
ip, port = sck.getsockname()
self.srvs.append(sck)
self.nclimax = math.ceil(self.args.nc * 1.0 / nlisteners)
t = threading.Thread(
target=self.thr_listen,
args=(sck,),
name="httpsrv-n{}-listen-{}-{}".format(self.nid or "0", ip, port),
)
t.daemon = True
t.start()
def thr_listen(self, srv_sck):
"""listens on a shared tcp server"""
ip, port = srv_sck.getsockname()
fno = srv_sck.fileno()
msg = "subscribed @ {}:{} f{}".format(ip, port, fno)
self.log(self.name, msg)
self.broker.put(False, "cb_httpsrv_up")
while not self.stopping:
if self.args.log_conn:
self.log(self.name, "|%sC-ncli" % ("-" * 1,), c="1;30")
if self.ncli >= self.nclimax:
self.log(self.name, "at connection limit; waiting", 3)
while self.ncli >= self.nclimax:
time.sleep(0.1)
if self.args.log_conn:
self.log(self.name, "|%sC-acc1" % ("-" * 2,), c="1;30")
try:
sck, addr = srv_sck.accept()
except (OSError, socket.error) as ex:
self.log(self.name, "accept({}): {}".format(fno, ex), c=6)
time.sleep(0.02)
continue
if self.args.log_conn:
m = "|{}C-acc2 \033[0;36m{} \033[3{}m{}".format(
"-" * 3, ip, port % 8, port
)
self.log("%s %s" % addr, m, c="1;30")
self.accept(sck, addr)
def accept(self, sck, addr):
"""takes an incoming tcp connection and creates a thread to handle it"""
now = time.time()
if now - (self.tp_time or now) > 300:
m = "httpserver threadpool died: tpt {:.2f}, now {:.2f}, nthr {}, ncli {}"
self.log(self.name, m.format(self.tp_time, now, self.tp_nthr, self.ncli), 1)
self.tp_time = None
self.tp_q = None
with self.mutex:
self.ncli += 1
if self.tp_q:
self.tp_time = self.tp_time or now
self.tp_ncli = max(self.tp_ncli, self.ncli)
if self.tp_nthr < self.ncli + 4:
self.start_threads(8)
self.tp_q.put((sck, addr))
return
if not self.args.no_htp:
m = "looks like the httpserver threadpool died; please make an issue on github and tell me the story of how you pulled that off, thanks and dog bless\n"
self.log(self.name, m, 1)
if self.args.log_conn:
self.log("%s %s" % addr, "|%sC-cthr" % ("-" * 5,), c="1;30")
thr = threading.Thread(
target=self.thr_client,
args=(sck, addr),
name="httpconn-{}-{}".format(addr[0].split(".", 2)[-1][-6:], addr[1]),
name="httpsrv-{}-{}".format(addr[0].split(".", 2)[-1][-6:], addr[1]),
)
thr.daemon = True
thr.start()
def thr_poolw(self):
while True:
task = self.tp_q.get()
if not task:
break
with self.mutex:
self.tp_time = None
try:
sck, addr = task
me = threading.current_thread()
me.name = "httpconn-{}-{}".format(
addr[0].split(".", 2)[-1][-6:], addr[1]
)
self.thr_client(sck, addr)
me.name = self.name + "-poolw"
except:
self.log(self.name, "thr_client: " + min_ex(), 3)
def num_clients(self):
with self.mutex:
return len(self.clients)
def shutdown(self):
self.stopping = True
for srv in self.srvs:
try:
srv.close()
except:
pass
clients = list(self.clients.keys())
for cli in clients:
try:
@@ -237,14 +90,7 @@ class HttpSrv(object):
except:
pass
if self.tp_q:
self.stop_threads(self.tp_nthr)
for _ in range(10):
time.sleep(0.05)
if self.tp_q.empty():
break
self.log(self.name, "ok bye")
self.log("httpsrv-n", "ok bye")
def thr_client(self, sck, addr):
"""thread managing one tcp client"""
@@ -254,15 +100,25 @@ class HttpSrv(object):
with self.mutex:
self.clients[cli] = 0
if self.is_mp:
self.workload += 50
if not self.workload_thr_alive:
self.workload_thr_alive = True
thr = threading.Thread(
target=self.thr_workload, name="httpsrv-workload"
)
thr.daemon = True
thr.start()
fno = sck.fileno()
try:
if self.args.log_conn:
self.log("%s %s" % addr, "|%sC-crun" % ("-" * 4,), c="1;30")
self.log("%s %s" % addr, "|%sC-crun" % ("-" * 6,), c="1;30")
cli.run()
except (OSError, socket.error) as ex:
if ex.errno not in [10038, 10054, 107, 57, 49, 9]:
if ex.errno not in [10038, 10054, 107, 57, 9]:
self.log(
"%s %s" % addr,
"run({}): {}".format(fno, ex),
@@ -272,7 +128,7 @@ class HttpSrv(object):
finally:
sck = cli.s
if self.args.log_conn:
self.log("%s %s" % addr, "|%sC-cdone" % ("-" * 5,), c="1;30")
self.log("%s %s" % addr, "|%sC-cdone" % ("-" * 7,), c="1;30")
try:
fno = sck.fileno()
@@ -296,7 +152,35 @@ class HttpSrv(object):
finally:
with self.mutex:
del self.clients[cli]
self.ncli -= 1
if self.disconnect_func:
self.disconnect_func(addr) # pylint: disable=not-callable
def thr_workload(self):
"""indicates the python interpreter workload caused by this HttpSrv"""
# avoid locking in extract_filedata by tracking difference here
while True:
time.sleep(0.2)
with self.mutex:
if not self.clients:
# no clients rn, terminate thread
self.workload_thr_alive = False
self.workload = 0
return
total = 0
with self.mutex:
for cli in self.clients.keys():
now = cli.workload
delta = now - self.clients[cli]
if delta < 0:
# was reset in HttpCli to prevent overflow
delta = now
total += delta
self.clients[cli] = now
self.workload = total
def cachebuster(self):
if time.time() - self.cb_ts < 1:
@@ -310,12 +194,12 @@ class HttpSrv(object):
try:
with os.scandir(os.path.join(E.mod, "web")) as dh:
for fh in dh:
inf = fh.stat()
inf = fh.stat(follow_symlinks=False)
v = max(v, inf.st_mtime)
except:
pass
v = base64.urlsafe_b64encode(spack(b">xxL", int(v)))
v = base64.urlsafe_b64encode(struct.pack(">xxL", int(v)))
self.cb_v = v.decode("ascii")[-4:]
self.cb_ts = time.time()
return self.cb_v

View File

@@ -7,9 +7,11 @@ import json
import shutil
import subprocess as sp
from .__init__ import PY2, WINDOWS, unicode
from .__init__ import PY2, WINDOWS
from .util import fsenc, fsdec, uncyg, REKOBO_LKEY
from .bos import bos
if not PY2:
unicode = str
def have_ff(cmd):
@@ -45,7 +47,7 @@ class MParser(object):
if WINDOWS:
bp = uncyg(bp)
if bos.path.exists(bp):
if os.path.exists(bp):
self.bin = bp
return
except:
@@ -228,47 +230,37 @@ def parse_ffprobe(txt):
class MTag(object):
def __init__(self, log_func, args):
self.log_func = log_func
self.args = args
self.usable = True
self.prefer_mt = not args.no_mtag_ff
self.backend = "ffprobe" if args.no_mutagen else "mutagen"
self.can_ffprobe = (
HAVE_FFPROBE
and not args.no_mtag_ff
and (not WINDOWS or sys.version_info >= (3, 8))
)
self.prefer_mt = False
mappings = args.mtm
or_ffprobe = " or FFprobe"
self.backend = "ffprobe" if args.no_mutagen else "mutagen"
or_ffprobe = " or ffprobe"
if self.backend == "mutagen":
self.get = self.get_mutagen
try:
import mutagen
except:
self.log("could not load Mutagen, trying FFprobe instead", c=3)
self.log("could not load mutagen, trying ffprobe instead", c=3)
self.backend = "ffprobe"
if self.backend == "ffprobe":
self.usable = self.can_ffprobe
self.get = self.get_ffprobe
self.prefer_mt = True
# about 20x slower
self.usable = HAVE_FFPROBE
if not HAVE_FFPROBE:
pass
elif args.no_mtag_ff:
msg = "found FFprobe but it was disabled by --no-mtag-ff"
self.log(msg, c=3)
elif WINDOWS and sys.version_info < (3, 8):
if self.usable and WINDOWS and sys.version_info < (3, 8):
self.usable = False
or_ffprobe = " or python >= 3.8"
msg = "found FFprobe but your python is too old; need 3.8 or newer"
msg = "found ffprobe but your python is too old; need 3.8 or newer"
self.log(msg, c=1)
if not self.usable:
msg = "need Mutagen{} to read media tags so please run this:\n{}{} -m pip install --user mutagen\n"
pybin = os.path.basename(sys.executable)
self.log(msg.format(or_ffprobe, " " * 37, pybin), c=1)
msg = "need mutagen{} to read media tags so please run this:\n{}{} -m pip install --user mutagen\n"
self.log(
msg.format(or_ffprobe, " " * 37, os.path.basename(sys.executable)), c=1
)
return
# https://picard-docs.musicbrainz.org/downloads/MusicBrainz_Picard_Tag_Map.html
@@ -398,7 +390,7 @@ class MTag(object):
v2 = r2.get(k)
if v1 == v2:
print(" ", k, v1)
elif v1 != "0000": # FFprobe date=0
elif v1 != "0000": # ffprobe date=0
diffs.append(k)
print(" 1", k, v1)
print(" 2", k, v2)
@@ -419,41 +411,20 @@ class MTag(object):
md = mutagen.File(fsenc(abspath), easy=True)
x = md.info.length
except Exception as ex:
return self.get_ffprobe(abspath) if self.can_ffprobe else {}
return {}
sz = bos.path.getsize(abspath)
ret = {".q": [0, int((sz / md.info.length) / 128)]}
for attr, k, norm in [
["codec", "ac", unicode],
["channels", "chs", int],
["sample_rate", ".hz", int],
["bitrate", ".aq", int],
["length", ".dur", int],
]:
ret = {}
try:
dur = int(md.info.length)
try:
v = getattr(md.info, attr)
q = int(md.info.bitrate / 1024)
except:
if k != "ac":
continue
q = int((os.path.getsize(fsenc(abspath)) / dur) / 128)
try:
v = str(md.info).split(".")[1]
if v.startswith("ogg"):
v = v[3:]
except:
continue
if not v:
continue
if k == ".aq":
v /= 1000
if k == "ac" and v.startswith("mp4a.40."):
v = "aac"
ret[k] = [0, norm(v)]
ret[".dur"] = [0, dur]
ret[".q"] = [0, q]
except:
pass
return self.normalize_tags(ret, md)

View File

@@ -1,12 +1,12 @@
# coding: utf-8
from __future__ import print_function, unicode_literals
import os
import tarfile
import threading
from .sutil import errdesc
from .util import Queue, fsenc
from .bos import bos
class QFile(object):
@@ -33,11 +33,10 @@ class QFile(object):
class StreamTar(object):
"""construct in-memory tar file from the given path"""
def __init__(self, log, fgen, **kwargs):
def __init__(self, fgen, **kwargs):
self.ci = 0
self.co = 0
self.qfile = QFile()
self.log = log
self.fgen = fgen
self.errf = None
@@ -61,7 +60,7 @@ class StreamTar(object):
yield None
if self.errf:
bos.unlink(self.errf["ap"])
os.unlink(self.errf["ap"])
def ser(self, f):
name = f["vp"]
@@ -92,8 +91,7 @@ class StreamTar(object):
errors.append([f["vp"], repr(ex)])
if errors:
self.errf, txt = errdesc(errors)
self.log("\n".join(([repr(self.errf)] + txt[1:])))
self.errf = errdesc(errors)
self.ser(self.errf)
self.tar.close()

View File

@@ -1,12 +1,11 @@
# coding: utf-8
from __future__ import print_function, unicode_literals
import os
import time
import tempfile
from datetime import datetime
from .bos import bos
def errdesc(errors):
report = ["copyparty failed to add the following files to the archive:", ""]
@@ -18,11 +17,12 @@ def errdesc(errors):
tf_path = tf.name
tf.write("\r\n".join(report).encode("utf-8", "replace"))
dt = datetime.utcnow().strftime("%Y-%m%d-%H%M%S")
dt = datetime.utcfromtimestamp(time.time())
dt = dt.strftime("%Y-%m%d-%H%M%S")
bos.chmod(tf_path, 0o444)
os.chmod(tf_path, 0o444)
return {
"vp": "archive-errors-{}.txt".format(dt),
"ap": tf_path,
"st": bos.stat(tf_path),
}, report
"st": os.stat(tf_path),
}

View File

@@ -5,16 +5,12 @@ import re
import os
import sys
import time
import shlex
import string
import signal
import socket
import threading
from datetime import datetime, timedelta
import calendar
from .__init__ import E, PY2, WINDOWS, ANYWIN, MACOS, VT100, unicode
from .util import mp, start_log_thrs, start_stackmon, min_ex, ansi_re
from .__init__ import PY2, WINDOWS, MACOS, VT100
from .util import mp
from .authsrv import AuthSrv
from .tcpsrv import TcpSrv
from .up2k import Up2k
@@ -32,30 +28,17 @@ class SvcHub(object):
put() can return a queue (if want_reply=True) which has a blocking get() with the response.
"""
def __init__(self, args, argv, printed):
def __init__(self, args):
self.args = args
self.argv = argv
self.logf = None
self.stop_req = False
self.stopping = False
self.stop_cond = threading.Condition()
self.httpsrv_up = 0
self.ansi_re = re.compile("\033\\[[^m]*m")
self.log_mutex = threading.Lock()
self.next_day = 0
self.log = self._log_disabled if args.q else self._log_enabled
if args.lo:
self._setup_logfile(printed)
if args.stackmon:
start_stackmon(args.stackmon, 0)
if args.log_thrs:
start_log_thrs(self.log, args.log_thrs, 0)
# initiate all services to manage
self.asrv = AuthSrv(self.args, self.log)
self.asrv = AuthSrv(self.args, self.log, False)
if args.ls:
self.asrv.dbg_ls()
@@ -86,138 +69,22 @@ class SvcHub(object):
self.broker = Broker(self)
def thr_httpsrv_up(self):
time.sleep(5)
failed = self.broker.num_workers - self.httpsrv_up
if not failed:
return
m = "{}/{} workers failed to start"
m = m.format(failed, self.broker.num_workers)
self.log("root", m, 1)
os._exit(1)
def cb_httpsrv_up(self):
self.httpsrv_up += 1
if self.httpsrv_up != self.broker.num_workers:
return
self.log("root", "workers OK\n")
self.up2k.init_vols()
thr = threading.Thread(target=self.sd_notify, name="sd-notify")
thr.daemon = True
thr.start()
def _logname(self):
dt = datetime.utcnow()
fn = self.args.lo
for fs in "YmdHMS":
fs = "%" + fs
if fs in fn:
fn = fn.replace(fs, dt.strftime(fs))
return fn
def _setup_logfile(self, printed):
base_fn = fn = sel_fn = self._logname()
if fn != self.args.lo:
ctr = 0
# yup this is a race; if started sufficiently concurrently, two
# copyparties can grab the same logfile (considered and ignored)
while os.path.exists(sel_fn):
ctr += 1
sel_fn = "{}.{}".format(fn, ctr)
fn = sel_fn
try:
import lzma
lh = lzma.open(fn, "wt", encoding="utf-8", errors="replace", preset=0)
except:
import codecs
lh = codecs.open(fn, "w", encoding="utf-8", errors="replace")
lh.base_fn = base_fn
argv = [sys.executable] + self.argv
if hasattr(shlex, "quote"):
argv = [shlex.quote(x) for x in argv]
else:
argv = ['"{}"'.format(x) for x in argv]
msg = "[+] opened logfile [{}]\n".format(fn)
printed += msg
lh.write("t0: {:.3f}\nargv: {}\n\n{}".format(E.t0, " ".join(argv), printed))
self.logf = lh
print(msg, end="")
def run(self):
self.tcpsrv.run()
thr = threading.Thread(target=self.thr_httpsrv_up)
thr = threading.Thread(target=self.tcpsrv.run, name="svchub-main")
thr.daemon = True
thr.start()
for sig in [signal.SIGINT, signal.SIGTERM]:
signal.signal(sig, self.signal_handler)
# macos hangs after shutdown on sigterm with while-sleep,
# windows cannot ^c stop_cond (and win10 does the macos thing but winxp is fine??)
# linux is fine with both,
# never lucky
if ANYWIN:
# msys-python probably fine but >msys-python
thr = threading.Thread(target=self.stop_thr, name="svchub-sig")
thr.daemon = True
thr.start()
try:
while not self.stop_req:
time.sleep(1)
except:
pass
self.shutdown()
thr.join()
else:
self.stop_thr()
def stop_thr(self):
while not self.stop_req:
with self.stop_cond:
self.stop_cond.wait(9001)
self.shutdown()
def signal_handler(self, sig, frame):
if self.stopping:
return
self.stop_req = True
with self.stop_cond:
self.stop_cond.notify_all()
def shutdown(self):
if self.stopping:
return
self.stopping = True
self.stop_req = True
with self.stop_cond:
self.stop_cond.notify_all()
ret = 1
# winxp/py2.7 support: thr.join() kills signals
try:
while True:
time.sleep(9001)
except KeyboardInterrupt:
with self.log_mutex:
print("OPYTHAT")
self.tcpsrv.shutdown()
self.broker.shutdown()
self.up2k.shutdown()
if self.thumbsrv:
self.thumbsrv.shutdown()
@@ -230,40 +97,11 @@ class SvcHub(object):
print("waiting for thumbsrv (10sec)...")
print("nailed it", end="")
ret = 0
finally:
print("\033[0m")
if self.logf:
self.logf.close()
sys.exit(ret)
def _log_disabled(self, src, msg, c=0):
if not self.logf:
return
with self.log_mutex:
ts = datetime.utcnow().strftime("%Y-%m%d-%H%M%S.%f")[:-3]
self.logf.write("@{} [{}] {}\n".format(ts, src, msg))
now = time.time()
if now >= self.next_day:
self._set_next_day()
def _set_next_day(self):
if self.next_day and self.logf and self.logf.base_fn != self._logname():
self.logf.close()
self._setup_logfile("")
dt = datetime.utcnow()
# unix timestamp of next 00:00:00 (leap-seconds safe)
day_now = dt.day
while dt.day == day_now:
dt += timedelta(hours=12)
dt = dt.replace(hour=0, minute=0, second=0)
self.next_day = calendar.timegm(dt.utctimetuple())
pass
def _log_enabled(self, src, msg, c=0):
"""handles logging from all components"""
@@ -272,15 +110,22 @@ class SvcHub(object):
if now >= self.next_day:
dt = datetime.utcfromtimestamp(now)
print("\033[36m{}\033[0m\n".format(dt.strftime("%Y-%m-%d")), end="")
self._set_next_day()
# unix timestamp of next 00:00:00 (leap-seconds safe)
day_now = dt.day
while dt.day == day_now:
dt += timedelta(hours=12)
dt = dt.replace(hour=0, minute=0, second=0)
self.next_day = calendar.timegm(dt.utctimetuple())
fmt = "\033[36m{} \033[33m{:21} \033[0m{}\n"
if not VT100:
fmt = "{} {:21} {}\n"
if "\033" in msg:
msg = ansi_re.sub("", msg)
msg = self.ansi_re.sub("", msg)
if "\033" in src:
src = ansi_re.sub("", src)
src = self.ansi_re.sub("", src)
elif c:
if isinstance(c, int):
msg = "\033[3{}m{}".format(c, msg)
@@ -299,20 +144,20 @@ class SvcHub(object):
except:
print(msg.encode("ascii", "replace").decode(), end="")
if self.logf:
self.logf.write(msg)
def check_mp_support(self):
vmin = sys.version_info[1]
if WINDOWS:
msg = "need python 3.3 or newer for multiprocessing;"
if PY2 or vmin < 3:
if PY2:
# py2 pickler doesn't support winsock
return msg
elif vmin < 3:
return msg
elif MACOS:
return "multiprocessing is wonky on mac osx;"
else:
msg = "need python 3.3+ for multiprocessing;"
if PY2 or vmin < 3:
msg = "need python 2.7 or 3.3+ for multiprocessing;"
if not PY2 and vmin < 3:
return msg
try:
@@ -344,24 +189,5 @@ class SvcHub(object):
if not err:
return True
else:
self.log("svchub", err)
self.log("root", err)
return False
def sd_notify(self):
try:
addr = os.getenv("NOTIFY_SOCKET")
if not addr:
return
addr = unicode(addr)
if addr.startswith("@"):
addr = "\0" + addr[1:]
m = "".join(x for x in addr if x in string.printable)
self.log("sd_notify", m)
sck = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
sck.connect(addr)
sck.sendall(b"READY=1")
except:
self.log("sd_notify", min_ex())

View File

@@ -4,15 +4,15 @@ from __future__ import print_function, unicode_literals
import os
import time
import zlib
import struct
from datetime import datetime
from .sutil import errdesc
from .util import yieldfile, sanitize_fn, spack, sunpack
from .bos import bos
from .util import yieldfile, sanitize_fn
def dostime2unix(buf):
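# msdos timestamp layout: 16-bit time = 5b hours, 6b minutes, 5b seconds/2;
# 16-bit date = 7b years-since-1980, 4b month, 5b day (both little-endian)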
t, d = sunpack(b"<HH", buf)
t, d = struct.unpack("<HH", buf)
ts = (t & 0x1F) * 2
tm = (t >> 5) & 0x3F
@@ -36,13 +36,13 @@ def unixtime2dos(ts):
bd = ((dy - 1980) << 9) + (dm << 5) + dd
bt = (th << 11) + (tm << 5) + ts // 2
return spack(b"<HH", bt, bd)
return struct.pack("<HH", bt, bd)
def gen_fdesc(sz, crc32, z64):
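# zip data-descriptor trailing a streamed entry: crc32 + sizes, 64-bit fields if zip64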
ret = b"\x50\x4b\x07\x08"
fmt = b"<LQQ" if z64 else b"<LLL"
ret += spack(fmt, crc32, sz, sz)
fmt = "<LQQ" if z64 else "<LLL"
ret += struct.pack(fmt, crc32, sz, sz)
return ret
@@ -66,7 +66,7 @@ def gen_hdr(h_pos, fn, sz, lastmod, utf8, crc32, pre_crc):
req_ver = b"\x2d\x00" if z64 else b"\x0a\x00"
if crc32:
crc32 = spack(b"<L", crc32)
crc32 = struct.pack("<L", crc32)
else:
crc32 = b"\x00" * 4
@@ -87,14 +87,14 @@ def gen_hdr(h_pos, fn, sz, lastmod, utf8, crc32, pre_crc):
# however infozip does actual sz and it even works on winxp
# (same reasoning for z64 extradata later)
vsz = 0xFFFFFFFF if z64 else sz
ret += spack(b"<LL", vsz, vsz)
ret += struct.pack("<LL", vsz, vsz)
# windows support (the "?" replace below too)
fn = sanitize_fn(fn, "/", [])
fn = sanitize_fn(fn, ok="/")
bfn = fn.encode("utf-8" if utf8 else "cp437", "replace").replace(b"?", b"_")
z64_len = len(z64v) * 8 + 4 if z64v else 0
ret += spack(b"<HH", len(bfn), z64_len)
ret += struct.pack("<HH", len(bfn), z64_len)
if h_pos is not None:
# 2b comment, 2b diskno
@@ -106,12 +106,12 @@ def gen_hdr(h_pos, fn, sz, lastmod, utf8, crc32, pre_crc):
ret += b"\x01\x00\x00\x00\xa4\x81"
# 4b local-header-ofs
ret += spack(b"<L", min(h_pos, 0xFFFFFFFF))
ret += struct.pack("<L", min(h_pos, 0xFFFFFFFF))
ret += bfn
if z64v:
ret += spack(b"<HH" + b"Q" * len(z64v), 1, len(z64v) * 8, *z64v)
ret += struct.pack("<HH" + "Q" * len(z64v), 1, len(z64v) * 8, *z64v)
return ret
@@ -136,7 +136,7 @@ def gen_ecdr(items, cdir_pos, cdir_end):
need_64 = nitems == 0xFFFF or 0xFFFFFFFF in [csz, cpos]
# 2b tnfiles, 2b dnfiles, 4b dir sz, 4b dir pos
ret += spack(b"<HHLL", nitems, nitems, csz, cpos)
ret += struct.pack("<HHLL", nitems, nitems, csz, cpos)
# 2b comment length
ret += b"\x00\x00"
@@ -163,7 +163,7 @@ def gen_ecdr64(items, cdir_pos, cdir_end):
# 8b tnfiles, 8b dnfiles, 8b dir sz, 8b dir pos
cdir_sz = cdir_end - cdir_pos
ret += spack(b"<QQQQ", len(items), len(items), cdir_sz, cdir_pos)
ret += struct.pack("<QQQQ", len(items), len(items), cdir_sz, cdir_pos)
return ret
@@ -178,14 +178,13 @@ def gen_ecdr64_loc(ecdr64_pos):
ret = b"\x50\x4b\x06\x07"
# 4b cdisk, 8b start of ecdr64, 4b ndisks
ret += spack(b"<LQL", 0, ecdr64_pos, 1)
ret += struct.pack("<LQL", 0, ecdr64_pos, 1)
return ret
class StreamZip(object):
def __init__(self, log, fgen, utf8=False, pre_crc=False):
self.log = log
def __init__(self, fgen, utf8=False, pre_crc=False):
self.fgen = fgen
self.utf8 = utf8
self.pre_crc = pre_crc
@@ -248,8 +247,8 @@ class StreamZip(object):
errors.append([f["vp"], repr(ex)])
if errors:
errf, txt = errdesc(errors)
self.log("\n".join(([repr(errf)] + txt[1:])))
errf = errdesc(errors)
print(repr(errf))
for x in self.ser(errf):
yield x
@@ -272,4 +271,4 @@ class StreamZip(object):
yield self._ct(ecdr)
if errors:
bos.unlink(errf["ap"])
os.unlink(errf["ap"])

View File

@@ -2,10 +2,11 @@
from __future__ import print_function, unicode_literals
import re
import time
import socket
import select
from .__init__ import MACOS, ANYWIN
from .util import chkcmd
from .util import chkcmd, Counter
class TcpSrv(object):
@@ -19,6 +20,7 @@ class TcpSrv(object):
self.args = hub.args
self.log = hub.log
self.num_clients = Counter()
self.stopping = False
ip = "127.0.0.1"
@@ -30,16 +32,14 @@ class TcpSrv(object):
for x in nonlocals:
eps[x] = "external"
msgs = []
m = "available @ http://{}:{}/ (\033[33m{}\033[0m)"
for ip, desc in sorted(eps.items(), key=lambda x: x[1]):
for port in sorted(self.args.p):
msgs.append(m.format(ip, port, desc))
if msgs:
msgs[-1] += "\n"
for m in msgs:
self.log("tcpsrv", m)
self.log(
"tcpsrv",
"available @ http://{}:{}/ (\033[33m{}\033[0m)".format(
ip, port, desc
),
)
self.srv = []
for ip in self.args.i:
@@ -66,13 +66,44 @@ class TcpSrv(object):
for srv in self.srv:
srv.listen(self.args.nc)
ip, port = srv.getsockname()
fno = srv.fileno()
msg = "listening @ {}:{} f{}".format(ip, port, fno)
self.log("tcpsrv", msg)
if self.args.q:
print(msg)
self.log("tcpsrv", "listening @ {0}:{1}".format(ip, port))
self.hub.broker.put(False, "listen", srv)
while not self.stopping:
if self.args.log_conn:
self.log("tcpsrv", "|%sC-ncli" % ("-" * 1,), c="1;30")
if self.num_clients.v >= self.args.nc:
time.sleep(0.1)
continue
if self.args.log_conn:
self.log("tcpsrv", "|%sC-acc1" % ("-" * 2,), c="1;30")
try:
# macos throws bad-fd
ready, _, _ = select.select(self.srv, [], [])
except:
ready = []
if not self.stopping:
raise
for srv in ready:
if self.stopping:
break
sck, addr = srv.accept()
sip, sport = srv.getsockname()
if self.args.log_conn:
self.log(
"%s %s" % addr,
"|{}C-acc2 \033[0;36m{} \033[3{}m{}".format(
"-" * 3, sip, sport % 8, sport
),
c="1;30",
)
self.num_clients.add()
self.hub.broker.put(False, "httpconn", sck, addr)
def shutdown(self):
self.stopping = True
@@ -84,100 +115,25 @@ class TcpSrv(object):
self.log("tcpsrv", "ok bye")
def ips_linux(self):
eps = {}
try:
txt, _ = chkcmd(["ip", "addr"])
except:
return eps
r = re.compile(r"^\s+inet ([^ ]+)/.* (.*)")
for ln in txt.split("\n"):
try:
ip, dev = r.match(ln.rstrip()).groups()
eps[ip] = dev
except:
pass
return eps
def ips_macos(self):
eps = {}
try:
txt, _ = chkcmd(["ifconfig"])
except:
return eps
rdev = re.compile(r"^([^ ]+):")
rip = re.compile(r"^\tinet ([0-9\.]+) ")
dev = None
for ln in txt.split("\n"):
m = rdev.match(ln)
if m:
dev = m.group(1)
m = rip.match(ln)
if m:
eps[m.group(1)] = dev
dev = None
return eps
def ips_windows_ipconfig(self):
eps = {}
try:
txt, _ = chkcmd(["ipconfig"])
except:
return eps
rdev = re.compile(r"(^[^ ].*):$")
rip = re.compile(r"^ +IPv?4? [^:]+: *([0-9\.]{7,15})$")
dev = None
for ln in txt.replace("\r", "").split("\n"):
m = rdev.match(ln)
if m:
dev = m.group(1).split(" adapter ", 1)[-1]
m = rip.match(ln)
if m and dev:
eps[m.group(1)] = dev
dev = None
return eps
def ips_windows_netsh(self):
eps = {}
try:
txt, _ = chkcmd("netsh interface ip show address".split())
except:
return eps
rdev = re.compile(r'.* "([^"]+)"$')
rip = re.compile(r".* IP\b.*: +([0-9\.]{7,15})$")
dev = None
for ln in txt.replace("\r", "").split("\n"):
m = rdev.match(ln)
if m:
dev = m.group(1)
m = rip.match(ln)
if m and dev:
eps[m.group(1)] = dev
dev = None
return eps
def detect_interfaces(self, listen_ips):
if MACOS:
eps = self.ips_macos()
elif ANYWIN:
eps = self.ips_windows_ipconfig() # sees more interfaces
eps.update(self.ips_windows_netsh()) # has better names
else:
eps = self.ips_linux()
eps = {}
if "0.0.0.0" not in listen_ips:
eps = {k: v for k, v in eps if k in listen_ips}
# get all ips and their interfaces
try:
ip_addr, _ = chkcmd("ip", "addr")
except:
ip_addr = None
if ip_addr:
r = re.compile(r"^\s+inet ([^ ]+)/.* (.*)")
for ln in ip_addr.split("\n"):
try:
ip, dev = r.match(ln.rstrip()).groups()
for lip in listen_ips:
if lip in ["0.0.0.0", ip]:
eps[ip] = dev
except:
pass
default_route = None
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

View File

@@ -5,7 +5,6 @@ import os
from .util import Cooldown
from .th_srv import thumb_path, THUMBABLE, FMT_FF
from .bos import bos
class ThumbCli(object):
@@ -26,9 +25,6 @@ class ThumbCli(object):
if is_vid and self.args.no_vthumb:
return None
if rem.startswith(".hist/th/") and rem.split(".")[-1] in ["webp", "jpg"]:
return os.path.join(ptop, rem)
if fmt == "j" and self.args.th_no_jpg:
fmt = "w"
@@ -40,7 +36,7 @@ class ThumbCli(object):
tpath = thumb_path(histpath, rem, mtime, fmt)
ret = None
try:
st = bos.stat(tpath)
st = os.stat(tpath)
if st.st_size:
ret = tpath
else:

View File

@@ -9,12 +9,15 @@ import hashlib
import threading
import subprocess as sp
from .__init__ import PY2, unicode
from .util import fsenc, vsplit, runcmd, Queue, Cooldown, BytesIO, min_ex
from .bos import bos
from .__init__ import PY2
from .util import fsenc, runcmd, Queue, Cooldown, BytesIO, min_ex
from .mtag import HAVE_FFMPEG, HAVE_FFPROBE, ffprobe
if not PY2:
unicode = str
HAVE_PIL = False
HAVE_HEIF = False
HAVE_AVIF = False
@@ -50,7 +53,7 @@ except:
# https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html
# ffmpeg -formats
FMT_PIL = "bmp dib gif icns ico jpg jpeg jp2 jpx pcx png pbm pgm ppm pnm sgi tga tif tiff webp xbm dds xpm"
FMT_FF = "av1 asf avi flv m4v mkv mjpeg mjpg mpg mpeg mpg2 mpeg2 h264 avc mts h265 hevc mov 3gp mp4 ts mpegts nut ogv ogm rm vob webm wmv"
FMT_FF = "av1 asf avi flv m4v mkv mjpeg mjpg mpg mpeg mpg2 mpeg2 h264 avc h265 hevc mov 3gp mp4 ts mpegts nut ogv ogm rm vob webm wmv"
if HAVE_HEIF:
FMT_PIL += " heif heifs heic heics"
@@ -74,7 +77,12 @@ def thumb_path(histpath, rem, mtime, fmt):
# base16 = 16 = 256
# b64-lc = 38 = 1444
# base64 = 64 = 4096
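# (alphabet size and its square; the square is how many distinct 2-char prefixes it allows)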
rd, fn = vsplit(rem)
try:
rd, fn = rem.rsplit("/", 1)
except:
rd = ""
fn = rem
if rd:
h = hashlib.sha512(fsenc(rd)).digest()
b64 = base64.urlsafe_b64encode(h).decode("ascii")[:24]
@@ -117,19 +125,18 @@ class ThumbSrv(object):
if not self.args.no_vthumb and (not HAVE_FFMPEG or not HAVE_FFPROBE):
missing = []
if not HAVE_FFMPEG:
missing.append("FFmpeg")
missing.append("ffmpeg")
if not HAVE_FFPROBE:
missing.append("FFprobe")
missing.append("ffprobe")
msg = "cannot create video thumbnails because some of the required programs are not available: "
msg += ", ".join(missing)
self.log(msg, c=3)
if self.args.th_clean:
t = threading.Thread(target=self.cleaner, name="thumb-cleaner")
t.daemon = True
t.start()
t = threading.Thread(target=self.cleaner, name="thumb-cleaner")
t.daemon = True
t.start()
def log(self, msg, c=0):
self.log_func("thumb", msg, c)
@@ -155,10 +162,13 @@ class ThumbSrv(object):
self.log("wait {}".format(tpath))
except:
thdir = os.path.dirname(tpath)
bos.makedirs(thdir)
try:
os.makedirs(thdir)
except:
pass
inf_path = os.path.join(thdir, "dir.txt")
if not bos.path.exists(inf_path):
if not os.path.exists(inf_path):
with open(inf_path, "wb") as f:
f.write(fsenc(os.path.dirname(abspath)))
@@ -178,7 +188,7 @@ class ThumbSrv(object):
cond.wait(3)
try:
st = bos.stat(tpath)
st = os.stat(tpath)
if st.st_size:
return tpath
except:
@@ -195,7 +205,7 @@ class ThumbSrv(object):
abspath, tpath = task
ext = abspath.split(".")[-1].lower()
fun = None
if not bos.path.exists(tpath):
if not os.path.exists(tpath):
if ext in FMT_PIL:
fun = self.conv_pil
elif ext in FMT_FF:
@@ -205,8 +215,8 @@ class ThumbSrv(object):
try:
fun(abspath, tpath)
except:
msg = "{} could not create thumbnail of {}\n{}"
self.log(msg.format(fun.__name__, abspath, min_ex()), "1;30")
msg = "{} failed on {}\n{}"
self.log(msg.format(fun.__name__, abspath, min_ex()), 3)
with open(tpath, "wb") as _:
pass
@@ -253,7 +263,7 @@ class ThumbSrv(object):
pass # default q = 75
if im.mode not in fmts:
# print("conv {}".format(im.mode))
print("conv {}".format(im.mode))
im = im.convert("RGB")
im.save(tpath, quality=40, method=6)
@@ -286,9 +296,8 @@ class ThumbSrv(object):
cmd += seek
cmd += [
b"-i", fsenc(abspath),
b"-map", b"0:v:0",
b"-vf", scale,
b"-frames:v", b"1",
b"-vframes", b"1",
]
# fmt: on
@@ -307,11 +316,10 @@ class ThumbSrv(object):
cmd += [fsenc(tpath)]
ret, sout, serr = runcmd(cmd)
ret, sout, serr = runcmd(*cmd)
if ret != 0:
m = "FFmpeg failed (probably a corrupt video file):\n"
m += "\n".join(["ff: {}".format(x) for x in serr.split("\n")])
self.log(m, c="1;30")
msg = ["ff: {}".format(x) for x in serr.split("\n")]
self.log("FFmpeg failed:\n" + "\n".join(msg), c="1;30")
raise sp.CalledProcessError(ret, (cmd[0], b"...", cmd[-1]))
def poke(self, tdir):
@@ -323,7 +331,7 @@ class ThumbSrv(object):
p1 = os.path.dirname(tdir)
p2 = os.path.dirname(p1)
for dp in [tdir, p1, p2]:
bos.utime(dp, (ts, ts))
os.utime(fsenc(dp), (ts, ts))
except:
pass
@@ -350,7 +358,7 @@ class ThumbSrv(object):
prev_b64 = None
prev_fp = None
try:
ents = bos.listdir(thumbpath)
ents = os.listdir(thumbpath)
except:
return 0
@@ -361,7 +369,7 @@ class ThumbSrv(object):
# "top" or b64 prefix/full (a folder)
if len(f) <= 3 or len(f) == 24:
age = now - bos.path.getmtime(fp)
age = now - os.path.getmtime(fp)
if age > maxage:
with self.mutex:
safe = True
@@ -393,7 +401,7 @@ class ThumbSrv(object):
if b64 == prev_b64:
self.log("rm replaced [{}]".format(fp))
bos.unlink(prev_fp)
os.unlink(prev_fp)
prev_b64 = b64
prev_fp = fp

View File

@@ -7,9 +7,7 @@ import time
import threading
from datetime import datetime
from .__init__ import unicode
from .util import s3dec, Pebkac, min_ex
from .bos import bos
from .up2k import up2k_wark_from_hashlist
@@ -68,7 +66,7 @@ class U2idx(object):
histpath = self.asrv.vfs.histtab[ptop]
db_path = os.path.join(histpath, "up2k.db")
if not bos.path.exists(db_path):
if not os.path.exists(db_path):
return None
cur = sqlite3.connect(db_path, 2).cursor()
@@ -88,12 +86,10 @@ class U2idx(object):
is_date = False
kw_key = ["(", ")", "and ", "or ", "not "]
kw_val = ["==", "=", "!=", ">", ">=", "<", "<=", "like "]
ptn_mt = re.compile(r"^\.?[a-z_-]+$")
ptn_mt = re.compile(r"^\.?[a-z]+$")
mt_ctr = 0
mt_keycmp = "substr(up.w,1,16)"
mt_keycmp2 = None
ptn_lc = re.compile(r" (mt[0-9]+\.v) ([=<!>]+) \? $")
ptn_lcv = re.compile(r"[a-zA-Z]")
while True:
uq = uq.strip()
@@ -186,21 +182,6 @@ class U2idx(object):
va.append(v)
is_key = True
# lowercase tag searches
m = ptn_lc.search(q)
if not m or not ptn_lcv.search(unicode(v)):
continue
va.pop()
va.append(v.lower())
q = q[: m.start()]
field, oper = m.groups()
if oper in ["=", "=="]:
q += " {} like ? ".format(field)
else:
q += " lower({}) {} ? ".format(field, oper)
try:
return self.run_query(vols, joins + "where " + q, va)
except Exception as ex:
@@ -244,7 +225,7 @@ class U2idx(object):
sret = []
c = cur.execute(q, v)
for hit in c:
w, ts, sz, rd, fn, ip, at = hit
w, ts, sz, rd, fn = hit
lim -= 1
if lim <= 0:
break

View File

@@ -23,20 +23,15 @@ from .util import (
ProgressPrinter,
fsdec,
fsenc,
absreal,
sanitize_fn,
ren_open,
atomic_move,
vsplit,
s3enc,
s3dec,
rmdirs,
statdir,
s2hms,
min_ex,
)
from .bos import bos
from .authsrv import AuthSrv
from .mtag import MTag, MParser
try:
@@ -45,13 +40,20 @@ try:
except:
HAVE_SQLITE3 = False
DB_VER = 5
DB_VER = 4
class Up2k(object):
"""
TODO:
* documentation
* registry persistence
* ~/.config flatfiles for active jobs
"""
def __init__(self, hub):
self.hub = hub
self.asrv = hub.asrv # type: AuthSrv
self.asrv = hub.asrv
self.args = hub.args
self.log_func = hub.log
@@ -65,7 +67,6 @@ class Up2k(object):
self.n_hashq = 0
self.n_tagq = 0
self.volstate = {}
self.need_rescan = {}
self.registry = {}
self.entags = {}
self.flags = {}
@@ -100,14 +101,13 @@ class Up2k(object):
if self.args.no_fastboot:
self.deferred_init()
def init_vols(self):
if self.args.no_fastboot:
return
t = threading.Thread(target=self.deferred_init, name="up2k-deferred-init")
t.daemon = True
t.start()
else:
t = threading.Thread(
target=self.deferred_init,
name="up2k-deferred-init",
)
t.daemon = True
t.start()
def deferred_init(self):
all_vols = self.asrv.vfs.all_vols
@@ -122,10 +122,6 @@ class Up2k(object):
thr.daemon = True
thr.start()
thr = threading.Thread(target=self._sched_rescan, name="up2k-rescan")
thr.daemon = True
thr.start()
if self.mtag:
thr = threading.Thread(target=self._tagger, name="up2k-tagger")
thr.daemon = True
@@ -175,38 +171,6 @@ class Up2k(object):
t.start()
return None
def _sched_rescan(self):
maxage = self.args.re_maxage
volage = {}
while True:
time.sleep(self.args.re_int)
now = time.time()
vpaths = list(sorted(self.asrv.vfs.all_vols.keys()))
with self.mutex:
if maxage:
for vp in vpaths:
if vp not in volage:
volage[vp] = now
if now - volage[vp] >= maxage:
self.need_rescan[vp] = 1
if not self.need_rescan:
continue
vols = list(sorted(self.need_rescan.keys()))
self.need_rescan = {}
err = self.rescan(self.asrv.vfs.all_vols, vols)
if err:
for v in vols:
self.need_rescan[v] = True
continue
for v in vols:
volage[v] = now
def _vis_job_progress(self, job):
perc = 100 - (len(job["need"]) * 100.0 / len(job["hash"]))
path = os.path.join(job["ptop"], job["prel"], job["name"])
@@ -229,7 +193,7 @@ class Up2k(object):
return True, ret
def init_indexes(self, all_vols, scan_vols=None):
def init_indexes(self, all_vols, scan_vols=[]):
self.pp = ProgressPrinter()
vols = all_vols.values()
t0 = time.time()
@@ -252,7 +216,7 @@ class Up2k(object):
# only need to protect register_vpath but all in one go feels right
for vol in vols:
try:
bos.listdir(vol.realpath)
os.listdir(vol.realpath)
except:
self.volstate[vol.vpath] = "OFFLINE (cannot access folder)"
self.log("cannot access " + vol.realpath, c=1)
@@ -338,7 +302,7 @@ class Up2k(object):
self.log(msg.format(len(vols), time.time() - t0))
if needed_mutagen:
msg = "could not read tags because no backends are available (Mutagen or FFprobe)"
msg = "could not read tags because no backends are available (mutagen or ffprobe)"
self.log(msg, c=1)
thr = None
@@ -378,26 +342,18 @@ class Up2k(object):
for k, v in flags.items()
]
if a:
vpath = "?"
for k, v in self.asrv.vfs.all_vols.items():
if v.realpath == ptop:
vpath = k
if vpath:
vpath += "/"
self.log("/{} {}".format(vpath, " ".join(sorted(a))), "35")
self.log(" ".join(sorted(a)) + "\033[0m")
reg = {}
path = os.path.join(histpath, "up2k.snap")
if "e2d" in flags and bos.path.exists(path):
if "e2d" in flags and os.path.exists(path):
with gzip.GzipFile(path, "rb") as f:
j = f.read().decode("utf-8")
reg2 = json.loads(j)
for k, job in reg2.items():
path = os.path.join(job["ptop"], job["prel"], job["name"])
if bos.path.exists(path):
if os.path.exists(fsenc(path)):
reg[k] = job
job["poke"] = time.time()
else:
@@ -412,7 +368,10 @@ class Up2k(object):
if not HAVE_SQLITE3 or "e2d" not in flags or "d2d" in flags:
return None
bos.makedirs(histpath)
try:
os.makedirs(histpath)
except:
pass
try:
cur = self._open_db(db_path)
@@ -442,7 +401,7 @@ class Up2k(object):
if WINDOWS:
excl = [x.replace("/", "\\") for x in excl]
n_add = self._build_dir(dbw, top, set(excl), top, nohash, [])
n_add = self._build_dir(dbw, top, set(excl), top, nohash)
n_rm = self._drop_lost(dbw[0], top)
if dbw[1]:
self.log("commit {} new files".format(dbw[1]))
@@ -450,18 +409,11 @@ class Up2k(object):
return True, n_add or n_rm or do_vac
def _build_dir(self, dbw, top, excl, cdir, nohash, seen):
rcdir = absreal(cdir) # a bit expensive but worth
if rcdir in seen:
m = "bailing from symlink loop,\n prev: {}\n curr: {}\n from: {}"
self.log(m.format(seen[-1], rcdir, cdir), 3)
return 0
seen = seen + [cdir]
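# remember this dir so recursing back into it through a symlink bails out above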
def _build_dir(self, dbw, top, excl, cdir, nohash):
self.pp.msg = "a{} {}".format(self.pp.n, cdir)
histpath = self.asrv.vfs.histtab[top]
ret = 0
g = statdir(self.log_func, not self.args.no_scandir, False, cdir)
g = statdir(self.log, not self.args.no_scandir, False, cdir)
for iname, inf in sorted(g):
abspath = os.path.join(cdir, iname)
lmod = int(inf.st_mtime)
@@ -470,7 +422,7 @@ class Up2k(object):
if abspath in excl or abspath == histpath:
continue
# self.log(" dir: {}".format(abspath))
ret += self._build_dir(dbw, top, excl, abspath, nohash, seen)
ret += self._build_dir(dbw, top, excl, abspath, nohash)
else:
# self.log("file: {}".format(abspath))
rp = abspath[len(top) + 1 :]
@@ -522,7 +474,7 @@ class Up2k(object):
wark = up2k_wark_from_hashlist(self.salt, sz, hashes)
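# the wark (file identifier) is derived from the server salt, the file size, and the chunk hashes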
self.db_add(dbw[0], wark, rd, fn, lmod, sz, "", 0)
self.db_add(dbw[0], wark, rd, fn, lmod, sz)
dbw[1] += 1
ret += 1
td = time.time() - dbw[2]
@@ -537,8 +489,8 @@ class Up2k(object):
rm = []
nchecked = 0
nfiles = next(cur.execute("select count(w) from up"))[0]
c = cur.execute("select rd, fn from up")
for drd, dfn in c:
c = cur.execute("select * from up")
for dwark, dts, dsz, drd, dfn in c:
nchecked += 1
if drd.startswith("//") or dfn.startswith("//"):
drd, dfn = s3dec(drd, dfn)
@@ -547,7 +499,7 @@ class Up2k(object):
# almost zero overhead dw
self.pp.msg = "b{} {}".format(nfiles - nchecked, abspath)
try:
if not bos.path.exists(abspath):
if not os.path.exists(fsenc(abspath)):
rm.append([drd, dfn])
except Exception as ex:
self.log("stat-rm: {} @ [{}]".format(repr(ex), abspath))
@@ -620,7 +572,7 @@ class Up2k(object):
c2 = conn.cursor()
c3 = conn.cursor()
n_left = cur.execute("select count(w) from up").fetchone()[0]
for w, rd, fn in cur.execute("select w, rd, fn from up order by rd, fn"):
for w, rd, fn in cur.execute("select w, rd, fn from up"):
n_left -= 1
q = "select w from mt where w = ?"
if c2.execute(q, (w[:16],)).fetchone():
@@ -935,21 +887,12 @@ class Up2k(object):
# x.set_trace_callback(trace)
def _open_db(self, db_path):
existed = bos.path.exists(db_path)
existed = os.path.exists(db_path)
cur = self._orz(db_path)
ver = self._read_ver(cur)
if not existed and ver is None:
return self._create_db(db_path, cur)
if ver == 4:
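# v5 adds the uploader-ip and upload-time columns (see _upgrade_v4 below); back up the db first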
try:
m = "creating backup before upgrade: "
cur = self._backup_db(db_path, cur, ver, m)
self._upgrade_v4(cur)
ver = 5
except:
self.log("WARN: failed to upgrade from v4", 3)
if ver == DB_VER:
try:
nfiles = next(cur.execute("select count(w) from up"))[0]
@@ -962,38 +905,19 @@ class Up2k(object):
m = "database is version {}, this copyparty only supports versions <= {}"
raise Exception(m.format(ver, DB_VER))
bak = "{}.bak.{:x}.v{}".format(db_path, int(time.time()), ver)
db = cur.connection
cur.close()
db.close()
msg = "creating new DB (old is bad); backup: {}"
if ver:
msg = "creating new DB (too old to upgrade); backup: {}"
cur = self._backup_db(db_path, cur, ver, msg)
db = cur.connection
cur.close()
db.close()
bos.unlink(db_path)
self.log(msg.format(bak))
os.rename(fsenc(db_path), fsenc(bak))
return self._create_db(db_path, None)
def _backup_db(self, db_path, cur, ver, msg):
bak = "{}.bak.{:x}.v{}".format(db_path, int(time.time()), ver)
self.log(msg + bak)
try:
c2 = sqlite3.connect(bak)
with c2:
cur.connection.backup(c2)
return cur
except:
m = "native sqlite3 backup failed; using fallback method:\n"
self.log(m + min_ex())
finally:
c2.close()
db = cur.connection
cur.close()
db.close()
shutil.copy2(fsenc(db_path), fsenc(bak))
return self._orz(db_path)
def _read_ver(self, cur):
for tab in ["ki", "kv"]:
try:
@@ -1020,10 +944,9 @@ class Up2k(object):
idx = r"create index up_w on up(w)"
for cmd in [
r"create table up (w text, mt int, sz int, rd text, fn text, ip text, at int)",
r"create table up (w text, mt int, sz int, rd text, fn text)",
r"create index up_rd on up(rd)",
r"create index up_fn on up(fn)",
r"create index up_ip on up(ip)",
idx,
r"create table mt (w text, k text, v int)",
r"create index mt_w on mt(w)",
@@ -1038,24 +961,13 @@ class Up2k(object):
self.log("created DB at {}".format(db_path))
return cur
def _upgrade_v4(self, cur):
for cmd in [
r"alter table up add column ip text",
r"alter table up add column at int",
r"create index up_ip on up(ip)",
r"update kv set v=5 where k='sver'",
]:
cur.execute(cmd)
cur.connection.commit()
def handle_json(self, cj):
with self.mutex:
if not self.register_vpath(cj["ptop"], cj["vcfg"]):
if cj["ptop"] not in self.registry:
raise Pebkac(410, "location unavailable")
cj["name"] = sanitize_fn(cj["name"], "", [".prologue.html", ".epilogue.html"])
cj["name"] = sanitize_fn(cj["name"], bad=[".prologue.html", ".epilogue.html"])
cj["poke"] = time.time()
wark = self._get_wark(cj)
now = time.time()
@@ -1072,13 +984,13 @@ class Up2k(object):
argv = (wark[:16], wark)
cur = cur.execute(q, argv)
for _, dtime, dsize, dp_dir, dp_fn, ip, at in cur:
for _, dtime, dsize, dp_dir, dp_fn in cur:
if dp_dir.startswith("//") or dp_fn.startswith("//"):
dp_dir, dp_fn = s3dec(dp_dir, dp_fn)
dp_abs = "/".join([cj["ptop"], dp_dir, dp_fn])
# relying on path.exists to return false on broken symlinks
if bos.path.exists(dp_abs):
if os.path.exists(fsenc(dp_abs)):
job = {
"name": dp_fn,
"prel": dp_dir,
@@ -1086,8 +998,6 @@ class Up2k(object):
"ptop": cj["ptop"],
"size": dsize,
"lmod": dtime,
"addr": ip,
"at": at,
"hash": [],
"need": [],
}
@@ -1104,7 +1014,7 @@ class Up2k(object):
for fn in names:
path = os.path.join(job["ptop"], job["prel"], fn)
try:
if bos.path.getsize(path) > 0:
if os.path.getsize(fsenc(path)) > 0:
# upload completed or both present
break
except:
@@ -1137,27 +1047,10 @@ class Up2k(object):
pdir = os.path.join(cj["ptop"], cj["prel"])
job["name"] = self._untaken(pdir, cj["name"], now, cj["addr"])
dst = os.path.join(job["ptop"], job["prel"], job["name"])
if not self.args.nw:
bos.unlink(dst) # TODO ed pls
self._symlink(src, dst)
if cur:
a = [cj[x] for x in "prel name lmod size addr".split()]
a += [cj.get("at") or time.time()]
self.db_add(cur, wark, *a)
cur.connection.commit()
os.unlink(fsenc(dst)) # TODO ed pls
self._symlink(src, dst)
if not job:
vfs = self.asrv.vfs.all_vols[cj["vtop"]]
if vfs.lim:
ap1 = os.path.join(cj["ptop"], cj["prel"])
ap2, cj["prel"] = vfs.lim.all(
cj["addr"], cj["prel"], cj["size"], ap1
)
bos.makedirs(ap2)
vfs.lim.nup(cj["addr"])
vfs.lim.bup(cj["addr"], cj["size"])
job = {
"wark": wark,
"t0": now,
@@ -1188,11 +1081,8 @@ class Up2k(object):
self._new_upload(job)
purl = "/{}/".format("{}/{}".format(job["vtop"], job["prel"]).strip("/"))
return {
"name": job["name"],
"purl": purl,
"size": job["size"],
"lmod": job["lmod"],
"hash": job["need"],
@@ -1209,18 +1099,17 @@ class Up2k(object):
with ren_open(fname, "wb", fdir=fdir, suffix=suffix) as f:
return f["orz"][1]
def _symlink(self, src, dst, verbose=True):
if verbose:
self.log("linking dupe:\n {0}\n {1}".format(src, dst))
def _symlink(self, src, dst):
# TODO store this in linktab so we never delete src if there are links to it
self.log("linking dupe:\n {0}\n {1}".format(src, dst))
if self.args.nw:
return
try:
lsrc = src
ldst = dst
fs1 = bos.stat(os.path.dirname(src)).st_dev
fs2 = bos.stat(os.path.dirname(dst)).st_dev
fs1 = os.stat(fsenc(os.path.split(src)[0])).st_dev
fs2 = os.stat(fsenc(os.path.split(dst)[0])).st_dev
if fs1 == 0:
# py2 on winxp or other unsupported combination
raise OSError()
@@ -1243,7 +1132,7 @@ class Up2k(object):
hops = len(ndst[nc:]) - 1
lsrc = "../" * hops + "/".join(lsrc)
os.symlink(fsenc(lsrc), fsenc(ldst))
except Exception as ex:
except (AttributeError, OSError) as ex:
self.log("cannot symlink; creating copy: " + repr(ex))
shutil.copy2(fsenc(src), fsenc(dst))
@@ -1303,21 +1192,27 @@ class Up2k(object):
a = [dst, job["size"], (int(time.time()), int(job["lmod"]))]
self.lastmod_q.put(a)
a = [job[x] for x in "ptop wark prel name lmod size addr".split()]
a += [job.get("at") or time.time()]
if self.idx_wark(*a):
# legit api sware 2 me mum
if self.idx_wark(
job["ptop"],
job["wark"],
job["prel"],
job["name"],
job["lmod"],
job["size"],
):
del self.registry[ptop][wark]
# in-memory registry is reserved for unfinished uploads
return ret, dst
def idx_wark(self, ptop, wark, rd, fn, lmod, sz, ip, at):
def idx_wark(self, ptop, wark, rd, fn, lmod, sz):
cur = self.cur.get(ptop)
if not cur:
return False
self.db_rm(cur, rd, fn)
self.db_add(cur, wark, rd, fn, lmod, sz, ip, at)
self.db_add(cur, wark, rd, fn, int(lmod), sz)
cur.connection.commit()
if "e2t" in self.flags[ptop]:
@@ -1333,312 +1228,16 @@ class Up2k(object):
except:
db.execute(sql, s3enc(self.mem_cur, rd, fn))
def db_add(self, db, wark, rd, fn, ts, sz, ip, at):
sql = "insert into up values (?,?,?,?,?,?,?)"
v = (wark, int(ts), sz, rd, fn, ip or "", int(at or 0))
def db_add(self, db, wark, rd, fn, ts, sz):
sql = "insert into up values (?,?,?,?,?)"
v = (wark, int(ts), sz, rd, fn)
try:
db.execute(sql, v)
except:
rd, fn = s3enc(self.mem_cur, rd, fn)
v = (wark, int(ts), sz, rd, fn, ip or "", int(at or 0))
v = (wark, ts, sz, rd, fn)
db.execute(sql, v)
def handle_rm(self, uname, ip, vpaths):
n_files = 0
ok = {}
ng = {}
for vp in vpaths:
a, b, c = self._handle_rm(uname, ip, vp)
n_files += a
for k in b:
ok[k] = 1
for k in c:
ng[k] = 1
ng = {k: 1 for k in ng if k not in ok}
ok = len(ok)
ng = len(ng)
return "deleted {} files (and {}/{} folders)".format(n_files, ok, ok + ng)
def _handle_rm(self, uname, ip, vpath):
try:
permsets = [[True, False, False, True]]
vn, rem = self.asrv.vfs.get(vpath, uname, *permsets[0])
unpost = False
except:
# unpost with missing permissions? try read+write and verify with db
if not self.args.unpost:
raise Pebkac(400, "the unpost feature was disabled by server config")
unpost = True
permsets = [[True, True]]
vn, rem = self.asrv.vfs.get(vpath, uname, *permsets[0])
_, _, _, _, dip, dat = self._find_from_vpath(vn.realpath, rem)
m = "you cannot delete this: "
if not dip:
m += "file not found"
elif dip != ip:
m += "not uploaded by (You)"
elif dat < time.time() - self.args.unpost:
m += "uploaded too long ago"
else:
m = None
if m:
raise Pebkac(400, m)
ptop = vn.realpath
atop = vn.canonical(rem, False)
adir, fn = os.path.split(atop)
st = bos.lstat(atop)
scandir = not self.args.no_scandir
if stat.S_ISLNK(st.st_mode) or stat.S_ISREG(st.st_mode):
dbv, vrem = self.asrv.vfs.get(vpath, uname, *permsets[0])
dbv, vrem = dbv.get_dbv(vrem)
voldir = vsplit(vrem)[0]
vpath_dir = vsplit(vpath)[0]
g = [[dbv, voldir, vpath_dir, adir, [[fn, 0]], [], []]]
else:
g = vn.walk("", rem, [], uname, permsets, True, scandir, True)
if unpost:
raise Pebkac(400, "cannot unpost folders")
n_files = 0
for dbv, vrem, _, adir, files, rd, vd in g:
for fn in [x[0] for x in files]:
n_files += 1
abspath = os.path.join(adir, fn)
volpath = "{}/{}".format(vrem, fn).strip("/")
vpath = "{}/{}".format(dbv.vpath, volpath).strip("/")
self.log("rm {}\n {}".format(vpath, abspath))
_ = dbv.get(volpath, uname, *permsets[0])
with self.mutex:
try:
ptop = dbv.realpath
cur, wark, _, _, _, _ = self._find_from_vpath(ptop, volpath)
self._forget_file(ptop, volpath, cur, wark, True)
finally:
cur.connection.commit()
bos.unlink(abspath)
rm = rmdirs(self.log_func, scandir, True, atop)
return n_files, rm[0], rm[1]
def handle_mv(self, uname, svp, dvp):
svn, srem = self.asrv.vfs.get(svp, uname, True, False, True)
svn, srem = svn.get_dbv(srem)
sabs = svn.canonical(srem, False)
if not srem:
raise Pebkac(400, "mv: cannot move a mountpoint")
st = bos.lstat(sabs)
if stat.S_ISREG(st.st_mode) or stat.S_ISLNK(st.st_mode):
with self.mutex:
return self._mv_file(uname, svp, dvp)
jail = svn.get_dbv(srem)[0]
permsets = [[True, False, True]]
scandir = not self.args.no_scandir
# following symlinks is too scary
g = svn.walk("", srem, [], uname, permsets, True, scandir, True)
for dbv, vrem, _, atop, files, rd, vd in g:
if dbv != jail:
# fail early (prevent partial moves)
raise Pebkac(400, "mv: source folder contains other volumes")
g = svn.walk("", srem, [], uname, permsets, True, scandir, True)
for dbv, vrem, _, atop, files, rd, vd in g:
if dbv != jail:
# the actual check (avoid toctou)
raise Pebkac(400, "mv: source folder contains other volumes")
for fn in files:
svpf = "/".join(x for x in [dbv.vpath, vrem, fn[0]] if x)
if not svpf.startswith(svp + "/"): # assert
raise Pebkac(500, "mv: bug at {}, top {}".format(svpf, svp))
dvpf = dvp + svpf[len(svp) :]
with self.mutex:
self._mv_file(uname, svpf, dvpf)
rmdirs(self.log_func, scandir, True, sabs)
return "k"
def _mv_file(self, uname, svp, dvp):
svn, srem = self.asrv.vfs.get(svp, uname, True, False, True)
svn, srem = svn.get_dbv(srem)
dvn, drem = self.asrv.vfs.get(dvp, uname, False, True)
dvn, drem = dvn.get_dbv(drem)
sabs = svn.canonical(srem, False)
dabs = dvn.canonical(drem)
drd, dfn = vsplit(drem)
if bos.path.exists(dabs):
raise Pebkac(400, "mv2: target file exists")
bos.makedirs(os.path.dirname(dabs))
if bos.path.islink(sabs):
dlabs = absreal(sabs)
m = "moving symlink from [{}] to [{}], target [{}]"
self.log(m.format(sabs, dabs, dlabs))
os.unlink(sabs)
self._symlink(dlabs, dabs, False)
# folders are too scary, schedule rescan of both vols
self.need_rescan[svn.vpath] = 1
self.need_rescan[dvn.vpath] = 1
return "k"
c1, w, ftime, fsize, ip, at = self._find_from_vpath(svn.realpath, srem)
c2 = self.cur.get(dvn.realpath)
if ftime is None:
st = bos.stat(sabs)
ftime = st.st_mtime
fsize = st.st_size
if w:
if c2 and c2 != c1:
self._copy_tags(c1, c2, w)
self._forget_file(svn.realpath, srem, c1, w, c1 != c2)
self._relink(w, svn.realpath, srem, dabs)
c1.connection.commit()
if c2:
self.db_add(c2, w, drd, dfn, ftime, fsize, ip, at)
c2.connection.commit()
else:
self.log("not found in src db: [{}]".format(svp))
bos.rename(sabs, dabs)
return "k"
def _copy_tags(self, csrc, cdst, wark):
"""copy all tags for wark from src-db to dst-db"""
w = wark[:16]
if cdst.execute("select * from mt where w=? limit 1", (w,)).fetchone():
return # existing tags in dest db
for _, k, v in csrc.execute("select * from mt where w=?", (w,)):
cdst.execute("insert into mt values(?,?,?)", (w, k, v))
def _find_from_vpath(self, ptop, vrem):
cur = self.cur.get(ptop)
if not cur:
return None, None
rd, fn = vsplit(vrem)
q = "select w, mt, sz, ip, at from up where rd=? and fn=? limit 1"
try:
c = cur.execute(q, (rd, fn))
except:
c = cur.execute(q, s3enc(self.mem_cur, rd, fn))
hit = c.fetchone()
if hit:
wark, ftime, fsize, ip, at = hit
return cur, wark, ftime, fsize, ip, at
return cur, None, None, None, None, None
def _forget_file(self, ptop, vrem, cur, wark, drop_tags):
"""forgets file in db, fixes symlinks, does not delete"""
srd, sfn = vsplit(vrem)
self.log("forgetting {}".format(vrem))
if wark:
self.log("found {} in db".format(wark))
if drop_tags:
if self._relink(wark, ptop, vrem, None):
drop_tags = False
if drop_tags:
q = "delete from mt where w=?"
cur.execute(q, (wark[:16],))
self.db_rm(cur, srd, sfn)
reg = self.registry.get(ptop)
if reg:
if not wark:
wark = [
x
for x, y in reg.items()
if fn in [y["name"], y.get("tnam")] and y["prel"] == vrem
]
if wark and wark in reg:
m = "forgetting partial upload {} ({})"
p = self._vis_job_progress(wark)
self.log(m.format(wark, p))
del reg[wark]
def _relink(self, wark, sptop, srem, dabs):
"""
update symlinks from file at svn/srem to dabs (rename),
or to first remaining full if no dabs (delete)
"""
dupes = []
sabs = os.path.join(sptop, srem)
q = "select rd, fn from up where substr(w,1,16)=? and w=?"
for ptop, cur in self.cur.items():
for rd, fn in cur.execute(q, (wark[:16], wark)):
if rd.startswith("//") or fn.startswith("//"):
rd, fn = s3dec(rd, fn)
dvrem = "/".join([rd, fn]).strip("/")
if ptop != sptop or srem != dvrem:
dupes.append([ptop, dvrem])
self.log("found {} dupe: [{}] {}".format(wark, ptop, dvrem))
if not dupes:
return 0
full = {}
links = {}
for ptop, vp in dupes:
ap = os.path.join(ptop, vp)
try:
d = links if bos.path.islink(ap) else full
d[ap] = [ptop, vp]
except:
self.log("relink: not found: [{}]".format(ap))
if not dabs and not full and links:
# deleting final remaining full copy; swap it with a symlink
slabs = list(sorted(links.keys()))[0]
ptop, rem = links.pop(slabs)
self.log("linkswap [{}] and [{}]".format(sabs, slabs))
bos.unlink(slabs)
bos.rename(sabs, slabs)
self._symlink(slabs, sabs, False)
full[slabs] = [ptop, rem]
if not dabs:
dabs = list(sorted(full.keys()))[0]
for alink in links.keys():
try:
if alink != sabs and absreal(alink) != sabs:
continue
self.log("relinking [{}] to [{}]".format(alink, dabs))
bos.unlink(alink)
except:
pass
self._symlink(dabs, alink, False)
return len(full) + len(links)
def _get_wark(self, cj):
if len(cj["name"]) > 1024 or len(cj["hash"]) > 512 * 1024: # 16TiB
raise Pebkac(400, "name or numchunks not according to spec")
@@ -1660,7 +1259,7 @@ class Up2k(object):
def _hashlist_from_file(self, path):
pp = self.pp if hasattr(self, "pp") else None
fsz = bos.path.getsize(path)
fsz = os.path.getsize(fsenc(path))
csz = up2k_chunksize(fsz)
ret = []
with open(fsenc(path), "rb", 512 * 1024) as f:
@@ -1728,7 +1327,7 @@ class Up2k(object):
for path, sz, times in ready:
self.log("lmod: setting times {} on {}".format(times, path))
try:
bos.utime(path, times)
os.utime(fsenc(path), times)
except:
self.log("lmod: failed to utime ({}, {})".format(path, times))
@@ -1739,22 +1338,19 @@ class Up2k(object):
self.log("could not unsparse [{}]".format(path), 3)
def _snapshot(self):
self.snap_persist_interval = 300 # persist unfinished index every 5 min
self.snap_discard_interval = 21600 # drop unfinished after 6 hours inactivity
self.snap_prev = {}
persist_interval = 30 # persist unfinished uploads index every 30 sec
discard_interval = 21600 # drop unfinished uploads after 6 hours inactivity
prev = {}
while True:
time.sleep(self.snap_persist_interval)
self.do_snapshot()
time.sleep(persist_interval)
with self.mutex:
for k, reg in self.registry.items():
self._snap_reg(prev, k, reg, discard_interval)
def do_snapshot(self):
with self.mutex:
for k, reg in self.registry.items():
self._snap_reg(k, reg)
def _snap_reg(self, ptop, reg):
def _snap_reg(self, prev, ptop, reg, discard_interval):
now = time.time()
histpath = self.asrv.vfs.histtab[ptop]
rm = [x for x in reg.values() if now - x["poke"] > self.snap_discard_interval]
rm = [x for x in reg.values() if now - x["poke"] > discard_interval]
if rm:
m = "dropping {} abandoned uploads in {}".format(len(rm), ptop)
vis = [self._vis_job_progress(x) for x in rm]
@@ -1764,30 +1360,33 @@ class Up2k(object):
try:
# remove the filename reservation
path = os.path.join(job["ptop"], job["prel"], job["name"])
if bos.path.getsize(path) == 0:
bos.unlink(path)
if os.path.getsize(fsenc(path)) == 0:
os.unlink(fsenc(path))
if len(job["hash"]) == len(job["need"]):
# PARTIAL is empty, delete that too
path = os.path.join(job["ptop"], job["prel"], job["tnam"])
bos.unlink(path)
os.unlink(fsenc(path))
except:
pass
path = os.path.join(histpath, "up2k.snap")
if not reg:
if ptop not in self.snap_prev or self.snap_prev[ptop] is not None:
self.snap_prev[ptop] = None
if bos.path.exists(path):
bos.unlink(path)
if ptop not in prev or prev[ptop] is not None:
prev[ptop] = None
if os.path.exists(fsenc(path)):
os.unlink(fsenc(path))
return
newest = max(x["poke"] for _, x in reg.items()) if reg else 0
etag = [len(reg), newest]
if etag == self.snap_prev.get(ptop):
if etag == prev.get(ptop):
return
bos.makedirs(histpath)
try:
os.makedirs(histpath)
except:
pass
path2 = "{}.{}".format(path, os.getpid())
j = json.dumps(reg, indent=2, sort_keys=True).encode("utf-8")
@@ -1797,7 +1396,7 @@ class Up2k(object):
atomic_move(path2, path)
self.log("snap: {} |{}|".format(path, len(reg.keys())))
self.snap_prev[ptop] = etag
prev[ptop] = etag
def _tagger(self):
with self.mutex:
@@ -1845,31 +1444,26 @@ class Up2k(object):
self.n_hashq -= 1
# self.log("hashq {}".format(self.n_hashq))
ptop, rd, fn, ip, at = self.hashq.get()
ptop, rd, fn = self.hashq.get()
# self.log("hashq {} pop {}/{}/{}".format(self.n_hashq, ptop, rd, fn))
if "e2d" not in self.flags[ptop]:
continue
abspath = os.path.join(ptop, rd, fn)
self.log("hashing " + abspath)
inf = bos.stat(abspath)
inf = os.stat(fsenc(abspath))
hashes = self._hashlist_from_file(abspath)
wark = up2k_wark_from_hashlist(self.salt, inf.st_size, hashes)
with self.mutex:
self.idx_wark(ptop, wark, rd, fn, inf.st_mtime, inf.st_size, ip, at)
self.idx_wark(ptop, wark, rd, fn, inf.st_mtime, inf.st_size)
def hash_file(self, ptop, flags, rd, fn, ip, at):
def hash_file(self, ptop, flags, rd, fn):
with self.mutex:
self.register_vpath(ptop, flags)
self.hashq.put([ptop, rd, fn, ip, at])
self.hashq.put([ptop, rd, fn])
self.n_hashq += 1
# self.log("hashq {} push {}/{}/{}".format(self.n_hashq, ptop, rd, fn))
def shutdown(self):
if hasattr(self, "snap_prev"):
self.log("writing snapshot")
self.do_snapshot()
def up2k_chunksize(filesize):
chunksize = 1024 * 1024
@@ -1885,7 +1479,7 @@ def up2k_chunksize(filesize):
def up2k_wark_from_hashlist(salt, filesize, hashes):
"""server-reproducible file identifier, independent of name or location"""
""" server-reproducible file identifier, independent of name or location """
ident = [salt, str(filesize)]
ident.extend(hashes)
ident = "\n".join(ident)
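
The hunk above is cut off right after the identifier string is assembled, so here is a minimal standalone sketch of just the part that is visible: the wark's input is built from nothing but the server salt, the file size, and the per-chunk hashes, which is why it stays stable across renames and moves. The salt and hash values below are made-up placeholders, and the digest step that turns `ident` into the final wark is outside this hunk.

    # sketch of the visible part of up2k_wark_from_hashlist(); placeholder values only
    def wark_ident(salt, filesize, hashes):
        ident = [salt, str(filesize)]
        ident.extend(hashes)        # per-chunk hashes supplied by the uploader
        return "\n".join(ident)     # digested further (not shown here) into the wark

    print(wark_ident("demo-salt", 3 * 1024 * 1024, ["h1", "h2", "h3"]))
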

copyparty/util.py
View File

@@ -4,7 +4,6 @@ from __future__ import print_function, unicode_literals
import re
import os
import sys
import stat
import time
import base64
import select
@@ -17,7 +16,6 @@ import mimetypes
import contextlib
import subprocess as sp # nosec
from datetime import datetime
from collections import Counter
from .__init__ import PY2, WINDOWS, ANYWIN
from .stolen import surrogateescape
@@ -44,23 +42,6 @@ else:
from Queue import Queue # pylint: disable=import-error,no-name-in-module
from StringIO import StringIO as BytesIO
try:
struct.unpack(b">i", b"idgi")
spack = struct.pack
sunpack = struct.unpack
except:
def spack(f, *a, **ka):
return struct.pack(f.decode("ascii"), *a, **ka)
def sunpack(f, *a, **ka):
return struct.unpack(f.decode("ascii"), *a, **ka)
ansi_re = re.compile("\033\\[[^m]*m")
surrogateescape.register_surrogateescape()
FS_ENCODING = sys.getfilesystemencoding()
if WINDOWS and PY2:
@@ -80,7 +61,6 @@ HTTPCODE = {
403: "Forbidden",
404: "Not Found",
405: "Method Not Allowed",
411: "Length Required",
413: "Payload Too Large",
416: "Requested Range Not Satisfiable",
422: "Unprocessable Entity",
@@ -143,6 +123,20 @@ REKOBO_KEY = {
REKOBO_LKEY = {k.lower(): v for k, v in REKOBO_KEY.items()}
class Counter(object):
def __init__(self, v=0):
self.v = v
self.mutex = threading.Lock()
def add(self, delta=1):
with self.mutex:
self.v += delta
def set(self, absval):
with self.mutex:
self.v = absval
class Cooldown(object):
def __init__(self, maxage):
self.maxage = maxage
@@ -237,7 +231,7 @@ def nuprint(msg):
def rice_tid():
tid = threading.current_thread().ident
c = sunpack(b"B" * 5, spack(b">Q", tid)[-5:])
c = struct.unpack(b"B" * 5, struct.pack(b">Q", tid)[-5:])
return "".join("\033[1;37;48;5;{}m{:02x}".format(x, x) for x in c) + "\033[0m"
@@ -288,69 +282,15 @@ def alltrace():
return "\n".join(rret + bret)
def start_stackmon(arg_str, nid):
suffix = "-{}".format(nid) if nid else ""
fp, f = arg_str.rsplit(",", 1)
f = int(f)
t = threading.Thread(
target=stackmon,
args=(fp, f, suffix),
name="stackmon" + suffix,
)
t.daemon = True
t.start()
def stackmon(fp, ival, suffix):
ctr = 0
while True:
ctr += 1
time.sleep(ival)
st = "{}, {}\n{}".format(ctr, time.time(), alltrace())
with open(fp + suffix, "wb") as f:
f.write(st.encode("utf-8", "replace"))
def start_log_thrs(logger, ival, nid):
ival = int(ival)
tname = lname = "log-thrs"
if nid:
tname = "logthr-n{}-i{:x}".format(nid, os.getpid())
lname = tname[3:]
t = threading.Thread(
target=log_thrs,
args=(logger, ival, lname),
name=tname,
)
t.daemon = True
t.start()
def log_thrs(log, ival, name):
while True:
time.sleep(ival)
tv = [x.name for x in threading.enumerate()]
tv = [
x.split("-")[0]
if x.startswith("httpconn-") or x.startswith("thumb-")
else "listen"
if "-listen-" in x
else x
for x in tv
if not x.startswith("pydevd.")
]
tv = ["{}\033[36m{}".format(v, k) for k, v in sorted(Counter(tv).items())]
log(name, "\033[0m \033[33m".join(tv), 3)
def min_ex():
et, ev, tb = sys.exc_info()
tb = traceback.extract_tb(tb)
fmt = "{} @ {} <{}>: {}"
ex = [fmt.format(fp.split(os.sep)[-1], ln, fun, txt) for fp, ln, fun, txt in tb]
ex.append("[{}] {}".format(et.__name__, ev))
return "\n".join(ex[-8:])
tb = traceback.extract_tb(tb, 2)
ex = [
"{} @ {} <{}>: {}".format(fp.split(os.sep)[-1], ln, fun, txt)
for fp, ln, fun, txt in tb
]
ex.append("{}: {}".format(et.__name__, ev))
return "\n".join(ex)
@contextlib.contextmanager
@@ -688,17 +628,6 @@ def humansize(sz, terse=False):
return ret.replace("iB", "").replace(" ", "")
def unhumanize(sz):
try:
return float(sz)
except:
pass
mul = sz[-1:].lower()
mul = {"k": 1024, "m": 1024 * 1024, "g": 1024 * 1024 * 1024}.get(mul, 1)
return float(sz[:-1]) * mul
def get_spd(nbyte, t0, t=None):
if t is None:
t = time.time()
@@ -745,7 +674,7 @@ def undot(path):
return "/".join(ret)
def sanitize_fn(fn, ok, bad):
def sanitize_fn(fn, ok="", bad=[]):
if "/" not in ok:
fn = fn.replace("\\", "/").split("/")[-1]
@@ -774,19 +703,6 @@ def sanitize_fn(fn, ok, bad):
return fn.strip()
def absreal(fpath):
try:
return fsdec(os.path.abspath(os.path.realpath(fsenc(fpath))))
except:
if not WINDOWS:
raise
# cpython bug introduced in 3.8, still exists in 3.9.1,
# some win7sp1 and win10:20H2 boxes cannot realpath a
# networked drive letter such as b"n:" or b"n:\\"
return os.path.abspath(os.path.realpath(fpath))
def u8safe(txt):
try:
return txt.encode("utf-8", "xmlcharrefreplace").decode("utf-8", "replace")
@@ -844,13 +760,6 @@ def unquotep(txt):
return w8dec(unq2)
def vsplit(vpath):
if "/" not in vpath:
return "", vpath
return vpath.rsplit("/", 1)
def w8dec(txt):
"""decodes filesystem-bytes to wtf8"""
if PY2:
@@ -995,10 +904,16 @@ def yieldfile(fn):
yield buf
def hashcopy(fin, fout):
def hashcopy(actor, fin, fout):
is_mp = actor.is_mp
hashobj = hashlib.sha512()
tlen = 0
for buf in fin:
if is_mp:
actor.workload += 1
if actor.workload > 2 ** 31:
actor.workload = 100
tlen += len(buf)
hashobj.update(buf)
fout.write(buf)
@@ -1009,10 +924,15 @@ def hashcopy(fin, fout):
return tlen, hashobj.hexdigest(), digest_b64
def sendfile_py(lower, upper, f, s):
def sendfile_py(lower, upper, f, s, actor=None):
remains = upper - lower
f.seek(lower)
while remains > 0:
if actor:
actor.workload += 1
if actor.workload > 2 ** 31:
actor.workload = 100
# time.sleep(0.01)
buf = f.read(min(1024 * 32, remains))
if not buf:
@@ -1050,9 +970,6 @@ def sendfile_kern(lower, upper, f, s):
def statdir(logger, scandir, lstat, top):
if lstat and not os.supports_follow_symlinks:
scandir = False
try:
btop = fsenc(top)
if scandir and hasattr(os, "scandir"):
@@ -1062,7 +979,8 @@ def statdir(logger, scandir, lstat, top):
try:
yield [fsdec(fh.name), fh.stat(follow_symlinks=not lstat)]
except Exception as ex:
logger(src, "[s] {} @ {}".format(repr(ex), fsdec(fh.path)), 6)
msg = "scan-stat: \033[36m{} @ {}"
logger(msg.format(repr(ex), fsdec(fh.path)))
else:
src = "listdir"
fun = os.lstat if lstat else os.stat
@@ -1071,33 +989,11 @@ def statdir(logger, scandir, lstat, top):
try:
yield [fsdec(name), fun(abspath)]
except Exception as ex:
logger(src, "[s] {} @ {}".format(repr(ex), fsdec(abspath)), 6)
msg = "list-stat: \033[36m{} @ {}"
logger(msg.format(repr(ex), fsdec(abspath)))
except Exception as ex:
logger(src, "{} @ {}".format(repr(ex), top), 1)
def rmdirs(logger, scandir, lstat, top):
if not os.path.exists(fsenc(top)) or not os.path.isdir(fsenc(top)):
top = os.path.dirname(top)
dirs = statdir(logger, scandir, lstat, top)
dirs = [x[0] for x in dirs if stat.S_ISDIR(x[1].st_mode)]
dirs = [os.path.join(top, x) for x in dirs]
ok = []
ng = []
for d in dirs[::-1]:
a, b = rmdirs(logger, scandir, lstat, d)
ok += a
ng += b
try:
os.rmdir(fsenc(top))
ok.append(top)
except:
ng.append(top)
return ok, ng
logger("{}: \033[31m{} @ {}".format(src, repr(ex), top))
def unescape_cookie(orig):
@@ -1139,11 +1035,11 @@ def guess_mime(url, fallback="application/octet-stream"):
if ";" not in ret:
if ret.startswith("text/") or ret.endswith("/javascript"):
ret += "; charset=UTF-8"
return ret
def runcmd(argv):
def runcmd(*argv):
p = sp.Popen(argv, stdout=sp.PIPE, stderr=sp.PIPE)
stdout, stderr = p.communicate()
stdout = stdout.decode("utf-8", "replace")
@@ -1151,8 +1047,8 @@ def runcmd(argv):
return [p.returncode, stdout, stderr]
def chkcmd(argv):
ok, sout, serr = runcmd(argv)
def chkcmd(*argv):
ok, sout, serr = runcmd(*argv)
if ok != 0:
raise Exception(serr)
@@ -1174,7 +1070,10 @@ def gzip_orig_sz(fn):
with open(fsenc(fn), "rb") as f:
f.seek(-4, 2)
rv = f.read(4)
return sunpack(b"I", rv)[0]
try:
return struct.unpack(b"I", rv)[0]
except:
return struct.unpack("I", rv)[0]
def py_desc():

80
copyparty/vcr.py Normal file
View File

@@ -0,0 +1,80 @@
# coding: utf-8
from __future__ import print_function, unicode_literals
import time
import shlex
import subprocess as sp
from .__init__ import PY2
from .util import fsenc
class VCR_Direct(object):
def __init__(self, cli, fpath):
self.cli = cli
self.fpath = fpath
self.log_func = cli.log_func
self.log_src = cli.log_src
def log(self, msg, c=0):
self.log_func(self.log_src, "vcr: {}".format(msg), c)
def run(self):
opts = self.cli.uparam
# fmt: off
cmd = [
"ffmpeg",
"-nostdin",
"-hide_banner",
"-v", "warning",
"-i", fsenc(self.fpath),
"-vf", "scale=640:-4",
"-c:a", "libopus",
"-b:a", "128k",
"-c:v", "libvpx",
"-deadline", "realtime",
"-row-mt", "1"
]
# fmt: on
if "ss" in opts:
cmd.extend(["-ss", opts["ss"]])
if "crf" in opts:
cmd.extend(["-b:v", "0", "-crf", opts["crf"]])
else:
cmd.extend(["-b:v", "{}M".format(opts.get("mbps", 1.2))])
cmd.extend(["-f", "webm", "-"])
comp = str if not PY2 else unicode
cmd = [x.encode("utf-8") if isinstance(x, comp) else x for x in cmd]
self.log(" ".join([shlex.quote(x.decode("utf-8", "replace")) for x in cmd]))
p = sp.Popen(cmd, stdout=sp.PIPE)
self.cli.send_headers(None, mime="video/webm")
fails = 0
while True:
self.log("read")
buf = p.stdout.read(1024 * 4)
if not buf:
fails += 1
if p.poll() is not None or fails > 30:
self.log("ffmpeg exited")
return
time.sleep(0.1)
continue
fails = 0
try:
self.cli.s.sendall(buf)
except:
self.log("client disconnected")
p.kill()
return
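
Since the routing that sends a request to this handler is not shown in this diff, the easiest way to see what VCR_Direct actually asks of ffmpeg is to rebuild its argv outside the server. The sketch below mirrors the option handling in run() above; the "ss", "crf" and "mbps" keys correspond to the uparam lookups in the hunk, while the filename and values are placeholders. The realtime deadline trades compression efficiency for encoding speed, which fits streaming webm straight to the client socket.

    # standalone sketch reproducing the ffmpeg argv built in VCR_Direct.run() above
    import shlex

    def vcr_argv(fpath, opts):
        cmd = [
            "ffmpeg", "-nostdin", "-hide_banner", "-v", "warning",
            "-i", fpath,
            "-vf", "scale=640:-4",              # shrink to 640px wide, keep aspect
            "-c:a", "libopus", "-b:a", "128k",
            "-c:v", "libvpx", "-deadline", "realtime", "-row-mt", "1",
        ]
        if "ss" in opts:
            cmd += ["-ss", opts["ss"]]          # start offset
        if "crf" in opts:
            cmd += ["-b:v", "0", "-crf", opts["crf"]]             # quality-targeted
        else:
            cmd += ["-b:v", "{}M".format(opts.get("mbps", 1.2))]  # bitrate-targeted
        return cmd + ["-f", "webm", "-"]        # mux webm to stdout

    print(" ".join(shlex.quote(x) for x in vcr_argv("clip.mkv", {"crf": "32"})))
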

copyparty/web/baguettebox.js
View File

@@ -13,7 +13,7 @@ window.baguetteBox = (function () {
captions: true,
buttons: 'auto',
noScrollbars: false,
bodyClass: 'bbox-open',
bodyClass: 'baguetteBox-open',
titleTag: false,
async: false,
preload: 2,
@@ -22,46 +22,37 @@ window.baguetteBox = (function () {
afterHide: null,
onChange: null,
},
overlay, slider, btnPrev, btnNext, btnHelp, btnVmode, btnClose,
overlay, slider, previousButton, nextButton, closeButton,
currentGallery = [],
currentIndex = 0,
isOverlayVisible = false,
touch = {}, // start-pos
touchFlag = false, // busy
re_i = /.+\.(gif|jpe?g|png|webp)(\?|$)/i,
re_v = /.+\.(webm|mp4)(\?|$)/i,
regex = /.+\.(gif|jpe?g|png|webp)/i,
data = {}, // all galleries
imagesElements = [],
documentLastFocus = null,
isFullscreen = false,
vmute = false,
vloop = false,
vnext = false,
resume_mp = false;
documentLastFocus = null;
var onFSC = function (e) {
isFullscreen = !!document.fullscreenElement;
};
var overlayClickHandler = function (e) {
if (e.target.id.indexOf('baguette-img') !== -1)
var overlayClickHandler = function (event) {
if (event.target.id.indexOf('baguette-img') !== -1) {
hideOverlay();
}
};
var touchstartHandler = function (e) {
var touchstartHandler = function (event) {
touch.count++;
if (touch.count > 1)
if (touch.count > 1) {
touch.multitouch = true;
touch.startX = e.changedTouches[0].pageX;
touch.startY = e.changedTouches[0].pageY;
}
touch.startX = event.changedTouches[0].pageX;
touch.startY = event.changedTouches[0].pageY;
};
var touchmoveHandler = function (e) {
if (touchFlag || touch.multitouch)
var touchmoveHandler = function (event) {
if (touchFlag || touch.multitouch) {
return;
e.preventDefault ? e.preventDefault() : e.returnValue = false;
var touchEvent = e.touches[0] || e.changedTouches[0];
}
event.preventDefault ? event.preventDefault() : event.returnValue = false;
var touchEvent = event.touches[0] || event.changedTouches[0];
if (touchEvent.pageX - touch.startX > 40) {
touchFlag = true;
showPreviousImage();
@@ -74,19 +65,19 @@ window.baguetteBox = (function () {
};
var touchendHandler = function () {
touch.count--;
if (touch.count <= 0)
if (touch.count <= 0) {
touch.multitouch = false;
}
touchFlag = false;
};
var contextmenuHandler = function () {
touchendHandler();
};
var trapFocusInsideOverlay = function (e) {
if (overlay.style.display === 'block' && (overlay.contains && !overlay.contains(e.target))) {
e.stopPropagation();
btnClose.focus();
var trapFocusInsideOverlay = function (event) {
if (overlay.style.display === 'block' && (overlay.contains && !overlay.contains(event.target))) {
event.stopPropagation();
initFocus();
}
};
@@ -97,7 +88,7 @@ window.baguetteBox = (function () {
}
function bindImageClickListeners(selector, userOptions) {
var galleryNodeList = QSA(selector);
var galleryNodeList = document.querySelectorAll(selector);
var selectorData = {
galleries: [],
nodeList: galleryNodeList
@@ -105,26 +96,33 @@ window.baguetteBox = (function () {
data[selector] = selectorData;
[].forEach.call(galleryNodeList, function (galleryElement) {
if (userOptions && userOptions.filter) {
regex = userOptions.filter;
}
var tagsNodeList = [];
if (galleryElement.tagName === 'A')
if (galleryElement.tagName === 'A') {
tagsNodeList = [galleryElement];
else
} else {
tagsNodeList = galleryElement.getElementsByTagName('a');
}
tagsNodeList = [].filter.call(tagsNodeList, function (element) {
if (element.className.indexOf(userOptions && userOptions.ignoreClass) === -1)
return re_i.test(element.href) || re_v.test(element.href);
if (element.className.indexOf(userOptions && userOptions.ignoreClass) === -1) {
return regex.test(element.href);
}
});
if (!tagsNodeList.length)
if (tagsNodeList.length === 0) {
return;
}
var gallery = [];
[].forEach.call(tagsNodeList, function (imageElement, imageIndex) {
var imageElementClickHandler = function (e) {
if (ctrl(e))
var imageElementClickHandler = function (event) {
if (event && (event.ctrlKey || event.metaKey))
return true;
e.preventDefault ? e.preventDefault() : e.returnValue = false;
event.preventDefault ? event.preventDefault() : event.returnValue = false;
prepareOverlay(gallery, userOptions);
showOverlay(imageIndex);
};
@@ -142,186 +140,93 @@ window.baguetteBox = (function () {
}
function clearCachedData() {
for (var selector in data)
if (data.hasOwnProperty(selector))
for (var selector in data) {
if (data.hasOwnProperty(selector)) {
removeFromCache(selector);
}
}
}
function removeFromCache(selector) {
if (!data.hasOwnProperty(selector))
if (!data.hasOwnProperty(selector)) {
return;
}
var galleries = data[selector].galleries;
[].forEach.call(galleries, function (gallery) {
[].forEach.call(gallery, function (imageItem) {
unbind(imageItem.imageElement, 'click', imageItem.eventHandler);
});
if (currentGallery === gallery)
if (currentGallery === gallery) {
currentGallery = [];
}
});
delete data[selector];
}
function buildOverlay() {
overlay = ebi('bbox-overlay');
if (!overlay) {
var ctr = mknod('div');
ctr.innerHTML = (
'<div id="bbox-overlay" role="dialog">' +
'<div id="bbox-slider"></div>' +
'<button id="bbox-prev" class="bbox-btn" type="button" aria-label="Previous">&lt;</button>' +
'<button id="bbox-next" class="bbox-btn" type="button" aria-label="Next">&gt;</button>' +
'<div id="bbox-btns">' +
'<button id="bbox-help" type="button">?</button>' +
'<button id="bbox-vmode" type="button" tt="a"></button>' +
'<button id="bbox-close" type="button" aria-label="Close">X</button>' +
'</div></div>'
);
overlay = ctr.firstChild;
QS('body').appendChild(overlay);
tt.att(overlay);
overlay = ebi('baguetteBox-overlay');
if (overlay) {
slider = ebi('baguetteBox-slider');
previousButton = ebi('previous-button');
nextButton = ebi('next-button');
closeButton = ebi('close-button');
return;
}
slider = ebi('bbox-slider');
btnPrev = ebi('bbox-prev');
btnNext = ebi('bbox-next');
btnHelp = ebi('bbox-help');
btnVmode = ebi('bbox-vmode');
btnClose = ebi('bbox-close');
overlay = mknod('div');
overlay.setAttribute('role', 'dialog');
overlay.id = 'baguetteBox-overlay';
document.getElementsByTagName('body')[0].appendChild(overlay);
slider = mknod('div');
slider.id = 'baguetteBox-slider';
overlay.appendChild(slider);
previousButton = mknod('button');
previousButton.setAttribute('type', 'button');
previousButton.id = 'previous-button';
previousButton.setAttribute('aria-label', 'Previous');
previousButton.innerHTML = '&lt;';
overlay.appendChild(previousButton);
nextButton = mknod('button');
nextButton.setAttribute('type', 'button');
nextButton.id = 'next-button';
nextButton.setAttribute('aria-label', 'Next');
nextButton.innerHTML = '&gt;';
overlay.appendChild(nextButton);
closeButton = mknod('button');
closeButton.setAttribute('type', 'button');
closeButton.id = 'close-button';
closeButton.setAttribute('aria-label', 'Close');
closeButton.innerHTML = '&times;';
overlay.appendChild(closeButton);
previousButton.className = nextButton.className = closeButton.className = 'baguetteBox-button';
bindEvents();
}
function halp() {
if (ebi('bbox-halp'))
return;
var list = [
['<b># hotkey</b>', '<b># operation</b>'],
['escape', 'close'],
['left, J', 'previous file'],
['right, L', 'next file'],
['home', 'first file'],
['end', 'last file'],
['space, P, K', 'video: play / pause'],
['U', 'video: seek 10sec back'],
['P', 'video: seek 10sec ahead'],
['M', 'video: toggle mute'],
['R', 'video: toggle loop'],
['C', 'video: toggle auto-next'],
['F', 'video: toggle fullscreen'],
],
d = mknod('table'),
html = ['<tbody>'];
for (var a = 0; a < list.length; a++)
html.push('<tr><td>' + list[a][0] + '</td><td>' + list[a][1] + '</td></tr>');
d.innerHTML = html.join('\n') + '</tbody>';
d.setAttribute('id', 'bbox-halp');
d.onclick = function () {
overlay.removeChild(d);
};
overlay.appendChild(d);
}
function keyDownHandler(e) {
if (e.ctrlKey || e.altKey || e.metaKey || e.isComposing)
return;
var k = e.code + '', v = vid();
if (k == "ArrowLeft" || k == "KeyJ")
showPreviousImage();
else if (k == "ArrowRight" || k == "KeyL")
showNextImage();
else if (k == "Escape")
hideOverlay();
else if (k == "Home")
showFirstImage(e);
else if (k == "End")
showLastImage(e);
else if (k == "Space" || k == "KeyP" || k == "KeyK")
playpause();
else if (k == "KeyU" || k == "KeyO")
relseek(k == "KeyU" ? -10 : 10);
else if (k == "KeyM" && v) {
v.muted = vmute = !vmute;
mp_ctl();
function keyDownHandler(event) {
switch (event.keyCode) {
case 37: // Left
showPreviousImage();
break;
case 39: // Right
showNextImage();
break;
case 27: // Esc
hideOverlay();
break;
case 36: // Home
showFirstImage(event);
break;
case 35: // End
showLastImage(event);
break;
}
else if (k == "KeyR" && v) {
vloop = !vloop;
vnext = vnext && !vloop;
setVmode();
}
else if (k == "KeyC" && v) {
vnext = !vnext;
vloop = vloop && !vnext;
setVmode();
}
else if (k == "KeyF")
try {
if (isFullscreen)
document.exitFullscreen();
else
v.requestFullscreen();
}
catch (ex) { }
}
function setVmode() {
var v = vid();
ebi('bbox-vmode').style.display = v ? '' : 'none';
if (!v)
return;
var msg = 'When video ends, ', tts = '', lbl;
if (vloop) {
lbl = 'Loop';
msg += 'repeat it';
tts = '$NHotkey: R';
}
else if (vnext) {
lbl = 'Cont';
msg += 'continue to next';
tts = '$NHotkey: C';
}
else {
lbl = 'Stop';
msg += 'just stop'
}
btnVmode.setAttribute('aria-label', msg);
btnVmode.setAttribute('tt', msg + tts);
btnVmode.textContent = lbl;
v.loop = vloop
if (vloop && v.paused)
v.play();
}
function tglVmode() {
if (vloop) {
vnext = true;
vloop = false;
}
else if (vnext)
vnext = false;
else
vloop = true;
setVmode();
if (tt.en)
tt.show.bind(this)();
}
function keyUpHandler(e) {
if (e.ctrlKey || e.altKey || e.metaKey || e.isComposing)
return;
var k = e.code + '';
if (k == "Space")
ev(e);
}
var passiveSupp = false;
@@ -343,11 +248,9 @@ window.baguetteBox = (function () {
function bindEvents() {
bind(overlay, 'click', overlayClickHandler);
bind(btnPrev, 'click', showPreviousImage);
bind(btnNext, 'click', showNextImage);
bind(btnClose, 'click', hideOverlay);
bind(btnVmode, 'click', tglVmode);
bind(btnHelp, 'click', halp);
bind(previousButton, 'click', showPreviousImage);
bind(nextButton, 'click', showNextImage);
bind(closeButton, 'click', hideOverlay);
bind(slider, 'contextmenu', contextmenuHandler);
bind(overlay, 'touchstart', touchstartHandler, nonPassiveEvent);
bind(overlay, 'touchmove', touchmoveHandler, passiveEvent);
@@ -357,11 +260,9 @@ window.baguetteBox = (function () {
function unbindEvents() {
unbind(overlay, 'click', overlayClickHandler);
unbind(btnPrev, 'click', showPreviousImage);
unbind(btnNext, 'click', showNextImage);
unbind(btnClose, 'click', hideOverlay);
unbind(btnVmode, 'click', tglVmode);
unbind(btnHelp, 'click', halp);
unbind(previousButton, 'click', showPreviousImage);
unbind(nextButton, 'click', showNextImage);
unbind(closeButton, 'click', hideOverlay);
unbind(slider, 'contextmenu', contextmenuHandler);
unbind(overlay, 'touchstart', touchstartHandler, nonPassiveEvent);
unbind(overlay, 'touchmove', touchmoveHandler, passiveEvent);
@@ -370,9 +271,9 @@ window.baguetteBox = (function () {
}
function prepareOverlay(gallery, userOptions) {
if (currentGallery === gallery)
if (currentGallery === gallery) {
return;
}
currentGallery = gallery;
setOptions(userOptions);
slider.innerHTML = '';
@@ -386,8 +287,8 @@ window.baguetteBox = (function () {
fullImage.id = 'baguette-img-' + i;
imagesElements.push(fullImage);
imagesFiguresIds.push('bbox-figure-' + i);
imagesCaptionsIds.push('bbox-figcaption-' + i);
imagesFiguresIds.push('baguetteBox-figure-' + i);
imagesCaptionsIds.push('baguetteBox-figcaption-' + i);
slider.appendChild(imagesElements[i]);
}
overlay.setAttribute('aria-labelledby', imagesFiguresIds.join(' '));
@@ -395,21 +296,23 @@ window.baguetteBox = (function () {
}
function setOptions(newOptions) {
if (!newOptions)
if (!newOptions) {
newOptions = {};
}
for (var item in defaults) {
options[item] = defaults[item];
if (typeof newOptions[item] !== 'undefined')
if (typeof newOptions[item] !== 'undefined') {
options[item] = newOptions[item];
}
}
slider.style.transition = (options.animation === 'fadeIn' ? 'opacity .4s ease' :
options.animation === 'slideIn' ? '' : 'none');
if (options.buttons === 'auto' && ('ontouchstart' in window || currentGallery.length === 1))
if (options.buttons === 'auto' && ('ontouchstart' in window || currentGallery.length === 1)) {
options.buttons = false;
}
btnPrev.style.display = btnNext.style.display = (options.buttons ? '' : 'none');
previousButton.style.display = nextButton.style.display = (options.buttons ? '' : 'none');
}
function showOverlay(chosenImageIndex) {
@@ -417,12 +320,11 @@ window.baguetteBox = (function () {
document.documentElement.style.overflowY = 'hidden';
document.body.style.overflowY = 'scroll';
}
if (overlay.style.display === 'block')
if (overlay.style.display === 'block') {
return;
}
bind(document, 'keydown', keyDownHandler);
bind(document, 'keyup', keyUpHandler);
bind(document, 'fullscreenchange', onFSC);
currentIndex = chosenImageIndex;
touch = {
count: 0,
@@ -439,48 +341,50 @@ window.baguetteBox = (function () {
// Fade in overlay
setTimeout(function () {
overlay.className = 'visible';
if (options.bodyClass && document.body.classList)
if (options.bodyClass && document.body.classList) {
document.body.classList.add(options.bodyClass);
if (options.afterShow)
}
if (options.afterShow) {
options.afterShow();
}
}, 50);
if (options.onChange)
if (options.onChange) {
options.onChange(currentIndex, imagesElements.length);
}
documentLastFocus = document.activeElement;
btnClose.focus();
initFocus();
isOverlayVisible = true;
}
function initFocus() {
if (options.buttons) {
previousButton.focus();
} else {
closeButton.focus();
}
}
function hideOverlay(e) {
ev(e);
playvid(false);
if (options.noScrollbars) {
document.documentElement.style.overflowY = 'auto';
document.body.style.overflowY = 'auto';
}
if (overlay.style.display === 'none')
if (overlay.style.display === 'none') {
return;
}
unbind(document, 'keydown', keyDownHandler);
unbind(document, 'keyup', keyUpHandler);
unbind(document, 'fullscreenchange', onFSC);
// Fade out and hide the overlay
overlay.className = '';
setTimeout(function () {
overlay.style.display = 'none';
if (options.bodyClass && document.body.classList)
if (options.bodyClass && document.body.classList) {
document.body.classList.remove(options.bodyClass);
var h = ebi('bbox-halp');
if (h)
h.parentNode.removeChild(h);
if (options.afterHide)
}
if (options.afterHide) {
options.afterHide();
}
documentLastFocus && documentLastFocus.focus();
isOverlayVisible = false;
}, 500);
@@ -490,68 +394,59 @@ window.baguetteBox = (function () {
var imageContainer = imagesElements[index];
var galleryItem = currentGallery[index];
if (typeof imageContainer === 'undefined' || typeof galleryItem === 'undefined')
if (typeof imageContainer === 'undefined' || typeof galleryItem === 'undefined') {
return; // out-of-bounds or gallery dirty
}
if (imageContainer.querySelector('img, video'))
// was loaded, cb and bail
return callback ? callback() : null;
// maybe unloaded video
while (imageContainer.firstChild)
imageContainer.removeChild(imageContainer.firstChild);
if (imageContainer.getElementsByTagName('img')[0]) {
// image is loaded, cb and bail
if (callback) {
callback();
}
return;
}
var imageElement = galleryItem.imageElement,
imageSrc = imageElement.href,
is_vid = re_v.test(imageSrc),
thumbnailElement = imageElement.querySelector('img, video'),
thumbnailElement = imageElement.getElementsByTagName('img')[0],
imageCaption = typeof options.captions === 'function' ?
options.captions.call(currentGallery, imageElement) :
imageElement.getAttribute('data-caption') || imageElement.title;
imageSrc += imageSrc.indexOf('?') < 0 ? '?cache' : '&cache';
if (is_vid && index != currentIndex)
return; // no preload
var figure = mknod('figure');
figure.id = 'bbox-figure-' + index;
figure.innerHTML = '<div class="bbox-spinner">' +
'<div class="bbox-double-bounce1"></div>' +
'<div class="bbox-double-bounce2"></div>' +
figure.id = 'baguetteBox-figure-' + index;
figure.innerHTML = '<div class="baguetteBox-spinner">' +
'<div class="baguetteBox-double-bounce1"></div>' +
'<div class="baguetteBox-double-bounce2"></div>' +
'</div>';
if (options.captions && imageCaption) {
var figcaption = mknod('figcaption');
figcaption.id = 'bbox-figcaption-' + index;
figcaption.id = 'baguetteBox-figcaption-' + index;
figcaption.innerHTML = imageCaption;
figure.appendChild(figcaption);
}
imageContainer.appendChild(figure);
var image = mknod(is_vid ? 'video' : 'img');
clmod(imageContainer, 'vid', is_vid);
image.addEventListener(is_vid ? 'loadedmetadata' : 'load', function () {
var image = mknod('img');
image.onload = function () {
// Remove loader element
var spinner = QS('#baguette-img-' + index + ' .bbox-spinner');
var spinner = document.querySelector('#baguette-img-' + index + ' .baguetteBox-spinner');
figure.removeChild(spinner);
if (!options.async && callback)
if (!options.async && callback) {
callback();
});
}
};
image.setAttribute('src', imageSrc);
if (is_vid) {
image.setAttribute('controls', 'controls');
image.onended = vidEnd;
}
image.alt = thumbnailElement ? thumbnailElement.alt || '' : '';
if (options.titleTag && imageCaption)
if (options.titleTag && imageCaption) {
image.title = imageCaption;
}
figure.appendChild(image);
if (options.async && callback)
if (options.async && callback) {
callback();
}
}
function showNextImage(e) {
@@ -564,20 +459,26 @@ window.baguetteBox = (function () {
return show(currentIndex - 1);
}
function showFirstImage(e) {
if (e)
e.preventDefault();
function showFirstImage(event) {
if (event) {
event.preventDefault();
}
return show(0);
}
function showLastImage(e) {
if (e)
e.preventDefault();
function showLastImage(event) {
if (event) {
event.preventDefault();
}
return show(currentGallery.length - 1);
}
/**
* Move the gallery to a specific index
* @param `index` {number} - the position of the image
* @param `gallery` {array} - gallery which should be opened, if omitted assumes the currently opened one
* @return {boolean} - true on success or false if the index is invalid
*/
function show(index, gallery) {
if (!isOverlayVisible && index >= 0 && index < gallery.length) {
prepareOverlay(gallery, options);
@@ -585,25 +486,18 @@ window.baguetteBox = (function () {
return true;
}
if (index < 0) {
if (options.animation)
if (options.animation) {
bounceAnimation('left');
}
return false;
}
if (index >= imagesElements.length) {
if (options.animation)
if (options.animation) {
bounceAnimation('right');
}
return false;
}
var v = vid();
if (v) {
v.src = '';
v.load();
v.parentNode.removeChild(v);
}
currentIndex = index;
loadImage(currentIndex, function () {
preloadNext(currentIndex);
@@ -611,49 +505,17 @@ window.baguetteBox = (function () {
});
updateOffset();
if (options.onChange)
if (options.onChange) {
options.onChange(currentIndex, imagesElements.length);
}
return true;
}
function vid() {
return imagesElements[currentIndex].querySelector('video');
}
function playvid(play) {
if (vid())
vid()[play ? 'play' : 'pause']();
}
function playpause() {
var v = vid();
if (v)
v[v.paused ? "play" : "pause"]();
}
function relseek(sec) {
if (vid())
vid().currentTime += sec;
}
function vidEnd() {
if (this == vid() && vnext)
showNextImage();
}
function mp_ctl() {
var v = vid();
if (!vmute && v && mp.au && !mp.au.paused) {
mp.fade_out();
resume_mp = true;
}
else if (resume_mp && (vmute || !v) && mp.au && mp.au.paused) {
mp.fade_in();
resume_mp = false;
}
}
/**
* Triggers the bounce animation
* @param {('left'|'right')} direction - Direction of the movement
*/
function bounceAnimation(direction) {
slider.className = 'bounce-from-' + direction;
setTimeout(function () {
@@ -672,30 +534,21 @@ window.baguetteBox = (function () {
} else {
slider.style.transform = 'translate3d(' + offset + ',0,0)';
}
playvid(false);
var v = vid();
if (v) {
playvid(true);
v.muted = vmute;
v.loop = vloop;
}
mp_ctl();
setVmode();
}
function preloadNext(index) {
if (index - currentIndex >= options.preload)
if (index - currentIndex >= options.preload) {
return;
}
loadImage(index + 1, function () {
preloadNext(index + 1);
});
}
function preloadPrev(index) {
if (currentIndex - index >= options.preload)
if (currentIndex - index >= options.preload) {
return;
}
loadImage(index - 1, function () {
preloadPrev(index - 1);
});
@@ -713,8 +566,7 @@ window.baguetteBox = (function () {
unbindEvents();
clearCachedData();
unbind(document, 'keydown', keyDownHandler);
unbind(document, 'keyup', keyUpHandler);
document.getElementsByTagName('body')[0].removeChild(ebi('bbox-overlay'));
document.getElementsByTagName('body')[0].removeChild(ebi('baguetteBox-overlay'));
data = {};
currentGallery = [];
currentIndex = 0;
@@ -725,8 +577,6 @@ window.baguetteBox = (function () {
show: show,
showNext: showNextImage,
showPrevious: showPreviousImage,
relseek: relseek,
playpause: playpause,
hide: hideOverlay,
destroy: destroyPlugin
};

copyparty/web/browser.css
View File

@@ -22,8 +22,37 @@ html, body {
margin: 0;
padding: 0;
}
pre, code, tt {
body {
padding-bottom: 5em;
}
#tt {
position: fixed;
max-width: 34em;
background: #222;
border: 0 solid #555;
overflow: hidden;
margin-top: 1em;
padding: 0 1em;
height: 0;
opacity: .1;
transition: opacity 0.14s, height 0.14s, padding 0.14s;
box-shadow: 0 .2em .5em #222;
border-radius: .4em;
z-index: 9001;
}
#tt.show {
padding: 1em;
height: auto;
border-width: .2em 0;
opacity: 1;
}
#tt code {
background: #3c3c3c;
padding: .2em .3em;
border-top: 1px solid #777;
border-radius: .3em;
font-family: monospace, monospace;
line-height: 2em;
}
#path,
#path * {
@@ -55,10 +84,6 @@ pre, code, tt {
padding: .3em 0;
scroll-margin-top: 45vh;
}
#files tr {
scroll-margin-top: 25vh;
scroll-margin-bottom: 20vh;
}
#files tbody div a {
color: #f5a;
}
@@ -113,7 +138,8 @@ a, #files tbody div a:last-child {
border-top: 1px solid #383838;
}
#files tbody td:nth-child(3) {
font-family: monospace, monospace;
font-family: monospace;
font-size: 1.3em;
text-align: right;
padding-right: 1em;
white-space: nowrap;
@@ -173,31 +199,15 @@ a, #files tbody div a:last-child {
margin: .8em 0;
}
#srv_info {
color: #a73;
background: #333;
position: absolute;
opacity: .5;
font-size: .8em;
top: .5em;
color: #fc5;
position: absolute;
top: .5em;
left: 2em;
padding-right: .5em;
}
#srv_info span {
color: #aaa;
}
#acc_info {
position: absolute;
font-size: .81em;
top: .5em;
right: 2em;
color: #999;
}
#acc_info span {
color: #999;
margin-right: .6em;
}
#acc_info span.warn {
color: #f4c;
border-bottom: 1px solid rgba(255,68,204,0.6);
color: #fff;
}
#files tbody a.play {
color: #e70;
@@ -224,7 +234,6 @@ html.light #ggrid a.sel {
border-color: #c37;
}
#files tbody tr.sel:hover td,
#files tbody tr.sel:focus td,
#ggrid a.sel:hover,
html.light #ggrid a.sel:hover {
color: #fff;
@@ -259,21 +268,6 @@ html.light #ggrid a.sel {
color: #fff;
text-shadow: 0 0 1px #fff;
}
#files tr:focus {
outline: none;
position: relative;
}
#files tr:focus td {
background: #111;
border-color: #fc0 #111 #fc0 #111;
box-shadow: 0 .2em 0 #fc0, 0 -.2em 0 #fc0;
}
#files tr:focus td:first-child {
box-shadow: -.2em .2em 0 #fc0, -.2em -.2em 0 #fc0;
}
#files tr:focus+tr td {
border-top: 1px solid transparent;
}
#blocked {
position: fixed;
top: 0;
@@ -331,18 +325,10 @@ html.light #ggrid a.sel {
height: 100%;
background: #3c3c3c;
}
#wtgrid,
#wtico {
cursor: url(/.cpr/dd/4.png), pointer;
animation: cursor 500ms;
position: relative;
top: -.06em;
}
#wtgrid {
font-size: .8em;
top: -.12em;
}
#wtgrid:hover,
#wtico:hover {
animation: cursor 500ms infinite;
}
@@ -358,9 +344,9 @@ html.light #ggrid a.sel {
}
#wtoggle {
position: absolute;
white-space: nowrap;
top: -1.2em;
right: 0;
width: 1.2em;
height: 1em;
font-size: 2em;
line-height: 1em;
@@ -369,7 +355,7 @@ html.light #ggrid a.sel {
background: #3c3c3c;
box-shadow: 0 0 .5em #222;
border-radius: .3em 0 0 0;
padding: .2em .2em;
padding: .2em 0 0 .07em;
color: #fff;
}
#wzip, #wnp {
@@ -391,6 +377,12 @@ html.light #ggrid a.sel {
#wtoggle * {
line-height: 1em;
}
#wtoggle.np {
width: 5.5em;
}
#wtoggle.sel {
width: 6.4em;
}
#wtoggle.sel #wzip,
#wtoggle.np #wnp {
display: inline-block;
@@ -398,42 +390,15 @@ html.light #ggrid a.sel {
#wtoggle.sel.np #wnp {
display: none;
}
#wfm a,
#wzip a {
font-size: .5em;
font-size: .4em;
padding: 0 .3em;
margin: -.3em .2em;
position: relative;
display: inline-block;
}
#wfm span {
font-size: .6em;
display: block;
}
#wfm a:not(.en) {
opacity: .3;
color: #f6c;
}
html.light #wfm a:not(.en) {
color: #c4a;
}
#files tbody tr.c1 td {
animation: fcut1 .5s ease-out;
}
#files tbody tr.c2 td {
animation: fcut2 .5s ease-out;
}
@keyframes fcut1 {
0% {opacity:0}
100% {opacity:1}
}
@keyframes fcut2 {
0% {opacity:0}
100% {opacity:1}
}
#wzip a {
font-size: .4em;
margin: -.3em .3em;
#wzip a+a {
margin-left: .8em;
}
#wtoggle.sel #wzip #selzip {
top: -.6em;
@@ -499,17 +464,6 @@ html.light #wfm a:not(.en) {
max-width: 9em;
}
}
@media (max-width: 35em) {
#ops>a[data-dest="new_md"],
#ops>a[data-dest="msg"] {
display: none;
}
#op_mkdir.act+div,
#op_mkdir.act+div+div {
display: block;
margin-top: 1em;
}
}
@@ -691,7 +645,6 @@ input.eq_gain {
#wrap {
margin-top: 2em;
min-height: 90vh;
padding-bottom: 5em;
}
#tree {
display: none;
@@ -808,14 +761,9 @@ input.eq_gain {
display: block;
width: 1em;
border-radius: .2em;
margin: -1.2em auto 0 auto;
top: 2em;
position: relative;
margin: -1.3em auto 0 auto;
background: #444;
}
#files th span {
position: relative;
}
#files>thead>tr>th.min,
#files td.min {
display: none;
@@ -837,8 +785,7 @@ input.eq_gain {
color: #300;
background: #fea;
}
.opwide,
#op_unpost {
.opwide {
max-width: none;
margin-right: 1.5em;
}
@@ -865,13 +812,11 @@ input.eq_gain {
border-bottom: 1px solid #555;
}
#thumbs,
#au_osd_cv,
#u2tdate {
#au_osd_cv {
opacity: .3;
}
#griden.on+#thumbs,
#au_os_ctl.on+#au_osd_cv,
#u2turbo.on+#u2tdate {
#au_os_ctl.on+#au_osd_cv {
opacity: 1;
}
#ghead {
@@ -940,80 +885,10 @@ html.light #ggrid a:hover {
color: #015;
box-shadow: 0 .1em .5em #aaa;
}
#op_unpost {
padding: 1em;
}
#op_unpost td {
padding: .2em .4em;
}
#op_unpost a {
margin: 0;
padding: 0;
}
#rui {
position: fixed;
top: 0;
left: 0;
width: calc(100% - 2em);
height: auto;
overflow: auto;
max-height: calc(100% - 2em);
border-bottom: .5em solid #999;
box-shadow: 0 0 5em rgba(0,0,0,0.8);
background: #333;
padding: 1em;
z-index: 765;
}
html.light #rui {
color: #fff;
}
#rui div+div {
margin-top: 1em;
}
#rui table {
width: 100%;
border-collapse: collapse;
}
#rui td+td {
padding: .2em 0 .2em .5em;
}
#rn_vadv input {
font-family: monospace, monospace;
}
#rui td+td,
#rui td input[type="text"] {
width: 100%;
}
#rn_f.m td:first-child {
white-space: nowrap;
}
#rn_f.m td+td {
width: 50%;
}
#rn_f .err td {
background: #c00;
}
#rn_f .err input[readonly] {
background: #600;
color: #fc0;
}
#rui input[readonly] {
color: #fff;
background: #444;
border: 1px solid #777;
padding: .2em .25em;
}
#rui h1 {
margin: 0 0 .3em 0;
padding: 0;
font-size: 1.5em;
}
#pvol,
#barbuf,
#barpos,
#u2conf label,
#rui label,
#ops {
#u2conf label {
-webkit-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
@@ -1034,11 +909,6 @@ html.light #rui {
@@ -1049,6 +919,15 @@ html.light {
background: #eee;
text-shadow: none;
}
html.light #tt {
background: #fff;
border-color: #888;
box-shadow: 0 .3em 1em rgba(0,0,0,0.4);
}
html.light #tt code {
background: #060;
color: #fff;
}
html.light #ops,
html.light .opbox,
html.light #srch_form {
@@ -1079,14 +958,10 @@ html.light .tgl.btn.on {
}
html.light #srv_info {
color: #c83;
background: #eee;
}
html.light #srv_info,
html.light #acc_info {
text-shadow: 1px 1px 0 #fff;
}
html.light #srv_info span {
color: #777;
color: #000;
}
html.light #treeul a+a {
background: inherit;
@@ -1133,17 +1008,6 @@ html.light #files td {
html.light #files tbody tr:last-child td {
border-bottom: .2em solid #ccc;
}
html.light #files tr:focus td {
background: #fff;
border-color: #c37;
box-shadow: 0 .2em 0 #e80 , 0 -.2em 0 #e80;
}
html.light #files tr:focus td:first-child {
box-shadow: -.2em .2em 0 #e80, -.2em -.2em 0 #e80;
}
html.light #files tr.sel td {
background: #925;
}
html.light #files td:nth-child(2n) {
color: #d38;
}
@@ -1197,8 +1061,7 @@ html.light #wnp {
html.light #barbuf {
background: none;
}
html.light #files tr.sel:hover td,
html.light #files tr.sel:focus td {
html.light #files tr.sel:hover td {
background: #c37;
}
html.light #files tr.sel td {
@@ -1265,31 +1128,7 @@ html.light #tree::-webkit-scrollbar {
/* bbox */
#bbox-overlay {
#baguetteBox-overlay {
display: none;
opacity: 0;
position: fixed;
@@ -1299,66 +1138,58 @@ html.light #tree::-webkit-scrollbar {
left: 0;
width: 100%;
height: 100%;
z-index: 10;
z-index: 1000000;
background: rgba(0, 0, 0, 0.8);
transition: opacity .3s ease;
}
#bbox-overlay.visible {
#baguetteBox-overlay.visible {
opacity: 1;
}
.full-image {
#baguetteBox-overlay .full-image {
display: inline-block;
position: relative;
width: 100%;
height: 100%;
text-align: center;
}
.full-image figure {
#baguetteBox-overlay .full-image figure {
display: inline;
margin: 0;
height: 100%;
}
.full-image img,
.full-image video {
#baguetteBox-overlay .full-image img {
display: inline-block;
width: auto;
height: auto;
max-width: 100%;
max-height: 100%;
max-height: calc(100% - 1.4em);
margin-bottom: 1.4em;
max-width: 100%;
vertical-align: middle;
box-shadow: 0 0 8px rgba(0, 0, 0, 0.6);
}
.full-image video {
background: #333;
}
.full-image figcaption {
#baguetteBox-overlay .full-image figcaption {
display: block;
position: fixed;
bottom: .1em;
position: absolute;
bottom: 0;
width: 100%;
text-align: center;
line-height: 1.8;
white-space: normal;
color: #ccc;
}
#bbox-overlay figcaption a {
#baguetteBox-overlay figcaption a {
background: rgba(0, 0, 0, 0.6);
border-radius: .4em;
padding: .3em .6em;
}
html.light #bbox-overlay figcaption a {
color: #0bf;
}
.full-image:before {
#baguetteBox-overlay .full-image:before {
content: "";
display: inline-block;
height: 50%;
width: 1px;
margin-right: -1px;
}
#bbox-slider {
position: fixed;
#baguetteBox-slider {
position: absolute;
left: 0;
top: 0;
height: 100%;
@@ -1366,10 +1197,10 @@ html.light #bbox-overlay figcaption a {
white-space: nowrap;
transition: left .2s ease, transform .2s ease;
}
.bounce-from-right {
#baguetteBox-slider.bounce-from-right {
animation: bounceFromRight .4s ease-out;
}
.bounce-from-left {
#baguetteBox-slider.bounce-from-left {
animation: bounceFromLeft .4s ease-out;
}
@keyframes bounceFromRight {
@@ -1382,63 +1213,48 @@ html.light #bbox-overlay figcaption a {
50% {margin-left: 30px}
100% {margin-left: 0}
}
#bbox-next,
#bbox-prev {
.baguetteBox-button#next-button,
.baguetteBox-button#previous-button {
top: 50%;
top: calc(50% - 30px);
width: 44px;
height: 60px;
}
.bbox-btn {
position: fixed;
}
#bbox-overlay button {
.baguetteBox-button {
position: absolute;
cursor: pointer;
outline: none;
padding: 0 .3em;
margin: 0 .4em;
padding: 0;
margin: 0;
border: 0;
border-radius: 15%;
background: rgba(50, 50, 50, 0.5);
color: rgba(255,255,255,0.7);
color: #ddd;
font: 1.6em sans-serif;
transition: background-color .3s ease;
transition: color .3s ease;
font-size: 1.4em;
line-height: 1.4em;
vertical-align: top;
}
#bbox-overlay button:focus,
#bbox-overlay button:hover {
color: rgba(255,255,255,0.9);
.baguetteBox-button:focus,
.baguetteBox-button:hover {
background: rgba(50, 50, 50, 0.9);
}
#bbox-next {
right: 1%;
}
#bbox-prev {
left: 1%;
}
#bbox-btns {
top: .5em;
#next-button {
right: 2%;
position: fixed;
}
#bbox-halp {
color: #fff;
background: #333;
#previous-button {
left: 2%;
}
#close-button {
top: 20px;
right: 2%;
width: 30px;
height: 30px;
}
.baguetteBox-button svg {
position: absolute;
top: 0;
left: 0;
z-index: 20;
padding: .4em;
top: 0;
}
#bbox-halp td {
padding: .2em .5em;
}
#bbox-halp td:first-child {
text-align: right;
}
.bbox-spinner {
.baguetteBox-spinner {
width: 40px;
height: 40px;
display: inline-block;
@@ -1448,8 +1264,8 @@ html.light #bbox-overlay figcaption a {
margin-top: -20px;
margin-left: -20px;
}
.bbox-double-bounce1,
.bbox-double-bounce2 {
.baguetteBox-double-bounce1,
.baguetteBox-double-bounce2 {
width: 100%;
height: 100%;
border-radius: 50%;
@@ -1460,338 +1276,10 @@ html.light #bbox-overlay figcaption a {
left: 0;
animation: bounce 2s infinite ease-in-out;
}
.bbox-double-bounce2 {
.baguetteBox-double-bounce2 {
animation-delay: -1s;
}
@keyframes bounce {
0%, 100% {transform: scale(0)}
50% {transform: scale(1)}
}
/* upload.css */
#op_up2k {
padding: 0 1em 1em 1em;
}
#u2form {
position: absolute;
top: 0;
left: 0;
width: 2px;
height: 2px;
overflow: hidden;
}
#u2form input {
background: #444;
border: 0px solid #444;
outline: none;
}
#u2err.err {
color: #f87;
padding: .5em;
}
#u2err.msg {
color: #999;
padding: .5em;
font-size: .9em;
}
#u2btn {
color: #eee;
background: #555;
background: -moz-linear-gradient(top, #367 0%, #489 50%, #38788a 51%, #367 100%);
background: -webkit-linear-gradient(top, #367 0%, #489 50%, #38788a 51%, #367 100%);
background: linear-gradient(to bottom, #367 0%, #489 50%, #38788a 51%, #367 100%);
filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#489', endColorstr='#38788a', GradientType=0);
text-decoration: none;
line-height: 1.3em;
border: 1px solid #222;
border-radius: .4em;
text-align: center;
font-size: 1.5em;
margin: .5em auto;
padding: .8em 0;
width: 16em;
cursor: pointer;
box-shadow: .4em .4em 0 #111;
}
#op_up2k.srch #u2btn {
background: linear-gradient(to bottom, #ca3 0%, #fd8 50%, #fc6 51%, #b92 100%);
text-shadow: 1px 1px 1px #fc6;
color: #333;
}
#u2conf #u2btn {
margin: -1.5em 0;
padding: .8em 0;
width: 100%;
max-width: 12em;
display: inline-block;
}
#u2conf #u2btn_cw {
text-align: right;
}
#u2notbtn {
display: none;
text-align: center;
background: #333;
padding-top: 1em;
}
#u2notbtn * {
line-height: 1.3em;
}
#u2tab {
margin: 3em auto;
width: calc(100% - 2em);
max-width: 100em;
}
#op_up2k.srch #u2tab {
max-width: none;
}
#u2tab td {
border: 1px solid #ccc;
border-width: 0 0px 1px 0;
padding: .1em .3em;
}
#u2tab td:nth-child(2) {
width: 5em;
white-space: nowrap;
}
#u2tab td:nth-child(3) {
width: 40%;
}
#op_up2k.srch td.prog {
font-family: sans-serif;
font-size: 1em;
width: auto;
}
#u2tab tbody tr:hover td {
background: #222;
}
#u2cards {
padding: 1em 0 .3em 1em;
margin: 1.5em auto -2.5em auto;
white-space: nowrap;
text-align: center;
overflow: hidden;
}
#u2cards.w {
width: 45em;
text-align: left;
}
#u2cards a {
padding: .2em 1em;
border: 1px solid #777;
border-width: 0 0 1px 0;
background: linear-gradient(to bottom, #333, #222);
}
#u2cards a:first-child {
border-radius: .4em 0 0 0;
}
#u2cards a:last-child {
border-radius: 0 .4em 0 0;
}
#u2cards a.act {
padding-bottom: .5em;
border-width: 1px 1px .1em 1px;
border-radius: .3em .3em 0 0;
margin-left: -1px;
background: linear-gradient(to bottom, #464, #333 80%);
box-shadow: 0 -.17em .67em #280;
border-color: #7c5 #583 #333 #583;
position: relative;
color: #fd7;
}
#u2cards span {
color: #fff;
}
#u2conf {
margin: 1em auto;
width: 30em;
}
#u2conf.has_btn {
width: 48em;
}
#u2conf * {
text-align: center;
line-height: 1em;
margin: 0;
padding: 0;
border: none;
outline: none;
}
#u2conf .txtbox {
width: 3em;
color: #fff;
background: #444;
border: 1px solid #777;
font-size: 1.2em;
padding: .15em 0;
height: 1.05em;
}
#u2conf .txtbox.err {
background: #922;
}
#u2conf a {
color: #fff;
background: #c38;
text-decoration: none;
border-radius: .1em;
font-size: 1.5em;
padding: .1em 0;
margin: 0 -1px;
width: 1.5em;
height: 1em;
display: inline-block;
position: relative;
bottom: -0.08em;
}
#u2conf input+a {
background: #d80;
}
#u2conf label {
font-size: 1.6em;
width: 2em;
height: 1em;
padding: .4em 0;
display: block;
border-radius: .25em;
}
#u2conf input[type="checkbox"] {
position: relative;
opacity: .02;
top: 2em;
}
#u2conf input[type="checkbox"]+label {
position: relative;
background: #603;
border-bottom: .2em solid #a16;
box-shadow: 0 .1em .3em #a00 inset;
}
#u2conf input[type="checkbox"]:checked+label {
background: #6a1;
border-bottom: .2em solid #efa;
box-shadow: 0 .1em .5em #0c0;
}
#u2conf input[type="checkbox"]+label:hover {
box-shadow: 0 .1em .3em #fb0;
border-color: #fb0;
}
#op_up2k.srch #u2conf td:nth-child(1)>*,
#op_up2k.srch #u2conf td:nth-child(2)>*,
#op_up2k.srch #u2conf td:nth-child(3)>* {
background: #777;
border-color: #ccc;
box-shadow: none;
opacity: .2;
}
#u2foot {
color: #fff;
font-style: italic;
}
#u2foot .warn {
font-size: 1.3em;
padding: .5em .8em;
margin: 1em -.6em;
color: #f74;
background: #322;
border: 1px solid #633;
border-width: .1em 0;
text-align: center;
}
#u2foot .warn span {
color: #f86;
}
html.light #u2foot .warn {
color: #b00;
background: #fca;
border-color: #f70;
}
html.light #u2foot .warn span {
color: #930;
}
#u2foot span {
color: #999;
font-size: .9em;
font-weight: normal;
}
#u2footfoot {
margin-bottom: -1em;
}
.prog {
font-family: monospace, monospace;
}
#u2tab a>span {
font-weight: bold;
font-style: italic;
color: #fff;
padding-left: .2em;
}
#u2cleanup {
float: right;
margin-bottom: -.3em;
}
.fsearch_explain {
padding-left: .7em;
font-size: 1.1em;
line-height: 0;
}
html.light #u2btn {
box-shadow: .4em .4em 0 #ccc;
}
html.light #u2cards span {
color: #000;
}
html.light #u2cards a {
background: linear-gradient(to bottom, #eee, #fff);
}
html.light #u2cards a.act {
color: #037;
background: inherit;
box-shadow: 0 -.17em .67em #0ad;
border-color: #09c #05a #eee #05a;
}
html.light #u2conf .txtbox {
background: #fff;
color: #444;
}
html.light #u2conf .txtbox.err {
background: #f96;
color: #300;
}
html.light #op_up2k.srch #u2btn {
border-color: #a80;
}
html.light #u2foot {
color: #000;
}
html.light #u2tab tbody tr:hover td {
background: #fff;
}

View File

@@ -6,10 +6,10 @@
<title>⇆🎉 {{ title }}</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=0.8">
<link rel="stylesheet" media="screen" href="/.cpr/ui.css?_={{ ts }}">
<link rel="stylesheet" media="screen" href="/.cpr/browser.css?_={{ ts }}">
<link rel="stylesheet" type="text/css" media="screen" href="/.cpr/browser.css?_={{ ts }}">
<link rel="stylesheet" type="text/css" media="screen" href="/.cpr/upload.css?_={{ ts }}">
{%- if css %}
<link rel="stylesheet" media="screen" href="{{ css }}?_={{ ts }}">
<link rel="stylesheet" type="text/css" media="screen" href="{{ css }}?_={{ ts }}">
{%- endif %}
</head>
@@ -39,34 +39,32 @@
<div id="op_mkdir" class="opview opbox act">
<form method="post" enctype="multipart/form-data" accept-charset="utf-8" action="{{ url_suf }}">
<input type="hidden" name="act" value="mkdir" />
📂<input type="text" name="name" size="30">
<input type="submit" value="make directory">
<input type="text" name="name" size="30">
<input type="submit" value="mkdir">
</form>
</div>
<div id="op_new_md" class="opview opbox">
<form method="post" enctype="multipart/form-data" accept-charset="utf-8" action="{{ url_suf }}">
<input type="hidden" name="act" value="new_md" />
📝<input type="text" name="name" size="30">
<input type="submit" value="new markdown doc">
<input type="text" name="name" size="30">
<input type="submit" value="create doc">
</form>
</div>
<div id="op_msg" class="opview opbox act">
<form method="post" enctype="application/x-www-form-urlencoded" accept-charset="utf-8" action="{{ url_suf }}">
📟<input type="text" name="msg" size="30">
<input type="submit" value="send msg to server log">
<input type="text" name="msg" size="30">
<input type="submit" value="send msg">
</form>
</div>
<div id="op_unpost" class="opview opbox"></div>
<div id="op_up2k" class="opview"></div>
<div id="op_cfg" class="opview opbox opwide"></div>
<h1 id="path">
<a href="#" id="entree" tt="show navpane (directory tree sidebar)$NHotkey: B">🌲</a>
<a href="#" id="entree" tt="show directory tree$NHotkey: B">🌲</a>
{%- for n in vpnodes %}
<a href="/{{ n[0] }}">{{ n[1] }}</a>
{%- endfor %}
@@ -123,14 +121,10 @@
<div id="widget"></div>
<script>
var acct = "{{ acct }}",
perms = {{ perms }},
def_hcols = {{ def_hcols|tojson }},
var perms = {{ perms }},
tag_order_cfg = {{ tag_order }},
have_up2k_idx = {{ have_up2k_idx|tojson }},
have_tags_idx = {{ have_tags_idx|tojson }},
have_mv = {{ have_mv|tojson }},
have_del = {{ have_del|tojson }},
have_unpost = {{ have_unpost|tojson }},
have_zip = {{ have_zip|tojson }};
</script>
<script src="/.cpr/util.js?_={{ ts }}"></script>

File diff suppressed because it is too large

View File

@@ -15,10 +15,6 @@ html, body {
margin: 0 auto;
padding: 0 1.5em;
}
#toast {
bottom: auto;
top: 1.4em;
}
pre, code, a {
color: #480;
background: #f7f7f7;
@@ -30,7 +26,7 @@ pre, code, a {
code {
font-size: .96em;
}
pre, code, tt {
pre, code {
font-family: 'scp', monospace, monospace;
white-space: pre-wrap;
word-break: break-all;
@@ -170,7 +166,7 @@ small {
z-index: 99;
position: relative;
display: inline-block;
font-family: 'scp', monospace, monospace;
font-family: monospace, monospace;
font-weight: bold;
font-size: 1.3em;
line-height: .1em;

View File

@@ -1,12 +1,11 @@
<!DOCTYPE html><html><head>
<meta charset="utf-8">
<title>📝🎉 {{ title }}</title>
<title>📝🎉 {{ title }}</title> <!-- 📜 -->
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=0.7">
<link rel="stylesheet" href="/.cpr/ui.css?_={{ ts }}">
<link rel="stylesheet" href="/.cpr/md.css?_={{ ts }}">
<link href="/.cpr/md.css?_={{ ts }}" rel="stylesheet">
{%- if edit %}
<link rel="stylesheet" href="/.cpr/md2.css?_={{ ts }}">
<link href="/.cpr/md2.css?_={{ ts }}" rel="stylesheet">
{%- endif %}
</head>
<body>
@@ -15,9 +14,9 @@
<a id="lightswitch" href="#">go dark</a>
<a id="navtoggle" href="#">hide nav</a>
{%- if edit %}
<a id="save" href="?edit" tt="Hotkey: ctrl-s">save</a>
<a id="sbs" href="#" tt="editor and preview side by side">sbs</a>
<a id="nsbs" href="#" tt="switch between editor and preview$NHotkey: ctrl-e">editor</a>
<a id="save" href="?edit">save</a>
<a id="sbs" href="#">sbs</a>
<a id="nsbs" href="#">editor</a>
<div id="toolsbox">
<a id="tools" href="#">tools</a>
<a id="fmt_table" href="#">prettify table (ctrl-k)</a>
@@ -27,8 +26,8 @@
<a id="help" href="#">help</a>
</div>
{%- else %}
<a href="?edit" tt="good: higher performance$Ngood: same document width as viewer$Nbad: assumes you know markdown">edit (basic)</a>
<a href="?edit2" tt="not in-house so probably less buggy">edit (fancy)</a>
<a href="?edit">edit (basic)</a>
<a href="?edit2">edit (fancy)</a>
<a href="?raw">view raw</a>
{%- endif %}
</div>
@@ -132,18 +131,18 @@ var md_opt = {
};
(function () {
var l = localStorage,
drk = l.getItem('lightmode') != 1,
btn = document.getElementById("lightswitch"),
f = function (e) {
if (e) { e.preventDefault(); drk = !drk; }
document.documentElement.setAttribute("class", drk? "dark":"light");
btn.innerHTML = "go " + (drk ? "light":"dark");
l.setItem('lightmode', drk? 0:1);
};
btn.onclick = f;
f();
var btn = document.getElementById("lightswitch");
var toggle = function (e) {
if (e) e.preventDefault();
var dark = !document.documentElement.getAttribute("class");
document.documentElement.setAttribute("class", dark ? "dark" : "");
btn.innerHTML = "go " + (dark ? "light" : "dark");
if (window.localStorage)
localStorage.setItem('lightmode', dark ? 0 : 1);
};
btn.onclick = toggle;
if (window.localStorage && localStorage.getItem('lightmode') != 1)
toggle();
})();
</script>

View File

@@ -176,7 +176,7 @@ function md_plug_err(ex, js) {
var lns = js.split('\n');
if (ln < lns.length) {
o = mknod('span');
o.style.cssText = "color:#ac2;font-size:.9em;font-family:'scp',monospace,monospace;display:block";
o.style.cssText = 'color:#ac2;font-size:.9em;font-family:scp;display:block';
o.textContent = lns[ln - 1];
}
}
@@ -185,7 +185,7 @@ function md_plug_err(ex, js) {
errbox.style.cssText = 'position:absolute;top:0;left:0;padding:1em .5em;background:#2b2b2b;color:#fc5'
errbox.textContent = msg;
errbox.onclick = function () {
modal.alert('<pre>' + ex.stack + '</pre>');
alert('' + ex.stack);
};
if (o) {
errbox.appendChild(o);
@@ -530,6 +530,3 @@ dom_navtgl.onclick = function () {
if (sread('hidenav') == 1)
dom_navtgl.onclick();
if (window['tt'])
tt.init();

View File

@@ -84,10 +84,13 @@ html.dark #save.force-save {
#save.disabled {
opacity: .4;
}
#helpbox {
#helpbox,
#toast {
background: #f7f7f7;
border-radius: .4em;
z-index: 9001;
}
#helpbox {
display: none;
position: fixed;
padding: 2em;
@@ -104,7 +107,19 @@ html.dark #save.force-save {
}
html.dark #helpbox {
box-shadow: 0 .5em 2em #444;
}
html.dark #helpbox,
html.dark #toast {
background: #222;
border: 1px solid #079;
border-width: 1px 0;
}
#toast {
font-weight: bold;
text-align: center;
padding: .6em 0;
position: fixed;
top: 30%;
transition: opacity 0.2s ease-in-out;
opacity: 1;
}

View File

@@ -236,7 +236,7 @@ function Modpoll() {
var skip = null;
if (toast.visible)
if (ebi('toast'))
skip = 'toast';
else if (this.skip_one)
@@ -285,15 +285,16 @@ function Modpoll() {
console.log("modpoll diff |" + server_ref.length + "|, |" + server_now.length + "|");
this.modpoll.disabled = true;
var msg = [
"The document has changed on the server.",
"The document has changed on the server.<br />" +
"The changes will NOT be loaded into your editor automatically.",
"",
"Press F5 or CTRL-R to refresh the page,",
"Press F5 or CTRL-R to refresh the page,<br />" +
"replacing your document with the server copy.",
"",
"You can close this message to ignore and contnue."
"You can click this message to ignore and contnue."
];
return toast.warn(0, msg.join('\n'));
return toast(false, "box-shadow:0 1em 2em rgba(64,64,64,0.8);font-weight:normal",
36, "<p>" + msg.join('</p>\n<p>') + '</p>');
}
console.log('modpoll eq');
@@ -322,51 +323,52 @@ function save(e) {
var save_btn = ebi("save"),
save_cls = save_btn.getAttribute('class') + '';
if (save_cls.indexOf('disabled') >= 0)
return toast.inf(2, "no changes");
var force = (save_cls.indexOf('force-save') >= 0);
function save2() {
var txt = dom_src.value,
fd = new FormData();
fd.append("act", "tput");
fd.append("lastmod", (force ? -1 : last_modified));
fd.append("body", txt);
var url = (document.location + '').split('?')[0];
var xhr = new XMLHttpRequest();
xhr.open('POST', url, true);
xhr.responseType = 'text';
xhr.onreadystatechange = save_cb;
xhr.btn = save_btn;
xhr.txt = txt;
modpoll.skip_one = true; // skip one iteration while we save
xhr.send(fd);
if (save_cls.indexOf('disabled') >= 0) {
toast(true, ";font-size:2em;color:#c90", 9, "no changes");
return;
}
if (!force)
save2();
else
modal.confirm('confirm that you wish to lose the changes made on the server since you opened this document', save2, function () {
toast.inf(3, 'aborted');
});
var force = (save_cls.indexOf('force-save') >= 0);
if (force && !confirm('confirm that you wish to lose the changes made on the server since you opened this document')) {
alert('ok, aborted');
return;
}
var txt = dom_src.value;
var fd = new FormData();
fd.append("act", "tput");
fd.append("lastmod", (force ? -1 : last_modified));
fd.append("body", txt);
var url = (document.location + '').split('?')[0];
var xhr = new XMLHttpRequest();
xhr.open('POST', url, true);
xhr.responseType = 'text';
xhr.onreadystatechange = save_cb;
xhr.btn = save_btn;
xhr.txt = txt;
modpoll.skip_one = true; // skip one iteration while we save
xhr.send(fd);
}
function save_cb() {
if (this.readyState != XMLHttpRequest.DONE)
return;
if (this.status !== 200)
return toast.err(0, 'Error! The file was NOT saved.\n\n' + this.status + ": " + (this.responseText + '').replace(/^<pre>/, ""));
if (this.status !== 200) {
alert('Error! The file was NOT saved.\n\n' + this.status + ": " + (this.responseText + '').replace(/^<pre>/, ""));
return;
}
var r;
try {
r = JSON.parse(this.responseText);
}
catch (ex) {
return toast.err(0, 'Failed to parse reply from server:\n\n' + this.responseText);
alert('Failed to parse reply from server:\n\n' + this.responseText);
return;
}
if (!r.ok) {
@@ -381,10 +383,12 @@ function save_cb() {
r.lastmod + ' lastmod on the server now,',
r.now + ' server time now,\n',
];
return toast.err(0, msg.join('\n'));
alert(msg.join('\n'));
}
else
return toast.err(0, 'Error! Save failed. Maybe this JSON explains why:\n\n' + this.responseText);
else {
alert('Error! Save failed. Maybe this JSON explains why:\n\n' + this.responseText);
}
return;
}
this.btn.classList.remove('force-save');
@@ -411,8 +415,10 @@ function savechk_cb() {
if (this.readyState != XMLHttpRequest.DONE)
return;
if (this.status !== 200)
return toast.err(0, 'Error! The file was NOT saved.\n\n' + this.status + ": " + (this.responseText + '').replace(/^<pre>/, ""));
if (this.status !== 200) {
alert('Error! The file was NOT saved.\n\n' + this.status + ": " + (this.responseText + '').replace(/^<pre>/, ""));
return;
}
var doc1 = this.txt.replace(/\r\n/g, "\n");
var doc2 = this.responseText.replace(/\r\n/g, "\n");
@@ -425,22 +431,58 @@ function savechk_cb() {
}, 100);
return;
}
modal.alert(
alert(
'Error! The document on the server does not appear to have saved correctly (your editor contents and the server copy is not identical). Place the document on your clipboard for now and check the server logs for hints\n\n' +
'Length: yours=' + doc1.length + ', server=' + doc2.length
);
modal.alert('yours, ' + doc1.length + ' byte:\n[' + doc1 + ']');
modal.alert('server, ' + doc2.length + ' byte:\n[' + doc2 + ']');
alert('yours, ' + doc1.length + ' byte:\n[' + doc1 + ']');
alert('server, ' + doc2.length + ' byte:\n[' + doc2 + ']');
return;
}
last_modified = this.lastmod;
server_md = this.txt;
draw_md();
toast.ok(2, 'save OK' + (this.ntry ? '\nattempt ' + this.ntry : ''));
toast(true, ";font-size:6em;font-family:serif;color:#9b4", 4,
'OK✔<span style="font-size:.2em;color:#999;position:absolute">' + this.ntry + '</span>');
modpoll.disabled = false;
}
function toast(autoclose, style, width, msg) {
var ok = ebi("toast");
if (ok)
ok.parentNode.removeChild(ok);
style = "width:" + width + "em;left:calc(50% - " + (width / 2) + "em);" + style;
ok = mknod('div');
ok.setAttribute('id', 'toast');
ok.setAttribute('style', style);
ok.innerHTML = msg;
var parent = ebi('m');
document.documentElement.appendChild(ok);
var hide = function (delay) {
delay = delay || 0;
setTimeout(function () {
ok.style.opacity = 0;
}, delay);
setTimeout(function () {
if (ok.parentNode)
ok.parentNode.removeChild(ok);
}, delay + 250);
}
ok.onclick = function () {
hide(0);
};
if (autoclose)
hide(500);
}
// firefox bug: initial selection offset isn't cleared properly through js
var ff_clearsel = (function () {
@@ -719,7 +761,7 @@ function fmt_table(e) {
var ind2 = tab[a].match(re_ind)[0];
if (ind != ind2 && a != 1) // the table can be a list entry or something, ignore [0]
return toast.err(7, err + 'indentation mismatch on row#2 and ' + row_name + ',\n' + tab[a]);
return alert(err + 'indentation mismatch on row#2 and ' + row_name + ',\n' + tab[a]);
var t = tab[a].slice(ind.length);
t = t.replace(re_lpipe, "");
@@ -729,7 +771,7 @@ function fmt_table(e) {
if (a == 0)
ncols = tab[a].length;
else if (ncols < tab[a].length)
return toast.err(7, err + 'num.columns(' + row_name + ') exceeding row#2; ' + ncols + ' < ' + tab[a].length);
return alert(err + 'num.columns(' + row_name + ') exceeding row#2; ' + ncols + ' < ' + tab[a].length);
// if row has less columns than row2, fill them in
while (tab[a].length < ncols)
@@ -746,7 +788,7 @@ function fmt_table(e) {
for (var col = 0; col < tab[1].length; col++) {
var m = tab[1][col].match(re_align);
if (!m)
return toast.err(7, err + 'invalid column specification, row#2, col ' + (col + 1) + ', [' + tab[1][col] + ']');
return alert(err + 'invalid column specification, row#2, col ' + (col + 1) + ', [' + tab[1][col] + ']');
if (m[2]) {
if (m[1])
@@ -834,9 +876,10 @@ function mark_uni(e) {
ptn = new RegExp('([^' + js_uni_whitelist + ']+)', 'g'),
mod = txt.replace(/\r/g, "").replace(ptn, "\u2588\u2770$1\u2771");
if (txt == mod)
return toast.inf(5, 'no results; no modifications were made');
if (txt == mod) {
alert('no results; no modifications were made');
return;
}
dom_src.value = mod;
}
@@ -850,9 +893,10 @@ function iter_uni(e) {
re = new RegExp('([^' + js_uni_whitelist + ']+)'),
m = re.exec(txt.slice(ofs));
if (!m)
return toast.inf(5, 'no more hits from cursor onwards');
if (!m) {
alert('no more hits from cursor onwards');
return;
}
ofs += m.index;
dom_src.setSelectionRange(ofs, ofs + m[0].length, "forward");
@@ -867,10 +911,12 @@ function iter_uni(e) {
function cfg_uni(e) {
if (e) e.preventDefault();
modal.prompt("unicode whitelist", esc_uni_whitelist, function (reply) {
esc_uni_whitelist = reply;
js_uni_whitelist = eval('\'' + esc_uni_whitelist + '\'');
}, null);
var reply = prompt("unicode whitelist", esc_uni_whitelist);
if (reply === null)
return;
esc_uni_whitelist = reply;
js_uni_whitelist = eval('\'' + esc_uni_whitelist + '\'');
}
@@ -878,9 +924,10 @@ function cfg_uni(e) {
(function () {
function keydown(ev) {
ev = ev || window.event;
var kc = ev.code || ev.keyCode || ev.which;
//console.log(ev.key, ev.code, ev.keyCode, ev.which);
if (ctrl(ev) && (ev.code == "KeyS" || kc == 83)) {
var kc = ev.keyCode || ev.which;
var ctrl = ev.ctrlKey || ev.metaKey;
//console.log(ev.code, kc);
if (ctrl && (ev.code == "KeyS" || kc == 83)) {
save();
return false;
}
@@ -889,15 +936,23 @@ function cfg_uni(e) {
if (d)
d.click();
}
if (document.activeElement != dom_src)
return true;
if (ctrl(ev)) {
if (ev.code == "KeyH" || kc == 72) {
if (document.activeElement == dom_src) {
if (ev.code == "Tab" || kc == 9) {
md_indent(ev.shiftKey);
return false;
}
if (ctrl && (ev.code == "KeyH" || kc == 72)) {
md_header(ev.shiftKey);
return false;
}
if (ev.code == "KeyZ" || kc == 90) {
if (!ctrl && (ev.code == "Home" || kc == 36)) {
md_home(ev.shiftKey);
return false;
}
if (!ctrl && !ev.shiftKey && (ev.code == "Enter" || kc == 13)) {
return md_newline();
}
if (ctrl && (ev.code == "KeyZ" || kc == 90)) {
if (ev.shiftKey)
action_stack.redo();
else
@@ -905,45 +960,33 @@ function cfg_uni(e) {
return false;
}
if (ev.code == "KeyY" || kc == 89) {
if (ctrl && (ev.code == "KeyY" || kc == 89)) {
action_stack.redo();
return false;
}
if (ev.code == "KeyK") {
if (!ctrl && !ev.shiftKey && kc == 8) {
return md_backspace();
}
if (ctrl && (ev.code == "KeyK")) {
fmt_table();
return false;
}
if (ev.code == "KeyU") {
if (ctrl && (ev.code == "KeyU")) {
iter_uni();
return false;
}
if (ev.code == "KeyE") {
if (ctrl && (ev.code == "KeyE")) {
dom_nsbs.click();
//fmt_table();
return false;
}
var up = ev.code == "ArrowUp" || kc == 38;
var dn = ev.code == "ArrowDown" || kc == 40;
if (up || dn) {
if (ctrl && (up || dn)) {
md_p_jump(dn);
return false;
}
}
else {
if (ev.code == "Tab" || kc == 9) {
md_indent(ev.shiftKey);
return false;
}
if (ev.code == "Home" || kc == 36) {
md_home(ev.shiftKey);
return false;
}
if (!ev.shiftKey && (ev.code == "Enter" || kc == 13)) {
return md_newline();
}
if (!ev.shiftKey && kc == 8) {
return md_backspace();
}
}
}
document.onkeydown = keydown;
ebi('save').onclick = save;

View File

@@ -18,10 +18,6 @@ html, body {
background: #f7f7f7;
color: #333;
}
#toast {
bottom: auto;
top: 1.4em;
}
#mn {
font-weight: normal;
margin: 1.3em 0 .7em 1em;

View File

@@ -3,10 +3,9 @@
<title>📝🎉 {{ title }}</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=0.7">
<link rel="stylesheet" href="/.cpr/ui.css?_={{ ts }}">
<link rel="stylesheet" href="/.cpr/mde.css?_={{ ts }}">
<link rel="stylesheet" href="/.cpr/deps/mini-fa.css?_={{ ts }}">
<link rel="stylesheet" href="/.cpr/deps/easymde.css?_={{ ts }}">
<link href="/.cpr/mde.css?_={{ ts }}" rel="stylesheet">
<link href="/.cpr/deps/mini-fa.css?_={{ ts }}" rel="stylesheet">
<link href="/.cpr/deps/easymde.css?_={{ ts }}" rel="stylesheet">
</head>
<body>
<div id="mw">
@@ -31,15 +30,16 @@ var md_opt = {
};
var lightswitch = (function () {
var l = localStorage,
drk = l.getItem('lightmode') != 1,
f = function (e) {
if (e) drk = !drk;
document.documentElement.setAttribute("class", drk? "dark":"light");
l.setItem('lightmode', drk? 0:1);
};
f();
return f;
var fun = function () {
var dark = !document.documentElement.getAttribute("class");
document.documentElement.setAttribute("class", dark ? "dark" : "");
if (window.localStorage)
localStorage.setItem('lightmode', dark ? 0 : 1);
};
if (window.localStorage && localStorage.getItem('lightmode') != 1)
fun();
return fun;
})();
</script>

View File

@@ -75,7 +75,7 @@ function set_jumpto() {
}
function jumpto(ev) {
var tgt = ev.target;
var tgt = ev.target || ev.srcElement;
var ln = null;
while (tgt && !ln) {
ln = tgt.getAttribute('data-ln');
@@ -106,50 +106,50 @@ function md_changed(mde, on_srv) {
function save(mde) {
var save_btn = QS('.editor-toolbar button.save');
if (save_btn.classList.contains('disabled'))
return toast.inf(2, 'no changes');
if (save_btn.classList.contains('disabled')) {
alert('there is nothing to save');
return;
}
var force = save_btn.classList.contains('force-save');
function save2() {
var txt = mde.value();
var fd = new FormData();
fd.append("act", "tput");
fd.append("lastmod", (force ? -1 : last_modified));
fd.append("body", txt);
var url = (document.location + '').split('?')[0];
var xhr = new XMLHttpRequest();
xhr.open('POST', url, true);
xhr.responseType = 'text';
xhr.onreadystatechange = save_cb;
xhr.btn = save_btn;
xhr.mde = mde;
xhr.txt = txt;
xhr.send(fd);
if (force && !confirm('confirm that you wish to lose the changes made on the server since you opened this document')) {
alert('ok, aborted');
return;
}
if (!force)
save2();
else
modal.confirm('confirm that you wish to lose the changes made on the server since you opened this document', save2, function () {
toast.inf(3, 'aborted');
});
var txt = mde.value();
var fd = new FormData();
fd.append("act", "tput");
fd.append("lastmod", (force ? -1 : last_modified));
fd.append("body", txt);
var url = (document.location + '').split('?')[0];
var xhr = new XMLHttpRequest();
xhr.open('POST', url, true);
xhr.responseType = 'text';
xhr.onreadystatechange = save_cb;
xhr.btn = save_btn;
xhr.mde = mde;
xhr.txt = txt;
xhr.send(fd);
}
function save_cb() {
if (this.readyState != XMLHttpRequest.DONE)
return;
if (this.status !== 200)
return toast.err(0, 'Error! The file was NOT saved.\n\n' + this.status + ": " + (this.responseText + '').replace(/^<pre>/, ""));
if (this.status !== 200) {
alert('Error! The file was NOT saved.\n\n' + this.status + ": " + (this.responseText + '').replace(/^<pre>/, ""));
return;
}
var r;
try {
r = JSON.parse(this.responseText);
}
catch (ex) {
return toast.err(0, 'Failed to parse reply from server:\n\n' + this.responseText);
alert('Failed to parse reply from server:\n\n' + this.responseText);
return;
}
if (!r.ok) {
@@ -164,10 +164,12 @@ function save_cb() {
r.lastmod + ' lastmod on the server now,',
r.now + ' server time now,\n',
];
return toast.err(0, msg.join('\n'));
alert(msg.join('\n'));
}
else
return toast.err(0, 'Error! Save failed. Maybe this JSON explains why:\n\n' + this.responseText);
else {
alert('Error! Save failed. Maybe this JSON explains why:\n\n' + this.responseText);
}
return;
}
this.btn.classList.remove('force-save');
@@ -190,23 +192,35 @@ function save_chk() {
if (this.readyState != XMLHttpRequest.DONE)
return;
if (this.status !== 200)
return toast.err(0, 'Error! The file was NOT saved.\n\n' + this.status + ": " + (this.responseText + '').replace(/^<pre>/, ""));
if (this.status !== 200) {
alert('Error! The file was NOT saved.\n\n' + this.status + ": " + (this.responseText + '').replace(/^<pre>/, ""));
return;
}
var doc1 = this.txt.replace(/\r\n/g, "\n");
var doc2 = this.responseText.replace(/\r\n/g, "\n");
if (doc1 != doc2) {
modal.alert(
alert(
'Error! The document on the server does not appear to have saved correctly (your editor contents and the server copy is not identical). Place the document on your clipboard for now and check the server logs for hints\n\n' +
'Length: yours=' + doc1.length + ', server=' + doc2.length
);
modal.alert('yours, ' + doc1.length + ' byte:\n[' + doc1 + ']');
modal.alert('server, ' + doc2.length + ' byte:\n[' + doc2 + ']');
alert('yours, ' + doc1.length + ' byte:\n[' + doc1 + ']');
alert('server, ' + doc2.length + ' byte:\n[' + doc2 + ']');
return;
}
last_modified = this.lastmod;
md_changed(this.mde, true);
toast.ok(2, 'save OK' + (this.ntry ? '\nattempt ' + this.ntry : ''));
var ok = mknod('div');
ok.setAttribute('style', 'font-size:6em;font-family:serif;font-weight:bold;color:#cf6;background:#444;border-radius:.3em;padding:.6em 0;position:fixed;top:30%;left:calc(50% - 2em);width:4em;text-align:center;z-index:9001;transition:opacity 0.2s ease-in-out;opacity:1');
ok.innerHTML = 'OK✔';
var parent = ebi('m');
document.documentElement.appendChild(ok);
setTimeout(function () {
ok.style.opacity = 0;
}, 500);
setTimeout(function () {
ok.parentNode.removeChild(ok);
}, 750);
}

View File

@@ -16,6 +16,9 @@ html, body {
margin: 0;
padding: 0;
}
body {
padding-bottom: 5em;
}
#box {
padding: .5em 1em;
background: #2c2c2c;

View File

@@ -6,7 +6,7 @@
<title>copyparty</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=0.8">
<link rel="stylesheet" media="screen" href="/.cpr/msg.css?_={{ ts }}">
<link rel="stylesheet" type="text/css" media="screen" href="/.cpr/msg.css?_={{ ts }}">
</head>
<body>

View File

@@ -6,7 +6,7 @@
<title>copyparty</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=0.8">
<link rel="stylesheet" media="screen" href="/.cpr/splash.css?_={{ ts }}">
<link rel="stylesheet" type="text/css" media="screen" href="/.cpr/splash.css?_={{ ts }}">
</head>
<body>
@@ -35,7 +35,7 @@
</table>
</td></tr></table>
<div class="btns">
<a href="/?stack">dump stack</a>
<a href="{{ avol[0] }}?stack">dump stack</a>
</div>
{%- endif %}
@@ -68,7 +68,7 @@
</div>
<script>
if (localStorage.getItem('lightmode') != 1)
if (window.localStorage && localStorage.getItem('lightmode') != 1)
document.documentElement.setAttribute("class", "dark");
</script>

View File

@@ -1,199 +0,0 @@
#tt, #toast {
position: fixed;
max-width: 34em;
background: #222;
border: 0 solid #777;
box-shadow: 0 .2em .5em #222;
border-radius: .4em;
z-index: 9001;
}
#tt {
overflow: hidden;
margin-top: 1em;
padding: 0 1.3em;
height: 0;
opacity: .1;
transition: opacity 0.14s, height 0.14s, padding 0.14s;
}
#toast {
bottom: 5em;
right: -1em;
line-height: 1.5em;
padding: 1em 1.3em;
border-width: .4em 0;
transform: translateX(100%);
transition:
transform .4s cubic-bezier(.2, 1.2, .5, 1),
right .4s cubic-bezier(.2, 1.2, .5, 1);
text-shadow: 1px 1px 0 #000;
color: #fff;
}
#toastc {
display: inline-block;
position: absolute;
overflow: hidden;
left: 0;
width: 0;
opacity: 0;
padding: .3em 0;
margin: -.3em 0 0 0;
line-height: 1.5em;
color: #000;
border: none;
outline: none;
text-shadow: none;
border-radius: .5em 0 0 .5em;
transition: left .3s, width .3s, padding .3s, opacity .3s;
}
#toast pre {
margin: 0;
}
#toast.vis {
right: 1.3em;
transform: unset;
}
#toast.vis #toastc {
left: -2em;
width: .4em;
padding: .3em .8em;
opacity: 1;
}
#toast.inf {
background: #07a;
border-color: #0be;
}
#toast.inf #toastc {
background: #0be;
}
#toast.ok {
background: #380;
border-color: #8e4;
}
#toast.ok #toastc {
background: #8e4;
}
#toast.warn {
background: #970;
border-color: #fc0;
}
#toast.warn #toastc {
background: #fc0;
}
#toast.err {
background: #900;
border-color: #d06;
}
#toast.err #toastc {
background: #d06;
}
#tt.b {
padding: 0 2em;
border-radius: .5em;
box-shadow: 0 .2em 1em #000;
}
#tt.show {
padding: 1em 1.3em;
border-width: .4em 0;
height: auto;
opacity: 1;
}
#tt.show.b {
padding: 1.5em 2em;
border-width: .5em 0;
}
#tt code {
background: #3c3c3c;
padding: .1em .3em;
border-top: 1px solid #777;
border-radius: .3em;
line-height: 1.7em;
}
#tt em {
color: #f6a;
}
html.light #tt {
background: #fff;
border-color: #888 #000 #777 #000;
}
html.light #tt,
html.light #toast {
box-shadow: 0 .3em 1em rgba(0,0,0,0.4);
}
html.light #tt code {
background: #060;
color: #fff;
}
html.light #tt em {
color: #d38;
}
#modal {
position: fixed;
overflow: auto;
top: 0;
left: 0;
right: 0;
bottom: 0;
width: 100%;
height: 100%;
z-index: 9001;
background: rgba(64,64,64,0.6);
}
#modal>table {
width: 100%;
height: 100%;
}
#modal td {
text-align: center;
}
#modalc {
display: inline-block;
background: #f7f7f7;
color: #333;
text-shadow: none;
text-align: left;
margin: 3em;
padding: 1em 1.1em;
border-radius: .6em;
box-shadow: 0 .3em 3em rgba(0,0,0,0.5);
max-width: 50em;
max-height: 30em;
overflow: auto;
}
@media (min-width: 40em) {
#modalc {
min-width: 30em;
}
}
#modalb {
text-align: right;
padding-top: 1em;
}
#modalb a {
color: #000;
background: #ccc;
display: inline-block;
border-radius: .3em;
padding: .5em 1em;
outline: none;
border: none;
}
#modalb a:focus,
#modalb a:hover {
background: #06d;
color: #fff;
}
#modalb a+a {
margin-left: .5em;
}
#modali {
display: block;
width: calc(100% - 1.25em);
margin: 1em -.1em 0 -.1em;
padding: .5em;
outline: none;
border: .25em solid #ccc;
border-radius: .4em;
}
#modali:focus {
border-color: #06d;
}

File diff suppressed because it is too large

274
copyparty/web/upload.css Normal file
View File

@@ -0,0 +1,274 @@
#op_up2k {
padding: 0 1em 1em 1em;
}
#u2form {
position: absolute;
top: 0;
left: 0;
width: 2px;
height: 2px;
overflow: hidden;
}
#u2form input {
background: #444;
border: 0px solid #444;
outline: none;
}
#u2err.err {
color: #f87;
padding: .5em;
}
#u2err.msg {
color: #999;
padding: .5em;
font-size: .9em;
}
#u2btn {
color: #eee;
background: #555;
background: -moz-linear-gradient(top, #367 0%, #489 50%, #38788a 51%, #367 100%);
background: -webkit-linear-gradient(top, #367 0%, #489 50%, #38788a 51%, #367 100%);
background: linear-gradient(to bottom, #367 0%, #489 50%, #38788a 51%, #367 100%);
filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#489', endColorstr='#38788a', GradientType=0);
text-decoration: none;
line-height: 1.3em;
border: 1px solid #222;
border-radius: .4em;
text-align: center;
font-size: 1.5em;
margin: .5em auto;
padding: .8em 0;
width: 16em;
cursor: pointer;
box-shadow: .4em .4em 0 #111;
}
#op_up2k.srch #u2btn {
background: linear-gradient(to bottom, #ca3 0%, #fd8 50%, #fc6 51%, #b92 100%);
text-shadow: 1px 1px 1px #fc6;
color: #333;
}
#u2conf #u2btn {
margin: -1.5em 0;
padding: .8em 0;
width: 100%;
max-width: 12em;
display: inline-block;
}
#u2conf #u2btn_cw {
text-align: right;
}
#u2notbtn {
display: none;
text-align: center;
background: #333;
padding-top: 1em;
}
#u2notbtn * {
line-height: 1.3em;
}
#u2tab {
margin: 3em auto;
width: calc(100% - 2em);
max-width: 100em;
}
#op_up2k.srch #u2tab {
max-width: none;
}
#u2tab td {
border: 1px solid #ccc;
border-width: 0 0px 1px 0;
padding: .1em .3em;
}
#u2tab td:nth-child(2) {
width: 5em;
white-space: nowrap;
}
#u2tab td:nth-child(3) {
width: 40%;
}
#op_up2k.srch #u2tab td:nth-child(3) {
font-family: sans-serif;
width: auto;
}
#u2tab tbody tr:hover td {
background: #222;
}
#u2cards {
padding: 1em 0 .3em 1em;
margin: 1.5em auto -2.5em auto;
white-space: nowrap;
text-align: center;
overflow: hidden;
}
#u2cards.w {
width: 45em;
text-align: left;
}
#u2cards a {
padding: .2em 1em;
border: 1px solid #777;
border-width: 0 0 1px 0;
background: linear-gradient(to bottom, #333, #222);
}
#u2cards a:first-child {
border-radius: .4em 0 0 0;
}
#u2cards a:last-child {
border-radius: 0 .4em 0 0;
}
#u2cards a.act {
padding-bottom: .5em;
border-width: 1px 1px .1em 1px;
border-radius: .3em .3em 0 0;
margin-left: -1px;
background: linear-gradient(to bottom, #464, #333 80%);
box-shadow: 0 -.17em .67em #280;
border-color: #7c5 #583 #333 #583;
position: relative;
color: #fd7;
}
#u2cards span {
color: #fff;
}
#u2conf {
margin: 1em auto;
width: 30em;
}
#u2conf.has_btn {
width: 48em;
}
#u2conf * {
text-align: center;
line-height: 1em;
margin: 0;
padding: 0;
border: none;
outline: none;
}
#u2conf .txtbox {
width: 3em;
color: #fff;
background: #444;
border: 1px solid #777;
font-size: 1.2em;
padding: .15em 0;
height: 1.05em;
}
#u2conf .txtbox.err {
background: #922;
}
#u2conf a {
color: #fff;
background: #c38;
text-decoration: none;
border-radius: .1em;
font-size: 1.5em;
padding: .1em 0;
margin: 0 -1px;
width: 1.5em;
height: 1em;
display: inline-block;
position: relative;
bottom: -0.08em;
}
#u2conf input+a {
background: #d80;
}
#u2conf label {
font-size: 1.6em;
width: 2em;
height: 1em;
padding: .4em 0;
display: block;
border-radius: .25em;
}
#u2conf input[type="checkbox"] {
position: relative;
opacity: .02;
top: 2em;
}
#u2conf input[type="checkbox"]+label {
position: relative;
background: #603;
border-bottom: .2em solid #a16;
box-shadow: 0 .1em .3em #a00 inset;
}
#u2conf input[type="checkbox"]:checked+label {
background: #6a1;
border-bottom: .2em solid #efa;
box-shadow: 0 .1em .5em #0c0;
}
#u2conf input[type="checkbox"]+label:hover {
box-shadow: 0 .1em .3em #fb0;
border-color: #fb0;
}
#op_up2k.srch #u2conf td:nth-child(1)>*,
#op_up2k.srch #u2conf td:nth-child(2)>*,
#op_up2k.srch #u2conf td:nth-child(3)>* {
background: #777;
border-color: #ccc;
box-shadow: none;
opacity: .2;
}
#u2foot {
color: #fff;
font-style: italic;
}
#u2foot span {
color: #999;
font-size: .9em;
}
#u2footfoot {
margin-bottom: -1em;
}
.prog {
font-family: monospace;
}
#u2tab a>span {
font-weight: bold;
font-style: italic;
color: #fff;
padding-left: .2em;
}
#u2cleanup {
float: right;
margin-bottom: -.3em;
}
html.light #u2btn {
box-shadow: .4em .4em 0 #ccc;
}
html.light #u2cards span {
color: #000;
}
html.light #u2cards a {
background: linear-gradient(to bottom, #eee, #fff);
}
html.light #u2cards a.act {
color: #037;
background: inherit;
box-shadow: 0 -.17em .67em #0ad;
border-color: #09c #05a #eee #05a;
}
html.light #u2conf .txtbox {
background: #fff;
color: #444;
}
html.light #u2conf .txtbox.err {
background: #f96;
color: #300;
}
html.light #op_up2k.srch #u2btn {
border-color: #a80;
}
html.light #u2foot {
color: #000;
}
html.light #u2tab tbody tr:hover td {
background: #fff;
}

View File

@@ -7,14 +7,7 @@ if (!window['console'])
var is_touch = 'ontouchstart' in window,
IPHONE = /iPhone|iPad|iPod/i.test(navigator.userAgent),
ANDROID = /android/i.test(navigator.userAgent);
var ebi = document.getElementById.bind(document),
QS = document.querySelector.bind(document),
QSA = document.querySelectorAll.bind(document),
mknod = document.createElement.bind(document);
ANDROID = /(android)/i.test(navigator.userAgent);
// error handler for mobile devices
@@ -28,65 +21,36 @@ function esc(txt) {
}[c];
});
}
var crashed = false, ignexd = {};
function vis_exh(msg, url, lineNo, columnNo, error) {
if ((msg + '').indexOf('ResizeObserver') !== -1)
return; // chrome issue 809574 (benign, from <video>)
var ekey = url + '\n' + lineNo + '\n' + msg;
if (ignexd[ekey] || crashed)
if (!window.onerror)
return;
crashed = true;
window.onerror = undefined;
var html = ['<h1>you hit a bug!</h1><p style="font-size:1.3em;margin:0">try to <a href="#" onclick="localStorage.clear();location.reload();">reset copyparty settings</a> if you are stuck here, or <a href="#" onclick="ignex();">ignore this</a> / <a href="#" onclick="ignex(true);">ignore all</a></p><p>please send me a screenshot arigathanks gozaimuch: <code>ed/irc.rizon.net</code> or <code>ed#2644</code><br />&nbsp; (and if you can, press F12 and include the "Console" tab in the screenshot too)</p><p>',
esc(url + ' @' + lineNo + ':' + columnNo), '<br />' + esc(String(msg)) + '</p>'];
window['vis_exh'] = null;
var html = ['<h1>you hit a bug!</h1><p>please send me a screenshot arigathanks gozaimuch: <code>ed/irc.rizon.net</code> or <code>ed#2644</code><br />&nbsp; (and if you can, press F12 and include the "Console" tab in the screenshot too)</p><p>',
esc(String(msg)), '</p><p>', esc(url + ' @' + lineNo + ':' + columnNo), '</p>'];
try {
if (error) {
var find = ['desc', 'stack', 'trace'];
for (var a = 0; a < find.length; a++)
if (String(error[find[a]]) !== 'undefined')
html.push('<h3>' + find[a] + '</h3>' +
esc(String(error[find[a]])).replace(/\n/g, '<br />\n'));
}
ignexd[ekey] = true;
html.push('<h3>localStore</h3>' + esc(JSON.stringify(localStorage)));
if (error) {
var find = ['desc', 'stack', 'trace'];
for (var a = 0; a < find.length; a++)
if (String(error[find[a]]) !== 'undefined')
html.push('<h2>' + find[a] + '</h2>' +
esc(String(error[find[a]])).replace(/\n/g, '<br />\n'));
}
catch (e) { }
document.body.innerHTML = html.join('\n');
try {
var exbox = ebi('exbox');
if (!exbox) {
exbox = mknod('div');
exbox.setAttribute('id', 'exbox');
document.body.appendChild(exbox);
var s = mknod('style');
s.innerHTML = 'body{background:#333;color:#ddd;font-family:sans-serif;font-size:0.8em;padding:0 1em 1em 1em} code{color:#bf7;background:#222;padding:.1em;margin:.2em;font-size:1.1em;font-family:monospace,monospace} *{line-height:1.5em}';
document.head.appendChild(s);
var s = mknod('style');
s.innerHTML = '#exbox{background:#333;color:#ddd;font-family:sans-serif;font-size:0.8em;padding:0 1em 1em 1em;z-index:80386;position:fixed;top:0;left:0;right:0;bottom:0;width:100%;height:100%} #exbox h1{margin:.5em 1em 0 0;padding:0} #exbox h3{border-top:1px solid #999;margin:1em 0 0 0} #exbox a{text-decoration:underline;color:#fc0} #exbox code{color:#bf7;background:#222;padding:.1em;margin:.2em;font-size:1.1em;font-family:monospace,monospace} #exbox *{line-height:1.5em}';
document.head.appendChild(s);
}
exbox.innerHTML = html.join('\n');
exbox.style.display = 'block';
}
catch (e) {
document.body.innerHTML = html.join('\n');
}
throw 'fatal_err';
}
function ignex(all) {
var o = ebi('exbox');
o.style.display = 'none';
o.innerHTML = '';
crashed = false;
if (!all)
window.onerror = vis_exh;
}
function ctrl(e) {
return e && (e.ctrlKey || e.metaKey);
}
var ebi = document.getElementById.bind(document),
QS = document.querySelector.bind(document),
QSA = document.querySelectorAll.bind(document),
mknod = document.createElement.bind(document);
function ev(e) {
@@ -123,15 +87,6 @@ if (!String.startsWith) {
return this.substring(i, i + s.length) === s;
};
}
if (!Element.prototype.closest) {
Element.prototype.closest = function (s) {
var el = this;
do {
if (el.msMatchesSelector(s)) return el;
el = el.parentElement || el.parentNode;
} while (el !== null && el.nodeType === 1);
}
}
// https://stackoverflow.com/a/950146
@@ -140,10 +95,10 @@ function import_js(url, cb) {
var script = mknod('script');
script.type = 'text/javascript';
script.src = url;
script.onreadystatechange = cb;
script.onload = cb;
script.onerror = function () {
toast.err(0, 'Failed to load module:\n' + url);
};
head.appendChild(script);
}
@@ -361,18 +316,6 @@ function linksplit(rp) {
}
function vsplit(vp) {
if (vp.endsWith('/'))
vp = vp.slice(0, -1);
var ofs = vp.lastIndexOf('/') + 1,
base = vp.slice(0, ofs),
fn = vp.slice(ofs);
return [base, fn];
}
function uricom_enc(txt, do_fb_enc) {
try {
return encodeURIComponent(txt);
@@ -398,15 +341,6 @@ function uricom_dec(txt) {
}
function uricom_adec(arr) {
var ret = [];
for (var a = 0; a < arr.length; a++)
ret.push(uricom_dec(arr[a])[0]);
return ret;
}
function get_evpath() {
var ret = document.location.pathname;
@@ -455,27 +389,25 @@ function has(haystack, needle) {
}
function apop(arr, v) {
var ofs = arr.indexOf(v);
if (ofs !== -1)
arr.splice(ofs, 1);
}
function jcp(obj) {
return JSON.parse(JSON.stringify(obj));
}
function sread(key) {
return localStorage.getItem(key);
if (window.localStorage)
return localStorage.getItem(key);
return null;
}
function swrite(key, val) {
if (val === undefined || val === null)
localStorage.removeItem(key);
else
localStorage.setItem(key, val);
if (window.localStorage) {
if (val === undefined || val === null)
localStorage.removeItem(key);
else
localStorage.setItem(key, val);
}
}
function jread(key, fb) {
@@ -558,20 +490,13 @@ function hist_replace(url) {
var tt = (function () {
var r = {
"tt": mknod("div"),
"en": true,
"el": null,
"skip": false
"en": true
};
r.tt.setAttribute('id', 'tt');
document.body.appendChild(r.tt);
r.show = function () {
if (r.skip) {
r.skip = false;
return;
}
function show() {
var cfg = sread('tooltips');
if (cfg !== null && cfg != '1')
return;
@@ -580,62 +505,21 @@ var tt = (function () {
if (!msg)
return;
r.el = this;
var pos = this.getBoundingClientRect(),
dir = this.getAttribute('ttd') || '',
left = pos.left < window.innerWidth / 2,
top = pos.top < window.innerHeight / 2,
big = this.className.indexOf(' ttb') !== -1;
top = pos.top < window.innerHeight / 2;
if (dir.indexOf('u') + 1) top = false;
if (dir.indexOf('d') + 1) top = true;
if (dir.indexOf('l') + 1) left = false;
if (dir.indexOf('r') + 1) left = true;
clmod(r.tt, 'b', big);
r.tt.style.top = top ? pos.bottom + 'px' : 'auto';
r.tt.style.bottom = top ? 'auto' : (window.innerHeight - pos.top) + 'px';
r.tt.style.left = left ? pos.left + 'px' : 'auto';
r.tt.style.right = left ? 'auto' : (window.innerWidth - pos.right) + 'px';
r.tt.innerHTML = msg.replace(/\$N/g, "<br />");
r.el.addEventListener('mouseleave', r.hide);
clmod(r.tt, 'show', 1);
};
r.hide = function (e) {
ev(e);
clmod(r.tt, 'show');
if (r.el)
r.el.removeEventListener('mouseleave', r.hide);
};
if (is_touch && IPHONE) {
var f1 = r.show,
f2 = r.hide;
r.show = function () {
setTimeout(f1.bind(this), 301);
};
r.hide = function () {
setTimeout(f2.bind(this), 301);
};
}
r.tt.onclick = r.hide;
r.att = function (ctr) {
var _show = r.en ? r.show : null,
_hide = r.en ? r.hide : null,
o = ctr.querySelectorAll('*[tt]');
for (var a = o.length - 1; a >= 0; a--) {
o[a].onfocus = _show;
o[a].onblur = _hide;
o[a].onmouseenter = _show;
o[a].onmouseleave = _hide;
}
r.hide();
function hide() {
clmod(r.tt, 'show');
}
r.init = function () {
@@ -649,174 +533,19 @@ var tt = (function () {
};
r.en = bcfg_get('tooltips', true)
}
r.att(document);
};
return r;
})();
var _show = r.en ? show : null,
_hide = r.en ? hide : null;
function lf2br(txt) {
var html = '', hp = txt.split(/(?=<.?pre>)/i);
for (var a = 0; a < hp.length; a++)
html += hp[a].startsWith('<pre>') ? hp[a] :
hp[a].replace(/<br ?.?>\n/g, '\n').replace(/\n<br ?.?>/g, '\n').replace(/\n/g, '<br />\n');
return html;
}
var toast = (function () {
var r = {},
te = null,
obj = mknod('div');
obj.setAttribute('id', 'toast');
document.body.appendChild(obj);
r.visible = false;
r.hide = function (e) {
ev(e);
clearTimeout(te);
clmod(obj, 'vis');
r.visible = false;
};
r.show = function (cl, ms, txt) {
clearTimeout(te);
if (ms)
te = setTimeout(r.hide, ms * 1000);
obj.innerHTML = '<a href="#" id="toastc">x</a>' + lf2br(txt);
obj.className = cl;
ms += obj.offsetWidth;
obj.className += ' vis';
ebi('toastc').onclick = r.hide;
r.visible = true;
};
r.ok = function (ms, txt) {
r.show('ok', ms, txt);
};
r.inf = function (ms, txt) {
r.show('inf', ms, txt);
};
r.warn = function (ms, txt) {
r.show('warn', ms, txt);
};
r.err = function (ms, txt) {
r.show('err', ms, txt);
};
return r;
})();
var modal = (function () {
var r = {},
q = [],
o = null,
cb_ok = null,
cb_ng = null;
r.busy = false;
r.show = function (html) {
o = mknod('div');
o.setAttribute('id', 'modal');
o.innerHTML = '<table><tr><td><div id="modalc">' + html + '</div></td></tr></table>';
document.body.appendChild(o);
document.addEventListener('keydown', onkey);
r.busy = true;
var a = ebi('modal-ng');
if (a)
a.onclick = ng;
a = ebi('modal-ok');
a.onclick = ok;
(ebi('modali') || a).focus();
};
r.hide = function () {
o.parentNode.removeChild(o);
document.removeEventListener('keydown', onkey);
r.busy = false;
setTimeout(next, 50);
};
function ok(e) {
ev(e);
var v = ebi('modali');
v = v ? v.value : true;
r.hide();
if (cb_ok)
cb_ok(v);
}
function ng(e) {
ev(e);
r.hide();
if (cb_ng)
cb_ng(null);
}
function onkey(e) {
if (e.code == 'Enter') {
var a = ebi('modal-ng');
if (a && document.activeElement == a)
return ng();
return ok();
var o = QSA('*[tt]');
for (var a = o.length - 1; a >= 0; a--) {
o[a].onfocus = _show;
o[a].onblur = _hide;
o[a].onmouseenter = _show;
o[a].onmouseleave = _hide;
}
if (e.code == 'Escape')
return ng();
}
function next() {
if (!r.busy && q.length)
q.shift()();
}
r.alert = function (html, cb) {
q.push(function () {
_alert(lf2br(html), cb);
});
next();
hide();
};
function _alert(html, cb) {
cb_ok = cb_ng = cb;
html += '<div id="modalb"><a href="#" id="modal-ok">OK</a></div>';
r.show(html);
}
r.confirm = function (html, cok, cng) {
q.push(function () {
_confirm(lf2br(html), cok, cng);
});
next();
}
function _confirm(html, cok, cng) {
cb_ok = cok;
cb_ng = cng === undefined ? cok : null;
html += '<div id="modalb"><a href="#" id="modal-ok">OK</a><a href="#" id="modal-ng">Cancel</a></div>';
r.show(html);
}
r.prompt = function (html, v, cok, cng) {
q.push(function () {
_prompt(lf2br(html), v, cok, cng);
});
next();
}
function _prompt(html, v, cok, cng) {
cb_ok = cok;
cb_ng = cng === undefined ? cok : null;
html += '<input id="modali" type="text" /><div id="modalb"><a href="#" id="modal-ok">OK</a><a href="#" id="modal-ng">Cancel</a></div>';
r.show(html);
ebi('modali').value = v || '';
}
return r;
})();

View File

@@ -10,25 +10,19 @@ u k:k
# share "." (the current directory)
# as "/" (the webroot) for the following users:
# "r" grants read-access for anyone
# "rw ed" grants read-write to ed
# "a ed" grants read-write to ed
.
/
r
rw ed
a ed
# custom permissions for the "priv" folder:
# user "k" can only see/read the contents
# user "ed" gets read-write access
# user "k" can see/read the contents
# and "ed" gets read-write access
./priv
/priv
r k
rw ed
# this does the same thing:
./priv
/priv
r ed k
w ed
a ed
# share /home/ed/Music/ as /music and let anyone read it
# (this will replace any folder called "music" in the webroot)
@@ -47,5 +41,5 @@ c e2d
c nodupe
# this entire config file can be replaced with these arguments:
# -u ed:123 -u k:k -v .::r:a,ed -v priv:priv:r,k:rw,ed -v /home/ed/Music:music:r -v /home/ed/inc:dump:w:c,e2d:c,nodupe
# -u ed:123 -u k:k -v .::r:aed -v priv:priv:rk:aed -v /home/ed/Music:music:r -v /home/ed/inc:dump:w
# but note that the config file always wins in case of conflicts
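For reference, a config file in the format above is normally handed to copyparty with the -c flag; a minimal sketch, where the sfx filename, config path and port are assumptions rather than something taken from this diff:

    python3 copyparty-sfx.py -c ./copyparty.conf -p 3923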

View File

@@ -1,51 +0,0 @@
<!DOCTYPE html><html lang="en"><head>
<meta charset="utf-8">
<title>hls-test</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
</head><body>
<video id="vid" controls></video>
<script src="hls.light.js"></script>
<script>
var video = document.getElementById('vid');
var hls = new Hls({
debug: true,
autoStartLoad: false
});
hls.loadSource('live/v.m3u8');
hls.attachMedia(video);
hls.on(Hls.Events.MANIFEST_PARSED, function() {
hls.startLoad(0);
});
hls.on(Hls.Events.MEDIA_ATTACHED, function() {
video.muted = true;
video.play();
});
/*
general good news:
- doesn't need fixed-length segments; ok to let x264 pick optimal keyframes and slice on those
- hls.js polls the m3u8 for new segments, scales the duration accordingly, seeking works great
- the sfx will grow by 66 KiB since that's how small hls.js can get, wait that's not good
# vod, creates m3u8 at the end, fixed keyframes, v bad
ffmpeg -hide_banner -threads 0 -flags -global_header -i ..\CowboyBebopMovie-OP1.webm -vf scale=1280:-4,format=yuv420p -ac 2 -c:a libopus -b:a 128k -c:v libx264 -preset slow -crf 24 -maxrate:v 5M -bufsize:v 10M -g 120 -keyint_min 120 -sc_threshold 0 -hls_time 4 -hls_playlist_type vod -hls_segment_filename v%05d.ts v.m3u8
# live, updates m3u8 as it goes, dynamic keyframes, streamable with hls.js
ffmpeg -hide_banner -threads 0 -flags -global_header -i ..\..\CowboyBebopMovie-OP1.webm -vf scale=1280:-4,format=yuv420p -ac 2 -c:a libopus -b:a 128k -c:v libx264 -preset slow -crf 24 -maxrate:v 5M -bufsize:v 10M -f segment -segment_list v.m3u8 -segment_format mpegts -segment_list_flags live v%05d.ts
# fmp4 (fragmented mp4), doesn't work with hls.js, gets duration 149:07:51 (536871s), probably the tkhd/mdhd 0xffffffff (timebase 8000? ok)
ffmpeg -re -hide_banner -threads 0 -flags +cgop -i ..\..\CowboyBebopMovie-OP1.webm -vf scale=1280:-4,format=yuv420p -ac 2 -c:a libopus -b:a 128k -c:v libx264 -preset slow -crf 24 -maxrate:v 5M -bufsize:v 10M -f segment -segment_list v.m3u8 -segment_format fmp4 -segment_list_flags live v%05d.mp4
# try 2, works, uses tempfiles for m3u8 updates, good, 6% smaller
ffmpeg -re -hide_banner -threads 0 -flags +cgop -i ..\..\CowboyBebopMovie-OP1.webm -vf scale=1280:-4,format=yuv420p -ac 2 -c:a libopus -b:a 128k -c:v libx264 -preset slow -crf 24 -maxrate:v 5M -bufsize:v 10M -f hls -hls_segment_type fmp4 -hls_list_size 0 -hls_segment_filename v%05d.mp4 v.m3u8
more notes
- adding -hls_flags single_file makes duration wack during playback (for both fmp4 and ts), ok once finalized and refreshed, gives no size reduction anyways
- bebop op has good keyframe spacing for testing hls.js, in particular it hops one seg back and immediately resumes if it hits eof with the explicit hls.startLoad(0); otherwise it jumps into the middle of a seg and becomes art
- can probably -c:v copy most of the time, is there a way to check for cgop? todo
*/
</script>
</body></html>
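Regarding the "can probably -c:v copy / is there a way to check for cgop" note above: one way to eyeball the existing keyframe spacing before deciding to stream-copy is ffprobe; a rough sketch, where the input filename is an assumption:

    ffprobe -v error -select_streams v:0 -show_frames -show_entries frame=key_frame,pict_type -of csv=p=0 input.webm | head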

View File

@@ -44,7 +44,7 @@ avg() { awk 'function pr(ncsz) {if (nsmp>0) {printf "%3s %s\n", csz, sum/nsmp} c
dirs=("$HOME/vfs/ほげ" "$HOME/vfs/ほげ/ぴよ" "$HOME/vfs/$(printf \\xed\\x91)" "$HOME/vfs/$(printf \\xed\\x91/\\xed\\x92)")
mkdir -p "${dirs[@]}"
for dir in "${dirs[@]}"; do for fn in ふが "$(printf \\xed\\x93)" 'qwe,rty;asd fgh+jkl%zxc&vbn <qwe>"rty'"'"'uio&asd&nbsp;fgh'; do echo "$dir" > "$dir/$fn.html"; done; done
# qw er+ty%20ui%%20op<as>df&gh&amp;jk#zx'cv"bn`m=qw*er^ty?ui@op,as.df-gh_jk
##
## upload mojibake
@@ -79,10 +79,6 @@ command -v gdate && date() { gdate "$@"; }; while true; do t=$(date +%s.%N); (ti
# get all up2k search result URLs
var t=[]; var b=document.location.href.split('#')[0].slice(0, -1); document.querySelectorAll('#u2tab .prog a').forEach((x) => {t.push(b+encodeURI(x.getAttribute("href")))}); console.log(t.join("\n"));
# rename all selected songs to <leading-track-number> + <Title> + <extension>
var sel=msel.getsel(), ci=find_file_col('Title')[0], re=[]; for (var a=0; a<sel.length; a++) { var url=sel[a].vp, tag=ebi(sel[a].id).closest('tr').querySelectorAll('td')[ci].textContent, name=uricom_dec(vsplit(url)[1])[0], m=/^([0-9]+[\. -]+)?.*(\.[^\.]+$)/.exec(name), name2=(m[1]||'')+tag+m[2], url2=vsplit(url)[0]+uricom_enc(name2,false); if (url!=url2) re.push([url, url2]); }
console.log(JSON.stringify(re, null, ' '));
function f() { if (!re.length) return treectl.goto(get_evpath()); var [u1,u2] = re.shift(); fetch(u1+'?move='+u2).then((rsp) => {if (rsp.ok) f(); }); }; f();
##
## bash oneliners
@@ -170,10 +166,7 @@ dbg.asyncStore.pendingBreakpoints = {}
about:config >> devtools.debugger.prefs-schema-version = -1
# determine server version
git pull; git reset --hard origin/HEAD && git log --format=format:"%H %ai %d" --decorate=full > ../revs && cat ../{util,browser,up2k}.js >../vr && cat ../revs | while read -r rev extra; do (git reset --hard $rev >/dev/null 2>/dev/null && dsz=$(cat copyparty/web/{util,browser,up2k}.js >../vg 2>/dev/null && diff -wNarU0 ../{vg,vr} | wc -c) && printf '%s %6s %s\n' "$rev" $dsz "$extra") </dev/null; done
# download all sfx versions
curl https://api.github.com/repos/9001/copyparty/releases?per_page=100 | jq -r '.[] | .tag_name + " " + .name' | while read v t; do fn="copyparty $v $t.py"; [ -e $fn ] || curl https://github.com/9001/copyparty/releases/download/$v/copyparty-sfx.py -Lo "$fn"; done
git pull; git reset --hard origin/HEAD && git log --format=format:"%H %ai %d" --decorate=full > ../revs && cat ../{util,browser}.js >../vr && cat ../revs | while read -r rev extra; do (git reset --hard $rev >/dev/null 2>/dev/null && dsz=$(cat copyparty/web/{util,browser}.js >../vg 2>/dev/null && diff -wNarU0 ../{vg,vr} | wc -c) && printf '%s %6s %s\n' "$rev" $dsz "$extra") </dev/null; done
##

View File

@@ -2,7 +2,6 @@
set -e
echo
help() { exec cat <<'EOF'
# optional args:
#
@@ -21,14 +20,7 @@ help() { exec cat <<'EOF'
#
# `no-cm` saves ~90k by removing easymde/codemirror
# (the fancy markdown editor)
#
# `no-fnt` saves ~9k by removing the source-code-pro font
# (mainly used by the markdown viewer/editor)
#
# `no-dd` saves ~2k by removing the mouse cursor
EOF
}
# port install gnutar findutils gsed coreutils
gtar=$(command -v gtar || command -v gnutar) || true
@@ -37,7 +29,6 @@ gtar=$(command -v gtar || command -v gnutar) || true
sed() { gsed "$@"; }
find() { gfind "$@"; }
sort() { gsort "$@"; }
sha1sum() { shasum "$@"; }
unexpand() { gunexpand "$@"; }
command -v grealpath >/dev/null &&
realpath() { grealpath "$@"; }
@@ -66,19 +57,14 @@ use_gz=
do_sh=1
do_py=1
while [ ! -z "$1" ]; do
case $1 in
clean) clean=1 ; ;;
re) repack=1 ; ;;
gz) use_gz=1 ; ;;
no-ogv) no_ogv=1 ; ;;
no-fnt) no_fnt=1 ; ;;
no-dd) no_dd=1 ; ;;
no-cm) no_cm=1 ; ;;
no-sh) do_sh= ; ;;
no-py) do_py= ; ;;
*) help ; ;;
esac
shift
[ "$1" = clean ] && clean=1 && shift && continue
[ "$1" = re ] && repack=1 && shift && continue
[ "$1" = gz ] && use_gz=1 && shift && continue
[ "$1" = no-ogv ] && no_ogv=1 && shift && continue
[ "$1" = no-cm ] && no_cm=1 && shift && continue
[ "$1" = no-sh ] && do_sh= && shift && continue
[ "$1" = no-py ] && do_py= && shift && continue
break
done
tmv() {
@@ -86,23 +72,16 @@ tmv() {
mv t "$1"
}
stamp=$(
for d in copyparty scripts; do
find $d -type f -printf '%TY-%Tm-%Td %TH:%TM:%TS %p\n'
done | sort | tail -n 1 | sha1sum | cut -c-16
)
rm -rf sfx/*
mkdir -p sfx build
cd sfx
tmpdir="$(
printf '%s\n' "$TMPDIR" /tmp |
awk '/./ {print; exit}'
)"
[ $repack ] && {
old="$tmpdir/pe-copyparty"
old="$(
printf '%s\n' "$TMPDIR" /tmp |
awk '/./ {print; exit}'
)/pe-copyparty"
echo "repack of files in $old"
cp -pR "$old/"*{dep-j2,copyparty} .
}
@@ -184,12 +163,12 @@ mkdir -p ../dist
sfx_out=../dist/copyparty-sfx
echo cleanup
find -name '*.pyc' -delete
find -name __pycache__ -delete
find .. -name '*.pyc' -delete
find .. -name __pycache__ -delete
# especially prevent osx from leaking your lan ip (wtf apple)
find -type f \( -name .DS_Store -or -name ._.DS_Store \) -delete
find -type f -name ._\* | while IFS= read -r f; do cmp <(printf '\x00\x05\x16') <(head -c 3 -- "$f") && rm -f -- "$f"; done
find .. -type f \( -name .DS_Store -or -name ._.DS_Store \) -delete
find .. -type f -name ._\* | while IFS= read -r f; do cmp <(printf '\x00\x05\x16') <(head -c 3 -- "$f") && rm -f -- "$f"; done
echo use smol web deps
rm -f copyparty/web/deps/*.full.* copyparty/web/dbg-* copyparty/web/Makefile
@@ -208,24 +187,7 @@ done
rm -rf copyparty/web/mde.* copyparty/web/deps/easymde*
echo h > copyparty/web/mde.html
f=copyparty/web/md.html
sed -r '/edit2">edit \(fancy/d' <$f >t
tmv "$f"
}
[ $no_fnt ] && {
rm -f copyparty/web/deps/scp.woff2
f=copyparty/web/md.css
gzip -d "$f"
sed -r '/scp\.woff2/d' <$f >t
tmv "$f"
}
[ $no_dd ] && {
rm -rf copyparty/web/dd
f=copyparty/web/browser.css
gzip -d "$f"
sed -r 's/(cursor: )url\([^)]+\), (pointer)/\1\2/; /[0-9]+% \{cursor:/d; /animation: cursor/d' <$f >t
tmv "$f"
sed -r '/edit2">edit \(fancy/d' <$f >t && tmv "$f"
}
[ $repack ] ||
@@ -258,42 +220,20 @@ find | grep -E '\.(js|html)$' | while IFS= read -r f; do
tmv "$f"
done
gzres() {
command -v pigz &&
pk='pigz -11 -I 256' ||
pk='gzip'
command -v pigz &&
pk='pigz -11 -J 34 -I 100' ||
pk='gzip'
echo "$pk"
find | grep -E '\.(js|css)$' | grep -vF /deps/ | while IFS= read -r f; do
echo -n .
$pk "$f"
done
echo
}
zdir="$tmpdir/cpp-mksfx"
[ -e "$zdir/$stamp" ] || rm -rf "$zdir"
mkdir -p "$zdir"
echo a > "$zdir/$stamp"
nf=$(ls -1 "$zdir"/arc.* | wc -l)
[ $nf -ge 2 ] && [ ! $repack ] && use_zdir=1 || use_zdir=
[ $use_zdir ] || {
echo "$nf alts += 1"
gzres
[ $repack ] ||
tar -cf "$zdir/arc.$(date +%s)" copyparty/web/*.gz
}
[ $use_zdir ] && {
arcs=("$zdir"/arc.*)
arc="${arcs[$RANDOM % ${#arcs[@]} ] }"
echo "using $arc"
tar -xf "$arc"
for f in copyparty/web/*.gz; do
rm "${f%.*}"
done
echo "$pk"
find | grep -E '\.(js|css)$' | grep -vF /deps/ | while IFS= read -r f; do
echo -n .
$pk "$f"
done
echo
}
gzres
echo gen tarlist
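For reference, the repack options listed in the help text above are passed as plain arguments; a hypothetical invocation, assuming the script path is scripts/make-sfx.sh and using only options present on both sides of this diff:

    ./scripts/make-sfx.sh clean no-cm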

View File

@@ -6,8 +6,8 @@ import re, os, sys, time, shutil, signal, threading, tarfile, hashlib, platform,
import subprocess as sp
"""
to edit this file, use HxD or "vim -b"
(there is compressed stuff at the end)
pls don't edit this file with a text editor,
it breaks the compressed stuff at the end
run me with any version of python, i will unpack and run copyparty
@@ -380,7 +380,7 @@ def run(tmp, j2):
fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
except Exception as ex:
if not WINDOWS:
msg("\033[31mflock:{!r}\033[0m".format(ex))
msg("\033[31mflock:", repr(ex))
t = threading.Thread(target=utime, args=(tmp,))
t.daemon = True

View File

@@ -124,7 +124,7 @@ def tc1():
arg = "{}:{}:{}".format(pd, ud, p, hp)
if hp:
arg += ":c,hist=" + hp
arg += ":chist=" + hp
args += ["-v", arg]

View File

@@ -65,9 +65,9 @@ def uncomment(fpath):
def main():
print("uncommenting", end="", flush=True)
print("uncommenting", end="")
for f in sys.argv[1:]:
print(".", end="", flush=True)
print(".", end="")
uncomment(f)
print("k")

View File

@@ -99,7 +99,6 @@ args = {
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Environment :: Console",

View File

@@ -23,19 +23,14 @@ def hdr(query):
class Cfg(Namespace):
def __init__(self, a=None, v=None, c=None):
def __init__(self, a=[], v=[], c=None):
super(Cfg, self).__init__(
a=a or [],
v=v or [],
a=a,
v=v,
c=c,
rproxy=0,
ed=False,
nw=False,
unpost=600,
no_mv=False,
no_del=False,
no_zip=False,
no_voldump=True,
no_scandir=False,
no_sendfile=True,
no_rescan=True,
@@ -43,7 +38,6 @@ class Cfg(Namespace):
nih=True,
mtp=[],
mte="a",
mth="",
hist=None,
no_hash=False,
css_browser=None,
@@ -95,7 +89,7 @@ class TestHttpCli(unittest.TestCase):
if not vol.startswith(top):
continue
mode = vol[-2].replace("a", "rwmd")
mode = vol[-2]
usr = vol[-1]
if usr == "a":
usr = ""
@@ -104,7 +98,7 @@ class TestHttpCli(unittest.TestCase):
vol += "/"
top, sub = vol.split("/", 1)
vcfg.append("{0}/{1}:{1}:{2},{3}".format(top, sub, mode, usr))
vcfg.append("{0}/{1}:{1}:{2}{3}".format(top, sub, mode, usr))
pprint.pprint(vcfg)
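
The test-config hunk above flips `Cfg.__init__` between plain list defaults (`a=[], v=[]`) and the `a=None` plus `a or []` pattern. The distinction matters because a mutable default is created once, at definition time, and is then shared by every call that omits the argument. A standalone illustration (not copyparty code):

```python
def bad(x=[]):        # the default list is evaluated once, at definition time
    x.append(1)
    return x

def good(x=None):     # the usual workaround, as in the `a or []` variant above
    x = x or []
    x.append(1)
    return x

print(bad())   # [1]
print(bad())   # [1, 1]  -- same list object reused between calls
print(good())  # [1]
print(good())  # [1]
```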

View File

@@ -16,20 +16,18 @@ from copyparty import util
class Cfg(Namespace):
def __init__(self, a=None, v=None, c=None):
ex = {k: False for k in "nw e2d e2ds e2dsa e2t e2ts e2tsr".split()}
def __init__(self, a=[], v=[], c=None):
ex = {k: False for k in "e2d e2ds e2dsa e2t e2ts e2tsr".split()}
ex2 = {
"mtp": [],
"mte": "a",
"mth": "",
"hist": None,
"no_hash": False,
"css_browser": None,
"no_voldump": True,
"rproxy": 0,
}
ex.update(ex2)
super(Cfg, self).__init__(a=a or [], v=v or [], c=c, **ex)
super(Cfg, self).__init__(a=a, v=v, c=c, **ex)
class TestVFS(unittest.TestCase):
@@ -59,8 +57,8 @@ class TestVFS(unittest.TestCase):
# type: (VFS, str, str) -> tuple[str, str, str]
"""helper for resolving and listing a folder"""
vn, rem = vfs.get(vpath, uname, True, False)
r1 = vn.ls(rem, uname, False, [[True]])
r2 = vn.ls(rem, uname, False, [[True]])
r1 = vn.ls(rem, uname, False)
r2 = vn.ls(rem, uname, False)
self.assertEqual(r1, r2)
fsdir, real, virt = r1
@@ -70,11 +68,6 @@ class TestVFS(unittest.TestCase):
def log(self, src, msg, c=0):
pass
def assertAxs(self, dct, lst):
t1 = list(sorted(dct.keys()))
t2 = list(sorted(lst))
self.assertEqual(t1, t2)
def test(self):
td = os.path.join(self.td, "vfs")
os.mkdir(td)
@@ -95,53 +88,53 @@ class TestVFS(unittest.TestCase):
self.assertEqual(vfs.nodes, {})
self.assertEqual(vfs.vpath, "")
self.assertEqual(vfs.realpath, td)
self.assertAxs(vfs.axs.uread, ["*"])
self.assertAxs(vfs.axs.uwrite, ["*"])
self.assertEqual(vfs.uread, ["*"])
self.assertEqual(vfs.uwrite, ["*"])
# single read-only rootfs (relative path)
vfs = AuthSrv(Cfg(v=["a/ab/::r"]), self.log).vfs
self.assertEqual(vfs.nodes, {})
self.assertEqual(vfs.vpath, "")
self.assertEqual(vfs.realpath, os.path.join(td, "a", "ab"))
self.assertAxs(vfs.axs.uread, ["*"])
self.assertAxs(vfs.axs.uwrite, [])
self.assertEqual(vfs.uread, ["*"])
self.assertEqual(vfs.uwrite, [])
# single read-only rootfs (absolute path)
vfs = AuthSrv(Cfg(v=[td + "//a/ac/../aa//::r"]), self.log).vfs
self.assertEqual(vfs.nodes, {})
self.assertEqual(vfs.vpath, "")
self.assertEqual(vfs.realpath, os.path.join(td, "a", "aa"))
self.assertAxs(vfs.axs.uread, ["*"])
self.assertAxs(vfs.axs.uwrite, [])
self.assertEqual(vfs.uread, ["*"])
self.assertEqual(vfs.uwrite, [])
# read-only rootfs with write-only subdirectory (read-write for k)
vfs = AuthSrv(
Cfg(a=["k:k"], v=[".::r:rw,k", "a/ac/acb:a/ac/acb:w:rw,k"]),
Cfg(a=["k:k"], v=[".::r:ak", "a/ac/acb:a/ac/acb:w:ak"]),
self.log,
).vfs
self.assertEqual(len(vfs.nodes), 1)
self.assertEqual(vfs.vpath, "")
self.assertEqual(vfs.realpath, td)
self.assertAxs(vfs.axs.uread, ["*", "k"])
self.assertAxs(vfs.axs.uwrite, ["k"])
self.assertEqual(vfs.uread, ["*", "k"])
self.assertEqual(vfs.uwrite, ["k"])
n = vfs.nodes["a"]
self.assertEqual(len(vfs.nodes), 1)
self.assertEqual(n.vpath, "a")
self.assertEqual(n.realpath, os.path.join(td, "a"))
self.assertAxs(n.axs.uread, ["*", "k"])
self.assertAxs(n.axs.uwrite, ["k"])
self.assertEqual(n.uread, ["*", "k"])
self.assertEqual(n.uwrite, ["k"])
n = n.nodes["ac"]
self.assertEqual(len(vfs.nodes), 1)
self.assertEqual(n.vpath, "a/ac")
self.assertEqual(n.realpath, os.path.join(td, "a", "ac"))
self.assertAxs(n.axs.uread, ["*", "k"])
self.assertAxs(n.axs.uwrite, ["k"])
self.assertEqual(n.uread, ["*", "k"])
self.assertEqual(n.uwrite, ["k"])
n = n.nodes["acb"]
self.assertEqual(n.nodes, {})
self.assertEqual(n.vpath, "a/ac/acb")
self.assertEqual(n.realpath, os.path.join(td, "a", "ac", "acb"))
self.assertAxs(n.axs.uread, ["k"])
self.assertAxs(n.axs.uwrite, ["*", "k"])
self.assertEqual(n.uread, ["k"])
self.assertEqual(n.uwrite, ["*", "k"])
# something funky about the windows path normalization,
# doesn't really matter but makes the test messy, TODO?
@@ -180,24 +173,24 @@ class TestVFS(unittest.TestCase):
# admin-only rootfs with all-read-only subfolder
vfs = AuthSrv(
Cfg(a=["k:k"], v=[".::rw,k", "a:a:r"]),
Cfg(a=["k:k"], v=[".::ak", "a:a:r"]),
self.log,
).vfs
self.assertEqual(len(vfs.nodes), 1)
self.assertEqual(vfs.vpath, "")
self.assertEqual(vfs.realpath, td)
self.assertAxs(vfs.axs.uread, ["k"])
self.assertAxs(vfs.axs.uwrite, ["k"])
self.assertEqual(vfs.uread, ["k"])
self.assertEqual(vfs.uwrite, ["k"])
n = vfs.nodes["a"]
self.assertEqual(len(vfs.nodes), 1)
self.assertEqual(n.vpath, "a")
self.assertEqual(n.realpath, os.path.join(td, "a"))
self.assertAxs(n.axs.uread, ["*"])
self.assertAxs(n.axs.uwrite, [])
self.assertEqual(vfs.can_access("/", "*"), [False, False, False, False])
self.assertEqual(vfs.can_access("/", "k"), [True, True, False, False])
self.assertEqual(vfs.can_access("/a", "*"), [True, False, False, False])
self.assertEqual(vfs.can_access("/a", "k"), [True, False, False, False])
self.assertEqual(n.uread, ["*"])
self.assertEqual(n.uwrite, [])
self.assertEqual(vfs.can_access("/", "*"), [False, False])
self.assertEqual(vfs.can_access("/", "k"), [True, True])
self.assertEqual(vfs.can_access("/a", "*"), [True, False])
self.assertEqual(vfs.can_access("/a", "k"), [True, False])
# breadth-first construction
vfs = AuthSrv(
@@ -254,26 +247,26 @@ class TestVFS(unittest.TestCase):
./src
/dst
r a
rw asd
a asd
"""
).encode("utf-8")
)
au = AuthSrv(Cfg(c=[cfg_path]), self.log)
self.assertEqual(au.acct["a"], "123")
self.assertEqual(au.acct["asd"], "fgh:jkl")
self.assertEqual(au.user["a"], "123")
self.assertEqual(au.user["asd"], "fgh:jkl")
n = au.vfs
# root was not defined, so PWD with no access to anyone
self.assertEqual(n.vpath, "")
self.assertEqual(n.realpath, None)
self.assertAxs(n.axs.uread, [])
self.assertAxs(n.axs.uwrite, [])
self.assertEqual(n.uread, [])
self.assertEqual(n.uwrite, [])
self.assertEqual(len(n.nodes), 1)
n = n.nodes["dst"]
self.assertEqual(n.vpath, "dst")
self.assertEqual(n.realpath, os.path.join(td, "src"))
self.assertAxs(n.axs.uread, ["a", "asd"])
self.assertAxs(n.axs.uwrite, ["asd"])
self.assertEqual(n.uread, ["a", "asd"])
self.assertEqual(n.uwrite, ["asd"])
self.assertEqual(len(n.nodes), 0)
os.unlink(cfg_path)
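
One side of the VFS test hunks above adds an `assertAxs` helper so the per-volume read/write user lists are compared order-insensitively rather than with a plain `assertEqual`. The helper amounts to comparing sorted copies; a standalone equivalent:

```python
def assert_same_members(a, b):  # illustrative stand-in for the assertAxs helper
    assert sorted(a) == sorted(b), (a, b)

assert_same_members(["k", "*"], ["*", "k"])  # passes: order is ignored
# assert_same_members(["k"], ["*", "k"])     # would raise AssertionError
```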

View File

@@ -31,7 +31,7 @@ if MACOS:
from copyparty.util import Unrecv
def runcmd(argv):
def runcmd(*argv):
p = sp.Popen(argv, stdout=sp.PIPE, stderr=sp.PIPE)
stdout, stderr = p.communicate()
stdout = stdout.decode("utf-8")
@@ -39,8 +39,8 @@ def runcmd(argv):
return [p.returncode, stdout, stderr]
def chkcmd(argv):
ok, sout, serr = runcmd(argv)
def chkcmd(*argv):
ok, sout, serr = runcmd(*argv)
if ok != 0:
raise Exception(serr)
@@ -60,20 +60,12 @@ def get_ramdisk():
if os.path.exists("/Volumes"):
# hdiutil eject /Volumes/cptd/
devname, _ = chkcmd("hdiutil attach -nomount ram://131072".split())
devname, _ = chkcmd("hdiutil", "attach", "-nomount", "ram://131072")
devname = devname.strip()
print("devname: [{}]".format(devname))
for _ in range(10):
try:
_, _ = chkcmd(["diskutil", "eraseVolume", "HFS+", "cptd", devname])
with open("/Volumes/cptd/.metadata_never_index", "w") as f:
f.write("orz")
try:
shutil.rmtree("/Volumes/cptd/.fseventsd")
except:
pass
_, _ = chkcmd("diskutil", "eraseVolume", "HFS+", "cptd", devname)
return subdir("/Volumes/cptd")
except Exception as ex:
print(repr(ex))
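
The helper hunks above switch `runcmd`/`chkcmd` between a vararg signature (`*argv`) and a single argv list; the list form pairs naturally with `"hdiutil attach ...".split()`, while the vararg form needs every token spelled out or a `*` splat at the call site. A small sketch of the list-style helper with a portable example command (not the macOS-only `hdiutil` call used by the test):

```python
import subprocess as sp
import sys

def runcmd(argv):  # list-style signature, matching one side of the diff
    p = sp.Popen(argv, stdout=sp.PIPE, stderr=sp.PIPE)
    out, err = p.communicate()
    return p.returncode, out.decode("utf-8"), err.decode("utf-8")

rc, out, _ = runcmd([sys.executable, "-c", "print('hi')"])
print(rc, out.strip())  # 0 hi
```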
@@ -127,13 +119,14 @@ class VHttpConn(object):
self.addr = ("127.0.0.1", "42069")
self.args = args
self.asrv = asrv
self.nid = None
self.is_mp = False
self.log_func = log
self.log_src = "a"
self.lf_url = None
self.hsrv = VHttpSrv()
self.nreq = 0
self.nbyte = 0
self.workload = 0
self.ico = None
self.thumbcli = None
self.t0 = time.time()