We'd like to share large files with the same semantics as wiki pages. We can assume that they are too large to pass through the browser's GET and PUT as synchronous operations.
See Image Assets for earlier thinking.
Store assets in .wiki within an assets subdirectory. The exact location is still to be decided.
Assets are to be identified by a random id and a suffix. An item would be created with the asset file name, id.suffix, in its "asset" field, either as a string or an array of strings.
Assets can be created client-side, cached locally, and saved asynchronously coincident with a page action that records the asset's presence. This will require a new PUT route.
Assets can be retrieved and cached client-side from remote servers. This will require a new CORS GET route.
A server would retrieve an asset when a page with asset-bearing items is forked. The journal would provide the best advice as to how to fetch the asset.
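Reading that advice from the journal could work like this sketch: the most recently recorded remote sites are the best first places to ask for the asset. The {type, site} action shape follows wiki journal convention; the helper name is hypothetical.

```javascript
// Collect candidate sites, most recent first, skipping duplicates.
function assetSources (journal) {
  const sites = []
  for (const action of [...journal].reverse()) {
    if (action.site && !sites.includes(action.site)) sites.push(action.site)
  }
  return sites
}
```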
A server could discard an asset when no page has a reference to the asset in story or journal. This could happen when a page is forked from an earlier revision.
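The safety check before discarding might be sketched as a scan of both story and journal for any remaining reference, with field names following the "asset" convention above.

```javascript
// True if the page still references the named asset anywhere.
function referencesAsset (page, name) {
  const inItem = (item) => item != null && [].concat(item.asset || []).includes(name)
  return (page.story || []).some(inItem) ||
         (page.journal || []).some((action) => inItem(action.item))
}
```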
A client-side plugin that cannot retrieve an asset should either fall back to alternative resources or report the failure in a class=caption field at the bottom of the rendered item.
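That failure path could be sketched as trying each candidate source in turn and reporting via caption markup when all fail; tryFetch stands in for whatever transport the plugin actually uses.

```javascript
// Try each source; on total failure return caption markup instead.
function resolveAsset (name, sources, tryFetch) {
  const tried = []
  for (const site of sources) {
    tried.push(site)
    const data = tryFetch(site, name)
    if (data != null) return { data }
  }
  const error = `<p class="caption">asset ${name} unavailable (tried ${tried.join(', ')})</p>`
  return { error }
}
```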
Consider how we might exploit DAT or IPFS for transport. One problem is retrieving an id in time to record it in an edit action.
Consider restartable transfers for huge files. Some systems open multiple channels.
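One plausible mechanism, assuming the server supports HTTP Range requests: split a known-size file into byte ranges, fetch each on its own channel, and refetch only the ranges whose transfers die.

```javascript
// Split a file of the given size into Range headers for n channels.
function ranges (size, channels) {
  const chunk = Math.ceil(size / channels)
  const out = []
  for (let start = 0; start < size; start += chunk) {
    out.push(`bytes=${start}-${Math.min(start + chunk, size) - 1}`)
  }
  return out
}
```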
Consider how the presence of large assets could be represented in sitemap.json and search correspondingly enhanced.
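One hypothetical shape: each sitemap.json entry gains an "assets" list, so search can match large files without fetching every page. The extra field is an assumption, not current sitemap format.

```javascript
// Build a sitemap entry that lists the asset names a page carries.
function sitemapEntry (page) {
  const assets = []
  for (const item of page.story || []) {
    for (const name of [].concat(item.asset || [])) assets.push(name)
  }
  return { slug: page.slug, title: page.title, date: page.date, assets }
}
```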
Consider how to 'push' an asset to a public server across a firewall. IPFS can do this but it is tricky.
Consider how asset revisions can propagate. My immediate application is distributing database dumps in CSV format. These could be revised daily.