We have often talked about schemes to keep the journal from growing too big. I suggest a better approach: handle the "Request Entity Too Large" error when it occurs and save a slim version of the page.
An acceptable response to the error would be to retry the fork with a simplified page small enough to succeed. We can simplify by purging all but the fork actions from the journal, and we can still meet our cc-attribution requirement by keeping only the most recent fork from each remote site.
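The slimming step above could be sketched as follows. This is a minimal, hypothetical sketch, assuming the page's journal is an array of action objects with `type`, `site`, and `date` fields; the function name `slimJournal` is an assumption, not an existing API:

```javascript
// Hypothetical sketch: keep only fork actions, and only the most recent
// fork per remote site, so cc-attribution survives while the journal shrinks.
function slimJournal(journal) {
  const latestBySite = new Map();
  for (const action of journal) {
    if (action.type !== 'fork') continue;
    // Assumption: forks carry a `site` field naming the remote origin;
    // treat a missing site as a local self-fork.
    const site = action.site || 'local';
    const prev = latestBySite.get(site);
    if (!prev || (action.date || 0) >= (prev.date || 0)) {
      latestBySite.set(site, action);
    }
  }
  // Preserve the surviving actions in their original journal order.
  const keep = new Set(latestBySite.values());
  return journal.filter((action) => keep.has(action));
}
```

On retry, the page would be resubmitted with `page.journal = slimJournal(page.journal)`, leaving the story untouched while the journal collapses to one attribution entry per source site.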
We could defend this approach by explaining that pages have a natural size limit, and that once a page exceeds it we must depend on our sources for historic details.