We are trying to incorporate FileUltimate into an Azure web app, connecting to a storage bucket on Amazon S3 (because FileUltimate doesn't support Azure Blob storage yet). We are encountering heavy memory usage when uploading large files, which leads to an out-of-memory exception.
The memory usage seems to be a result of chunking the large files, presumably because the chunks are held in memory until the entire file has been uploaded. (This is just our guess.)
We need to eliminate the risk of an out-of-memory exception when multiple users try to upload large files simultaneously.
Any of these approaches would be OK for us:
1. Limit the maximum upload size. I understand that html4 doesn't support chunking, but we need to use html5 as the upload method so users can drag and drop files for upload. Sticking with html5, we have tried setting the maxRequestLength value in web.config to 25 MB (see the config sketch after this list), but this seems to be bypassed when large files are chunked (because each individual chunk is smaller than 25 MB?), so it doesn't actually block users from uploading files larger than 25 MB when the upload method is html5.
2. Avoid holding large files in memory while they upload. Is it possible to write to the S3 bucket while a large file is uploading, instead of holding it in memory? (A rough sketch of what we have in mind also follows this list.)
3. Some other clever solution we haven't thought of...
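For reference on point 1, this is roughly the web.config we have tried. The maxAllowedContentLength entry is something we added while experimenting and may not be the right knob; as far as we can tell, both limits apply per HTTP request, which is presumably why the individual html5 chunks slip under them:

```xml
<!-- Our attempt at capping uploads at 25 MB.
     maxRequestLength is in KB, maxAllowedContentLength is in bytes.
     Both limits apply to each HTTP request, so with html5 chunked uploads
     each chunk passes the check even if the whole file is much larger. -->
<configuration>
  <system.web>
    <httpRuntime maxRequestLength="25600" />
  </system.web>
  <system.webServer>
    <security>
      <requestFiltering>
        <requestLimits maxAllowedContentLength="26214400" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
```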
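For point 2, the following is only a sketch of the idea, not FileUltimate code (we don't know whether FileUltimate exposes the incoming upload stream at all). It shows a plain ASP.NET handler passing the request body straight to S3 with the AWS SDK's TransferUtility; the bucket name, key, region and header name are placeholders:

```csharp
// Rough sketch of "write to the bucket while the file uploads".
// NOT FileUltimate code - just a plain ASP.NET handler using the AWS SDK for .NET
// to illustrate streaming instead of buffering the whole file.
// Bucket name, key, region and header name below are placeholders;
// credentials are assumed to come from config or the environment.
using System.Web;
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;

public class StreamingUploadHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        // Read the request body without ASP.NET buffering the whole file first.
        using (var requestStream = context.Request.GetBufferlessInputStream())
        using (var s3 = new AmazonS3Client(RegionEndpoint.EUWest1))
        {
            var transfer = new TransferUtility(s3);

            // TransferUtility performs a multipart upload for large streams,
            // so roughly one part at a time is held in memory rather than the file.
            transfer.Upload(new TransferUtilityUploadRequest
            {
                InputStream = requestStream,
                BucketName = "our-upload-bucket",                               // placeholder
                Key = context.Request.Headers["X-File-Name"] ?? "upload.bin",   // placeholder
                PartSize = 5 * 1024 * 1024                                      // 5 MB parts
            });
        }
    }
}
```

If FileUltimate could be pointed at something like this, memory per upload should stay around one part size instead of the whole file, but we don't know whether that is feasible with the control.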
Thanks in advance for your suggestions!