How to limit memory usage when uploading large files
Question asked by Henry Clough - 7/21/2016 at 7:32 PM
Answered
We are trying to incorporate FileUltimate into an Azure web app, connecting to a storage bucket on Amazon S3 (because FileUltimate doesn't support Azure Blob storage yet). We are encountering heavy memory usage when uploading large files, leading to an out-of-memory exception.
 
The memory usage seems to occur as a result of chunking the large files, presumably because the chunks are held in memory until the entire file is uploaded. (This is just our guess.)
 
We need to eliminate the risk of an out-of-memory exception when multiple users try to upload large files simultaneously.
 
Any of these approaches would be OK for us:
 
1. Limit the maximum upload size. I understand that html4 doesn't support chunking, but we need to use html5 as the upload method so users can drag and drop files for upload. Sticking with html5, we have tried setting the maxRequestLength value in web.config to 25 MB, but this seems to be bypassed when large files are chunked (because the individual chunks are smaller than 25 MB?), so it doesn't actually block users from uploading files larger than 25 MB when the upload method is html5. (A rough web.config sketch follows after this list.)
 
2. Avoid holding large files in memory while they upload. Is it possible to write to the storage bucket while a large file is uploading, instead of holding it in memory? (A rough streaming sketch also follows after this list.)
 
3. Some other clever solution we haven't thought of...
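 
Regarding approach 1, this is roughly what we tried in web.config. The values are illustrative only: maxRequestLength is given in kilobytes (so ~25 MB is 25600), while IIS's maxAllowedContentLength is given in bytes. As described above, a chunked html5 upload can still slip under these per-request limits because each chunk arrives as its own smaller request.
 
<configuration>
  <system.web>
    <!-- ASP.NET request limit, in kilobytes (~25 MB) -->
    <httpRuntime maxRequestLength="25600" />
  </system.web>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- IIS request limit, in bytes (~25 MB) -->
        <requestLimits maxAllowedContentLength="26214400" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>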
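 
And regarding approach 2, this is the kind of thing we have in mind, just as a sketch using the AWS SDK for .NET directly (not FileUltimate's API; the bucket name, key and region are placeholders): read the incoming stream one part at a time and push each part to S3 with a multipart upload, so only about 5 MB per upload is ever held in memory regardless of the file size.
 
using System.Collections.Generic;
using System.IO;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

// Sketch only: streams "source" to S3 one part at a time instead of
// buffering the entire file in server memory.
class S3StreamingSketch
{
    public static void StreamToS3(Stream source, string bucketName, string key)
    {
        const int partSize = 5 * 1024 * 1024;   // 5 MB, the S3 minimum part size
        var s3 = new AmazonS3Client(RegionEndpoint.USEast1);
        var buffer = new byte[partSize];
        var partETags = new List<PartETag>();

        var init = s3.InitiateMultipartUpload(new InitiateMultipartUploadRequest
        {
            BucketName = bucketName,
            Key = key
        });

        int partNumber = 1;
        while (true)
        {
            // Fill the buffer (Stream.Read may return fewer bytes than requested).
            int filled = 0;
            while (filled < partSize)
            {
                int n = source.Read(buffer, filled, partSize - filled);
                if (n == 0) break;               // end of the incoming stream
                filled += n;
            }
            if (filled == 0) break;

            using (var partStream = new MemoryStream(buffer, 0, filled))
            {
                var part = s3.UploadPart(new UploadPartRequest
                {
                    BucketName = bucketName,
                    Key = key,
                    UploadId = init.UploadId,
                    PartNumber = partNumber,
                    PartSize = filled,
                    InputStream = partStream
                });
                partETags.Add(new PartETag(partNumber, part.ETag));
            }
            partNumber++;
        }

        s3.CompleteMultipartUpload(new CompleteMultipartUploadRequest
        {
            BucketName = bucketName,
            Key = key,
            UploadId = init.UploadId,
            PartETags = partETags
        });
    }
}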
 
Thanks in advance for your suggestions!

2 Replies

Cem Alacayir Replied
Employee Post Marked As Answer
Yes, as of the current version, this memory issue only happens with the Amazon S3 file system; it will not happen when using the physical file system. When you upload to Amazon S3, the file is first uploaded to memory on the server and then uploaded to S3. This is the first iteration of the S3 feature and it has some shortcomings. For example, the progress bar may not be accurate (it reflects only the first upload to the server and not the second upload to the actual S3). So it's possible that you get "out of memory" errors when your server's memory is filled.
 
In the next release, we will optimize uploading to the S3 file system, i.e. we will make the browser upload directly to S3 and skip the server. This way, uploading to the S3 file system will be as fast as uploading to the physical file system and there will be no memory issues.
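 
For reference, the usual way to let a browser upload straight to S3 without the file bytes passing through the web server is a pre-signed PUT URL. A minimal sketch with the AWS SDK for .NET follows (the bucket, key and region are placeholders, and this is not necessarily how FileUltimate will implement it). With a direct upload like this, the browser's progress also reflects the single transfer to S3.
 
using System;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

// Sketch only: generates a short-lived URL that the browser can PUT the file to
// directly, so the upload never has to be buffered on the web server.
class PreSignedUploadSketch
{
    public static string CreateUploadUrl(string bucketName, string key)
    {
        var s3 = new AmazonS3Client(RegionEndpoint.USEast1);
        var request = new GetPreSignedUrlRequest
        {
            BucketName = bucketName,
            Key = key,
            Verb = HttpVerb.PUT,
            Expires = DateTime.UtcNow.AddMinutes(30)
        };
        return s3.GetPreSignedURL(request);   // hand this URL to the browser for a PUT request
    }
}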
 
We also plan to add a file system for Azure Blob storage in future versions (if not in the next version).
Henry Clough Replied
Thanks. In the meantime we have found a way to limit the maximum size of file that can be uploaded. We will look to remove the limit after the next release.
