Fix Azure blob storage race condition with size-based upload #665
Part of #664
Problem:
Multiple instances uploading the same blob with uploadStream() race when committing their block lists, producing "invalid block list" errors from Azure Storage.

Solution:
Replace uploadStream() with upload() for documents under 100 MB, so the whole blob is sent in a single request and there is no block list to commit. Fall back to uploadStream() for files over 100 MB.
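The size-based dispatch can be sketched roughly as follows. The interface is a minimal stand-in modeled on `@azure/storage-blob`'s BlockBlobClient (`upload()` and `uploadStream()` signatures); the threshold constant, helper name, and buffer/concurrency values are assumptions for illustration, not the PR's actual code.

```typescript
import { Readable } from "stream";

// 100 MB cutoff from the PR description (assumed exact value).
const SIZE_THRESHOLD_BYTES = 100 * 1024 * 1024;

// Minimal stand-in for the parts of BlockBlobClient this sketch uses,
// modeled on @azure/storage-blob's method signatures.
interface BlockBlobLike {
  // Single-request upload: the whole blob goes up in one call, so there
  // is no block list for concurrent writers to race on.
  upload(body: Buffer, contentLength: number): Promise<void>;
  // Chunked upload: stages blocks, then commits a block list. Two
  // instances writing the same blob can race on that commit.
  uploadStream(
    stream: Readable,
    bufferSize?: number,
    maxConcurrency?: number
  ): Promise<void>;
}

// Pick the upload path by size; returns which method was used.
async function uploadDocument(
  blob: BlockBlobLike,
  data: Buffer,
  threshold: number = SIZE_THRESHOLD_BYTES
): Promise<"upload" | "uploadStream"> {
  if (data.length < threshold) {
    await blob.upload(data, data.length);
    return "upload";
  }
  // Large files: fall back to the chunked path (4 MB buffers, 5-way
  // chunk parallelism within this one operation).
  await blob.uploadStream(Readable.from(data), 4 * 1024 * 1024, 5);
  return "uploadStream";
}
```

The threshold is a parameter so the cutoff can be tuned (or lowered in tests) without touching the dispatch logic.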
Note: the maxConcurrency parameter only controls chunk parallelism within a single upload operation; it does nothing to prevent races between multiple instances. For large files where data integrity is critical, consider Azure blob leases (pessimistic locking) as documented by Azure.
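The lease alternative mentioned above could look roughly like this. The LeaseLike interface is a stand-in modeled on `@azure/storage-blob`'s BlobLeaseClient (`acquireLease()`/`releaseLease()`); the helper name and the 60-second duration are assumptions for illustration, not code from this PR.

```typescript
// Stand-in for the parts of BlobLeaseClient this sketch uses.
interface LeaseLike {
  // Azure blob leases last 15-60 seconds, or -1 for infinite.
  acquireLease(durationSeconds: number): Promise<{ leaseId?: string }>;
  releaseLease(): Promise<void>;
}

// Run an upload under an exclusive lease. A second instance trying to
// acquire a lease on the same blob gets a conflict error from Azure and
// can back off, instead of corrupting the block list mid-commit.
async function withBlobLease<T>(
  lease: LeaseLike,
  fn: (leaseId: string) => Promise<T>
): Promise<T> {
  const { leaseId } = await lease.acquireLease(60);
  try {
    return await fn(leaseId ?? "");
  } finally {
    // Always release, even if the upload throws, so the blob is not
    // left locked for the rest of the lease duration.
    await lease.releaseLease();
  }
}
```

The try/finally keeps a failed upload from holding the lease until it expires, which would block other instances for the full lease duration.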