Does anyone know why 4096 is used as the default chunk size for reading data? According to https://opensource.apple.com/source/CommonCrypto/CommonCrypto-36064/Source/Digest/sha1.c (via http://stackoverflow.com/a/5387310/1011953), SHA-1 (and perhaps the other digests) is hardware accelerated when the data it is given is larger than 4096 bytes.
For example, here are the average times over 100 SHA-1 computations of a ~1.4 MB file on a 9.7" iPad Pro:
| chunk size (bytes) | time (sec) |
| --- | --- |
| 4096 | 0.00042184 |
| 8192 | 0.00039322 |
| 12,288 | 0.00035151 |
| 16,384 | 0.00033951 |
| 20,480 | 0.00035334 |
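For reference, this is a minimal sketch of the kind of chunked SHA-1 computation being timed, using CommonCrypto directly; the function name and the way the file is read are my own choices, not necessarily how the library under discussion does it:

```swift
import Foundation
import CommonCrypto

// Compute the SHA-1 of a file by reading it in fixed-size chunks.
// `chunkSize` is the variable being benchmarked in the table above.
func sha1(of url: URL, chunkSize: Int) throws -> Data {
    let handle = try FileHandle(forReadingFrom: url)
    defer { handle.closeFile() }

    var context = CC_SHA1_CTX()
    CC_SHA1_Init(&context)

    while true {
        let chunk = handle.readData(ofLength: chunkSize)
        if chunk.isEmpty { break }
        chunk.withUnsafeBytes { buffer in
            _ = CC_SHA1_Update(&context, buffer.baseAddress, CC_LONG(chunk.count))
        }
    }

    var digest = [UInt8](repeating: 0, count: Int(CC_SHA1_DIGEST_LENGTH))
    CC_SHA1_Final(&digest, &context)
    return Data(digest)
}
```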
Thank you.