"Paul" wrote
| The main differences are in how many operations per
| second can be handled (CPU GHz), how efficiently multiple
| cores can be used, how clean the system is running, etc.
|
| Hmmm.
|
|
https://github.com/pagespeed/zlib/bl...ter/minigzip.c
|
I don't understand why you linked that. It's just
a light wrapper using the zlib functions to gzip a file.
There's no compression code there at all.
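To show what I mean by "light wrapper": here's a rough
Python sketch of the same idea, using Python's stdlib
bindings to the very same zlib library. All the names here
are mine, not from minigzip.c, but the shape is the same:
no compression logic of its own, just calls into zlib.

```python
import gzip
import io

def gzip_bytes(data: bytes) -> bytes:
    """Compress a whole buffer in one go, the way a
    minigzip-style wrapper gzips a file: open a gzip
    stream, write everything, close. zlib does all
    the actual compression work underneath."""
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb") as f:
        f.write(data)
    return buf.getvalue()

original = b"hello, hello, hello " * 100
packed = gzip_bytes(original)
# Round-trips back to the original data.
assert gzip.decompress(packed) == original
```

That's the whole program, more or less. Nothing in it
could account for a 32-bit vs 64-bit speed difference.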
The only thing that might yield a clue to the
optimization would be the source code for zlib.
(Which I'm not especially anxious to study.)
The gzip code compresses a whole file into a
single data stream in one go, calling zlib functions
to do it. If zlib does something like work on one
chunk at a time, altering the chunk size on 32-bit
vs 64-bit builds, that might explain the difference.
But that would also be a very function-specific
optimization. Most software doesn't do operations
like that. Even when it does, most people don't need
to deal
with vastly gigantic compression jobs. If I need
to open a .gz file it's instant in my perception, and
there's no such thing as more instant.
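For what it's worth, the chunk-at-a-time idea is easy to
demonstrate with zlib's streaming API (here via Python's
stdlib bindings; the function name and sizes are my own
illustration, not anything from zlib's source). The chunk
size you feed deflate with can only affect throughput,
never the decompressed result:

```python
import zlib

def deflate_in_chunks(data: bytes, chunk_size: int) -> bytes:
    """Feed data to zlib's streaming deflate one chunk
    at a time, as a gzip-like tool would when reading
    a file through a fixed-size buffer."""
    comp = zlib.compressobj()
    out = []
    for i in range(0, len(data), chunk_size):
        out.append(comp.compress(data[i:i + chunk_size]))
    out.append(comp.flush())
    return b"".join(out)

data = bytes(range(256)) * 512
small_chunks = deflate_in_chunks(data, 4 * 1024)
large_chunks = deflate_in_chunks(data, 64 * 1024)
# Either way the stream decompresses to the same data.
assert zlib.decompress(small_chunks) == data
assert zlib.decompress(large_chunks) == data
```

So if the 32-bit and 64-bit builds differ only in buffer
sizing, you'd see it in timing, never in the output.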