With Enki (a Brain Training app for coders – if you want to try it and you need an invite, you can use my code: MAGNE985) I found a quick benchmark for Linux to check the speed of a single CPU core.
dd if=/dev/zero bs=1M count=1024 | md5sum
This line tells the CPU to calculate an MD5 hash of 1 GB of “zeroes” and measures how long it takes. For example, on the Pentium G3420 that I have in my office I get this:
dd if=/dev/zero bs=1M count=1024 | md5sum
1073741824 bytes (1.1 GB) copied, 2.10036 s, 511 MB/s
dd if=/dev/zero bs=10M count=2048 | md5sum
21474836480 bytes (21 GB) copied, 49.0278 s, 438 MB/s
while on an Intel Xeon W3520 (my web server) I get this:
dd if=/dev/zero bs=1M count=1024 | md5sum
1073741824 bytes (1.1 GB) copied, 2.79137 s, 385 MB/s
dd if=/dev/zero bs=10M count=2048 | md5sum
21474836480 bytes (21 GB) copied, 56.9042 s, 377 MB/s
Hey! It takes almost 8 seconds more! What? An expensive Xeon is slower than a cheaper Pentium?!
Yes, the server is outdated, but I did not expect such a big difference! It is time to change my web server!
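If you want to try the same test on your own machines, here is a minimal wrapper sketch; it assumes GNU dd (which prints its transfer summary, including the MB/s figure, on stderr) and simply repeats the benchmark a few times, since a single run can fluctuate a bit:

#!/bin/sh
# Repeat the dd | md5sum benchmark a few times and show dd's summary line.
# md5sum does the CPU work; its hash output is thrown away.
RUNS=3
for i in $(seq 1 $RUNS); do
    # dd's statistics go to stderr, so capture them in a temporary file
    # while the data itself is piped into md5sum.
    dd if=/dev/zero bs=1M count=1024 2>/tmp/dd_stats.$$ | md5sum > /dev/null
    tail -n 1 /tmp/dd_stats.$$
done
rm -f /tmp/dd_stats.$$

Comparing the MB/s figure it prints across machines gives a rough idea of their single-core speed on this kind of workload.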