>/dev/null, but do it fast

If a script or program produces lots of output, reducing that output will generally make it run faster. The most radical reduction often seen at the command line is a redirection to /dev/null. However, this can be done in several ways, with rather different results.

My reference script ;-) is:

$ while ((c<12345678)) ; do ((c++)) ; echo -n . ; done

If I run this on one of our newer servers over a remote SSH session, it first of all produces lots of futile dots, and it takes almost 2 minutes:

$ time bash -c "while ((c<12345678)) ; do ((c++)) ; echo -n . ; done"
(... lots of dots not shown ...)
real    1m58.311s
user    1m43.310s
sys     0m14.613s

Now, let's >/dev/null:

$ time bash -c "while ((c<12345678)) ; do ((c++)) ; echo -n . >/dev/null ; done"
real    2m52.775s
user    2m15.276s
sys     0m36.298s

That's hard to believe, isn't it? My guess is that the per-command redirection forces the shell to open /dev/null, point stdout at it, run the echo, and restore stdout again for every single iteration, more than 12 million times, but I don't know for sure. Anyway, there is room for improvement, of course:
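One way to make this re-opening visible (a small side demo of mine, using a temporary file instead of /dev/null) is the fact that a per-command `>` redirection truncates its target on every open:

```shell
# Side demo: a per-command redirection makes the shell open the target
# anew on every iteration. With ">" each open also truncates, so only
# the last line survives -- evidence that the file was opened three
# separate times, once per echo.
f=$(mktemp)
for i in 1 2 3; do
  echo "line $i" > "$f"
done
cat "$f"     # prints only: line 3
rm -f "$f"
```

With /dev/null the truncation is invisible, of course, but the open/close work per iteration is the same.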

$ time bash -c "while ((c<12345678)) ; do ((c++)) ; echo -n . ; done" >/dev/null
real    1m37.386s
user    1m33.458s
sys     0m3.668s
$ time bash -c "while ((c<12345678)) ; do ((c++)) ; echo -n . ; done >/dev/null"
real    1m36.513s
user    1m32.246s
sys     0m3.998s

(In the first of these two variants the outer, interactive shell performs the redirection; in the second, the inner bash does. Either way it happens only once, before the loop starts.) I ran the last two variations a few times to see whether the small difference persists; the numbers quoted above reflect a rough observed average. But try for yourself and let me know what you get.
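For completeness, a third way to redirect only once is to do it inside the script itself with the exec builtin. This is just a sketch of mine, not timed on the same server, and the N variable is my addition so it can be tried with a smaller count:

```shell
#!/usr/bin/env bash
# Sketch: redirect the shell's own stdout to /dev/null once, via exec.
# From that point on, every echo writes to /dev/null without any
# per-iteration redirection setup -- the same effect as putting
# >/dev/null after the whole loop.
N=${N:-100000}   # set N=12345678 to reproduce the loop count above
exec >/dev/null
c=0
while ((c<N)); do
  ((c++))
  echo -n .
done
```

I would expect timings close to the two whole-loop variants above, since the redirection again happens exactly once.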