How is latency calculated?


Calculation of latency depends on the OS/platform, but all platforms generally track it the same way:

Latency is the average time (in milliseconds) for each IO to travel down through the host stack and back. Because of this, the latency reported by DPACK often differs from the latency reported by the disk arrays themselves: hosts frequently queue IOs before issuing them to the SAN, and DPACK includes that time spent in the queue.

However, the latency reported by tools like perfmon should be identical to DPACK's, as both report the same counter.
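As a rough illustration of how an average per-IO latency is derived from counters like these, the sketch below computes it from two snapshots of hypothetical cumulative counters (total IOs completed and total time IOs spent in flight). The counter names and function are assumptions for illustration, not DPACK's or perfmon's actual internals.

```python
def avg_io_latency_ms(prev_ios, prev_time_ms, curr_ios, curr_time_ms):
    """Average per-IO latency over a sampling interval, in milliseconds.

    prev_ios / curr_ios: cumulative count of completed IOs at each sample.
    prev_time_ms / curr_time_ms: cumulative time (ms) IOs spent in flight,
    including any time queued in the host before being issued to the SAN.
    """
    io_delta = curr_ios - prev_ios
    if io_delta == 0:
        return 0.0  # no IOs completed in this interval
    return (curr_time_ms - prev_time_ms) / io_delta

# Example: 100 IOs completed while 500 ms of cumulative IO time accrued
print(avg_io_latency_ms(1000, 4000.0, 1100, 4500.0))  # 5.0 ms per IO
```

Because the cumulative time includes host-side queueing, any tool averaging over the same counter will report the same value, which is why DPACK and perfmon agree while the array's own numbers can differ.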
