On 7/24/07, Matthew Gillen <mgillen@bbn.com> wrote:
> Friedhelm Wolf wrote:
> > 6. I did some measurements and found that even if the latency for 15
> > consumers is different for every consumer (basically it looks like the
> > calls are processed serially and the last consumer has to wait the
> > longest to get an answer), the throughput remains the same. I'm not sure
> > if I understand it the right way, but shouldn't bad latency result in
> > reduced throughput?
>
> Not necessarily. Latency and throughput are often related, but not always.
> In this case, it looks like some consumers were always 'first in line' to
> get events, and thus had lower latencies (supposing latency is measured from
> publisher-event-generation-time to consumer-event-delivery-time).
OK, so that means the events in the tests are somehow sent serially, right?
And the reason why this does not affect throughput (at least not significantly)
is that the latency is relatively small (mean below 2500 us) compared to the
cycle time for sending events, which is 10 ms, so there is enough time to send
all packets before the next burst?
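To make that back-of-the-envelope reasoning more concrete, here is a small
stand-alone sketch (plain C++, not TAO code; the 300 us per-dispatch cost is
just an assumed number picked so the mean comes out near my measured value,
not something taken from the test):

#include <iostream>

int main()
{
  const int    consumers         = 15;      // consumers in my test
  const double per_dispatch_us   = 300.0;   // assumed cost of one serialized dispatch
  const double publish_period_us = 10000.0; // 10 ms cycle time of the supplier

  // Under purely serialized delivery, consumer i waits for i dispatches.
  const double mean_latency_us  = (consumers + 1) / 2.0 * per_dispatch_us;
  const double worst_latency_us = consumers * per_dispatch_us;

  std::cout << "mean latency:       " << mean_latency_us   << " us\n"
            << "worst-case latency: " << worst_latency_us  << " us\n"
            << "publish period:     " << publish_period_us << " us\n";

  if (worst_latency_us < publish_period_us)
    std::cout << "Even the last consumer is served before the next burst,\n"
                 "so per-consumer throughput still equals the publish rate.\n";
  else
    std::cout << "Delivery spills into the next cycle, so throughput degrades.\n";
  return 0;
}

At least under that simple model, the serialization would only start to show
up in the throughput numbers once 15 times the per-dispatch cost exceeds the
10 ms period.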
> What's kind of strange about your results is that the publisher had a
> slightly lower publish throughput (410.17 events/sec) than all the consumers'
> throughput (410.22 events/sec):

Which would mean that the publisher sends less data than is received by the
consumers?
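As a purely hypothetical sanity check on those two numbers (the event count of
41017 below is an assumed figure, not taken from the test output): if both
sides actually counted the same events, the 0.05 events/sec gap would
correspond to measurement windows that differ by only about 12 ms, rather than
to the consumers receiving more data than was published.

#include <iostream>

int main()
{
  const long   events    = 41017;   // assumed: same event count on both sides
  const double pub_rate  = 410.17;  // publisher throughput from the test (events/sec)
  const double cons_rate = 410.22;  // consumer throughput from the test (events/sec)

  // If both sides saw the same events, the rate difference implies only a
  // tiny difference in the measurement window, not "extra" data.
  const double pub_window_s  = events / pub_rate;
  const double cons_window_s = events / cons_rate;

  std::cout << "publisher window: " << pub_window_s  << " s\n"
            << "consumer window:  " << cons_window_s << " s\n"
            << "difference:       "
            << (pub_window_s - cons_window_s) * 1000.0 << " ms\n";
  return 0;
}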
Thanks for your comments. As I said, it is not clear to me how to interpret
the numbers, so your hints are very welcome.

Cheers,
Friedhelm