[email protected] wrote:
> We agree to disagree.
> The TPS as reported by Grinder is not actually "per second", but rather
> "per interval" (according to the Grinder documentation). And, the
> interval length is arbitrary. So, continuing the example I used, if I
> set the interval length to 100 sec, but run for only 60 sec (all other
> parameters remaining the same), then Grinder would (presumably) report 0
> TPS (because it didn't receive any reports). Which isn't a reasonable
I think that's somewhat tangential. It's also not true - the console
would report the TPS (using the number of requests completed divided by
100 seconds) at the end of the sample period.
> I do already, of course (and I'm sure everybody else does too), use
> Grinder to drive the server to peak capacity. And I agree that one
> can't interpolate from small-scales test to large-scale ones.
> But I disagree about whether the non-Grinder calculation is meaningful.
> Certainly, the Grinder TPS calculation is a clear concept. But the
> non-Grinder calculation is well-defined (meaningful), and does give
> interesting information.
> In fact, I would "expect" that as server(+network) capacity is
> approached, the Grinder number and non-Grinder number "should" converge
> to one another (modulo round-off errors), with the non-Grinder number
> always being larger than the Grinder number, due to Grinder
> But it's exactly that expectation that I don't trust, that is, I'd like
> to see the expectation verified by actual computation rather than mere
> conjecture. Perhaps it would help to think of it as a
> Grinder-inefficiency metric, or cross-check on Grinder's TPS
> calculations: The difference between non-Grinder TPS and Grinder TPS is
> Grinder "think-time". For simple tests, that think-time can be made
> almost arbitrarily small. But the script I've developed is pretty
> sophisticated at this point, and I'm wondering if its complexity is
> degrading the quality of my measurements.
The Grinder could record and report the ratio of time spent in
instrumented (wrapped) code to the total time. This efficiency ratio may
be of interest to the curious, but would add overhead to record and
report. I do object to reporting this data as a rate, that is as some
"TPS" value - such a figure is well defined, but I maintain it is not
meaningful.
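The proposed ratio is simple arithmetic; a one-line sketch (an illustration of the suggestion above, not an existing Grinder statistic, and the 45/60 figures are invented) might look like:

```python
# Proposed "efficiency ratio": time spent in instrumented (wrapped)
# code divided by total elapsed time. This is a sketch of the idea,
# not an existing Grinder statistic.

def efficiency_ratio(instrumented_seconds, total_seconds):
    return instrumented_seconds / float(total_seconds)

# Hypothetical example: 45s inside instrumented code over a 60s run.
print(efficiency_ratio(45.0, 60.0))  # 0.75
```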
Note: The Grinder already goes to some lengths not to include
post-processing time in its reported response times, and I would
expect this to be counted as "grinder time", along with time spent in
explicit sleeps. (E.g. check out the code beginning with
"threadContext.pauseClock()" in net.grinder.plugin.http.HTTPRequest.)
If you want to exclude your own processing time, you can "stop the
clock" yourself - you do need the patch I sent you a while back if you
want to do this from a script though - see the JavaDoc for
setReadResponseBody().
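For readers unfamiliar with the pattern, the "stop the clock" idea can be illustrated in plain Python (this is an illustration of the technique only, not the Grinder API or the patch mentioned above):

```python
import time

# Illustration of the "stop the clock" pattern (not the Grinder API):
# accumulate elapsed time only while the watch is running, so the
# script's own post-processing does not inflate the measured
# response time.

class StopWatch:
    def __init__(self):
        self.elapsed = 0.0
        self._started_at = None

    def start(self):
        self._started_at = time.time()

    def pause(self):
        if self._started_at is not None:
            self.elapsed += time.time() - self._started_at
            self._started_at = None

# Usage sketch:
watch = StopWatch()
watch.start()   # clock runs while the request is in flight
# ... issue the HTTP request here ...
watch.pause()   # stop the clock before parsing the response
# ... expensive response processing, excluded from the timing ...
watch.start()   # resume for the next request
watch.pause()
print("measured seconds:", watch.elapsed)
```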