(Got swapped-out, hence the lapse.)
Given that there are workarounds for the "curious" (and yes, I AM
curious, in addition to having a real need), there's no reason to press
for infrastructural support. (And note, though it was probably clear, I
was only probing for additional support, not knocking what already
exists.)
But I may need a little more info in order to implement your suggestion.
Let's take the specific example that prompted the query. (Beyond the
general complexity of the script I mentioned, I had this example in
mind.)
What I'm doing in the mainline of my script is, ultimately, primitive
HTTP calls to our Web Server. No problem there, thanks to HTTPClient.
The problem is that some users have requested a "custom" facility. That
is, they want to specify a "composite" operation, such as: "10 CREATEs;
followed by a combination of 100 READs & UPDATEs (of the 10 objects just
CREATEd); followed by 10 DELETEs (of those 10 objects)." I can do that,
by making a total of 120 (un-wrap()'d) calls of the primitives I've
already implemented. Namely, after some preprocessing to create a list
of operations specified by the user, I iterate through the list in a
function customTest(), which is called from doCUSTOMtest(), wherein
customTest() is Test.wrap()'d and invoked. In this scheme, the elapsed
time (latency) for the custom/composite test includes all the 120 calls,
as it should. So far, this is a well-known Grinder technique.
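A minimal sketch of that scheme (not the actual script: the primitive operations, the operation-list builder, and the `doCUSTOMtest` body are reconstructed from the description above, and `Test`/`wrap()` are stubbed so the sketch runs standalone outside The Grinder):

```python
# Sketch of the composite-test scheme described above.  In a real
# Grinder script the Test class would come from net.grinder.script;
# a minimal stand-in is used here so the sketch runs standalone.

class Test:
    def __init__(self, number, description):
        self.number = number
        self.description = description

    def wrap(self, fn):
        # The real Test.wrap() instruments fn so that its elapsed time
        # is recorded against this test; the stand-in just returns fn.
        return fn

# Hypothetical primitive operations (the real ones are HTTPClient calls).
def doCreate(i): return "CREATE %d" % i
def doRead(i):   return "READ %d" % i
def doUpdate(i): return "UPDATE %d" % i
def doDelete(i): return "DELETE %d" % i

PRIMITIVES = {"CREATE": doCreate, "READ": doRead,
              "UPDATE": doUpdate, "DELETE": doDelete}

def buildOperationList():
    # Preprocessing: 10 CREATEs, then 100 READs/UPDATEs of those same
    # 10 objects, then 10 DELETEs -- 120 operations in total.
    ops = [("CREATE", i) for i in range(10)]
    for n in range(100):
        ops.append(("READ" if n % 2 == 0 else "UPDATE", n % 10))
    ops.extend(("DELETE", i) for i in range(10))
    return ops

def customTest():
    # The mainline: iterate the preprocessed list, making the 120
    # un-wrap()'d primitive calls.
    return len([PRIMITIVES[name](i) for (name, i) in buildOperationList()])

# customTest is wrap()'d once, so the composite latency covers all 120 calls.
wrappedCustomTest = Test(100, "custom composite").wrap(customTest)

def doCUSTOMtest():
    return wrappedCustomTest()
```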
The thing is, as already mentioned, that the script needs to do a
non-trivial (or at least "curious") amount of pre/post-call processing
("think-time") before/after the actual 120 wire calls to the server, and
I don't want that think-time included in the latency recorded for the
test.
My first attempt at this was to setDelayReports(True), sandwich each
HTTPClient call between a couple of timestamp snapshots, add them all
up in an elapsedTime variable, and finally set time=elapsedTime in
the stats-for-last-test. Unfortunately, that doesn't work, because time
is a read-only attribute.
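The dead end can be shown in miniature. The `StatisticsForTest` stub below imitates the real Grinder statistics object in only the one respect that matters here, a `time` value with a getter but no setter:

```python
import time

class StatisticsForTest(object):
    # Stand-in that imitates only the relevant behaviour of
    # net.grinder.script.Statistics.StatisticsForTest: the recorded
    # elapsed time is exposed read-only (a getter with no setter).
    def __init__(self, elapsed):
        self._time = elapsed

    @property
    def time(self):
        return self._time

def timedCall(fn, *args):
    # Sandwich a call between a couple of timestamp snapshots.
    start = time.time()
    result = fn(*args)
    return result, time.time() - start

stats = StatisticsForTest(elapsed=5.0)   # pretend 5.0 was recorded

_, call1 = timedCall(sum, [1, 2, 3])     # stand-ins for HTTPClient calls
_, call2 = timedCall(max, [4, 5])
elapsedTime = call1 + call2              # the accumulated call time

attemptFailed = False
try:
    stats.time = elapsedTime             # the attempted "set time=elapsedTime"
except AttributeError:
    attemptFailed = True                 # read-only: the scheme dead-ends here
```

(If I remember the Statistics API right, the user statistics such as `userLong0` are settable via `setLong()`, unlike `time`, so recording the think-time as a separate statistic may be an avenue worth checking against the JavaDoc.)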
Hence the initial query.
Now, considering your suggestion, I'm wondering how
setReadResponseBody() figures into it? I am indeed using
pauseClock()/resumeClock() with streaming, but for the present problem
I'm not streaming, that is, not reading any response body. So how
exactly do these pieces fit together?
I'll make a conjecture: setReadResponseBody() actually has nothing to do
with "reading response body", but instead is just a general facility
that makes pauseClock()/resumeClock() work. How close am I? If this is
the case, then I guess I'll have to make a foray into encapsulating
the (free-standing) function customTest() inside a class, which isn't a
big deal, but I'd like to know if it's going to work before I delve into
it.
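Assuming the conjecture is roughly right, the encapsulation might look like the sketch below. To be clear, everything here is hypothetical: the `Clock` class merely imitates the pause/resume behavior the thread attributes to the patched pauseClock()/resumeClock(), and the sleeps stand in for think-time and wire calls:

```python
import time

class Clock:
    # Hypothetical stand-in for the script-level pauseClock()/resumeClock()
    # facility discussed in this thread; it accumulates wall-clock time
    # only while "running".
    def __init__(self):
        self.running = True
        self.recorded = 0.0
        self._mark = time.time()

    def pause(self):
        if self.running:
            self.recorded += time.time() - self._mark
            self.running = False

    def resume(self):
        if not self.running:
            self._mark = time.time()
            self.running = True

    def stop(self):
        self.pause()
        return self.recorded

class CustomTest:
    # The free-standing customTest() function moved into a class,
    # as conjectured above.
    def __init__(self, clock):
        self.clock = clock

    def think(self, seconds):
        # Pre/post-call processing whose time should NOT be recorded.
        self.clock.pause()
        time.sleep(seconds)
        self.clock.resume()

    def wireCall(self):
        time.sleep(0.01)      # stands in for an actual HTTP primitive

    def __call__(self):
        for _ in range(3):
            self.think(0.05)  # ~0.15 s of think-time in total, excluded
            self.wireCall()   # ~0.03 s of wire time in total, included

clock = Clock()
CustomTest(clock)()
recorded = clock.stop()       # covers the wire calls, not the think-time
```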
Thx, as always.
From: Philip Aston
Sent: Tuesday, May 19, 2009 2:06 PM
Subject: Re: [Grinder-use] What is TPS?
[email protected] wrote:
> We agree to disagree.
> The TPS as reported by Grinder is not actually "per second", but
> "per interval" (according to the Grinder documentation). And, the
> interval length is arbitrary. So, continuing the example I used, if I
> set the interval length to 100 sec, but run for only 60 sec (all other
> parameters remaining the same), then Grinder would (presumably) report
> 0 TPS (because it didn't receive any reports). Which isn't a reasonable
> result.
I think that's somewhat tangential. It's also not true - the console
would report the TPS (using the number of requests completed divided by
100 seconds) at the end of the sample period.
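A quick worked version of that arithmetic (the request count of 600 is invented for illustration):

```python
requests_completed = 600     # assumed count, for illustration only
sample_interval = 100.0      # seconds: the console's sample interval
run_time = 60.0              # seconds actually spent running

console_tps = requests_completed / sample_interval  # what the console reports
actual_tps = requests_completed / run_time          # "per actual second" figure

print(console_tps, actual_tps)   # 6.0 10.0
```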
> I do already, of course (and I'm sure everybody else does too), use
> Grinder to drive the server to peak capacity. And I agree that one
> can't interpolate from small-scale tests to large-scale ones.
> But I disagree about whether the non-Grinder calculation is meaningful.
> Certainly, the Grinder TPS calculation is a clear concept. But the
> non-Grinder calculation is well-defined (meaningful), and does give
> interesting information.
> In fact, I would "expect" that as server(+network) capacity is
> approached, the Grinder number and non-Grinder number "should" converge
> to one another (modulo round-off errors), with the non-Grinder number
> always being larger than the Grinder number, due to Grinder overhead.
> But it's exactly that expectation that I don't trust, that is, I'd like
> to see the expectation verified by actual computation rather than mere
> conjecture. Perhaps it would help to think of it as a
> Grinder-inefficiency metric, or cross-check on Grinder's TPS
> calculations: The difference between non-Grinder TPS and Grinder TPS is
> Grinder "think-time". For simple tests, that think-time can be made
> almost arbitrarily small. But the script I've developed is pretty
> sophisticated at this point, and I'm wondering if its complexity is
> degrading the quality of my measurements.
The Grinder could record and report the ratio of time spent in
instrumented (wrapped) code to the total time. This efficiency ratio may
be of interest to the curious, but would add overhead to record and
report. I do object to reporting this data as a rate, that is as some
"TPS" value - such a figure is well defined, but I maintain it is not
meaningful.
Note: The Grinder already goes to some lengths not to include
post-processing time in its reported response times (I would expect such
post-processing, as well as time spent in explicit sleeps, to count as
"grinder time"). E.g. check out the code beginning
"threadContext.pauseClock()" in net.grinder.plugin.http.HTTPRequest. If
you want to exclude your own processing time, you can "stop the clock"
yourself - you do need the patch I sent you a while back if you want to
do this from a script though - see the JavaDoc for
grinder-use mailing list