Faban test tool

Rod, try using the fabancli tool. There are a number of commands you can send with it, and calling the tool reveals its usage instructions.

Vincenzo Ferme: We currently do not have documentation, but the source code has Javadoc for all the provided functionality.
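Since calling fabancli with no arguments prints its usage, one low-effort way to keep that reference around when scripting runs is to capture it. This is only a sketch and assumes the fabancli script is on the PATH of the machine hosting the Faban master; the actual subcommands are whatever the printed usage lists for your installed version.

    import subprocess

    # Invoke fabancli with no arguments; per the thread, this prints its usage text.
    # Assumes the fabancli script is on the PATH (adjust to your Faban install path).
    result = subprocess.run(["fabancli"], capture_output=True, text=True)

    # Depending on the version the usage may go to stdout or stderr, so keep both.
    print(result.stdout or result.stderr)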

Thank you very much! One thing I was hoping to get, and I know this is probably an arbitrary thing to base any assumptions on, is at least some anecdotal idea of how much variation I should generally expect to see in the report summaries generated at the end of a run.
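One way to put a number on that variation is to pull the headline metric out of each run's summary report and compute basic statistics across identically configured runs. The sketch below is only illustrative: the output directory path is a placeholder, and the metric and passed element names are assumptions based on typical Faban summary.xml files, so inspect one of your own summaries and adjust the lookups if they differ.

    import statistics
    import xml.etree.ElementTree as ET
    from pathlib import Path

    def summarize(results_dir):
        """Collect the headline metric from each run's summary.xml and report the spread."""
        values = []
        for summary in Path(results_dir).glob("*/summary.xml"):
            root = ET.parse(summary).getroot()
            # Element names are assumptions; check your own summary.xml if nothing matches.
            metric = root.find(".//metric")
            passed = root.find(".//passed")
            if metric is not None:
                flag = "?" if passed is None else passed.text
                print(f"{summary.parent.name}: metric={metric.text} passed={flag}")
                values.append(float(metric.text))
        if len(values) >= 2:
            print(f"runs={len(values)} mean={statistics.mean(values):.2f} "
                  f"stdev={statistics.stdev(values):.2f}")

    # Example (path is a placeholder): summarize("/faban/output")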

I've unfortunately been unable to vary my testing much across different hardware up to this point, and what I'm seeing seems to fluctuate pretty wildly, so I'd like to understand whether I'm wielding this tool properly, whether I need to adjust my expectations considerably, or whether there may be some other issue such as a problem with my Docker version. I currently have a crontab which spins up the client container every 10 minutes, terminating after each run. It starts at a concurrent-connection count that has given me a pretty consistent "pass" rate on my infrastructure, and that count is incremented by 1 each time it runs. Ten minutes is, as I've confirmed, more than enough time to guarantee completion, and I also have some checks in place for that.
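As a rough illustration of that kind of schedule, a cron-invoked wrapper might look something like the sketch below. Everything in it is a placeholder rather than the actual setup described above: the image name and container arguments are assumptions standing in for whatever docker run command the CloudSuite web-serving instructions give, the starting count is not the baseline used in the thread, and the state file is just one way to carry the incrementing count between invocations.

    import subprocess
    from pathlib import Path

    STATE = Path("/var/tmp/faban_connections")   # hypothetical state file
    WEB_SERVER_IP = "10.0.0.2"                   # placeholder address of the SUT

    def next_connection_count(start=10):
        """Read, increment, and persist the concurrent-connection count.
        The starting value is a placeholder, not the baseline from the thread."""
        count = int(STATE.read_text()) + 1 if STATE.exists() else start
        STATE.write_text(str(count))
        return count

    def run_client(connections):
        """Launch one benchmark client run; the image name and arguments are assumptions,
        so substitute the exact command from the CloudSuite documentation."""
        subprocess.run(
            ["docker", "run", "--rm", "--net=host",
             "cloudsuite/web-serving:faban_client",   # assumed image name
             WEB_SERVER_IP, str(connections)],
            check=True,
        )

    if __name__ == "__main__":
        run_client(next_connection_count())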

This has all been working fine, and I also have some logging set up to confirm that things are executing and terminating as they should according to this schedule. I'm also using the default values suggested by Cloudsuite of 90 seconds for ramp-up, 60 seconds for steady state, and 60 seconds for ramp-down, which has seemed sufficient, or at least has not triggered any explicit errors from Faban.
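Those three intervals live in the Faban run configuration that the client submits. If you ever want to confirm what a given run actually used, something like the sketch below can pull them out of the run.xml; it assumes the rampUp, steadyState, and rampDown element names from the usual Faban runControl section, so adjust the tag names if your configuration differs.

    import xml.etree.ElementTree as ET

    def print_run_control(run_xml_path):
        """Print ramp-up / steady-state / ramp-down from a Faban run configuration.
        Matches on local tag names to sidestep the fa: namespace prefix."""
        root = ET.parse(run_xml_path).getroot()
        wanted = {"rampUp", "steadyState", "rampDown"}
        for elem in root.iter():
            tag = elem.tag.rsplit("}", 1)[-1]   # strip any XML namespace
            if tag in wanted:
                print(f"{tag}: {elem.text} seconds")

    # Example (path is a placeholder): print_run_control("run.xml")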

What I'm seeing is pretty random: I might see a fairly consistent pass rate across a range of connection counts one day, followed by a string of failures that continues right up to an oddly successful run at a much higher number of concurrent connections.

On other occasions, I might see one or two aborted runs (Java fatal exceptions thrown related to timing), followed by a consistent string of failures starting at a surprisingly low connection count and continuing until I manually intervene the following day, resetting the number of connections back to the baseline, at which point I start seeing "pass" results again. Am I doing something incorrectly? Is my expectation of fairly consistent results from one identically configured run to the next reasonable?

Again, this would seem to be a simple container-to-container Docker setup, following the instructions on Cloudsuite's site: start from an established "passing" baseline and increase the number of connections by one each run until I start hitting the failure mark, which would seem like the correct use case for this benchmark, if not the very reason for its existence.

Other reasons could be memory pressure, GC issues, insufficient ramp-up, and so on. I agree. A cloud environment will suffer variance to some degree, and you will see this in performance-sensitive benchmarking. If you want to do benchmarking, be certain your results are consistent, and keep your sanity, you'll need dedicated machines and a switch.

The master may or may not act as a load-driving agent itself, depending on the configuration. The system under test, or SUT, runs the server software that is being tested. In contrast, fhb drives the load from a single system and uses the command line interface for invocation; it does not make use of the Faban master process or the web user interface.

It also cannot control server processes on the SUT or collect any statistics, for that matter, and it is constrained to a single system acting as both the master and the driver agent. Now that you have a rough idea of how you want to use Faban and what a Faban rig looks like, let's dive straight into getting Faban installed.

Faban has two major ways to communicate with the agents: (1) by starting the agent daemons, and (2) by having Faban start the agents using a remote shell facility such as rsh or ssh. Note that combinations of agent daemons and remote shells are allowed. However, you cannot mix different remote shell facilities.

For example, mixing rsh and ssh in the same rig cannot be done. The first step in setting up the network is, of course, ensuring that you have a physical network connection to all systems in the rig.

The ping utility is a good tool to verify such connectivity. Make sure you can ping all network interfaces you may want to use, from all systems in the rig that use those interfaces (a short script for automating this check appears just below, after the note about the agent daemon). As mentioned earlier, combining agent daemons with rsh, ssh, or another remote shell is supported, but no more than one remote shell facility can be used. Setting up each mechanism is discussed below. Starting the agent as a daemon is very straightforward. Note that the agent daemon must be started on every system that will use this mode of communication, except the master itself.

DO NOT start the agent daemon on the master.
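Returning to the connectivity check above: if the rig has more than a handful of hosts and interfaces, a small loop saves some typing. The sketch below simply shells out to ping once per host; the host list is a placeholder, it should be run from every system in the rig (not just the master), and the -c/-W flags are Linux ping syntax, so adjust for other platforms.

    import subprocess

    # Placeholder list: include every hostname/interface used by any system in the rig.
    RIG_HOSTS = ["master", "driver1", "driver2", "sut"]

    def check_connectivity(hosts):
        """Ping each host once and report which ones are unreachable."""
        unreachable = []
        for host in hosts:
            result = subprocess.run(
                ["ping", "-c", "1", "-W", "2", host],   # one probe, 2 s timeout (Linux syntax)
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL,
            )
            status = "ok" if result.returncode == 0 else "UNREACHABLE"
            print(f"{host}: {status}")
            if result.returncode != 0:
                unreachable.append(host)
        return unreachable

    if __name__ == "__main__":
        check_connectivity(RIG_HOSTS)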


