If you ask a random developer how many concurrent users the application will handle, they will probably answer: "I don't know, I think it is limited by something in my application server, or maybe by the database." Then the developer will probably run JMeter, ab, or a similar utility, and give you an answer. Will you take it seriously? If you do, then yes, you're in trouble.
The reality of a production system is much more complex. Here is a list of some of the network-related parameters of a production system:
- Firewall at the gate - number of TCP connection states
- HTTP server - maximum number of child processes that can be created, or maximum number of handled connections
- Firewall between the HTTP and application servers - number of states
- Application server - thread pool size
- Application server - JDBC connection pool size
- Firewall - states
- Database - maximum number of connections handled
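One consequence of this chain is easy to state: the effective concurrency cap of the whole path is roughly the minimum of all these limits. A minimal sketch, with invented illustrative values (your real limits will differ):

```python
# Hypothetical limits along the request path -- the values below are
# invented for illustration, not recommendations.
limits = {
    "edge_firewall_tcp_states": 65536,
    "http_server_max_clients": 256,
    "inner_firewall_states": 4096,
    "app_server_thread_pool": 200,
    "jdbc_pool_size": 100,
    "db_max_connections": 150,
}

# The tightest stage caps the whole pipeline.
bottleneck = min(limits, key=limits.get)
print(bottleneck, limits[bottleneck])  # jdbc_pool_size 100
```

Raising any parameter other than the current bottleneck changes nothing -- which is why guessing values in isolation rarely works.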
Let's assume that you have a production system and the requirement is that the "system must handle 1000 unique users at the same time". What values will you choose for those parameters? Yes, 1000 seems fine. Will your database handle 1000 connections? Maybe. So maybe choose 100. How long will users' connections wait in the HTTP backlog? You don't know. You will probably ask about the firewalls. Firewalls are not as transparent as you may think. And, finally, what does "1000 users at the same time" really mean? Hahaha. Welcome to the desert of the real.
To tell you the truth, I don't know the answers either. Generally speaking, nobody knows. You may find heuristics on the Internet. The best way to choose the values is to discover them through a series of tests.
I will share with you a dirty secret about these network parameters. The parameters are not orthogonal; they are related to each other. Every parameter has a time aspect: a "timeout" or "aging". The system has its own dynamics. It is not a static series of pipes of different diameters; it's a series of swinging pipes :). You can't simply test each element separately. They are like team players. You have to find a balance.
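The time aspect can be made concrete with Little's law: the number of connections or states held at once is roughly the arrival rate multiplied by how long each one is held. A back-of-the-envelope sketch with assumed numbers:

```python
# Little's law: concurrent = arrival_rate * holding_time.
# All numbers below are assumptions for illustration.
arrival_rate = 200      # new connections per second
state_timeout = 30.0    # seconds a firewall keeps each state alive

concurrent_states = arrival_rate * state_timeout
print(concurrent_states)  # 6000.0 -- a firewall with a 4096-state table overflows
```

Shorten the timeout and the same traffic needs far fewer states; this is how the "swinging pipes" couple to each other.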
Testing is quite a simple process. Just follow the rules. At some steps you have to be creative and thoughtful; at others you have to work like a robot. It's better to work as a pair: one person is the operator; the other is the observer. Before the tests:
- Define the goal, the hypotheses to test, for example "the system must handle 1000 requests per second", "the system must survive the Digg effect", etc.
- Define the observable parameters, such as requests per second, CPU load, I/O throughput on storage, etc.
- Define the traffic and the series of requests, based upon the existing system: recording via proxy, statistical analysis, and so on. This step may be time-consuming.
- Write everything down in the report.
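The pre-test checklist above can be captured as a simple report skeleton that you fill in before the first session (the structure and field names here are my own invention, not a standard):

```python
# A minimal report skeleton -- structure and names are illustrative only.
report = {
    "goal": "system must handle 1000 requests per second",
    "observables": ["requests_per_second", "cpu_load", "io_throughput"],
    "parameters": {},   # current values of all tunables, gathered before testing
    "sessions": [],     # one entry per session: changed parameter, observations, thoughts
}

print(sorted(report))
```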
As I said, testing is simple:
- Gather all values of the parameters. Put them in the report.
- Perform a baseline session. Do not change any parameter.
- Observe the system. Watch, smell, hear :). And think at the same time. Put your thoughts into the report.
- Wait till the end of the test. If you are satisfied with the results, then you are finished. If not, go on.
- Pick one and only one parameter. Which one? I don't know. Just trust your gut. Change its value. Record the change in the report.
- Start next testing session.
- GOTO 3.
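The session loop above can be sketched as a one-parameter-at-a-time procedure; `run_session`, `satisfied`, and `pick_and_change` are placeholders for your load tool, your pass/fail criterion, and your gut feeling:

```python
def tune(params, run_session, satisfied, pick_and_change, max_sessions=20):
    """Repeat test sessions, changing exactly one parameter between them."""
    history = []
    for _ in range(max_sessions):
        result = run_session(dict(params))   # run one session with current values
        history.append((dict(params), result))
        if satisfied(result):                # happy with the results? finished.
            break
        pick_and_change(params)              # mutate ONE parameter in place
    return history

# Toy usage: pretend throughput is capped by the JDBC pool,
# and double the pool until we reach 1000 req/s.
params = {"jdbc_pool_size": 50}
history = tune(
    params,
    run_session=lambda p: min(p["jdbc_pool_size"] * 10, 1000),
    satisfied=lambda rps: rps >= 1000,
    pick_and_change=lambda p: p.update(jdbc_pool_size=p["jdbc_pool_size"] * 2),
)
print(len(history), history[-1][1])  # 2 1000
```

The `history` list is the report: every session keeps the parameter set it ran with and the result it produced.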
Testing may take a few hours or a few days. It depends. Do not try to find "ideal" parameters during a session. There is no such thing. A "just enough" approach is the perfect one. Trust me :).
Unfortunately, I can't tell you anything about tuning OneWebSQL. No, it's not secret knowledge that I want to make money on. OneWebSQL is simply not tunable. No, we didn't hide anything. Simply put, there is no need to tune it. It has no moving parts inside -- no cache, no threads, no extra resources. It uses a data source which you provide. That's all. That is pretty awesome.
What would you rather buy? A popular, fits-everyone car that you spend a lot of time on in the garage to make it a racer? Or a car which is designed to be fast?