Performance Testing for Public Service Web Application
Client Overview
A government organization developed a web application to streamline vehicle registration, driving license issuance, and agricultural services nationwide. To assess its performance under real-world conditions, they engaged us to conduct performance testing.
Client Requirement
Since the application delivers critical public services, the organization needed assurance that it could handle high traffic while maintaining consistent performance. They required a performance testing solution to evaluate the application's ability to manage concurrent users under varying load conditions. Additionally, they sought a configurable testing framework that would let them independently define key parameters, such as the number of concurrent users and the interval between user requests, so that the framework would remain usable for future rounds of testing.
Process Overview
Preparation Phase
Our process began with a comprehensive preparation phase. During this stage, we gained access to the application and extracted more than 5,000 API endpoints from it for performance testing. We then developed a detailed performance test plan, gathering critical inputs such as minimum transaction thresholds, target concurrency levels, and required test data. We also held extensive discussions with the client to define the reporting structure and ensure alignment with their expectations.
Scripting Phase
Apache JMeter was selected as the performance testing tool, and test scripts were developed to simulate concurrent users. Real-world usage was modeled by configuring thread groups and ramp-up periods to the client’s specifications. The framework was structured so the client could simulate various user scenarios and run future performance assessments, including tests against upgraded infrastructure.
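To illustrate this kind of parameterization, a JMeter thread group can read its settings from properties, e.g. ${__P(users,10)} for thread count and ${__P(rampup,100)} for the ramp-up period, which are then supplied on the command line with -J flags. The sketch below shows one way to wrap such a run; the plan name plan.jmx, the property names, and the run_load_test wrapper are illustrative assumptions, not the delivered framework:

```python
import subprocess

def run_load_test(users: int, rampup_seconds: int,
                  plan: str = "plan.jmx",
                  results: str = "results.jtl") -> None:
    """Run a JMeter test plan in non-GUI mode with configurable load."""
    subprocess.run(
        [
            "jmeter",
            "-n",               # non-GUI mode
            "-t", plan,         # test plan to execute
            "-l", results,      # raw results (JTL) output file
            f"-Jusers={users}",         # read in the plan via ${__P(users,10)}
            f"-Jrampup={rampup_seconds}",  # read via ${__P(rampup,100)}
        ],
        check=True,
    )

if __name__ == "__main__":
    # Dry run as described below: 10 users ramped up over 100 seconds
    run_load_test(users=10, rampup_seconds=100)
```

Exposing user count and ramp-up as properties rather than hard-coding them in the plan is what lets the client rerun the same scripts under new load profiles without editing the JMX file.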
Test Execution Phase
In the initial testing phase, we conducted a dry run with 10 concurrent users and a ramp-up time of 100 seconds. This helped validate the accuracy of the test scripts and identify any potential issues early in the process.
Following the dry run, load tests were executed in two phases:
Intermediate Load Test
Simulated 100 concurrent users with a ramp-up time of 60 seconds. During this test, system metrics such as CPU utilization, memory consumption, and response times were monitored to identify performance bottlenecks.
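System metrics can be captured in several ways; as one hypothetical illustration (assuming a sampling script runs on the application server and the third-party psutil package is available), CPU and memory utilization could be logged alongside the test like this:

```python
import csv
import time

import psutil  # third-party: pip install psutil

def sample_metrics(duration_s: int, out_path: str = "metrics.csv") -> None:
    """Sample host CPU and memory usage once per second for duration_s seconds."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["epoch", "cpu_percent", "mem_percent"])
        end = time.time() + duration_s
        while time.time() < end:
            # cpu_percent(interval=1) blocks for 1 s and returns
            # average utilization over that window
            writer.writerow([
                int(time.time()),
                psutil.cpu_percent(interval=1),
                psutil.virtual_memory().percent,
            ])
```

Correlating these samples with response-time spikes in the JMeter results is what turns raw numbers into identified bottlenecks.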
Final Load Test
As requested by the client, we simulated 1,000 concurrent users with a ramp-up time of 50 seconds.
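With the hypothetical parameterized wrapper sketched in the scripting phase, this run reduces to a single call:

```python
# Final load test: 1,000 concurrent users ramped up over 50 seconds
run_load_test(users=1000, rampup_seconds=50)
```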
The Final Outcome
Based on our performance testing, we found that:
> Over 3,000 requests were processed with response times under 500 milliseconds, which is considered excellent given the load.
> Approximately 1,000 requests had response times between 500 milliseconds and 1,500 milliseconds, which were deemed acceptable.
> Around 500 requests had response times exceeding 1,500 milliseconds, indicating potential performance bottlenecks.
> More than 200 requests failed due to system overload, resulting in HTTP 500 errors and timeouts.
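A breakdown like the one above can be derived from JMeter's raw results file (a JTL in CSV form). Below is a minimal sketch, assuming JMeter's default CSV output with its standard elapsed and success columns and a results file named results.jtl:

```python
import csv
from collections import Counter

def bucket_results(jtl_path: str = "results.jtl") -> Counter:
    """Bucket JMeter samples by response time (ms) and success flag."""
    buckets: Counter = Counter()
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["success"].lower() != "true":
                buckets["failed"] += 1            # HTTP 5xx, timeouts, etc.
            elif int(row["elapsed"]) < 500:
                buckets["under_500ms"] += 1
            elif int(row["elapsed"]) <= 1500:
                buckets["500ms_to_1500ms"] += 1
            else:
                buckets["over_1500ms"] += 1
    return buckets

if __name__ == "__main__":
    for bucket, count in bucket_results().items():
        print(f"{bucket}: {count}")
```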
We submitted a detailed test report to the client outlining these findings. Based on the results, the client updated the backend code and increased server capacity to accommodate up to 10,000 concurrent users. A subsequent test confirmed significant performance improvements.
Conclusion
As per the client’s needs, we delivered a performance testing framework, together with detailed test metrics, that enabled them to conduct performance tests independently. The scripts were designed to allow easy configuration of key parameters such as user count and ramp-up time, along with clear instructions for test data input.
The insights gained from the testing helped identify performance bottlenecks, prompting the client to increase server capacity. The delivered framework empowers the client to conduct ongoing performance evaluations as their user base grows, assess application capacity, identify areas for improvement, and ensure optimal system scalability and reliability.