The platform leverages Apache JMeter as its core engine while providing a unified web interface for managing, executing, and analyzing performance tests. It adopts a frontend-backend separation architecture using Vue.js and Spring Boot, with Lin-CMS supplying foundational user management and authentication features.
System Architecture
Users interact with the Vue-based frontend, which communicates via HTTP with the Spring Boot backend. Each Linux-based load generator runs a single JMeter instance alongside a lightweight agent. This agent maintains a persistent Socket.IO connection to the backend server to receive execution commands and report real-time status. Data is distributed across multiple storage systems:
- MySQL 5.7: Stores core business entities (projects, test cases, machines, tasks).
- MongoDB 4.2: Holds detailed test reports and historical aggregated statistics.
- InfluxDB 1.8: Captures high-frequency metrics during active tests for real-time visualization.
- MinIO: Manages all file assets including JMX scripts, CSV datasets, JAR dependencies, logs, and result archives.
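The agent side of this channel might look like the following minimal Java sketch, assuming the `io.socket:socket.io-client` library; the backend URL and the `execute`/`status` event names are illustrative, not the platform's documented protocol:

```java
import io.socket.client.IO;
import io.socket.client.Socket;

public class AgentChannel {
    public static void main(String[] args) throws Exception {
        // Persistent Socket.IO connection to the backend (URL is a placeholder).
        Socket socket = IO.socket("http://backend-host:5000");

        // React to execution commands pushed by the backend; the event name
        // "execute" is hypothetical.
        socket.on("execute", data -> {
            System.out.println("Received command: " + data[0]);
            // ... launch JMeter here, then report status back:
            socket.emit("status", "CONFIGURED");
        });

        socket.connect();
    }
}
```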
Core Workflow Lifecycle
Each performance test follows a four-phase lifecycle:
- Configuration: The system distributes required files from MinIO to all selected load generators. CSV datasets marked for splitting are partitioned evenly across machines, as sketched below. Agents confirm successful setup before proceeding.
- Execution: Upon confirmation from all agents, the backend triggers JMeter runs simultaneously across all nodes. Real-time metrics stream into InfluxDB.
- Collection: After completion, agents upload result files (JTL, logs) to MinIO. The backend processes these into structured reports stored in MongoDB.
- Cleanup: All load generators reset their environments. Any failure during the first three phases automatically triggers cleanup to maintain system hygiene.
Manual termination is supported at any stage before cleanup.
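A minimal Java sketch of that even partitioning, assuming round-robin distribution of data rows and a single header row (both assumptions; the platform's actual splitter is not shown in the source):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class CsvSplitter {
    /** Splits one CSV into near-equal shards, one per load generator. */
    public static void split(Path csv, Path outDir, int machines) throws IOException {
        List<String> lines = Files.readAllLines(csv);
        String header = lines.get(0); // assumes a single header row
        for (int m = 0; m < machines; m++) {
            StringBuilder shard = new StringBuilder(header).append('\n');
            // Round-robin: machine m takes data rows m+1, m+1+machines, ...
            for (int i = 1 + m; i < lines.size(); i += machines) {
                shard.append(lines.get(i)).append('\n');
            }
            Files.writeString(outDir.resolve("part-" + m + ".csv"), shard);
        }
    }
}
```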
Distributed Load Generation
Unlike native JMeter’s master-slave model—which requires complex setup, wastes master resources, and incurs network overhead—the platform uses its agent-server communication layer to orchestrate truly distributed execution. The backend directly coordinates all agents, eliminating the need for a dedicated master node.
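As a rough illustration of this direct coordination, the backend's broadcast to all connected agents could be expressed with a Socket.IO server library such as netty-socketio; the port and the `startTest` event name below are assumptions:

```java
import com.corundumstudio.socketio.Configuration;
import com.corundumstudio.socketio.SocketIOServer;

public class BackendOrchestrator {
    public static void main(String[] args) {
        Configuration config = new Configuration();
        config.setPort(9092); // illustrative port

        SocketIOServer server = new SocketIOServer(config);
        server.start();

        // When a test launches, one broadcast reaches every connected
        // agent at once -- no dedicated master node required.
        server.getBroadcastOperations().sendEvent("startTest", "task-123");
    }
}
```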
Load Generator Management
Each load generator must:
- Set `JMETER_HOME` as an environment variable.
- Run the platform’s agent service.
The agent periodically validates JMeter availability and reports its IP, version, and path to the backend. Online status is determined by matching the reported IP against registered machine records.
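In Java, the availability check could be as simple as the sketch below; the exact validation steps and report format are assumptions:

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class JmeterHealthCheck {
    /** Returns a status string the agent could report to the backend. */
    public static String check() {
        String home = System.getenv("JMETER_HOME"); // required env variable
        if (home == null) {
            return "offline: JMETER_HOME not set";
        }
        Path launcher = Path.of(home, "bin", "jmeter");
        if (!Files.isExecutable(launcher)) {
            return "offline: launcher missing at " + launcher;
        }
        return "online: " + home;
    }
}
```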
Test Case Management
Creation & Editing
Test cases are organized under projects. Users upload JMX files along with optional CSV data files (with split support) and custom JARs.
Debug Mode
A debug feature executes the test script with a single thread, displaying request/response details and JMeter logs—similar to JMeter’s View Results Tree listener.
Test Execution
Launch parameters include:
- Thread count and ramp-up duration
- Test duration
- Throughput control (see below)
- Selection of specific or auto-assigned load generators
- Log level and report sampling interval
- Toggle for real-time metric collection
Progress is visualized with status indicators showing the percentage of the test duration that has elapsed.
Dynamic Throughput Control
To align with QPS/TPS targets rather than fixed thread counts, the platform enables runtime adjustment of throughput limits using JMeter’s BeanShell server:
- Enable the BeanShell server in `jmeter.properties`:

  ```
  beanshell.server.port=9000
  beanshell.server.file=../extras/startup.bsh
  ```

- Deploy an update script (`update.bsh`) that modifies JMeter properties:

  ```
  import org.apache.jmeter.util.JMeterUtils;

  setprop("throughput", args[0]);
  ```

- From the platform UI, users input a new TPS value, which triggers:

  ```
  java -jar $JMETER_HOME/lib/bshclient.jar localhost 9000 update.bsh <new_value>
  ```
This allows on-the-fly pressure adjustments without stopping the test. Note that the change only takes effect if the test plan reads the property at runtime, typically through a Constant Throughput Timer whose target is an expression such as `${__P(throughput,100)}`.
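In backend code, that trigger might reduce to spawning the BeanShell client process, as in this sketch (how the command reaches each load generator is not specified in the source):

```java
import java.io.IOException;

public class ThroughputUpdater {
    /** Pushes a new throughput target to a running BeanShell server. */
    public static void update(String jmeterHome, int newTps)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder(
                "java", "-jar", jmeterHome + "/lib/bshclient.jar",
                "localhost", "9000", "update.bsh", String.valueOf(newTps))
                .inheritIO()
                .start();
        p.waitFor(); // bshclient exits once the script has run
    }
}
```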
Test Details & Logs
The detail view shows metadata (threads, duration, operators), file downloads (scripts, logs), and—upon completion—an aggregated report. Phase-level logs track each step’s progress per load generator, aiding diagnostics.
Real-Time Metrics
Using JMeter’s Backend Listener, metrics flow into InfluxDB. The frontend polls this data to display:
- Overall TPS and error trends
- Transaction-specific TPS and error breakdowns
- Error type distribution
Integration with Grafana is supported for advanced dashboards.
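As an example of the polling side, a query against InfluxDB 1.8 with the `influxdb-java` client might look like this; the `jmeter` database/measurement and the `transaction = 'all'` tag follow the Backend Listener's defaults, and the credentials are placeholders:

```java
import org.influxdb.InfluxDB;
import org.influxdb.InfluxDBFactory;
import org.influxdb.dto.Query;
import org.influxdb.dto.QueryResult;

public class MetricsPoller {
    public static void main(String[] args) {
        InfluxDB influx = InfluxDBFactory.connect("http://localhost:8086", "user", "pass");
        // Total sample count over the last 5 seconds, across all transactions.
        Query q = new Query(
                "SELECT SUM(\"count\") FROM \"jmeter\" "
                        + "WHERE \"transaction\" = 'all' AND time > now() - 5s",
                "jmeter");
        QueryResult result = influx.query(q);
        System.out.println(result);
        influx.close();
    }
}
```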
Test Results
Post-execution, five key charts are rendered:
- TPS over time
- Average response time
- Total throughput
- Response time percentiles (successful requests)
- Active threads
Additional insights include error type proportions and a top-5 error leaderboard. Full HTML reports can be downloaded.
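For reference, the percentile chart over successful requests could be computed with a standard nearest-rank method like the sketch below; the platform's exact formula is not given in the source:

```java
import java.util.Comparator;
import java.util.List;

public class Percentiles {
    /** Nearest-rank percentile of response times (ms) for successful samples. */
    public static long percentile(List<Long> successElapsedMs, double p) {
        if (successElapsedMs.isEmpty()) {
            return 0L;
        }
        List<Long> sorted = successElapsedMs.stream()
                .sorted(Comparator.naturalOrder())
                .toList();
        int rank = (int) Math.ceil(p / 100.0 * sorted.size()); // 1-based rank
        return sorted.get(Math.max(rank - 1, 0));
    }
}
```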
Test History & Analysis
The history module allows filtering by test ID, case name, time range, or outcome. Clicking a record navigates to its detailed results.
The analysis dashboard provides system-wide metrics:
- Project, case, and machine counts
- Total test runs, duration, and requests
Per-case metrics include average/90th percentile response times, throughput, and run frequency. Trend charts for response time, throughput, and error rate across multiple executions help validate performance improvements.
Deployment Options
Standard Deployment
- Install MySQL 5.7, MongoDB 4.2, InfluxDB 1.8, and MinIO (with public bucket policy).
- Build and deploy the Spring Boot backend (`api/`), configuring `socket.server.enable=true` for the server or `socket.client.enable=true` plus `serverUrl` for agents.
- Build the Vue frontend (`web/`) using Node v12.13.0 (`npm run build`).
- Serve the frontend via Nginx using the provided configuration.
Containerized Deployment
- Build the backend JAR (`mvn clean package`) and the frontend dist (`npm run build`).
- Adjust `docker-compose.yaml` to reflect the actual MinIO, InfluxDB, and JMeter host paths.
- Launch with `docker-compose up -d`.