Setting Up a Test Environment
A test environment (TE) encompasses the hardware, software, and network configurations required to execute tests on a system under test. While developers or DevOps teams typically handle deployment, testers often trigger it—especially in regulated workflows where developers lack direct deployment permissions.
For mobile applications:
- Android: Testers receive an .apk built against test backend services and databases. This package is not published to public stores such as Huawei AppGallery or Xiaomi Store.
- iOS: Test devices must be registered via their UDID with the developer's provisioning profile. Once enrolled, users can install and trust the test build.
For web applications, deployment is frequently automated through CI/CD tools like Jenkins. A single click may trigger a build, deploy to servers (e.g., on Alibaba Cloud or Tencent Cloud), and launch the app via web servers such as Nginx, Tomcat, or IIS.
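As a sketch of what that single click does under the hood: Jenkins exposes a remote endpoint for triggering builds, which can be called programmatically. The server URL, job name, and credentials below are hypothetical placeholders; the snippet prepares the request without sending it.

```python
import requests

# Hypothetical values -- substitute your own Jenkins server, job, and API token.
JENKINS_URL = "https://jenkins.example.test"
JOB_NAME = "webapp-deploy"

def build_trigger_request(base_url: str, job: str, user: str, token: str):
    """Prepare (but do not send) the POST that asks Jenkins to start a build."""
    req = requests.Request(
        "POST",
        f"{base_url}/job/{job}/build",
        auth=(user, token),  # Jenkins accepts user + API token as HTTP Basic auth
    )
    return req.prepare()

prepared = build_trigger_request(JENKINS_URL, JOB_NAME, "tester", "secret-token")
print(prepared.method, prepared.url)
```

Sending the prepared request through a `requests.Session` would start the build; a real setup may also require a CSRF crumb depending on the Jenkins configuration.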
Test Environment Components
- Mobile (e.g., using Kuchuan for distribution):
- Hardware: Physical devices, tablets, or emulators
- Software: Android/iOS OS versions
- Network: Wi-Fi, 4G, or 5G
- Web Systems:
- Hardware: Desktop or laptop computers
- Software: OS (Windows/Linux/macOS), browsers (Chrome, Firefox)
- Network: Wired or wireless connections
Principles for environment setup include mimicking production as closely as possible and prioritizing the platform and browser combinations most widely used by the target audience.
Project and Test Management Tools
Teams use platforms like Zentao, Jira, or Teambition to manage requirements, test cases, and bug tracking. Some organizations build custom internal tools for this purpose.
Automation Testing Stacks
- API Testing: Python with requests
- Mobile UI Testing: Python with uiautomator2 (preferred over Appium for Android)
- Web UI Testing: Python with Selenium
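To make the API-testing layer concrete, here is a minimal sketch of a requests-based check. The base URL and the /users endpoint are hypothetical; the pattern — assert on the status code, then on the response contract — is the typical shape of such a test.

```python
import requests

BASE_URL = "https://api.example.test"  # hypothetical test-environment endpoint

def user_url(user_id: int) -> str:
    """Build the endpoint URL for a single (hypothetical) user resource."""
    return f"{BASE_URL}/users/{user_id}"

def check_user(session: requests.Session, user_id: int) -> dict:
    """A typical API-level check: status code first, then the response body."""
    resp = session.get(user_url(user_id), timeout=5)
    assert resp.status_code == 200, f"unexpected status {resp.status_code}"
    data = resp.json()
    assert data.get("id") == user_id, "response body does not echo the requested id"
    return data
```

In practice such checks are collected into a test runner like pytest, with one session shared across tests.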
Root Causes of Bugs
Bugs arise from multiple sources:
- Software complexity: Intricate architecture or dependencies
- Team dynamics: Poor requirement communication or skill gaps
- Technical issues: Logic errors, boundary conditions (e.g., array index out of bounds), data type mismatches, or unhandled null/zero values
- Process deficiencies: Incomplete documentation or flawed project workflows
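The technical issues above often look as trivial as the sketch below — two illustrative functions (not from the text) showing an empty-input boundary and an index-out-of-bounds guard:

```python
def average(values):
    """Without the guard, average([]) raises ZeroDivisionError --
    a classic unhandled empty-input boundary condition."""
    if not values:  # guard the boundary explicitly
        return 0.0
    return sum(values) / len(values)

def safe_last(items):
    """Without the guard, items[-1] on an empty list raises IndexError
    (the list analogue of an array index out of bounds)."""
    return items[-1] if items else None
```

Each bug here is a single missing check; that is precisely why boundary values deserve their own test cases.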
When analyzing bugs, testers often collaborate with developers post-fix to understand root causes. Over time, pattern recognition helps anticipate common failure points.
Backend examples:
- Unhandled edge cases (e.g., empty input, invalid types)
- Database query failures or incorrect data serialization
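A common instance of incorrect data serialization is a backend that passes a datetime straight to the JSON encoder, which raises TypeError. The record below is an invented example; the fix is a serialization hook:

```python
import json
from datetime import datetime

def serialize(record: dict) -> str:
    """json.dumps cannot encode datetime values by default; the `default`
    hook converts them to ISO-8601 strings instead of crashing."""
    return json.dumps(
        record,
        default=lambda o: o.isoformat() if isinstance(o, datetime) else str(o),
    )

row = {"id": 7, "created_at": datetime(2024, 5, 1, 12, 0)}
print(serialize(row))
```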
Frontend examples:
- Incorrect API invocation
- UI not reflecting updated data (e.g., new post not appearing after submission)
Validating a Defect
Never rely solely on personal assumptions. Use:
- Official documents (requirements specs, design docs, user manuals)
- Industry standards or competitor benchmarks
- Clarification from product managers or stakeholders
Bug Report Structure
A well-documented bug includes:
- ID: Auto-generated by the tracking system
- Title: Concise and descriptive
- Type: Code error, design flaw, UI issue, etc.
- Severity: Critical, major, minor, or suggestion
- Priority: Immediate, high, normal, low
- Status: New, resolved, reopened, closed, etc.
- Module: Affected feature area
- Version: Build or release number
- Steps to Reproduce: Clear actions, expected vs. actual results
- Reporter & Date: Usually auto-filled
- Attachments: Screenshots, logs, or screen recordings
Severity Levels
- Critical (Level 1): System crash, data loss, financial miscalculation, complete feature failure
- Major (Level 2): Security flaws (e.g., plaintext passwords), secondary feature loss, rare crash scenarios
- Minor (Level 3): Cosmetic issues, slow response, incorrect but non-breaking data display
- Suggestion (Level 4): UX improvements or non-essential enhancements
Priority Levels
Indicate urgency: Immediate → High → Normal → Low
Bug Lifecycle States
- New/Open: Reported and awaiting triage
- Resolved/Fixed: Developer claims fix; pending verification
- Reopened: Issue persists after fix
- Closed: Verified as resolved
- Won’t Fix: Valid but intentionally not addressed
- Not a Bug (NAB): Misunderstanding or changed requirements
- Duplicate: Already reported
- Later: Deferred to future release
- Needs More Info: Cannot reproduce; requires additional details
Handling Intermittent Bugs
Non-reproducible issues should still be logged internally. Testers should monitor for patterns and gather evidence (logs, videos). Once a reliable reproduction path is found, formally report the defect. Never ignore sporadic failures—they may indicate deeper systemic issues.
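One lightweight way to monitor for patterns is to tally error types across the logs gathered from sporadic failures; a spike in one bucket is often the first hint of a reliable reproduction path. The log format and regex below are illustrative assumptions:

```python
import re
from collections import Counter

# Assumed log convention: lines contain "ERROR <type> ..."
ERROR_RE = re.compile(r"ERROR\s+(\w+)")

def error_histogram(log_lines):
    """Count error types across collected logs so recurring failure
    modes behind intermittent bugs become visible."""
    counts = Counter()
    for line in log_lines:
        match = ERROR_RE.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts
```

Running this over logs attached to several "cannot reproduce" reports can reveal that, say, most of them share the same timeout error.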