Testing the Grant Management System
You are going to test a real system, a grant application and approval system, just like a professional tester would. This page explains what you are doing, in what order, and why.
Testing is how we find out whether a system actually does what it is supposed to do, and whether it is safe, accessible, and ready for real users. A tester does not just check if things work. They deliberately try to break things, push boundaries, and ask "what if?"
Good testing finds problems before real users do. Every bug you find today is a problem that will not reach a live system.
Not every part needs the live system. The parts marked "Concepts only" in the table below are paper-based practice to build your testing skills; the rest is hands-on testing of something real.
| Group | Testing type | The big question | Live system? |
|---|---|---|---|
| A | Functional & Integration | Does it work? Do the parts talk to each other? | ✅ Yes |
| B | Accessibility & Usability | Can everyone use it? Is it easy to navigate? | Concepts only |
| C | Performance & Security | Does it handle pressure? Is data safe? | ✅ Yes (security) |
| D | Operational Readiness | Are the teams ready? Is support in place? | Concepts only |
| E | Regression | When something changes, what else breaks? | ✅ Yes |
Testing: why, what, and how
Testing is not just about checking if a feature works. A robust test strategy covers five dimensions. Each group will become the expert in one area and present their findings back to the class.
| Field | What it means | Example |
|---|---|---|
| Test ID | Unique reference for tracking | FUNC-001 |
| Story reference | Which user story this tests | As an applicant, I want to submit… |
| Preconditions | What must be true before the test runs | Applicant is logged in. Form is complete. |
| Test steps | Exactly what the tester does | 1. Click Submit. 2. Observe response. |
| Expected result | What should happen if the system works | Confirmation screen shown. Email sent. |
| Actual result | What actually happened (filled in when running) | (filled during test execution) |
| Pass / Fail / Blocked | Verdict based on expected vs actual | Pass |
Group A: Functional & Integration
Pre-test setup
Before running a single test, your group must agree what you are testing, under what conditions, and with what data. Complete this section together before moving to scenarios.
Test scenarios
You have three scenarios to work through. Write one from scratch, complete one, and fix one broken scenario.
The most important scenario: a valid, complete application submitted successfully. Write every field using the template below.
| Field | Your answer |
|---|---|
| Test ID | |
| Story reference | |
| Preconditions | |
| Test steps | |
| Expected result | |
An applicant tries to submit without filling in all mandatory fields. Complete the missing parts.
| Field | Content |
|---|---|
| Test ID | FUNC-002 |
| Story reference | As an applicant, I want to submit a completed grant application so that I am considered for funding. |
| Preconditions | Complete this: What state must the form be in? Which field(s) are left empty? |
| Test steps | 1. Log in as test applicant. 2. Navigate to draft application FUNC-TEST-002. 3. Leave the "Project description" field empty. 4. Click Submit application. |
| Expected result | Complete this: What should the system do? What should the applicant see? Should an email be sent? |
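The behaviour FUNC-002 is probing can be sketched as a validation rule. A minimal illustration in Python, assuming hypothetical field names and error wording; the real form's mandatory fields and messages will differ:

```python
# Hypothetical mandatory fields for the submission form
MANDATORY_FIELDS = ["organisation_name", "project_description", "amount_requested"]

def validate_submission(form: dict) -> list[str]:
    """Return one error message per empty mandatory field.
    An empty list means the submission may proceed."""
    errors = []
    for name in MANDATORY_FIELDS:
        if not form.get(name, "").strip():
            errors.append(f"Enter a value for '{name}' before submitting")
    return errors

# FUNC-002: "Project description" left empty, so submission must be rejected
draft = {
    "organisation_name": "Test Org",
    "project_description": "",        # deliberately empty, as in step 3
    "amount_requested": "5000",
}
errors = validate_submission(draft)
print(errors)  # exactly one error, naming project_description
```

The test passes only if the system rejects the draft, names the missing field, and does not send a confirmation email.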
This scenario tests the integration between the submission form and the eligibility rules engine. It contains three deliberate errors. Find them and correct them in the boxes below.
| Field | As written (find the errors) |
|---|---|
| Test ID | FUNC-003 |
| Preconditions | Applicant is not logged in. Application is incomplete. Write the corrected precondition: |
| Test steps | 1. Log in as test applicant (ineligible org type). 2. Complete all form fields. 3. Click Submit application. |
| Expected result | Application is submitted and confirmed. Eligibility check does not run. Applicant receives approval email immediately. Write the corrected expected result: |
| Integration check | No need to check the eligibility rules engine β that is a separate system. Why is this wrong? Write the correct statement: |
Pass / Fail verdicts
Read each described test outcome. Decide whether it is a Pass, Fail, or Blocked, and why.
Debrief β present back to the class
Use these points to guide your 3-minute presentation to the class.
Group B: Accessibility & Usability
Pre-test setup
Accessibility testing checks that all users, including those with disabilities, can use the system. GOV.UK services must meet WCAG 2.1 AA as a minimum. Usability testing checks whether real users can complete tasks easily. Set up your approach before starting.
Test scenarios
Some users cannot use a mouse and rely entirely on keyboard navigation. Write a scenario that tests whether the submission form is fully keyboard accessible.
| Field | Your answer |
|---|---|
| Test ID | |
| WCAG criterion | |
| Preconditions | |
| Test steps | |
| Expected result | |
GOV.UK standards require error messages to be specific, written in plain English, and explain how to fix the problem. Complete this scenario.
| Field | Content |
|---|---|
| Test ID | ACC-002 |
| WCAG criterion | 3.3.1 Error Identification: errors described in text; 3.3.3 Error Suggestion: suggestions provided |
| Preconditions | Submission form open. All mandatory fields empty. |
| Test steps | 1. Click Submit without filling in any fields. 2. Read every error message displayed. 3. Compare against GOV.UK error message guidance. |
| Expected result | Complete this: What should each error message do? Give an example of a good error message vs a bad one for the "Organisation name" field. |
| GOV.UK standard | What does the GOV.UK Design System say about error messages? (Think: be specific, say what went wrong, say how to fix it.) |
This scenario tests colour contrast for users with visual impairments. It contains three errors. Find and fix them.
| Field | As written |
|---|---|
| WCAG criterion | 1.4.3 Contrast: minimum ratio 3:1 for normal text. Fix: What is the correct minimum contrast ratio for normal text under WCAG 2.1 AA? |
| Test steps | 1. Use a colour contrast analyser tool on every text element. 2. Check submit button text against button background. 3. Check placeholder text in form fields. |
| Expected result | All text elements pass 3:1 contrast ratio. Placeholder text is exempt from contrast requirements. Fix the expected result, as two things are wrong: |
| Scope | Only check the Submit button. Other elements are the designer's responsibility. Why is this wrong? Who is responsible for accessibility across the whole screen? |
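The contrast check a colour contrast analyser performs is well defined. WCAG 2.1 computes the ratio as (L1 + 0.05) / (L2 + 0.05), where L1 and L2 are the relative luminances of the lighter and darker colours. A minimal Python sketch of that calculation; the colour values below are illustrative, not taken from the real service:

```python
def relative_luminance(hex_colour: str) -> float:
    """Relative luminance of an sRGB colour, per the WCAG 2.1 definition."""
    r, g, b = (int(hex_colour.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))

    def linearise(c: float) -> float:
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    return 0.2126 * linearise(r) + 0.7152 * linearise(g) + 0.0722 * linearise(b)

def contrast_ratio(fg: str, bg: str) -> float:
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible ratio, 21:1
print(round(contrast_ratio("#000000", "#ffffff"), 1))  # 21.0

# WCAG 2.1 AA: normal text needs at least 4.5:1; large text at least 3:1
assert contrast_ratio("#000000", "#ffffff") >= 4.5
```

Running this over every foreground/background pair on a screen is essentially what the analyser tool in the test steps does for you.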
Pass / Fail verdicts
Debrief
Group C: Performance & Security
Pre-test setup
Performance testing checks the system behaves correctly under load. Security testing checks that data is protected and access is controlled. These are often called non-functional requirements, but they are just as important as functional ones.
Test scenarios
The submission deadline is midnight. Based on previous years, 400 applicants typically submit in the final hour. Write a performance test scenario for this.
| Field | Your answer |
|---|---|
| Test ID | |
| Risk being tested | |
| Preconditions | |
| Test steps | |
| Expected result | |
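The final-hour surge can be rehearsed before deadline night. A minimal sketch using a thread pool to fire concurrent submissions at a stubbed endpoint; in a real performance test, `submit_application` would call the live service and the pass thresholds would come from your agreed expected result:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def submit_application(applicant_id: int) -> tuple[int, float]:
    """Stub for the real submission call; returns (status_code, latency_s)."""
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for network and server processing time
    return 200, time.perf_counter() - start

# Simulate 400 applicants submitting in a burst (compressed from an hour)
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(submit_application, range(400)))

statuses = [status for status, _ in results]
latencies = sorted(latency for _, latency in results)
p95 = latencies[int(len(latencies) * 0.95)]

print(f"success rate: {statuses.count(200) / len(statuses):.0%}, p95 latency: {p95:.3f}s")
```

A verdict needs numbers agreed in advance, for example "every submission succeeds and 95% complete within 3 seconds", rather than a vague "it felt fast enough".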
Can Applicant A access Applicant B's form? This is a critical security test. Complete the missing fields.
| Field | Content |
|---|---|
| Test ID | SEC-001 |
| Risk | Horizontal privilege escalation: one applicant accessing another applicant's data. |
| Preconditions | What two test accounts do you need? What state must their applications be in? |
| Test steps | 1. Log in as Applicant A. 2. Note the URL of Applicant A's application (e.g. /applications/123). 3. Manually change the URL to Applicant B's reference (e.g. /applications/124). 4. Observe the response. |
| Expected result | What should happen? What should the system return? Should this be logged? |
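The access-control rule behind SEC-001 can be sketched in code. A minimal illustration with a hypothetical in-memory ownership check; the real system's data model will differ, but the principle is the same: deny the request when the requester does not own the application, and log the attempt:

```python
# Hypothetical store: application reference -> owning applicant
APPLICATIONS = {"123": "applicant_a", "124": "applicant_b"}
access_log: list[str] = []

def get_application(requester: str, ref: str) -> int:
    """Return an HTTP-style status for GET /applications/<ref>."""
    owner = APPLICATIONS.get(ref)
    if owner is None or owner != requester:
        # Responding 404 avoids confirming that the reference even exists
        access_log.append(f"DENIED {requester} -> /applications/{ref}")
        return 404
    return 200

# SEC-001: Applicant A tampers with the URL to reach Applicant B's record
assert get_application("applicant_a", "123") == 200  # own application: allowed
assert get_application("applicant_a", "124") == 404  # B's application: denied
print(access_log)  # the denied attempt is recorded for the security team
```

Whether the correct denial response is 403 or 404 is a design decision worth recording in the expected result; what must never happen is a 200 with Applicant B's data.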
This scenario tests whether a reviewer can access the grants officer's approval decision screen. It contains three errors.
| Field | As written |
|---|---|
| Preconditions | Log in as grants officer. Application REF-001 is in draft state. Fix: Who should be logged in? What state should the application be in? |
| Test steps | 1. Log in as a reviewer. 2. Navigate directly to the approval decision URL. 3. Attempt to click Approve. |
| Expected result | The reviewer sees the approval screen and can click Approve. The system processes the approval. Fix: What should actually happen when a reviewer tries to access the approval screen? |
| Security principle | This only matters if the reviewer knows the URL. Security through obscurity is sufficient. Why is "security through obscurity" not acceptable? What principle applies here? |
Pass / Fail verdicts
Debrief
Group D: Operational Readiness
Pre-test setup
Operational readiness testing asks: even if the system works perfectly, are the people and processes ready? Go-live is not just a technical event; it is an organisational one. Support must be trained. Monitoring must be in place. A rollback plan must exist.
Readiness scenarios
Write a go-live readiness checklist for the grant management system. Include at least 8 items across people, process, and technology. Tick the ones you believe are ready based on what you know about the system.
At 11:45pm on deadline night, the submission endpoint returns 503 errors. Complete this incident response scenario.
| Field | Content |
|---|---|
| Incident trigger | Monitoring alert fires: submission endpoint error rate exceeds 5% for 3 consecutive minutes. |
| Immediate action | What is the first thing the on-call engineer does? Who do they contact? |
| Applicant communication | What should applicants be told? How quickly? Through which channel? |
| Resolution criteria | How do you know the incident is resolved? What checks confirm recovery? |
| Post-incident | What happens after the incident is resolved? (Think: applicants who failed, deadline extension, incident review.) |
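The alert trigger in this scenario (error rate above 5% for 3 consecutive minutes) can be expressed precisely. A minimal sketch, assuming per-minute request and error counts; a real monitoring stack would evaluate this rule over its own metrics:

```python
def alert_fires(minutes: list[tuple[int, int]],
                threshold: float = 0.05, window: int = 3) -> bool:
    """minutes is a list of (requests, errors) per minute, oldest first.
    Fires when the error rate exceeds threshold for `window` consecutive minutes."""
    consecutive = 0
    for requests, errors in minutes:
        rate = errors / requests if requests else 0.0
        consecutive = consecutive + 1 if rate > threshold else 0
        if consecutive >= window:
            return True
    return False

# 11:42-11:45pm: error rate climbs past 5% for three minutes running
surge = [(200, 2), (250, 20), (300, 30), (400, 60)]
print(alert_fires(surge))  # True

# A single bad minute does not page anyone
blip = [(200, 2), (250, 20), (300, 3), (400, 4)]
print(alert_fires(blip))   # False
```

The "for 3 consecutive minutes" condition is what separates a real incident from a transient blip, which is why it belongs in the trigger, not just the threshold.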
This is the go/no-go briefing for the SRO. It contains three things that should block go-live but are being waved through. Identify them.
| Field | As stated β find what should block go-live |
|---|---|
| Defects | Two Severity 2 defects open; the team says they are "low risk" and will be fixed in the next sprint. One known issue: the confirmation email sometimes sends twice. SRO has approved proceeding. Should this block go-live? Why or why not? |
| Training | Service desk has not yet received training on the new system. The grants team lead says "they will pick it up on the day." Why is this a go-live blocker? |
| Monitoring | Monitoring dashboards will be set up in the week after go-live; the team is too busy right now. Why must monitoring be live before go-live, not after? |
Go / No-go verdicts
For operational readiness, Pass = Ready to go live. Fail = Not ready; must be resolved. Blocked = Cannot assess until a dependency is resolved.
Debrief
Group E: Regression
Pre-test setup
Regression testing asks: when something changes, what else breaks? Every change to a system carries risk. Regression testing identifies which existing tests must be re-run to confirm the rest of the system still works after a change is made.
Test scenarios
A new "Request more information" status has been added. Write a regression test that checks the complete approval journey still works correctly after this change.
| Field | Your answer |
|---|---|
| Test ID | |
| Change being regressed | |
| Preconditions | |
| Test steps | |
| Expected result |
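Adding a status to a workflow is exactly the kind of change regression testing targets: the new path must work, and every pre-existing path must still work. A minimal sketch with hypothetical status names; the real system's workflow states may be named differently:

```python
# Approval workflow transitions, with the new status wired in
TRANSITIONS = {
    "submitted": {"under_review"},
    "under_review": {"approved", "rejected", "request_more_information"},  # new target
    "request_more_information": {"under_review"},                          # new status
    "approved": set(),
    "rejected": set(),
}

def can_transition(current: str, new: str) -> bool:
    return new in TRANSITIONS.get(current, set())

# Regression: the pre-existing approval journey still works end to end
assert can_transition("submitted", "under_review")
assert can_transition("under_review", "approved")
assert can_transition("under_review", "rejected")

# New behaviour: review can request more information, then resume
assert can_transition("under_review", "request_more_information")
assert can_transition("request_more_information", "under_review")

# Still forbidden: skipping straight from submitted to approved
assert not can_transition("submitted", "approved")
print("all transition checks passed")
```

The regression half of the test is the first block of assertions; only the second block tests the change itself.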
The change adds a new notification email for "Request more information." Complete this regression check for the notification system.
| Field | Content |
|---|---|
| Test ID | REG-002 |
| Why this is a regression risk | Adding a new email template to the notification service could affect the configuration of existing templates. |
| Tests to re-run | Which existing notification tests must be re-run? List them and say why each is at risk. |
| New test needed | Write the expected result for the new "Request more information" email. What should it contain? When should it fire? |
| Regression verdict | If the confirmation email now sends twice (it was working before the change), what type of defect is this? What caused it? |
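The "sends twice" defect in the verdict question is a classic regression: a feature that worked before the change, broken by something unrelated. A minimal sketch of a check that the confirmation email fires exactly once per submission, using a hypothetical in-memory stand-in for the notification service:

```python
class NotificationService:
    """Stand-in for the real notification service."""

    def __init__(self) -> None:
        self.sent: list[tuple[str, str]] = []  # (template, recipient)

    def send(self, template: str, recipient: str) -> None:
        self.sent.append((template, recipient))

def submit_application(notifier: NotificationService, applicant: str) -> None:
    # ... submission logic would run here ...
    notifier.send("confirmation", applicant)  # must fire exactly once

notifier = NotificationService()
submit_application(notifier, "applicant@example.org")

confirmations = [n for n in notifier.sent if n[0] == "confirmation"]
assert len(confirmations) == 1, f"expected 1 confirmation email, got {len(confirmations)}"
print("confirmation email sent exactly once")
```

Asserting "exactly once" rather than "at least once" is the point: a duplicate send would pass the weaker check and slip through regression.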
The reporting dashboard shows a count of applications by status. The new status was added. This scenario contains three errors in how the regression was planned.
| Field | As written |
|---|---|
| Regression scope | The reporting dashboard does not need to be regression tested β it is a separate system and the change only affects the approval workflow. Why is this wrong? Why must the dashboard be regression tested? |
| Test approach | Run the dashboard report and check it looks the same as before. What specifically should be checked? What data values matter? |
| Expected result | The dashboard should show the same numbers as before the change. Why is "same numbers as before" the wrong expected result after adding a new status? |