---
license: mit
language:
- en
tags:
- testing
size_categories:
- n<1K
---
|
|
|
# Functional Test Cases
|
|
|
This is a _very_ small list of functional test cases that a team of software testers (QA) created for an example mobile app called Boop.
|
|
|
## Dataset
|
|
|
* Name: `Boop Test Cases.csv`
* Number of Rows: `136`
* Columns: `11`
  * `Test ID` (int)
  * `Summary` (string)
  * `Idea` (string)
  * `Preconditions` (string)
  * `Steps to reproduce` (string)
  * `Expected Result` (string)
  * `Actual Result` (string)
  * `Pass/Fail` (string)
  * `Bug #` (string)
  * `Author` (string)
  * `Area` (string)
|
|
|
> 💡 There are missing values. For example, not every test case has a related `Bug #`.
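
As a quick orientation, here is a minimal sketch of loading the file with pandas and checking which columns have missing values (the file name and column names are taken from the list above; adjust the path for your setup):

```python
import pandas as pd

# Load the test cases (file name as listed above; adjust the path if needed)
df = pd.read_csv("Boop Test Cases.csv")

print(df.shape)                    # expected: (136, 11)
print(df["Area"].value_counts())   # how many test cases per Area
print(df.isna().sum())             # missing values per column, e.g. empty `Bug #` cells
```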
|
|
|
## Use Cases
|
|
|
Two common problems in Software Testing are:
|
|
|
* Duplicate test cases (and bug reports)
* Assigning issues to the correct team quickly (whether they come from internal sources, Customer Support, Tech Support, etc.)
|
|
|
This dataset is probably too small to create an "Auto-Assigner" tool, especially since almost half the tests are concentrated in the `Account` Area.
|
|
|
However, with embeddings, we could check whether a new test case already exists by comparing it against the existing ones for similarity 🤔
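
As a rough sketch of that idea (not part of the dataset), one could embed the `Summary` column with a sentence-embedding model and flag high-similarity matches. The model name, example summary, and 0.8 threshold below are illustrative assumptions:

```python
import pandas as pd
from sentence_transformers import SentenceTransformer, util

df = pd.read_csv("Boop Test Cases.csv")

# Any sentence-embedding model works; all-MiniLM-L6-v2 is just a small, common choice.
model = SentenceTransformer("all-MiniLM-L6-v2")
existing = model.encode(df["Summary"].fillna("").tolist(), convert_to_tensor=True)

# A hypothetical incoming test case to check against the existing ones.
new_case = "Verify that login fails when the password is incorrect"
query = model.encode(new_case, convert_to_tensor=True)

scores = util.cos_sim(query, existing)[0]
best = int(scores.argmax())
if float(scores[best]) > 0.8:  # illustrative threshold, tune on real data
    print(f"Possible duplicate of Test ID {df['Test ID'].iloc[best]}: {df['Summary'].iloc[best]}")
else:
    print("Looks like a new test case")
```

Whether similarity on `Summary` alone is enough, or whether `Steps to reproduce` and `Expected Result` should be embedded as well, is something to experiment with on the data itself.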