Artificial intelligence is rapidly changing the world around us. Every day we see news about a new AI created to help us (or take over the world). Meanwhile, testing remains time-consuming, and keeping up with constant software changes is hard.
While AI test automation is not a complete replacement for a regular AQA engineer, we can use it to speed up routine work and let QA focus on more important tasks.
In this article, we will explore the process of creating automation tests using an innovative AI-powered approach. Additionally, we will provide useful prompts to help you design and implement highly effective tests.
Disclaimer: The code snippets in this article are AI-generated.
We are going to use Bard, a large language model from Google. It is trained on more recent data than ChatGPT and has access to real-time information. It is still under active development, but it already gives good responses.
We will use a popular website that just so happens to be one of our favorites - LinkedIn - for our experiments 🙃
Ask Google Bard these prompts to get the most accurate answers:
“Write a login test case for LinkedIn”
“Write a negative login test case for LinkedIn”
“Write an API login test case for LinkedIn”
Result of the first prompt:
Test Case ID: TC-001
Description: Verify that a user can log in to LinkedIn with valid credentials.
Test Case Name: Login to LinkedIn
Steps:
1. Go to the LinkedIn login page.
2. Enter a valid username and password.
3. Click the "Log In" button.
4. Verify that the user is logged in and is taken to the LinkedIn homepage.
Expected Results:
The user should be able to log in to LinkedIn with valid credentials.
The user should be taken to the LinkedIn homepage.
Pass/Fail: Pass
Comments: None.
Here are some additional test cases that you may want to consider:
Verify that a user cannot log in with an invalid username or password.
Verify that a user is not able to log in if they have forgotten their password.
Verify that a user is able to reset their password if they have forgotten it.
Verify that a user is able to log in with a social media account (e.g., Facebook, Google).
Verify that a user is able to log in from a mobile device.
We got the test case title, description, steps, and expected result. Additionally, the AI provided some suggestions for further test cases.
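If you want to collect generated cases programmatically before automating them, the structure Bard returns maps naturally onto a small TypeScript type. This is a hypothetical sketch of ours, not part of Bard's output:

```typescript
// Hypothetical shape for storing AI-generated manual test cases.
interface GeneratedTestCase {
  id: string; // e.g. "TC-001"
  name: string;
  description: string;
  steps: string[];
  expectedResults: string[];
}

// The login case from above, captured as data.
const loginCase: GeneratedTestCase = {
  id: "TC-001",
  name: "Login to LinkedIn",
  description:
    "Verify that a user can log in to LinkedIn with valid credentials.",
  steps: [
    "Go to the LinkedIn login page.",
    "Enter a valid username and password.",
    'Click the "Log In" button.',
    "Verify that the user is logged in and is taken to the LinkedIn homepage.",
  ],
  expectedResults: [
    "The user should be able to log in to LinkedIn with valid credentials.",
    "The user should be taken to the LinkedIn homepage.",
  ],
};
```

Keeping cases in a structure like this makes it easy to track which generated cases have already been automated.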
Ask Google Bard this prompt in the same conversation:
“Using the test case you’ve created before write a Page Object model”
import { Page, Locator } from "playwright";

export class LinkedInLoginPage {
  readonly page: Page;
  readonly usernameInputLocator: Locator;
  readonly passwordInputLocator: Locator;
  readonly submitButtonLocator: Locator;

  constructor(page: Page) {
    this.page = page;
    this.usernameInputLocator = page.locator("#username");
    this.passwordInputLocator = page.locator("#password");
    this.submitButtonLocator = page.locator('button[type="submit"]');
  }

  async login(username: string, password: string): Promise<void> {
    await this.page.goto("https://www.linkedin.com/login");
    await this.usernameInputLocator.fill(username);
    await this.passwordInputLocator.fill(password);
    await this.submitButtonLocator.click();
  }

  async isLoggedIn(): Promise<boolean> {
    // The avatar element is only rendered for an authenticated user.
    return (await this.page.locator(".me-avatar").count()) > 0;
  }
}
We got a new LinkedInLoginPage file with the locators and methods needed for the test. It wasn't perfect, but instead of writing a whole new page from scratch, all we had to do was fix a few small details.
After fixing it, we can ask the AI to create a test based on the Page Object it created earlier.
“Create a test based on the test case and Page Object which you provided earlier.”
import { chromium, Browser, Page } from "playwright";
import { LinkedInLoginPage } from "./linkedin-login-page";

describe("LinkedIn Login", () => {
  let browser: Browser;
  let page: Page;

  beforeEach(async () => {
    // launch() returns a Browser; the Page is created from it.
    browser = await chromium.launch();
    page = await browser.newPage();
  });

  afterEach(async () => {
    await browser.close();
  });

  it("should allow users to login with a valid username and password", async () => {
    const loginPage = new LinkedInLoginPage(page);
    await loginPage.login("username", "password");
    expect(await loginPage.isLoggedIn()).toBe(true);
  });
});
Now we've got a lovely test structure with the test itself plus setup and teardown steps. It allows us to expand this file with different test cases in the future.
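One cheap way to expand the file is to parameterize the credentials. The scenario table itself is plain data; the names and expectations below are our own hypothetical sketch, not Bard output:

```typescript
// Hypothetical credential scenarios for a data-driven login test.
interface LoginScenario {
  title: string;
  username: string;
  password: string;
  shouldSucceed: boolean;
}

const scenarios: LoginScenario[] = [
  { title: "valid credentials", username: "valid-user", password: "valid-pass", shouldSucceed: true },
  { title: "wrong password", username: "valid-user", password: "wrong-pass", shouldSucceed: false },
  { title: "empty username", username: "", password: "valid-pass", shouldSucceed: false },
];

// Inside the describe block, each scenario becomes one `it(...)`:
// scenarios.forEach(({ title, username, password, shouldSucceed }) => {
//   it(`login with ${title}`, async () => {
//     const loginPage = new LinkedInLoginPage(page);
//     await loginPage.login(username, password);
//     expect(await loginPage.isLoggedIn()).toBe(shouldSucceed);
//   });
// });
```

This way, adding a new negative case is a one-line change to the table instead of a new test body.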
Here are some more prompts you may find useful:
Generate a list of possible test cases for an <application>
Identify potential bugs in an <application>
Analyse the results of test executions
Generate test data in <this format> in JSON
Identify the most important tests to automate <list of test cases or application>
Recommend new tests to automate <application>
Generate reports on test execution results - <results>
Recommend fixes for unstable test - <code for this test>
Generate documentation for automated tests - <tests>
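As an illustration of the test-data prompt, a small generator can produce login fixtures in JSON. This is a hand-written sketch of the kind of output you might ask for; the field names and values are assumptions, not real accounts:

```typescript
// Hypothetical generator for login test data in JSON, the kind of
// output you might request with the "Generate test data" prompt.
interface LoginFixture {
  username: string;
  password: string;
  valid: boolean;
}

function buildLoginFixtures(count: number): LoginFixture[] {
  const fixtures: LoginFixture[] = [];
  for (let i = 0; i < count; i++) {
    fixtures.push({
      username: `user${i}@example.com`,
      // Every other entry gets an empty password, making it an invalid case.
      password: i % 2 === 0 ? "Correct-Pass-123" : "",
      valid: i % 2 === 0,
    });
  }
  return fixtures;
}

console.log(JSON.stringify(buildLoginFixtures(4), null, 2));
```

Fixtures like these can then be fed into the data-driven login test instead of hard-coded credentials.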