How a non-tech person can test a website

If you’re going to test a new feature, this QA cheat sheet will be particularly useful. First, validate that the newly added feature integrates cleanly with the existing system. Developers are often less familiar with the entire product than the QA engineer who has investigated it thoroughly, and boundary testing will uncover a number of edge cases.

Margarita Simonova
Founder and CEO of ILoveMyQA

December 13, 2022

4 min read


Testing a New Feature:

  1. User Acceptance Testing

Ensure that the feature works as expected by the user.

  2. Functional details

Compare each element of the application to the functional specification document – this is known as verification.

  3. Test fields for the following:
  • Frontend validation versus backend validation
  • Accepted characters
  • Initial button state
  • Whether copy-paste works, and how
  • Whether empty spaces count
  • Minimum and maximum field limits
  • Field requirements
  • Microphone input
  • Behavior with unacceptable characters
  4. Button testing
  • Verify conditions for field availability, negative and positive testing, ON or OFF states, and error handling.
  • Enter invalid data in one or more fields, then select a button and observe the app’s behavior.
  5. Keyboard testing
  • Note the initial keyboard state and check for a Done or a Return button, as expected.
  • Predictive text should work properly everywhere except in credential fields.
  • Enter some non-standard characters from the UTF-8 catalog and observe how the application behaves.
  6. Data handling – check thoroughly for spelling mistakes, and test behavior with expired, removed, and duplicate data.
  7. Navigation testing
  • Ensure that any links within the feature work properly.
  • Check whether populated field data is lost or saved while navigating.
  8. Device integration – feature interaction with external factors
  • Check what happens to the feature when WiFi suddenly disconnects, a call or text message arrives, or the user is disconnected by a third party.
  • Check the feature when screen orientation changes.
  • Uninstall the application, reinstall it, and check what happens. Also try installing it alongside an existing older build.
  9. Cross-platform testing – the feature must be tested on all supported Android and iOS versions.
  • Test the most widespread screen sizes; they can seriously distort the UI.
  • Test the feature on low-performance devices.
  • Create stress tests: load other apps alongside yours and observe the app’s behavior when the device’s available memory and CPU power are throttled.
  10. Usability testing – make the feature as user-friendly as possible.
  • The OS standard approach differs between Android and iOS, so ensure that the feature respects each.
  11. Compatibility testing – explore forward and backward compatibility with other product versions.
  • Observe compatibility across platforms, for example Android versus iOS.
  12. Localization testing – ensure that the feature works and displays correctly in all approved application languages.
  • Test other regional settings as well – check whether the application respects them.
  • Do the same for accessibility options and observe how the application behaves.
  13. Security testing – uncovers vulnerabilities within the app as well as possible leaks in data resources.
  14. Synchronization testing – check when local data synchronizes with the server.
  • Check functionality on various bandwidths using a network-throttling tool. While testing the sync function, monitor the new feature’s communication with the cloud.
  • Perform test synchronizations while dropping the WiFi before, during, and after the connection to the cloud is processed.
  • Try making changes to the same account from multiple devices.
  15. Design testing – does the implemented UI match the designers’ requirements?
  16. Component integration testing – check how the feature interacts with existing features, for example how it is affected by SLA or Login.
  • Use both valid and invalid data.
  17. Load, performance, and stress testing.
  • Use scripts to reach the specification limit, then closely monitor the application when it’s near the limit, at the limit, and beyond it.
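The field checks in step 3 – accepted characters, min/max limits, empty spaces – are easy to automate as boundary tests. A minimal sketch in Python, where the `validate_username` rule (3–20 characters, letters and digits only) is a hypothetical stand-in for whichever field you are testing:

```python
# Hypothetical field rule standing in for the field under test:
# a username must be 3-20 characters, letters and digits only.
def validate_username(value: str) -> bool:
    stripped = value.strip()          # do empty spaces count? here: no
    if not (3 <= len(stripped) <= 20):
        return False
    return stripped.isalnum()

# Boundary and negative cases taken from the field checklist above.
cases = {
    "ab": False,        # one below the minimum length
    "abc": True,        # exactly at the minimum
    "a" * 20: True,     # exactly at the maximum
    "a" * 21: False,    # one above the maximum
    "   ": False,       # only empty spaces
    "  abc  ": True,    # surrounding spaces are trimmed away
    "abc!£✓": False,    # unacceptable characters
}

for value, expected in cases.items():
    assert validate_username(value) == expected, repr(value)
print("all boundary cases passed")
```

The same table of cases should then be replayed against the backend directly, since client-side and server-side validation can silently diverge.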
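One way to reason about the multi-device case in step 14 is optimistic concurrency: each record carries a version number, and the server rejects a write based on a stale version instead of silently losing one device's edit. A minimal in-memory sketch – the `Server` class and its API are illustrative, not a real backend:

```python
class ConflictError(Exception):
    pass

# Illustrative in-memory server: every record carries a version, and a
# write is rejected when it was based on a stale version.
class Server:
    def __init__(self):
        self.data = {}  # key -> (value, version)

    def read(self, key):
        return self.data.get(key, (None, 0))

    def write(self, key, value, based_on_version):
        _, current = self.read(key)
        if based_on_version != current:
            raise ConflictError(f"stale version {based_on_version}, server has {current}")
        self.data[key] = (value, current + 1)

server = Server()

# Device A and device B both read the same account field...
_, v_a = server.read("phone")
_, v_b = server.read("phone")

# ...device A syncs first,
server.write("phone", "555-0100", based_on_version=v_a)

# ...and device B's contradictory edit is rejected rather than silently lost.
try:
    server.write("phone", "555-0199", based_on_version=v_b)
except ConflictError as e:
    print("conflict detected:", e)
```

When testing a real sync feature, the question to answer is exactly this: does the second device get a conflict, a merge, or a silent overwrite?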

API testing and Web-related testing:

*Not limited to the web.

  • Cross-browser compatibility
  • Resize the browser window – different screen sizes should have no effect on the UI; elements should display correctly regardless.
  • Try logging in to the same account in multiple tabs and performing contradictory edits to the same data object, for example, patient data.
  • Check whether the data selected in the UI exactly matches the one sent to the server upon clicking the Send button.
  • Ensure that only one request is sent despite the button being tapped multiple times.
  • Validate the field limits and see whether identical limits exist on both the client and server sides.
  • Check the error handling capabilities of the client by entering incorrect data and making minor changes to the responses received from the server side (using Charles, for example).
  • Use Chrome DevTools to block the API request – observe how a blocked request is handled by the client.
  • Use Charles to edit API request payload – observe the behaviors of the client and server with unexpected data.
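The double-tap bullet above is worth testing from both sides: the client should debounce the button, and the server can deduplicate repeated submissions with an idempotency key. A minimal sketch of the server-side check – the `submit_order` handler and key scheme are assumptions for illustration:

```python
import uuid

# Illustrative server-side handler: repeated submissions that carry the
# same idempotency key are answered from a cache instead of re-applied.
processed = {}  # idempotency key -> stored response
orders = []

def submit_order(payload: dict, idempotency_key: str) -> dict:
    if idempotency_key in processed:
        return processed[idempotency_key]   # duplicate tap: replay the response
    orders.append(payload)                  # first tap: apply the change
    response = {"status": "created", "order_id": len(orders)}
    processed[idempotency_key] = response
    return response

# The client generates one key per tap sequence, so a triple tap on
# "Send" still produces exactly one order.
key = str(uuid.uuid4())
for _ in range(3):
    result = submit_order({"item": "x-ray"}, idempotency_key=key)

print(result)       # {'status': 'created', 'order_id': 1}
print(len(orders))  # 1
```

In a real test, tap the button rapidly while watching the network panel or a proxy like Charles: every request that does reach the server should carry the same key, and the stored data should change only once.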

Hopefully, this list helps everyone carrying out quality assessments to ensure that nothing is missed.

