Artificial intelligence (AI) has brought enormous productivity gains across industries, but those gains come with costs. One area of concern is ethics: the moral principles that guide a person's actions. As AI becomes more human-like, how does it handle ethical situations?
That is a question more and more quality assurance (QA) professionals face every day. Many questions remain about how AI systems should handle ethical concerns such as bias, transparency and privacy. In this article, we will examine those concerns as they relate to QA.
QA’s Role In Ethical AI Development
QA has a central role in the realm of AI ethics. When an AI system responds to user input in a way that is biased or discriminatory, people will look to QA teams to explain what went wrong, and it will be up to QA to fix these issues.
Fortunately, there are ways that QA professionals can help contribute to the ethical development of AI systems. Three principles can help guide QA professionals when developing ethical AI:

1. Fairness: testing for and mitigating bias in training data and model outputs.
2. Transparency and explainability: verifying that a system's decisions can be traced and explained.
3. Privacy and data protection: ensuring that personal data is handled lawfully and anonymized where required.

Considering these three principles will help QA develop ethical AI. We will refer to these concepts throughout this article.
Testing For Bias And Fairness
QA professionals first need to identify bias in order to ensure fairness. One technique is to use statistical analysis to find patterns in how a model treats different groups. When biases are found, techniques such as data balancing can be applied. Numerous test cases need to be developed and executed to detect biases, and a rigorous data quality assurance program helps keep training datasets free from bias.
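To make that concrete, here is a minimal sketch of the kind of statistical check a QA team might script: it computes the positive-outcome rate for each group and compares the gap against the common "four-fifths" rule of thumb. The data schema and field names are illustrative assumptions, not part of any standard.

```python
from collections import defaultdict

def selection_rates(records, group_key="gender", outcome_key="selected"):
    """Compute the positive-outcome rate for each group.

    `records` is a list of dicts; the field names here are
    illustrative placeholders, not a standard schema.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        positives[group] += int(record[outcome_key])
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest selection rate to the highest.

    A common rule of thumb (the "four-fifths rule") flags
    ratios below 0.8 as potential evidence of bias.
    """
    return min(rates.values()) / max(rates.values())

# Illustrative usage with made-up data.
data = [
    {"gender": "F", "selected": 1}, {"gender": "F", "selected": 0},
    {"gender": "M", "selected": 1}, {"gender": "M", "selected": 1},
]
rates = selection_rates(data)
print(rates, disparate_impact(rates))  # ratio of 0.5 would warrant review
```

A check like this can be wired into a test suite so that a fairness regression fails the build the same way a functional regression would.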
One case study where AI bias was successfully identified and mitigated occurred when Amazon stopped using an AI recruiting tool that showed bias against women. The model had been trained on resumes submitted to the company over the previous decade, most of which came from men. As a result, it tended to favor male candidates because it rewarded language that appeared more often in men's resumes than in women's. By analyzing the model's results, the company learned of the bias and attempted to correct the behavior in subsequent iterations.
Ensuring Transparency And Explainability
The next key goal for addressing ethics involves the principles of transparency and explainability. Transparency refers to openness about how an AI system is built and how it reaches its decisions. Explainability refers to the system's ability to present the reasons for a specific decision to a user in an easy-to-understand way. Both matter because if we cannot trace the source of a bias, addressing it becomes far more difficult.
QA has an important role in verifying the transparency of AI systems, and this involves several steps. First, adequate documentation is needed that discloses the algorithm's architecture. Next, teams should favor models that can account for the sources behind their outputs over black-box systems that cannot explain how they reached their conclusions. Furthermore, QA professionals should check for interpretability: whether the relationship between a model's inputs and its outputs is understandable.
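As one concrete interpretability check, a QA team can measure how strongly each input feature drives a model's predictions. The sketch below uses scikit-learn's permutation importance on a synthetic dataset and model, which stand in for the real system under test.

```python
# Permutation importance: shuffle each feature in turn and measure the
# drop in accuracy. A large drop means the model leans heavily on that
# feature, which helps QA trace where a decision comes from.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real audit would use held-out production data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: {score:.3f}")
```

If a supposedly irrelevant attribute (or a proxy for one, such as a zip code standing in for race) shows high importance, that is a transparency red flag worth escalating.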
Privacy And Data Protection
The third main principle involving QA relates to privacy and data protection. The data fed to AI may contain personal information protected by regulations such as the GDPR and CCPA. It is also possible that the data was collected without users' consent, or that users never realized an organization would use their data to train AI.
QA can help ensure that AI complies with privacy laws and regulations. First, they can verify that the data the system is trained on is anonymized. QA can then confirm, through repeated testing, that personal information does not surface in AI outputs. For testing data anonymization, techniques such as k-anonymity, l-diversity and t-closeness can be applied. To ensure data protection, QA can require regular security audits that check for compliance.
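As an illustration, a basic k-anonymity check can be scripted in a few lines: group the records by their quasi-identifiers and verify that every group contains at least k rows. The sketch below assumes pandas is available and uses made-up column names.

```python
import pandas as pd

def k_anonymity(df: pd.DataFrame, quasi_identifiers: list[str]) -> int:
    """Return the dataset's k: the size of the smallest group of rows
    sharing the same combination of quasi-identifier values. A record
    is k-anonymous if it is indistinguishable from at least k-1 others
    on those columns."""
    return int(df.groupby(quasi_identifiers).size().min())

# Illustrative usage with made-up data; column names are placeholders.
df = pd.DataFrame({
    "age_band":  ["20-29", "20-29", "30-39", "30-39", "30-39"],
    "zip3":      ["981",   "981",   "981",   "981",   "981"],
    "diagnosis": ["A", "B", "A", "C", "B"],  # sensitive attribute
})
k = k_anonymity(df, ["age_band", "zip3"])
print(f"k = {k}")  # a QA gate might fail if k drops below a set threshold
```

Checks for l-diversity and t-closeness extend the same grouping idea to the distribution of sensitive values within each group.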
Challenges And Best Practices
Auditing AI ethics is no easy task; there are challenges at every stage for QA teams. One common challenge stems from how new the field is, which means standardized frameworks are still lacking. Another relates to the complexity of AI models: put simply, their decision-making processes are difficult to understand even with the best tools available.
To overcome the lack of frameworks, an organization needs to create its own criteria and benchmarks. In response to AI's complexity, QA professionals need training on the different ways models make decisions, and they need to keep learning because the field of AI is dynamic and constantly evolving. In addition, tools and frameworks such as Fairness Indicators, model cards and AI Explainability 360 should be adopted; a minimal model card sketch follows.
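A model card does not require special tooling to get started. Even a plain dictionary checked into the repository gives QA something concrete to audit and gate releases on. The sketch below is a simplified subset inspired by the model card idea; every name and value in it is an invented placeholder.

```python
# A minimal, hand-rolled model card as a plain dictionary. A mature
# program would use a richer schema or a dedicated toolkit; all values
# here are illustrative placeholders.
model_card = {
    "model_details": {
        "name": "resume-screener-v2",   # hypothetical model name
        "version": "2.1.0",
        "owners": ["qa-team@example.com"],
    },
    "intended_use": "Ranking job applications for human review only.",
    "training_data": "Internal applications, 2015-2023, anonymized.",
    "ethical_considerations": [
        "Historical data may under-represent women; see bias audit.",
    ],
    "evaluation": {
        "disparate_impact_ratio": 0.91,  # illustrative audit result
        "threshold": 0.80,               # four-fifths rule of thumb
    },
}

# QA can gate a release on the card's recorded audit results.
assert (model_card["evaluation"]["disparate_impact_ratio"]
        >= model_card["evaluation"]["threshold"]), "Fairness audit failed"
```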
Future Directions And Innovations
There are some promising trends in QA for AI ethics. The issue is receiving more attention and more dedicated frameworks, and that allocation of resources will help develop new solutions.
Advancements in tools and methodologies are also sure to continue. Tools such as Microsoft's Fairlearn help QA teams assess and mitigate fairness issues in their systems, and standards bodies such as ISO/IEC are developing guidelines and best practices.
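To give a flavor of what such a check looks like in practice, the sketch below uses Fairlearn's MetricFrame to compare accuracy across groups and its demographic parity metric to quantify the gap. The labels and group memberships are made-up data, and the libraries must be installed first (pip install fairlearn scikit-learn).

```python
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.metrics import accuracy_score

# Made-up ground truth, predictions and group memberships.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
sensitive = ["F", "F", "F", "F", "M", "M", "M", "M"]

# Per-group accuracy: large gaps between groups warrant investigation.
frame = MetricFrame(metrics=accuracy_score, y_true=y_true,
                    y_pred=y_pred, sensitive_features=sensitive)
print(frame.by_group)

# Difference in selection rates between groups; 0 means parity.
dpd = demographic_parity_difference(y_true, y_pred,
                                    sensitive_features=sensitive)
print(f"demographic parity difference: {dpd:.2f}")
```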
Conclusion
There's no doubt that QA will have a starring role in auditing AI ethics. But QA specialists need to keep the issues covered here (bias, transparency, data privacy and protection, accountability and best practices) at the center of the conversation. With enough effort, along with new tools and methodologies, QA teams are well positioned to take on AI ethics as one of their top priorities. It is time for QA professionals to take the reins.