Self-driving cars, facial recognition systems, voice recognition systems, digital assistants, robots.
They are not just examples of Artificial Intelligence but a testament to its growing influence in our daily lives. The world is moving toward wider adoption of AI-powered systems, with a McKinsey Global Survey showing a 25 percent year-on-year increase in the use of the technology. The survey also points to growing implementation of AI in enterprises, with executives reporting an uptick in revenue in the business verticals where it is used. Forty-four percent of respondents say AI has helped reduce costs.
Though it’s evident AI is transforming the technology landscape, testing remains one of the key challenges in AI-based systems. While traditional IT systems follow rule-based logic such as ‘if A, then B’, AI systems learn their behavior from the data they are trained on. Testing artificial intelligence systems therefore requires a shift from output conformance to input validation to ensure robustness and optimize performance.
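To make the shift concrete, here is a minimal sketch contrasting an exact-output assertion with input/property validation. The model stand-in `score_transaction` and its behavior are hypothetical placeholders for illustration, not functions from any real system.

```python
# A minimal sketch of moving from exact-output assertions to input/property
# validation. The model stand-in `score_transaction` is a hypothetical
# placeholder used only for illustration.

def score_transaction(amount: float) -> float:
    """Stand-in for an AI model that returns a fraud-risk score in [0, 1]."""
    return min(amount / 10_000.0, 1.0)

def test_rule_based_style():
    # Traditional 'if A, then B' check: one input, one exact expected output.
    assert score_transaction(0.0) == 0.0

def test_input_validation_style():
    # AI-style check: validate properties of the output across many inputs
    # rather than asserting a single exact value.
    for amount in (1.0, 50.0, 500.0, 5_000.0, 50_000.0):
        assert 0.0 <= score_transaction(amount) <= 1.0   # output stays in a valid range
    # Monotonicity: a larger amount should never lower the risk score.
    assert score_transaction(5_000.0) >= score_transaction(50.0)
```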
Below are a few of the common challenges in testing AI systems:
- Storage and analytics challenges: Large volumes of sensor data pose storage and analytics challenges while also producing noisy datasets
- Training challenges: AI systems rely on data from rare or unexpected events, which is difficult to collect and collate, presenting training challenges
- Removing human bias: Test scenarios should be prepared to detect and eliminate human bias, which often creeps into testing and training datasets
- Fixing isolated defects: Defects in AI systems tend to propagate and amplify, making it difficult to fix any issue in isolation
In a typical software testing process, as performed by software testing companies, the Quality Assurance (QA) team tests functionality, analyzes and reviews the code, runs unit tests and performs single-user performance testing. Testing AI-based systems, however, isn’t always as straightforward; QA teams need a well-defined test strategy that accounts for the challenges and potential failure points in these frameworks.
Key facets of testing AI systems
Data validation
Training data is vital for AI systems to produce the desired output. In fact, a system’s effectiveness depends on the quality of its training data, including characteristics such as variety and bias.
An experiment by MIT Media Lab researchers exemplifies the importance of training data. In the experiment, Norman, an AI dubbed a “psychopath”, was exposed to potentially dangerous web content, and its image captions were then compared with those produced by a standard image-captioning neural network. Where in one instance the standard AI sees “A couple of people standing next to each other”, Norman sees “Man jumps from floor window”.
It illustrates how AI can go wrong when biased data is used. When we speak of bias in AI algorithms, the algorithm itself is often not to blame; rather, it is the biased data it was fed.
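As a practical illustration, here is a minimal data-validation sketch a QA team might run before training. The column names (“label”, “gender”) and the thresholds are illustrative assumptions, not part of any specific pipeline.

```python
# A minimal data-validation sketch using pandas. Column names and thresholds
# are illustrative assumptions.
import pandas as pd

def validate_training_data(df: pd.DataFrame, max_null_ratio: float = 0.01,
                           max_group_skew: float = 3.0) -> list[str]:
    """Return a list of data-quality findings for a training set."""
    findings = []

    # 1. Missing values: noisy sensor data often arrives incomplete.
    for column, ratio in df.isna().mean().items():
        if ratio > max_null_ratio:
            findings.append(f"{column}: {ratio:.1%} missing values")

    # 2. Label balance: a heavily skewed target pushes the model
    #    toward the majority class.
    label_counts = df["label"].value_counts()
    if label_counts.max() / max(label_counts.min(), 1) > max_group_skew:
        findings.append(f"label skew: {label_counts.to_dict()}")

    # 3. Group representation: check a sensitive attribute for imbalance
    #    before it leaks into the trained model as bias.
    group_counts = df["gender"].value_counts()
    if group_counts.max() / max(group_counts.min(), 1) > max_group_skew:
        findings.append(f"under-represented groups: {group_counts.to_dict()}")

    return findings
```

In practice, a check like this would be one gate in a larger data-quality suite, run on every new batch of training data rather than once.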
Algorithm
Algorithms are the foundation of AI-based systems, as they process data and generate insights. Efficiency, learnability and model validation are three key attributes of AI algorithms; they can make systems smarter but, lacking common sense, they can also produce strange outputs.
One of the challenges AI systems face is deducing whether a task is appropriate or ethical. Testers are therefore responsible for preventing these systems from causing havoc. They must also define the limits within which AI-based algorithms operate and stay vigilant to prevent any breaches.
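One common way to express such limits is a guardrail wrapped around the model’s output. The sketch below is a hypothetical example of that pattern; the function names and range are assumptions, not a prescribed design.

```python
# A minimal guardrail sketch: testers define hard operating limits around an
# AI component and flag anything outside them. All names here are hypothetical.
import logging

logger = logging.getLogger("ai_guardrails")

class GuardrailViolation(Exception):
    """Raised when the model output falls outside its defined limits."""

def guarded_output(model_output: float, lower: float, upper: float) -> float:
    """Enforce the operating limits agreed for the algorithm."""
    if not (lower <= model_output <= upper):
        logger.error("Model output %.2f outside permitted range [%.2f, %.2f]",
                     model_output, lower, upper)
        raise GuardrailViolation("output outside defined limits")
    return model_output
```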
Integration testing
When several AI systems with conflicting objectives are deployed simultaneously, it’s essential to assess the systems and their various connection points thoroughly. Integration testing is critical to uncover faults and integration issues and to ensure the integrations work as expected.
Common steps in integration testing (a minimal sketch follows the list):
- Validate input requests and responses for each individual application programming interface (API)
- Perform integration testing of APIs and algorithms and verify the output
- Test the interaction between components – input and response, along with format and accuracy
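The sketch below shows what validating an individual API’s request/response contract might look like with pytest and requests. The endpoint URL, payload fields and expected schema are assumptions made for illustration.

```python
# A minimal API integration-test sketch using pytest and requests.
# The endpoint, payload and response schema are illustrative assumptions.
import requests

BASE_URL = "http://localhost:8000"  # hypothetical scoring service

def test_scoring_api_contract():
    payload = {"text": "great product, fast delivery"}
    response = requests.post(f"{BASE_URL}/v1/sentiment", json=payload, timeout=5)

    # Validate the response from the individual API.
    assert response.status_code == 200
    body = response.json()

    # Verify output format and accuracy constraints of the interaction.
    assert set(body) >= {"label", "confidence"}
    assert body["label"] in {"positive", "negative", "neutral"}
    assert 0.0 <= body["confidence"] <= 1.0
```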
Performance and security testing
Performance and security testing are, along with regulatory compliance, integral to the functioning of AI-based systems. Poor or improper testing could allow the systems to be manipulated into exposing sensitive information.
In 2017, a bank’s AI-powered voice recognition security system was breached: a man who mimicked his twin brother’s voice gained access to the account. He could view account balances and recent transactions and, while he could not withdraw money, he had the option to transfer money from the account.
Common steps in security testing (a minimal sketch follows the list):
- Perform end-to-end testing for specific use cases
- Evaluate system security via static and dynamic security testing
- Perform user interface and regression testing
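One small slice of dynamic security testing is probing the system with malformed or hostile inputs and checking that it fails safely. The sketch below reuses the hypothetical endpoint from the integration example; the inputs and URL are assumptions, not a complete security test plan.

```python
# A minimal dynamic security-testing sketch: send malformed and hostile inputs
# to a hypothetical endpoint and assert it fails safely without leaking internals.
import requests

BASE_URL = "http://localhost:8000"  # hypothetical scoring service
HOSTILE_INPUTS = ["", "A" * 100_000, "<script>alert(1)</script>", "' OR 1=1 --"]

def test_malformed_inputs_fail_safely():
    for text in HOSTILE_INPUTS:
        response = requests.post(f"{BASE_URL}/v1/sentiment",
                                 json={"text": text}, timeout=5)
        # The service may reject the input, but it must never crash or echo
        # stack traces or internal details back to the caller.
        assert response.status_code in (200, 400, 422)
        assert "Traceback" not in response.text
```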
Summary
According to research and advisory firm Gartner, business value derived from AI is forecast to reach US$3.9 trillion by 2022. Customer experience, new revenue and cost reduction are, Gartner says, the three sources of AI business value. With a growing number of systems gaining artificial intelligence capabilities, it’s vital they are tested thoroughly to achieve the desired performance, glean actionable insights and generate efficient outputs, all without compromising security.