The human touch in identity verification
Businesses and consumers alike depend on AI more every day. It underpins almost every digital assistant, search engine, online shopping recommendation, and automated tool we use, and its reach is only growing as organizations rush to digitally transform in the wake of the pandemic. In fact, early data shows that 43% of companies have accelerated their deployment of AI-based tools because of the pandemic, and that number will only continue to rise. This rapid adoption and increased reliance on AI, however, also comes with challenges.
Why identity verification is not ready for autonomous AI
New research shows that alongside this immense growth in AI adoption, executives fear the technology is changing too quickly and becoming a ‘wild west’ with few rules or regulations. As a result, challenges related to inequality and prejudice are becoming increasingly relevant, especially in identity verification (IDV), a technology that has become critical with the recent boom in digital services and the online activity resulting from global lockdowns. With more and more people relying on digital identity verification to approve financial transactions, loans, transportation, and more, biased results can have a significant impact. Let’s look at some AI flaws that negatively affect certain demographics and populations:
Biased AI algorithms: Biased algorithms remain a significant issue in IDV. AI models learn to make decisions from data that may include and reflect biased information such as gender, race, or sexual orientation. Even after these variables are removed, data that encodes biased historical decisions or social inequalities may remain. Organizations rely on these algorithms to confirm that a person is who they say they are, but biased algorithms can lead to some customers being unfairly rejected or blocked, denying them access to technologies and resources.
Access to modern technology: Some regions of the world are disproportionately affected by a lack of access to modern cameras and devices. Modern facial recognition and authentication software often uses biometrics to trace facial features from a video or photograph, but blurry, low-resolution images can easily impair the software’s ability to identify a face, continuing to fuel bias in AI models.
Document inequality: Not all documents are created equal. Some countries have invested more in document security, issuing government IDs with machine-readable fields (NFC chips, MRZ) that are far easier to verify online than older paper documents. This puts people in less technologically advanced countries at a further disadvantage.
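The proxy effect described above can be sketched in a few lines of Python. Everything here is hypothetical and deliberately simplified: the sensitive attribute has already been dropped from the data, but a correlated feature (here `device_year`, standing in for access to modern hardware) still lets a naive model reproduce the old biased outcomes.

```python
# Toy sketch of proxy bias (all names and values are hypothetical).
# No protected attribute appears in the data, yet a correlated
# feature still carries the historical bias.

training_data = [
    # (device_year, historical_decision)
    (2023, "approve"), (2022, "approve"), (2021, "approve"),
    (2012, "reject"), (2011, "reject"), (2010, "reject"),
]

def learned_rule(device_year: int) -> str:
    """A 'model' that simply mimics the biased historical pattern."""
    return "approve" if device_year >= 2018 else "reject"

# The rule never sees region, income, or any protected attribute,
# yet it matches the biased historical decisions perfectly.
agreement = sum(
    learned_rule(year) == decision for year, decision in training_data
) / len(training_data)
print(agreement)  # → 1.0
```

This is why simply deleting sensitive columns is not enough: as long as the training labels reflect biased decisions, any correlated feature can smuggle the bias back in.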
With these challenges remaining, it’s clear that there are some decisions AI technology just isn’t ready to make, especially if it could have negative ramifications for certain populations. So where can humans fill these gaps?
Integrating human decision-making into the IDV process
Organizations should start by implementing flexible decision-making thresholds in the IDV process for whenever a verification is uncertain or inconclusive. At that point, a diverse team of specialists should step in to make the verification decision the automated system could not. This allows for more accurate and impartial decisions and authorizations.
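A threshold scheme like this can be sketched in a few lines of Python. The threshold values, class, and function names below are illustrative assumptions, not taken from any particular IDV product: high-confidence matches and clear mismatches are decided automatically, and everything in between is routed to a human specialist.

```python
from dataclasses import dataclass

# Hypothetical tuning values; in practice these would be calibrated
# per document type, region, and risk appetite.
ACCEPT_THRESHOLD = 0.90
REJECT_THRESHOLD = 0.20

@dataclass
class VerificationResult:
    confidence: float  # model confidence that the ID matches the person

def route(result: VerificationResult) -> str:
    """Route a verification based on model confidence.

    Confident matches and clear mismatches are handled automatically;
    the uncertain middle band goes to a human review team.
    """
    if result.confidence >= ACCEPT_THRESHOLD:
        return "auto-approve"
    if result.confidence <= REJECT_THRESHOLD:
        return "auto-reject"
    return "human-review"

print(route(VerificationResult(0.95)))  # → auto-approve
print(route(VerificationResult(0.55)))  # → human-review
```

Keeping the thresholds as explicit, adjustable parameters is the point: the human-review band can be widened for document types or demographics where the model is known to underperform.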
With autonomous driving, for example, a human still has to be behind the wheel to make decisions when conditions are not optimal for the AI. The same should be true of identity verification. Humans are more than capable of handling identity verification tasks; the downside is that doing so is expensive and time-consuming, and consumers on the other side of the screen awaiting confirmation of a loan, bank transfer, or car registration aren’t willing to wait. The key is to incorporate human judgment only when it is necessary and only to the extent that it is necessary, whether the trigger is low confidence in facial biometrics due to biased training data, a poor-quality image of an identity document from a particular region, or any other risk of bias. This maintains a consistent level of quality, timeliness, and fairness, while the AI models continue to improve with each check.
The race continues for equity in AI
The race for equity in AI will be long, especially as adoption continues to accelerate. Someday we will likely achieve fully autonomous AI, but not until these biases are removed and the playing field is leveled. Going forward, organizations should remember that AI biases are invariably the result of real-world biases, and that the process of changing this within an organization begins with its people.