The Dangerous Dismissal of AI Edge Cases




FACT: Edge cases are limitless and model performance cannot be predicted by traditional accuracy metrics.
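To see why traditional accuracy metrics fall short here, consider a minimal sketch with made-up numbers: a model that is perfect on common inputs but always wrong on a rare slice still reports a headline accuracy that looks excellent.

```python
# Hypothetical illustration: aggregate accuracy hides failure on a rare slice.
common_count, rare_count = 99_000, 1_000   # assumed traffic mix
common_correct = 99_000                    # model is perfect on common cases
rare_correct = 0                           # ...and always wrong on the rare slice

overall_accuracy = (common_correct + rare_correct) / (common_count + rare_count)
rare_accuracy = rare_correct / rare_count

print(f"Overall accuracy:    {overall_accuracy:.1%}")  # 99.0% -- looks deployable
print(f"Rare-slice accuracy: {rare_accuracy:.1%}")     # 0.0%  -- invisible in the headline number
```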

We’ve all done it: deployed a model only to have it fail in some obscure situation, dismissed the failure as merely an edge case that was unlikely to occur in the first place, and declared success once it was quickly trained out of the model. Problem solved? Not so fast.

AI is reorganizing our world at a dizzying pace. The many innovations are absolutely remarkable, but we do ourselves a disservice when we dismiss AI limitations as merely edge cases. This dismissal, and the failure to acknowledge the true nature and magnitude of edge cases, puts our pace of AI adoption at risk, and with it the entire industry. Instead of dismissing such limitations as edge cases, we need to acknowledge their true nature so we can work towards actually addressing the problem.

An edge case is something that rarely occurs in practice, and once one is addressed we often report that it is unlikely to happen again. Sometimes the situation is so obscure that we double down, stating that even some humans might have been confused enough to make the wrong decision.

The fallacy we are selling is that there is only a small, finite number of edge cases to worry about, and this is absolutely wrong.

The truth is, there are a limitless number of edge cases. It’s a very real case of the probable improbable: across the world, chances are an AI system will encounter edge cases every single day. These edge cases form a very long tail of situations that must be handled by AI models, especially as we begin deploying them in mission-critical systems.
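To make the “probable improbable” concrete, here is a back-of-the-envelope sketch with assumed, purely illustrative numbers: an event with a one-in-a-million chance per encounter is still a near-certainty somewhere in a large deployment every day.

```python
# Illustrative 'probable improbable' arithmetic with assumed numbers:
# an edge case that is a one-in-a-million event per encounter is still
# almost guaranteed to show up somewhere in a large deployment each day.
p_edge = 1e-6                     # assumed probability any single encounter is an edge case
encounters_per_day = 10_000_000   # assumed fleet-wide encounters per day

p_at_least_one = 1 - (1 - p_edge) ** encounters_per_day
print(f"P(at least one edge case today) = {p_at_least_one:.4f}")  # ~0.9999
```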

In a long-tailed distribution, the most frequently occurring 20% of items represent less than 50% of occurrences; or in other words, the least frequently occurring 80% of items are more important as a proportion of the total population. - Alpheus Bingham and Dwayne Spradlin (2011), The Long Tail of Expertise.
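As a rough illustration of that definition, the sketch below builds a synthetic power-law frequency distribution over hypothetical scenario types; with the assumed exponent, the most frequent 20% of scenarios account for well under half of all occurrences.

```python
import numpy as np

# Synthetic long-tailed frequency distribution over hypothetical scenario types.
# The power-law exponent below is an assumption chosen purely for illustration.
num_scenarios = 1_000
ranks = np.arange(1, num_scenarios + 1, dtype=float)
frequencies = ranks ** -0.5             # heavy tail: frequency decays slowly with rank
shares = frequencies / frequencies.sum()

head = shares[: num_scenarios // 5]     # most frequent 20% of scenario types
print(f"Share of occurrences from the top 20% of scenarios: {head.sum():.1%}")
# ~43% with these assumptions -- the remaining 80% of scenario types cover
# the majority of occurrences.
```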

Further, while even some humans may be confused by edge cases, most won’t be, and those that are will typically deal with the confusion more gracefully than an AI. Conversely, when an AI is confused by an edge case, every deployed instance of that AI is confused in exactly the same way, as in the case of a fleet of autonomous vehicles.

Edge Cases are Everywhere
…and Impossible to Predict

