MIT Shares How Machine Learning Models Can Make Sense Of Nonsense & How This Could Be A Problem | CleanTechnica


Medical diagnosis and automated vehicles both rely on image classification. Scientists at the Massachusetts Institute of Technology recently uncovered an intriguing problem at the intersection of machine learning and image recognition, one that, depending on the application, could be trivial or dangerous if left unaddressed. Unreliable predictions are not a new issue in the field, but the failure mode the MIT scientists identified, dubbed "overinterpretation," is. It has ramifications for both medical diagnostics and self-driving cars. Overinterpretation occurs when an algorithm makes a "confident" prediction based on details that carry no meaning for humans, producing a prediction that should not be made at all.
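The failure mode can be illustrated with a toy experiment. The sketch below is a hedged, minimal illustration, not the MIT team's actual setup: the synthetic 8×8 "images," the planted corner-pixel shortcut, and the plain logistic-regression classifier are all assumptions made for demonstration. The point it shows is the same, though: a model that has latched onto a detail meaningless to humans will classify a near-blank image with high confidence.

```python
import numpy as np

# Toy illustration of "overinterpretation": a classifier learns a spurious
# shortcut feature, then makes a confident prediction on an image that is
# nonsense to a human eye. Entirely synthetic; illustrative only.

rng = np.random.default_rng(0)

n, side = 200, 8                               # 200 tiny 8x8 "images"
X = rng.uniform(-0.5, 0.5, (n, side * side))   # background noise pixels
y = rng.integers(0, 2, n)                      # binary labels

# Plant a shortcut: one corner pixel perfectly encodes the label.
X[:, 0] = np.where(y == 1, 1.0, -1.0)

# Train a plain logistic-regression classifier with gradient descent.
w, b, lr = np.zeros(side * side), 0.0, 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= lr * X.T @ (p - y) / n
    b -= lr * np.mean(p - y)

# A "nonsense" image: completely blank except for that one corner pixel.
nonsense = np.zeros(side * side)
nonsense[0] = 1.0
conf = 1.0 / (1.0 + np.exp(-(nonsense @ w + b)))
pred = int(conf > 0.5)
print(f"predicted class {pred} with confidence {conf:.3f}")
```

On this synthetic data the model reports high confidence for the blank-plus-one-pixel image, even though no human would call it anything at all, which is the essence of overinterpretation.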

Source: https://cleantechnica.com/2021/12/17/mit-shares-how-machine-learning-models-can-make-sense-of-nonsense-how-this-could-be-a-problem/ 
