Researchers at MIT have developed a technique for estimating how much information a piece of data is likely to contain, one that is more accurate and scalable than earlier approaches.
Not all data are created equal. But how much information is any one piece of data likely to contain? This question is central to medical testing, the design of scientific experiments, and even everyday human learning and thought. MIT researchers have developed a new method for answering it, with implications for medicine, scientific discovery, cognitive science, and artificial intelligence.

The key idea is to use probabilistic inference to first infer which explanations of the data are probable, and then use those probable explanations to compute a high-quality entropy estimate, rather than enumerating every possible explanation. The researchers show that this inference-based approach is substantially faster and more accurate than earlier methods.
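To make the contrast concrete, here is a minimal sketch (not the authors' actual method) of the difference between the two strategies on a toy discrete distribution over candidate "explanations": exact entropy by enumerating every explanation, versus a Monte Carlo estimate that only draws probable explanations. The distribution `posterior` and both function names are hypothetical illustrations; in realistic settings the distribution is implicit and enumeration is intractable, which is what makes sampling-based estimates attractive.

```python
import math
import random

# Hypothetical posterior over four candidate "explanations" of an observation.
# In practice the set of explanations is far too large to enumerate.
posterior = {"A": 0.50, "B": 0.25, "C": 0.15, "D": 0.10}

def entropy_by_enumeration(dist):
    """Exact Shannon entropy: sums over every explanation (intractable at scale)."""
    return -sum(p * math.log2(p) for p in dist.values())

def entropy_by_sampling(dist, n_samples=100_000, seed=0):
    """Monte Carlo estimate: draw likely explanations in proportion to their
    probability and average -log2 p(x). Probable explanations dominate the
    samples, so the sum over all explanations is never formed explicitly."""
    rng = random.Random(seed)
    outcomes = list(dist)
    weights = [dist[o] for o in outcomes]
    samples = rng.choices(outcomes, weights=weights, k=n_samples)
    return -sum(math.log2(dist[x]) for x in samples) / n_samples

exact = entropy_by_enumeration(posterior)
approx = entropy_by_sampling(posterior)
print(f"exact:  {exact:.4f} bits")
print(f"approx: {approx:.4f} bits")
```

On this four-outcome toy the enumeration is trivial, but the sampling estimator's cost depends on the number of samples rather than the number of possible explanations, which is why inference-based estimators can scale where enumeration cannot.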