Melanoma is a type of malignancy that is responsible for more than 70 percent of all skin cancer deaths worldwide. For years, doctors have relied on visual examination to identify suspicious pigmented lesions (SPLs) that may be an indicator of skin cancer. Such early identification of SPL in primary care can improve melanoma prognosis and significantly reduce treatment costs.
The challenge is that SPLs are difficult to find and prioritize quickly because of the high volume of pigmented lesions that often need to be evaluated for potential biopsies. Researchers at MIT and elsewhere have now developed a new artificial intelligence pipeline that uses deep convolutional neural networks (DCNNs), applying them to SPL analysis with the wide-field photography common to most smartphones and personal cameras.
DCNNs are a class of deep learning algorithms: neural networks used to classify images and to cluster them by similarity, as when performing a photo search.
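The core operation inside a DCNN is the convolution: a small filter slides over an image and produces a feature map. The sketch below illustrates this in pure Python with a hand-picked edge-detecting kernel; real DCNNs stack many such layers with kernels learned from data, so this is only a minimal illustration, not the researchers' pipeline.

```python
def conv2d(image, kernel):
    """Valid (no-padding) 2D convolution of a grayscale image with a kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A 4x4 image with a vertical edge down the middle.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
# A Sobel-like vertical-edge filter (hand-picked for illustration).
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
feature_map = conv2d(image, kernel)  # strong responses along the edge
```

A trained network learns thousands of such kernels, each responding to a different visual pattern (edges, textures, and, in deeper layers, lesion-like structures).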
Using cameras to capture wide-field images of large areas of patients' bodies, the program applies DCNNs to identify and examine early-stage melanoma quickly and effectively.
Early detection of SPLs can save lives, but the capacity of medical systems to provide comprehensive skin examinations remains limited. An SPL analysis system based on DCNNs can more quickly and efficiently identify skin lesions that require further investigation, a screening that could be performed during routine primary care visits or even by patients themselves. The system uses DCNNs to optimize the identification and classification of SPLs in wide-field images.
The researchers trained the system on 20,388 wide-field images of 133 patients at the Hospital Gregorio Marañón in Madrid, along with publicly available images. The images were taken with a variety of ordinary consumer cameras. Dermatologists working with the researchers visually classified the lesions in the images for comparison. The system achieved more than 90.3 percent sensitivity in distinguishing SPLs from nonsuspicious lesions, skin, and complex backgrounds, avoiding the need for laborious and time-consuming imaging of individual lesions. The research also presents a new method for extracting intra-patient lesion saliency (the "ugly duckling" criterion: identifying the lesions on a person's skin that stand out from the rest) based on the DCNN features of the detected lesions.
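The ugly-duckling idea can be sketched simply: represent each of a patient's lesions as a feature vector and score each lesion by how far it sits from the patient's average. The vectors and function names below are hypothetical stand-ins for DCNN features, not the paper's actual method.

```python
import math

def ugly_duckling_scores(features):
    """Score each lesion by Euclidean distance from the patient's mean feature vector."""
    n = len(features)
    dim = len(features[0])
    mean = [sum(v[d] for v in features) / n for d in range(dim)]
    return [
        math.sqrt(sum((v[d] - mean[d]) ** 2 for d in range(dim)))
        for v in features
    ]

# Hypothetical 2-D features for four lesions: three similar, one outlier.
lesions = [
    [0.20, 0.10], [0.25, 0.12], [0.22, 0.09],  # typical moles
    [0.90, 0.80],                               # the "ugly duckling"
]
scores = ugly_duckling_scores(lesions)
most_salient = scores.index(max(scores))  # the outlier ranks highest
```

The appeal of this criterion is that it is intra-patient: a lesion is judged against that person's own baseline rather than a population norm.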
The research shows that systems using computer vision and deep neural networks to quantify such standard features can achieve accuracy comparable to that of professional dermatologists. The researchers hope this will renew the drive to provide more effective dermatological screening in primary care and guide appropriate referrals. According to the researchers, this would allow faster and more accurate assessments of SPLs and could lead to earlier treatment of melanoma.
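Sensitivity, the metric reported for the system, measures the fraction of truly suspicious lesions that the classifier correctly flags: true positives divided by all actual positives. The counts below are invented purely to illustrate the calculation.

```python
def sensitivity(true_positives, false_negatives):
    """Recall on the positive (suspicious) class: TP / (TP + FN)."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical example: 903 of 1,000 suspicious lesions correctly flagged.
rate = sensitivity(903, 97)  # 0.903, i.e. 90.3 percent
```

High sensitivity matters in screening because a missed melanoma (a false negative) is far costlier than a follow-up examination of a benign lesion.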