Abstract:
In recent years, many approaches have been proposed for learning from multiple sources by exploiting the diversity of different views. One of the most compelling application areas is medical diagnosis. For example, breast cancer screening normally employs two views of mammography (Cranio-Caudal and Medio-Lateral-Oblique) or two modes of ultrasound (B-mode and Doppler mode) breast images. This study proposes a multi-evidence learning model that combines multiple sources of evidence from breast images to improve diagnosis. Two views of mammography and two modes of ultrasound were used. The proposed model consists of four stages. First, image features are extracted from each view separately using Convolutional Neural Networks. Second, informative features are selected based on the mutual information between each feature and the class label. Third, canonical correlation analysis is applied to merge the two feature sets into a single final representation. Finally, classification as malignant or benign is performed using a support vector machine. The experimental results indicate that the proposed method improves classification performance. In addition, the combined views achieve not only high accuracy but also maximal correlation between the views.
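The four-stage pipeline can be sketched roughly as follows, assuming the CNN feature matrices for the two views (X_view1, X_view2) have already been extracted; the function name, the number of selected features, and the number of CCA components are illustrative placeholders rather than values from the paper.

```python
# Sketch of the fusion pipeline: CNN feature extraction (stage 1) is assumed to
# have produced per-view feature matrices; this selects informative features via
# mutual information, fuses the views with CCA, and classifies with an SVM.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC

def fuse_and_classify(X_view1, X_view2, y, n_selected=256, n_components=32):
    """Hypothetical pipeline; X_view1/X_view2 are (samples x features) CNN
    feature matrices for the two views, y holds benign/malignant labels."""
    # Stage 2: keep the features most informative about the class label.
    sel1 = SelectKBest(mutual_info_classif, k=n_selected).fit(X_view1, y)
    sel2 = SelectKBest(mutual_info_classif, k=n_selected).fit(X_view2, y)
    Z1, Z2 = sel1.transform(X_view1), sel2.transform(X_view2)

    # Stage 3: project both views into a maximally correlated shared space.
    cca = CCA(n_components=n_components).fit(Z1, Z2)
    C1, C2 = cca.transform(Z1, Z2)
    fused = np.concatenate([C1, C2], axis=1)

    # Stage 4: classify benign vs. malignant on the fused representation.
    clf = SVC(kernel="rbf").fit(fused, y)
    return sel1, sel2, cca, clf
```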