Multitree GP-Based Feature Learning for Multimodal Medical Image Classification
Created by W.Langdon from
gp-bibliography.bib Revision:1.8620
@InProceedings{wu:2025:CEC3,
  author =       "Zhicheng Wu and Bing Xue and Mengjie Zhang",
  title =        "Multitree {GP-Based} Feature Learning for Multimodal
                 Medical Image Classification",
  booktitle =    "2025 IEEE Congress on Evolutionary Computation (CEC)",
  year =         "2025",
  editor =       "Yaochu Jin and Thomas Baeck",
  address =      "Hangzhou, China",
  month =        "8-12 " # jun,
  publisher =    "IEEE",
  keywords =     "genetic algorithms, genetic programming,
                 Representation learning, Three-dimensional displays,
                 Magnetic resonance imaging, Feature extraction,
                 Vectors, Robustness, Data mining, Medical diagnostic
                 imaging, Image classification, Multimodal, Medical
                 Image Classification, Feature Learning, Fusion
                 Strategy",
  isbn13 =       "979-8-3315-3432-5",
  DOI =          "10.1109/CEC65147.2025.11043090",
  abstract =     "Multimodal medical image classification (MMIC) refers
                 to the process of extracting and combining information
                 from various modalities to classify medical images,
                 ultimately improving diagnostic accuracy. Most existing
                 methods extract discriminative features from different
                 modalities for specific tasks but often lack
                 adaptability to other tasks. Additionally, they suffer
                 from poor interpretability, which is critical in
                 medical image analysis, as understanding the
                 decision-making process is essential. Genetic
                 programming (GP), particularly multitree GP, provides a
                 flexible framework for evolving multimodal features.
                 However, current multitree GP methods only perform
                 feature-level fusion and remain underexplored in
                 medical multimodal feature learning. To address these
                 issues, this paper proposes a novel multitree GP
                 method, Multimodal Feature GP (MFGP), to automatically
                 extract informative feature vectors from different
                 modalities. To fully use both modality-specific
                 features and fused multimodal features, we integrate
                 feature-level fusion and decision-level fusion
                 strategies into our framework. The performance of the
                 proposed method is evaluated on two distinct MMIC
                 tasks, namely polyp classification and glaucoma
                 classification, representing different medical
                 scenarios. The results are compared with both
                 single-modality and multimodal methods. Experimental
                 results demonstrate that the proposed method
                 significantly outperforms all single-modality
                 approaches and most multimodal benchmark methods.
                 Further analysis reveals that the evolved models can
                 effectively capture the unique characteristics of
                 different modalities.",
  notes =        "also known as \cite{11043090}",
}