A Convolutional Sparse Representations Integration Strategy Based on Self-Attention Genetic Programming for Multimodal Image Fusion
Created by W.Langdon from
gp-bibliography.bib Revision:1.8620
@InProceedings{DBLP:conf/cec/LuoLJ25,
  author =       "Yi Luo and Chang Liu and Dong Ji",
  title =        "A Convolutional Sparse Representations Integration
                  Strategy Based on Self-Attention Genetic Programming
                  for Multimodal Image Fusion",
  booktitle =    "2025 IEEE Congress on Evolutionary Computation (CEC)",
  year =         "2025",
  editor =       "Yaochu Jin and Thomas Baeck",
  address =      "Hangzhou, China",
  month =        "8-12 " # jun,
  publisher =    "IEEE",
  keywords =     "genetic algorithms, genetic programming, Convolutional
                  codes, Measurement, Visualization, Correlation, Feature
                  extraction, Encoding, Image fusion, Optimization,
                  Biomedical imaging, multimodal image fusion,
                  self-attention, convolutional sparse coding",
  isbn13 =       "979-8-3315-3432-5",
  timestamp =    "Mon, 30 Jun 2025 01:00:00 +0200",
  biburl =       "https://dblp.org/rec/conf/cec/LuoLJ25.bib",
  bibsource =    "dblp computer science bibliography, https://dblp.org",
  URL =          "https://doi.org/10.1109/CEC65147.2025.11043003",
  DOI =          "10.1109/CEC65147.2025.11043003",
  abstract =     "Multimodal image fusion (MIF) seeks to amalgamate
                  complementary information from various image
                  modalities, offering a more comprehensive and accurate
                  representation of data. Convolutional sparse coding
                  (CSC) based methods have demonstrated effectiveness in
                  this domain. However, they encounter difficulties in
                  adaptively discerning and leveraging cross-modality
                  feature correlations, which essentially presents a
                  multi-objective optimisation challenge, such as
                  maximizing information retention while minimizing
                  feature distortion. To address this issue, we introduce
                  a self-attention based multi-objective genetic
                  programming (SA-MOGP) method. SA-MOGP frames the quest
                  for an optimal fusion strategy as a multi-objective
                  optimisation problem. By integrating the self-attention
                  mechanism into strongly typed genetic programming
                  (STGP), it enables efficient exploration of
                  associations between sparse features across different
                  modalities. Our approach is structured in three
                  hierarchical layers. The feature extraction (FE) layer
                  automatically generates Q, K, V sub-trees according to
                  a predefined function set. These are then input into
                  the self-attention (SA) layer for computation. Finally,
                  the output layer estimates the fusion weights. We use
                  the NSGA-II framework to solve this multi-objective
                  problem, aiming for Pareto front solutions. Experiments
                  on infrared-visible and medical image datasets attest
                  to the superior performance of the SA-MOGP method in
                  multimodal image fusion.",
  notes =        "also known as \cite{luo:2025:CEC} \cite{11043003}",
}