Hyperdimensional Multimedia Perception and Frontier Security

Faculty of Applied Sciences, Macao Polytechnic University

PFPS: Polymerized Feature Panoptic Segmentation Based on Fully Convolutional Networks


Journal article


Shucheng Ji, Xiaochen Yuan, Junqi Bao, Tong Liu, Yang Lian, Guoheng Huang, Guo Zhong
IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 9, 2025, pp. 2584-2596


APA
Ji, S., Yuan, X., Bao, J., Liu, T., Lian, Y., Huang, G., & Zhong, G. (2025). PFPS: Polymerized Feature Panoptic Segmentation Based on Fully Convolutional Networks. IEEE Transactions on Emerging Topics in Computational Intelligence, 9, 2584–2596. https://doi.org/10.1109/TETCI.2024.3515004


Chicago/Turabian
Ji, Shucheng, Xiaochen Yuan, Junqi Bao, Tong Liu, Yang Lian, Guoheng Huang, and Guo Zhong. “PFPS: Polymerized Feature Panoptic Segmentation Based on Fully Convolutional Networks.” IEEE Transactions on Emerging Topics in Computational Intelligence 9 (2025): 2584–2596.


MLA
Ji, Shucheng, et al. “PFPS: Polymerized Feature Panoptic Segmentation Based on Fully Convolutional Networks.” IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 9, 2025, pp. 2584–96, doi:10.1109/TETCI.2024.3515004.


BibTeX

@article{ji2025a,
  title = {PFPS: Polymerized Feature Panoptic Segmentation Based on Fully Convolutional Networks},
  year = {2025},
  journal = {IEEE Transactions on Emerging Topics in Computational Intelligence},
  pages = {2584-2596},
  volume = {9},
  doi = {10.1109/TETCI.2024.3515004},
  author = {Ji, Shucheng and Yuan, Xiaochen and Bao, Junqi and Liu, Tong and Lian, Yang and Huang, Guoheng and Zhong, Guo}
}

[Figure: Framework of Polymerized Feature Panoptic Segmentation (PFPS)]
Abstract: Panoptic segmentation requires predicting a pixel-level mask with a category label for each object in an image. In recent years, panoptic segmentation has gained increasing attention because it helps machines understand objects and their environment in many fields, such as medical imaging, remote sensing, and autonomous driving. However, existing panoptic segmentation methods usually struggle with multi-scale object segmentation and boundary localization. In this paper, we propose Polymerized Feature Panoptic Segmentation (PFPS), which enhances the network's feature representation ability by polymerizing the extracted stage features. Specifically, we propose a Generalization-Enhanced Stage Feature Generation Module (GSFGM) to extract and enhance the stage features. Within the GSFGM, a novel Sampled and Concated Feature Generation (SCFG) component polymerizes the convolved backbone features to enhance multi-scale feature representation. We then propose a Stage Feature Re-weight Module (SFRM) so that the network can learn efficient information from the massive number of channels. Moreover, we propose a Unified Encoder Module (UEM) that provides spatial information and compresses the high-dimensional features by coordinating convolution operations with channel attention. To demonstrate the superiority of the proposed PFPS, we conduct experiments on the COCO-2017 and Cityscapes validation datasets. The experimental results indicate that PFPS achieves PQ of 43.0%, SQ of 80.4%, RQ of 51.9%, PQth of 48.6%, SQth of 82.6%, RQth of 58.1%, and PQst of 34.6% on the COCO-2017 validation dataset, and PQ of 61.7% and PQst of 67.9% on the Cityscapes validation dataset.
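The abstract does not give implementation details of the proposed modules. As an illustrative sketch only, the channel re-weighting idea behind the SFRM can be understood as squeeze-and-excitation-style channel attention: globally pool each channel, pass the pooled vector through a small bottleneck, and use sigmoid gates to rescale the channels. The function name, weight shapes, and reduction ratio below are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

def channel_reweight(feat, w1, w2):
    """Illustrative squeeze-and-excitation-style channel re-weighting.

    feat: (C, H, W) feature map; w1: (C, C//r); w2: (C//r, C).
    Returns feat with each channel scaled by a learned gate in (0, 1).
    This is a hypothetical sketch, not the SFRM from the paper.
    """
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    squeezed = feat.mean(axis=(1, 2))
    # Excitation: bottleneck MLP with ReLU, then sigmoid gating -> (C,)
    hidden = np.maximum(squeezed @ w1, 0.0)
    gates = 1.0 / (1.0 + np.exp(-(hidden @ w2)))
    # Re-weight: broadcast the per-channel gates over H and W
    return feat * gates[:, None, None]

# Toy usage with random weights (reduction ratio r = 4)
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 4
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C, C // r)) * 0.1
w2 = rng.standard_normal((C // r, C)) * 0.1
out = channel_reweight(feat, w1, w2)
```

Because the gates lie strictly between 0 and 1, the re-weighting attenuates less informative channels while preserving the feature map's shape, which is the general motivation for learning channel importance from a large number of concatenated channels.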