• College of Medical Information Engineering, Shandong University of Traditional Chinese Medicine, Jinan 250355, P. R. China;
WEI Guohui, Email: bmie530@163.com

To address the high computational complexity of the Transformer in ultrasound thyroid nodule segmentation, and the loss of image detail or omission of key spatial information that traditional image sampling techniques incur on high-resolution two-dimensional ultrasound images with complex textures or uneven density, this paper proposes a thyroid nodule segmentation method that integrates the receptance weighted key value (RWKV) architecture with spherical geometry feature (SGF) sampling. The method captures details of adjacent regions through two-dimensional offset prediction and pixel-level adjustment of sampling positions, enabling precise segmentation. In addition, this study introduces a patch attention module (PAM) that refines the decoder feature maps with a regional cross-attention mechanism, allowing them to attend more precisely to the high-resolution features of the encoder. Experiments on the thyroid nodule segmentation dataset (TN3K) and the digital database for thyroid images (DDTI) show that the proposed method achieves dice similarity coefficients (DSC) of 87.24% and 80.79%, respectively, outperforming existing models while maintaining lower computational complexity. This approach may provide an efficient solution for the precise segmentation of thyroid nodules.
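The offset-based sampling idea described above can be sketched as follows: each pixel's sampling position is shifted by a predicted two-dimensional offset, and the feature map is resampled at the shifted (fractional) positions with bilinear interpolation. This is a minimal NumPy illustration under assumed conventions, not the paper's implementation; the function names `bilinear_sample` and `offset_sample` are hypothetical, and in the actual model the offsets would come from a learned prediction head rather than being supplied by hand.

```python
import numpy as np

def bilinear_sample(feat, ys, xs):
    """Bilinearly interpolate feat (H, W) at fractional coordinates (ys, xs)."""
    H, W = feat.shape
    # Integer corner indices, clamped so the 2x2 neighborhood stays in bounds.
    y0 = np.clip(np.floor(ys).astype(int), 0, H - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, W - 2)
    # Fractional parts (clamped to [0, 1] to replicate edges for out-of-range points).
    dy = np.clip(ys - y0, 0.0, 1.0)
    dx = np.clip(xs - x0, 0.0, 1.0)
    top = feat[y0, x0] * (1 - dx) + feat[y0, x0 + 1] * dx
    bot = feat[y0 + 1, x0] * (1 - dx) + feat[y0 + 1, x0 + 1] * dx
    return top * (1 - dy) + bot * dy

def offset_sample(feat, offsets):
    """Resample feat at grid positions shifted by per-pixel 2D offsets.

    feat:    (H, W) feature map
    offsets: (H, W, 2) predicted (dy, dx) shifts; a learned head would produce
             these in the actual segmentation network
    """
    H, W = feat.shape
    ys, xs = np.meshgrid(np.arange(H, dtype=float),
                         np.arange(W, dtype=float), indexing="ij")
    return bilinear_sample(feat, ys + offsets[..., 0], xs + offsets[..., 1])
```

With zero offsets the operation reduces to the identity, and fractional offsets blend neighboring pixels, which is how the adjusted sampling positions capture detail from adjacent regions.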

Citation: ZHU Licheng, WEI Guohui. Thyroid nodule segmentation method integrating receiving weighted key-value architecture and spherical geometric features. Journal of Biomedical Engineering, 2025, 42(3): 567-574. doi: 10.7507/1001-5515.202412009

Copyright © the editorial department of Journal of Biomedical Engineering of West China Medical Publisher. All rights reserved
