NEURAL NETWORKS
ISSN: 0893-6080
Self-citation rate: 9.6%
Number of articles: 202
Citations: 14,065
Impact factor: 9.647
Acceptance rate: no data available
Publication frequency: monthly
Review period: 1
Review fee: 0
Page charges: no data available
Articles per year: 202
Articles by Chinese authors: 125

Submission Guidelines / Journal Description:

Neural Networks is an international journal appearing nine times each year that publishes original research and review articles concerned with the modelling of brain and behavioral processes and the application of these models to computer and related technologies. Models aimed at the explanation and prediction of biological data and models aimed at the solution of technological problems are both solicited, as are mathematical and computational analyses of both types of models. Neural Networks serves as a central, interdisciplinary publication for all researchers in the field and its editors represent a range of fields including psychology, neurobiology, mathematics, physics, computer science, and engineering.

Latest Articles
  • Contrastive Graph Representation Learning with Adversarial Cross-View Reconstruction and Information Bottleneck.

    Graph Neural Networks (GNNs) have received extensive research attention due to their powerful information aggregation capabilities. Despite the success of GNNs, most of them suffer from the popularity bias issue in a graph caused by a small number of popular categories. Additionally, real graph datasets always contain incorrect node labels, which hinders GNNs from learning effective node representations. Graph contrastive learning (GCL) has been shown to be effective in solving these problems for node classification tasks. Most existing GCL methods are implemented by randomly removing edges and nodes to create multiple contrasting views, and then maximizing the mutual information (MI) between these views to improve node feature representations. However, maximizing the mutual information between multiple contrasting views may lead the model to learn redundant information irrelevant to the node classification task. To tackle this issue, we propose Contrastive Graph Representation Learning with Adversarial Cross-view Reconstruction and Information Bottleneck (CGRL) for node classification, which adaptively learns to mask nodes and edges in the graph to obtain the optimal graph structure representation. Furthermore, we introduce the information bottleneck principle into GCL to remove redundant information in multiple contrasting views while retaining as much information as possible about node classification. Moreover, we add noise perturbations to the original views and reconstruct the augmented views by constructing adversarial views to improve the robustness of node feature representations. We also verify through theoretical analysis the effectiveness of the cross-view reconstruction mechanism and the information bottleneck principle in capturing graph structure information and improving model generalization. Extensive experiments on real-world public datasets demonstrate that our method significantly outperforms existing state-of-the-art algorithms.

    Citations: -  Published: 1970
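
    The method above combines cross-view graph contrastive learning with an information bottleneck objective. Purely as an illustrative sketch (not the authors' CGRL code), the snippet below pairs a symmetric InfoNCE loss between two view embeddings with a crude norm-based compression penalty standing in for the bottleneck term; all tensors, dimensions, and the 0.01 weight are toy assumptions.

        import torch
        import torch.nn.functional as F

        def info_nce(z1, z2, temperature=0.5):
            # Symmetric InfoNCE between two view embeddings of shape (N, D).
            z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
            logits = z1 @ z2.t() / temperature            # (N, N) cosine similarities
            labels = torch.arange(z1.size(0))             # positives sit on the diagonal
            return 0.5 * (F.cross_entropy(logits, labels) +
                          F.cross_entropy(logits.t(), labels))

        def bottleneck_penalty(z):
            # Crude compression term standing in for minimizing redundant view information.
            return z.pow(2).mean()

        # Toy node embeddings from two augmented graph views.
        z1 = torch.randn(128, 64, requires_grad=True)
        z2 = torch.randn(128, 64, requires_grad=True)
        loss = info_nce(z1, z2) + 0.01 * (bottleneck_penalty(z1) + bottleneck_penalty(z2))
        loss.backward()  # in a full pipeline the gradients flow into the graph encoder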

  • Multi-scale graph harmonies: Unleashing U-Net's potential for medical image segmentation through contrastive learning.

    Medical image segmentation is essential for accurately representing tissues and organs in scans, improving diagnosis, guiding treatment, enabling quantitative analysis, and advancing AI-assisted healthcare. Organs and lesion areas in medical images have complex geometries and spatial relationships, and variations in the size and location of lesion areas make automatic segmentation challenging. While Convolutional Neural Networks (CNNs) and Transformers have proven effective in segmentation tasks, they still have inherent limitations: because these models treat images as regular grids or sequences of patches, they struggle to learn the geometric features of an image, which are essential for capturing irregularities and subtle details. In this paper, we propose a novel segmentation model, MSGH, which utilizes a Graph Neural Network (GNN) to fully exploit geometric representations for guiding image segmentation. In MSGH, we combine multi-scale features from the Pyramid Feature and Graph Feature branches to facilitate information exchange across different networks. We also leverage graph contrastive representation learning to extract features through self-supervised learning, mitigating the impact of category imbalance in medical images. Moreover, we optimize the decoder by integrating a Transformer to enhance the model's capability to restore intricate image details. We conducted a comprehensive experimental study on the ACDC, Synapse, and BraTS datasets to validate the effectiveness and efficiency of MSGH. Our method achieved improvements of 2.56-13.41%, 1.04-5.11%, and 1.77-3.35% in Dice score on the three segmentation tasks, respectively. The results demonstrate that our model consistently performs well compared with state-of-the-art models. The source code is accessible at https://github.com/Dorothywujie/MSGH.

    Citations: -  Published: 1970
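
    The gains above are reported in Dice score. The snippet below is a generic soft Dice loss for binary segmentation, shown only to make that metric concrete; it is not taken from the MSGH repository linked above, and the tensor shapes are toy assumptions.

        import torch

        def soft_dice_loss(probs, target, eps=1e-6):
            # probs: predicted foreground probabilities in [0, 1]; target: binary mask; both (N, H, W).
            probs, target = probs.flatten(1), target.flatten(1)
            inter = (probs * target).sum(dim=1)
            union = probs.sum(dim=1) + target.sum(dim=1)
            dice = (2 * inter + eps) / (union + eps)      # per-sample Dice coefficient
            return 1.0 - dice.mean()                      # loss = 1 - mean Dice

        probs = torch.rand(2, 64, 64, requires_grad=True)
        target = (torch.rand(2, 64, 64) > 0.5).float()
        print(soft_dice_loss(probs, target).item())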

  • Multi-compartment neuron and population encoding powered spiking neural network for deep distributional reinforcement learning.

    Inspired by the brain's information processing using binary spikes, spiking neural networks (SNNs) offer significant reductions in energy consumption and are more adept at incorporating multi-scale biological characteristics. In SNNs, spiking neurons serve as the fundamental information processing units. However, in most models, these neurons are typically simplified, focusing primarily on the leaky integrate-and-fire (LIF) point neuron model while neglecting the structural properties of biological neurons. This simplification hampers the computational and learning capabilities of SNNs. In this paper, we propose a brain-inspired deep distributional reinforcement learning algorithm based on SNNs, which integrates a bio-inspired multi-compartment neuron (MCN) model with a population coding approach. The proposed MCN model simulates the structure and function of apical dendritic, basal dendritic, and somatic compartments, achieving computational power comparable to that of biological neurons. Additionally, we introduce an implicit fractional embedding method based on population coding of spiking neurons. We evaluated our model on Atari games, and the experimental results demonstrate that it surpasses the vanilla FQF model, which utilizes traditional artificial neural networks (ANNs), as well as the Spiking-FQF models that are based on ANN-to-SNN conversion methods. Ablation studies further reveal that the proposed multi-compartment neuron model and the quantile fraction implicit population spike representation significantly enhance the performance of MCS-FQF while also reducing power consumption.

    Citations: -  Published: 1970
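
    The abstract contrasts its multi-compartment neuron with the standard leaky integrate-and-fire (LIF) point neuron. The sketch below simulates a plain discrete-time LIF neuron so that the baseline being extended is concrete; the time constant, threshold, and reset values are illustrative assumptions, not parameters from the paper.

        import torch

        def lif_step(v, x, tau=2.0, v_th=1.0, v_reset=0.0):
            # One discrete-time LIF update: leak toward v_reset, integrate input x,
            # emit a binary spike when v crosses v_th, then hard-reset.
            v = v + (x - (v - v_reset)) / tau
            spike = (v >= v_th).float()
            v = v * (1.0 - spike) + v_reset * spike
            return v, spike

        v = torch.zeros(4)  # membrane potentials of four toy neurons
        for t in range(10):
            v, spike = lif_step(v, x=torch.rand(4))
            print(t, spike.tolist())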

  • GTC: GNN-Transformer co-contrastive learning for self-supervised heterogeneous graph representation.

    Graph Neural Networks (GNNs) have emerged as a powerful tool for various graph tasks due to the message-passing mechanism's strong local information aggregation ability. However, over-smoothing has always hindered GNNs from going deeper and capturing multi-hop neighbors. Meanwhile, most methods follow a semi-supervised learning paradigm, and label scarcity limits their applicability in real-world systems. Unlike GNNs, Transformers can model global information and multi-hop interactions via multi-head self-attention, and a properly designed Transformer structure shows more immunity to over-smoothing. So, can we propose a novel framework that combines GNNs and Transformers, integrating the GNN's local information aggregation with the Transformer's global information modeling to eliminate the over-smoothing problem and achieve self-supervised graph representation learning? To realize this, this paper proposes a collaborative GNN-Transformer learning scheme and constructs the GTC architecture. GTC leverages the GNN and Transformer branches to encode node information from different views, and establishes contrastive learning tasks based on the encoded cross-view information to realize self-supervised heterogeneous graph representation. For the Transformer branch, we propose Metapath-aware Hop2Token and CG-Hetphormer, which cooperate with GNNs to attentively encode neighborhood information from different levels. As far as we know, this is the first attempt in the field of graph representation learning to utilize both GNNs and Transformers to collaboratively capture different view information and conduct cross-view contrastive learning. Experiments on real datasets show that GTC exhibits superior performance compared with state-of-the-art methods. Code is available at https://github.com/PHD-lanyu/GTC.

    Citations: -  Published: 1970
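
    GTC feeds multi-hop neighborhood information to its Transformer branch through Metapath-aware Hop2Token. As a loose, hypothetical illustration of the general hop-to-token idea only (not the metapath-aware version described in the paper), the sketch below stacks row-normalized k-hop aggregates of node features into a per-node token sequence.

        import torch

        def hops_to_tokens(adj, x, num_hops=3):
            # Token k for each node is the row-normalized k-hop aggregate A_norm^k @ X;
            # returns a (N, num_hops + 1, D) token sequence, with hop 0 being the raw features.
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
            a_norm = adj / deg
            tokens, h = [x], x
            for _ in range(num_hops):
                h = a_norm @ h              # aggregate one hop further out
                tokens.append(h)
            return torch.stack(tokens, dim=1)

        adj = (torch.rand(5, 5) > 0.5).float()  # toy adjacency matrix
        x = torch.randn(5, 16)                  # toy node features
        print(hops_to_tokens(adj, x).shape)     # torch.Size([5, 4, 16])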

  • Analog Spiking U-Net integrating CBAM&ViT for medical image segmentation.

    Spiking neural networks (SNNs) are gaining popularity in AI research as a low-power alternative in deep learning due to their sparse properties and biological interpretability. Using SNNs for dense prediction tasks is becoming an important research area. In this paper, we first propose a novel modification of the conventional Spiking U-Net architecture that adjusts the firing positions of neurons. The modified network model, named Analog Spiking U-Net (AS U-Net), is capable of incorporating the Convolutional Block Attention Module (CBAM) into the domain of SNNs. This is the first successful implementation of CBAM in SNNs, which has the potential to improve the segmentation performance of SNN models while decreasing information loss. The proposed AS U-Net (with CBAM&ViT) is then trained by direct encoding on a comprehensive dataset obtained by merging several diabetic retinal vessel segmentation datasets. Based on the experimental results, the proposed SNN model achieves the highest segmentation accuracy in retinal vessel segmentation for diabetes mellitus, surpassing other SNN-based models and most ANN-based related models. In addition, under the same structure, our model demonstrates performance comparable to the ANN model. The model also achieves state-of-the-art (SOTA) results in comparative experiments when both accuracy and energy consumption are considered (Fig. 1). At the same time, the ablative analysis of CBAM further confirms its feasibility and effectiveness in SNNs, suggesting a viable approach for subsequent deployment and hardware chip applications. Finally, we conduct extensive generalization experiments on the same type of segmentation task (ISBI and ISIC), a more complex multi-segmentation task (Synapse), and a series of image generation tasks (MNIST, Day2night, Maps, Facades) to demonstrate the generality of the proposed method.

    Citations: -  Published: 1970
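
    The core addition above is CBAM inside a spiking U-Net. For reference, the sketch below is a standard, non-spiking CBAM-style block (channel attention followed by spatial attention) in plain PyTorch; it does not reproduce the analog-spiking adaptation or the ViT component described in the paper, and the reduction ratio is an arbitrary choice.

        import torch
        import torch.nn as nn

        class CBAMBlock(nn.Module):
            # Generic CBAM-style attention: channel attention, then spatial attention.
            def __init__(self, channels, reduction=8):
                super().__init__()
                self.mlp = nn.Sequential(
                    nn.Linear(channels, channels // reduction), nn.ReLU(),
                    nn.Linear(channels // reduction, channels))
                self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

            def forward(self, x):                                   # x: (N, C, H, W)
                avg = self.mlp(x.mean(dim=(2, 3)))                  # channel descriptor (average pool)
                mx = self.mlp(x.amax(dim=(2, 3)))                   # channel descriptor (max pool)
                x = x * torch.sigmoid(avg + mx)[:, :, None, None]   # apply channel attention
                s = torch.cat([x.mean(dim=1, keepdim=True),
                               x.amax(dim=1, keepdim=True)], dim=1) # (N, 2, H, W) spatial descriptors
                return x * torch.sigmoid(self.spatial(s))           # apply spatial attention

        print(CBAMBlock(16)(torch.randn(2, 16, 32, 32)).shape)      # torch.Size([2, 16, 32, 32])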
