Published outputs:
WOK papers: 183; Chinese core journal papers: 7; other papers: 3; invention patents: 1;
An Efficient Blur Kernel Estimation Method for Blind Image Super-Resolution
You only compress once: Towards effective and elastic BERT compression via exploit-explore stochastic nature gradient
ARLP: Automatic multi-agent transformer reinforcement learning pruner for one-shot neural network pruning
Deep hybrid transformer network for robust modulation classification in wireless communications
Move and Act: Enhanced Object Manipulation and Background Integrity for Image Editing
OptG: Optimizing Gradient-Driven Criteria in Network Sparsity
Actor-Critic With Synthesis Loss for Solving Approximation Biases
ConCLVD: Controllable Chinese Landscape Video Generation via Diffusion Model
Rethinking 3D Dense Caption and Visual Grounding in A Unified Framework through Prompt-based Localization
Local representation-based neighbourhood for robust classification
Learning Image Demoiréing from Unpaired Real Data
AffineQuant: Affine Transformation Quantization for Large Language Models
Multi-scale representation of surface-enhanced Raman spectroscopy data for deep learning-based liver cancer detection
EBFT: Effective and Block-Wise Fine-Tuning for Sparse LLMs
Shadow-aware dynamic convolution for shadow removal
Unified-Width Adaptive Dynamic Network for All-In-One Image Restoration
Enhancing GAN Compression by Image Probability Distribution Distillation
Binarizing Super-Resolution Neural Network Without Batch Normalization
Frequency Domain Distillation for Data-Free Quantization of Vision Transformer
Dynamic Neural Networks for Adaptive Implicit Image Compression
Large Kernel Convolutional Attention Based U-Net Network for Inpainting Oracle Bone Inscription
Uncovering the Over-Smoothing Challenge in Image Super-Resolution: Entropy-Based Quantification and Contrastive Optimization
Functionally Similar Multi-Label Knowledge Distillation