stereoplegic's Collections
Convolution
Trellis Networks for Sequence Modeling • arXiv:1810.06682 • 1 upvote
Pruning Very Deep Neural Network Channels for Efficient Inference • arXiv:2211.08339 • 1 upvote
LAPP: Layer Adaptive Progressive Pruning for Compressing CNNs from Scratch • arXiv:2309.14157 • 1 upvote
Mamba: Linear-Time Sequence Modeling with Selective State Spaces • arXiv:2312.00752 • 138 upvotes
Interpret Vision Transformers as ConvNets with Dynamic Convolutions • arXiv:2309.10713 • 1 upvote
EfficientFormer: Vision Transformers at MobileNet Speed • arXiv:2206.01191 • 1 upvote
Laughing Hyena Distillery: Extracting Compact Recurrences From Convolutions • arXiv:2310.18780 • 3 upvotes
Zoology: Measuring and Improving Recall in Efficient Language Models • arXiv:2312.04927 • 2 upvotes
FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores • arXiv:2311.05908 • 12 upvotes
Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model • arXiv:2401.09417 • 59 upvotes
LKCA: Large Kernel Convolutional Attention • arXiv:2401.05738 • 1 upvote
Parameter-Efficient Conformers via Sharing Sparsely-Gated Experts for End-to-End Speech Recognition • arXiv:2209.08326 • 1 upvote
StableSSM: Alleviating the Curse of Memory in State-space Models through Stable Reparameterization • arXiv:2311.14495 • 1 upvote
SegMamba: Long-range Sequential Modeling Mamba For 3D Medical Image Segmentation • arXiv:2401.13560 • 1 upvote
Graph-Mamba: Towards Long-Range Graph Sequence Modeling with Selective State Spaces • arXiv:2402.00789 • 2 upvotes
Convolutional State Space Models for Long-Range Spatiotemporal Modeling • arXiv:2310.19694 • 2 upvotes
Vivim: a Video Vision Mamba for Medical Video Object Segmentation • arXiv:2401.14168 • 2 upvotes
Attention or Convolution: Transformer Encoders in Audio Language Models for Inference Efficiency • arXiv:2311.02772 • 3 upvotes
Robust Mixture-of-Expert Training for Convolutional Neural Networks • arXiv:2308.10110 • 2 upvotes
Can Mamba Learn How to Learn? A Comparative Study on In-Context Learning Tasks • arXiv:2402.04248 • 30 upvotes
Simple Hardware-Efficient Long Convolutions for Sequence Modeling • arXiv:2302.06646 • 2 upvotes
Structured Pruning is All You Need for Pruning CNNs at Initialization • arXiv:2203.02549
Graph Mamba: Towards Learning on Graphs with State Space Models • arXiv:2402.08678 • 13 upvotes
Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs • arXiv:2003.00152 • 1 upvote
DenseMamba: State Space Models with Dense Hidden Connection for Efficient Large Language Models • arXiv:2403.00818 • 15 upvotes
MambaMixer: Efficient Selective State Space Models with Dual Token and Channel Selection • arXiv:2403.19888 • 10 upvotes
MambaByte: Token-free Selective State Space Model • arXiv:2401.13660 • 51 upvotes
Orchid: Flexible and Data-Dependent Convolution for Sequence Modeling • arXiv:2402.18508
Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling • arXiv:2406.07522 • 37 upvotes
Deconvolutional Paragraph Representation Learning • arXiv:1708.04729
ReMamba: Equip Mamba with Effective Long-Sequence Modeling • arXiv:2408.15496 • 10 upvotes