News
1 week ago
Cut 90% of Fine-Tuning Cost—Still Beat Baselines on Text and Vision Benchmarks
Multimodal hyper-embeddings enable strong cross-task transfer; tiny tuned modules match or beat full fine-tunes on unseen domains.
1 week ago
Dataset Splits, Vision Encoder, and Hyper-PELT Implementation Details
Benchmarks span MRPC→GQA; text splits follow prior work, images downsampled to a 7×7 grid, visual encoder is frozen for fair param...
1 week ago
One Tiny Hypernetwork to Rule All Tasks and Modalities
This article surveys parameter-efficient tuning, V&L adaptation, and multitask hypernetworks—then frames a unified hyper-embedding...
1 week ago
Cut Fine-Tuning Cost: Adapt LMs to Multi-Modal Tasks with Hyper-Embeddings
GLUE and V&L results show near–full-tune accuracy, strong few-shot transfer, and far lower per-task storage than current adapter/p...