Your Daily AI Research tl;dr — 2022–07–19 🧠
A special ICML 2022 iteration!
Welcome to your official daily AI research tl;dr (often with code and news) for AI enthusiasts. Each day I share the most exciting papers I find, along with a one-line summary to help you quickly decide whether the article (and code) is worth investigating. I will also take this opportunity to share exciting daily news from the field.
Let’s get started with this special ICML 2022 iteration!
1️⃣ HyperPrompt: Prompt-based Task-Conditioning of Transformers
HyperPrompt is a novel architecture for prompt-based task-conditioning of self-attention in Transformers. It lets the network learn task-specific feature maps in which the hyper-prompts serve as task global memories for the queries to attend to, while also enabling flexible information sharing among tasks.
Link to the paper: https://arxiv.org/pdf/2203.00759.pdf
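To make the "task global memories" idea concrete, here is a minimal sketch of the general mechanism: learned task-specific prompt vectors are prepended to the keys and values of self-attention, so every query can attend to them. All class/parameter names and sizes here are illustrative assumptions, not the paper's actual implementation (which generates prompts with HyperNetworks).

```python
import torch
import torch.nn as nn

class PromptConditionedAttention(nn.Module):
    """Illustrative sketch (not the paper's code): task-specific prompts
    are prepended to attention keys/values as task global memories."""

    def __init__(self, d_model=64, n_tasks=3, prompt_len=4):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # One learned (prompt_len, d_model) key/value prompt per task.
        self.prompt_k = nn.Parameter(torch.randn(n_tasks, prompt_len, d_model))
        self.prompt_v = nn.Parameter(torch.randn(n_tasks, prompt_len, d_model))
        self.scale = d_model ** -0.5

    def forward(self, x, task_id):
        # x: (batch, seq_len, d_model)
        b = x.size(0)
        k, v = self.k(x), self.v(x)
        # Prepend this task's prompts so queries can attend to them
        # alongside the ordinary token keys/values.
        pk = self.prompt_k[task_id].expand(b, -1, -1)
        pv = self.prompt_v[task_id].expand(b, -1, -1)
        k = torch.cat([pk, k], dim=1)  # (b, prompt_len + seq_len, d_model)
        v = torch.cat([pv, v], dim=1)
        attn = torch.softmax(self.q(x) @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v  # (b, seq_len, d_model)
```

Because only the prompt parameters differ per task, the backbone weights are shared across tasks, which is where the flexible information sharing comes from.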
2️⃣ Understanding The Robustness in Vision Transformers
From the main author’s Twitter:
In this work, we unveil the interesting trinity among grouping, IB [(information bottleneck)] and robustness in ViTs [(vision transformers)].