Your Daily AI Research tl;dr — 2022–07–19 🧠

A special ICML 2022 iteration!

Louis-François Bouchard
2 min read · Jul 19, 2022

Welcome to your official daily AI research tl;dr (often with code and news) for AI enthusiasts, where I share the most exciting papers I find each day, along with a one-line summary to help you quickly decide whether the article (and code) is worth investigating. I will also take this opportunity to share exciting daily news from the field.

Let’s get started with this special ICML 2022 iteration!

1️⃣ HyperPrompt: Prompt-based Task-Conditioning of Transformers

HyperPrompt is a novel architecture for prompt-based task-conditioning of self-attention in Transformers. It lets the network learn task-specific feature maps in which the hyper-prompts serve as task-level global memories for the queries to attend to, while also enabling flexible information sharing among tasks.

Link to the paper: https://arxiv.org/pdf/2203.00759.pdf
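The core idea, prepending learnable task-specific prompt vectors to the keys and values of self-attention so every query can attend to them as a shared task memory, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation; all names and shapes are assumptions for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_hyperprompt(Q, K, V, prompt_k, prompt_v):
    """Self-attention where task-specific hyper-prompts are prepended
    to the keys and values, acting as global task memories that every
    query token can attend to (illustrative sketch only)."""
    K_ext = np.concatenate([prompt_k, K], axis=0)  # (l + n, d)
    V_ext = np.concatenate([prompt_v, V], axis=0)  # (l + n, d)
    scores = Q @ K_ext.T / np.sqrt(Q.shape[-1])    # (n, l + n)
    return softmax(scores, axis=-1) @ V_ext        # (n, d)

# Toy shapes: 4 input tokens, model dim 8, 2 hyper-prompt tokens per task.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
prompt_k, prompt_v = rng.normal(size=(2, 8)), rng.normal(size=(2, 8))
out = attention_with_hyperprompt(Q, K, V, prompt_k, prompt_v)
print(out.shape)  # (4, 8): output keeps the input token count and dim
```

Note that the output length is unchanged; the prompts only widen what each query can attend over, and swapping in a different (`prompt_k`, `prompt_v`) pair conditions the same layer on a different task. In the paper these prompts are generated by hypernetworks, which this sketch omits.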

2️⃣ Understanding The Robustness in Vision Transformers

From the main author’s Twitter:

In this work, we unveil the interesting trinity among grouping, IB (information bottleneck), and robustness in ViTs (vision transformers).


Written by Louis-François Bouchard

I try to make Artificial Intelligence accessible to everyone. Ex-PhD student, AI Research Scientist, and YouTube (What’s AI). https://www.louisbouchard.ai/
