Even with everything that happened in the world this year, we still had the chance to see a lot of amazing research come out, especially in the field of artificial intelligence. Moreover, many important aspects were highlighted this year, like ethics, important biases, and much more. Artificial intelligence, and our understanding of the human brain and its link to AI, is constantly evolving, showing promising applications in the near future.
I agree. The Perceiver is definitely a super interesting paper by DeepMind! Quite promising.
The only thing is that I wanted to talk about transformers applied to computer vision with a model the reader could test right away, as most of my readers are pretty technical. This is why I chose this one over the Perceiver: the authors implemented the code, and it is easy to use.
Also, my friend Yannic Kilcher covered the Perceiver extremely well on his YouTube channel, and I did not want to do it again, as his explanation is perfect!
The goal was mainly to talk about how transformers CAN be applied to CV, and to show one example of how it can be achieved! The approach will surely change in the future, and will most certainly differ from how the Perceiver works, too.
But thank you for the remark. I completely agree with you.
What you see below is someone carefully creating a scene for a video game. It takes many hours of work by a professional just for a single object like this one.
I will talk about a randomly picked application of transformers from the 600 new papers published this week, adding nothing much to the field but improving the accuracy by 0.01% on one benchmark by tweaking some parameters.
I hope you are not too excited about this introduction because that was just to mess with transformers’ recent popularity. Of course, they are awesome and super useful in many cases, and most researchers are focusing on them, but other things exist in AI that are as exciting if not more! …
Here are the 3 most interesting research papers of the month, in case you missed any of them. It is a curated list of the latest breakthroughs in AI and Data Science by release date with a clear video explanation, link to a more in-depth article, and code (if applicable). Enjoy the read, and let me know if I missed any important papers in the comments, or by contacting me directly on LinkedIn!
Follow me on Medium to see this AI top 3 monthly!
This article is about what is most probably the next generation of neural networks for all computer vision applications: the transformer architecture. You've certainly already heard about this architecture in the field of natural language processing, or NLP, mainly with GPT-3, which made a lot of noise in 2020. Transformers can be used as a general-purpose backbone for many different applications, not only NLP. In a couple of minutes, you will know how the transformer architecture can be applied to computer vision with a new paper called the Swin Transformer by Ze Liu et al. from Microsoft Research.
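To give a rough intuition for what "applying transformers to vision" looks like in practice, here is a minimal NumPy sketch of one core idea behind Swin-style models: splitting a feature map into fixed-size, non-overlapping windows so attention can be computed locally within each window. This is my own illustrative simplification, not the paper's actual implementation; the function name and toy shapes are assumptions for the example.

```python
import numpy as np

def window_partition(x, window_size):
    """Split an (H, W, C) feature map into non-overlapping
    (window_size, window_size, C) windows, row-major order."""
    H, W, C = x.shape
    # (H//ws, ws, W//ws, ws, C) -> group rows and columns into blocks
    x = x.reshape(H // window_size, window_size,
                  W // window_size, window_size, C)
    # Bring the two block indices together, then flatten them
    windows = x.transpose(0, 2, 1, 3, 4)
    return windows.reshape(-1, window_size, window_size, C)

# Toy 8x8 feature map with 3 channels, partitioned into 4x4 windows
feat = np.arange(8 * 8 * 3, dtype=np.float32).reshape(8, 8, 3)
wins = window_partition(feat, 4)
print(wins.shape)  # (4, 4, 4, 3): four windows of 4x4x3
```

In the real model, self-attention then runs inside each of these windows (and the windows are shifted between layers so information flows across window borders), which keeps the cost linear in image size instead of quadratic.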
As we discussed in a previous post, it is now possible to train artificial intelligence (AI) to model beauty. Within a couple of years, the famous saying “Beauty is in the eye of the beholder” may have to change for “Beauty is in the AI of the beholder.”
Indeed, beauty is something rather complex and subjective. In a recent study, researchers showed that an artificial intelligence analyzing your tastes and interests could generate personally attractive faces.
I think you will agree that when you look at someone and find the person attractive, you cannot really explain why. There…