Neural Breakdown with AVB
United States
Joined 22 Feb 2022
AI, Deep Learning, Machine Learning
I went through 100+ LLM papers from 2024 - these 17 were the best!
This is a rundown of the best NLP, Transformer, and LLM papers of 2024. The papers in this video cover a wide range of disciplines - from core deep learning research, novel tokenization schemes, resurrecting RNNs, and building on the Mamba architecture, to the latest prompting techniques for efficient retrieval, continual learning, and maximizing code intelligence.
To support the channel financially and get access to the material used in all channel videos, consider joining our Patreon: www.patreon.com/NeuralBreakdownwithAVB
Links to papers:
ICL: arxiv.org/abs/2412.15563
BLT: arxiv.org/abs/2412.09871
Were RNNs All We Needed? - arxiv.org/abs/2410.01201
xLSTM: Extended Long Short-Term Memory - arxiv.org/abs/2405.04517
The Era of 1-bit LLMs - arxiv.org/abs/2402.17764
LLM2Vec - arxiv.org/abs/2404.05961
Moshi - arxiv.org/abs/2410.00037
Mixtral of Experts - arxiv.org/abs/2401.04088
Transformers are SSMs - arxiv.org/abs/2405.21060
SliceGPT: Compress LLMs - arxiv.org/abs/2401.15024
The AI Scientist - arxiv.org/abs/2408.06292
RAGs vs Long Context LLMs - arxiv.org/abs/2407.16833
CriticGPT - arxiv.org/pdf/2407.00215
DocLLM - arxiv.org/abs/2401.00908
SWE-Agent - arxiv.org/abs/2405.15793
Self-Rewarding Language Models - arxiv.org/abs/2401.10020
DeepSeek Coder - arxiv.org/abs/2401.14196
Videos you may consider watching:
Attention to Transformers playlist: ua-cam.com/play/PLGXWtN1HUjPfq0MSqD5dX8V7Gx5ow4QYW.html
50 concepts to know NLP: ua-cam.com/video/uocYQH0cWTs/v-deo.html
Guide to fine-tuning open source LLMs: ua-cam.com/video/bZcKYiwtw1I/v-deo.html
Generative Language Modeling from scratch: ua-cam.com/video/s3OUzmUDdg8/v-deo.html
Overview of LLM Training Framework (Apple Intelligence Case Study): ua-cam.com/video/Sah0dnu8Hxo/v-deo.html
0:00 - Intro
1:00 - Deep Learning papers
17:00 - Agentic AI papers
#nlp #deeplearning #transformers
Views: 812
Videos
All the Computer Vision AI research you may have missed in 2024...
1.3K views · 21 days ago
This is a rundown of the best Computer Vision papers of 2024. The papers covered in this video span a wide range of disciplines - from video diffusion, to conditional image generation, to monocular depth estimation, promptable image segmentation, object detection, and even world engines. Hope you all enjoy! To support the channel financially, and get access to the material...
Visually explaining Byte Latent Transformers - LLMs just got a massive breakthrough!
6K views · 1 month ago
In this video, we discuss Meta's Byte Latent Transformer (BLT) model from the paper "Byte Latent Transformer: Patches Scale Better Than Tokens". Quite literally, we go over each word in that title and what it means. Personally, I think dynamic compute allocation is a huge deal and this feels like a pretty exciting research direction for LLMs going forward. I tried to p...
Turns out Attention wasn't all we needed - How have modern Transformer architectures evolved?
4.8K views · 1 month ago
In this video, we discuss the evolution of the classic neural attention mechanism, from early adoptions of Bahdanau Attention to the Self-Attention and Causal Masked Attention introduced in the seminal "Attention Is All You Need" paper. This video covers more advanced forms of Multi-Head Attention, such as Multi-Query Attention and Grouped-Query Attention (sketched below). Along the way, w...
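Not the video's code, but a minimal sketch of the grouped-query idea in PyTorch: a small number of key/value heads is shared across groups of query heads (head counts and shapes here are illustrative; n_kv == 1 recovers Multi-Query Attention, n_kv == n_q recovers standard Multi-Head Attention).

```python
import torch

def grouped_query_attention(q, k, v):
    # q: (batch, n_q, seq, d); k, v: (batch, n_kv, seq, d) with n_kv dividing n_q
    batch, n_q, seq, d = q.shape
    n_kv = k.shape[1]
    # Repeat each K/V head so it serves its whole group of query heads.
    k = k.repeat_interleave(n_q // n_kv, dim=1)
    v = v.repeat_interleave(n_q // n_kv, dim=1)
    scores = (q @ k.transpose(-2, -1)) / d ** 0.5   # (batch, n_q, seq, seq)
    return torch.softmax(scores, dim=-1) @ v        # (batch, n_q, seq, d)

# 8 query heads sharing 2 key/value heads (illustrative sizes).
out = grouped_query_attention(torch.randn(2, 8, 16, 64),
                              torch.randn(2, 2, 16, 64),
                              torch.randn(2, 2, 16, 64))
```

The payoff is a smaller KV cache at inference time: only n_kv heads of keys and values need to be stored per token instead of n_q.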
Vision Transformers - The big picture of how and why it works so well.
2.2K views · 2 months ago
This in-depth tutorial is about writing Vision Transformer models from scratch in PyTorch. I explain all the concepts you need to understand what goes on under the hood in Self-Attention, ViTs, and how they compare with Convolutional Neural Nets (CNNs). I also tried to add some visualizations to help explain the important concepts, and walk you through every line of code to explain how all the math...
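For a taste of what "from scratch" looks like, here is a minimal sketch (not the tutorial's exact code) of the ViT front end: a strided convolution splits the image into non-overlapping patches and linearly embeds each one.

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    def __init__(self, img_size=224, patch=16, dim=768):
        super().__init__()
        # Conv2d with stride == kernel size cuts the image into patches
        # and projects each patch to a `dim`-dimensional token.
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        n_patches = (img_size // patch) ** 2
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))               # [CLS] token
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))   # learned positions

    def forward(self, x):                                 # x: (batch, 3, H, W)
        x = self.proj(x).flatten(2).transpose(1, 2)       # (batch, n_patches, dim)
        cls = self.cls.expand(x.shape[0], -1, -1)
        return torch.cat([cls, x], dim=1) + self.pos      # (batch, n_patches+1, dim)

tokens = PatchEmbed()(torch.randn(1, 3, 224, 224))        # (1, 197, 768)
```

Everything after this step is a standard Transformer encoder over the patch tokens, which is exactly why ViTs carry so much less built-in image-specific bias than CNNs.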
From Attention to Generative Language Models - One line of code at a time!
4.8K views · 3 months ago
This in-depth tutorial is about writing causal generative language models from scratch in PyTorch. I explain all the concepts you need to understand what goes on under the hood in Transformers and attention models. I also tried to add some visualizations to help explain each concept, and walk you through every line of code to explain how all the math works. I aimed for the right balance of complex...
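For flavor, a minimal sketch of the core mechanism, causal masked self-attention, assuming a single head and unbatched input (not the tutorial's exact code):

```python
import torch
import torch.nn.functional as F

def causal_attention(x, w_q, w_k, w_v):
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # (seq, d) each
    scores = q @ k.T / k.shape[-1] ** 0.5            # (seq, seq)
    # Upper-triangular mask: position i may not attend to positions > i,
    # which is what makes next-token generation possible.
    mask = torch.triu(torch.ones_like(scores), diagonal=1).bool()
    scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v             # (seq, d)

d = 64
x = torch.randn(10, d)                               # 10 token embeddings
out = causal_attention(x, torch.randn(d, d),
                       torch.randn(d, d), torch.randn(d, d))
```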
The RAG Visual Breakdown - The Ultimate guide to building powerful LLM pipelines!
6K views · 3 months ago
In this video, we talk about Retrieval Augmented Generation. The idea behind RAG is pretty simple - suppose you want to ask an LLM a question; instead of relying only on the LLM's pre-trained knowledge, you first retrieve relevant information from an external knowledge base. This retrieved information is then provided to the LLM along with the question, allowing it to generate a more informed a...
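A minimal sketch of that pipeline, using sentence-transformers for the retrieval step; the documents, model name, and the llm_generate call are illustrative stand-ins, not the video's exact setup.

```python
from sentence_transformers import SentenceTransformer
import numpy as np

# Toy knowledge base (stand-in for your real document chunks).
docs = ["RAG retrieves external context before generation.",
        "LoRA adds low-rank adapters for cheap fine-tuning."]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = encoder.encode(docs, normalize_embeddings=True)

def retrieve(question, k=1):
    # Normalized embeddings make the dot product a cosine similarity.
    q_emb = encoder.encode([question], normalize_embeddings=True)
    scores = doc_emb @ q_emb.T
    return [docs[i] for i in np.argsort(-scores[:, 0])[:k]]

question = "What does RAG do?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# answer = llm_generate(prompt)   # hypothetical LLM call of your choice
```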
Finetune LLMs to teach them ANYTHING with Huggingface and Pytorch | Step-by-step tutorial
19K views · 3 months ago
This in-depth tutorial is about fine-tuning LLMs locally with Huggingface Transformers and PyTorch. We use Meta's new Llama-3.2-1B-Instruct model and teach it to predict paper categories using LoRA adapters. Along the way I break down all the major things you must know about fine-tuning, from prompting, creating datasets, generating input-output pairs, loss functions, PyTorch optimizers, peft L...
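A minimal sketch of the LoRA setup with Hugging Face PEFT; the rank, alpha, and target modules below are illustrative choices, not necessarily the video's exact settings.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-3.2-1B-Instruct"   # the model used in the video
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,      # illustrative hyperparameters
    target_modules=["q_proj", "v_proj"],        # attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)           # base weights stay frozen
model.print_trainable_parameters()              # typically well under 1%
```

From here the model trains like any other Transformers model; only the small adapter matrices receive gradients, which is what makes local fine-tuning affordable.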
Llama 3.2 Vision - How to make a Multimodal project | Step by Step tutorial
2.2K views · 4 months ago
In this video, I break down the core concepts of Meta's new Llama 3.2 model - going over training procedures, structured pruning, knowledge distillation, etc. I also show how to make a simple web app using Python Flask to run Llama 3.2 Vision, and make it play Pictionary! #ai #meta #llama3 Patrons will get access to the code from all videos on my channel, including this one! Visit my Patreon link...
The complete TextGrad Tutorial - Easily optimize LLM prompts, math, and code!
3.5K views · 4 months ago
In this video, we discuss TextGrad, a brand-new LLM framework for text optimization. TextGrad is a Python package that provides a simple interface to implement LLM-"gradients" pipelines for text optimization! TextGrad implements backpropagation through text feedback provided by LLMs, strongly building on the gradient metaphor. We will learn TextGrad through 5 detailed examples - ...
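A sketch of the gradient metaphor, following TextGrad's published quickstart pattern (verify the exact signatures against the current docs): an LLM critiques a text variable in the "backward" pass, and the optimizer rewrites the variable in the "step".

```python
import textgrad as tg

# The backward engine is the LLM that writes the textual "gradients".
tg.set_backward_engine("gpt-4o", override=True)

solution = tg.Variable(
    "To solve 3x + 2 = 11, x = 4.",              # deliberately wrong: x = 3
    requires_grad=True,
    role_description="a proposed solution to a math problem",
)
loss_fn = tg.TextLoss("Check this solution carefully and point out any errors.")
optimizer = tg.TGD(parameters=[solution])

loss = loss_fn(solution)   # LLM-written critique of the variable
loss.backward()            # turns the critique into textual "gradients"
optimizer.step()           # rewrites `solution` using those gradients
print(solution.value)      # should now contain the corrected x = 3
```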
So you think you know Text to Video Diffusion models?
2.9K views · 4 months ago
Video diffusion is the next frontier for generative AI. In this video we discuss the problem, the challenges, the solutions, and the seminal papers in the field, like Google's Imagen, Meta's Make-A-Video, Nvidia's Video Latent Diffusion Model (LDM), and OpenAI's Sora. Along the way, we discuss the core concepts of image diffusion models, like forward and reverse diffusion, UNet, convolution, and...
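For the forward-diffusion concept specifically, a minimal sketch of the standard closed-form noising step, x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps; the linear beta schedule here is illustrative.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # illustrative noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)    # cumulative signal retention

def noise_image(x0, t):
    # Jump straight to timestep t instead of adding noise step by step.
    eps = torch.randn_like(x0)
    xt = alpha_bar[t].sqrt() * x0 + (1 - alpha_bar[t]).sqrt() * eps
    return xt, eps   # the denoiser is trained to predict eps from (xt, t)

x0 = torch.randn(1, 3, 64, 64)    # stand-in for a normalized image
xt, eps = noise_image(x0, t=500)  # heavily noised version of x0
```

Reverse diffusion is the learned inverse: a UNet predicts eps so the sampler can peel the noise back off, one timestep at a time.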
How to write YOLO networks from the ground up for object detection
2.1K views · 4 months ago
In this video, we will program a YOLO neural network from scratch in Python and PyTorch! YOLO is a computer vision algorithm that detects objects in images (and videos). This is part 1 of a two-part series. This part is all about the deep learning behind YOLO models and how to write your own, plus how to preprocess images with proper data augmentation, and train special Feature Pyramid Networks (...
The Machine Learning behind Apple Intelligence - Blueprint of a Modern LLM
1.7K views · 5 months ago
Complete DSPy Tutorial - Master LLM Prompt Programming in 8 amazing examples!
29K views · 5 months ago
How does Segment Anything 2 (SAM 2) work? Paper and Network Architecture Explained!
7K views · 6 months ago
How Neural Nets estimate depth from 2D images? Monocular Depth Estimation Explained!
6K views · 6 months ago
The entire history of Computer Vision explained one great visualization at a time.
2.5K views · 7 months ago
Text to Image Diffusion AI Model from scratch - Explained one line of code at a time!
11K views · 8 months ago
Kolmogorov Arnold Networks (KAN) Paper Explained - An exciting new paradigm for Deep Learning?
59K views · 8 months ago
I made a game where all the LLMs have to lie to each other
1.3K views · 9 months ago
Coding Image Segmentation with UNet from first principles | Football Computer Vision
1.8K views · 9 months ago
But what does a trained Convolutional Neural Network actually learn? VISUALIZED!
3.2K views · 10 months ago
A historical look at the definitions of AGI - Alignment, Consciousness, Behavior
2.1K views · 11 months ago
Two Large Language Models DEBATE about AGI and Humanity + How I did it! (ChatGPT vs Mixtral)
1.6K views · 1 year ago
If LLMs are text models, how do they generate images?
7K views · 1 year ago
Here is how Transformers ended the tradition of Inductive Bias in Neural Nets
8K views · 1 year ago
The many amazing things about Self-Attention and why they work
5K views · 1 year ago
Neural Attention - This simple example will change how you think about it
6K views · 1 year ago
How Multi-Agent AI learn by continuously competing against themselves | Self Play
1.5K views · 1 year ago
Reinforcement Learning AI through 4 famous projects!
1.6K views · 1 year ago