LLMs can "breed" their own prompts
- Published Jan 6, 2024
- arxiv.org/abs/...
Previous videos on prompt generation:
Large language models as optimizers: • Large language models ...
Automatic prompt engineering: • Read a paper: Automati...
vivekhaldar.com
x.com/vivekhaldar
Thank you for the video. Please consider critiquing the papers, mentioning each one's possible strengths and weaknesses. I am sure you have read many papers, so I'm curious to know your opinions.
Thanks for covering such interesting papers in the realm of prompt engineering! Excellent content! :)
Thanks for the kind words!
@VivekHaldar Where do you find those interesting papers, specifically in the domain of prompt engineering and complex LLM reasoning?
The main source these days is Twitter/X (that's where a lot of even pre-publication findings are posted), and random blogs etc. Then there's chasing citations of the papers I've already read (see connectedpapers.com). Lastly, there are conferences, but tbh for LLMs + AI a conference months away is just too slow.
@VivekHaldar Thanks! Any specific Twitter accounts you recommend following on those topics of prompt engineering and complex LLM reasoning?
Great paper, thanks for covering!
Also what a time to be alive. 😊
I can tell the voice quality in the video has significantly improved.
Used another mic... getting audio right (recording at the right gain, normalizing etc) is always iffy.
@VivekHaldar thank you for the adjustments, I have absolutely no complaints now :)
Very interesting. Too bad they haven't benchmarked something locally runnable like Llama 2. I wonder how much models like that gain from these kinds of prompts.
I think you'll need something powerful like GPT-4 (or equivalent) to get enough variety on the mutated prompts.
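The "breeding" idea the video title and these comments refer to can be sketched as a simple evolutionary loop: keep a population of prompts, ask a model to mutate them, score each variant on a task, and keep the fittest. This is a minimal toy sketch, not the paper's method: `mutate` and `fitness` below are hypothetical stand-ins (a real system would call a strong LLM such as GPT-4 to rewrite prompts, and score them by task accuracy on a dev set).

```python
import random

def mutate(prompt, rng):
    # Toy mutation: append a phrase from a fixed pool.
    # A real system would ask an LLM to rewrite the prompt instead.
    phrases = ["step by step", "carefully", "like an expert", "concisely"]
    return prompt + " " + rng.choice(phrases)

def fitness(prompt):
    # Toy fitness: count distinct words as a proxy for specificity.
    # A real system would score the prompt by task accuracy.
    return len(set(prompt.split()))

def breed(seed_prompt, generations=5, pop_size=4, rng_seed=0):
    rng = random.Random(rng_seed)
    population = [seed_prompt]
    for _ in range(generations):
        # Mutate every survivor, then keep the fittest pop_size prompts.
        offspring = [mutate(p, rng) for p in population]
        population = sorted(population + offspring,
                            key=fitness, reverse=True)[:pop_size]
    return population[0]

best = breed("Solve the problem")
print(best)
```

The commenter's point above is about the `mutate` step: a weak model tends to produce near-duplicate rewrites, so the population collapses; a stronger model gives the variation the selection step needs.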