PromptHub
Joined Nov 30, 2023
Everything you need to know about In-Context Learning
Research papers
A Survey on In-context Learning
arxiv.org/pdf/2301.00234
Auto-ICL: In-Context Learning without Human Supervision
arxiv.org/pdf/2311.09263
Resources:
Our full guide on in-context learning
www.prompthub.us/blog/in-context-learning-guide
Few shot prompting guide
www.prompthub.us/blog/the-few-shot-prompting-guide
Comparing Latencies: Get Faster Responses From OpenAI, Azure, and Anthropic
www.prompthub.us/blog/comparing-latencies-get-faster-responses-from-openai-azure-and-anthropic
Automatic prompt engineer (APE) info
www.prompthub.us/blog/a-complete-guide-to-meta-prompting#automatic-prompt-engineer-(ape)
Templates
Auto In Context Learning Step 1
app.prompthub.us/templates/3715
Auto In Context Learning Step 2
app.prompthub.us/templates/3716
56 views
Videos
Before your next LLM project, watch this. A pre-prompt checklist
111 views · 14 days ago
Set up your next LLM-based feature or project for success. Learn how to quickly define success criteria, evals, and test cases to ensure you're shipping high-quality LLM experiences.
Resources:
Everything you need to do before prompting: Success criteria, test cases, evals
www.prompthub.us/blog/everything-you-need-to-do-before-prompting-success-criteria-test-cases-evals
7 different meta prompting methods and 3 free tools
695 views · 21 days ago
There is no reason to go it alone when it comes to prompt engineering. In the same way you can use LLMs to help you code or write, you can use them to help you create prompts. In this video we'll dive into some of the latest and greatest meta-prompting methods being researched and discovered.
Research papers:
Our complete Guide to Meta Prompting
www.prompthub.us/blog/a-complete-guide-to-meta-p...
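As a taste of the simplest variant, here is a minimal sketch of an LLM-refines-the-prompt loop. The call_llm helper is hypothetical, standing in for whatever model API you use; the methods covered in the video are more elaborate than this:

# Minimal meta-prompting loop: the model critiques and rewrites its own prompt.
# NOTE: call_llm is a hypothetical placeholder, not a real library function.
def call_llm(prompt: str) -> str:
    """Wrap your actual model call (OpenAI, Anthropic, etc.) here."""
    raise NotImplementedError

def refine_prompt(task: str, rounds: int = 3) -> str:
    # Draft an initial prompt for the task.
    prompt = call_llm(f"Write a clear prompt that instructs an LLM to: {task}")
    for _ in range(rounds):
        # Ask for a critique, then a rewrite that addresses it.
        critique = call_llm(
            f"Critique this prompt for ambiguity and missing constraints:\n{prompt}"
        )
        prompt = call_llm(
            f"Rewrite the prompt below to address the critique.\n"
            f"Prompt:\n{prompt}\n\nCritique:\n{critique}"
        )
    return prompt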
Watch this before relying on o1-preview
105 views · 1 month ago
o1-preview can reason! Kind of?
Research papers:
Measuring Faithfulness in Chain-of-Thought Reasoning
arxiv.org/pdf/2307.13702
Faithful Chain-of-Thought Reasoning
arxiv.org/pdf/2301.13379
Resources:
Faithful Chain of Thought Reasoning Guide:
www.prompthub.us/blog/faithful-chain-of-thought-reasoning-guide
OpenAI o1 - First Impressions
llmindset.co.uk/posts/2024/09/openai-o1-first-impressions/#hal...
Prompt chaining for beginners
175 views · 1 month ago
Prompt chaining consistently outperforms large prompts with multiple instructions. Try out prompt chaining in PromptHub for free to start experimenting.
Resources:
Prompt Chaining Guide (has all prompt templates mentioned):
www.prompthub.us/blog/prompt-chaining-guide
Research paper: Prompt Chaining or Stepwise Prompt? Refinement in Text Summarization
arxiv.org/pdf/2406.00507
Prompt Chaining in P...
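To make the idea concrete, here is a minimal two-step chain where the first prompt's output becomes the second prompt's input. The call_llm helper is a hypothetical stand-in for your model API; the guide above has the real templates:

# Two-prompt chain: a narrow extraction step feeds a narrow writing step.
def call_llm(prompt: str) -> str:
    ...  # hypothetical helper wrapping your model API

def summarize_article(article: str) -> str:
    # Step 1: extract the key facts only.
    facts = call_llm(f"List the key facts in this article, one per line:\n{article}")
    # Step 2: write from step 1's output, not the raw article.
    return call_llm(f"Write a three-sentence summary using only these facts:\n{facts}")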
Everything you need to know about Chain of Thought prompting
498 views · 1 month ago
Check out the Chain of Thought prompt templates linked below and happy prompting!
Prompting Guides:
Our Chain of Thought prompting Guide
www.prompthub.us/blog/chain-of-thought-prompting-guide
Few Shot prompting Guide:
prompthub.us/blog/the-few-shot-prompting-guide
Step-Back prompting:
www.prompthub.us/blog/a-step-forward-with-step-back-prompting
Analogical prompting:
www.prompthub.us/blog/using-...
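If you just want to try the technique before reading the guides, the zero-shot variant is a one-line change to your prompt, using the well-known "Let's think step by step" trigger from the zero-shot CoT research. The example question here is made up:

# Zero-shot chain-of-thought: append a reasoning trigger to the question.
question = (
    "A bakery sold 14 cakes in the morning and twice as many in the afternoon. "
    "How many cakes did it sell in total?"
)
prompt = f"{question}\nLet's think step by step, then give the final answer on its own line."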
How to use Daniel Kahneman's System 2 thinking in your prompts.
72 views · 2 months ago
System 2 Attention prompting is an easy way to sanitize your prompts of any irrelevant information before generating the final output.
Resources:
Our full rundown:
www.prompthub.us/blog/how-to-use-system-2-attention-prompting-to-improve-llm-accuracy
System 2 Attention prompt template:
app.prompthub.us/templates/2558
Research papers:
Large Language Models Can Be Easily Distracted by Irrelevant Co...
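In code, the technique is a two-pass prompt: regenerate the context keeping only what matters, then answer from the cleaned version. A minimal sketch, assuming a hypothetical call_llm helper; the linked template is the vetted version:

def call_llm(prompt: str) -> str:
    ...  # hypothetical helper wrapping your model API

def s2a_answer(context: str, question: str) -> str:
    # Pass 1: regenerate the context, keeping only question-relevant information.
    cleaned = call_llm(
        "Rewrite the text below, keeping only information relevant to the question "
        f"and dropping everything else.\nText:\n{context}\nQuestion: {question}"
    )
    # Pass 2: answer from the sanitized context only.
    return call_llm(f"Context:\n{cleaned}\n\nQuestion: {question}")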
Easy tutorial to implement least-to-most prompting
161 views · 2 months ago
Least-to-most prompting is one of the easiest ways to improve reasoning on more complex tasks, outperforming chain-of-thought in some cases. In this video we'll dive into the research and how you can implement least-to-most prompting with a few free templates.
Resources:
Least-to-most prompting guide:
www.prompthub.us/blog/least-to-most-prompting-guide
Least-to-most prompting research paper:
ar...
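The mechanics are: decompose the problem into subquestions, then solve them in order, feeding earlier answers into later ones. A minimal sketch with a hypothetical call_llm helper:

def call_llm(prompt: str) -> str:
    ...  # hypothetical helper wrapping your model API

def least_to_most(problem: str) -> str:
    # Stage 1: decompose into easier subquestions.
    plan = call_llm(
        f"Break this problem into a short list of simpler subquestions, one per line:\n{problem}"
    )
    subquestions = [line for line in plan.splitlines() if line.strip()]
    # Stage 2: solve least-to-most, carrying earlier answers forward.
    transcript = ""
    for sq in subquestions:
        answer = call_llm(f"{transcript}\nAnswer this subquestion: {sq}")
        transcript += f"\nQ: {sq}\nA: {answer}"
    return call_llm(f"{transcript}\nUsing the answers above, solve the original problem: {problem}")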
How to use Program of Thoughts Prompting
77 views · 2 months ago
Program of Thoughts (PoT) prompting leverages LLMs' code-generation skills on non-coding tasks. Tailored to mathematical and financial questions, PoT outperforms typical CoT by using code for the reasoning step.
Program of Thought resources:
Our full rundown:
www.prompthub.us/blog/program-of-thoughts-prompting-guide
Original paper:
arxiv.org/pdf/2211.12588
Program of thought prompt templa...
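The core loop: ask for code instead of prose reasoning, run the code, and read off the answer. A minimal sketch with a hypothetical call_llm helper; in production you would sandbox the exec call:

def call_llm(prompt: str) -> str:
    ...  # hypothetical helper wrapping your model API

def pot_answer(question: str) -> str:
    # The reasoning step happens as code, not natural language.
    code = call_llm(
        "Write Python code that computes the answer to the question and stores it "
        f"in a variable named answer. Return only code.\nQuestion: {question}"
    )
    namespace: dict = {}
    exec(code, namespace)  # WARNING: run model-generated code in a sandbox in practice
    return str(namespace.get("answer"))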
How to use Self-Consistency to boost LLM performance
268 views · 3 months ago
Everything you need to know about Self-Consistency prompting: the latest research, templates, examples, and more. Want more free prompt engineering content? Check out the prompt engineering Substack: prompthub.substack.com
RESOURCES:
Our blog post about Self-Consistency and Universal Self-Consistency:
www.prompthub.us/blog/self-consistency-and-universal-self-consistency-prompting
Few Shot Prompt...
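Mechanically, self-consistency means sampling several reasoning paths at a nonzero temperature and majority-voting on the final answers. A minimal sketch with a hypothetical call_llm helper:

from collections import Counter

def call_llm(prompt: str) -> str:
    ...  # hypothetical helper; sample with temperature > 0 so runs differ

def self_consistent_answer(question: str, samples: int = 5) -> str:
    finals = []
    for _ in range(samples):
        reply = call_llm(
            f"{question}\nThink step by step, then give your final answer after the tag ANSWER:"
        )
        finals.append(reply.split("ANSWER:")[-1].strip())
    # Majority vote across the sampled reasoning paths.
    return Counter(finals).most_common(1)[0][0]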
How Generated Knowledge Prompting can help reduce hallucinations
129 views · 3 months ago
A great way to enhance model outputs and reduce hallucinations is to prompt the model to generate knowledge (information) about the task or question before answering. This process, originally coined Generated Knowledge Prompting, helps the model build up context before returning an output, enabling more detailed and accurate replies.
Our rundown on Generated Knowledge Prompting: ...
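The pattern is two calls: one to generate relevant facts, one to answer conditioned on them. A minimal sketch, again with a hypothetical call_llm helper:

def call_llm(prompt: str) -> str:
    ...  # hypothetical helper wrapping your model API

def answer_with_generated_knowledge(question: str) -> str:
    # Call 1: surface relevant background knowledge first.
    knowledge = call_llm(f"List 3-5 factual statements relevant to this question:\n{question}")
    # Call 2: answer grounded in the generated knowledge.
    return call_llm(f"Knowledge:\n{knowledge}\n\nUsing the knowledge above, answer: {question}")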
Why small changes to a prompt make a big difference (and what you can do about it)
65 views · 3 months ago
If you've spent any time writing prompts, you've probably noticed just how sensitive LLMs are to minor changes in the prompt. In this video we'll dive into three very recent papers that take a deep look at prompt and model sensitivity.
Link to our rundown:
www.prompthub.us/blog/strategies-for-managing-prompt-sensitivity-and-model-consistency-
3 research papers:
How are Prompts Different in Terms...
Using LLMs to generate code? Watch this
88 views · 3 months ago
Using LLMs for code generation can be tricky. There are many points of failure along the way, and a lot of unanswered questions about what to do about it. This video will break down the latest research and answer questions like:
- Where in the process do LLMs fail?
- What types of code generation errors are most common?
- Which models fail at which points in the process?
Link to our full run-dow...
Generate better content with AI
107 views · 4 months ago
Today is all about prompt engineering to create actually good content. Learn key principles, advanced techniques, and walk away with some templates you can implement right away.
Full article available here:
www.prompthub.us/blog/prompt-engineering-for-content-creation
Additional resources mentioned throughout the video:
Prompt Patterns: What They Are and 16 You Should Know:
www.prompthub.us/blo...
Solve any prompt engineering problem with a prompt pattern
176 views · 5 months ago
Prompt patterns are high-level methods that provide reusable solutions to common challenges when prompting LLMs. In this video we'll go over what prompt patterns are, how they're similar to design patterns in software development, and a bunch of examples.
MAIN RESOURCES:
A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT:
arxiv.org/pdf/2302.11382
Our full rundown:
www.prompt...
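As one quick illustration, the Persona pattern from the catalog paper fixes a role up front so it shapes every subsequent output. The wording below is an illustrative paraphrase, not a quote from the paper:

# Persona pattern: pin a role that constrains all later responses.
persona_prompt = (
    "Act as a senior security reviewer. From now on, when I paste code, "
    "respond the way a security audit would: flag vulnerabilities first, "
    "then suggest fixes."
)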
This is the most effective prompt engineering method we've ever covered
372 views · 5 months ago
How long until AI outperforms humans in knowledge work? A real-world case study
53 views · 6 months ago
Do different models require different prompting?
63 views · 6 months ago
Does saying please and thanks affect LLM output quality? Finally, a concrete answer
140 views · 7 months ago
Train an LLM to become a better prompt engineer
291 views · 8 months ago
Using LLMs and prompt engineering to make recommendations
158 views · 8 months ago
26 easy prompt engineering principles for 2024
440 views · 9 months ago
Optimize your long prompts automatically with this research backed method
157 views · 9 months ago
How LLMLingua can help cut your AI bill by 20x
295 views · 9 months ago
00:05 Meta prompting involves using language models for prompt engineering.
01:15 Meta Prompting method uses a Conductor LLM to coordinate experts in an iterative process.
02:45 Generating prompts and evaluating outputs for improvement
04:02 Various methods for prompt generation and evaluation
05:27 User and model collaboration in prompt generation
06:40 DeepPrompY is an adaptive open-source tool for refining prompts using scoring mechanisms
08:05 Textual gradient feedback in The Prompt system
09:29 Different meta prompting methods and free tools
Interesting
Thanks! Some very helpful nuggets throughout. And I'm happy not to have read and mentally translated those papers myself.
Glad to hear it's helpful!
Good one
thanks!
Hey Dan, great video! CoT is definitely the most effective technique. But it's a lot of work to come up with the reasoning steps. Is there a good way to get the model to reverse engineer reasoning steps from given input-output pairs? Or, ideally, from outputs alone? This could be a good topic for a new article or video.
Thanks Jonas! Great question - one way to automate the chain-of-thought reasoning steps is to use Analogical Prompting. It works a little differently from the approach you're referring to (although we plan to support this in PromptHub in the near future). We wrote all about it here: prompthub.us/blog/using-analogical-prompting-to-generate-in-context-examples
Thanks for sharing! Your blog and videos are my #1 resource on the web for learning how to improve my prompts. I love that your content is always based on scientific studies.
Thanks so much - glad to hear our resources are helpful!
Nice Daniel Kahneman click bait. 😉
Gotta give credit where it's due!
Nice clear explanation.
Thanks! Hope it helps
Personally, I changed how my model is used so that it works through a "thought" step: it doesn't produce responses directly, only tool calls, and the final answer is itself a response tool. If it needs to think, it can query itself for knowledge first, then use the returned context to enrich the response, so it has its own self-query tool. When you ask a question it will execute a tool, query itself, or create a tool, and once its internal thinking and function calls are done it uses the final-answer tool. I also use the thought / action / pause for result / observation > final answer loop, which is probably the future way. I don't use RAG as such, but if the model needs to search for knowledge, or store the conversation or other given information, it gets added to the store via the search tool, and the model can query that short-term memory to retrieve context and enrich the result. It was very interesting that Hugging Face released an agents library that also uses models as tools; that drastically changed how the model is used, since any input can be directed to the correct tool to return a caption or whatever, and if generation is required it can go to image or audio generation. That thought/action loop changed the model's output drastically, so I trained the model as a planner and as a tool user, and to output JSON and so on. So chain of thought is still in use, my friend; it lives inside the external wrapper paradigm of tool use and the game loop. You can give the model tools or not, or drop them as single functions in a folder and it will load them as required. Brother, I don't know what these university students think we're doing out here! We're actually in front of you guys, dragging you into the future, hopefully without you writing some fake ethical code against us (we hate to share with you guys!). Examine this chat you just gave; it was pure rhetoric! I bet you fancy a cuppa tea, mate!
Great Video! You publish the best content on Prompt engineering in the whole AI space! (Maybe next to Moritz Kremb - the Prompt Warrior).
Thanks Jonas! We think Moritz is pretty great too 💪
Hey Dan, any chance I could get access to the code to test it out my self?
Hey Daniel, yeah no problem. Just respond to the email that gets sent when you join the waitlist on our site and I'll shoot over the code
Umm I think you are misunderstanding quite a few points. The paper states directly that they view Perplexity with respect to entropy, i.e. tokens with a higher perplexity have more information to yield to the model at inference. Because of this, the authors tell us that they create a subset of the demonstrations by first ordering the demonstrations from highest PPL -> lowest PPL, and then continuously adding to the subset until they reach the token budget for the demonstrations. Saying that "It keeps the low perplexity ones, gets rid of the high ones" is just flat out incorrect. Furthermore, "compresses the low ones, and does that for all the demonstrations" is a bit misleading, as the coarse level compression is essentially just picking and choosing which demonstrations get added to the subset to use for iterative level compression.
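For anyone following along, the selection step described in the comment above looks roughly like this in code. The perplexity and num_tokens helpers are hypothetical stand-ins for the paper's actual machinery:

def perplexity(demo: str) -> float:
    ...  # hypothetical: the demonstration's perplexity under a small language model

def num_tokens(demo: str) -> int:
    ...  # hypothetical tokenizer count

def select_demonstrations(demos: list[str], token_budget: int) -> list[str]:
    # Order demonstrations from highest PPL (most informative) to lowest.
    ranked = sorted(demos, key=perplexity, reverse=True)
    selected, used = [], 0
    for demo in ranked:
        cost = num_tokens(demo)
        if used + cost > token_budget:
            break  # budget reached; remaining demonstrations are dropped
        selected.append(demo)
        used += cost
    return selected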
I am usually polite anyway. (Never know when these things will turn sentient!)
Agreed! Never hurts to be polite
Dan what data format (for your candidate dataset--say possible articles) are you using? .csv passed to the LLM?
Yup, CSV!
Thanks for this Dan
Excellent work, Dan.
Thanks Davis!