Rasmus makes a good point with, "You have to put something really good in to get something really good out." This extends beyond prompts, though; it applies even more to how the model is trained. Every model we have is also trained on human-produced data. Unless you can loop the output of the LLM, RAG, or other types of models back into itself and have it improve, I don't see how it is different from googling, just with more efficient aggregation and summarization of a bunch of existing sources.
Thanks for your comment. Stay with us! ❤️
Same reason you needed another stupid hipster DSL that transpiled into JavaScript.
Totally get the sentiment! While trends come and go, AI engineering is here to solve real-world problems. Appreciate you watching!