Dude never fails to deliver. Ben, you've absolutely won us over!
The king is back 👑
lol 😂
It looks like we're on the same wavelength :) I've been experimenting with a similar setup, fine-tuning for LinkedIn posts, for quite some time. The quality of the GPT-4o fine-tunes is excellent! In my experience, even a small set of 70 items works well, doesn't hallucinate much, and with a higher temperature of 0.7-0.9 it generates fascinating perspectives that stay on point and feel authentic.
Interesting, Nadia! I hadn't tried increasing the temperature; I assumed it would hallucinate too much. But now that you mention it, I can imagine combining a small dataset with a higher temperature could work well. Thanks for the insight, I'll try it out!
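For anyone wanting to try the higher-temperature setting discussed above, here's a minimal sketch of the request parameters for a fine-tuned chat model. The model ID, system prompt, and helper function are placeholders, not from the video:

```python
# Sketch: request parameters for a fine-tuned LinkedIn-post model at a
# higher sampling temperature. The fine-tune ID below is a placeholder;
# yours comes from your own fine-tuning job.

def build_request(topic: str, temperature: float = 0.8) -> dict:
    """Assemble chat-completion parameters (placeholder model ID)."""
    return {
        "model": "ft:gpt-4o-2024-08-06:my-org::abc123",  # placeholder ID
        "temperature": temperature,  # 0.7-0.9 per the thread above
        "messages": [
            {"role": "system", "content": "You write LinkedIn posts in my voice."},
            {"role": "user", "content": f"Write a post about: {topic}"},
        ],
    }

# With the official OpenAI Python SDK, this would then be passed as:
#   client.chat.completions.create(**build_request("agentic workflows"))
```

Higher temperature trades determinism for variety, which is why pairing it with a fine-tune (so tone stays anchored) is the interesting combination here.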
That's interesting. I should start experimenting with fine-tuning as well. Do you also use Relevance, or would you recommend another platform to create agentic workflows?
Super interesting to see the value of using fine-tuned models, thanks for sharing Ben!
Loving your work Ben, super quality!!
Thanks Moe, appreciate it!
Cracked 5k, forward, Ben! Congrats :)
Thanks man!
Thanks again Ben!
Super Usefull Stuff! TY
This is amazing. Do they always need specific training data, in the form of documents, links, or scraped content? Or can you simply throw in a prompt and expect it to improvise?
Hey, thanks for this resource!
Where can we find the Linkedin Writer Agent?
Signed up for the tools and recreated everything but didn't see the Linkedin Writer Agent specifically to clone and pass through the finetuned model to test the results on it. Didn't see a link to it in the Gumroad or anything.
Combine this with n8n and boom!
Thanks for the video... very informative. Most videos I watch are about AI creating content for blog posts and things like that. Are you able to show any other use cases, such as checking a human's work? I run a small CAD team that makes drawings from clients' text documents. It would be great if the AI model could cross-reference the instructions from the client against the CAD drawings, such as dimensions and text. Is this something that AI could do? Thank you
Thanks Jonathan! Should be possible, but I'd need more context to help you.
Which small business has more than 600 LinkedIn posts? Or what do you mean by data points?
Outstanding work, as always. Thank you!
How do you know all this about Relevance?
+
When you upload the dataset, is the LLM fine-tuned on the style and tone of voice, or on the type of content it could write about? If the answer is both, how can we make it so the LLM is fine-tuned with one dataset for tone and style, and another dataset for the type of content?
As you assumed, it's both, yes. Ideally, you train it on a combination of the type of content you like and your tone of voice. But if you don't have that, you can use a smaller dataset and you should still get the tone of voice for "unrelated" types of content, so you stay flexible in the type of content you can generate with it. You could experiment with fine-tuning two different models and adding them both as tools in Relevance, but I haven't tried this, so I can't tell you what the performance would be.
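For what it's worth, OpenAI's chat fine-tuning format makes it straightforward to mix tone-of-voice examples and content-type examples in a single JSONL training file. A minimal sketch, where all prompts, posts, and the file name are placeholders:

```python
import json

# Sketch: build one JSONL training file that mixes tone-of-voice examples
# with content-type examples, using the chat fine-tuning message format.
# All example texts below are placeholders.

def to_example(prompt: str, post: str) -> dict:
    """Wrap one prompt/post pair in the chat fine-tuning structure."""
    return {
        "messages": [
            {"role": "system", "content": "You write LinkedIn posts in my voice."},
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": post},
        ]
    }

tone_examples = [
    to_example("Write a post about remote work.", "Hot take: offices are optional..."),
]
content_examples = [
    to_example("Write a post about fine-tuning.", "Fine-tuning a small dataset..."),
]

# One file, both kinds of examples — the model picks up tone and content together.
with open("train.jsonl", "w") as f:
    for ex in tone_examples + content_examples:
        f.write(json.dumps(ex) + "\n")
```

Whether splitting tone and content into two separate fine-tunes beats one mixed dataset is an open question here, as noted above.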
And for "no code," that was a lot of coding.