Another question: can you fine-tune an existing fine-tuned GPT-3.5 model with more training data, or do you have to start from scratch every time? Especially for feeding the outcome back into training.
With their new API you have to fine-tune from scratch each time, but they might add fine-tuning a fine-tuned model back at some point; they used to have it. Most of the fine-tunes I run are under $1 though.
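Since each run starts from scratch, the feedback loop is really just maintaining one JSONL training file and re-submitting it. A minimal sketch of building that file in the chat format the gpt-3.5-turbo fine-tuning endpoint expects (the actual upload and job-creation calls need an API key, so only the file-building step is shown; the example content is made up):

```python
import json

# Each fine-tuning example for gpt-3.5-turbo is one JSON object per line,
# holding a "messages" list in the chat format.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What color is the sky?"},
            {"role": "assistant", "content": "The sky is blue."},
        ]
    },
]

# Write one JSON object per line (the JSONL layout the endpoint expects).
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

To feed results back in, you would append new (prompt, corrected completion) pairs to `examples` and kick off a fresh fine-tune from the base model with the grown file.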
Thanks for sharing! Curious: can you fine-tune the model by providing images? For example, one use case is resumes. What if I'd like to upload resume examples that are in PDF or JPEG format?
Multi-modal fine-tuning is definitely going to be a thing, but it's not available right now (at least, not readily). For the time being, you would run the image through an OCR model to extract the text, and then use that text in a fine-tuned language model.
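That OCR-then-text pipeline can be sketched like this. The OCR backend is injected as a plain function so the sketch stays self-contained (in practice you might pass in something like `pytesseract.image_to_string`); the helper name, the system prompt, and the stand-in OCR function are all illustrative assumptions:

```python
def resume_to_prompt(ocr_fn, image_path):
    """Run an OCR step over a resume image and wrap the extracted text
    in a chat-format message list for a fine-tuned model.

    ocr_fn is whatever OCR backend you pick; it is injected here so the
    pipeline itself stays library-agnostic.
    """
    text = ocr_fn(image_path)
    return {
        "messages": [
            {"role": "system", "content": "Summarize the candidate's resume."},
            {"role": "user", "content": text},
        ]
    }

# Stand-in OCR backend for illustration; swap in a real one.
fake_ocr = lambda path: "Jane Doe. 5 years Python experience."
prompt = resume_to_prompt(fake_ocr, "resume.jpg")
```

The same wrapper works for PDFs if you first rasterize pages to images (or use a PDF text extractor directly when the PDF has a text layer).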
Can embeddings improve the model's ability to write press releases? If so, what type of corpus of data would that include? Trying to understand how embeddings with RAG + fine-tuning can be used together. Also, I really like the feedback loop of improving the fine-tuning with its results.
Thanks for the upload!
Loved this one Mark! Super informative, I was wondering what the workflow would be like for longer-form completions.
Thank you!
great video mate cheers
Congrats! Very nice software
This other video I made might help
ua-cam.com/video/YVWxbHJakgg/v-deo.html