How do you access the fine tuned model from API?
In terms of fine-tuning, what's the benefit of doing this fine-tuning process as opposed to just using vanilla Gemini and prompting it: "For a real estate agency, give me a caption with an emoji and 2 hashtags"? After all, using fine-tuned models via the API is typically more expensive, right?
Is there a limit to the token size of the output? I'm thinking about training a model to output JSON files based on my input to control third-party software, but the JSON might be kinda big
Are there costs for fine-tuning a model like you showed in the video?
Grrreat content, thank you
Thank you
I just plain don't get it, maybe I'm misunderstanding what fine-tuning means, maybe I don't even need this for my use case... in the end I have one folder on my desktop with a measly 1.4 GB of markdown files totaling over 3 million words of research, that I want Gemini 1.5 Pro to represent as a mouthpiece.
I guess the quickest way to explain it would be: master levels of needle-in-a-haystack retrieval. I want it to be able to take all the files into macro context for each question, and give me a higher-order perspective across the files that only artificial intelligence could possibly keep a wrangle of, comprehend?
How on earth can I achieve this please!? Thank you.🙏
I'm also trying to find solutions for large knowledge bases... Not as large as yours but large hahaha
did you find anything ?
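For the large-knowledge-base question above, retrieval-augmented generation (RAG) is usually a better fit than fine-tuning: chunk the markdown files, embed each chunk, and pull only the most relevant chunks into the prompt per question. Below is a minimal, stdlib-only sketch of that retrieval step; the bag-of-words `embed` function is a toy stand-in for a real embedding model (e.g. the Gemini embeddings endpoint), and the sample chunks are made up for illustration.

```python
# Toy retrieval sketch: rank chunks by similarity to the question and
# keep only the top k, which then get pasted into the model's prompt.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Placeholder "embedding": word counts. Swap in a real embedding
    # API call for anything beyond a demo.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
    # Sort chunks by similarity to the question, most relevant first.
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "Gemini 1.5 Pro supports very long context windows.",
    "Fine-tuning teaches output format, not new knowledge.",
    "Retrieval picks relevant passages before prompting.",
]
print(top_chunks("What does fine-tuning teach?", chunks, k=1))
```

With real embeddings and a vector store, the same top-k step feeds the retrieved passages into the prompt, so the model only ever sees the slice of the 3 million words relevant to the current question.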
how can we use that model URL in the API
same
go to new prompt and click get code
same here
how can I use my tuned model in my Flutter app?
still cant find an answer to this
bro... go to new prompt and click get code
@@deepfakes4567 but there isn't a "Flutter" option there
Did you find how?
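For the "how do I call it from the API" questions in this thread: AI Studio's "Get code" button generates a request against the public `generativelanguage.googleapis.com` v1beta endpoint, which any language (including Dart/Flutter) can hit over plain HTTPS. Here's a hedged, stdlib-only Python sketch of how that request is assembled — `my-caption-model` and `YOUR_API_KEY` are placeholders, and note that (as mentioned below in the comments) some accounts need OAuth rather than a bare API key for tuned models.

```python
# Sketch of the REST request for a tuned Gemini model. Tuned models are
# addressed as "tunedModels/<id>", unlike base models ("models/<name>").
import json
import urllib.request

BASE = "https://generativelanguage.googleapis.com/v1beta"

def tuned_model_path(model_id: str) -> str:
    # Normalize a bare ID into the full "tunedModels/<id>" resource path.
    if model_id.startswith("tunedModels/"):
        return model_id
    return f"tunedModels/{model_id}"

def build_request(model_id: str, prompt: str, api_key: str) -> urllib.request.Request:
    # Build (but do not send) the generateContent POST request.
    url = f"{BASE}/{tuned_model_path(model_id)}:generateContent?key={api_key}"
    body = json.dumps({"contents": [{"parts": [{"text": prompt}]}]}).encode()
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

req = build_request("my-caption-model", "Caption for a real estate post", "YOUR_API_KEY")
print(req.full_url)
```

Sending it with `urllib.request.urlopen(req)` (or the equivalent `http` call in Dart) returns JSON whose generated text sits under `candidates[0].content.parts[0].text`.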
💫🙏🎯👊🤙💪🗿🎬🔥🦅☯️ Thank You CB Great Value Add As Usual Sir
Thanks for sharing 👏🏻
I do the same, but I want to use my tuned model on Colab and I got this error:
403 POST You do not have permission to access tuned model tunedModels.
Can you provide a video for that 😢
I am getting the same error
API keys are not sufficient when you try to use user-specific data such as tuned models. You need to configure OAuth credentials
This is easy for simple use cases where the output is always similar in structure. Corbin, have you used AI Studio to fine-tune on unstructured data? I.e., inputting data to capture a person's unique writing style. I have 500 articles and writing examples from wiki pages I've written on software engineering. Each item has a category and a label that I've curated, but the copy in the body of each item varies widely. It may be the wrong approach to try to fine-tune on a person's writing style using AI Studio. Maybe I just need to create embeddings and store them in a vector DB first. Thoughts?
Thanks Corbin, love the channel!
@@htrbgfdvcxzaefwgrhbntgsca4056 how to do it bro please tell.
First Comment
Clickbait thumbnail, Gemini Flash's fine-tuning hasn't been released yet
It is released only for some beta users
Clickbait thumbnail. hit report.