Please, don't stop making these
Thanks, Olabode!
Very informative video, thank you.
One question about HuggingFace Chat: how did you get the "change model" option? I see the current chat version is v0.2, and in this version, or at least in my account, the change model option is missing.
Thank you
It looks like they've removed that particular option from the Chat demo!
You can still use the model by heading to a Space powered by it!
Can you do a review on all of the other code copilots and show us which is the best?
I might be able to do that in the coming weeks, sure! I want to wait for GitHub to announce their latest tooling before I do, though!
Can a subsequent SFT and RLHF pass with different, additional, or less content change the character of a GPT model, or improve or degrade it? Can you modify a GPT model?
How can I use it locally through GPT4All? Thank you!
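In case it helps anyone: a minimal sketch using the gpt4all Python bindings, assuming GPT4All offers a quantized StarCoder build. The model filename below is a guess; check GPT4All's model list for the real one.

```python
from gpt4all import GPT4All

# Filename is an assumption: substitute an actual StarCoder build from
# GPT4All's model list. The file is downloaded on first use.
model = GPT4All("starcoder-q4_0.gguf")
print(model.generate("def fibonacci(n):", max_tokens=128))
```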
I was so looking forward to using this LLM on my iOS project, and then the paper said: "Out of the languages present in MultiPL-E (Cassano et al., 2023), only D and Swift were not included in the training set. For D, language misclassification of the files led to less than 2MB of data in The Stack (Kocetkov et al., 2022). For Swift, due to a human mistake, it was not included in the final list of languages." lmao wat
That is... unfortunate.
Time to create your own set!
Anyone know if it's possible to host StarCoder locally?
Yes! Definitely!
You'll need some decent hardware - but it's definitely possible.
Check out the docs! There's a rough sketch below.
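A minimal sketch of loading it with transformers, assuming you've accepted the model license on the Hugging Face Hub and have the accelerate package installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder"  # gated repo: accept the license on the Hub first
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# device_map="auto" (requires accelerate) spreads the ~15B parameters across
# available GPUs/CPU; expect to need serious memory without quantization.
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

inputs = tokenizer("def print_hello_world():", return_tensors="pt").to(model.device)
outputs = model.generate(inputs.input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```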
@chrisalexiuk Thanks, Chris! Looks like they've made it pretty easy to configure HF Code Autocomplete to point to custom endpoints. Awesome.
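For anyone following along: if you serve the model yourself (for example with text-generation-inference), querying the endpoint from a client looks roughly like this. The URL and port are assumptions for a local setup.

```python
import requests

# Hypothetical local text-generation-inference endpoint; adjust URL/port
# to wherever your server is actually running.
response = requests.post(
    "http://localhost:8080/generate",
    json={"inputs": "def hello_world():", "parameters": {"max_new_tokens": 64}},
)
print(response.json()["generated_text"])
```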
This needs a lot of improvement; the Codex AI is a far better model.
Definitely true! While it's better than the original Codex model (in metrics), it has a lot of distance to cover before it can exceed the newest Codex model.