This is freaking gold, my friend! Instant subscribe! Thanks so much, keep up the good work.
Welcome aboard!
This is basically recommended for faster LLM token inference in the AI playground 😊
Thanks for this video and the free usage of the 70B model. But it looks like CodeQwen1.5-7B-Chat is outperforming Llama3-70B-Instruct, so you could use a local LLM instead?
It's completely your call. The purpose of the demo is to show that one can replace the OpenAI models with open-source ones.
@saurabhkankriya Yes, of course, thanks so much for the demo. I just thought I'd share some other options in case anyone is interested.
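For anyone who wants to try the swap being discussed, here is a minimal sketch of what the Continue extension's `config.json` might look like with an open-source hosted model plus a local one. The provider names, model IDs, and placeholder key are assumptions based on Continue's documented config format, not something shown in the video; check the extension's README for the exact fields.

```json
{
  "models": [
    {
      "title": "Llama3 70B (hosted, hypothetical Groq entry)",
      "provider": "groq",
      "model": "llama3-70b-8192",
      "apiKey": "YOUR_API_KEY_HERE"
    },
    {
      "title": "CodeQwen 7B (local via Ollama, assumed model tag)",
      "provider": "ollama",
      "model": "codeqwen"
    }
  ]
}
```

The local Ollama entry needs no API key, which also avoids the key-exposure issue mentioned further down in the thread.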
How can you do this in Jetbrains IDE?
I don't have experience with JetBrains. You can read the Continue extension's GitHub README; they probably have documentation on this kind of integration.
Revoke your API key.
Thanks for pointing this out. After recording the video, I deleted the API keys.