Does Cody with Ollama also have a limit like "500 autocompletions per month" for the free version?
No. You can use it as much as you want.
Nope!
My settings.json file does not look like this. I do not have a cody.autocomplete.experimental.ollamaOptions property, but my settings are pointed at experimental-ollama. I cannot select an Ollama model and I feel like it's just defaulting to Claude 2.
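For anyone hitting the same thing, the autocomplete settings shown in the video look roughly like the sketch below. The property names follow what's mentioned in this thread; the provider key is my best recollection of the Cody setting, and the url and model are example values you would swap for your own local setup.

```jsonc
{
  // Point Cody autocomplete at the experimental Ollama provider
  "cody.autocomplete.advanced.provider": "experimental-ollama",

  // Which local Ollama endpoint and model to use
  // (url and model below are example values, not defaults)
  "cody.autocomplete.experimental.ollamaOptions": {
    "url": "http://localhost:11434",
    "model": "deepseek-coder:6.7b"
  }
}
```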
The "Experimental" Ollama Models are not being shown in the select models tab at my work computer. On my home computer they appear.
Is this available for JetBrains editors?
Would also like to see this happen.
When trying to use Cody with Ollama and a local LLM, the chat works fine, but when using the setup recommended in this video, autocomplete returns chat-style suggestions instead of code. Any idea what's causing this and how to fix it?
When I install Cody I see no option at all to select Ollama chat. Has it been removed?
Currently running on Windows.
Hey Adam - so if you aren't seeing the option, you can add the setting manually in your settings.json file. Just add this property: "cody.experimental.ollamaChat": true, then restart VS Code, and it should work.
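For reference, a minimal settings.json with that flag would look something like this (the flag is the property named above; any other settings you already have stay as they are):

```jsonc
{
  // Enable the experimental Ollama chat models in Cody
  "cody.experimental.ollamaChat": true
}
```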
@@ado Interestingly, I got it working, and then the following day when I tried to use Ollama, it wasn't working, without me making any changes to my settings. What I did to get it working the first time was change the version of Cody to the prerelease version (this one supported all the stuff described in the video) and just follow the steps in the video. Then I restarted VS Code and it worked.
Why it stopped working the following day I have no clue. It also keeps resetting the model back to Claude 2.
@@adamvelazquez7336 Hey Adam - can you check your `settings.json` file and make sure that the "cody.experimental.ollamaChat": true property still exists? If it does, the local LLMs should load from Ollama; otherwise, they won't. It's possible that the flag got removed when you updated (although unlikely).
Somehow my autocompletion has been broken since I moved to another place. Nothing is suggested even when I use "trigger autocomplete at cursor".
Chats and other commands are working, tho