Fantastic - my wife and I use LibreChat every day in our household; it's how we interact with all the OpenAI models, Claude, etc.
What I'd love is a 'headless mode' whereby I can say "use this model", "here's the prompt", and get back the output via a curl call. Even something super simple would be very useful for comparing models.
I think you can actually do this with the OpenAI / Claude APIs directly; it's not hard to set up if all you want is to send a plain prompt. Probably 30 minutes of Python or some other language, and you can even ask ChatGPT to build it for you; it can walk you through it too.
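For anyone who wants to try: a minimal sketch of such a headless runner, assuming the official openai Python package (v1+). The model name and env variable are placeholders, and Anthropic's API would need its own client.

```python
# headless.py - minimal sketch: one model, one prompt, one answer.
# Assumes the official openai package (>= 1.0) and OPENAI_API_KEY set.
import os
import sys

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model=sys.argv[1],  # e.g. "gpt-4o" (placeholder)
    messages=[{"role": "user", "content": sys.argv[2]}],
)
print(response.choices[0].message.content)
```

Usage: `python headless.py gpt-4o "Explain monads in one sentence"` - run it once per model and diff the outputs.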
Always will be impressed by your ability to develop these things and release it FOSS. Kudos to you. Thank you!!
This is amazing. Did you just ship code artifacts for OpenAI before OpenAI?
😎
@@LibreChat Based
awesome work! next step: integrate flowise and dify so users can import custom agents lol
+1
I just love it when people make cool useful things.
Nice, picked a good time to start using librechat :)
Respect. In addition to being an ideal project to use with e.g. openrouter, my co-workers and I use a modified version with langchain/autogen and can't imagine going back to any other tool. I hope artifacts will work well too. Thanks!
Thank you! I was looking for an open source AI Chat UI. You helped make the choice :)
This is awsome, Thanks for making this!
Looks pretty nice!
Do you use a specifc prompt for using artifact ?
amazing man, can't wait to try it!
Great stuff Danny! Any thoughts on adding memory? That should be pretty easy to implement, right, since it's just adding to the system prompt?
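To illustrate the idea in that comment (a naive sketch, not LibreChat's implementation; all names here are hypothetical): memory can be as simple as prepending remembered facts to the system prompt on every call.

```python
# Naive "memory" sketch: remembered facts are prepended to the system
# prompt on each call. Hypothetical names, not LibreChat's actual API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
memories: list[str] = ["User's name is Dana", "Prefers concise answers"]

def ask(prompt: str, model: str = "gpt-4o") -> str:
    system = "You are a helpful assistant."
    if memories:
        system += "\nKnown facts about the user:\n" + "\n".join(
            f"- {m}" for m in memories
        )
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content
```

The catch is that the memory list grows and eats context, which is why real implementations summarize or select memories rather than appending forever.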
Well done. Boss. 😂❤
Source code?
Not that you need to give away your work for free
How much per day?
GitHub + no license = it's exclusively yours
Awesome 🔥
Looks 🔥
You’re the epitome of open source. Brilliant work, will try this tomorrow. Is there a chance artifacts also work with models in the Azure OpenAI endpoint?
It should work with any endpoint and almost every model!
@@LibreChat thank you! We are using ModelSpecs with the azureOpenAI endpoint and so far, generating code with GPT-4o and mini doesn't seem to trigger artifacts, although it's turned on in the beta settings. Will have to dig a little deeper here.
@@LibreChat OK, not using modelSpecs seems to do the trick. Without them, artifacts work.
@@DigitDani having issues trying to run Ollama, checked the docs, still nothing, any ideas or a different way to get local models added?
@@build.aiagents have you seen this: www.librechat.ai/blog/2024-03-02_ollama
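For reference, that post configures Ollama as a custom endpoint in librechat.yaml; a minimal sketch looks roughly like this (the baseURL assumes Ollama runs on the Docker host, so adjust it for your setup):

```yaml
# Sketch of an Ollama custom endpoint in librechat.yaml, per the blog post.
# baseURL assumes Ollama is reachable from the LibreChat container.
endpoints:
  custom:
    - name: "Ollama"
      apiKey: "ollama" # any non-empty string works
      baseURL: "http://host.docker.internal:11434/v1/"
      models:
        default: ["phi3"]
        fetch: true # pull the model list from Ollama
```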
Really nice! 🙌🏻
great product - thanks so much
omg i love it!
How does the LLM trigger the artifact? Like, what kind of tokens are needed for the UX to know it's code to put into the artifact?
It's prompt-based. When artifacts are on, the LLM receives instructions on how and when it should use them.
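To sketch the general pattern (the tag name below is made up for illustration; LibreChat's real prompt and markup may differ): the system prompt tells the model to wrap artifact content in a recognizable delimiter, and the client scans the streamed text for that delimiter to route content into the side panel.

```python
# Generic illustration of prompt-based artifact detection; the <artifact>
# tag here is invented for the example, not LibreChat's actual markup.
import re

ARTIFACT_RE = re.compile(r"<artifact\b[^>]*>(.*?)</artifact>", re.DOTALL)

def split_artifacts(text: str) -> tuple[str, list[str]]:
    """Return chat text with artifacts stripped, plus the artifact bodies."""
    artifacts = ARTIFACT_RE.findall(text)
    chat_text = ARTIFACT_RE.sub("[artifact rendered in side panel]", text)
    return chat_text, artifacts
```

So there are no special tokens involved; it's ordinary text that the model is instructed to emit and the UI is written to recognize.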
Looks great!
does this work with DeepSeek as well?
It should be possible since it's triggered by a prompt.
If the default prompt we're using doesn't work, you can use the custom prompt option to tweak it.
COOL!
PHENOMENAL
Wow!
Hello, I've managed to connect to Ollama Phi3 running locally, but the app does not use artifacts. Should I use another model with Ollama, or is there something else I'm missing? It answers only at the conversational level, as usual. (Later edit: the button is visible and toggled to use Artifacts, no errors show in the containers, and I run via Docker Compose.)
nvm, it started on its own after the PC went idle :))
and it stopped using artifacts again...
How'd you get Ollama working? I've been having no luck; it loads the model but gives no output.
@@WorldEnder It's likely that open source models will not follow the instructions as well as the leading models; you can experiment with this in custom prompt mode, as instructed in the video.
Bless you!!!
How do I enable this option in my installation? In my beta features, I only see "Enable switching Endpoints mid-conversation" and "Parsing LaTeX in messages (may affect performance)".
Is there a configuration to enable it or do I need a specific branch?
Latest -dev build; this was released just after 0.7.5rc1.
If you're already on latest, try clearing your browser cache.
This is amazing! Is this already available on GitHub, or is it coming in 0.7.5?
It is available!
I pulled the latest code, but it's not there. Should I enable it in the config?
How do you finance this project?
Why does this video look like me recording in Hi8 in the nineties?
Can this be used with Open WebUI?
Not at all, this is a completely different, better app :)
Does it support Gemini AI?
💯
yes!
It supports the long list it displays in the app.
So, something no one asked for, but there's still no Amazon Bedrock support? SMH...
Bedrock support is coming this week
@@LibreChat I stand corrected. Thanks.
- "No one asked for it"
+ Source?
- Trust me bro.