COMMENTS •

  • @PubgSpeed-wl8yo
    @PubgSpeed-wl8yo 6 months ago +2

    Bro, thanks for the tutorials, you are the only one on YouTube who has studied this issue in depth. Keep it up, you have no competition. Trust me, I've spent 2 months gathering information about artificial intelligence and linking it to websites and apps, and there are only a few people like you, and not everyone goes as deep as you.

    • @LaunchableAI
      @LaunchableAI 5 months ago

      Thanks for the kind words!

  • @malekaimischke2444
    @malekaimischke2444 7 months ago

    Thank you very much for making this video, @Launchable AI. Seeing how you use and think about these APIs is really helpful (particularly for largely non-technical folks like me). Appreciate it!

  • @syhintl
    @syhintl 7 months ago

    Thanks for the great content! Particularly the parts on uploading files and creating assistant files. Would love to see a deep dive on creating multiple assistant files from the Bubble frontend soon!

  • @luminrabbit9488
    @luminrabbit9488 7 months ago

    Fantastic Video, Thank You!

  • @user-sv9hj6gf2s
    @user-sv9hj6gf2s 5 months ago

    Well done folk, you are a legend!

  • @sitedev
    @sitedev 7 months ago

    Thanks for making this video - I would have struggled over the whole 'threads' bit. It makes sense now.

    • @LaunchableAI
      @LaunchableAI 7 months ago

      Glad it was helpful!

    • @sitedev
      @sitedev 7 months ago

      @@LaunchableAI I've been experimenting with RAG quite a bit (Bubble/Pinecone/Flowise). I see these assistants only allow a max of 20 files to be attached to a given assistant, and I believe each file has a max of 100k characters. My initial thinking is that the implementation of RAG inside an assistant isn't ideal, in that there does not appear to be any method of controlling or directing the retrieval process (as compared to Pinecone/metadata, for instance). I'm keen to know your thoughts on this. I'm tending toward experimenting with creating a tool that an assistant can use, where it 'hands off' the user queries to a 'Pinecone tool' along with a prompt explaining the tool's role in the whole RAG process (it simply returns relevant chunks and document references), which the assistant then uses to synthesise the response as per a typical RAG process.

    • @LaunchableAI
      @LaunchableAI 7 months ago +2

      @@sitedev You make some excellent points. I was talking to a client about exactly this, this morning. We've built a bunch of Pinecone-based storage & pre-processing bits, and are thinking about the Assistants + Files APIs as replacements.
      My thoughts currently are in line with yours. There's not quite enough flexibility with the current OpenAI options for some more complex use cases (e.g., we're doing database and S3 retrievals with LangChain and passing to Pinecone; this sort of thing isn't an option yet, and I can imagine other cases that wouldn't work either).
      That being said, I suspect they'll increase the file limit and the per-file size limit over time, so perhaps it's not the right option for some projects now, but it will be more viable soon enough.
      There's also some concern about vendor lock-in: if you do all your data storage and indexing via OpenAI, it becomes tougher to use other models / platforms. So depending on your industry / use case, that's something else to keep in mind.
      And lastly, your point about passing some helper context to the Assistants API, and using Pinecone as a "tool", is probably an excellent idea. I hadn't thought of turning Pinecone etc. into a tool that ChatGPT could call directly, but it sounds like a topic that's ripe for a tutorial video ;) If you try it, please let us know how it goes!
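The "Pinecone as a tool" hand-off discussed in this reply maps naturally onto the Assistants API's function tools. A rough sketch, with an in-memory stub standing in for the real Pinecone query (the tool schema shape follows OpenAI's function-tool format; `search_pinecone`, its fake index, and `dispatch_tool_call` are illustrative, not code from the video):

```python
# Sketch: exposing a Pinecone-style retriever as a "tool" an assistant can call.

def search_pinecone(query: str, top_k: int = 3) -> list[dict]:
    """Stand-in for a Pinecone similarity search: returns relevant chunks
    plus document references, i.e. exactly the payload a RAG step needs."""
    fake_index = [
        {"chunk": "Assistants support up to 20 attached files.", "doc": "limits.md"},
        {"chunk": "Threads store conversation state server-side.", "doc": "threads.md"},
    ]
    return fake_index[:top_k]

# Tool schema in the shape the Assistants API expects for function tools.
PINECONE_TOOL = {
    "type": "function",
    "function": {
        "name": "search_pinecone",
        "description": "Return relevant chunks and document references for a query.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string"},
                "top_k": {"type": "integer", "default": 3},
            },
            "required": ["query"],
        },
    },
}

def dispatch_tool_call(name: str, arguments: dict) -> list[dict]:
    """Route a requested tool call to its handler; the return value is what
    you would submit back to the run as the tool output."""
    handlers = {"search_pinecone": search_pinecone}
    return handlers[name](**arguments)

chunks = dispatch_tool_call("search_pinecone", {"query": "file limits", "top_k": 1})
```

At run time, when a run pauses with status `requires_action`, you would call `dispatch_tool_call` for each requested tool call and submit the result back as the tool output; the assistant then synthesises the final answer from the returned chunks, as in a typical RAG flow.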

  • @vcapp.
    @vcapp. 7 months ago

    Great explanatory video @LaunchableAI - Thank you

  • @MrJackywong8712
    @MrJackywong8712 7 months ago

    Great sharing. Thanks

  • @charliekelland7564
    @charliekelland7564 5 months ago

    Great content, thank you. I don't need this yet but may well do at some point and it's good to know it's here. I'm currently using a plugin but I don't think it does everything I'm going to need so... thanks again - subbed 👍

  • @link0171
    @link0171 2 days ago

    Incredible, you have a really great way of teaching. I'm from Brazil, and sometimes it's a bit hard to follow the video, but with patience I can understand it well.
    I'm not experienced with APIs, but I wanted to know if it's possible to build this whole system integrated with n8n. Would that use fewer WU? And then how would I connect it to Bubble?

  • @guillaume6761
    @guillaume6761 7 months ago

    Cool!

  • @Olwen89
    @Olwen89 2 months ago

    Thanks for the video! Wondering if you have an update video on how to stream the responses from OpenAI (it seems there are some recent updates that allow streaming).

    • @LaunchableAI
      @LaunchableAI 6 days ago

      Yep, the latest plugin versions and recent tutorial videos cover streaming. May also be releasing a tutorial on how to build streaming from scratch

  • @OutTitan
    @OutTitan 6 months ago

    Hey Korey, thanks so much for the video. I don't know if the documentation has changed or something, but when I try to use the "Get Threads" endpoint like you showed in the video, I'm hit with this error:
    {
      "error": {
        "message": "Your request to GET /v1/threads must be made with a session key (that is, it can only be made from the browser). You made it with the following key type: secret.",
        "type": "invalid_request_error",
        "param": null,
        "code": "missing_scope"
      }
    }
    But when I pass in the thread id, it works fine.

    • @LaunchableAI
      @LaunchableAI 6 months ago

      Yeah, I ran into this problem too and spent a while figuring it out. I thought initially that the call had to be made client-side, so I tried that. Later on in the tutorial (it might be part 2), I think I discuss that you can't actually make the GET threads request; it's not supported. You need to store your thread IDs on your own, and use those.
      Last I looked, there was an open issue on OpenAI's developer/community forum of people discussing (ahem, complaining about) this.
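Since listing threads with a secret key isn't supported, the workaround above boils down to recording each thread ID at creation time. A minimal sketch, assuming an in-memory store (`create_thread` is a stub standing in for the real POST /v1/threads call; in a Bubble app you'd save the ID to your own database instead):

```python
import uuid

def create_thread() -> dict:
    """Stand-in for POST /v1/threads; the real call returns an object with an
    'id' like 'thread_abc123' that cannot be listed again later via GET."""
    return {"id": f"thread_{uuid.uuid4().hex[:8]}", "object": "thread"}

class ThreadStore:
    """Your own record of thread IDs, since GET /v1/threads is rejected
    with 'missing_scope' when called with a secret key."""
    def __init__(self):
        self._by_user: dict[str, list[str]] = {}

    def new_thread_for(self, user_id: str) -> str:
        # Create the thread, then immediately persist its ID under the user.
        thread = create_thread()
        self._by_user.setdefault(user_id, []).append(thread["id"])
        return thread["id"]

    def threads_for(self, user_id: str) -> list[str]:
        return self._by_user.get(user_id, [])

store = ThreadStore()
tid = store.new_thread_for("user_42")
```

From then on, every run or message request uses an ID from `threads_for(...)` rather than asking the API to enumerate threads.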

    • @lukekoletsios3236
      @lukekoletsios3236 3 months ago

      @@LaunchableAI The same issue still exists :(
      Just gonna continue with the video and hope you say how to fix it lol

    • @lukekoletsios3236
      @lukekoletsios3236 3 months ago

      I think I fixed the issue. Simply change it from GET to POST.

  • @kashishvarshney2225
    @kashishvarshney2225 6 months ago

    I want to create a chatbot with dynamic data and GPT-3.5. How can I do that with Bubble? Please reply.

    • @LaunchableAI
      @LaunchableAI 5 months ago

      You can try using a plugin. Our plugin "ChatGPT Toolkit" has various functions for extracting text from files and websites. Maybe that would do the trick? It's a paid plugin ($10/mo), but you may be able to find some free alternatives if that price is too high.
      If you don't want to use a plugin, you'll probably want to find an API that can accept files or websites and return text. Then you'd pass that content to ChatGPT when you make a request.
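The extract-text-then-pass-it-along approach above amounts to stuffing your extracted content into the chat completion request body. A sketch of that assembly step (the `build_chat_request` helper is illustrative; `page_text` stands in for whatever your extraction API or plugin returns):

```python
def build_chat_request(page_text: str, question: str,
                       model: str = "gpt-3.5-turbo") -> dict:
    """Assemble the JSON body for a chat completion call that grounds the
    answer in text you extracted yourself from a file or website."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Answer using only the provided context."},
            {"role": "user",
             "content": f"Context:\n{page_text}\n\nQuestion: {question}"},
        ],
    }

# The dynamic data (here a hardcoded string) changes per request,
# which is what makes the chatbot "dynamic" without retraining anything.
body = build_chat_request("Our store opens at 9am.", "When do you open?")
```

In Bubble, this body would be the JSON you send from the API Connector; swapping `page_text` per user or per page is what keeps the answers current.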

  • @guillaume6761
    @guillaume6761 7 months ago

    Is that the start of a series?

    • @LaunchableAI
      @LaunchableAI 7 months ago +3

      If there's interest in expanding, maybe I'll make some more on the topic, sure.

  • @user-sv9hj6gf2s
    @user-sv9hj6gf2s 5 months ago

    Just one detail: GPT-3 is not compatible, and the GPT-4 subscription plan costs a fortune. USD 22 is too much to run assistants.

    • @LaunchableAI
      @LaunchableAI 5 months ago

      Yep, it's kind of pricey, esp. if you're outside North America or Europe. I use gpt-4 pretty much every day for my work, so it's worth it for me, but I can see that it wouldn't be worth it if you're only using it occasionally or only for this one feature.