CODE: Fine-Tune a Vision Language Model (VLM), e.g. PaliGemma-3B

  • Published Oct 17, 2024
  • Free Colab notebook to fine-tune new Vision Language Models (VLMs) on your own datasets, including code to fine-tune simple LLMs with LoRA (a minimal sketch follows at the end of this description).
    From simple PyTorch notebooks to advanced, massively parallel 8-TPU JAX/Flax notebooks for fine-tuning LLMs and VLMs with Keras 3.
    Full code examples, plus my recommendations for free compute infrastructure to run these examples: Google Colab, Vertex AI, Model Garden, Kaggle, etc.
    Fine-tune PaliGemma (from the beginning of my video), ipynb:
    github.com/goo...
    And if you only want to PEFT fine-tune your Gemma model (an LLM), I recommend this guide (with full model parallelism):
    ai.google.dev/...
    For advanced coders: inference with Gemma using JAX and Flax, which runs on a free Google T4 GPU:
    ai.google.dev/...
    All rights with the authors:
    keras.io/guide...
    ai.google.dev/...
    ai.google.dev/...
    #airesearch
    #ai
    #aicoding
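
    A minimal sketch of the LoRA fine-tuning flow described above, using Keras 3 with KerasNLP; the preset name, dataset, and hyperparameters here are placeholder assumptions, not the notebook's exact code:

    import keras
    import keras_nlp

    # Load a Gemma causal LM from a KerasNLP preset (placeholder preset name).
    gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")

    # Enable LoRA on the backbone: only the low-rank adapter weights train.
    gemma_lm.backbone.enable_lora(rank=4)
    gemma_lm.preprocessor.sequence_length = 256

    gemma_lm.compile(
        loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        optimizer=keras.optimizers.AdamW(learning_rate=5e-5, weight_decay=0.01),
        weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
    )

    # Hypothetical instruction-tuning data: a list of formatted prompt strings.
    data = ["Instruction:\nWhat is a VLM?\n\nResponse:\nA vision language model."]
    gemma_lm.fit(data, epochs=1, batch_size=1)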

COMMENTS • 5

  • @propeacemindfortress
    5 months ago +1

    There's also a Keras PyTorch wrapper for those looking back on plenty of useful PyTorch code; yes, you can take it with you 😉
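
    A minimal sketch of what "taking PyTorch with you" looks like: in Keras 3 the PyTorch integration is a selectable backend, chosen via an environment variable before keras is imported (assuming Keras 3 and PyTorch are installed):

    import os
    os.environ["KERAS_BACKEND"] = "torch"  # must be set before importing keras

    import keras
    import torch

    # This Keras model now computes with torch tensors under the hood.
    model = keras.Sequential([
        keras.layers.Dense(8, activation="relu"),
        keras.layers.Dense(1),
    ])

    x = torch.randn(4, 16)   # plain torch tensors are accepted as input
    print(type(model(x)))    # -> <class 'torch.Tensor'>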

  • @barderino5673
    5 months ago +1

    I'm confused: why does it take more VRAM for me? I'm using PEFT LoRA with rank 8 and targeting just 2 layers.
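
    A sketch of the setup this comment describes (PEFT LoRA, rank 8, two target modules); the model name and module names are assumptions. One plausible explanation for the VRAM usage: LoRA only shrinks the trainable parameters and their optimizer state, while the frozen base weights and the activations still sit on the GPU.

    import torch
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    # Placeholder base model; its frozen weights dominate VRAM either way.
    base = AutoModelForCausalLM.from_pretrained(
        "google/gemma-2b",
        torch_dtype=torch.bfloat16,  # half precision halves the weight footprint
    )

    lora_config = LoraConfig(
        r=8,                                  # rank 8, as in the comment
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],  # just 2 projection layers
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(base, lora_config)
    model.print_trainable_parameters()  # trainable share is tiny; VRAM is not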

  • @JaqUkto
    4 months ago +1

    Thank you very much!

  • @samirelamrany-bi7dy
    3 months ago +1

    Thank you

  • @ernestoflores3873
    5 months ago +1

    🔥