Generative AI with Google Cloud: Tuning Foundation Models in Vertex Generative AI Studio

  • Published Dec 1, 2024

COMMENTS • 6

  • @BillionaireBites00 9 months ago +2

    00:06 Discussion on tuning foundation models and their complexities
    02:10 Introduction to tuning large language models
    06:08 Different models like T5, Bison, Chat Bison, and DSS are available for generative AI
    08:00 Advantages of fine-tuning for model customization and personalization
    11:48 Tuning involves adding an adapter or optimizing layer activations
    13:44 Adapter tuning allows model optimization without additional cost
    17:30 Tuning smaller models from a teacher model's rationales
    19:04 Tuning foundation models in Vertex Generative AI Studio
    22:17 Fine-tuning with a reward function and feedback for model refinement
    23:50 Developing a policy for language model responses
    27:11 Adapter tuning is essential for tuning foundation models in Vertex AI Studio
    29:00 RLHF helps optimize model performance with human feedback
    32:45 Tuning foundation models involves a pre-trained LLM, fine-tuning, and adapter tuning
    34:42 Creating and tuning adapter models for task-specific datasets
    38:32 Key considerations for fine-tuning AI models
    40:18 Using text-to-SQL with the Code Bison model to generate queries for the BigQuery engine
    44:10 The model is stored in GCS; a parameter decides the path, and input datasets are uploaded for tuning
    46:15 Google Cloud stores artifacts related to the model in a GCS bucket
    50:07 Fine-tuning and embedding concepts explained
    51:53 Feedback for upcoming sessions and recommendations for customization
    55:29 The decision to fine-tune or tweak prompts depends on business stability and specific needs
    57:14 Incremental tuning recommended for new data and large data volumes
    1:00:55 Tuning foundation models and using XAI with Google Cloud
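
A minimal sketch of the tuning flow these timestamps walk through (upload a JSONL dataset to a GCS bucket, launch an adapter-tuning job, read artifacts back from GCS), using the 2023-era Vertex AI Python SDK. The project ID, bucket path, and step count below are placeholders, and the SDK surface may have changed since the session:

    import vertexai
    from vertexai.language_models import TextGenerationModel

    # Placeholder project and region; adjust to your environment.
    vertexai.init(project="my-project", location="us-central1")

    model = TextGenerationModel.from_pretrained("text-bison@001")

    # training_data points at a JSONL dataset in a GCS bucket. Vertex AI trains
    # a small adapter on top of the frozen foundation model and writes the
    # resulting artifacts back to a GCS bucket (see the 44:10 and 46:15 marks).
    tuning_job = model.tune_model(
        training_data="gs://my-bucket/tuning_data.jsonl",  # placeholder path
        train_steps=100,
        tuning_job_location="europe-west4",
        tuned_model_location="us-central1",
    )

    tuned_model = tuning_job.get_tuned_model()
    print(tuned_model.predict("Write a BigQuery query that counts rows in my_table."))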

  • @sowrabhsanathkumar6615 9 months ago

    @35:38 The JSON format is confusing: the first example has only 'Input_text' and 'output_text', but the second example also has 'context'. What is the right format? Is this intentional?
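
A minimal sketch of the two record shapes the question refers to, assuming (an assumption, not confirmed by the session) that 'context' is an optional field that chat-style examples add on top of the base input/output pair; the lowercase field casing below is also an assumption:

    import json

    # Base text-tuning record: input/output pair only, as in the first example.
    text_record = {
        "input_text": "Summarize: Vertex AI lets you tune foundation models.",
        "output_text": "Vertex AI supports foundation model tuning.",
    }

    # The second example's shape: an extra "context" field that primes the
    # model. Treating "context" as optional is an assumption, not verified docs.
    chat_record = {
        "context": "You are a helpful assistant for Google Cloud questions.",
        "input_text": "How do I tune a model?",
        "output_text": "Upload a JSONL dataset to GCS and start a tuning job.",
    }

    # Tuning datasets are JSON Lines: one JSON object per line.
    with open("tuning_data.jsonl", "w") as f:
        for record in (text_record, chat_record):
            f.write(json.dumps(record) + "\n")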

  • @SteigerMiller 9 months ago

    Are the slides from this talk available anywhere?

  • @WeylandLabs 9 months ago +1

    I thought about getting a cert from Google, and considered Microsoft, but I chose IBM because the future isn't one type of machine learning; it's all of them.

  • @shyamkadari 7 months ago +1

    Very confusing presentation and discussion. They talk about too many things too quickly.
