VulnerabilityGPT: Cybersecurity in the Age of LLM and AI

Поділитися
Вставка
  • Опубліковано 23 січ 2025

КОМЕНТАРІ • 12

  • @karengomez3143
    @karengomez3143 5 місяців тому +1

    Takeaways:
    GPT is making many structured relation placement between words in different levels (layers) so different inputs could bring a set of outputs, but it's not a DB, and it's not searching for patterns within a created DB.
    Within the GPT answers are the alignment response rules, what would be if a response is following the user's request in spite of company intent or social or compliance rules.
    GPT models are not that good at making a whole story or remembering a conversation, so it's not good in making novels, but it has a window response that would be good from a user's point of view aligning to their intend. Guardrails are limits or ways to make a system in place to follow alignments.
    Grounding as a hallucination mechanism, providing context to the user's query through a database management (large language model), so whenever the user is asking a question that needs more info about, or that is recent, the app would bring another page, just like google would retrieve twitter webpage when someone is asking for it.
    AI application: Scammer response generator

  • @georgeb8637
    @georgeb8637 Рік тому +5

    8:00 - all letters in English language
    9:41 neural network
    22:13 - AI confessing love
    26:58 Hallucination
    32:06 prompt engineering
    40:53 - AI apology 😂
    46:58 - Go game beat by human
    54:00 - sequencing attack

  • @manamsetty2664
    @manamsetty2664 Рік тому +2

    Awesome talk 👏
    Really good explanation about what AI is doing
    Great animations
    Was always engaged throughout the talk
    Questions need to be audible though that was the only issue

  • @ChrisLeftBlank
    @ChrisLeftBlank 10 місяців тому

    This is true AI Safety, all the closed-sourced policy holders guiding the system is doing is showing the AI how to say no to end-user. I mean alignment is not a bad thing but the block box approach is just tuning models to select what human alignment is for the user.

  • @karengomez3143
    @karengomez3143 5 місяців тому

    Takeaways:
    Attacks:
    -Injection (silly activities could defeat an AI model, since this data is not in the training data).
    -Grounding (allows an AI to show false outputs, through data creation, (Search, Engine, Optimization) and then the result is shown by the AI.
    -Prompt Hijacking (when the context is modified by someone that does not have the authority to do it, like a user's input being treated as a developers).
    Exploits:
    -Conversation attacks to Business flaws (wrong discounts, upgrades, math)
    -Guardrails attacks

  • @rumpelstiltskin9729
    @rumpelstiltskin9729 Рік тому +3

    The news segments were so cringe

  • @achunaryan3418
    @achunaryan3418 Рік тому +2

    AAAA

    • @manamsetty2664
      @manamsetty2664 Рік тому

      At the beginning of the talk i thought this was a random comment but the end made it clear.

  • @Carnyride79
    @Carnyride79 10 місяців тому

    Good talk but you like to stroke your ego quite often and to say Elon doesn't know what he's talking about is a stretch

  • @d_lom9253
    @d_lom9253 Рік тому

    This is only helpful for a very niche crowd. If your have to protect your network or anything like that, wasting time

  • @8starsAND
    @8starsAND 10 місяців тому

    Sans is very overrated, I don’t know how they got so big