KGC23 Keynote: The Future of Knowledge Graphs in a World of LLMs - Denny Vrandečić, Wikimedia

  • Published 18 Jul 2023
  • Keynote Session: The Future of Knowledge Graphs in a World of Large Language Models by Denny Vrandečić (Wikimedia Foundation).
    Denny Vrandečić is Head of Special Projects at the Wikimedia Foundation, leading the development of Wikifunctions and Abstract Wikipedia. He is the founder of Wikidata, co-creator of Semantic MediaWiki, and a former elected member of the Wikimedia Foundation Board of Trustees. He worked for Google on the Google Knowledge Graph and has a Ph.D. in Semantic Web and Knowledge Representation from the Karlsruhe Institute of Technology.
    Large Language Models such as GPT or LLaMA profoundly change our understanding of what computers can do. What is the role of Knowledge Graphs in a future where Large Language Models may reign supreme? Both Knowledge Graphs and Large Language Models have incurable weaknesses: brittleness on the one side and the tendency to hallucinate on the other. And both have unique strengths. This talk predicts the future, showing how KGs and LLMs will complement each other, allowing us to reap the benefits of both technologies.
    Speaker: Denny Vrandečić, Wikimedia Foundation.
    Like what you see?
    Expect to see keynotes, talks, and masterclasses by experts in artificial intelligence, machine learning, and data mining at The Knowledge Graph Conference 2024!
    Get ready to explore, connect, and innovate with the Knowledge Graph community.
    Save the date for the Knowledge Graph Conference: May 6-10, 2024!
    Book your tickets for KGC 2024 here today ➡ events.knowledgegraph.tech/
    #LLMs #GPT #artificialintelligence
  • Science & Technology

COMMENTS • 10

  • @infraia
    @infraia 1 month ago

    Excellent presentation Denny!

  • @AEVMU
    @AEVMU 5 months ago +3

    Decentralized knowledge graphs deserve more attention.

  • @kevon217
    @kevon217 10 months ago +8

    "it's complicated", love it

  • @Salfie007
    @Salfie007 11 months ago +8

    Fast forward to 2:30 for better audio.

  • @paulina5247
    @paulina5247 3 months ago

    Such a great presentation! I learned a lot, thank you!

  • @mandymoo1188
    @mandymoo1188 5 months ago +1

    🎯 Key Takeaways for quick navigation:
    00:00 *🎤 Introduction to Keynote*
    - Introduction to the keynote session by Denny Vrandečić from Wikimedia Foundation.
    01:23 *🌐 Challenges and Changes in Knowledge Graphs and LLMs*
    - Knowledge graphs and LLMs are rapidly evolving, challenging existing paradigms.
    - Adoption of LLMs like GPT-3 has been unprecedented, impacting various sectors globally.
    - Researchers and practitioners are adapting to the implications of LLMs on knowledge graphs.
    04:03 *⚠️ Narrow Focus and Disclaimers*
    - The talk specifically addresses the interaction between knowledge graphs and LLMs.
    - Disclaimers: The presentation does not include AI-generated content and avoids broader ethical and legal implications.
    05:13 *🧠 Understanding Knowledge Graphs and Large Language Models*
    - Knowledge graphs represent relationships between entities, stored in graph databases like Wikidata.
    - Large language models (LLMs), exemplified by GPT-3, are neural networks trained on vast textual data.
    - LLMs, despite their capabilities, face challenges in computational efficiency compared to knowledge graphs.
    10:23 *💡 Costs and Technical Challenges of LLMs*
    - LLMs incur high computational costs for both inference and training, posing financial and technical challenges.
    - Even with optimization efforts, LLMs remain computationally intensive compared to traditional knowledge graph lookup methods.
    - Industry leaders acknowledge the substantial computational overhead of LLMs.
    11:33 *🔄 Evolving Landscape of LLMs*
    - The pace of change in the LLM landscape is rapid, with indications that the era of large language models might be waning.
    - Innovations like Meta's LLaMA model highlight the community's adaptability and creativity beyond GPT-3.
    - Technical limitations, including diminishing returns and cost concerns, influence the direction of LLM development.
    12:53 *🌐 Challenges in Information Accuracy and Consistency*
    - Information accuracy and consistency pose challenges across platforms, exacerbated by reliance on sources like Wikipedia.
    - Discrepancies in information retrieval from platforms like Google, Bing, and LLMs reflect broader issues in data accuracy and verification.
    - Language-specific variations in information retrieval underscore the complexities of maintaining accurate knowledge bases.
    17:01 *🤔 Limitations and Inefficiencies of LLMs*
    - LLMs exhibit limitations in handling specific queries, particularly those requiring mathematical operations or nuanced understanding.
    - The efficiency and reliability of knowledge retrieval through LLMs are questioned compared to structured knowledge bases like Wikidata.
    - Alternative approaches, such as augmented language models, offer potential solutions to mitigate LLM limitations.
    20:09 *🧠 Understanding Knowledge Storage in Large Language Models*
    - Large language models (LLMs) store knowledge in their parameters.
    - Parameters in LLMs are essential for tasks like text-to-image generation.
    - Comparison of parameter counts between Stable Diffusion and GPT-3.
    22:19 *📚 Role of Knowledge Graphs in Text Generation*
    - Questioning the necessity of vast parameter sizes in LLMs for text generation.
    - Introducing knowledge graphs as efficient knowledge extraction mechanisms.
    - Using knowledge graphs to store, curate, and extract valuable information.
    23:15 *💡 Significance of Knowledge in a World of LLMs*
    - Emphasizing the value of knowledge in a world of infinite content generation.
    - Utilizing LLMs for knowledge extraction and symbolic representation.
    - Highlighting the importance of overfitting for truth in symbolic systems.
    25:12 *🌐 Extending the Expressivity of Knowledge Graphs*
    - Discussing the limitations of knowledge graphs in terms of expressivity.
    - Introducing initiatives like Wikifunctions to enhance expressivity.
    - Proposing the introduction of a new special value, "it's complicated," in knowledge graphs.
    26:50 *🚀 Enhancing the Future with Knowledge Graphs and LLMs*
    - LLMs have limitations including hallucinations, expense, and difficulty in auditing.
    - Knowledge graphs can address these limitations and provide ground truth for LLMs.
    - The future of knowledge graphs is promising, especially in conjunction with LLMs.
    30:08 *💰 Cost Consideration in Knowledge Extraction*
    - Comparing the cost-effectiveness of using LLMs versus knowledge graphs for answering questions.
    - Considering whether cost consciousness will impact the hype around LLMs.
    - Money is a significant factor influencing the adoption and sustainability of LLMs.
    Made with HARPA AI
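
    The contrast summarized above at 05:13 and 10:23, a cheap knowledge-graph lookup versus a full LLM forward pass, can be illustrated with a minimal in-memory triple store. This is a toy sketch of the general idea only (the class and data below are invented for illustration; Wikidata's real interface is SPARQL, not this API):

    ```python
    # Toy triple store: facts are (subject, predicate, object) edges,
    # and answering a question is an O(1) index lookup rather than a
    # forward pass through billions of parameters.
    from collections import defaultdict

    class TripleStore:
        def __init__(self):
            # Index objects by (subject, predicate) for constant-time lookup.
            self.index = defaultdict(set)

        def add(self, subject, predicate, obj):
            self.index[(subject, predicate)].add(obj)

        def query(self, subject, predicate):
            # Unknown pairs return an empty set: "no answer" is explicit,
            # where an LLM might hallucinate one instead.
            return self.index.get((subject, predicate), set())

    kg = TripleStore()
    kg.add("Douglas Adams", "occupation", "novelist")
    kg.add("Douglas Adams", "educated at", "St John's College")

    print(kg.query("Douglas Adams", "educated at"))
    ```

    The point of the sketch is the cost model, not the data structure: retrieval from a curated symbolic store is auditable and cheap, which is exactly the complementarity the keynote argues for.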

  • @2DReanimation
    @2DReanimation 8 months ago +2

    22:00: Yes, that's really where KG's would be optimal when integrated into an LLM -- trivia / facts that can't be reduced further than nodes and links in a KG.
    I mean if you think about the modelling horsepower that would remain in a 170B ANN after outsourcing fact learning and retrieval would be insane.

  • @NataliiaLytvyn
    @NataliiaLytvyn 9 months ago +2

    Really makes sense!

  • @StanleyDenman
    @StanleyDenman 24 days ago

    What in the world are you saying!