Have you heard these exciting AI news? - May 10, 2024 - AI Updates Weekly

  • Published May 31, 2024
  • Presented by Lev Selector
    Slides - github.com/lselector/seminar/...
    --------- My websites:
    - Enterprise AI Solutions - EAIS.ai
    - LinkedIn - /levselector
    - GitHub - github.com/lselector
    --------- Contents of today's video:
    Llama3 Continued Success
    Apple M4 Chip - Available in iPad Pro on May 15
    Microsoft MAI-1
    AI-enabled Web Search
    DeepSeek-V2, open-source, 236B, ~GPT-4
    Large LLMs are better at "generalizing"
    Regulatory Capture
    Andrew Ng about AI and regulations
    Yoshua Bengio - Fears About the Future of AI
    My "ai" GitHub repo - LLM & RAG
    Most profitable business ideas with AI
    Galileo Protect
    Information Overload
    Selling Information Stopped Working
    OpenAI will use Stack Overflow's OverflowAPI
    Warren Buffett warns about AI scamming
    X.ai Grok + X.com: news + comments
    ScrapeGraphAI - web scraping using AI
    DocRes - Document Image Restoration
    DeepMind AlphaFold 3
    OpenAI Public Model Specification
    DeepLearning.ai - Agentic RAG Course
    Mistral AI valued at $6 Billion
    Alibaba Qwen2.5
    Gemma with a 10M-token context window
    Phi-3 WebGPU runs in browser
    Guardian for CIA (by Microsoft)
    IBM Granite Code - open-source LLMs
    Massive prompts can outperform fine-tuning
    Ilya Sutskever's list of papers
    Crowd-sourced "Arena" Leaderboard
    Tech Layoffs were lower than in 2023

COMMENTS • 5

  • @devbites77
    21 days ago +1

    This is a great roundup of AI news and updates. You should definitely have more views and likes.

  • @eliastsoukatos2
    21 days ago +2

    Thanks for these videos, Lev. I watch them all.

  • @user-bd8jb7ln5g
    19 days ago

    I just made a comment on another channel that in-context learning is probably now a better way to provide data and instructions to LLMs (fine-tuning having become a very sensitive process). And now you post research proclaiming the same. It's especially effective now that we are getting 10M context windows.
    BTW, the best GPTs I have previously created do exactly that.
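    The commenter's point — that with large context windows you can put data and instructions directly into the prompt instead of fine-tuning — can be sketched as simple prompt assembly. This is a minimal illustrative sketch; the function and field names are assumptions, not any specific vendor's API:

    ```python
    # In-context learning sketch: pack instructions, reference documents,
    # and few-shot examples into one long prompt, rather than fine-tuning.

    def build_prompt(instructions, documents, examples, question):
        """Assemble a single long-context prompt from its parts."""
        parts = [f"Instructions:\n{instructions}"]
        for i, doc in enumerate(documents, 1):
            parts.append(f"Document {i}:\n{doc}")
        for q, a in examples:                      # few-shot demonstrations
            parts.append(f"Q: {q}\nA: {a}")
        parts.append(f"Q: {question}\nA:")         # the actual query
        return "\n\n".join(parts)

    prompt = build_prompt(
        instructions="Answer using only the documents below.",
        documents=["Llama 3 was released by Meta in April 2024."],
        examples=[("Who released Llama 3?", "Meta")],
        question="When was Llama 3 released?",
    )
    print(prompt)
    ```

    The resulting string would be sent as the model input; with a 10M-token window, `documents` could hold entire manuals or codebases that would otherwise require fine-tuning.
    
    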

  • @user-bd8jb7ln5g
    19 days ago

    It's massively hypocritical that OpenAI, having trained on public and often copyrighted data, now wants to turn around and demand regulations to have its model weights protected.