AgenticAlex
Your browser history knows you way better than you think
Your browsing habits reveal who you are. Using Chrome's site engagement metrics and ChatGPT, you can spot patterns in your online behavior and turn that data into a detailed user persona that shows what you're really into right now, and who you are.
In this video:
💻 14" MacBook Pro M4 Pro: amzn.to/3ANEPwB
🎤 Microphone: amzn.to/3AFgvNw
🖱️ Mouse: amzn.to/3Z3pal4
⌨️ Keyboard: amzn.to/3OdkjZv
Site engagement on Chrome:
chrome://site-engagement/
Andrea Volpini's Tweet:
x.com/cyberandy/status/1862510466855051681
Prompt:
promptden.com/post/extract-user-personas-from-chrome-engagement-metrics
Site engagement metrics explained:
dejan.ai/blog/site-engagement-metrics/
Views: 649
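Before pasting the chrome://site-engagement table into ChatGPT, it can help to rank the origins by score so the prompt leads with your most-visited sites. A minimal sketch, assuming you've copied the table into text where each line is "<origin> <total_score>" (the real page layout may differ slightly):

```python
# Sketch: rank origins from chrome://site-engagement before building the
# persona prompt. Input format is an assumption: one "<origin> <score>" per line.

def parse_engagement(text: str) -> list[tuple[str, float]]:
    """Return (origin, score) pairs sorted by score, highest first."""
    rows = []
    for line in text.strip().splitlines():
        parts = line.split()
        if len(parts) < 2:
            continue  # skip blank or malformed lines
        origin, score = parts[0], parts[-1]
        try:
            rows.append((origin, float(score)))
        except ValueError:
            continue  # non-numeric score column, e.g. a header row
    return sorted(rows, key=lambda r: r[1], reverse=True)

if __name__ == "__main__":
    sample = """
    https://docs.google.com 92.5
    https://news.ycombinator.com 41.0
    https://linkedin.com 77.3
    """
    for origin, score in parse_engagement(sample):
        print(f"{score:6.1f}  {origin}")
```

The sorted list can then be pasted under the persona prompt linked above.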

Videos

Create AI images on your computer (in 80 seconds)
473 views · 1 month ago
Learn how to create images on your computer, for free, with quality as good as MidJourney. To do so, we'll use DiffusionBee and the Flux Schnell (or Flux Dev) models. Follow along to create AI images for free! Latest version of DiffusionBee on GitHub: github.com/divamgupta/diffusionbee-stable-diffusion-ui/releases In this video: 💻 14" MacBook Pro M4 Pro (12 cores): amzn.to/3ANEPwB 💨 TG Pro (C...
I tried to run a 70B LLM on a MacBook Pro. It didn't go well.
25K views · 1 month ago
Today, we're trying to load and use a 70B LLM with ollama on a 14" M4 Pro MacBook Pro with 48GB RAM. Will it work? In this video: 💻 14" MacBook Pro M4 Pro (12 cores): amzn.to/3ANEPwB 💨 TG Pro (CPU GPU cores temps and fan speed): www.tunabellysoftware.com/tgpro/index.php?fpr=d157l 🎤 Microphone: amzn.to/3AFgvNw 🖱️ Mouse: amzn.to/3Z3pal4 ⌨️ Keyboard: amzn.to/3OdkjZv I tested 7 small LLMs locally t...
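A quick back-of-the-envelope check explains why the 70B run struggles on 48GB. Assuming 4-bit quantization (~0.5 bytes per weight, a rough rule of thumb for q4 builds) and macOS capping GPU-addressable unified memory at roughly 75% of RAM (also an approximation, not an exact figure):

```python
# Rough estimate: does a quantized 70B model fit in the GPU memory budget?
# Both 0.5 bytes/weight and the 75% GPU fraction are rules of thumb.

def model_size_gb(params_billion: float, bytes_per_weight: float) -> float:
    """Approximate size of the weights alone, in GB."""
    return params_billion * 1e9 * bytes_per_weight / 1e9

def fits(params_billion: float, bytes_per_weight: float,
         ram_gb: float, gpu_fraction: float = 0.75):
    need = model_size_gb(params_billion, bytes_per_weight)
    budget = ram_gb * gpu_fraction
    return need, budget, need <= budget

need, budget, ok = fits(70, 0.5, 48)
print(f"model ~{need:.0f} GB, GPU budget ~{budget:.0f} GB, fits: {ok}")
```

The weights alone come to about 35 GB against a ~36 GB budget, so KV cache and runtime overhead leave essentially no headroom, which matches the sluggish result in the video.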
What is the fastest LLM to run locally? Let's find out.
6K views · 1 month ago
Today, we're testing 7 LLMs with ollama on a 14" M4 Pro MacBook Pro with 48GB RAM to find which small LLM (SLM) is the best for writing locally, and the fastest. In this video: 💻 14" MacBook Pro M4 Pro (12 cores): amzn.to/3ANEPwB 💨 TG Pro (CPU GPU cores temps and fan speed): www.tunabellysoftware.com/tgpro/index.php?fpr=d157l 🎤 Microphone: amzn.to/3AFgvNw 🖱️ Mouse: amzn.to/3Z3pal4 ⌨️ Keyboard: amz...
How LOUD are the fans on the 14" MacBook Pro M4 Pro? (fan noise @ 7700 RPM)
15K views · 2 months ago
What's the 14" MacBook Pro M4 Pro fan noise like? Today, I put the 12-core version to the test, with ramping up, TG Pro in max mode (reaching the top speed of 7700 RPM) and the cooldown speed. Is it as bad as the 2019 16" Intel MacBook Pro? In this video: 💻 14" MacBook Pro M4 Pro (12 cores): amzn.to/3ANEPwB 💨 TG Pro (CPU GPU cores temps and fan speed): www.tunabellysoftware.com/tgpro/index.php?...
I found the best sound effect AI can generate 😂
2.7K views · 7 months ago
Today I'm testing Eleven Labs' Text To Sound Effects AI sound effect generator and the results are... interesting. Get ElevenLabs: elevenlabs.io/?from=partnergarcia5904
How to Setup Cloudflare DNS FAST (2024 update)
60K views · 7 months ago
Here is how to setup Cloudflare DNS for your domain name (updated tutorial for 2024) ⚡️ I buy my domains on Dynadot. It's a fast and cheap registrar. www.dynadot.com?s8z6k8T6Q6Cp8U6w
Google Search Console Setup (How to Add a Domain to GSC)
99K views · 2 years ago
Here is how to add your domain to Google Search Console. The setup is pretty easy. 📺 Cloudflare DNS setup 👉 ua-cam.com/video/9vaiZQtL9lQ/v-deo.html ⚡️ I buy my domains on Dynadot. It's a fast and cheap registrar. www.dynadot.com?s8z6k8T6Q6Cp8U6w
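Domain-level verification in Google Search Console works by asking you to add a TXT record at your DNS host (e.g. Cloudflare). An illustrative record, with a placeholder token:

```text
Type:    TXT
Name:    @  (the root of your domain, e.g. example.com)
Content: google-site-verification=XXXXXXXXXXXXXXXX
TTL:     Auto
```

The exact token is generated by GSC during setup; once the record propagates, verification usually succeeds within minutes.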
How to Setup Cloudflare DNS (2022 update) [FAST]
325K views · 3 years ago
UPDATED VERSION FOR 2024 👉 ua-cam.com/video/9vaiZQtL9lQ/v-deo.html Here is how to setup Cloudflare DNS for your domain name. Updated tutorial for 2022 ⚡️ I buy my domains on Dynadot. It's a fast and cheap registrar. www.dynadot.com?s8z6k8T6Q6Cp8U6w
How to Fix the "This domain is already in use" Error in Google Workspace
40K views · 4 years ago
Here is how to fix the "this domain is already in use" or the "this domain name has already been used as an alias or domain" error in Google Workspace (formerly G Suite). 📺 Cloudflare DNS setup 👉 ua-cam.com/video/9vaiZQtL9lQ/v-deo.html Make sure you or someone in your organisation doesn't already own or manage your domain name support.google.com/a/answer/80610?hl=en&ref_topic=1687139 Fill this ...

COMMENTS

  • @RichArtLove
    @RichArtLove 9 days ago

    How can I change from my Cloudflare free-account nameservers to my web host's nameservers, and move DNS control from Cloudflare to my hosting cPanel?

  • @DK-ox7ze
    @DK-ox7ze 10 days ago

    Is the 70B one 4-bit quantized?

  • @ErinMartijn
    @ErinMartijn 12 days ago

    I just ran an 8B Hermes3 model on a 9-year-old 2016 MacBook Pro (Core i7 with just 2 cores and 16GB of RAM). It was quite slow, about 2 words per second, but it worked!

  • @b.c.2177
    @b.c.2177 12 days ago

    You need an external NVIDIA Project Digits if you want high performance for big LLMs. It will be available for sale in May 2025.

  • @Sigmamale_629
    @Sigmamale_629 13 days ago

    Thanks bro, you explain things in simple words.

  • @SonLeTyP9496
    @SonLeTyP9496 13 days ago

    I wonder how much LLM performance would improve with the M4 Pro with the 20-core GPU.

  • @AlexMint
    @AlexMint 14 days ago

    The problem with these types of videos is how dependent they are on your own audio setup. It's really something that either needs hard decibel numbers or has to be experienced in person.

  • @kevenCodes
    @kevenCodes 15 days ago

    Follow-up: how is that noise level treating you?

  • @raymondong4504
    @raymondong4504 16 days ago

    Why did you go with the 12/16 instead of the 14/20 spec? I’m debating which one to get as a dev who needs to run Parallels with Windows for some Windows desktop development.

  • @SonLeTyP9496
    @SonLeTyP9496 17 days ago

    After watching your clip, I wonder how the M4 Pro with 14 CPU / 20 GPU cores would perform.

  • @chidorirasenganz
    @chidorirasenganz 18 days ago

    Have you tried something like Private LLM for comparison in terms of speed?

  • @mpic6
    @mpic6 19 days ago

    Have you tried MLX versions of llama to see if you're able to go bigger using Apple GPU optimized models?

  • @andreatramacere
    @andreatramacere 20 days ago

    As a 16" MBP 2019 owner, I think it would be nice to run the same test with low power mode on, just to evaluate the best tradeoff between performance and fan noise. Thanks for the video!

  • @raymondong4504
    @raymondong4504 25 days ago

    Knowing now how the LLMs perform locally, would you have gotten a Max and at least a 64GB MBP instead?

  • @shukrans7623
    @shukrans7623 25 days ago

    The fan is loud and comes on straight away.

  • @andrei.avramescu
    @andrei.avramescu 27 days ago

    Between the M3 Max (30 GPU cores but only 36GB RAM) and the M4 Pro with 48GB RAM, which would be faster?

  • @mikhailkalashnik0v
    @mikhailkalashnik0v 28 days ago

    Do the bigger models require more RAM only, or RAM + GPU?

  • @justinhalsall4077
    @justinhalsall4077 1 month ago

    Can you run any of these on the machine’s Neural Engine cores?

  • @BeatSaberIsFun
    @BeatSaberIsFun 1 month ago

    where 405b?

  • @rbvan
    @rbvan 1 month ago

    First off, I want to say you do an excellent job with your videos. As someone 100% brand new to LLMs (I sold my 2019 MacBook so I don't even have a computer right now and am deciding which model to get), I'm wondering if you have any beginner courses on what software to install, etc., to get up and running. I mean from absolute beginner to the point you're at when running these models? Any help or suggestions would be greatly appreciated. Happy Holidays and best wishes for a fabulous 2025! 🎉 🙏

  • @jobs2132
    @jobs2132 1 month ago

    I have an MBP M1 with 16GB RAM running 70B Llama 3.1, and it's fast.

  • @rockonhero3611
    @rockonhero3611 1 month ago

    As if you could do much with 70B Llama. Try DuckDuckGo chat. It's mostly useless for complex tasks.

  • @FadedGlint
    @FadedGlint 1 month ago

    Great video! Does anyone know what widget or application he's using to monitor the Mac's temperature and fan speed?

    • @amitsamra1937
      @amitsamra1937 1 month ago

      TG Pro. Standard app everyone uses.

  • @jaggyjut
    @jaggyjut 1 month ago

    What about the Mac mini M4 Pro with 64GB?

  • @fVNzO
    @fVNzO 1 month ago

    Did you maximize VRAM allocation to get the whole model into memory? If not, that will explain why it's so slow.

  • @tomerweiss4900
    @tomerweiss4900 1 month ago

    4GB-8GB goes to the screen and GPU.

  • @MartinBenesCreative
    @MartinBenesCreative 1 month ago

    Thanks for the review. Can you confirm whether the MBP is generally as quiet as a fanless MacBook Air?

    • @nniklask
      @nniklask 1 month ago

      Doing MacBook Air-level tasks will keep this thing quiet. You usually shouldn't compare the two by noise: if you need the performance you go Pro, if you don't you go Air, regardless of fan noise. The fans only turn on during the heavy, prolonged tasks you typically wouldn't buy a MacBook Air for.

    • @MartinBenesCreative
      @MartinBenesCreative 1 month ago

      @nniklask Agree. I'm kind of in the middle. My Air is enough for me, but some tasks would be better with the Pro. On the other hand, I'm used to the total silence of the Air and I don't want to go back in that sense. So it's fine for me to hear the fans once in a while, but not like the old 2014 MacBook Pros used to. To me, the absence of noise is actually quite important.

    • @AgenticAlex
      @AgenticAlex 1 month ago

      @MartinBenesCreative it's exactly that - dead quiet most of the time (fans are off) and some mild noise when you push it

    • @MartinBenesCreative
      @MartinBenesCreative 1 month ago

      @AgenticAlex Thanks buddy, I think I'll go for the 14-inch Pro then.

  • @mydayq
    @mydayq 1 month ago

    You should use LM Studio; it will be faster.

  • @AskOn-yb9lz
    @AskOn-yb9lz 1 month ago

    Excuse me, brother, I want to communicate with you. It is not an important matter. I have a problem. I hope you can help me. I have a problem with Cloudflare.

  • @RoninTekk
    @RoninTekk 1 month ago

    I'm thinking of buying a 2019 for like $350-400. Is it worth it?

    • @AgenticAlex
      @AgenticAlex 1 month ago

      So the Intel generation... definite NO. Get an M1.

  • @trancepriest
    @trancepriest 1 month ago

    Llama 3.3 70b... It runs fine on my 40 core MBP M4 Max with 48GB.

    • @AgenticAlex
      @AgenticAlex 1 month ago

      40 vs 16 CPU cores 😅 how many tokens per second do you get?

  • @levoniust
    @levoniust 1 month ago

    That was fun. I found it was easier to print to PDF and then upload to GPT. Here is the prompt: Analyze the given website engagement metrics, focusing on the types of sites visited, their frequency, and overall patterns of usage. Based on this data, infer the likely profile of the user, including their profession, interests, and potentially their age range or life stage. Consider the high engagement with professional, technical, and AI-related sites like Google Docs, LinkedIn, Hugging Face, and WordLift, alongside personal productivity tools like Google Calendar and Duolingo. Suggest what kind of work the user does, their level of expertise in technology or a specific domain, and their main areas of focus. Be concise and provide a well-rounded user persona. Have fun!

  • @amitsamra1937
    @amitsamra1937 1 month ago

    0:52 Are you sure it's 90W? I thought the binned 12-core accepts 70W? That's actually one of the reasons why I want to buy the 12-core instead of the 14-core - to make sure the TDP is lower. Thanks for the review.

  • @E-B.S.9
    @E-B.S.9 1 month ago

    Which LLM is best when you compare them, and which one gives 100% accurate results?

    • @AgenticAlex
      @AgenticAlex 1 month ago

      Results always depend on the task, and 100% accuracy (again, depending on the task) has never been reached by any LLM (AFAIK). Structured outputs are probably your best friend at this time.

  • @NotSoLiberal
    @NotSoLiberal 1 month ago

    Wasn't there a mode to throttle performance if you don't want it to get too hot?

  • @CommentGuard717
    @CommentGuard717 1 month ago

    Hmm, I have 64GB on a 7-year-old GPU and it runs fine. But 48GB on a Mac is basically 96 on a PC, so it shouldn't matter.

    • @AgenticAlex
      @AgenticAlex 1 month ago

      48GB is 48GB - memory size doesn't magically double because it's a Mac. Theoretically, the system should be able to load a model up to 36GB (3/4 of the whole memory available). That's probably where the bottleneck lies here.

  • @xose.goncalves
    @xose.goncalves 1 month ago

    It's giving Windows vibes, hehehe.

  • @itimdesigner
    @itimdesigner 1 month ago

    So should I buy the iMac M4, Mac mini M4, or MacBook Pro M4 to play some Roblox and Minecraft and do some video editing? - Tim :)

    • @nniklask
      @nniklask 1 month ago

      Depends on what you want as a form factor. Performance should be the same on the same configs.

  • @axiomaticclarity324
    @axiomaticclarity324 1 month ago

    The model has to be entirely in real memory.

  • @ajsphinx
    @ajsphinx 1 month ago

    Can't we just auto-scan the DNS records?

    • @AgenticAlex
      @AgenticAlex 1 month ago

      You definitely can - the goal here is to start from scratch if you don't have an existing website.

  • @avroman100
    @avroman100 1 month ago

    Got one. I don't need earphones even at full load (compared to Windows, where you need them just from opening a browser).

  • @eFFeFab
    @eFFeFab 1 month ago

    My registrar also asks for the IP address of every nameserver, so I cannot proceed :/ Can you help me in some way? Thank you in advance.

  • @Lewehot
    @Lewehot 1 month ago

    11:30 Model results - you need an MBP Max with at least 64GB of RAM to run Llama 3.1 and bigger models.

  • @beace4436
    @beace4436 1 month ago

    This is a cool video. I'm looking to buy a MacBook for software development but I'm unsure what spec to get. I want the fans to turn on as little as possible since I live in a dusty environment. Would you recommend the M4 24GB, M4 32GB, M4 Pro 12c 24GB, or M4 Pro 12c 48GB RAM?

    • @AgenticAlex
      @AgenticAlex 1 month ago

      M4 Pro 12c with 24GB memory is great - 48GB if you can. The benchmarks make the M4 Pro a clear winner over the M4, and you get 2 fans (there's only 1 in the base M4), so better thermal management. And since I'm sure we'll run more and more LLMs locally, you can never have too much memory.

  • @vin.k.k
    @vin.k.k 1 month ago

    Try LM Studio.

    • @AgenticAlex
      @AgenticAlex 1 month ago

      A lot faster for llama3.2 - didn't have the time to test a heavy model but will do soon!

  • @danielexposito2204
    @danielexposito2204 1 month ago

    I tried to find this application in the App Store, but I can't find it. Could it be because my computer is a MacBook Air M1, and maybe they don't give access to this application to computers that don't have a fan? (Can you answer me, please?)

    • @AgenticAlex
      @AgenticAlex 1 month ago

      AFAIK, it's only available on their website. It's still on sale now! www.tunabellysoftware.com/tgpro/index.php?fpr=d157l

  • @ExcelsiorXII
    @ExcelsiorXII 1 month ago

    Llama 3.1 is still faster than writing the 500-word story ourselves 🙂

    • @AgenticAlex
      @AgenticAlex 1 month ago

      Very true! I said unusable because I interact with LLMs a lot, but for batch operations that don't need interaction or supervision, this is fine!

  • @hankmoody7521
    @hankmoody7521 1 month ago

    I can run llama3.2-vision using Ollama on my 16GB M1 MacBook Air without any issues. Having the same feelings about Phi, Mistral 7B, and llama3.2. Hope the new Ministral 8B makes it to Ollama. PS: here are my llama3.2-vision stats using your prompt, for comparison: total duration: 1m22.165174917s · load duration: 29.836ms · prompt eval count: 133 token(s) · prompt eval duration: 2.959s · prompt eval rate: 44.95 tokens/s · eval count: 715 token(s) · eval duration: 1m19.173s · eval rate: 9.03 tokens/s
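For anyone reading these stats: ollama's reported eval rate is just eval count divided by eval duration, so the numbers in a run like the one above can be sanity-checked directly (figures here taken from that comment):

```python
# Verify ollama's --verbose eval rate: eval count / eval duration.

eval_count = 715            # tokens generated
eval_duration_s = 79.173    # 1m19.173s converted to seconds
rate = eval_count / eval_duration_s
print(f"{rate:.2f} tokens/s")  # prints "9.03 tokens/s", matching the report
```

The same arithmetic applies to the prompt eval rate (prompt eval count / prompt eval duration).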

  • @wasdq9748
    @wasdq9748 1 month ago

    I just saw you posted this! Thanks so much. I haven't watched it yet, but people asked for it and you took the time to do it. Thank you! ----- Watching now.