“Wait, this Agent can Scrape ANYTHING?!” - Build universal web scraping agent

  • Published 25 Nov 2024

COMMENTS • 113

  • @AIJasonZ
    @AIJasonZ  6 months ago +15

    If you are interested in the universal web scraper I'm building, please leave your email on this waiting list: forms.gle/8xaWBBfR9EL5w8jr6

    • @teegees
      @teegees 6 months ago

      Can you pass credentials along with the scraper in a secure manner? For example, I want to scrape NYTimes, but with my NYTimes account.

    • @24-7gpts
      @24-7gpts 6 months ago

      @@teegees I don't think that's probable because of privacy and security

    • @ozxbt
      @ozxbt 5 months ago +4

      I need to bypass Cloudflare etc.

    • @raunaqss
      @raunaqss 5 months ago

      @@ozxbt I have a solution for this; reply and we could get in touch

  • @Joe-bp5mo
    @Joe-bp5mo 6 months ago +34

    It's interesting how much performance gain you got from clean markdown data like Firecrawl's. Sometimes you don't need much stronger reasoning; you just need to give the agent better tools (see the sketch after this thread).

    • @nigeldogg
      @nigeldogg 6 months ago +3

      All you need are tools

    • @djpete2009
      @djpete2009 6 months ago +1

      @@nigeldogg Love it!
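
      A minimal sketch of the "clean markdown in, better agent out" idea discussed in this thread, using requests and html2text as a free stand-in for a hosted converter like Firecrawl (the library choice is an assumption, not necessarily what the video uses):

      import requests
      import html2text

      def page_to_markdown(url: str) -> str:
          """Fetch a page and convert its HTML to markdown an LLM can digest."""
          raw_html = requests.get(url, timeout=30).text
          converter = html2text.HTML2Text()
          converter.ignore_images = True   # drop image noise the agent does not need
          converter.ignore_links = False   # keep links so the agent can follow them
          return converter.handle(raw_html)

      if __name__ == "__main__":
          print(page_to_markdown("https://example.com")[:500])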

  • @agenticmark
    @agenticmark 6 months ago +27

    I am already doing this. It's the same way I trained models to play video games: take a screenshot, convert it to greyscale, but instead of feeding that into a CNN, I pipe it into an agent I built that has mouse and keyboard tools instead of the typical Selenium/headless tools (rough sketch of the loop after this thread). It works pretty damn well, although some models will refuse captchas outright.

    • @Hshjshshjsj72727
      @Hshjshshjsj72727 6 months ago

      Captchas? Maybe one of those “uncensored” LLMs

    • @Plash14
      @Plash14 6 months ago

      How does your mouse know where to click?

    • @OldManShoutsAtClouds
      @OldManShoutsAtClouds 5 months ago

      @@Plash14 A grid map, I assume.
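
      A rough sketch of the screenshot -> greyscale -> agent loop described above. pyautogui supplies the screenshot, mouse, and keyboard tools; decide_next_action is a placeholder for whatever vision model the commenter actually calls, so this is an assumption about the shape of the loop, not their code:

      import io
      import pyautogui

      def decide_next_action(grey_png_bytes: bytes) -> dict:
          # Placeholder: send the image to a vision LLM and parse its reply,
          # e.g. {"action": "click", "x": 412, "y": 230} or {"action": "type", "text": "hello"}.
          raise NotImplementedError

      def run_step() -> None:
          shot = pyautogui.screenshot()   # PIL Image of the current screen
          grey = shot.convert("L")        # greyscale, like the old CNN preprocessing
          buf = io.BytesIO()
          grey.save(buf, format="PNG")
          action = decide_next_action(buf.getvalue())
          if action["action"] == "click":
              pyautogui.click(action["x"], action["y"])
          elif action["action"] == "type":
              pyautogui.typewrite(action["text"], interval=0.05)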

  • @Chris_Faraday
    @Chris_Faraday 4 months ago +1

    Really love his accent and voice, very soothing and clear

  • @jasonfinance
    @jasonfinance 6 months ago +14

    Gonna try out the 2 examples soon, and please, please launch the universal web scraping agent, I will pay you for that in a heartbeat!

    • @ianmoore5502
      @ianmoore5502 6 months ago +6

      Pls Jason AI
      - Jason Finance

  • @googleyoutubechannel8554
    @googleyoutubechannel8554 6 months ago +97

    You talked about 'universal scrapers', then you used a bunch of expensive services to create a very vanilla, hyper-specific scraper that doesn't require LLMs at all... hmm...

    • @نشامي
      @نشامي 6 months ago +11

      It's just stupid; it's all about them using these services and putting in the affiliate link instead of finding truly budget-friendly alternatives. I can build the same thing with the public API of an LLM service. It will take hours, but in the end I will never need to waste my time again. You can even make the LLM find the names of the classes and IDs you want to scrape, have it create the code, and run it automatically.

    • @colecrouch4389
      @colecrouch4389 6 months ago +3

      Yeah, I believe this commenter and I just unsubbed. What’s with the web scraping grift lately?

    • @kilianlindberg
      @kilianlindberg 6 months ago +3

      9:43 lol

    • @rowkiing
      @rowkiing 6 months ago +4

      I made some videos on my LinkedIn, building in public something similar: a web scraper that summarizes a website and writes an outreach message based on that. Everything free, as a Chrome extension; you just need a good computer to run Llama locally.

    • @pizzaiq
      @pizzaiq 6 months ago +3

      Everything he did can be done for free with the Python libraries he showed. He also explained the issues with scraping very thoroughly and accurately, demonstrated the solution quite clearly, and then explained the use of agents and LLMs in this context. I really don't understand what you think you just watched.

  • @Jim-ey3ry
    @Jim-ey3ry 6 months ago +9

    Holy shit, that universal ecommerce scraping agent at the end is sick, thanks for sharing that framework!!

  • @nestpasunepipe1173
    @nestpasunepipe1173 5 months ago

    Dear Jason, I am a real amateur with coding, so I don't have a clue about so many of the topics I try to execute. I have come across some of your interesting videos while trying, but failed miserably on most of them. Today I just came for the thumbnail and am rolling up my sleeves to implement this masterpiece. Thank you so much & peace from 🇹🇷

  • @amandamate9117
    @amandamate9117 6 months ago +7

    Perplexity should use this crawler, since their models are hallucinating reference URLs LOL

  • @AllenGodswill-im3op
    @AllenGodswill-im3op 6 months ago +2

    With all these expensive tools, I think it would be best to build with Playwright (see the sketch after this thread).
    Though it may take weeks or months, it will be cost-effective.

    • @helix8847
      @helix8847 6 months ago +1

      The issue with just Playwright is that it will be detected as a bot.

    • @AllenGodswill-im3op
      @AllenGodswill-im3op 6 months ago

      @@helix8847 Do you know of any better alternative?
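
      A minimal sketch of the "just build it with Playwright" approach from this thread. The URL is a placeholder, and this does nothing about bot detection, which is exactly the limitation raised above:

      from playwright.sync_api import sync_playwright

      def fetch_rendered_html(url: str) -> str:
          with sync_playwright() as p:
              browser = p.chromium.launch(headless=True)
              page = browser.new_page()
              page.goto(url, wait_until="networkidle")  # wait for JS-rendered content
              rendered = page.content()
              browser.close()
          return rendered

      if __name__ == "__main__":
          print(fetch_rendered_html("https://example.com")[:500])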

  • @elon-randgul
    @elon-randgul 5 months ago

    I have recently been thinking about this idea too. Many thanks for sharing your results!!

  • @MechanicumMinds
    @MechanicumMinds 6 months ago +1

    I never knew web scraping was so hard. I mean, I've been trying to scrape together a decent Instagram following for years, but I guess that's not what they mean by web scraping.
    Anyway, who knew websites were like the cool kids at school, only loading their content when you scroll into their 'cool zone' and making you jump through hoops to get to the good stuff.

  • @AryaArsh
    @AryaArsh 6 months ago +28

    _Advertisements ✅️ Knowledge ❌️_

    • @nonstopper
      @nonstopper 6 months ago +3

      Average AI Jason video

    • @rajchinagundi7498
      @rajchinagundi7498 6 months ago +4

      @@nonstopper True, this guy has stopped creating valuable content

    • @helix8847
      @helix8847 6 months ago

      Sadly it does feel like that now. Nearly everything he shows now costs money, while there are free alternatives to most of it.

    • @SamuelJunghenn
      @SamuelJunghenn 6 months ago +1

      And all the trolls come out... never having created a piece of value in their lives for anyone else for free, but they rag on content producers who dedicate a lot of time to bringing value to others. Thumbs up, guys, keep your valueless contributions coming; you’re real heroes here.

    • @djpete2009
      @djpete2009 6 months ago

  • @danielcave9606
    @danielcave9606 6 months ago +1

    The cost per request for this must be through the roof!

    • @pizzaiq
      @pizzaiq 6 months ago

      Not if you run Llama on Ollama on your own server or local machine, which is doable. Hopefully this cost soon goes down further for the services we can't host ourselves.

    • @danielcave9606
      @danielcave9606 6 months ago +1

      I mean in comparison to other, more specialised ML models currently used in industry, where hundreds of millions to billions of requests are being made and cost per request really matters.
      What LLMs like this CAN give you is speed to data, which is great for a subset of projects: data from any site while eliminating the need to write selectors and extraction code, but at the expense of a high cost per request.
      But again, we have ML that can deliver that at scale at a fraction of the cost, and at much higher accuracy.
      In a world where simply adding a headless browser to access HTML can 30x the cost per request and kill a project, adding an LLM is simply a no-go.
      I’m excited to see the future of LLMs in scraping, but it’s VERY early days, and I haven’t seen a use case where LLMs used for extracting and structuring the data are significantly faster, cheaper, or better than the existing tech.
      Where I have seen LLMs provide practical utility is in the post-extraction process, where they can be used effectively to extract data from unstructured text such as item descriptions.
      I’m excited for the future of LLMs when they become practical and the benefits can outweigh the cost in real-world applications, but for now I view them as interesting research projects pushing things forward, and as fun tools for smaller personal projects where budget is not an issue.
      I love these kinds of discussions; last year I attended and spoke at Extract Summit in Ireland, and I hope to go again this year to hear more about the latest AI use cases.
      To wrap up, I think the best use of LLMs I’ve seen is to generate XPaths and then use those inside cheap-to-run spiders/crawlers (a minimal sketch follows this comment). I’m looking forward to seeing what people come up with next.
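
      A minimal sketch of the "LLM writes the XPaths once, a cheap spider runs them forever" idea above. suggest_xpaths stands in for a one-off LLM call (an assumption about how you would wire it up); the extraction itself uses lxml and needs no LLM at all:

      import requests
      from lxml import html

      def suggest_xpaths(sample_html: str, fields: list[str]) -> dict[str, str]:
          # Placeholder: prompt an LLM with the sample HTML and field names and have it
          # return e.g. {"title": "//h1/text()", "price": "//span[@class='price']/text()"}.
          raise NotImplementedError

      def extract(url: str, xpaths: dict[str, str]) -> dict[str, list]:
          # Runs the previously generated XPaths against a freshly fetched page.
          tree = html.fromstring(requests.get(url, timeout=30).content)
          return {field: tree.xpath(xp) for field, xp in xpaths.items()}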

  • @damionmurray8244
    @damionmurray8244 6 months ago +3

    We are in a world where data is the most sought-after commodity, and AI is going to make accessing information trivial. I wonder how Big Business will respond. I suspect they'll start pushing for laws to criminalize web scraping in the not-too-distant future. It will be interesting to see how this all plays out in the years to come.

    • @pizzaiq
      @pizzaiq 6 months ago +1

      They would never win with that kind of law. If you show data publicly, it's there for the picking. If AI can have vision and mimic a human user, it's game over for hiding data.

    • @il35215
      @il35215 3 months ago

      Scraping is already illegal in many countries, and if you try to create a business around that data they will sue you instantly.

  • @justafreak15able
    @justafreak15able 6 months ago

    The cost of building this is so much higher than creating a website-specific scraper and maintaining it.

  • @tkp2843
    @tkp2843 6 months ago +2

    Fire video🔥🔥🔥

  • @PromptEngineer_ChromeExtension
    @PromptEngineer_ChromeExtension 4 months ago

    Nice, thanks for that

  • @bernardthongvanh5613
    @bernardthongvanh5613 6 months ago +1

    In movies they do all they can so the AI cannot access the internet; in real life: we need web scraping, man, give it access!

  • @syberkitten1
    @syberkitten1 6 months ago +2

    I don't believe it's possible to create a universal scraping solution that would be efficient in many edge cases. A custom solution would likely be faster and cheaper, especially if you need to scale.
    I've evaluated a lot of scraping SaaS services and used everything from Selenium to headless browsers. There are so many protection mechanisms, including headers, API checks, cookies, etc., and I'm sure I haven't seen a fraction of them. Some sites even require the browser to load JS and render changes on screen.
    With AI, we can get closer to an ideal solution. For example, you could take a screenshot if necessary (if the data is graphic and not part of the HTML source) and at the same time scrape the HTML. Then, pass them together to an LLM with your question. The structured data that comes back should be whatever you need it to be.
    However, you need to run the LLM yourself. Any solution using an LLM should allow users to provide an extraction schema, which needs to be very flexible, as a prompt (a sketch of that idea follows this thread). This could be a nice service for hobbyists, but for scale, it would be too expensive. A custom implementation would probably serve better.

    • @AIJasonZ
      @AIJasonZ  5 months ago

      I agree it is not easy to build a universal one that works for every website. One path I'm exploring now is to build a good scraper for each specific website category, e.g. one scraper for all ecommerce sites, one for all company websites, one for all blogs, etc. Then you have a router that sends each job to the right scraper.

    • @HarpaAI
      @HarpaAI 5 months ago

      @@AIJasonZ Agree with the assessment. In our tests, GPT-4 is still a bottleneck: no matter how good the tools and how clean the data you give it, for a universal scraping / web automation task it often fails to pick the correct next action, goes into loops, performs redundant actions, does not abort or complete execution, etc. If you build your agent around a specific workflow where you predefine the sequence of steps to take, that's a different story. But that approach is far from universal.
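
      A sketch of the schema-as-prompt idea from this thread: cleaned page text plus a user-supplied schema goes to an LLM, structured JSON comes back. The model name and prompt wording are assumptions, not what the video or the commenters use:

      import json
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      def extract_with_schema(page_text: str, schema: dict) -> dict:
          prompt = (
              "Extract the following fields from the page content and reply with JSON only.\n"
              f"Schema: {json.dumps(schema)}\n\nPage content:\n{page_text[:20000]}"
          )
          response = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[{"role": "user", "content": prompt}],
              response_format={"type": "json_object"},
          )
          return json.loads(response.choices[0].message.content)

      # Example: extract_with_schema(markdown_text, {"title": "string", "price": "number"})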

  • @eduardoribeiro3313
    @eduardoribeiro3313 6 months ago

    Great work!! I'm currently tackling web scraping challenges, especially with certain sites where determining the delivery location or dealing with pop-ups obstructing the content poses issues. This often requires user action before the search query can proceed. What do you believe are the most effective methods or tools to overcome these hurdles? Sometimes even AgentQL struggles to resolve these issues.

  • @CordeleMinceyIII
    @CordeleMinceyIII 6 months ago +1

    How does it handle s?

  • @paulevans3060
    @paulevans3060 6 months ago +3

    Can it be used for scraping estate agents' sites to find a house to buy?

    • @il35215
      @il35215 3 months ago

      Sure, after a big reworking of the code, and in semi-automatic mode

  • @kilianlindberg
    @kilianlindberg 6 months ago +2

    10:42 i follow tutorial, build scraper with cleanmymac, nothing happen, install twice, Ubuntu 22.04 only get many index.html

  • @dannyquiroz5777
    @dannyquiroz5777 6 months ago +1

    I'm here for the thumbnail

  • @cidhighwind8590
    @cidhighwind8590 4 months ago

    I’m afraid I will code an infinite AI GPT loop and accidentally get charged thousands in the process.

  • @yashsrivastava677
    @yashsrivastava677 6 months ago +3

    I wonder if this is an advertisement video or a knowledge-sharing video... Nothing is open source.

  • @ronallan8680
    @ronallan8680 2 months ago

    You deserve a subscribe ✅

  • @maloukemallouke9735
    @maloukemallouke9735 6 months ago

    Thank you for sharing

  • @brianchow-rg2lo
    @brianchow-rg2lo 2 months ago

    Hi Jason, your second example doesn't work. AgentQL doesn't open the Amazon page.

  • @javiermarti_author
    @javiermarti_author 6 months ago

    Great work

  • @sanchaythalnerkar9736
    @sanchaythalnerkar9736 6 months ago +1

    Would it be possible for me to contribute and collaborate on this project? I’m also working on developing a universal scraper myself.

  • @pizzaiq
    @pizzaiq 6 months ago

    Good walkthrough. Now we need better hardware to run better models so we can stop paying for lobotomized AI

  • @AhmedMekallach
    @AhmedMekallach 6 months ago +1

    Is the bounding box method open source?
    I'm looking for a function that returns the X,Y coordinates of an element (a hedged sketch follows):
    def find_coordinates(instruction, screenshot):
        return (x_coordinate, y_coordinate)
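
    A hedged sketch of one way to get what the comment above asks for: send the screenshot and the instruction to a vision model and ask for pixel coordinates back. The model name, prompt, and the assumption that the model returns usable coordinates are all guesses; in practice a grid or bounding-box overlay on the screenshot usually improves accuracy:

    import base64
    import json
    from openai import OpenAI

    client = OpenAI()

    def find_coordinates(instruction: str, screenshot_path: str) -> tuple[int, int]:
        with open(screenshot_path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode()
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": f'Reply with JSON {{"x": int, "y": int}} for: {instruction}'},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }],
            response_format={"type": "json_object"},
        )
        point = json.loads(response.choices[0].message.content)
        return point["x"], point["y"]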

  • @smokedoutmotions_
    @smokedoutmotions_ 6 months ago

    Cool video

  • @productresearchgeek
    @productresearchgeek 6 months ago

    What's the event about scraping you quoted in your video? Please cite the link

  • @dipkumardhawa3513
    @dipkumardhawa3513 6 months ago

    Hi, I am a student. I want to build the same kind of thing for LinkedIn; is it possible?
    Thank you so much for sharing this knowledge ❤

  • @bobharris5093
    @bobharris5093 5 months ago

    I never can understand why you need an API for the search. Is there any tool that can just type into the Google search bar at all?

  • @gRosh08
    @gRosh08 5 months ago

    Cool.

  • @Fonsecaj89
    @Fonsecaj89 4 months ago

    Scammed by ads… these snake oil sellers are out of control

  • @rishabnandi9593
    @rishabnandi9593 6 months ago +2

    This looks sus. Selenium could do this; why do all this work if GPT-4o can generate Selenium scripts faster than you can think them up?

  • @gold-junge91
    @gold-junge91 3 months ago

    Isn't this more affiliate scraping than anything else? And what about the code you said you would provide?

  • @LaelAl-Halawani-c4l
    @LaelAl-Halawani-c4l 6 months ago

    You didn't name the title of the talk or the names of the authors or team; got to give credit where it's due... Can we get a link to the videos you're using? The source? I would like to see the whole thing.

  • @PassiveJ_1
    @PassiveJ_1 6 months ago

    Who wants to be a millionaire? Scrape LinkedIn with AI and become a ZoomInfo competitor. You're welcome.

  • @garic4
    @garic4 6 months ago

    Any TL;DR here for this nightmarishly long blob of a video?

  • @techfren
    @techfren 6 months ago +3

    first lesgoo 🔥

  • @eugenetaranov4549
    @eugenetaranov4549 5 months ago

    Curl is a protocol 😂

  • @onlineinformation5320
    @onlineinformation5320 6 months ago

    Hey, can you make a video on Multion?

  • @meers_edits6867
    @meers_edits6867 4 months ago

    What about Scrapegraph-ai?

  • @Septumsempra8818
    @Septumsempra8818 6 months ago +1

    My whole startup is based on scraping. I hope this doesn't catch up...

    • @AIJasonZ
      @AIJasonZ  5 months ago

      Hah, what does your startup do?

  • @mble
    @mble 6 months ago

    Great work, yet I am not willing to use anything that is proprietary

  • @chauhanpiyush
    @chauhanpiyush 6 months ago

    You didn't put the signup link for your universal scraper agent.

    • @AIJasonZ
      @AIJasonZ  6 months ago

      Thanks for the notes! Here is the link: forms.gle/8xaWBBfR9EL5w8jr6

  • @ShadowD2C
    @ShadowD2C 6 months ago

    Hi, I'm building a PDF QA chatbot that answers from 10 long PDFs. I've experimented with RAG, but the chunks I get from the vector DB often don't provide the correct context. What can I do to get reliable answers based on my PDFs? Will passing the entirety of the PDFs to an LLM with a large max token limit help? It doesn't seem efficient to pass the entirety of the PDFs with every question asked... I'm lost, please help.

    • @matiascoco1999
      @matiascoco1999 6 months ago

      Try using Claude models. They have huge context windows, and some of the models are pretty cheap.

    • @productresearchgeek
      @productresearchgeek 6 months ago

      1) try different-sized chunks, 2) add adjacent chunks to what the vector DB returns, 3) include section titles in the chunks (a small sketch of point 2 follows).
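
      A small sketch of point 2 above: after the vector DB returns its top hits, pull in the neighbouring chunks so the LLM sees the surrounding context. Pure-Python illustration; the chunk list and hit indices would come from your own splitter and vector store:

      def expand_with_neighbors(chunks: list[str], hit_indices: list[int], window: int = 1) -> list[str]:
          keep = set()
          for i in hit_indices:
              # include each hit plus `window` chunks on either side
              for j in range(i - window, i + window + 1):
                  if 0 <= j < len(chunks):
                      keep.add(j)
          return [chunks[j] for j in sorted(keep)]

      # Example: expand_with_neighbors(chunks, [4, 17]) returns chunks 3-5 and 16-18, in order.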

  • @yunyang6267
    @yunyang6267 6 months ago

    Why are you building a startup every week?

  • @fathin7480
    @fathin7480 6 months ago

    Did anyone manage to write the full script, or have access to it?

  • @brianWreaves
    @brianWreaves 6 months ago

    🏆

  • @uwegenosdude
    @uwegenosdude 6 months ago

    Hi Jason, thanks for your interesting video. Would it be possible to place your microphone so that we can see your lips when you are talking? For me it's easier to understand English if I can see them. Your huge mic covers so much of your face. Thanks.

  • @TheFlounderPounder
    @TheFlounderPounder 5 months ago

    Does this make money, or is it a waste of time?

  • @hernandosierra8759
    @hernandosierra8759 6 months ago

    Excellent. Thanks.

  • @JD-xm3pe
    @JD-xm3pe 6 months ago +2

    Your content is fantastic and your English is top-notch, but your accent adds some overhead to understanding. I hope that doesn't feel insulting; your vocabulary and grammar are better than most native English speakers'. So an idea... Could you look at using GPT-4o to improve elocution (not just in English) in a foreign language? It would be quite useful for many people.

  • @vitalii131
    @vitalii131 6 months ago

    It’s like pirate game but not to buy it

  • @ashishtater3363
    @ashishtater3363 6 months ago +1

    Total nonsense

    • @Phanboy
      @Phanboy 6 months ago

      Noob

  • @tinato67
    @tinato67 6 months ago

    unsubscribed

  • @DevProHub
    @DevProHub 6 months ago

    Ugly banner image

  • @nullvoid12
    @nullvoid12 5 months ago

    What a waste of time!