Pi School
Revolutionizing Retail with AI-Enhanced Stock Management
Discover how the retail industry is improving thanks to the introduction of AI into its processes by watching the pitch "Revolutionizing Retail with AI-Enhanced Stock Management" by Prakhar Rathi, Valerio Calà, and Hari Prasad, mentored by Adrian Buzatu, as they delve into transforming retail operations through cutting-edge AI technologies.
Over eight weeks, the team worked meticulously with state-of-the-art models, including Exponential Smoothing, Autoregression, Seasonal Autoregression, Prophet, Gradient Boosting, Random Forest, Convolutional Neural Networks, and Long Short-Term Memory models, to address the crucial problems of demand forecasting and inventory optimization.
This presentation showcases the culmination of their efforts, from improving algorithm accuracy to seamlessly integrating solutions into retail workflows, ultimately enabling substantial cost savings and efficiency gains. Witness how the team navigated demand-forecasting challenges across 309 products over a two-year timeline, iterating from statistical models to deep learning approaches, refining their techniques to enhance prediction accuracy, and optimizing inventory decisions for maximum cost-effectiveness.
As the retail market continues to evolve rapidly, with sectors like clothing, pharmaceuticals, and food delivery reaching staggering valuations, the need for sophisticated stock-management solutions has never been more pressing. Our video highlights our innovative approach to tackling these challenges and offers a blueprint for retailers looking to harness the power of AI for strategic advantage.
Tune in to gain valuable insights into the next frontier of retail management, where AI-driven precision meets operational excellence. Whether you're a retail business owner, a supply chain enthusiast, or intrigued by the application of AI in transforming industries, this video is for you. Stay ahead of the curve in the competitive retail landscape with AI-enhanced stock management strategies that promise not just to meet but exceed customer expectations while optimizing your bottom line.
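
For readers who want a concrete starting point, here is a minimal, illustrative demand-forecasting baseline in Python, using one of the statistical models named above (Exponential Smoothing via the statsmodels library) on synthetic data. It is only a sketch under those assumptions, not the team's actual pipeline, which has not been published.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    # Hypothetical two years of daily demand for a single product.
    idx = pd.date_range("2022-01-01", periods=730, freq="D")
    rng = np.random.default_rng(0)
    demand = pd.Series(
        50 + 10 * np.sin(2 * np.pi * idx.dayofweek / 7) + rng.normal(0, 5, len(idx)),
        index=idx,
    )

    # Hold out the last 30 days to measure forecast error.
    train, test = demand[:-30], demand[-30:]

    # Additive trend and weekly seasonality: one of the statistical baselines above.
    model = ExponentialSmoothing(
        train, trend="add", seasonal="add", seasonal_periods=7,
        initialization_method="estimated",
    ).fit()
    forecast = model.forecast(30)

    print(f"30-day mean absolute error: {(forecast - test).abs().mean():.2f}")
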
Views: 43

Videos

Mentoring a Data Science Team Building Impactful Data Products
26 views · 1 month ago
Watch this video to discover how Adrian Buzatu, a staff data scientist at Tier Mobility, transforms his research on the Higgs boson and his experience at major laboratories like Fermilab and CERN into crucial lessons for mentoring data science teams. Learn how automation and advanced data analysis are revolutionising data products through ML time-series forecasts, optimisation with constrai...
AI for Reliability of Medical Literature - Pitch Day - School of AI Session 14
25 views · 1 month ago
Pi School and Library Med are pleased to announce significant progress in enhancing the reliability of medical literature through the integration of human expertise and artificial intelligence. The efforts of our Session 14 fellows from the Pi School of AI - Kuntal Pal, Arkaprava Majumdar, and Mabel Ubong - have been central to this achievement. Their specialized knowledge in physics, NLP, ML, ...
Building Interactive Instruction Manuals with Large Language Models
96 views · 2 months ago
Dive into the world of Interactive Instruction Manuals enhanced by the power of Large Language Models (LLMs). This video explores the groundbreaking approach taken by a collaborative team to transform traditional instruction manuals into dynamic, interactive guides. Discover how AI-based information extraction and the expertise of domain professionals streamline the creation process, making man...
Pitch Day Pi School of AI Session 14
88 views · 2 months ago
Watch the transformative power of AI unfold in the recorded Pitch Day event. Witness School of AI fellows and industry leaders showcase solutions in retail, healthcare, and tech sectors with cutting-edge AI projects. From AI-enhanced stock management to advanced medical literature analysis, see how these innovations pave the way for a promising future. Don't miss this glimpse into the latest te...
Data Products and Data Driven Actionable Insights at Scale - Tech Talk with Adrian Buzatu
56 views · 3 months ago
Dive into the fascinating world of particle physics and data analysis with our Pi School tech talk, featuring Adrian Buzatu, a distinguished figure in the field. This engaging session, titled “Data Products and Data-Driven Actionable Insights at Scale,” for the fellows of the School of AI Session 14, is now available on the Pi School Official YouTube Channel for you to watch at your convenience....
Cheshire Cat AI a Production Ready AI Assistant Framework - Tech talk with Nicola Procopio
346 views · 3 months ago
Dive into the world of AI with Pi School's latest tech talk, now available for streaming on our YouTube channel: "Cheshire Cat AI, a Production-Ready AI Assistant Framework," led by Nicola Procopio. During this session, Nicola Procopio, a seasoned data scientist with a wealth of experience across industries such as Telecommunications and Healthcare, delves into the complexities of AI development...
AI Agents: Exploring the Potential of Web-Enabled LLMs
140 views · 4 months ago
Welcome to our video, where we explore the revolutionary future of LLM-Augmented Autonomous Agents (LAAs) with Manuel Del Verme, a visionary in Artificial Intelligence. In this insightful video, we delve into: The Era of LAAs: discover how these agents, powered by large language models (LLMs), are redefining how complex web-based tasks are managed. AgentBench: an innovative approach to ...
Robotics Meets Foundation Models | Tech Talk with Norman Di Palo
334 views · 4 months ago
Explore the Intersection of Robotics and AI: Norman Di Palo’s Tech Talk Now Available! Dive into the recorded session of our riveting Tech Talk featuring Norman Di Palo, an esteemed alumnus of the School of AI and a leading researcher at Imperial College London and Google DeepMind. In this talk, Norman unveils the exciting convergence of Robotics and Foundation Models, particularly Large Vision...
AI Travel Assistant: a case study of La Filanda
65 views · 6 months ago
Will you trust an AI Booking Agent? During Session 13 of the School of AI, we helped La Filanda Agriturismo, a charming agritourism destination on Lake Garda, enhance the users' booking experience by making it smoother and more efficient. 🚀 The Solution: We introduced a cutting-edge Machine Learning model that acts as an AI Booking Agent. We designed this model to emulate the expertise of human...
Building Efficient Mathematical Reasoners in the LLM Era Tech Talk with Zhenwen Liang
206 views · 6 months ago
We are pleased to share with you the recorded session featuring Zhenwen Liang, who delivered a comprehensive presentation on "Building Efficient Mathematical Reasoners in the LLM Era." Zhenwen Liang, a PhD student at the University of Notre Dame, provided valuable insights into mathematical reasoning and large language models, shedding light on innovative approaches. In the abstract, Zhenwen di...
Unraveling the Immunological Code Classic and Explainable AI Methods in Vaccine Development
75 views · 6 months ago
Discover how Machine Learning and Deep Learning have transformed vaccinology in this YouTube video. Explore classic ML methods and cutting-edge AI explainability techniques in vaccine development. Dive into the role of protein language models like ESM in understanding proteins and predicting vaccine properties. Learn about the significance of AI explainability in ensuring transparency, trust, an...
How can you make your mark in the LLMs market?
129 views · 6 months ago
Discover valuable tips for startups interested in Large Language Models. At a recent Pi School fireside chat event, Marco Trombetti, Pi Campus and Pi School co-founder, interviewed Hassan Sawaf, aiXplain founder and Former AI Director at Facebook and Amazon. How can you make your mark in the LLMs market? Hassan Sawaf shares valuable tips to unlock the potential of Large Language Models in busin...
On the geometry of Large Language Model representations - Tech Talk with Alberto Cazzaniga
473 views · 7 months ago
Abstract: Large language models are powerful architectures for the self-supervised analysis of data of various natures, ranging from protein sequences to text to images. In these models, the data representations in the hidden layers live in the same space, and the semantic structure of the dataset emerges through a sequence of functionally identical transformations between one representation and the next. We...
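
As a rough illustration of the idea in this abstract (and not code from the talk), the sketch below extracts the per-layer hidden representations of a sentence from a small pretrained model with the Hugging Face transformers library and measures how much each successive transformation moves a mean-pooled sentence vector; the choice of model and metric is an arbitrary assumption made for the example.

    import torch
    from transformers import AutoModel, AutoTokenizer

    name = "distilbert-base-uncased"  # small model chosen only for illustration
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name, output_hidden_states=True)
    model.eval()

    inputs = tokenizer("Large language models build layered representations.",
                       return_tensors="pt")
    with torch.no_grad():
        hidden_states = model(**inputs).hidden_states  # embeddings + one tensor per layer

    # Mean-pool over tokens and compare consecutive layers with cosine similarity,
    # a crude proxy for how much each transformation reshapes the representation.
    pooled = [h.mean(dim=1).squeeze(0) for h in hidden_states]
    for i in range(len(pooled) - 1):
        sim = torch.nn.functional.cosine_similarity(pooled[i], pooled[i + 1], dim=0)
        print(f"layer {i} -> layer {i + 1}: cosine similarity {sim.item():.3f}")
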
Designing efficient and modular neural networks
410 views · 7 months ago
Tech talk with Simone Scardapane, tenure-track assistant professor at La Sapienza University. As neural networks keep getting bigger and more complex, finding new ways to make them more efficient, use less power, and boost accuracy is crucial. One hot topic is making neural networks more flexible and adaptable. It often means handling choices differently, like directing tokens in a mix of exper...
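
The token-routing idea mentioned at the end of this description can be illustrated with a toy top-1 mixture-of-experts layer in PyTorch. This is a generic sketch of the concept only, not Scardapane's implementation; the layer sizes and the one-expert-per-token routing rule are assumptions made to keep the example short.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyMoE(nn.Module):
        def __init__(self, d_model: int = 64, num_experts: int = 4):
            super().__init__()
            self.gate = nn.Linear(d_model, num_experts)  # learned router
            self.experts = nn.ModuleList([
                nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                              nn.Linear(4 * d_model, d_model))
                for _ in range(num_experts)
            ])

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (num_tokens, d_model); each token is routed to exactly one expert.
            scores = F.softmax(self.gate(x), dim=-1)   # routing probabilities
            weight, expert_idx = scores.max(dim=-1)    # top-1 choice per token
            out = torch.zeros_like(x)
            for i, expert in enumerate(self.experts):
                mask = expert_idx == i
                if mask.any():
                    # Scale by the gate weight so the routing stays differentiable.
                    out[mask] = weight[mask].unsqueeze(-1) * expert(x[mask])
            return out

    tokens = torch.randn(10, 64)
    print(TinyMoE()(tokens).shape)  # torch.Size([10, 64])
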
Supply Chain Optimization: Leveraging Data and Emerging Technologies
60 views · 9 months ago
Advancing Wildfire Forecasting using Explainable AI
58 views · 9 months ago
Improving Chatbot Effectiveness using Large Language Models
74 views · 9 months ago
Sviluppare progetti di AI come una startup (Developing AI projects like a startup)
43 views · 10 months ago
Tech Talk with Eduardo Calò: at the crossroads between Logic Language and Generation
54 views · 10 months ago
Tech Talk with Chiara Mugnai, CTO and co-founder of Eoliann
107 views · 11 months ago
How to do Research for Fun and Profit
101 views · 11 months ago
Improving Translation Consistency with Large Language Models
166 views · 11 months ago
Interpreting Neural Language Models - Tech Talk 5 with Alessio Miaschi
117 views · 1 year ago
Technical Details of Diffusion-Based Generative Models - Tech Talk with Ahmet Gündüz
197 views · 1 year ago
Pi School of AI Session 11 fellows takeaways
260 views · 1 year ago
Tech Talk with Simone Di Somma, Augment data analytics with Artificial Intelligence
106 views · 1 year ago
Tech Talk with Emile Courthoud co-founder at Nebuly
153 views · 1 year ago
Pi School of AI introduction
1.3K views · 1 year ago
Generative Models: Tech Talk with Gabriele Lombardi CTO at ARGO Vision
208 views · 1 year ago

COMMENTS

  • @Cropinky
    @Cropinky 10 days ago

    very interesting of him to call deep learning a trade :)

  • @ShadowD2C
    @ShadowD2C 22 days ago

    good video, but his and the camera's placements are suboptimal

  • @ds920
    @ds920 3 months ago

    I'm based in Milan, any chance any events will take place in Milan as well? I'm working on a computational environment for agents. It's already in production to some degree.

  • @vashisthegde2169
    @vashisthegde2169 9 months ago

    Great work guys! ❤🎉🎉

  • @CharlesVanNoland
    @CharlesVanNoland 10 months ago

    I just wish he hadn't stood right in front of what he was trying to show people, but I love his passion for explaining what he's talking about.

  • @alexandrogomez5493
    @alexandrogomez5493 10 months ago

    Tarea 6 (Homework 6)

  • @ARE_YOU_SICK_OF_YT_CENSORSHIP
    @ARE_YOU_SICK_OF_YT_CENSORSHIP 11 months ago

    i notice that Yasmin Moslem is the only person without a photo, is being photographed for public display against her religious beliefs, does she also not appear in public in person, or is she simply a virtual mentor who doesn't exist in the real world?

    • @pischool6210
      @pischool6210 11 months ago

      She is a human, a researcher, but she does not want to appear. We understand it might be strange for a human in 2023, but we respect and support any choices.

  • @ARE_YOU_SICK_OF_YT_CENSORSHIP
    @ARE_YOU_SICK_OF_YT_CENSORSHIP 11 months ago

    What these systems don't do, I suppose, is recognize text from images/scans, which translators work a lot with, so accurate text extraction from the source is often required before the actual translation can be performed, unless this software is able not only to extract text but also to correct all the errors that are usually associated with OCR; and the more exotic the writing system, the more challenging the extraction task.

    • @pischool6210
      @pischool6210 11 months ago

      These aren't multimodal models as they were trained only on text data (specifically for Machine Translation), so they can't be used to extract text from images. What you're probably looking for are models trained for Document Understanding tasks, such as the one we covered last session of School of AI (ua-cam.com/video/gXSFE0TznGM/v-deo.html&ab_channel=PiSchool)

    • @ARE_YOU_SICK_OF_YT_CENSORSHIP
      @ARE_YOU_SICK_OF_YT_CENSORSHIP 11 months ago

      @@pischool6210 thank you for the link, i'll take a look
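
The two-stage workflow discussed in this thread (text extraction first, machine translation second) can be sketched roughly as follows. The sketch assumes pytesseract with a local Tesseract install and a public MarianMT checkpoint; it is not the software covered in the video, and, as the commenter notes, OCR errors and exotic scripts still need handling before translation.

    import pytesseract
    from PIL import Image
    from transformers import pipeline

    def translate_scanned_page(image_path: str) -> str:
        # Step 1: OCR. Errors introduced here propagate into the translation.
        source_text = pytesseract.image_to_string(Image.open(image_path), lang="eng")

        # Step 2: machine translation of the extracted text (English -> French here).
        # Long pages should be split into smaller chunks before translating.
        translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
        return translator(source_text, max_length=512)[0]["translation_text"]

    # Hypothetical usage; "scanned_page.png" is a placeholder file name.
    # print(translate_scanned_page("scanned_page.png"))
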

  • @someone_518
    @someone_518 1 year ago

    ChatGPT gave me link to this video)

  • @IExSet
    @IExSet 1 year ago

    Strange thing: he mentions the "attention" term before explaining what it is. What is the EXACT meaning of this Query Key Value magic??? I suspect the speakers just copy other people's thoughts mechanically, without understanding the real meaning of the operations!

  • @ahmedb2559
    @ahmedb2559 1 year ago

    Thank you !

  • @ShahNawaz-zb3bu
    @ShahNawaz-zb3bu 1 year ago

    Great description. Love from Pakistan 🇵🇰

  • @nabinchaudhary73
    @nabinchaudhary73 2 years ago

    Does the embedding get trained, or do the key, query, and value get trained? I am confused, please help.

  • @kingenking9303
    @kingenking9303 2 years ago

    the video image is too poor, you need to fix it more

  • @tenzinkaldan2007
    @tenzinkaldan2007 2 years ago

    where can i find the implementation

  • @uhmerikuhn
    @uhmerikuhn 2 years ago

    ...comes from Google - Check. ...TensorFlow T-shirt - Check. Most viewers therefore rate this lecture highly - Check. This is very hand-wavy throughout with relatively no rigor shown. There are many lectures/presentations online which actually explain the nuts and bolts and wider use cases of Attention mechanisms. Maybe the title of this video should be something else, like "Our group's success with one use case (language translation) of Attention." Frankly, the drive-by treatment of the technical details of language translation case was almost terrible and should have probably been omitted.

    • @georgemaratos1122
      @georgemaratos1122 2 years ago

      which lectures do you like that explain attention mechanisms and their wider use?

  • @natalescarpato9411
    @natalescarpato9411 2 years ago

    Tokio

  • @adrianradu1298
    @adrianradu1298 2 years ago

    7y30 8

  • @marioerrante6717
    @marioerrante6717 2 years ago

    Gg v cejtqnhmmt i io

  • @louerleseigneur4532
    @louerleseigneur4532 2 years ago

    Thanks buddy

  • @robertc6343
    @robertc6343 3 years ago

    Fantastic talk!

  • @tylersnard
    @tylersnard 3 years ago

    I love how excited he is.

  • @brandomiranda6703
    @brandomiranda6703 3 years ago

    where is the library he talks about to get the details of training the DL "right"?

  • @lmaes
    @lmaes 3 years ago

    The passion that he transmits is priceless

  • @kadamparikh8421
    @kadamparikh8421 3 years ago

    Great content in this video. Would love if you had the multi-headed devil covered! Though, great video to get the overall view..

  • @empowercode
    @empowercode 3 years ago

    Hey! I just found your channel and subscribed, love what you're doing! I appreciate how clear and detailed your explanations are as well as the depth of knowledge you have surrounding the topic! Since I run a tech education channel as well, I love to see fellow Content Creators sharing, educating, and inspiring a large global audience. I wish you the best of luck on your YouTube journey, can't wait to see you succeed! Your content really stands out and you've put so much thought into your videos! Cheers, happy holidays, and keep up the great work!

  • @harryderek521
    @harryderek521 3 years ago

    Very nice 😍💋 💝💖♥️❤️

  • @fisherroberto3570
    @fisherroberto3570 3 years ago

    Love you 💋💋😘😘❤️💯

  • @jahanzaibanwar5946
    @jahanzaibanwar5946 3 years ago

    Hi nice project. If that’s okay can you share the GitHub link?

  • @clray123
    @clray123 3 years ago

    Most I gather from this talk is that "attention" is a pretty terrible term. Something like "fuzzy lookup" or "matching" or "mapping" would have been much more descriptive, but oh well, which researcher needs to think about terminology before unleashing it on the world.

  • @sajjadayobi688
    @sajjadayobi688 3 years ago

    Transformers learned translation without language dependency O_o

  • @RobertElliotPahel-Short
    @RobertElliotPahel-Short 3 years ago

    math majors/ graduate math students skip to 15:36

  • @sudiplingthep7409
    @sudiplingthep7409 3 years ago

    In two months I am starting a farm here in Nepal; this looks useful for me as well. How can I learn more about smartcrop?

  • @vast634
    @vast634 3 years ago

    They should invent a device that can always tell the time of day when the user wants.

  • @TheAIEpiphany
    @TheAIEpiphany 3 years ago

    47:55 "We tried it on images it didn't work so well". 2020, Visual Transformer: am I a joke to you?

    • @souhamghosh8714
      @souhamghosh8714 3 years ago

      In ViT, it is clearly stated that a "small dataset" like ImageNet doesn't show promising results, but a larger dataset like JFT gives amazing results, so this may be a start, but it is far from perfection. Btw, I am not contradicting your statement. 😁 And also, JFT is not an open-source dataset (yet).

    • @TheAIEpiphany
      @TheAIEpiphany 3 years ago

      @@souhamghosh8714 True Google folks ^^

    • @souhamghosh8714
      @souhamghosh8714 3 years ago

      “Hi, I am from google, you know what i got, TPUs..more than you can imagine”😂

  • @AlexVoxel
    @AlexVoxel 3 years ago

    Interessante, grazie (Interesting, thanks)

  • @pankajtiwari12
    @pankajtiwari12 3 years ago

    great explanation !

  • @autripat
    @autripat 3 years ago

    Starting @ 15:45, in well under 2 minutes, attention explained! Only a true master can do it. Love.

    • @Scranny
      @Scranny 3 years ago

      K is a matrix representing the T previously seen words and V is the matrix representing the full dictionary of words of the target language, right? But what are K and V exactly? What values do these matrices hold? Are they learned?
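
A worked sketch of standard scaled dot-product attention (the textbook formulation, not the speaker's code) may help with the questions in this thread: Q, K and V are all linear projections of the same token representations, and it is the projection matrices, together with the token embeddings, that are learned during training; K and V are not a dictionary of the target language.

    import numpy as np

    rng = np.random.default_rng(0)
    seq_len, d_model, d_k = 5, 16, 8

    X = rng.normal(size=(seq_len, d_model))   # token representations (learned upstream)
    W_q = rng.normal(size=(d_model, d_k))     # learned projection matrices
    W_k = rng.normal(size=(d_model, d_k))
    W_v = rng.normal(size=(d_model, d_k))

    Q, K, V = X @ W_q, X @ W_k, X @ W_v

    scores = Q @ K.T / np.sqrt(d_k)                 # how much each token attends to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    output = weights @ V                            # weighted mix of value vectors

    print(weights.shape, output.shape)              # (5, 5) (5, 8)
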

  • @FranckDernoncourt
    @FranckDernoncourt 3 years ago

    Thanks for sharing! It'd be great if the video could pay more attention to the slides though.

    • @pischool6210
      @pischool6210 3 years ago

      Thank you for your comment, Franck! You can download the slides here: picampus-school.com/open-day-2017-presentations-download/

    • @FranckDernoncourt
      @FranckDernoncourt 3 years ago

      @@pischool6210 perfect, thanks!

  • @HimanshuGhadigaonkar
    @HimanshuGhadigaonkar 3 years ago

    Best explanation!!

  • @Marcos10PT
    @Marcos10PT 4 years ago

    This is the best explanation of attention I have seen so far! And I have been looking :)

    • @ksrajavel
      @ksrajavel 1 year ago

      Because he is one of the co-authors of the revolutionary paper that introduced it

  • @jayantpriyadarshi9266
    @jayantpriyadarshi9266 4 years ago

    Great talk. Something very useful.

  • @ramyaneekashyap4356
    @ramyaneekashyap4356 4 years ago

    Is there any way i could get the ppts for reference?

    • @pischool6210
      @pischool6210 4 years ago

      Hi, sure! You can download it here: picampus-school.com/open-day-2017-presentations-download/

    • @ramyaneekashyap4356
      @ramyaneekashyap4356 4 years ago

      @@pischool6210 thankyou so much!!!!

  • @kunalaneja1720
    @kunalaneja1720 4 years ago

    Hey can you please send the model to me for this project

  • @taimoor722
    @taimoor722 4 years ago

    nice

  • @GeorgeZoto
    @GeorgeZoto 4 years ago

    Great session Pi School 📚and Łukasz 😀. Here are a few key concepts I "attended" to and found interesting: "There is always another head that whatever word it is in, it looks at the (head) noun of the sentence or the (head) verb, just wants to know what are we talking about here 🤔" - Łukasz Kaiser ua-cam.com/video/rBCqOTEfxvg/v-deo.html "n^2 * d seems worse than n * d^2 (on attention vs recurrent algorithmic/operation complexity). Luckily at Google there is guy name Noam (Shazeer), he never got his bachelor but wrote most of these papers..." - Łukasz Kaiser ua-cam.com/video/rBCqOTEfxvg/v-deo.html "We trained a model to translate from English to French and vice versa, and from English to German and vice versa. Then if you give it French and ask for a German translation it will do a reasonable job. Multitask helps with deep learning tasks where you have little data." - Łukasz Kaiser ua-cam.com/video/rBCqOTEfxvg/v-deo.html

  • @shashidhar775
    @shashidhar775 4 years ago

    Can i see the code?

  • @skinnyboy996
    @skinnyboy996 4 years ago

    Your book on Go is the best

  • @23232323rdurian
    @23232323rdurian 4 years ago

    I have a Synthetic Text Generator working on simple Ngram propagation technique that handles "long term memory" very crudely by keeping a Python set of items already present in the SynTexExtension. Where 'items set' represents the Topic....globally rare Tokens that are 'featured' to appear much more frequently in the Extension than in the Training Data....thus simulating a Topic and also a semblance of "long term coherency". A Bag-of-Words excluding the Nmost frequent Tokens (maybe N=500 or =1000). In practice: everything except Stopwords and HiFreqs.... A Parasitic Context is obtained by choosing a random stretch of EuroParl Training Text, then extracting a Bag-of-Words as described above... Then STG generates Extensions favoring that Context... Works OK for short passages....a paragraph or two... but to 'progress' a narrative the Context needs to be a Moving Window with new favorites gradually replacing tired old ones... The CONVERSE problem is handled in a similar way. When a few Topicals repeat TOO OFTEN. The Extensions often end up OVER-featuring items so that their proliferation detracts from plausibility... I'm only training on 10 million Tokens of EuroParl English, so "Topic" is already naturally limited to English Eurocratese anyway... so STG typically harps on "the Commission", "the European Parliament", "the rapporteur"...

  • @josy26
    @josy26 4 years ago

    Slides?

    • @SubhamKumar-eg1pw
      @SubhamKumar-eg1pw 4 years ago

      drive.google.com/file/d/0B8BcJC1Y8XqobGNBYVpteDdFOWc/view