MLCon | Machine Learning Conference
Scalable Data Pipelines for ML: Integrating Argo Workflows and dbt | Hauke Brammer
Join Hauke Brammer at MLcon Munich 2024 as he takes you through building scalable data pipelines for machine learning. In this session, you'll learn how to integrate Argo Workflows and dbt to create robust ELT pipelines that can handle the growing demands of your ML projects. Hauke will provide a detailed overview of ELT processes, focusing on how to scale these workflows effectively for large datasets. Whether you're looking to optimize your data infrastructure or improve the resilience and efficiency of your ML applications, this session is packed with actionable insights to help you elevate your machine learning initiatives.
📌 Key Highlights:
- Master the integration of Argo Workflows and dbt for scalable data pipelines.
- Learn best practices for handling expanding datasets in ML environments.
- Practical guidance on orchestrating and scaling data jobs efficiently.
- Enhance your data infrastructure to support robust machine learning projects.
🔗 More about MLcon: mlconference.ai/
224 views
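
The ELT orchestration pattern the talk describes can be sketched in a few lines. The helper below is a hypothetical illustration, not code from the session: it only composes the argv list a workflow engine such as Argo Workflows would hand to a container running dbt; the model names and target are made up.

```python
# Hypothetical sketch (not from the session): composing the shell
# command that one orchestrated pipeline step would run. A workflow
# engine such as Argo Workflows executes each command in its own
# container and handles ordering, retries, and parallelism.
def build_dbt_command(models, target="prod", full_refresh=False):
    """Build the argv list for a `dbt run` step."""
    cmd = ["dbt", "run", "--target", target]
    if models:
        cmd += ["--select", " ".join(models)]
    if full_refresh:
        cmd.append("--full-refresh")
    return cmd

# Two steps of a made-up ELT pipeline: staging models first,
# then a fact table that is rebuilt from scratch.
staging = build_dbt_command(["stg_events", "stg_users"])
marts = build_dbt_command(["fct_sessions"], full_refresh=True)
```

Splitting the run into per-step commands like this is what lets the orchestrator scale and retry each dbt job independently.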

Videos

Practical LLM Fine Tuning For Semantic Search | Dr. Roman Grebennikov
1.1K views · 5 months ago
Join Dr. Roman Grebennikov at MLcon Munich 2024 to explore fine-tuning large language models (LLMs) for semantic search. Discover how to customize models for specific domains like medicine, law, or hardware using open-source tools like sentence-transformers and nixietune. Learn about data requirements, training bi-encoders and cross-encoders, and achieving quality improvements with a single GPU...
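
The retrieval side of the bi-encoder setup covered in the talk can be illustrated with a toy example. The vectors below are invented stand-ins for what a trained model such as one from sentence-transformers would produce; real systems use hundreds of dimensions and an approximate-nearest-neighbor index.

```python
import math

# Toy illustration (not the talk's code): a bi-encoder embeds queries
# and documents into one vector space; search is nearest-neighbor by
# cosine similarity. The 3-d vectors here are invented for the example.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

docs = {
    "doc_heart": [0.9, 0.1, 0.0],     # pretend embedding of a medical text
    "doc_contract": [0.0, 0.2, 0.9],  # pretend embedding of a legal text
}
query = [0.8, 0.2, 0.1]  # pretend embedding of "cardiac symptoms"

# Rank documents by similarity to the query.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
```

Fine-tuning moves the embeddings so that domain-specific queries and their relevant documents end up close under exactly this similarity measure.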
A Cabinet of Deep Learning Curiosities | Christoph Henkelmann
205 views · 6 months ago
Join Christoph Henkelmann as he takes you on a journey of deep learning techniques at MLCon Munich 2024. While most deep learning tasks today are best tackled using pretrained models and established architectures, there are always those quirky, obscure tricks that can make a significant difference. In this fascinating session, Christoph shares a collection of unusual and little-known methods th...
Beyond 2026 - The role of AI generated Content in ML | Eric Dauenhauer [MLCon Sessionrecording]
58 views · 8 months ago
Data and content are central to Machine Learning (ML), but with the rise of AI-driven tools like ChatGPT, the landscape is rapidly evolving. Join us as we delve into the implications of generative AI models on bias in future ML systems. As experts project that up to 90% of online content could be AI-generated by 2026, it's crucial to address the impact on bias, especially in light of upcoming r...
AI is MagIA: Explaining AI Concepts with Card Tricks | Juantomás García [MLCon Sessionrecording]
45 views · 8 months ago
Step into the enchanting world where AI meets the art of card magic! Join us for a captivating session where we unravel the mysteries of AI through the lens of captivating card tricks. "AI is MagIA" is not just a talk; it's an immersive journey that bridges the realms of cutting-edge AI science and timeless card magic. In this unique presentation, we delve into six fundamental AI concepts, demy...
The Untapped Potential of Audio ML Projects: Exploring Vocal Health Analysis | Natalia Ziemba-Jankowska
93 views · 10 months ago
Revisiting the groundbreaking moments from the recent MLCon: 🚀 Join Natalia Ziemba-Jankowska for a concise yet enlightening journey through the intersection of technology and the realm of sound. Whether you're a seasoned professional or a curious enthusiast, this session promises to offer valuable insights into the untapped potential of audio in the realm of machine learning. Let's reimagine t...
Prompt Engineering - Best Practice for Productive Customer | Maximilian Vogel
386 views · 10 months ago
The Secrets of Effective Prompt Engineering in Machine Learning! Join Maximilian Vogel in revisiting the groundbreaking moments from the recent ML conference as he explores the dynamic landscape of Machine Learning development. The explosion of Proof of Concepts (POCs) and prototypes centered around language models has sparked a revolution, pushing us to move beyond the hype and focus on constr...
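
One building block of the production-grade prompting the talk argues for is a fixed, reviewable prompt template with per-request slots, instead of ad-hoc prompt strings. The template wording and names below are hypothetical, not Maximilian Vogel's material:

```python
# Hypothetical sketch of the pattern: a fixed prompt template that can
# be versioned and tested, filled with per-request context at runtime.
PROMPT_TEMPLATE = (
    "You are a support assistant for {product}.\n"
    "Answer ONLY from the context below; otherwise say 'I don't know.'\n"
    "Context:\n{context}\n"
    "Customer question: {question}\n"
)

def build_prompt(product, context, question):
    """Fill every slot of the template for one request."""
    return PROMPT_TEMPLATE.format(product=product, context=context,
                                  question=question)

prompt = build_prompt("AcmeCloud", "Plans: Basic, Pro.", "What plans exist?")
```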
MLCon Munich 2024 | June 25 - 28, 2024
6K views · 11 months ago
The Event for Machine Learning Technologies & Innovations UNDERSTAND YOUR DATA Making sense of your data is key for every modern predictive business. At ML Conference you will develop a deep understanding of your data, as well as learn about the latest tools and technologies. OPTIMIZE YOUR MODELS Learn from leading experts about which methods, libraries, services, models, and algorithms to use....
Why Security Is Important in ML and How To Secure Your ML-based Solutions | Rachid Kherrazi
471 views · 2 years ago
When enterprises adopt new technologies, security is often on the back burner. It can seem more important to get new products or services to customers and internal users as quickly as possible and at the lowest cost. AI and ML offer all the same opportunities for vulnerabilities and misconfigurations as earlier technological advances, but they also have unique risks. As enterprises embark on ma...
Using A.I to make recommendations for career progression | Dorra Nouira
253 views · 2 years ago
Despite the wealth of information available to job seekers, choosing careers and transitioning between jobs remain somewhat random. With thousands of job titles available, it is difficult for candidates to know what each role entails and how well-suited they are for various positions. Our research aims to break through this complexity and identify the most fitting careers for every job. To this...
Kotlin? For Machine Learning? | Hauke Brammer
2.3K views · 2 years ago
Python is *the* language of choice when it comes to Machine Learning. Easy to learn, very readable syntax, and a huge ecosystem. Why would I bother with any other language? And why Kotlin in particular? In this talk, I’ll give you an overview of how to use Kotlin in every phase of your Machine Learning project. From data cleaning and feature extraction to deploying the model into production and...
Honey, I shrunk the TinyML | Lars Gregori
184 views · 2 years ago
You might remember the movie from the '80s when Wayne Szalinski shrunk his family. Back then, machine learning was also little. But I don't want to talk about the past; I want to talk about the future, where machine learning is getting tiny. I will show how to train a model, convert it to use TinyML on a microcontroller, and which possibilities this could open up now and in the future. What are ...
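
A core step in the model conversion the talk mentions is quantizing float32 weights to int8 so the model fits a microcontroller. The sketch below is a hand-rolled illustration of affine quantization, not the actual TensorFlow Lite converter, which also fuses operations and quantizes activations:

```python
# Hand-rolled sketch of int8 affine quantization, the kind of shrinking
# step a converter applies before a model runs on a microcontroller.
def quantize_int8(weights):
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0          # avoid a zero scale
    zero_point = -128 - round(lo / scale)      # the int that encodes 0.0
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

w = [-1.0, -0.25, 0.0, 0.5, 1.0]   # pretend float32 weights
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)        # close to w, at a quarter of the storage
```

The reconstruction error stays within about half a quantization step, which is why an 8-bit model can keep most of its accuracy while using 4x less memory than float32.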
Machine Learning Conference - The Conference for Machine Learning Innovation
699 views · 2 years ago
At the Machine Learning Conference, you will listen to inspiring talks, get insider tips and gain deep insights from our internationally known speakers and industry experts. On top of that, you will have the chance to practice your learnings with sessions, keynotes and power workshops on a variety of topics. Let’s shape the future of ML together!
ML Conference Speaker - Christoph Henkelmann
163 views · 3 years ago
Christoph Henkelmann is a renowned speaker at the ML Conference. He has been a pioneer in the field of machine learning and is well known for his work. At the ML Conference, he continues to lead from the front and present various machine learning topics. He holds a degree in Computer Science from the University of Bonn. He currently works at DIVISIO, an AI company from Cologne, where he is CTO and c...
From NASA to Hollywood - using predictive analytics and machine learning
297 views · 3 years ago
An Introduction to Natural Language Generation
4K views · 3 years ago
MLOps, Automated Machine Learning Made Easy
350 views · 3 years ago
Language: The next stronghold to be taken by AI
80 views · 3 years ago
Language: The next stronghold to be taken by AI
323 views · 3 years ago
From NASA to Hollywood: using predictive analytics and machine learning
163 views · 3 years ago
Everything You Need to Know about Security Issues in Today’s ML Systems | David Glavas
657 views · 4 years ago
First Steps to Interpretable Machine Learning | Natalie Beyer
326 views · 4 years ago
Next generation AI: Emotional Artificial Intelligence based on audio | Dagmar Schuller
1.1K views · 4 years ago
Building emotionally intelligent Machines | Srividya Rajamani
206 views · 4 years ago
Explainable AI with Machine Teaching | Murat Vurucu
198 views · 4 years ago
From Paper to Product - How we implemented BERT | Christoph Henkelmann
1.5K views · 4 years ago
Automatic Image Cropping for Online Classifieds | Alexey Grigorev
736 views · 4 years ago
Continuous Delivery for Machine Learning Applications with Open Source Tools
733 views · 4 years ago
Machine Learning Conference - The Conference for Machine Learning Innovation
338 views · 4 years ago
Deep Learning: the final Frontier for Time Series Analysis?
10K views · 4 years ago

COMMENTS

  • @di380 · 11 days ago

    Two takeaways from this: first, humans seem to have a very good understanding of the game of chess and are able to competitively handcraft evaluation functions that play as well as reinforcement learning engines. Second, two CNNs using Monte Carlo could evolve into completely different solutions using the same exact implementation 😮

  • @zhangwei2671 · 4 months ago

    Kotlin Notebook pls.

  • @YuriKhrustalev · 4 months ago

    Roman, nice to see you using vector db as well, greetings from Canada

  • @beattoedtli1040 · 4 months ago

    Nice talk, but in 2024, Stockfish is still better than alpha zero. Why?

  • @SebastianBeresniewicz · 4 months ago

    Very well presented and great content! I struggle with understanding some accents and maintaining focus with many presenters, especially if they are not great communicators but Dr. Grebennikov is very articulate and easy to follow. Thank you!

  • @jonabosman4524 · 5 months ago

    Favourite talk of the conference!

  • @carlosfreire8249 · 5 months ago

    Very helpful content, thanks for sharing.

  • @bisdakaraokeatbp · 5 months ago

    It's really annoying when you desperately need help and chatbots just keep redirecting you over and over. They don't understand context, so companies using these are definitely a turn-off.

  • @gyanantaran · 5 months ago

    This was comprehensible, quite insightful too, thanks for sharing.

  • @urimtefiki226 · 6 months ago

    I play and I do not think, just repeat the same things wasting my time while waiting for the bullshiter since 2016

  • @primingdotdev · 8 months ago

    Great talk. A reasonable approach.

  • @dipanshukumar5504 · 9 months ago

    You made an awesome video, but nobody actually cares about ML in Kotlin, because most of those who want to learn ML choose Python over Kotlin.

  • @danruth1089 · 10 months ago

    Thank you, but I disagree with that rook usage.

    • @michaelmassaro4375 · 8 months ago

      That rook move was the best move; it's the only move to draw the game, otherwise there is no stopping the queen from checkmating. You can try battling rook vs. queen, but that's a losing effort.

  • @berndmayer3984 · 1 year ago

    The best investigation yielded approx. 10^42 positions, and that is what counts, not the rough estimate of 10^120 possible games.

  • @XiaoshuoYe · 1 year ago

    This is a very very nice talk, thank you!

  • @wmchanakakasun · 1 year ago

    awesome!

  • @plunderersparadise · 1 year ago

    He sounds like he has no idea what he is talking about lol. Just the speech issue I think. Sorry for hate.

    • @michaelmassaro4375 · 8 months ago

      He's not doing a good job of breaking down the mechanics, in my view, probably because of my own ineptness, but I was hoping for simpler terms and mechanisms to explain how the engines function.

  • @philj9594 · 1 year ago

    Just started learning chess and I know only a little about computer science/programming but this was wonderful to gain a better understanding of what chess engines are actually doing under the hood when I use them and also a better understanding of their limitations. I've noticed many people talk about people over-relying on engines so I figured it would be a good use of my time to gain a deeper understanding of what a chess engine even is if I'm going to be using them regularly. Also, it's just interesting and fun to learn! Thanks for the amazing lecture. :)

    • @Magnulus76 · 1 year ago

      Yes, it's possible to over-rely upon computer chess. Stockfish and Leela are powerful engines but they can have problems in their own understanding (particularly with chess concepts, such as endgame fortresses, something that still baffles engines out there). They also don't always produce data that is particularly relevant to learning chess, in particular Stockfish's "thinking" is very alien and sometimes difficult to learn from.

    • @michaelmassaro4375 · 8 months ago

      So engines can be used for studying lines etc. or analyzing a game that was played, but it seems many players use them to cheat online.

  • @christrifinopoulos8639 · 1 year ago

    About the Stockfish evaluation function: is it completely prewritten, or are there some (handwritten) parameters that can be optimised through learning?
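
For context on the question: classical Stockfish combined many handcrafted evaluation terms whose weights were tuned empirically, and since Stockfish 12 the default evaluation is a small neural network (NNUE). A toy material-count term, not Stockfish's actual function, looks like this:

```python
# Toy handcrafted evaluation term: material count with tunable piece
# weights (centipawns). Classical engines summed many such terms
# (mobility, king safety, pawn structure, ...); this illustration is
# not Stockfish's actual function.
PIECE_VALUES = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900}

def material_eval(pieces, values=PIECE_VALUES):
    """Score from White's view; uppercase = White, lowercase = Black."""
    score = 0
    for piece in pieces:
        v = values.get(piece.upper(), 0)   # kings are not counted
        score += v if piece.isupper() else -v
    return score

# White: queen, rook, knight. Black: queen, rook. White is a knight up.
advantage = material_eval("QRNqr")
```

The `values` table is exactly the kind of handwritten parameter set that can be optimised by automated tuning rather than learned end to end.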

  • @desertplayz3955 · 1 year ago

    I wanna see stockfish pull a Jerome opening now

    • @A_Swarm_of_Waspcrabs · 1 year ago

      There's a YouTuber, Joe Kempsey, that forced Stockfish 14 to play the Jerome Gambit against MagnusApp

  • @nedafiroz514 · 1 year ago

    Fantastic talk

  • @ruffianeo3418 · 2 years ago

    There is one point that is usually never mentioned. I will try to explain this (rather valid) question below, hoping someone else will explain why it is not a concern.

    Neural networks (deep or otherwise) act as function estimators. Here, it is the value function F(position) -> Value. As was pointed out early in the talk, this must be an approximation, because it would be cheating the universe if it managed to be exact given the huge number of possible positions (it would store more information than there are atoms in the universe). So an assumption is being made, and that is what is usually not elaborated: positions never seen before by the network still yield a value, and the means of doing that is some form of interpolation. But for this to work, you assume a smooth value function (however high-dimensional it is); you assume V(P + Delta) = Value + Delta' for small enough deltas. So the value function for chess has to be smooth-ish. But where did anyone ever prove that this is the case?

    Here is a simple example of the difference I am trying to point out:

    f1 : Float -> Float
    f1 x = x * x

    If you sample f1 at some points, you can interpolate (with some errors) values between the samples. So you train the network for, say, x in [1, 3, 5, 7, ...], and when the trained network is applied to values in [2, 4, 6], you get some roughly useful value (hopefully). Why? Because the function the network approximates is smooth. Here is another function, not smooth:

    f2 : Float -> Float
    f2 x = random x

    Training a network on the x in [1, 3, 5, 7] cases does not yield a network which gives good estimates for the even x values. Why? Because that function is not smooth (unless you got lucky with your random numbers).

    So, which of the above, f1 or f2, is more akin to a chess value function V(position) -> Value? Who has shown that chess is f1-ish?

    • @marcotroster8247 · 1 year ago

      You don't just learn a value function but also a strategy to weigh the trajectories sampled during training, so you can concentrate on good moves and their follow-up moves to distill critical positions out of the game tree. It's quite clever, actually. The strategy pi provides a probability distribution indicating how likely each move is to be picked by the agent. When sampling a trajectory, you pick moves according to the distribution (stochastically), so the training experiences are just an empirical sample of the real game tree. Then you fit the distribution to explore good trajectories more intensely by increasing their probabilities, and vice versa. (Have a look at policy gradient / actor-critic techniques if you're interested.) So, to answer your question about smooth functions: you're usually only guaranteed to converge towards a local minimum of your estimator's error term. It's an empirical process, not an analytical one, so you cannot expect that from AI anyway. After all, you pick moves by sampling from a random distribution to model the intuition of "this move looks good" 😉

    • @congchuatocmay4837 · 1 year ago

      @marcotroster8247 There are a lot of ways to go in higher-dimensional space. If you get blocked one way, there are still 999,999 ways to go, for example, or if you get blocked 1000 ways there are still 999,000 ways to go. And that is how these artificial neural networks can be trained at all. They completely turn the tables on the 'curse of dimensionality.'
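
The smooth-versus-noisy distinction raised at the top of this thread can be checked numerically. In this sketch, linear interpolation stands in for the network's generalization between training samples, and a made-up noise table stands in for the thread's f2:

```python
# Toy reproduction of the smoothness point: sample a function at odd x,
# linearly interpolate at even x, and measure the error. The smooth
# f1(x) = x^2 interpolates well; a pre-generated "noise" table does not.
def interp(x, xs, ys):
    """Piecewise-linear interpolation between sample points."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside sampled range")

def f1(v):
    return float(v * v)

xs = [1, 3, 5, 7, 9]
noise = {1: 37.0, 2: -81.0, 3: 12.0, 4: 55.0, 5: -64.0,
         6: 3.0, 7: 90.0, 8: -22.0, 9: -5.0}

err_smooth = max(abs(interp(x, xs, [f1(v) for v in xs]) - f1(x))
                 for x in [2, 4, 6, 8])
err_noisy = max(abs(interp(x, xs, [noise[v] for v in xs]) - noise[x])
                for x in [2, 4, 6, 8])
# Interpolation only helps when nearby inputs have nearby outputs.
```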

  • @mohit6517 · 2 years ago

    can we get the code?

  • @allorgansnobody · 2 years ago

    Wow just 4 minutes in and this is an excellent explanation. Just knowing whether or not stockfish had these "handcrafted" elements is so important to understanding how it works.

    • @themanwhoknewtoomuch6667 · 2 months ago

      Also didn't know Stockfish is classical. We tend to take engines as gospel.

  • @mohamedyasser2068 · 2 years ago

    Attending such a lecture is a dream for me; I can't believe that most of them don't play chess!!

    • @michaelmassaro4375 · 8 months ago

      I play chess. I'm subscribed to a YouTuber who shows many Stockfish games, Jozarovs Chess. I figured I'd take a look to see how the engines actually work.

  • @hrsger3760 · 2 years ago

    Can someone give me some good reference material or guides for Time Series Analysis using Deep Learning?

  • @kevingallegos9466 · 2 years ago

    Please what is the song at the beginning of the video! I've heard it before and now I want to listen to it! Thankyou!

  • @sunnysunnybay · 2 years ago

    Without analysing like a chess engine, I can see it's actually better for Black. Count 9 pieces around the king; both queens are at the 5th rank from the king, so they are not included, but Black has a rook while no rook is near the White king, and the pawn structure has one shape out for them too, against 3 moves for Black and a good defense around them with both pawns and major pieces. Black also has one pawn on the 5th rank in front of this weak king, while it's one pawn on the H, G, and E files for White.

  • @AnthonyRonaldBrown · 2 years ago

    Stockfish 15 NNUE Plays ? The A.R.B Chess System ua-cam.com/video/NKK1WbinfNk/v-deo.html Stockfish 15 NNUE Plays The (A.R.B.C.S) - Kings & Pawns Game - A.R.B :) ua-cam.com/video/aFNvSLIzdDU/v-deo.html

  • @kleemc · 2 years ago

    Great presentation. I deal with a lot of time series using deep learning. This lecture gave me some ideas to test.

  • @uprobo4670 · 2 years ago

    I liked his take on GPT ... 1000% accurate and respectable answer ...

  • @trontonmogok · 2 years ago

    thank you for mentioning generative autoencoders

  • @Frost_Byte_Tech · 2 years ago

    It's because of content like this that I'll never get bored of trying to solve complex problems, really insightful and thought provoking 💫

  • @ME0WMERE · 2 years ago

    10:45 as someone who is actually making a chess engine: Haha no.

    • @zeldasama · 2 years ago

      The disconnect from professors to students. Lmao

    • @samreenrehman6643 · 2 years ago

      They probably just suck at chess

    • @michaelmassaro4375 · 8 months ago

      @samreenrehman6643 They might suck at chess, but then again they probably have a greater understanding of the elements the man is speaking on.

  • @andrescolon · 3 years ago

    Great talk on NLG! Simple, comprehensive and to the point. Thank you for doing this.

  • @kingshukbanerjee748 · 3 years ago

    Awesome - excellent treatment - use-case by use-case

  • @vladimirtchuiev2218 · 3 years ago

    I don't understand why you need the value function; if you have probabilities over possible moves, you will always select the argmax of the probability vectors during deployment... Is it for victory/defeat flags or something like that? Also, after each iteration of the MCTS, is the network trained until convergence, or do you go over the self-played game only once?

    • @fisheatsyourhead · 2 years ago

      For timed games, is a value function not faster?

    • @vladimirtchuiev2218 · 2 years ago

      @fisheatsyourhead After some months of digging: yes, it's faster, because you usually don't have the time to go to the end of the game, and instead you consider the values of the leaf nodes.

    • @amanbansll · 1 year ago

      I think there is another reason: the task of learning the policy vector alone doesn't teach the model whether the current position is good or bad; it only learns what's the best thing to do in the situation. While this is enough to play, augmenting the learning process by adding another objective (multi-task learning style, because the model is shared between both objectives, only the head is different) helps the model learn better. Just my thoughts though, feel free to correct me if I'm wrong.
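
The point made in these replies, that a value function lets search stop at a depth limit and score the leaf instead of playing each line out to the end, can be sketched on a made-up toy tree (the positions and values below are invented):

```python
# Toy sketch of why a value head helps timed search: negamax stops at a
# depth limit and scores the leaf with the value function instead of
# rolling the game out to the end. Tree shape and values are invented.
TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
VALUE = {"a": 0.2, "b": -0.1, "a1": -0.6, "a2": 0.3, "b1": 0.1, "b2": 0.4}

def negamax(node, depth):
    children = TREE.get(node, [])
    if depth == 0 or not children:
        return VALUE[node]        # value head replaces a full rollout
    return max(-negamax(child, depth - 1) for child in children)

# Pick the root move whose resulting position scores best for us.
best = max(TREE["root"], key=lambda c: -negamax(c, 1))
```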

  • @misterfisk7402 · 3 years ago

    Thank you for uploading this. It is very informative.

  • @virtualvoyagers429 · 3 years ago

    wow this is just amazing

  • @dominican5683 · 3 years ago

    I hate chatbots, I miss the good ole days when you could simply press 0 and talk to a human who could fix your problems easy peasy

  • @avlavas · 3 years ago

    Intel Core i7 11700K, Asus Z590 motherboard, 32 GB DDR4 RAM, 1 TB SSD, GTX 1060 3GB. Hi, this is my computer and I use SF14, plus I have 100 GB of chess moves, but in a way I didn't get 100% out of it. Can you help me? Ty

  • @nielspaulin2647 · 3 years ago

    EXCELLENT TEACHING. I was a university teacher myself in the past!

  • @ahmadmaroofkarimi9125 · 3 years ago

    Great talk!

  • @kyokushinfighter78 · 3 years ago

    Interesting talk. I do MehOps nowadays... I don't bloody care about my stupid management..

  • @levelerzero1214 · 3 years ago

    Lots of preparation with TensorFlow to create a project, but it's the real deal. I hope some day I find the time to dig into this.

  • @geoffreyanderson4719 · 3 years ago

    Mr Henkelmann (DIVISIO) has supplied us with an excellent and informative video here. Thanks buddy, I hope you make more, and good luck with your work there! It is such a good video because it's clear and brief and full of practical info I did not find elsewhere.

  • @YourMakingMeNervous · 3 years ago

    This is still by far the best lecture I've seen on the topic so far

  • @simovihinen875 · 3 years ago

    This is very interesting... just finished the game. The accent is very strong though, and I'm kind of struggling to understand the speaker. Only worth watching for actual coders I think.

  • @saydtg78ashd · 3 years ago

    They should show the slide fullscreen so we don't have to zoom our eyes. We don't need the footage of the presenter speaking.

    • @ME0WMERE · 2 years ago

      they did? (Or very close to fullscreen anyway)

    • @your_average_joe5781 · 2 years ago

      Footage is a term used for movie film. No film was used here so... No 'footage'👍

  • @kingsgambit · 3 years ago

    Very interesting contribution! However, the Windows executable files that are linked on the GitHub site are down (404 error). Could you check that?