MLOps World: Machine Learning in Production
Canada
Joined Apr 19, 2021
Why MLOps World?
This initiative was created to help establish a clearer understanding of the best practices, methodologies, principles, and lessons learned around deploying machine learning models into production environments.
Through weekly sessions you'll have the opportunity to meet specialists and build a stronger network with practitioners sharing lessons and real-world use cases.
Join us on this open exploration as we gather to cover conference proceedings, hands-on workshops, tooling and open-source demos, career exploration, and more, here: (MLOpsWorld.com)
Making Enterprise GenAI Safe and Effective - Tools and Approaches
Speakers:
Rahm Hafiz, CTO, AutoAlign AI
Dan Adamson, Interim Chief Executive Officer and Co-Founder, AutoAlign AI
AutoAlign CTO Rahm Hafiz will show how different approaches (fine-tuning, moderation guardrails, and sidecars) can be used to deploy AI safely. Rahm will demonstrate setting up a sidecar and show how it can be used as an automated guardrail system that dynamically interacts with LLMs to keep them safe, effective, and compliant, without losing efficacy and without having to retune every time your LLM changes.
Views: 90
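To make the sidecar pattern concrete, here is a minimal sketch of a guardrail wrapper that screens prompts and responses around an arbitrary LLM call; the policy checks and function names are illustrative placeholders, not AutoAlign's API.

```python
# Minimal sketch of a guardrail "sidecar" around an LLM call.
# The policy checks below are illustrative placeholders, not AutoAlign's product API.
import re

BLOCKED_PATTERNS = [r"\b\d{3}-\d{2}-\d{4}\b"]  # e.g. US SSN-like strings

def violates_policy(text: str) -> bool:
    """Very small stand-in for real input/output moderation."""
    return any(re.search(p, text) for p in BLOCKED_PATTERNS)

def call_llm(prompt: str) -> str:
    """Placeholder for whatever model the application already uses."""
    return f"(model answer to: {prompt})"

def guarded_completion(prompt: str) -> str:
    # Check the request before it reaches the model ...
    if violates_policy(prompt):
        return "Request blocked by input guardrail."
    answer = call_llm(prompt)
    # ... and check the response before it reaches the user.
    if violates_policy(answer):
        return "Response withheld by output guardrail."
    return answer

print(guarded_completion("Summarise this support ticket."))
```

Because the checks wrap the model rather than living inside it, swapping or upgrading the LLM does not require retuning the guardrails, which is the point of the sidecar approach.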
Videos
Running prompts at CI does not make your GenAI app enterprise ready
152 views · 6 months ago
Speaker: Jakob Frick, CTO, Radiant AI
The BEST component for your RAG system
458 views · 6 months ago
Speaker: Jeffrey Kim, AutoRAG Lead Dev, Markr Inc. In this session, I will talk about the importance of optimizing the RAG system, and briefly show how to use AutoRAG to automatically optimize the RAG system for your data, helping you boost RAG performance quickly and easily. There are many RAG pipelines and modules out there, but you don't know which pipeline is great for “your...
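As a rough illustration of what automated RAG optimization involves (this is not AutoRAG's actual API), the sketch below grid-searches a couple of hypothetical pipeline settings against a tiny labelled QA set and keeps the best-scoring configuration.

```python
# Illustrative sketch (not AutoRAG's actual API): grid-searching simple RAG
# settings against a small labelled QA set to see which configuration wins.
from itertools import product

qa_set = [("What is MLOps?", "operationalising ML"),
          ("What is RAG?", "retrieval augmented generation")]

def build_rag(chunk_size: int, top_k: int):
    """Placeholder factory; a real pipeline would wire up a splitter, embedder and retriever."""
    def answer(question: str) -> str:
        return f"answer({question}, chunk={chunk_size}, k={top_k})"
    return answer

def score(pipeline) -> float:
    """Toy metric: fraction of answers containing the expected keyword."""
    return sum(expected in pipeline(q) for q, expected in qa_set) / len(qa_set)

results = {}
for chunk_size, top_k in product([256, 512, 1024], [2, 5, 10]):
    results[(chunk_size, top_k)] = score(build_rag(chunk_size, top_k))

best = max(results, key=results.get)
print("best configuration:", best, "score:", results[best])
```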
Why AI apps don't work in prod: AI Reliability Survey
88 views · 6 months ago
Speaker: Shreya Rajpal, CEO, Guardrails AI Despite the initial frenzy around the impact of AI on software projects, the realized impact remains limited. This is in large part because AI has inherent variability, which leaves engineering orgs stumped by the dreaded question "how do I know it won't break in prod even though it works in dev". In this talk, Shreya will cover why reliability for A...
What It Actually Takes to Deploy GenAI Applications to Enterprises: Custom Evaluation Models
133 views · 6 months ago
Speakers: Alexander Kvamme, CEO, Echo AI; Arjun Bansal, CEO & Co-founder, Log10. Alexander Kvamme and Arjun Bansal will share Echo AI's journey in deploying their conversational intelligence platform to billion-dollar retail brands. They will discuss the challenges faced due to LLM accuracy issues, which impacted their ability to deploy at scale. The speakers will discuss the iterative prompt ...
Lessons learned from scaling large language models in production
187 views · 6 months ago
Speaker: Matt Squire, CTO, Fuzzy Labs Open-source models have made running your own LLM accessible to many people. It's pretty straightforward to set up a model like Mistral with a vector database and build your own RAG application. But making it scale to high-traffic demands is another story. LLM inference itself is slow, and GPUs are expensive, so we can't simply throw hardware at the problem....
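One common lever for the throughput problem described here, shown only as a hedged example since the talk does not prescribe a specific stack, is a batching-aware inference engine such as vLLM; the model name below is illustrative.

```python
# Hedged sketch: serving an open-source model with a batching-aware engine.
# vLLM is one common choice; the talk does not prescribe a specific engine,
# and the model name below is illustrative.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")   # loads weights onto the GPU
params = SamplingParams(max_tokens=128, temperature=0.2)

# Submitting many prompts at once lets the engine batch them on the GPU,
# which usually gives far higher throughput than sequential requests.
prompts = [f"Summarise support ticket #{i} in one sentence." for i in range(32)]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```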
From Idea to Production: AI Infra for Scaling LLM Apps
311 views · 6 months ago
Speaker: Guy Eshet, Product manager, Qwak AI applications have to adapt to new models, more stakeholders and complex workflows that are difficult to debug. Add prompt management, data pipelines, RAG, cost optimization, and GPU availability into the mix, and you're in for a ride. How do you smoothly bring LLM applications from Beta to Production? What AI infrastructure is required? Join Guy in t...
LLM Fine-Tuning for Modern AI Teams: How One E-Commerce Unicorn Cut Inference Cost by 90%
161 views · 6 months ago
Speaker: Emmanuel Turlay, CEO/Founder, Airtrain AI While commercial LLMs such as GPT-4 and Claude 3 Opus offer amazing generative quality, small open-source fine-tuned models such as Mistral 7B and Phi-2/3 can offer similar performance on specific tasks, for a fraction of the cost, and with much more control. However, this has been proven to be true only when the tuning dataset is of high quali...
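For readers who want a feel for what fine-tuning a small open model looks like, here is a hedged sketch using Hugging Face transformers with LoRA adapters; the model, dataset, and hyperparameters are illustrative rather than the speaker's recipe, and the quality of the tuning dataset (the talk's main caveat) matters far more than the code.

```python
# Hedged sketch of parameter-efficient fine-tuning (LoRA) on a small open model.
# Model, dataset, target modules and hyperparameters are illustrative only.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

model_name = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with low-rank adapters so only a small fraction of weights train.
lora = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM",
                  target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora)

dataset = load_dataset("imdb", split="train[:1%]")  # tiny illustrative corpus
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                      remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="phi2-lora", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```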
Function Calling for LLMs: RAG without a Vector Database
328 views · 6 months ago
Speaker: Jim Dowling, CEO, Hopsworks In this talk, we will look at extending RAG with function calling to access structured/tabular data. We will look at how to enrich your tables with metadata and the expressivity of the queries that you can reasonably expect to perform well. We will examine function calling in the context of queries to the Hopsworks feature store, which supports extensive meta...
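The general shape of function calling against structured data is sketched below with an OpenAI-style chat API, purely for illustration: the tool schema and lookup function are hypothetical, and the talk itself targets the Hopsworks feature store rather than this toy lookup.

```python
# Sketch of function calling against structured data (OpenAI-style chat API).
# The function name, schema, model name and data source are hypothetical.
import json
from openai import OpenAI

client = OpenAI()

def get_air_quality(city: str) -> dict:
    """Hypothetical structured-data lookup standing in for a feature-store query."""
    return {"city": city, "pm25": 12.3, "updated": "2024-05-01"}

tools = [{
    "type": "function",
    "function": {
        "name": "get_air_quality",
        "description": "Return the latest air-quality readings for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "How is the air in Toronto today?"}]
response = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
call = response.choices[0].message.tool_calls[0]

# Run the requested function and hand the result back so the model can answer in prose.
result = get_air_quality(**json.loads(call.function.arguments))
messages += [response.choices[0].message,
             {"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)}]
final = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
print(final.choices[0].message.content)
```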
Finding training inefficiencies with CentML DeepView
51 views · 6 months ago
Speaker: Yubo Gao, Research Software Development Engineer at CentML Inc. and PhD student at the University of Toronto. Performance bottlenecks and resource underutilization are a common occurrence for deep learning researchers and developers; they slow down ML developers' workflows and waste computational resources. The current ecosystem of DL profilers does not provide a developer-frien...
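DeepView is CentML's own tool; as a generic stand-in for the kind of measurement it automates, the following sketch uses PyTorch's built-in profiler to surface where training time actually goes.

```python
# Generic sketch of hunting for training bottlenecks with torch.profiler.
# This is a stand-in illustration, not CentML DeepView itself.
import torch
from torch import nn
from torch.profiler import profile, ProfilerActivity

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
data = torch.randn(64, 1024)
target = torch.randint(0, 10, (64,))

with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    for _ in range(10):                       # a few training steps under the profiler
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(data), target)
        loss.backward()
        optimizer.step()

# The table shows where time actually goes; tools like DeepView dig deeper from here.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```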
Evaluating LLMs and RAG Pipelines at Scale
440 views · 6 months ago
Speaker: Eric O. Korman, Co-founder / Chief Science Officer, Striveworks Large Language Models (LLMs) and their applications, such as Retrieval-Augmented Generation (RAG) pipelines, present unique evaluation challenges due to the often unstructured nature of their outputs. These challenges are compounded by the variety of moving parts and parameters involved, such as the choice of underlying LL...
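One widely used pattern for evaluating unstructured outputs at scale, offered here only as an assumed illustration rather than Striveworks' stack, is an LLM-as-judge loop over question/context/answer triples.

```python
# Sketch of a simple LLM-as-judge evaluation loop for RAG outputs.
# The rubric, model name and examples are illustrative.
from openai import OpenAI

client = OpenAI()

examples = [
    {"question": "Who wrote Dune?", "context": "Dune is a novel by Frank Herbert.",
     "answer": "Frank Herbert wrote Dune."},
    {"question": "Who wrote Dune?", "context": "Dune is a novel by Frank Herbert.",
     "answer": "It was written by Isaac Asimov."},
]

def judge(example: dict) -> int:
    """Ask a judge model whether the answer is supported by the retrieved context."""
    prompt = (
        "Context:\n{context}\n\nQuestion: {question}\nAnswer: {answer}\n\n"
        "Reply with 1 if the answer is fully supported by the context, otherwise 0."
    ).format(**example)
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return int(reply.choices[0].message.content.strip()[0])

scores = [judge(ex) for ex in examples]
print(f"faithfulness: {sum(scores) / len(scores):.2f}")
```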
Empowering Data Science Teams: Harnessing AI with Appen
36 views · 6 months ago
Speakers: Sasha McGrath, Account Executive, Appen Geoff LaPorte, Adoption Program Manager, Applied AI, Appen In an era driven by data and powered by artificial intelligence, the effectiveness of data science teams hinges upon access to high-quality data and robust collaboration tools. Our presentation unveils a comprehensive platform designed to revolutionize how data science projects are execu...
Better Chatbots with Advanced RAG Techniques
393 views · 6 months ago
Speaker: Zain Hasan, Developer Advocate, Weaviate Chatbots are becoming increasingly popular for interacting with users, providing information, entertainment, and assistance. However, building chatbots that can handle diverse and complex user queries is still a challenging task. One of the main difficulties is finding relevant and reliable information from large and noisy data sources. In this ...
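As one example of the kind of technique such a session covers, the sketch below runs a hybrid (keyword + vector) query with the Weaviate Python client; the collection name and schema are hypothetical, and a locally running Weaviate instance is assumed.

```python
# Hedged sketch of one "advanced RAG" technique: hybrid (keyword + vector) retrieval.
# Assumes a local Weaviate instance with an existing "Article" collection whose
# name and schema are hypothetical; the talk covers more techniques than this.
import weaviate

client = weaviate.connect_to_local()
articles = client.collections.get("Article")

# alpha blends BM25 keyword scoring (0.0) with pure vector similarity (1.0),
# which often retrieves better chunks for noisy user queries than either alone.
response = articles.query.hybrid(query="refund policy for damaged items", alpha=0.5, limit=5)

for obj in response.objects:
    print(obj.properties.get("title"), "->", str(obj.properties.get("body", ""))[:80])

client.close()
```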
Enhance Cost Efficiency in Domain Adaptation with PruneMe
71 views · 6 months ago
Speaker: Shamane Siri, Ph.D., Head of Applied NLP Research, Arcee.ai Our PruneMe repository, inspired by "The Unreasonable Ineffectiveness of the Deeper Layers," demonstrates a layer pruning technique for Large Language Models (LLMs) that enhances cost efficiency in domain adaptation. By removing redundant layers, we facilitate continual pre-training on streamlined models. Subsequently, these ...
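The underlying measurement is easy to sketch: compare hidden states before and after a block of layers and look for spans that barely change the representation. The following is an assumed, simplified illustration rather than the PruneMe repository's code.

```python
# Hedged sketch of the layer-redundancy measurement behind pruning approaches like
# PruneMe: spans of layers that barely change the hidden state are pruning candidates.
# Model and text are illustrative (a small model stands in for a large LLM).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)

inputs = tokenizer("Layer pruning keeps the useful layers.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).hidden_states     # tuple: embeddings + one tensor per layer

span = 4                                       # how many consecutive layers to consider dropping
for start in range(len(hidden) - span):
    a, b = hidden[start][0, -1], hidden[start + span][0, -1]   # last-token representations
    sim = torch.nn.functional.cosine_similarity(a, b, dim=0).item()
    print(f"layers {start}..{start + span}: cosine similarity {sim:.3f}")
# Spans with similarity close to 1.0 are candidates for removal before continued pre-training.
```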
Data Versioning in Generative AI: A Pathway to Cost-effective ML
69 views · 6 months ago
Speaker: Dmitry Petrov, CEO, DVC For 5 years we have been building DVC and we know how data versioning helps teams. The evolving Generative AI workflows are different and require an evolution of versioning workflows to accomplish Generative AI goals. This new era thrives on vast amounts of unstructured data, which include everything from images, videos, and audio, to MRI scans, document scans, ...
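As a hedged illustration of what versioned data access looks like in practice, the snippet below reads a pinned dataset revision through DVC's Python API; the repository URL, paths, and tag are hypothetical.

```python
# Hedged sketch: pulling a specific, versioned copy of a dataset with DVC's Python API.
# Repo URL, paths and revision are hypothetical; the data itself would be tracked
# beforehand with `dvc add` / `dvc push` on the command line as usual.
import dvc.api

# rev can be any Git revision (tag, branch, commit), so experiments can pin the exact
# dataset snapshot they were trained on -- the versioning story the talk argues GenAI needs.
url = dvc.api.get_url("data/images/train",
                      repo="https://github.com/example/genai-data", rev="v1.2.0")
print("dataset lives at:", url)

with dvc.api.open("data/labels.csv",
                  repo="https://github.com/example/genai-data", rev="v1.2.0") as f:
    print(f.readline())
```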
Building ML and GenAI Systems with Metaflow
168 views · 6 months ago
Efficiently Fine-Tune And Serve Your Own LLMs
120 views · 6 months ago
The Who, What, and Why of Data Lake Table Formats
91 views · 6 months ago
The Journey of Building a Leading Open Source LLM Security Toolkit
114 views · 6 months ago
The Secret Sauce for Deploying LLM Applications into Production
127 views · 6 months ago
Running Multiple Models on the Same GPU, on Spot Instances
352 views · 6 months ago
Towards Robust GenAI: Techniques for Evaluating Enterprise LLM Applications
131 views · 6 months ago
Introducing Arize-Phoenix and OpenInference
622 views · 6 months ago
Mitigating RAG Hallucinations with Aporia Guardrails
137 views · 6 months ago
Evaluation Engineering: Iterative Strategies to Testing Prompts
328 views · 6 months ago
Customizable RAG Workflows with your Own Data
137 views · 6 months ago
Wanted: A Silver Bullet MLOps Solution for Enterprise
125 views · 6 months ago
Evaluation Techniques for Large Language Models
282 views · 6 months ago
Great to see Susrutha.... Wow keep doing great work 👏👏.
How does this have less than 500 views? This is fucking gold!
This is exactly what I wanted for my project
Is it possible to share the Google doc used in the hands-on workshop? Thanks.
This video caused a clash of Nikunjs in my team
Nice video, can we get the GitHub link to the code for practising it?
This is really awesome! Thank you very much.
Nice explanation thanks mam 👌
Excellent video, full of incredibly useful information, and very well presented.
Great. Could you share the resources used for this video? Many thanks.
15:40 I'd add here a task which is more 'main' than any other task: QA must understand what they do and why; they must understand the business domain itself. Thank you for the video.
Hi, how do I export to ONNX using CUDA?
Can you give me an example notebook to do this, as in the video?
Nice, but it would be better to split it into chapters; the first hour was setting up on AWS. Thank you.
Amazing structured breakdown of the problem.
Hello Kartik / YouTube handler, I have just joined a company as a Machine Learning Engineer intern and am still a fresher. I would like to keep my name and where I work anonymous on this platform. I am working on a task where I need to analyse a dataset I have been given and convert that data into text using an LLM. Example data: Date / Temperature; 2 Feb / 30C; 3 Feb / 24C. Example output: "Today's weather will be warmer than yesterday and a little pleasant..." <so on>. The use case is a little different, but this is just an example to explain what I actually want. A little more explanation: I want the LLM to read the dataset completely, either from an Excel file I have or any format like CSV, and answer my queries or draw a conclusion based on the dataset I gave. I would love to get some help/insights from someone as experienced as you on how I can achieve my goal. We can connect on some other platform if you are comfortable with it. You can contact me at my personal mail: rohitkhare998@gmail.com Thanks. Regards, Novice ML Engineer
Awesome talk. I am preparing for a privacy preserving ML interview and this was an amazing crash course. Second, for the thermal flu issue you mentioned, can't we just use FHE or SMPC like you mentioned in the slides?
Well explained!! thank you !!
For f****s sake turn the damn phone off
Very good tutorial, especially the MLServer part.
Helpful!
Do you have a demo video for this? Also, I'm not able to access the GitHub.
Great talk! As suggested, we now do see more "small" LLMs trained with considerably larger amounts of tokens than the "compute-optimal" recommended by the Chinchilla scaling laws.
Great stuff. Really looking forward to more content like this! Props @AI-Makerspace
This talk is amazing. Completely nailed it.
A repo link in the description or comments would be helpful.
Thanks for the very good overview of distributed training on Kubernetes; would love to see more detailed information on making all the pieces fit together!
Well done, Stefan!
Fine-tuning e.g. a Mistral LLM should perform way better than BERT. In practice, we typically fine-tune an LLM for the task.
Great talk!
Great talk!
fantastic
Great talk!
It is too good thank you for this wonderful workshop
Can you please let me know where I can find the presentations and notebooks?
Is there still a link somewhere to the slides?
I have found the link to the doc in case anyone needs it: docs.google.com/document/d/1zbPak5aDFcMgEIYbDmL_F9N0GHptvoobxP9GpQlesmk/edit
Great session. Thank you, guys.
Great explanation.
Lack of clarity in the PPT.
Please take care of the clarity; it's really shitty.
Great session
This is intellectually beautiful and useful
the 1080p is the same as 360p
👍👍🎉🎉❤️👍
Fantastic demo! Thank you so much.
Can you share the sample code as well?
nice
Where can I find the demo notebooks?