You're so passionate about teaching and it shows through all your tutorials. Thanks for the effort you put into this and helping others. The best ML channel I know so far on YouTube.
There are annotation tools like Haystack Annotation where you can annotate the training and test data sets manually. It's a great tool for someone who is looking to annotate a huge corpus of data. Btw, fantastic video Krish!! Thank you :)
Hi Krunal! Thanks for the insight!
Have you tried this with simpletransformers?
I tried using Haystack Annotation and exported the annotated documents in the SQuAD format, but that doesn't seem to work while training the model!
Am I doing something wrong?
@@hridaymehta893 Hi, may I ask for the solution to the issues you have mentioned?
@@avartarstar6744 Hey, as far as I remember, there is just some formatting that needs to be done in the JSON file, after you export the annotations.
You just need to remove some fields in the JSON file, if I am not wrong.
Check the difference in the format of the file required for simpletransformers vs what you are getting using Haystack.
Or you can use the Haystack Model itself for the QnA training. I worked with the Haystack Model as well.
@@hridaymehta893 I see! Much appreciated!
@@avartarstar6744 No worries :)
Thank you Krish for covering this topic; you are a saviour as always.
Thank you Krish. We really appreciate your effort to create the video lectures. Your tutorials are really informative. Thanks for covering the Question Answer Generation BERT model topic.
Thanks for all the effort you've put into this, Krishnaik. It's super well-made and helpful!
Sir, I am not able to find more videos from this playlist. This is an amazing playlist. I want to learn more about transformers.
Krish sir, your videos are more informative than others. Would you please share how you created the dataset for QA model training?
Thank you Krish.
Could you please make a video on "Text Summarization with Custom Data"?
@3:52, the explanation of the is_impossible flag is wrong. It essentially means that if it is set to false, the answer can be obtained directly from the context, and if it is true, the question cannot be answered directly from the context.
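For anyone who wants to see the two cases side by side, here is a minimal sketch of what they look like in the SQuAD-style structure that simpletransformers expects (the context and questions are illustrative examples, not taken from the video):

context = "Mistborn is a series of epic fantasy novels written by Brandon Sanderson."

answerable = {
    "context": context,
    "qas": [{
        "id": "00001",
        "question": "Who wrote the Mistborn series?",
        "is_impossible": False,  # the answer is a literal span of the context
        "answers": [{"text": "Brandon Sanderson",
                     "answer_start": context.find("Brandon Sanderson")}],
    }],
}

unanswerable = {
    "context": context,
    "qas": [{
        "id": "00002",
        "question": "When was the first Mistborn book published?",
        "is_impossible": True,  # the context does not contain this answer
        "answers": [],
    }],
}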
Thank you Krish. This was really helpful. Keep up the good work :)
Please create a video on how to train squad dataset for question generation. And thanks for this video.
Great topic Krish, please add videos on other NLP tasks also.
GREAT video! very informative!
I would request you to please make an updated video on this same topic. This is the need of the hour.
Change train_batch_size to a lower value like 6 or 2. It will give you the correct result, because the number of training examples is very small.
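In case anyone is wondering where that value goes, here is a minimal sketch using the simpletransformers QuestionAnsweringModel (the model type, checkpoint name, and other argument values here are placeholders, not necessarily what the video uses):

from simpletransformers.question_answering import QuestionAnsweringModel

train_args = {
    "train_batch_size": 2,        # small batch size for a small custom dataset
    "num_train_epochs": 3,
    "overwrite_output_dir": True,
}

model = QuestionAnsweringModel(
    "bert", "bert-base-cased",    # model type and pretrained checkpoint
    args=train_args,
    use_cuda=False,               # set to True only if a GPU is available
)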
Thank you for your video. It was so helpful. One question: in a real implementation, how do you use a metric (e.g. F1 score) to evaluate the model?
Context, whether it can be answered, and the index at which the answer is available. Isn't feeding this kind of detail to an ML model that is already trained on natural language too much spoon-feeding? Think of the amount of effort it will take to create a Q&A dataset with diverse topics. In the end, is the model just doing a lookup or search? Where is the intelligence?
Thank you so much Krish! How can you get the list of training accuracy and evaluation accuracy?
Thank you, your videos are of great help. Can you please guide me on how you created your custom data? Like if there are any labelling tools for question answering tasks.
impressive tutorial
Thank you so much, it was a great tutorial and it helped a lot.
ValueError: 'use_cuda' set to True when cuda is unavailable. Make sure CUDA is available or set use_cuda=False. What does that mean? And secondly, on executing this code: # Train the model
model.train_model(train, eval_data=test) it shows that train and eval are not available.
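The first error just means the runtime has no GPU. A minimal sketch of one safe way to handle it, assuming the same simpletransformers QuestionAnsweringModel setup as in the video (the model type and checkpoint here are placeholders):

import torch
from simpletransformers.question_answering import QuestionAnsweringModel

# Only request CUDA if PyTorch can actually see a GPU; otherwise fall back to the CPU.
model = QuestionAnsweringModel(
    "bert", "bert-base-cased",
    use_cuda=torch.cuda.is_available(),
)

The second error usually just means the cells that define the train and test variables were never run before calling train_model.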
Thank you Krish
Hey, can you make a video on automatically creating questions and answers from PDF and text files?
Hi Krish, thank you very much for the great video. My dataset is in CSV format and I have one column of descriptions and another column of labels, both in text. Can I do QA on this such that the question is the description and the answer is the label? If yes, how can I prepare the data in the format you mentioned?
Have you got the process?
how to prepare that?
No, what do you mean?@@manasmanuu5430
@saharyarmohamadi9176 Hi, I have the same data as you and tried converting it to JSON; however, when I run train_model I get a message that it cannot be found... (I don't know whether the default setting answer_start = 0 affects the results or not)
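For the CSV question above, here is a rough sketch of one way to build the structure simpletransformers expects, assuming hypothetical column names "description" and "label" and that the label text actually appears inside the description (extractive QA needs the answer to be a literal span of the context, and answer_start should be the real character offset rather than a hard-coded 0):

import json
import pandas as pd

df = pd.read_csv("my_data.csv")  # hypothetical file with 'description' and 'label' columns

train_data = []
for i, row in df.iterrows():
    context = str(row["description"])
    answer = str(row["label"])
    start = context.find(answer)       # character offset of the answer inside the context
    train_data.append({
        "context": context,
        "qas": [{
            "id": str(i),
            "question": "What is the label for this description?",  # made-up question template
            "is_impossible": start == -1,  # unanswerable if the label is not in the text
            "answers": [] if start == -1 else [{"text": answer, "answer_start": start}],
        }],
    })

with open("train.json", "w") as f:
    json.dump(train_data, f)

If the labels never appear verbatim in the descriptions, this extractive setup is probably the wrong tool and a text-classification or generative model would fit better.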
Hi Krish, can you please make a video on Feature Engineering on numerical variables? Thank you
Sir make video on batch normalisation please
Great tutorial
How can I achieve the same question answering model without transformers, using an RNN/LSTM and an attention mechanism? Please help me with that, sir.
Could you please make the same kind of video for Jira sample data?
How do you fine-tune a Dense Passage Retriever (DPR) on your own Excel file for a question-answering model?
Thank you Krish for the great video. Can you please make a video about deploying a custom object detection model on Android? Thanks in advance!
@kirsh naik Hi, when creating a custom dataset, is it best to keep the context as short or as long as possible? Additionally, can a context have multiple questions, and can each question have multiple variations of the answer?
This QA seems like nonsense. I mean, if the user has to provide the context with the question, then it means he already knows the answer, so why would he take help from the model 😂
If the document is too long and the person doesn't wanna read it all
It's called extractive question answering for a reason 🤦 what did you expect
@@nicolasnicolas5238 But the extraction process will take a lot of time if the context is a long document.
@@SmartTech-m1u
Actually, it doesn't take too much time. This is because, if you know beforehand that the task is QA with context, you can fine-tune a small language model which should run faster than an LLM like GPT-3; it should even run faster than GPT-3.5-turbo or GPT-4o-mini by orders of magnitude, at no cost. This is possible because extractive QA can be formulated as a prediction of indices over the context. The training data looks something like this: input = (question, context) -> output = (position where the answer appears in the context, span of the answer). Notice that the output is just a tuple of integers, which means this task is easier than text generation (generative QA).
Now, you can even filter out unnecessary context further: just follow the RAG process but swap the final call to the generative model (the G in RAG) for the extractive QA model, and there you go. That simple trick should make the whole process even faster.
And if you use an open model trained on the SQuAD dataset, the implementation should be really easy, no more than 50 lines I'd say.
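To make that concrete, here is a minimal sketch using a publicly available SQuAD-fine-tuned extractive QA checkpoint through the Hugging Face pipeline (the model name is just one common choice, not something from the video, and the example context is made up):

from transformers import pipeline

# A small extractive QA model fine-tuned on SQuAD; it predicts an answer span inside the context.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "Simple Transformers is a library built on top of Hugging Face Transformers "
    "that wraps common NLP tasks, including extractive question answering."
)
result = qa(question="What is Simple Transformers built on top of?", context=context)

# The result contains the extracted span plus its character offsets and a confidence score.
print(result["answer"], result["start"], result["end"], result["score"])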
Please, can you make more videos on transformers?
Krish sir, please, please reply to one thing: if I want advice from you, which subscription do I have to take? Please tell me. You are everything for me, sir.
Sir, can you add a video on extracting the output from the LSTM model?
The dataset-building process takes too long, and building custom datasets from scratch this way is not feasible. Is there any workaround for this? I'm mostly looking for an answer that will automate this task.
Krish sir, I want to work under you for my whole life. What do I have to do? Please tell me, because thanks to you I am learning so many things.
What if I have a CSV file which has 2 columns, "Questions" and "Answers"? How will I build the chatbot then?
Hi! I have a question... How do I use SimpleTransformers to generate smarter answers?
Can I use one large context with many questions, instead of using separate contexts?
Great tutorial. Could anybody share the link to the source code?
Is multi-label classification using BERT possible? Any good neural network project to refer to?
Sir, my dataset has only two columns: the first is the question and the second is the answer. I want to train my model on this, so can I train my model using this method or not?
I want this to work offline. What should I do, sir... where should I download the BERT files from?
How can I use this for more than 512 tokens?
Why is it asking for an API key?? Can't we train offline? If so, what is required, ji?
model.train_model(train, eval_data=test)

NameError                                 Traceback (most recent call last)
----> 1 model.train_model(train, eval_data=test)

NameError: name 'test' is not defined
I have the same issue.
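That NameError just means the test variable was never defined in the session before train_model was called. A minimal sketch of what needs to exist first, assuming the train/eval data were saved to JSON files earlier (the file names here are placeholders):

import json

# Load the training and evaluation sets; the variable names must match
# what is passed to train_model, otherwise Python raises this NameError.
with open("train.json") as f:
    train = json.load(f)
with open("test.json") as f:
    test = json.load(f)

model.train_model(train, eval_data=test)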
@krishnaik The video is amazing... I am looking for FAQ-based QnA with no context... How can I use your code?
Please share some input.
Amazing video! Can you make a similar video on conversational AI with an end-to-end pipeline? :)
Can I do it for Mistral or Llama 2?
Sir, is this based on the same paper?
Why is the eval_loss negative? Is it possible that there might be a bug somewhere?
Thanks for the illustrative video. I have transformed my data from CSV to the JSON format required by simpletransformers. I checked the format line by line against yours as well. As soon as I try to train my model, it says "list index out of range". Can you please help me understand why it is throwing that error?
Hey I faced the same error. Were you able to solve it?
Can't we use a CSV file for the BERT model?
How to create a custom dataset?
Can I use this simpletransformers setup for a Bengali QA dataset?
Can you tell me how to install cdQA in Colab?
Can we use it for fake news detection? And will the Excel format work?
Is this task done with PyTorch or TensorFlow?
Sir, is there a fast and efficient way to create the dataset in this format from a CSV file?
What is the difference between a Transformer and Simple Transformers? You are using Simple Transformers while implementing the QA. Can anyone give the answer if you know it?
@krish What is the use of this if we have to provide the context every single time we make a prediction? This makes this whole framework garbage if there is no other way. If there is, please share.
Hey Krish, I got confused at this point: model.train_model(train, eval_data=test). What is train_model?
😍😍
gem💎
#Thanks #krish
How do I become an expert in data science??? How?
No offence, but you don't have any idea how to improve accuracy.