Great video. But what can we do when our data is much larger than in the video example? Is there any alternative way to increase the number of tokens per query?
Hi Lauren, maybe you can try the large variants of Google's TAPAS or Microsoft's TAPEX. The large models seem to have a higher model_max_length.
You can also try chunking your table data and then feeding the chunks to the model.
@@superlazycoder1984 Yeah, but wouldn't that lose some precision in the prediction?
Chunking might not lose precision, and you can also combine it with the larger models.
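The chunking idea discussed above can be sketched roughly like this: split a column-oriented table into row chunks small enough to fit the model's token budget, query each chunk, and keep the highest-scoring answer. This is a minimal illustration, not code from the video; the commented `transformers` pipeline usage at the end is an assumption about how you might wire it up with a TAPAS checkpoint.

```python
def chunk_rows(table, chunk_size):
    """Split a column-oriented table (dict of equal-length lists)
    into smaller tables of at most `chunk_size` rows each."""
    n_rows = len(next(iter(table.values())))
    for start in range(0, n_rows, chunk_size):
        yield {col: vals[start:start + chunk_size]
               for col, vals in table.items()}

# A small example table; real use would have far more rows.
table = {
    "city": ["Paris", "London", "Berlin", "Madrid", "Rome", "Vienna"],
    "population_m": ["2.1", "8.9", "3.6", "3.2", "2.8", "1.9"],
}

chunks = list(chunk_rows(table, chunk_size=2))
print(len(chunks))  # 3

# With a table-QA pipeline (e.g. transformers' "table-question-answering"
# task and a checkpoint like google/tapas-base-finetuned-wtq), one
# possible pattern, shown here only as a sketch, would be:
#
#   answers = [qa(table=chunk, query=question) for chunk in chunks]
#   best = max(answers, key=lambda a: a.get("score", 0))
#
# i.e. take the answer from the chunk the model is most confident about.
```

Note the caveat raised in the thread still applies: questions whose answer spans rows in different chunks (e.g. aggregations over the whole table) can lose precision this way, so chunk boundaries matter.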