First, thank you for putting this together. I just wanted to add my experience.
When I tried ChatGPT, at least in my experience with my document (a PDF medical text), the results were poor. So poor that I decided to immediately look at the others.
I then did a side-by-side with ChatDOC and Humata. Free accounts are easy to create for both. However, I personally found Humata's results a bit more thorough than ChatDOC's. Plus, and this is a big plus, from what I can see ChatDOC has a limit of 500 pages per file. For larger academic texts, this can be an issue. I'm (personally) less concerned about spending a bit more if it allows me to submit and query an entire lengthy document at once. Humata's Pro account includes a base of 250 pages per document, plus a nominal fee of $1 for each additional 100 pages. Very much worth it for me, so I went with Humata.
Dear Timothy,
Thank you. I'm glad you got something out of my review. And thank you for sharing your experience!
I also agree with you. HUMATA is likely the better program for scholarly use, but if you want to stick with a "free" app, then ChatDOC is more forgiving and gives the user more free usage. The free HUMATA runs out pretty quickly if you are trying a bunch of different documents. But if you are subscribing, then that wouldn't concern you.
I think if a user were to get "serious" about using these tools, then I too would agree that HUMATA is the better choice.
Best,
Mike
Just started using ChatDOC today after watching this video. So far so good, since the responses stay within the scope of the paper that was uploaded, as you mentioned.
That is great! I’m so glad this was helpful. You can also try out Humata or ChatPDF.
ChatDOC now has a collection feature in which you can group files in a single folder and then ask questions across all the documents. A winning feature, in my opinion.
Dear Matt,
Thank you so much for telling me about this new feature! I agree. It is a winning feature and one that I was hoping would be available.
Best,
Mike
Dear Matt,
I just looked at this feature in their FAQ, but it seems it is only part of the paid subscription. How is it working, if you have tried it? Have you been able to use it via the free service?
Best,
Mike
I'm currently comparing ChatPDF, ChatDOC, and Humata, with the same PDF sources and questions across the board, to determine which service I'll end up relying on for my graduate research. I asked the developers, and all of them use GPT-3.5 under the hood. To my knowledge, none of these tools uses the GPT-4 model, chiefly because of its vastly more expensive cost per token. I wonder: does this mean that fundamentally there should be no difference in the quality of output generation, since all three tools use the same GPT-3.5 Turbo (default) model? I have no coding or computer science background to have any reliable inkling, but perhaps the way each tool is designed to parse the long text, and the instructions given to the GPT model about those chunks, make a difference?
Dear Handrio Nurhan,
I am so glad to hear that you are testing all three programs: ChatPDF, ChatDoc, and HUMATA. I am very interested in your results. I hope you can share them here when you are finished.
re: does this mean that fundamentally there should be no difference b/c all three use 3.5?
Firstly, thank you for asking the developers which version of GPT they are using. It is so interesting that they are using 3.5 and not 4. I simply assumed they were using 4 because of the paid Plus plan that ChatGPT is offering, and also because Microsoft Bing is using GPT-4. I also assumed that Microsoft Copilot for Word, Excel, Outlook, Teams, and PowerPoint would be using 4, but now I am not so certain. I assumed it because these are enterprise or business suites offered to clients worldwide, and I did not see why GPT-4 would be reserved only for OpenAI's own subscription model.
Second, I think there will always be differences between the three programs. I have also been testing them with different documents, and the "voice" or tenor of the writing is often subtly different; sometimes it is very different. Each team of programmers will be writing different code to parse the text. That is also why I thought they were using version 4: the context window (token limit) is much bigger for 4. But if they are all using 3.5 because of the cost, then they must have written some kind of Python code to cut the text into smaller chunks that are processed individually and then reassembled afterward, roughly along the lines of the sketch below. How each program chooses to do that chunking and reassembly will certainly shape the output.
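To give a rough picture of what I mean by chunking, here is an illustrative sketch only; the chunk size and the ask_model() helper are my own placeholders, not anything these programs actually publish:

```python
# Illustrative sketch: cut a long document into chunks, query each chunk,
# then ask the model to merge the partial answers. The chunk size and the
# ask_model(prompt) helper are placeholders, not anything these apps publish.

def split_into_chunks(text, max_chars=6000):
    """Split the document into pieces small enough for the model's context window."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def answer_from_document(document_text, question, ask_model):
    """Ask the question of every chunk, then combine the partial answers."""
    partial_answers = []
    for chunk in split_into_chunks(document_text):
        prompt = (
            "Answer the question using only this excerpt.\n\n"
            f"Excerpt:\n{chunk}\n\nQuestion: {question}"
        )
        partial_answers.append(ask_model(prompt))
    merge_prompt = (
        "Combine these partial answers into one coherent answer:\n\n"
        + "\n\n".join(partial_answers)
    )
    return ask_model(merge_prompt)
```

In practice these tools most likely retrieve only the chunks most relevant to the question rather than all of them, and how they pick and reassemble those chunks is exactly where the differences in "voice" could come from.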
Thirdly, none of the three programs solves the citation problem. HUMATA and ChatDoc may highlight the passages from which they are semantically composing the answer to your questions, but they never actually cite or quote any passages. This is the biggest failure of these programs. The user cannot use their phrasing except for "informational" purposes, because they do not tell you which passages or turns of phrase are paraphrased and which are direct, verbatim "stolen" passages. It would be up to the user to scan each highlighted passage and see for themselves --a tedious process.
In conclusion, all three are great for interacting with a PDF document as an interlocutor. They create this amazing interface where we can quiz and challenge the document. The ability to ask anything and keep asking more is the uncanny aspect of all three programs. Yes, sometimes the answers can become repetitive, but it is still very useful to see how the programs help the user steer around or encircle a question. The sum is greater than the parts. This "quizzing" of the document does produce new knowledge. And that is really its power and value.
Best,
Mike
@UniversityAIed
Hi Mike,
On model costs:
Currently, besides Microsoft's exclusive access, there are two ways to access GPT-4: through the API and through ChatGPT Plus. The API route is presently only released to select developers --I'm still on the waitlist myself. The ChatGPT Plus route is a $20 subscription, but its GPT-4 is capped at 4K total context and is usable only from that chat interface, not from other applications. These PDF tools are all using the API route, but with GPT-3.5 Turbo. Completions with the GPT-4 8K and 32K context variants cost $0.06 and $0.12 per 1K tokens respectively, whereas GPT-3.5 Turbo is just $0.002 per 1K tokens --so the cheapest GPT-4 completion costs 30x as much as GPT-3.5 Turbo (see the quick calculation below)! Though of course, the costs will be adjusted down the line. So I don't see GPT-4 powering many applications anytime soon. If such an option appears, one had better be ready to pay quite a lot for it for now.
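To make the price gap concrete, here is a quick back-of-the-envelope calculation using those per-1K-token rates; the 200,000-token workload is only an assumed example, not a measured figure:

```python
# Back-of-the-envelope cost comparison using the per-1K-token rates above.
# The 200,000-token workload is an assumed example, not a measured figure.

price_per_1k_tokens = {
    "gpt-3.5-turbo": 0.002,  # USD per 1K tokens
    "gpt-4 (8K)": 0.06,
    "gpt-4 (32K)": 0.12,
}

tokens = 200_000  # e.g. summarizing and querying a long academic text

for model, rate in price_per_1k_tokens.items():
    print(f"{model}: ${tokens / 1000 * rate:.2f}")

# Prints roughly:
#   gpt-3.5-turbo: $0.40
#   gpt-4 (8K): $12.00
#   gpt-4 (32K): $24.00
```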
Regarding the performance of the three apps, I preliminarily observed that ChatPDF consistently generated the fewest words, and the quality of its summaries and answers is too general for my liking --like writing from a student who doesn't get sufficiently specific in their reading responses. ChatDOC and Humata, on the other hand, would be more suitable for academic settings, given their more elaborate and specific answers. I found their answers more useful and informative for understanding readings. I suspect this has a lot to do with the settings (temperature/randomness and word limit) that the developers of each app set on the backend, something like the call sketched below.
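For what it's worth, here is roughly what such a backend call might look like with the openai Python library as it stood in 2023; the temperature and max_tokens values are purely my guesses at the knobs each developer tunes:

```python
# Rough sketch of the kind of call each app's backend probably makes
# (openai Python library, 2023-era interface). The temperature and
# max_tokens values are guesses, shown only to mark where the
# "randomness" and "word limit" settings live.
import openai

openai.api_key = "sk-..."  # the app developer's key, not the end user's

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer using only the supplied excerpt."},
        {"role": "user", "content": "Excerpt: ...\n\nQuestion: ..."},
    ],
    temperature=0.2,  # lower = less random, more literal phrasing
    max_tokens=400,   # caps how long the generated answer can be
)
print(response["choices"][0]["message"]["content"])
```

A lower temperature and a tighter max_tokens could easily be enough on their own to make one tool's answers read as terser or more general than another's.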
You're absolutely right on the direct in-text citations/quotes. The ability to rephrase with direct quotes incorporated would be ideal and most useful, as we could then also check the context with better accuracy.
One other game-changing feature is querying across multiple PDFs at the same time, which ChatPDF and ChatDOC are developing.
Best,
Handrio.
@nurhandrio @aiineducation5746 Have you tried using Bing Chat inside the Microsoft Edge browser? I believe "Precise" mode uses GPT-4. Load up your page/PDF and ask questions.
Can I use a debit card on ChatDOC?
Hello sir, which is the best: ChatPDF, ChatDOC, or another?
Dear Mathspro,
It depends on your usage model. But I think that ChatDoc and HUMATA are both very good. They both highlight the passage from which their answer is derived. ChatPDF is also excellent --it just doesn't highlight the passage. All three of these interfaces allow you to interact with a specific document in a meaningful way.
Best,
Mike
Thanks
You are welcome! I hope it helps.
Is its paraphrasing good?
Dear Likengineer,
I think it is good. The highlighting of the content is useful because it shows the user where the "paraphrasing" comes from. When you say "good," I think you are asking whether it is accurate. Yes, in this sense it is good: I believe it is accurate for articles in the social sciences and humanities. I am not sure how it performs for science and technology articles. Someone with a science background will have to test for those parameters.
Best,
Mike
So which is the best tool among all these?
Dear MV Krishna,
I think that ChatDoc and HUMATA are both very accurate in their summarization. They are also both very good at showing the user, through highlighting, where the information comes from. I also like ChatPDF because its answers are very straightforward. Some might say that its voice is the least academic-sounding of the three programs, and it does not highlight the text, but it is still accurate and easy to use.
HUMATA's free version also has more limitations than ChatDoc's and ChatPDF's. The other two are more generous with their free-user limits.
Best,
Mike
@UniversityAIed Thanks a lot.
@UniversityAIed Actually, ChatDOC is really amazing. I don't know how, but it is very accurate in answering my questions.