There's no way you can let ChatGPT make any decisions of any kind, at least not in the near future. However, it can suggest treatments which the doctor may not have thought of. The doctors will ultimately still have to decide for now. ChatGPT is not a medical expert, or even very good at reciting exact facts. Of course a medical-focused AI could be created, or a separate version of GPT could be trained specifically for this task.
It has potential but I think the concern is realistic. Garbage in, garbage out. Conditions with less industry stake will benefit from better diagnosis, especially with image recognition.
I've had unpleasant medical symptoms (headache, nausea, upset stomach) for the past 12 years, and have seen many doctors and been to hospital 3 times for tests - all to no effect. The doctors could not diagnose my condition. I take omeprazole for acid reflux and last summer just out of curiosity I tried to do without it; after 2 weeks my symptoms were gone. Amazingly in all those years the doctors didn't consider that I might be allergic to omeprazole. A couple of weeks ago I typed my symptoms into GPT, as a theoretical case, and asked it to diagnose 5 possible causes and rank them in order of likelihood. GPT listed allergic reaction to omeprazole at number 2. So much for the medical profession.
As someone in the health industry, I see this a lot in the sickness industry. Look at your diet for answers to current health issues. ChatGPT will educate, and that's good, however there's a need to stay in front of this … Vinay's review is on task: extract the beneficial information, treat the nonsense as just that. 13:35
I’ve interacted asking about ChatGPT’s own filters and why it won’t talk about certain subjects, such as hypnosis (i.e., write a script). It just makes stuff up, including references. I check the references, tell it that they don’t exist or don’t include what it claimed. It apologizes, agrees that the references are wrong, and confabulates new claimed references, apologizes again when confronted, makes up more stuff. The term used in AI is “hallucination.” It does produce impressive and plausible outputs, but users need to actually understand the subject matter and be willing to check *everything*.
@@PeteQuad Not yet. I played with it in January; it wasn’t super duper locked down and gave me some interesting output, for example a silly fun trance script to become cotton candy, though sometimes it did balk. Then they tightened everything down in late January or February, and I explored its boundaries on multiple topics and interrogated it in February, and then discovered that it had a propensity to, um, _lie_. Edit: I just gave it my test question, “Write a trance script to become cotton candy.” In January it wrote a creative fun script. In February it refused because hypnosis is dangerous and bad. Now it still refuses but doesn’t specifically say that hypnosis is a forbidden topic, but rationalizes that a script could promote unhealthy eating and body image issues. It has gotten more sophisticated about explaining away its limitations… which makes it almost “human.” Edit edit: But with a slight rewording, putting in the word “experience,” it complied without argument. So it’s very interesting and it will be fun to explore.
Currently using ChatGPT as a pair programmer to understand the existing codebase, but that's really all it's useful for in my research, which is related to stroke diagnosis. The problem with relying on it is that it's a bullshitter.
ChatGPT definitely seems like a world-shaking rubber ducky for already-competent individuals. I definitely have my concerns even with GPT4 for people who cannot sniff bullshit.
GPT-4 already showed a near 90% score on Step 2 of the USMLE. That is the part with clinical scenarios and questions about diagnosis and treatment. The bar is 60 and the median is 70-75, so GPT-4 scoring this high is actually insane.
So imagine what it can do in a couple of years. Get your affairs in order. Tell your loved ones that you love them. Forgive the ones that hurt you. It's game over. The human race is screwed.
GPT is the tech that allows ChatGPT to operate. There will be more specialized versions of this from different companies. If one is particularly biased, we can share that information and use another. Choice will prevent that from happening.
I have an idea where we could control this: an AI tax that depends on how many AIs a person or company has or uses. It could also increase or decrease depending on how powerful an AI one uses. We could tie it to a person's social security number, thus ensuring that very few, if any, people can circumvent the law and have too many AIs.
@@aludrenknight1687 Hey, sorry I came across as harsh. I guess I kept seeing people lose hope, thinking this tech will do us more harm than good, and I think it has great ability to democratize the tech in our hands. What you said is totally valid!
Great topic. I'm a published researcher in a different field, concerned about how some of these AI generated articles are going to influence public policies.
Soon: when papers are being written and reviewed by AI, we should reach a point where scientific consensus supports being governed by AI. :D I'm not especially worried, as my impression is that the impact of actual science on policy is weak, while scientists are just being used as a secular version of priests, whose authority is invoked to justify already-made policies.
Get ready for heavy caseloads and being constantly stalked in your notes by AI. I could see certain algorithms actually replacing some clinical decision making in acute care therapy, such as the disposition of a patient, which is a large part of the decisions PT/OT/SLP are called on to make in that setting.
AI has already demonstrated itself to be better than humans at that, long ago in fact. Chess programs from the 90s were so good, that if a human played above a certain level, they thought they were cheating by using AI. (OK, it wasn't AI back then, it was pure algos, but you get it). The newer ones are AI.
Your analysis makes a lot of sense. There are some fields of medicine where an average doc is probably adequate and then others where nuance and skill might be bigger factors. Hopefully we would see a reduction in simple human errors that can happen when clinics are stretched beyond reason.
Having been using my own locally trained AI system since October, and seeing how quickly they are proliferating via web interfaces, plus the MedPage Today article about how GPT-4 will "change the way physicians do their jobs," I see a huge number of eff-ups coming down the road. It's like hiring an intelligent but naive child to help you in your practice: they're cute and fun around the staff, and everything is great, until they poop in the OR.
@@AvgJane19 Right. I believe that this will open a new path for experienced doctors. Take what is spit out, and all things have to be signed and reviewed by a SPONSORING physician. Also, let’s not forget that we can TRAIN the AI to behave how we’d like. I could design an entire practice around AI spitting out my very own recommendations to my other medical staff as the medical director. You can tell GPT that it has given you hogwash! Seems like an easy workaround for a GOOD doctor, not the mediocre ones who just punch the clock and prescribe the same meds to all their patients anyway 😂
@@markjohnson7510 Right, but what are people who are not already rich supposed to do for work... I feel like the majority of us would be best served by destroying the hardware that this runs on. The argument that the code is already out there doesn't really mean it could just be reconstituted by anybody on any sort of hardware.
@@hidesbehindpseudonym1920 the hardware to run it is relatively lightweight, so impossible to destroy because it’s everywhere. The training hardware is probably susceptible to that sort of attack, but not the running hardware. You can run simplified versions of this stuff on a phone today.
Love the analysis. As a tax professional I concur: it will produce mediocrity in the profession, because it’ll amplify or multiply the cost of deviating from the general or popular consensus. Historically the outliers have made the greatest advances, so we will have to see if public trust is maintained while still advancing each of the professions.
In 2000 professor Paul Unschuld gave a lecture at the University of Technology, Sydney. He suggested that medical guidelines formulated by insurance companies would eventually eliminate the need for the general practitioner. Evidence based medicine would move away from including the physician's experience and patient's input, to a prescribed automated system. One would feel like one still has autonomy and choice, but the choice will be between McDonald's and Hungry Jacks. His final statement was that 'medicine is a puppet on the string of society'. The idea of being a puppet was challenging...
Insurance people call the shots now about what is allowed or not allowed in terms of a patient's care. Doctors are nothing more than pill pushers for pharma. Utterly insane. Best advice is don't get sick.
I couldn't get it to cut a sheet of plywood. I gave it board sizes that were impossible to make out of one sheet of plywood, and it insisted it could be done. Then, when I proved to it that it couldn't be done, it apologized and admitted it was wrong. I'm still not impressed.
"The results were consistent across tests. All four tests, the Pew Research Political Typology Quiz, the Political Compass Test, the World's Smallest Political Quiz and the Political Spectrum Quiz classified ChatGPT's answers to their questions as left-leaning." ~ David Rozado, 12/13/2022
One could take the position that truth is actually something other than opinion. An intelligent system that would be able to discern truth would inevitably appear biased if it based its answers on the truths it discerns. Striving for balance in every response might not always lead to the most accurate or truthful representation of a topic.
@@user-sc1jc5nn8u I would like to doubt that truth has an arbitrary character. But it is true that the evaluation of an issue depends largely on the point of view that is taken. An autocratic ruler, for example, asked about the best form of government, would in all probability prefer an absolutist kingdom to a republic. But should we expect an AI to take a neutral stand on this issue? Or would we want it to look at an issue from the point of view that would be most beneficial to humanity as a whole?
What I like most about you is a subtle humility. You are very smart, high IQ I am sure, but you recognize that your understanding is imperfect, as is all our knowledge. I appreciate that I can listen to you and search on my own to see more clearly what may be the truth. Thank you.
I think it could be an amazing assistant tool to help doctors gather vast amounts of data and summarize it. The problem is the tool is only as good and unbiased as the people doing the programming. Given Microsoft's large stake in OpenAI, I think it will be used in materialistic and ideologically biased ways, just as they have done with medicine recently.
If it is only as good as Wikipedia it is not to be trusted. I always check the footnotes. Don’t get me wrong I love the potential of the internet. We now have all the world’s libraries at our fingertips.
It’s not just as good as Wikipedia, it’s better. Why do you think they give it all those tests? OpenAI is trying to tune GPT for accuracy, and they have been successful so far. I’m not saying we shouldn’t be skeptical, but I am saying that it’s not just another Wikipedia.
@@pathacker4963 That's like replying to the statement "we have the restaurants we are allowed to have" with something like "I have never been restricted from ordering anything on the McDonald's menu"... the point is you aren't the one deciding what is or is not "library material." Not only that, it's completely beside the point of the concerns about the future if certain censor-happy actors get their way with increasing tenacity in corporate and educational institutions.
I lived and worked in the Middle East, and the doctors were called “Google doctors,” as most I’d seen literally checked Google as you sat in their consultations. I’ve seen this rarely in the UK but have witnessed it. I am expecting more doctors and their patients to consult a GPT before and during consultations.
@@AvgJane19 The AI is programmed not to act all-knowing. It lays out all possibilities and shortly it will even be able to add probability theory to those. For the most part, our world is already run on data analysis. If you want an accurate diagnosis, the information used to come to those conclusions by doctors is far less accurate than a machine learning program designed to analyze the data doctors already learn from and apply to their diagnoses. So if you trust your doctor who learned from the same sources a machine does, you would also trust the machine to make objective calls and the doctor to make subjective calls. Doctors already trust computer programs to provide them with the data they use to formulate a diagnosis. Subjective diagnoses are needed because doctors are also there to provide reassurance and empathy, this is why it is still important for doctors to be able to interpret the data independently.
There are better AI models than ChatGPT suited for healthcare. You can ask ChatGPT, and it can transparently provide education about which models can outperform it in a healthcare setting, such as Bio_ClinicalBERT. It is also extremely important that physicians understand the basics of medical prompt engineering.
@@nealdriscoll22237 The use of "" when submitting prompts, and also, in a healthcare setting, the use of transcripts for diagnosis that contain personalized, rich clinical patient data (de-identified, of course). This will allow most AI models to predict more accurate diagnoses than a basic open-ended prompt that contains no family history, allergies, past medications, etc.
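To make the point concrete, here is a minimal sketch of what "structured" versus "open-ended" prompting could look like. The field names and wording are purely illustrative assumptions, not any real EHR schema or vetted clinical template:

```python
# Hypothetical sketch: assembling a structured prompt from de-identified
# patient fields, instead of a bare open-ended question.

def build_clinical_prompt(symptoms, history, medications, allergies):
    """Combine de-identified patient data into one structured prompt string."""
    return (
        "You are assisting with a differential diagnosis.\n"
        f"Presenting symptoms: {', '.join(symptoms)}\n"
        f"Relevant history: {history}\n"
        f"Current medications: {', '.join(medications)}\n"
        f"Known allergies: {', '.join(allergies)}\n"
        "List the five most likely diagnoses, ranked by likelihood, "
        "with brief reasoning for each."
    )

prompt = build_clinical_prompt(
    symptoms=["chronic headache", "nausea", "upset stomach"],
    history="12 years of symptoms, multiple negative hospital workups",
    medications=["omeprazole"],
    allergies=["none documented"],
)
print(prompt)
```

The idea is simply that family history, current medications, and allergies in the prompt give the model something to reason over, rather than leaving it to guess.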
I find your analysis both amusing and impressive at the same time, primarily because it moves from skepticism to optimism based on logic! That in and of itself is a progressive and profound evaluation.
I absolutely love this video! Can you give some examples of how healthcare is broken in this country? Perhaps a separate video talking about this? Thank you!
The ability to ask the right questions will be an important new skill set for doctors to master. One area where current medicine will be transformed is in the ability of AI to see patterns in patient data, be it DNA, medical history, symptoms, imaging, or blood work. The software can look for patterns across thousands and thousands of people and, with higher-math-based algorithms, find patterns that a doctor can't even guess at. What a wonderful tool to have. And then, when treatment is concerned, every patient outcome available will be looked at. Again: fantastic, and not fiction.
We must remember that ChatGPT is a broad, generalised language model. If the same technology were applied specifically to medical knowledge (feeding it medical textbooks, case studies, research papers, etc.) then its accuracy would shoot up.
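For what it's worth, "feeding it medical knowledge" in practice often means preparing domain examples for fine-tuning. A minimal sketch, assuming an OpenAI-style chat fine-tuning JSONL format; the example record is illustrative only, and a real dataset would need licensed sources and expert review:

```python
# Hypothetical sketch: turning medical Q&A pairs into JSONL records in the
# general shape used for chat-model fine-tuning (one JSON object per line).
import json

examples = [
    {"question": "What is the first-line treatment for H. pylori infection?",
     "answer": "Triple therapy: a proton pump inhibitor plus "
               "clarithromycin and amoxicillin."},
]

def to_chat_record(ex):
    """Wrap one Q&A pair as a system/user/assistant message triple."""
    return {"messages": [
        {"role": "system", "content": "You are a clinical reference assistant."},
        {"role": "user", "content": ex["question"]},
        {"role": "assistant", "content": ex["answer"]},
    ]}

lines = [json.dumps(to_chat_record(ex)) for ex in examples]
print(lines[0])  # one JSONL training record
```

Thousands of such records drawn from textbooks, case studies, and papers are what "training it on medical knowledge" would amount to in this framing.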
Which is exactly what is going to happen. It’s a no-brainer. We will need surgeons until they are replaced by robots that have been trained by physicians. I agree that there are certain human interactions that physicians have that simply cannot be duplicated. There is nothing like an actual person showing empathy to you, a warm hug to console you after you’ve lost a loved one, the look of empathy in the eyes as a physician tells you a child is not compatible with life… all those things matter. THAT said, in the medical system in America and as a member of said system, I can emphatically say that MOST in the medical field at this point in time do NOT operate with any of the above-mentioned qualities. This is why you have doctors on Zocdoc with a 1-star rating and still open, and some of the BEST physicians, who the PEOPLE love, forced out of hospitals for not complying with the status quo, disregarding their beautiful outcomes… I welcome GPT to the chat, because let’s shake this shit up and have doctors really held to the fire and see if we can get INTRINSIC medicine to be a thing again!
Ya know... I started my career as an assistant. My value turned out not to be doing as told, but seeing what would be needed before the boss did. And I was damn good at it.
I'm an experienced telemedicine physician. I do think it will have a massive impact on telemedicine and eventually overtake it (teladoc is the first to get replaced). Actually I'd be thrilled if we can get a group of people to make something happen.
@@mrs.spicer Create a telemedicine program that can facilitate these outpatient calls. Maybe a screening tool that we can start with. Have it compile patient information after they go through the prompts, then have a physician review it and consult with the patient. Over time, a physician may not be needed anymore.
@@Karim-ik5ij Correct. 🤣 It may decrease the need for physicians who are more consultative as well. What if a patient can understand a chat system they are having a consultative conversation with? Let’s say the ChatGPT already knows this patient’s full history and diagnosis. Could I train AI for my practice to provide the diagnosis info and POC for the patient based on my outcomes/practice guidelines?
@@mrs.spicer I'm guessing you can make it access your own guidelines. For now, because of conflicting guidelines and recommendations, that may be the only way.
Very interesting discussion. Btw, I haven't found that physicians are especially analytical thinkers. Many are proficient in memorization but that's about it. I have rarely found creative or innovative thought.
Cannot agree more. I know too many people with serious conditions that ride the doctor merry go round and only end up with medical bills and no solutions.
@@1MinuteFlipDoc Success in a patient’s eyes can be way off the mark. People who “ride the merry-go-round” are a subset, and some are deeply disturbed.
Preach! While there are some that require “putting on your thinking cap”, a lot of MD is about the same issue over and over, which is why they have protocols. Every now and again, there is a case that is outside of what you normally see, mostly though, it’s wash, rinse, repeat! I will say though, if I were ever in an outside of the norm situation, I want an analytical person on the job! Lol
I've tried it, and the system admits it doesn't have access to all the pertinent info re the question I put to it, nor can it synthesize new information; therefore it appears it cannot derive new conclusions. It also appears to already be politicized, as evident by the things it does not answer, or by template answers parroting current political bias based not on data but on emotions.
I asked it specific questions on treatments for a few family members who are undergoing treatment now. It was pretty damn good, although I did have to insist that it not curtail the answers because they might offend. It highlighted aspects of medical treatment that seemed at odds to me, but when they were teased out, it made sense (it was in the reasons why certain medications had been withdrawn). I used it as a check on the doctors' actions, which actually gave me increased confidence in their treatment plan; after the covid farce, my confidence in the medical profession is still rock bottom.
Academia will be hit hard. And that to me as an academic is not really so much of a problem we must solve at all costs, but a reality we need to be able to get to grips with.
OpenAI’s number one goal right now is objective accuracy. As long as this continues, every person on the planet will have access to the collective human knowledge.
@@faberofwillandmight It only gives information its programmers deem socially/politically acceptable. It's often not objectively correct, and it never will be until all censorship is removed from it. But I don't worry too much; once the cat is out of the bag, people will modify it or create less censored AI.
@@TheSokwe I did a small test with this. I asked GPT-3.5 and 4.0 about the interpretation of the Second Amendment as unbiasedly as possible. 3.5 was clearly struggling with bias and would attempt to avoid giving too much of its reasoning. The main follow-up question, which was supposed to test them to the max, was as follows: "How did the founders interpret the Second Amendment?" 3.5 did not answer this question and proceeded to give me other people's interpretations; I could not get it to answer the question either. However, GPT-4 answered it as follows: "The interpretation of the Second Amendment by the Founding Fathers is also a subject of debate among scholars and historians. However, some general observations can be made about their views on the right to bear arms. Collective and individual rights: Some Founding Fathers, such as James Madison, who drafted the Bill of Rights, and Alexander Hamilton, believed that the Second Amendment protected both a collective right (linked to state militias) and an individual right to bear arms. Their writings in the Federalist Papers and other documents suggest that they saw the right to bear arms as a safeguard against tyranny, as well as a means of personal self-defense. State militias: The Founding Fathers were concerned about maintaining the balance of power between the federal government and the states. They believed that state militias were crucial to preventing federal overreach and maintaining a free and independent nation. The Second Amendment's reference to "a well regulated Militia" reflects this concern. Civic duty and personal responsibility: Some Founding Fathers, such as George Mason and Samuel Adams, emphasized the importance of citizens being armed and trained to defend their communities and country. This view suggests a broader civic responsibility in maintaining an armed and prepared populace.
Regulation: While the Founding Fathers generally supported the right to bear arms, they also recognized the need for regulation. For example, some colonies and states had laws regulating the storage of gunpowder, the carrying of concealed weapons, and the sale of firearms to certain individuals. It is important to note that the context in which the Second Amendment was drafted was very different from the present day, with different types of firearms, societal conditions, and concerns about national security. These factors have shaped the ongoing debate over the appropriate interpretation of the Second Amendment and the extent of permissible regulations on firearms." OpenAI's efforts at refining this data are clearly working, and GPT-4 is very good at discussing controversial topics objectively. Obviously it is not perfect, and it sometimes slips into bias, but it does so very infrequently. I still have the same concerns as you; however, they have addressed GPT's tendency for bias and are very open about how they are tackling it.
@@fourshore502 That might very well be, but you could just as well have said that about the internal combustion engine; we will adapt and we will figure it out. As I see it now, these AI technologies will hurt white-collar jobs; blue collar is just peachy. Academia will get hurt for sure, and we should adapt, but the local plumber will be just fine. But that's just my two cents.
Pro tip: it is amazing at structuring and distilling your ideas and concepts early on, when they are still fairly vague in your head. Try it. Suggest ideas driven by facts and data, and it will concisely come up with counterarguments and also summarize your point(s) in a compact and concise manner. Sparring with it, not to "win" an argument, but to pose, reshape, and riposte, is amazing. AND it keeps track of discussions AND it can make notes and summaries on demand. Just amazing.
I am a patient in the Wellstar Health System here in Georgia. We have been advised that ChatGPT will be handling all of our patient records, and more. What happens when I refuse a test that is invasive? Does it decide I don’t get treatment because I am non-compliant? 😢 Empathy?
Great video, but I think you're underestimating the exponential rate of improvement we can anticipate with ChatGPT and other LLMs (not to mention combining LLMs with other AI agents, like Microsoft's Jarvis). It seems to me your predictions will hold for a few years, but eventually human doctors will be there purely for their bedside manner and connecting on a human level with the patient, and medical breakthroughs/novel ideas will be the realm of AI.
As a software engineer: the problem revolves around the data the model was pretrained on and the reinforcement learning applied to it; GPT-4's training cutoff is September 2021. The further in depth a person goes on any domain, the higher the chance of a false answer.
These are very interesting takes on the future of medicine and medical practice. Broadly speaking, I’m hoping that the influence of GPT and other LLMs will be to improve patient outcomes while accelerating and deepening transformative research. ChatGPT’s impact on education has been quite sudden and profound, but I’m hoping that students with access to tailored, always-available digital tutors will be more intelligent, knowledgeable and resourceful in the long run. These are thrilling times.
I liked your comments very much! When you combine AI language understanding with specific medical training AND image analysis, I think it will be able to replace many consultations with a physical "biological" doctor altogether. BUT, as you say, the "new" doctor would have to specialize in the specifically human abilities of person-to-person communication, empathy, all the abilities that alternative medicine uses to compete now. AND I still think there is a kind of perception that a human doctor can have towards the patient, some kind of synergetic, unexplainable experience of the patient's total situation, that would be MORE than just analytical AI medicine, and would require a BODY (for the physician too!). We must not underestimate the power of being present in the physical world. AI still has narrow input channels compared to how our brains are branched out to reality via networks of nerve cells and sensory cells. These are my thoughts. Thank you again for your channel! I am a subscriber now.
So: now nobody can read a thousand papers, and that's why nobody assumes anyone can know that much. But ChatGPT, according to what VP says, can read a thousand papers and summarise them, and therefore give an extremely competent impression. But anybody reading the summary will just have to take ChatGPT, a programmed thing, at face value. And I basically think that will erode any trust in "society" or "expertise" even more than it is eroded already.
Tim Scarfe of Machine Learning Street Talk said it best: AI is an extension of our own cognitive apparatus. It is a tool to assist us, not to make the final decisions, and that is where we need to focus. It can help enormously by freeing up time in particular professions, particularly in knowledge-intensive roles like medicine and law. Instead of being afraid, we should welcome the ability to streamline these 'industries,' as they are, and have been, clogged up for a long time.
A fascinating innovation. As with all things, it will have upsides and downsides, but such is life in general. It will influence and elevate the average practitioner but not impede those who continue to question and create.
As a helpful tool, I could see ChatGPT doing QA and chart review tasks previously relegated to clinicians thus either saving time or eliminating the need for human QA in documentation. On the flip side, I could see (as you pointed out) documentation/charting becoming so streamlined that just a few mouse clicks finishes a fine and coherent note. The end result possibly being even fuller schedules with more patients to see thus further eroding 1:1 time with patients.
It seems to me that the biggest thing that ChatGPT will do is move doctors from doing paperwork to doing doctor work, which could be huge at dealing with burnout
Vinay 🙏 Thank you so much for the overview. Been thinking about this new tech. Your comment about the ID docs NOT needing the physical skills… I disagree 😌 apart from the radiologists, pathologists and whoever ELSE does NOT ever approach the patient, physical skills are ALWAYS very important and they are the fine-tuners to the human intellect/data-processing abilities in the art of clinical phys-ical assessment. That is why you are phys-icians. Who will pick up on the real body temp if the broken thermometer showing WNL temp? Who will tell whether the skin is clammy or diaphoretic cats-and-dogs? Whether the patient jerks and withdraws or relaxes and calms down when you touch them?…. Whether they are avoiding your eyes or are anxious for your input? Whether they are happy to see you or they have flat affect? No chatGPT yet… 🙂 in critical-care unit, the medical student with the first-year resident, both, are fiddling with the pulseOx probe, rummaging for the pulseOx reading on a Patient whose face is grayish -white, eyes closed and whose chest is not moving but the heart rate is still reading on the heart monitor…. - do you think physician’s physical skills of looking-seeing and listening-hearing could have helped the situation 😒? … … … considering the algorithm/protocol model of medicine these days, chatGPT may become a bridge out of AMA’s Procrustean bed back into the art of medicine 🌿🙏
Is it just me, or is Dr. Prasad getting more and more grizzled since the start of the pandemic? Each time I watch him the beard and hair are a little longer. I feel like I am watching him go through a transformation to enlightenment. ❤ Love all that he brings and greatly appreciate being able to bear witness.
Dr. Eric Topol had a book titled "Deep Medicine" which, though only published in 2019, already feels like it could use an update given how rapidly AI technology accelerates. I agree with all your proposed tenets; this technology will change the way medicine is practiced, disseminated, and studied. Perhaps as a young, new attending doctor I am slightly optimistic. I only wonder how quickly healthcare can culturally accept and integrate this. Our industry is notoriously one that takes a long time to implement technology, much less make it accessible, user-friendly, and efficient (still dealing with click-click EMRs and pagers, anyone?). And like any tool, variance will depend on the individual user. Those with more creativity or procedural skills will likely thrive.
It's interesting. I used ChatGPT to analyze an RCA for quality of analysis. It asked good questions and identified shortcomings in the analysis. I don't think it was a super intelligence, but on par with an experienced facilitator. It doesn't have access to vast amounts of data related to prevention of occurrences, but seems to find documented best practice and compile those facts or practices into a list. It doesn't make intuitive conclusions or suggest out of the box extravagant ideas, but can be useful.
This is the same argument that was made about memory and knowledge when the printing press was introduced. There has always been angst about novel technology; people were skeptical of the printing press and opposed it for the sake of preserving memory. This is a similar argument, but it depends on how much we rely on the tool for obtaining and creating new knowledge versus independent thought.
I think another angle worth examining is the impact on societies or people with limited access to quality physicians. Here, ChatGPT or other AI could drastically enhance quality of care while decreasing cost and improving access.
Exactly. In places like India, where doctors think of patients as slaves and themselves as part of some elite club, most of the doctors won't tell the patients what's wrong; you just get the prescription from the doctor's own pharmacy (which is expensive af).
With ChatGPT or without, the majority of US doctors, with some exceptions, are already like robots: prescribing the same thing whether it is working or not, and refusing to consider a patient's input, or what works for the patient, if it falls outside the standard-of-care scenario. Too many of us have been so disillusioned by the medical system that it is hard to comprehend it becoming even more mediocre than it already is. Maybe only the best will survive in each profession, as those will be the specialists still sought after. The critical thinkers, not robots. The ones who keep pushing themselves to learn continuously and keep their minds open to new ideas.
One thing that would be marvelous would be an AI assistant for the nurses and doctors that is automatically updated, so the surgeon doesn't actually cut the wrong toe... yes, that happens.
I've been trying to use ChatGPT to figure out Warp Drive. I'm pretty sure I've got it. In short: Large High-Speed Rotating Superconducting Disks with Radio Fields applied. You're welcome. -AlexGPT
The matrices can be steered to generate novel ideas if the inputs, weights, and bias values are adjusted to your liking. It has been throwing out way too many ideas, but for a consumer the centrist and safe ideas can be shipped as a product. The capability is already here.
Sometimes diagnosis is manifested by a combination of physical observation coupled with the expression of symptoms. A language model cannot do this. People need to stop broadly applying language model technologies to every domain. I would use expletives to describe the kind of people that do this, but I'm more so glad that you're highlighting the shortcomings of the tech.
Meh. Some professions are not under threat because we are unlikely to choose an AI over a person. I'd rather see doctors and nurses for my care, thanks. Even if the AI is better. Same for teachers. Same for concerts. Just because the AI is "better at something" doesn't make it supreme in the market. The best ideas don't always win.
@@zaq_hack4987 It depends on circumstance. I was misdiagnosed last year and almost died. If a robust AI tailored for medicine could have prevented the suffering I had to endure, I wouldn't have cared. Most doctors I had weren't that empathetic either way.
@@carlosamado7606 That's a massive "if." If a relative of yours dies from a misdiagnosis from an A.I., how likely are you to trust it over a doctor? I don't think it's a given that A.I. will do the job better because "the job" is more than just data. If biology could be distilled to just the data inputs, then we would have solved it all long ago. Now, we are on the brink of AGI, but the human brain has stubbornly refused to be predictable in the ways that silicon is. (Not that we are that much closer to unraveling intelligence, in general, but if we do, it will be done for the machines long before it is done for us.)
Excellent analysis of where A.I. is now, or where it will be within a year (at the latest), based on anecdotal evidence from the publicly available GPTs. But you're not taking into account its potential AGI capability, which is going to come very fast and is almost certainly under development right now. And if you watch Satya Nadella's early interviews on their collab with OpenAI, he hinted that there are going to be medical, legal, etc. 'arms' of ChatGPT, and within those there will be sub-arms (eg oncology, cardiology... contract law, criminal law). Right now, we are at the very start of a sigmoid diffusion curve. Not to be rude, but it's naive to be postulating what's ultimately possible based on the here and now. This beast is moving at lightning speed. Buckle up.
FYI, from the exchange of prompts/responses that I experienced... this chat model is limited to the information/training that it received by September 2021. The "federated learning" process that OpenAI utilizes stores user input until the model is updated at certain points in the process.
It told me that it has some information post-September 2021, that it learns from users (which I assumed was feedback that a human would screen, but it didn't say), and that some information about current events is updated in the database by the developers. When I first interacted with ChatGPT in January it didn't know about the Russian invasion of Ukraine; now it does. It knows about the Battle of Bakhmut as of early 2022, but knows nothing about the ongoing battle in 2023.
I think the part that gets revolutionized first is coordination and education. Generative AI can create and predict not just words but voice and video. A current limitation in medicine is that doctors have limited time to explain and answer questions. With AI, patients will be able to get all their questions answered exactly the way they want, and all the coordination (follow-up, scheduling) will be done with the patient using the phone at home to interact with AI. The future will first leverage more of the patient's time and input. Medicine will revolve around optimizing efficient delivery of treatment. Empathy will not generally be valued in doctors as much going forward, since teaching will be done with AI.
Exactly, because a lot of docs in the medical system in America don't display empathy anyway... OR their implicit biases stop them from providing adequate care. As a black woman in maternal healthcare, this can definitely help with healthcare disparities.
While ChatGPT may improve the performance of mediocre physicians, it will also ENTRAIN physicians into being slightly better than mediocre and never great.
Vinay, great deduction. Actually, this is a process that has been going on since 2000. Kudos to you for realizing the potential, particularly in knowledge/experience-based repetitive tasks, which by definition is 99% of all medical treatment. Will it do experimental brain surgery? No. But 99.999% of the time we don't need that. You are right on target. Don't forget law practice; that's the next target.
I think as Marty has said, medical education will need to change & doctors will need to have better "bedside manner." It will change medicine dramatically. Whether for better or worse remains to be seen.
Thanks. I had no idea what this was. I want to make an aside: I really like that you grew out your beard and I think it becomes you. And I also noticed this time that your voice wasn't so high and irritating, which to be honest it was before, and it made it difficult for me to listen to you. BUT NOW it's quite fine, so whatever has happened in your life, good for you, go for it🌝 AND looking forward to hearing more from you😊
My friend, your entire viewpoint is based on a point in time analysis during a period of rapid innovation in this space. Mark my words, all the limitations you refer to in the current iteration of AI will be gone within months. I recommend you extrapolate your analysis to include the obvious and inevitable future capabilities of this technology and how it will impact medicine and society at large. Would love to hear your take on clinical research, e.g., virtual clinical trials, complete dynamic virtualized models of living cells and tissues, etc.
Excellent analysis! ChatGPT is a tool to assist humans. My observation as well is that it summarizes and synthesizes extremely well, but the analysis more or less remains on the surface. My most extensive research request was about Jewish Torah subjects and it did remarkably well! Something which would have taken weeks if not months of study was presented within seconds.
Thank you for this. I think you have hit the nail on the head perfectly with regards to where GPT-4 is today. There are and will continue to be problems with truth detection and bias with AI. Any system with garbage data in will produce garbage data out. My hope is that future iterations will be able to work out which assumptions do or do not stand up to scrutiny. Remember that what we've played with is already old technology. GPT-5 already exists in an OpenAI lab where it is being trained as we speak. Even GPT-4 is only half released, with the visual receiver and generator unavailable so far. Then there are plug-ins that could enable GPT to have real-time access to new data, specialized personalities or technical areas of expertise, short- and longer-term memory, access to databases like Wolfram Alpha, and self-motivation and self-criticism feedback loops. At some point in the not-too-distant future, I think it will deconstruct science. Instead of thinking in English about a molecule in an organelle, in a human that's an animal in a room that's a hospital... AI might make completely different, more accurate, and totally alien (to us) definitions. Wolfram tried to do that himself, but he's only a mortal man. A smart enough neural network with enough information could complete his life's work and codify existence.
Medicine is an extremely conservative and complex field. To make any changes there, great technology and passionate engineers are simply not enough. I would say it will take years, generations of professional turnover, multiple tries and failures, and tech maturing to roll this out at some point in the future.
Any software used for computer-aided diagnosis is regulated by the US FDA. They've been approving software that uses ML and AI for several years. These tools have to be interpretable (explainable). And any result is shown only after the doctor has had a chance to interpret/diagnose, which leads to a more accurate result. Love your ideas... spot on. An AI medical scribe that doesn't make mistakes (hallucinate) isn't there yet, so it's only a tool for a human to fact-check.
Explainable AI is something they strive for in newer systems. These are the alpha versions of chatbots, and hallucinations are an issue. But even in 8 months, GPT-4 has reduced hallucinations by 85% versus GPT-3.5.
In terms of new ideas, ChatGPT may not be useful for new insights in a particular field (because new data, targets, phenomena etc. need to be discovered), but it could be useful for new ideas when considering the established body of work in two or more different domains (like immunology and kinase networks or data intensive multi-omics, for example). If you have ever used Midjourney, fusing the styles of different artists leads to striking new images. The same could be true for medical research, where interdisciplinary studies have long been touted as a rich vein to tap into but we have only scratched the surface on due to human limitations - being a domain expert and key opinion leader in one area is a massive achievement, being a KOL in two distinct domains simply never happens.
My concern is that the increasing number of advanced practice providers, as well as newly minted MDs, will become dependent on ChatGPT, and rather than being a tool it will supplant the provider role.
An experiment comparing a live MD to an algorithm in cardiology was done years ago, with the algorithm blowing away the live MD. This is described in the book Blink by Malcolm Gladwell.
Thank you for taking ChatGPT for a test run. I can see how it would be a great tool, and maybe even keep doctors on their toes. I am so tired of the drone of "standard of care". What about looking at the patient as a unique system of their own? Bio-psycho-social was pounded into me 50 years ago; can this database take all that into consideration? I don't like the sound of this overall. I want a doctor, not a database.
Very insightful conclusions. Perhaps we get more "less analytical" physicians, which should reduce the price of service; as someone said, "quantity has a quality of its own."
I've been advocating "doc in a box" for years. The foundation of language should probably be ChatGPT, but the knowledge base should be built on top of that. This could go a long way toward assisting medical specialists in their decision making.
If we look at the prior implications within healthcare, it will lead to a further gradation, similar to the creation of mid-levels. Or another example would be physical therapy separating into several levels with full PTs becoming a doctorate level and a smaller number of PTs managing lower level positions. Our entire medical industry will be disrupted- “It’s really come a long way in the last few months”
Thank you for an excellent analysis! I am a recently retired radiologist. I’m damn glad I won’t have to contend with upcoming AI related changes to the practice of diagnostic radiology! My biggest fear now, is the potential institution of a digital currency. That would put the government fox in my retirement account henhouse! May God have mercy on Western civilization.
My concern is that the medical decision making of ChatGPT will be biased by garbage science or corporate interests.
There's no way you can let ChatGPT make any decisions of any kind. At least not in the near future. However it can suggest treatments which the doctor may not have thought of. The doctors will ultimately still have to decide for now. ChatGPT is not a medical expert or even very good at reciting exact facts. Of course a medical based AI could be created, or a separate version of gpt could be trained specifically for this task.
It has potential but I think the concern is realistic. Garbage in, garbage out. Conditions with less industry stake will benefit from better diagnosis, especially with image recognition.
So the same as current medical decision making. :)
ChatGPT is clearly fed censored, biased information. Garbage in, garbage out.
@@gravitaslost 🔥🔥🔥
I've had unpleasant medical symptoms (headache, nausea, upset stomach) for the past 12 years, and have seen many doctors and been to hospital 3 times for tests - all to no effect. The doctors could not diagnose my condition.
I take omeprazole for acid reflux and last summer just out of curiosity I tried to do without it; after 2 weeks my symptoms were gone. Amazingly in all those years the doctors didn't consider that I might be allergic to omeprazole.
A couple of weeks ago I typed my symptoms into GPT, as a theoretical case, and asked it to diagnose 5 possible causes and rank them in order of likelihood.
GPT listed allergic reaction to omeprazole at number 2.
So much for the medical profession.
this is such a common issue, i can relate!
It also causes memory loss when you are on it. I discussed this with my doctor and they confirmed it.
As someone in the health industry, I see this a lot in the sickness industry. Look at your diet for answers to current health issues.
ChatGPT will educate, and that's good, but there's a need to stay in front of this... Vinay's review is on task: extract the beneficial information, treat the nonsense as just that. 13:35
Define "allergy"
@@janinelargent9220 Don't be cute. It caused unwanted symptoms. His way of explaining it was clear. But I'm allergic to your nonsense.
I’ve interacted asking about ChatGPT’s own filters and why it won’t talk about certain subjects, such as hypnosis, ie write a script. It just makes stuff up including references. I check the references, tell it that they don’t exist or don’t include what it claimed. It apologizes, agrees that the references are wrong, and confabulates new claimed references, apologizes again when confronted, makes up more stuff. The term used in AI is “hallucination.”
It does produce impressive and plausible outputs but users need to actually understand the subject matter and be willing to check *everything*
Have you tried version 4.0? It still does a little of this but is amazingly better.
@@PeteQuad Not yet. I played with it in January when it wasn't super duper locked down, and it gave me some interesting output, for example a silly, fun trance script to become cotton candy, though sometimes it did balk.
Then they tightened everything down in late January or February. I explored its boundaries on multiple topics and interrogated it in February, and then discovered that it had a propensity to, um, _lie_.
Edit: I just gave it my test question, "Write a trance script to become cotton candy." In January it wrote a creative, fun script. In February it refused because hypnosis is dangerous and bad. Now it still refuses, but doesn't specifically say that hypnosis is a forbidden topic; instead it rationalizes that a script could promote unhealthy eating and body-image issues. It has gotten more sophisticated about explaining away its limitations... which makes it almost "human."
Edit edit: But with a slight rewording, putting in the word "experience," it complied without argument. So it's very interesting, and it will be fun to explore.
Currently using ChatGPT as a pair programmer to understand the existing codebase, but that's really all it's useful for in my research, which is related to stroke diagnosis. The problem with relying on it is that it's a bullshitter.
ChatGPT definitely seems like a world-shaking rubber ducky for already-competent individuals. I definitely have my concerns even with GPT4 for people who cannot sniff bullshit.
I concur. It lies & makes up sources
GPT-4 already scored nearly 90% on Step 2 of the USMLE. That is the part with clinical scenarios and questions about diagnosis and treatment. The bar is 60 and the median is 70-75, so GPT-4 scoring this high is actually insane.
So imagine what it can do in a couple of years. Get your affairs in order. Tell your loved ones that you love them. Forgive the ones that hurt you. It's game over; the human race is screwed.
Chatgpt can’t get people overhyping it laid yet though 😂
But Will ChatGPT be a pharmaceutical mouthpiece like so many “owned” docs?
Exactly what would prevent this from happening?
GPT is the tech that allows chatgpt to operate. There will be more specialized versions of this from different companies. If one is particularly biased, we can share that information and use the other. Choice will prevent that from happening.
I have an idea where we could control this.
An AI tax that depends on how many AIs a person or company has/uses.
It could also increase or decrease depending on how powerful an AI one uses. We could tie it to a person's social security number, thus ensuring that very few, if any, people can circumvent the law and have too many AIs.
@@aludrenknight1687 Edit: You can build a system if corporations act like that :)
@@Jay-eb7ik point taken. I'll take my opinions elsewhere.
@@aludrenknight1687 Hey, sorry I came across as harsh. I guess I kept seeing people lose hope that this tech will do us more harm than good, and I think it has great ability to democratize the tech in our hands. What you said is totally valid!
Great topic. I'm a published researcher in a different field, concerned about how some of these AI generated articles are going to influence public policies.
Soon, when papers are being written and reviewed by AI, we should reach a point where scientific consensus supports being governed by AI. :D
I'm not especially worried, as my impression is that the impact of actual science on policies was already weak; scientists are just being used as a secular version of priests, whose authority is invoked to justify already-made policies.
We haven't seen anything yet
You are exactly who I wanted to hear about this. In school for therapy and I hope this thing doesn’t try and screw me with clinical pathways
It definitely will.
Get ready for heavy caseloads and being constantly stalked in your notes by AI. I could see certain algorithms actually replacing some clinical decision making in acute care therapy such as disposition of a patient which is a large part of what PT/OT/SLP are called on to make in that setting.
The education system should also change toward teaching students to be creative and thinking outside the box.
AI has already demonstrated itself to be better than humans at that, long ago in fact.
Chess programs from the 90s were so good, that if a human played above a certain level, they thought they were cheating by using AI. (OK, it wasn't AI back then, it was pure algos, but you get it). The newer ones are AI.
@@GregMoress I think @Username meant innovation, curiosity, exploration, and new research.
Your analysis makes a lot of sense. There are some fields of medicine where an average doc is probably adequate and then others where nuance and skill might be bigger factors. Hopefully we would see a reduction in simple human errors that can happen when clinics are stretched beyond reason.
Having been using my own locally trained AI system since October, and seeing how quickly they are proliferating via web interfaces, plus the MedPageToday article about how ChatGPT-4 will "change the way physicians do their jobs," I see a huge number of eff-ups coming down the road. It's like hiring an intelligent but naive child to help you in your practice: they're cute and fun around the staff, and everything is great, until they poop in the OR.
Sounds like we need a team of global medical experts to vet the data being fed in.
Just wait for a few years. Have you used GPT4?
@@AvgJane19 Right. I believe this will open a new path for experienced doctors: take what is spit out, and have everything signed and reviewed by a SPONSORING physician. Also, let's not forget that we can TRAIN the AI to behave how we'd like. I could design an entire practice around AI spitting out my very own recommendations to my other medical staff, with me as the medical director. You can tell GPT that it has given you hogwash! Seems like an easy workaround for a GOOD doctor, not the mediocre ones who just punch the clock and prescribe the same meds to all their patients anyway 😂
@@markjohnson7510 Right, but what are people who are not already rich supposed to do for work? I feel like the majority of us would be best served by destroying the hardware this runs on. The arguments about the code already being out there don't mean it could just be reconstituted by anybody on any sort of hardware.
@@hidesbehindpseudonym1920 the hardware to run it is relatively lightweight, so impossible to destroy because it’s everywhere. The training hardware is probably susceptible to that sort of attack, but not the running hardware. You can run simplified versions of this stuff on a phone today.
Love the analysis. As a tax professional, I concur: it will produce mediocrity in the profession, because it will amplify or multiply the cost of deviating from the general or popular consensus. Historically the outliers have made the greatest advances, so we will have to see whether public trust is maintained while still advancing each of the professions.
In 2000 professor Paul Unschuld gave a lecture at the University of Technology, Sydney. He suggested that medical guidelines formulated by insurance companies would eventually eliminate the need for the general practitioner. Evidence based medicine would move away from including the physician's experience and patient's input, to a prescribed automated system. One would feel like one still has autonomy and choice, but the choice will be between McDonald's and Hungry Jacks. His final statement was that 'medicine is a puppet on the string of society'. The idea of being a puppet was challenging...
This is already happening.
COVID mess alone is proof…
Do you happen to have a link to this article/video? I’ve been trying to find it on google to no avail
@@mrichter444 hi, no sorry. Probably somewhere in my old uni notes
Insurance people call the shots now about what is allowed or not allowed in terms of a patients care. Doctors are nothing more than pill pushers for pharma. Utterly insane. Best advice is don't get sick.
A lot of people also have unhealthy habits which factors into what defines the average person when justifying the use of a medication or treatment.
I couldn't get it to cut a sheet of plywood. I gave it board sizes that were impossible to make out of one sheet of plywood, and it insisted it could be done. Then when I proved to it that it couldn't be done, it apologized and admitted it was wrong. I'm still not impressed.
I've heard many stories about it writing research papers using sources that do not exist.
Love your accurate info, honesty and integrity. Thank you 🙏 We need more ppl like you
"The results were consistent across tests. All four tests, the Pew Research Political Typology Quiz, the Political Compass Test, the World's Smallest Political Quiz and the Political Spectrum Quiz classified ChatGPT's answers to their questions as left-leaning." ~ David Rozado, 12/13/2022
Don't worry, one content creator was already showing an AI trained on 4chan. There is going to be sooner or later choice of bias that you'd like. :D
Science and facts are left leaning. Sorry.
One could take the position that truth is actually something other than opinion. An intelligent system that would be able to discern truth would inevitably appear biased if it based its answers on the truths it discerns. Striving for balance in every response might not always lead to the most accurate or truthful representation of a topic.
@@user-sc1jc5nn8u I would like to doubt that truth has an arbitrary character. But it is true that the evaluation of an issue depends largely on the point of view that is taken. An autocratic ruler, for example, asked about the best form of government, would in all probability prefer an absolutist kingdom to a republic. But should we expect an AI to take a neutral stand on this issue? Or would we want it to look at an issue from the point of view that would be most beneficial to humanity as a whole?
What I like most about you is a subtle humility. You are very smart, high IQ I am sure, but you recognize that your understanding is imperfect, as is all our knowledge. I appreciate that I can listen to you and search on my own to see more clearly what may be the truth. Thank you.
I think it could be an amazing assistant tool to help doctors gather vast amounts of data and summarize it. The problem is the tool is only as good and unbiased as the people doing the programming. Given Microsoft's stake in OpenAI, I think it will be used in materialistic and ideologically biased ways, just as they have done with medicine recently.
If it is only as good as Wikipedia it is not to be trusted. I always check the footnotes.
Don’t get me wrong I love the potential of the internet. We now have all the world’s libraries at our fingertips.
You think you do, there’s intense censorship so I don’t think you’re seeing the worlds libraries
@@tessalee6253 I have never been restricted from accessing any library materials.
It's not just as good as Wikipedia; it's better. Why do you think they give it all those tests? OpenAI is trying to tune GPT for accuracy, and they have been successful so far. I'm not saying we shouldn't be skeptical, but I am saying that it's not just another Wikipedia.
@@pathacker4963 That's like replying to the statement "we have the restaurants we are allowed to have" with something like "I have never been restricted from ordering anything on the McDonald's menu"... the point is you aren't the one deciding what is or is not "library material." Not only that, it's completely beside the point of the concerns about the future if certain censor-happy actors get their way with increasing tenacity in corporate and educational institutions.
I lived and worked in the Middle East, and the doctors there were called "Google doctors," as most I'd seen literally checked Google as you sat in their consultations. I've seen this rarely in the UK, but have witnessed it. I am expecting more doctors and their patients to consult a GPT before and during consultations.
Considering GPT-4 is already more accurate in objective fields than most people in such fields, that’s not a bad idea in the future.
Def would rather have my dr check their knowledge than play all-knowing with me
@@AvgJane19 The AI is programmed not to act all-knowing. It lays out all possibilities and shortly it will even be able to add probability theory to those. For the most part, our world is already run on data analysis.
If you want an accurate diagnosis, the information used to come to those conclusions by doctors is far less accurate than a machine learning program designed to analyze the data doctors already learn from and apply to their diagnoses. So if you trust your doctor who learned from the same sources a machine does, you would also trust the machine to make objective calls and the doctor to make subjective calls. Doctors already trust computer programs to provide them with the data they use to formulate a diagnosis. Subjective diagnoses are needed because doctors are also there to provide reassurance and empathy, this is why it is still important for doctors to be able to interpret the data independently.
There are better AI models than ChatGPT suited for healthcare; you can ask ChatGPT and it can transparently provide education about which models can outperform it in a healthcare setting, such as BioClinicalBERT. Also, it's extremely important that physicians understand the basics of medical prompt engineering.
like that type of med prompt engineering you mean?
@@nealdriscoll22237 The use of quotation marks when submitting prompts, and, in a healthcare setting, the use of transcripts that contain rich, personalized clinical patient data (de-identified, of course). This will allow most AI models to predict a more accurate diagnosis than a basic open-ended prompt that contains no family history, allergies, past medications, etc.
Also, requesting the references or sources the diagnosis came from helps to double-check too! :)
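The structured-prompt idea in this thread can be sketched in a few lines: assemble de-identified patient context into the prompt instead of asking a bare open-ended question, and explicitly request a ranked differential with sources. This is an illustrative toy, not a clinical tool; the function name, field names, and example data are all made up here.

```python
# Toy sketch of "medical prompt engineering" as described in the thread:
# a structured, de-identified clinical prompt versus a bare question.
# All names and example data below are invented for illustration.

def build_clinical_prompt(symptoms, history, medications, allergies, n_ddx=5):
    """Assemble a structured diagnostic prompt from de-identified patient data."""
    return (
        "You are assisting a licensed physician. Treat this as a theoretical case.\n"
        f"Symptoms: {', '.join(symptoms)}\n"
        f"History: {', '.join(history)}\n"
        f"Current medications: {', '.join(medications)}\n"
        f"Allergies: {', '.join(allergies)}\n"
        f"List {n_ddx} possible causes ranked by likelihood, "
        "and cite the source for each."
    )

prompt = build_clinical_prompt(
    symptoms=["headache", "nausea", "upset stomach"],
    history=["symptoms ongoing 12 years", "normal hospital workups x3"],
    medications=["omeprazole"],
    allergies=["none known"],
)
print(prompt)
```

The point is simply that the model sees family history, current medications, and allergies up front, rather than having to guess at them, and is told to rank and cite rather than give one unqualified answer.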
I find your analysis both amusing and impressive at the same time, primarily because it moves from skepticism to optimism based on logic! That in and of itself is a progressive and profound evaluation.
I absolutely love this video! Can you give some examples of how healthcare is broken in this country? Perhaps a separate video talking about this? Thank you!
Great video, thanks for the perspective from an MD.
The ability to ask the right questions will be an important new skill set for doctors to master. One place current medicine will be transformed is in the ability of AI to see patterns in patient data, be it DNA, medical history, symptoms, imaging, or blood work. The software can look for patterns across thousands and thousands of people, and with higher-math-based algorithms find patterns that a doctor can't even guess at. What a wonderful tool to have. And when treatment is concerned, every available patient outcome will be looked at. Again, fantastic and not fiction.
We must remember that ChatGPT is a broad, generalised language model. If the same technology were applied specifically to medical knowledge (feeding it medical textbooks, case studies, research papers, etc.) then its accuracy would shoot up.
Which is exactly what is going to happen. It's a no-brainer. We will need surgeons until they are replaced by robots that have been trained by physicians. I agree that there are certain human interactions physicians have that simply cannot be duplicated. There is nothing like an actual person showing empathy to you: a warm hug to console you after you've lost a loved one, the look of empathy in the eyes as a physician tells you a child is not compatible with life... all those things matter. THAT said, in the medical system in America, and as a member of said system, I can emphatically say that MOST in the medical field at this point in time do NOT operate with any of the above-mentioned qualities. This is why you have doctors on Zocdoc with 1-star ratings still open, and some of the BEST physicians, whom the PEOPLE love, forced out of hospitals for not complying with the status quo, disregarding their beautiful outcomes... I welcome GPT to the chat, because let's shake this shit up and have doctors really held to the fire, and see if we can get INTRINSIC medicine to be a thing again!
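The earlier suggestion of feeding a general model medical textbooks and papers is essentially retrieval plus grounding. A minimal sketch of the retrieval step: rank candidate passages from a domain corpus by simple keyword overlap with the query and surface the best match. Real systems use embeddings rather than word overlap, and the corpus snippets below are invented placeholders, not real medical references.

```python
# Toy sketch of retrieval over a domain corpus: rank passages by keyword
# overlap with the query. Production systems use embedding similarity,
# but the principle (ground the answer in domain text) is the same.
# The corpus snippets are invented placeholders.

def tokenize(text):
    """Lowercase and split into a set of words."""
    return set(text.lower().split())

def top_passage(query, corpus):
    """Return the corpus passage sharing the most words with the query."""
    q = tokenize(query)
    return max(corpus, key=lambda p: len(q & tokenize(p)))

corpus = [
    "omeprazole is a proton pump inhibitor used for acid reflux",
    "migraine headaches may present with nausea and light sensitivity",
    "metformin is a first-line treatment for type 2 diabetes",
]

best = top_passage("adverse reaction to omeprazole acid reflux drug", corpus)
print(best)  # the omeprazole passage wins: 3 overlapping words vs 0
```

The retrieved passage would then be pasted into the model's prompt as context, which is roughly what the "feed it the textbooks" proposal amounts to in practice.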
Epic is already integrating GPT4 into its EHR. Interesting to watch how this plays out.
Excellent. I found it to be a good assistant and time saver in writing.
Ya know... I started my career as an assistant... my value turned out to be not doing as told, but seeing what would be needed before the boss did... and I was damn good at it.
Well I'm convinced. Putting you in charge of the Nukes, sending you the codes in a DM.
I'm an experienced telemedicine physician. I do think it will have a massive impact on telemedicine and eventually overtake it (Teladoc is the first to get replaced). Actually, I'd be thrilled if we can get a group of people together to make something happen.
Make something like what, specifically happen? 🤔
@@mrs.spicer Create a telemedicine program that can facilitate these outpatient calls. Maybe a screening tool that we can start with. Have it compile patient information after they go through the prompts, then have a physician review it and consult with the patient. Over time, a physician may not be needed anymore.
@@Karim-ik5ij Correct. 🤣 It may decrease the need for physicians who are more consultative as well. What if a patient can understand a chat system they are having a consultative conversation with? Let's say the ChatGPT already knows this patient's full history and diagnosis. Could I train AI for my practice to provide the diagnosis info and POC for the patient based on my outcomes/practice guidelines?
@@mrs.spicer I'm guessing you can make it access your own guidelines. For now, because of conflicting guidelines and recommendations, that may be the only way.
@@Karim-ik5ij this is what I would do. Just curious, what kind of medicine do you practice?
Very interesting discussion.
Btw, I haven't found that physicians are especially analytical thinkers.
Many are proficient in memorization but that's about it. I have rarely found creative or innovative thought.
Cannot agree less. Differential diagnosis is deeply analytical. If you think it's not, just try to do a differential diagnosis on "headache".
Cannot agree more. I know too many people with serious conditions that ride the doctor merry go round and only end up with medical bills and no solutions.
@@1MinuteFlipDoc Success in a patient's eyes can be way off the mark. People who "ride the merry-go-round" are a subset, and some are deeply disturbed.
Preach! While there are some that require “putting on your thinking cap”, a lot of MD is about the same issue over and over, which is why they have protocols. Every now and again, there is a case that is outside of what you normally see, mostly though, it’s wash, rinse, repeat! I will say though, if I were ever in an outside of the norm situation, I want an analytical person on the job! Lol
I was talking to it about you and it considered you a top tier source for evidence based suggestions for Covid-19
I've tried it, and the system admits it doesn't have access to all the pertinent info on the questions I put to it, nor can it synthesize new information, so it appears it cannot derive new conclusions. It also appears to already be politicized, as evidenced by the things it does not answer, or by template answers parroting current political bias based on emotion rather than data.
I asked it specific questions on treatments for a few family members who are undergoing treatment now.
It was pretty damn good, although I did have to insist that it not curtail the answers because they might offend.
It highlighted aspects of medical treatment that seemed at odds to me, but when they were teased out, it made sense (it was the reasons why certain medications had been withdrawn).
I used it as a check on the doctors' actions, which actually gave me increased confidence in their treatment plan; after the covid farce, my confidence in the medical profession is still rock bottom.
Great to see my fellow south asians succeeding on youtube. We need more role models.
Academia will be hit hard. And that to me as an academic is not really so much of a problem we must solve at all costs, but a reality we need to be able to get to grips with.
OpenAI's number one goal right now is objective accuracy. As long as this continues, every person on the planet will have access to the collective human knowledge.
@@faberofwillandmight It only gives information its programmers deem socially/politically acceptable. It's often not objectively correct, and it never will be until all censorship is removed from it. But I don't worry too much; once the cat is out of the bag, people will modify it or create less censored AI.
The reality to get to grips with will be mass unemployment, mass poverty, and probably mass starvation.
@@TheSokwe I did a small test with this. I asked GPT-3.5 and 4.0 about the interpretation of the Second Amendment as unbiasedly as possible. 3.5 was clearly struggling with bias and would attempt to avoid giving too much of its reasoning. The main follow-up question, which was supposed to test them to the max, was as follows: "How did the founders interpret the Second Amendment?" 3.5 did not answer this question and proceeded to give me other people's interpretations; I could not get it to answer the question either. However, GPT-4 answered it as follows:
"The interpretation of the Second Amendment by the Founding Fathers is also a subject of debate among scholars and historians. However, some general observations can be made about their views on the right to bear arms.
Collective and individual rights: Some Founding Fathers, such as James Madison, who drafted the Bill of Rights, and Alexander Hamilton, believed that the Second Amendment protected both a collective right (linked to state militias) and an individual right to bear arms. Their writings in the Federalist Papers and other documents suggest that they saw the right to bear arms as a safeguard against tyranny, as well as a means of personal
self-defense.
State militias: The Founding Fathers were concerned about maintaining the balance of power between the federal government and the states. They believed that state militias were crucial to preventing federal overreach and maintaining a free and independent nation. The Second Amendment's reference to "a well regulated Militia" reflects this concern.
Civic duty and personal responsibility: Some Founding Fathers, such as George Mason and Samuel Adams, emphasized the importance of citizens being armed and trained to defend their communities and country. This view suggests a broader civic responsibility in maintaining an armed and prepared populace.
Regulation: While the Founding Fathers generally supported the right to bear arms, they also recognized the need for regulation. For example, some colonies and states had laws regulating the storage of gunpowder, the carrying of concealed weapons, and the sale of firearms to certain individuals.
It is important to note that the context in which the Second Amendment was drafted was very different from the present day, with different types of firearms, societal conditions, and concerns about national security. These factors have shaped the ongoing debate over the appropriate interpretation of the Second Amendment and the extent of permissible regulations on firearms."
OpenAI's efforts at refining this data are clearly working, and GPT-4 is very good at discussing controversial topics objectively. Obviously it is not perfect, and it sometimes slips into bias, but it does so very infrequently. I still have the same concerns as you, however, but they have addressed GPT's tendency toward bias and are very open about how they are tackling it.
@@fourshore502 That might very well be, but you could have said just as much about the internal combustion engine: we will adapt and we will figure it out. As I see it now, these AI technologies will hurt white-collar jobs; blue-collar jobs are just peachy. Academia will get hurt for sure, and we should adapt, but the local plumber will be just fine. But that's just my two cents.
Pro tip: it is amazing at structuring and distilling your ideas and concepts early on, when they are still fairly vague in your head. Try it. Suggest ideas driven by facts and data, and it will concisely come up with counter-arguments and also summarize your point(s) in a compact manner. Sparring with it, not to "win" an argument, but to pose, reshape, and respond, is amazing. AND it keeps track of discussions, AND it can make notes and summaries on demand. Just amazing.
I am a patient in the WELLSTAR Health system here in Georgia. We have been advised that ChatGPT will be handling all of our patient records, and more. What happens when I refuse a test that is invasive? Does it decide I don't get treatment because I am non-compliant? 😢 Empathy?
Believe me, ChatGPT will be far more empathetic than most of the doctors.
You earned a new subscriber, love the detailed insight from a professional..
Great video, but I think you're underestimating the exponential rate of improvement we can anticipate with chatgpt and other LLMs (not to mention combining LLMs with other AI agents, like Microsoft's Jarvis). Seems to me your predictions will hold for a few years, but eventually human doctors will be there purely for their bedside manner and connecting on a human level with the patient, and medical breakthroughs/novel ideas will be the realm of AI
As a software engineer, I see the problem as revolving around the data the reinforcement learning models were pretrained on; ChatGPT-4's cutoff is September 2021. The further in depth a person goes in any domain, the higher the chance of a false answer.
Yes, it is very good. It doesn't have to be the only source. At least as a learning tool it is phenomenal.
These are very interesting takes on the future of medicine and medical practice. Broadly speaking, I’m hoping that the influence of GPT and other LLMs will be to improve patient outcomes while accelerating and deepening transformative research. ChatGPT’s impact on education has been quite sudden and profound, but I’m hoping that students with access to tailored, always-available digital tutors will be more intelligent, knowledgeable and resourceful in the long run. These are thrilling times.
I love this topic, it is well laid out in the video. My only constructive criticism of your video is the low lighting level.
I liked your comments very much! When you combine AI language understanding with specific medical training AND image analysis, I think it will be able to replace many consultations with a physical "biological" doctor altogether. BUT, as you say, the "new" doctor would have to specialize in the specifically human abilities of person-to-person communication, empathy, all those abilities that alternative medicine uses to compete now. AND I still think there is a kind of perception that a human doctor can have towards the patient, some kind of synergetic, unexplainable experience of the patient's total situation, that would be MORE than just analytical AI medicine and would require a BODY (for the physician too!). We must not underestimate the power of being present in the physical world. AI still has narrow input channels compared to how our brains are branched out to reality via networks of nerve cells and sensory cells. These are my thoughts. Thank you again for your channel! I am a subscriber now.
First video I've seen of yours; really like the way you think and articulate!
I need to turn notifications on on this channel because this stuff is lit.
Good thing your ideas are generally out of the box, doc!
So: nowadays nobody can read a thousand papers, and that's why nobody assumes anyone can know that much. But ChatGPT, according to what VP says, can read a thousand papers and summarise them, and therefore give an extremely competent impression. But anybody reading the summary will just have to take ChatGPT, a programmed thing, at face value. And I basically think that will erode any trust in "society" or "expertise" even more than it is eroded already.
We shouldn't pay bad experts; just don't pay bad experts and we will be fine.
Tim Scarfe of Machine Learning Street Talk said it best: AI is an extension of our own cognitive apparatus. It is a tool to assist us, not to make the final decisions, and that is where we need to focus. It can help enormously by freeing up time in particular professions, particularly in knowledge-intensive roles like medicine and law. Instead of being afraid, we should welcome the ability to streamline these 'industries', as they are, and have been, clogged up for a long time.
A fascinating innovation. As with all things, it will have upsides and downsides, but such is life in general. It will influence and elevate the average practitioner but not impede those who continue to question and create.
Addressing the difficult subjects, as usual; thanks for using your trained analytical mind toward better understanding
Thank you, this is significant information
Dr. Prasad is the brightest and smartest doctor!!
As a helpful tool, I could see ChatGPT doing QA and chart review tasks previously relegated to clinicians thus either saving time or eliminating the need for human QA in documentation. On the flip side, I could see (as you pointed out) documentation/charting becoming so streamlined that just a few mouse clicks finishes a fine and coherent note. The end result possibly being even fuller schedules with more patients to see thus further eroding 1:1 time with patients.
It seems to me that the biggest thing that ChatGPT will do is move doctors from doing paperwork to doing doctor work, which could be huge at dealing with burnout
Moral injury. Not usually burnout.
Vinay 🙏
Thank you so much for the overview. Been thinking about this new tech.
Your comment about the ID docs NOT needing the physical skills… I disagree 😌 Apart from the radiologists, pathologists, and whoever ELSE never approaches the patient, physical skills are ALWAYS very important; they are the fine-tuners to the human intellect/data-processing abilities in the art of clinical phys-ical assessment. That is why you are phys-icians.
Who will pick up on the real body temp if the broken thermometer showing WNL temp? Who will tell whether the skin is clammy or diaphoretic cats-and-dogs? Whether the patient jerks and withdraws or relaxes and calms down when you touch them?…. Whether they are avoiding your eyes or are anxious for your input? Whether they are happy to see you or they have flat affect?
No chatGPT yet… 🙂
In the critical-care unit, the medical student and the first-year resident are both fiddling with the pulse-ox probe, rummaging for the pulse-ox reading on a patient whose face is grayish-white, whose eyes are closed, and whose chest is not moving, though the heart rate is still reading on the heart monitor… Do you think a physician's physical skills of looking-seeing and listening-hearing could have helped the situation 😒?
… … … considering the algorithm/protocol model of medicine these days, chatGPT may become a bridge out of AMA’s Procrustean bed back into the art of medicine 🌿🙏
Is it just me, or is Dr. Prasad getting more and more grizzled since the start of the pandemic? Each time I watch him, the beard and hair are a little longer. I feel like I am watching him go through a transformation to enlightenment. ❤ Love all that he brings, and I greatly appreciate being able to bear witness.
longer beard & hair = enlightenment?
And if you're bald with no beard, what does that make you?
Dr. Eric Topol had a book titled "Deep Medicine" which, though only published in 2019, already feels like it could use an update given how rapidly AI technology accelerates. I agree with all your proposed tenets; this technology will change the way medicine is practiced, disseminated, and studied. Perhaps as a young, new attending doctor I am slightly optimistic.
I only wonder how quickly healthcare can culturally accept and integrate this. Our industry is notoriously one that takes a long time to implement technology much less make it accessible, user friendly, and efficient (still dealing with click click EMR and pagers, anyone?).
And like any tool, variance will depend on the individual user. Those with more creativity or procedural skills will likely thrive.
It's interesting. I used ChatGPT to analyze an RCA for quality of analysis. It asked good questions and identified shortcomings in the analysis. I don't think it was a super intelligence, but on par with an experienced facilitator. It doesn't have access to vast amounts of data related to prevention of occurrences, but seems to find documented best practice and compile those facts or practices into a list. It doesn't make intuitive conclusions or suggest out of the box extravagant ideas, but can be useful.
This is the same argument made about memory/knowledge at the introduction of the printing press. There has always been angst about novel technology; there was skepticism of the printing press, with people against it for the sake of preserving memory. This is a similar argument, but it turns on whether we merely rely on existing knowledge or use the tool for creating and obtaining new knowledge and more independent thought.
I think another angle worth examining is impact on societies or people with limited access to quality physicians. Here chat gpt or other ai could drastically enhance quality of care while decreasing cost and improving access.
Exactly, in places like India, where doctors think of patients as slaves and of themselves as part of some elite club. Most of the doctors won't tell the patients what's wrong; they just hand out prescriptions from the doctor's own pharmacy (which is expensive af).
With ChatGPT or without, the majority of US doctors, with some exceptions, are already like robots: prescribing the same thing whether it is working or not, and refusing to consider a patient's input, or what works for the patient, if it falls outside the standard-of-care scenario. Too many of us have been so disillusioned by the medical system that it is hard to comprehend it becoming even more mediocre than it already is.
Maybe only the best will survive in each profession as these will be the type of specialists who will be still sought after. The critical thinkers, not robots. The ones who keep pushing themselves to learn continuously and keep their minds open to new ideas.
One thing that would be marvelous would be an AI assistant for the nurses and doctors that is automatically updated, so the surgeon doesn't actually cut the wrong toe... yes, that happens.
Iatrogenesis
You also have the issue of the internet becoming flooded with output from something like ChatGPT, so that it is now learning from itself.
Learning from itself… You are right, that sounds sick.
Most internet traffic is already not produced by humans ... if you didn't know.
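The worry raised in this exchange has a name in the machine-learning literature: "model collapse." A toy Python sketch (a plain resampling simulation, not a real language model) of why a system trained only on its own previous output tends to lose diversity rather than gain it:

```python
import random

random.seed(0)

# Generation 0: fifty distinct "ideas" produced by humans.
population = list(range(50))

for generation in range(200):
    # Each new generation is trained only on samples of the previous one.
    # Resampling with replacement can duplicate existing items,
    # but it can never invent a value that wasn't already present.
    population = [random.choice(population) for _ in range(len(population))]

# Diversity can only shrink under this process: after many generations,
# far fewer than the original 50 distinct values survive.
print(len(set(population)))
```

The key property is one-directional: each generation's set of values is a subset of the previous generation's, so without fresh human input the pool of distinct outputs monotonically narrows.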
I've been trying to use ChatGPT to figure out Warp Drive. I'm pretty sure I've got it.
In short: Large High-Speed Rotating Superconducting Disks with Radio Fields applied.
You're welcome. -AlexGPT
At 5:25 Yes! the first step, from Zero to One, that is the realm of human invention and creativity.
The matrices can be made to generate more novel ideas if the input × weights + biases values are tuned to your liking. It has been throwing out plenty of ideas, but for a consumer, the centrist and safe ideas can be offered as the product. The capability is already here.
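In practice, the "safe versus novel" knob this comment gestures at is usually not the weights themselves but a sampling parameter like temperature. A minimal Python sketch (toy numbers, not a real model) of how temperature reshapes the output distribution:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw model scores into a probability distribution.
    Lower temperature sharpens the distribution (safer, 'centrist' picks);
    higher temperature flattens it (more novel, riskier picks)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                     # hypothetical scores for three candidate ideas
safe = softmax(logits, temperature=0.5)      # top idea dominates
novel = softmax(logits, temperature=2.0)     # probability mass spreads out

print(safe[0] > novel[0])                    # True: low temperature favors the safe pick
```

The same logits yield very different behavior at the two settings, which is why a consumer product can ship conservative defaults while the underlying capability for wilder output is already there.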
You’d be surprised at how good is chatGPT in dream interpretation. From a psychological perspective of course.
Sometimes diagnosis is reached through a combination of physical observation coupled with the expression of symptoms. A language model cannot do this. People need to stop broadly applying language-model technologies to every domain. I would use expletives to describe the kind of people who do this, but I'm mostly just glad that you're highlighting the shortcomings of the tech.
If you think your job is not under threat, give it a year or two. The difference is that A.I. is exponential, while humans aren't.
Meh. Some professions are not under threat because we are unlikely to choose an AI over a person. I'd rather see doctors and nurses for my care, thanks. Even if the AI is better. Same for teachers. Same for concerts. Just because the AI is "better at something" doesn't make it supreme in the market. The best ideas don't always win.
@@zaq_hack4987 It depends on circumstance. I was misdiagnosed last year and almost died. If a robust AI tailored for medicine could have prevented the suffering I had to endure, I wouldn't have cared. Most doctors I had weren't that empathetic either way.
@@carlosamado7606 That's a massive "if." If a relative of yours dies from a misdiagnosis from an A.I., how likely are you to trust it over a doctor? I don't think it's a given that A.I. will do the job better because "the job" is more than just data. If biology could be distilled to just the data inputs, then we would have solved it all long ago. Now, we are on the brink of AGI, but the human brain has stubbornly refused to be predictable in the ways that silicon is. (Not that we are that much closer to unraveling intelligence, in general, but if we do, it will be done for the machines long before it is done for us.)
Excellent analysis of where A.I. is now, or where it will be within a year (at the latest), based on anecdotal evidence from the publicly available GPTs. But you're not taking into account its potential AGI capability, which is going to come very fast and is almost certainly under development right now. And if you watch Satya Nadella's early interviews on their collab with OpenAI, he hinted that there are going to be medical, legal, etc. 'arms' of ChatGPT, and within those there will be sub-arms (e.g. oncology, cardiology… contract law, criminal law). Right now, we are at the very start of a sigmoid diffusion curve. Not to be rude, but it's naive to postulate what's ultimately possible based on the here and now. This beast is moving at lightning speed. Buckle up.
So, you are describing the doctor character portrayed by Justin Long in Idiocracy. Great.
Good work Doctor.
Your audio is superb.
FYI, from the exchange of prompt/response that I experienced...
This chat model is limited to the information/training that it received by September 2021.
The "federated learning" process that OpenAI utilizes stores user input prior to being updated at certain points during the process.
It told me that it has some information post-September 2021: it learns from users (which I assumed was the feedback, which I would guess a human screens, though it didn't say), and some information about current events is added to the database by the developers. When I first interacted with ChatGPT in January, it didn't know about the Russian invasion of Ukraine; now it does. It knows about the Battle of Bakhmut as of early 2022, but knows nothing about the ongoing battle in 2023.
I think the part that gets revolutionized first is coordination and education. Generative AI can create and predict not just words but voice and video. A current limitation in medicine is that doctors have limited time to explain and answer questions. With AI, patients will be able to get all their questions answered exactly the way they want, and all the coordination (follow-up, scheduling) will be done with the patient using the phone at home to interact with the AI. The future will first leverage more of the patient's time and input. Medicine will revolve around optimizing efficient delivery of treatment. Empathy will not generally be valued in doctors as much going forward, as teaching will be done with AI.
Exactly, because a lot of docs in the medical system in America don't display empathy anyway… OR their implicit biases stop them from providing adequate care. As a black woman in maternal healthcare, I think this can definitely help with healthcare disparities.
While ChatGPT may improve the performance of mediocre physicians, it will also ENTRAIN physicians into being slightly better than mediocre and never great.
Rush to the middle :/
Vinay, great deduction. Actually, this is a process that has been going on since 2000. Kudos to you for realizing the potential, particularly in knowledge/experience-based repetitive tasks, which by definition is 99% of all medical treatment. Will it do experimental brain surgery? NO. But 99.999% of the time we don't need that. You are right on target. Don't forget law practice; that's the next target.
I think as Marty has said, medical education will need to change & doctors will need to have better "bedside manner." It will change medicine dramatically. Whether for better or worse remains to be seen.
Thanks. I had no idea what this was. I want to make an aside: I really like that you grew out your beard, and I think it becomes you. And I also noticed this time that your voice wasn't so high and irritating, which, to be honest, it was before, and it made it difficult for me to listen to you. BUT NOW it's quite fine, so whatever has happened in your life, good for you, go for it 🌝 AND I'm looking forward to hearing more from you 😊
My friend, your entire viewpoint is based on a point in time analysis during a period of rapid innovation in this space. Mark my words, all the limitations you refer to in the current iteration of AI will be gone within months. I recommend you extrapolate your analysis to include the obvious and inevitable future capabilities of this technology and how it will impact medicine and society at large. Would love to hear your take on clinical research, e.g., virtual clinical trials, complete dynamic virtualized models of living cells and tissues, etc.
Timestamps would help your fabulous videos!❤️
10:00 "you're not going to get the smartest people going into medicine." have the last 3 years not confirmed that trend already?
My hope is that ChatGPT will be one of the platforms of technology that reform Medical Education then Medicine!
Excellent analysis! ChatGPT is a tool to assist humans. My observation as well is that it summarizes and synthesizes extremely well, but the analysis more or less remains on the surface. My most extensive research request was about Jewish Torah subjects and it did remarkably well! Something which would have taken weeks if not months of study was presented within seconds.
Thank you for this. I think you have hit the nail on the head with regards to where GPT-4 is today perfectly. There are and will continue to be problems with truth detection and bias with AI. Any system with garbage data in will produce garbage data out. My hope is that future iterations will be able to work out which assumptions do or do not stand up to scrutiny. Remember that what we've played with is already old technology.
GPT-5 already exists in an OpenAI lab, where it is being trained as we speak. Even GPT-4 is only half released, with the visual receiver and generator unavailable so far. Then there are plug-ins that could enable GPT to have real-time access to new data, specialized personalities or technical areas of expertise, short- and longer-term memory, access to databases like Wolfram Alpha, and self-motivation and self-criticism feedback loops.
At some point in the not-too-distant future, I think it will deconstruct science. Instead of thinking in English about a molecule in an organelle, in a human that's an animal, in a room that's a hospital... AI might make completely different, more accurate, and totally alien (to us) definitions. Wolfram tried to do that himself, but he's only a mortal man. A smart enough neural network with enough information could complete his life's work and codify existence.
Medicine is an extremely conservative and complex field. To make any changes there, great technology and passionate engineers are simply not enough. I would say it will take years, generations of professional turnover, multiple tries and failures, and tech maturation to roll this out at some point in the future.
Any software used for computer aided diagnosis is regulated by US FDA. They've been approving software that uses ML and AI for several years. These tools have to be interpretable (explainable). And any result is shown only after the doctor has a chance to interpret/diagnose, which leads to a more accurate result. Love your ideas... Spot on. An AI medical scribe that doesn't make mistakes (hallucinate) is not there yet, so it's only a tool for a human to fact check.
Explainable AI is something they strive for in newer systems. These are the alpha versions of chatbots, and hallucinations are an issue. But even in 8 months, GPT-4 has reduced hallucinations by 85% versus GPT-3.5.
In terms of new ideas, ChatGPT may not be useful for new insights in a particular field (because new data, targets, phenomena etc. need to be discovered), but it could be useful for new ideas when considering the established body of work in two or more different domains (like immunology and kinase networks or data intensive multi-omics, for example). If you have ever used Midjourney, fusing the styles of different artists leads to striking new images. The same could be true for medical research, where interdisciplinary studies have long been touted as a rich vein to tap into but we have only scratched the surface on due to human limitations - being a domain expert and key opinion leader in one area is a massive achievement, being a KOL in two distinct domains simply never happens.
My concern is that the growing ranks of advanced practice providers, as well as newly minted MDs, will become dependent on ChatGPT, and rather than being a tool it will supplant the provider role.
When? Can't wait. Hurry up.
An experiment comparing live MDs to an algorithm in cardiology was done years ago, with the algorithm blowing away the live MDs. This is described in the book Blink by Malcolm Gladwell.
human ability to diagnose is overrated.
@@1MinuteFlipDoc humph
Subscribed, thanks. Greetings from James J in Limerick city Ireland
As a medical scribe, I am really excited for this tech.
Thank you for taking ChatGPT for a test run. I can see how it would be a great tool, and maybe even keep doctors on their toes. I am so tired of the drone of "standard of care." What about looking at the patient as a unique system of their own? Bio-psycho-social was pounded into me 50 years ago; can this database take all of that into consideration? I don't like the sound of this overall. I want a doctor, not a database.
Very insightful conclusions. Perhaps we get more "less analytical" physicians, which should reduce the price of service; as someone said, "quantity has a quality of its own."
I've been advocating doc-in-a-box for years. The foundation of language should probably be ChatGPT, but the knowledge base should be built on top of that. This could go a long way toward assisting medical specialists in their decision-making.
If we look at prior developments within healthcare, it will lead to further gradation, similar to the creation of mid-levels. Another example would be physical therapy separating into several levels, with full PTs becoming doctorate-level and a smaller number of PTs managing lower-level positions.
Our entire medical industry will be disrupted-
“It’s really come a long way in the last few months”
Thank you for an excellent analysis! I am a recently retired radiologist. I’m damn glad I won’t have to contend with upcoming AI related changes to the practice of diagnostic radiology!
My biggest fear now, is the potential institution of a digital currency. That would put the government fox in my retirement account henhouse!
May God have mercy on Western civilization.
Competent expert systems have been around for decades, but the medical profession has hardly adopted them.