You know shit's serious when your laptop is perched on the Unix manual
fucking lol
made my day 😂
best place for the unix book now is to show off, now that you can google the manpages
wrrr
Someone get her a dock so she can use her monitor
Sorry but the new WoW expansion just dropped. I'm busy pretending that ticket for mounting a dock and lugging monitors doesn’t exist while I’m out here pushing delves!
🤣
I dunno I kinda like the big UNIX book :D
scientists tend to have this kind of peculiarities, they focus on whatever they're researching and don't care about anything else if possible
😂😂😂😂 funny observation comment
She will solve so many problems, as soon as she figures out how to connect a monitor with that laptop.
The ergonomic computer workstation 😅
Her not having the time to build a proper setup and instead using a book about UNIX as a laptop stand just gives her legitimacy as a scientist.
@@DominikFFarr yeah sure :P
@@DominikFFarr true! She uses her brain and new tools for science and that’s what matters, how, we don’t care. All my respect to her 🙏
i agree her computer setup is terrible. she risks damage to her wrist, neck, etc., and this setup directly reduces her efficiency. she does not need to be a computer expert, but the university or business she works for should have an IT expert and an HR health and safety expert skilled in analyzing every worker's office for safe ergonomics. in the 1980s i worked for a business with the first computers in my dept, before i even heard the word "ergonomics." we had no IT staff. i caused major damage to my wrist and neck from an inefficient computer setup and use. 40 years since then, there should be no excuse now for companies ignoring safe, efficient setups for all employees. this is a huge waste of her efforts and risks her future potential.
Exactly my thought. On top of the ergonomic issue, with that setup, she loses a lot of time moving her hands from the keyboard to the touchpad. (I am just trying to help her)
I know Catherine is an amazing and brilliant scientist, but it is driving me bananas that she has glassware underneath her standing desk and that her laptop is set up so precariously over two books.
I work with a lot of PhDs, this is neat and organized compared to most of their offices!
@@JohnVance i can attest to that lol
😂😂@@JohnVance
that's how you know she's real, and not some actor.
She's a genius...her setup isn't slowing her down. Think about that for a moment.
Without links to sources or RAG it's still just "Trust me, I'm an LLM"
Exactly. Useful only until they get something niche and LLM starts hallucinating.
You can double-check
You may think that what you're saying is smart, but it's actually not. Before Newton discovered gravitational laws there were libraries full of books and "authoritative sources" about how the universe works, and they were all wrong. Giving a reference does not mean it's correct at all; knowing that it's correct needs reasoning, which is what these models are all about. Try to keep up.
@@InternetetWanderer Given enough experience on the technology stack you are using, it is faster to write it and test it on your own, rather than get a questionable implementation from CHATGPT4o. This is because you would have to read the code and understand the choices the LLM made before testing it. This is an extra step which costs time and energy, whereas when you write it yourself, you can move onto testing immediately. The code has to be proper for its environment, not just run, which is why you would have to check the LLM code before testing.
Having said that, with better and better versions of LLMs, even experienced developers might turn to LLM code creation as trust builds up. CHATGPT4o was not that convincing, although still useful for code review.
@@randomuser5237 Newton doesn't explain gravity either, he just quantifies it. There never were any authoritative sources about how gravity works, and there still aren't any to this day. References are absolutely critical, but OpenAI has never revealed exactly what data it trains its models on, so that's not going to happen before they are ready to reveal that information. These LLMs are absolutely wonderful for productivity, but you need to be an expert in your field to understand what you're asking and be able to notice the hallucinations, which are not rare. Reasoning helps to a certain extent, but it's naive to think that people are going to be able to fact-check the reasoning: the more difficult the question, the higher the level of reasoning it will require from the person, and the model will easily fool most people even with faulty reasoning, because it will sound convincing.
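The "trust me, I'm an LLM" complaint above is exactly what retrieval-augmented generation tries to address: retrieve passages first, then force the answer to cite them. Here is a minimal sketch; the corpus, document IDs, and helper names are all made up, and a real pipeline would query PubMed or a vector store instead of a dict:

```python
# Minimal RAG sketch: retrieve passages by naive keyword overlap, then
# build a prompt that asks the model to answer ONLY from them, with
# citations. Toy in-memory corpus with invented IDs for illustration.

CORPUS = {
    "pmid:101": "KCNH1 variants are associated with Temple Baraitser syndrome",
    "pmid:102": "citrate synthase is a mitochondrial enzyme of the TCA cycle",
}

def retrieve(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    return sorted(
        corpus.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query, hits):
    """Ground the answer: cite source IDs, refuse to go beyond them."""
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in hits)
    return (
        "Answer using ONLY the sources below, citing their IDs. "
        "If the sources do not contain the answer, say so.\n"
        f"Sources:\n{context}\nQuestion: {query}"
    )

query = "which enzyme belongs to the TCA cycle"
hits = retrieve(query, CORPUS)
prompt = build_prompt(query, hits)
```

The point of the design is that the final prompt carries the source IDs, so any claim in the answer can be traced back to a passage instead of taken on faith.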
Very cool to see but I would be cautious about potential hallucinations from the model, especially for the context of medical reporting - ie reading the GPT output is a great starting point but please also read peer reviewed scientific articles and think critically about what may be correct and incorrect. This is equally true for GPT-created summaries of scientific literature
This is so important. It is scary that some doctors, that don't understand AI, will start diagnosing you exactly what AI replied to them without critical thinking. Think about text prompts or image analysis. AI is great, like the web, but you need to understand how it works...
As a medical doctor, I’ve been very tempted to implement LLMs in my workflow (eg they would save me hours a day just for discharge letters) but haven’t yet as I’m terrified of AI hallucinations. Those would have a huge impact on patients and possibly go undetected.
This technology is being integrated into Epic. If your hospital uses Epic, ask your hospital's Epic contacts/IT people about using the new AI InBasket technology
I tried but they're useless, only good for editing text if you write something
@pneumonoultramicroscopicsi4065 yeah but strawberry just came out, it's far more advanced
@@tylerisse5120 even if they're good at reasoning they lack the specific knowledge we look for
@pneumonoultramicroscopicsi4065 it's going to change how we solve problems we couldn't before. I don't think you know what was released today: it has PhD-level human reasoning. ChatGPT-4 was level 1; this is now level 2
What this woman is doing is amazing. She is truly a hero, giving hope and help to those who truly need it the most.
this is where covid came from, and the MRNA non-vxcne
trust me its not "amazing" in a good way
Is the training data on a broad range of sequenced species (not just humans)? Can it connect and search databases for annotations, characterize transcriptomes, etc. (GenBank, JGI, Phytozome, CartograPlant, etc.)? This looks like a surface-level literature search, and I am unsure how it differs from regular GPT or Perplexity. Those are both valuable tools, but they already exist. This tells me nothing about the new model's capabilities and how they specifically benefit geneticists and researchers (since you are explicitly targeting us with this advertisement).
Finally, nice to see the response from someone who knows about the stuff in the video! It seems like a video that was put together in a hurry without carefully thinking what needs to be said.
@@hiraghuraman the idea is to make you curious so you try it once and then you'll hopefully get immediately hooked.
@@ronilevarez901 Thanks, but not at the cost of accuracy!
I’m just as skeptical as you. I think this is paid advertising, just hyping up “AI”
1:15 fat laboratory but doesn't have 5 bucks for a desk stand
Nobody ever wants to fund computing resources for genomics, but that is some next-level jank.
Whatever works 😅
Priorities
😂
😂😂😂
1:26 she uses the Contigo water bottle to water the plant. Contigo stopped selling that type of spout. Please Contigo. We love that style of spout. Bring them back.
In my opinion you can improve your productivity by 50 percent by using a proper second monitor, so you don't always have to switch windows when using several different apps. The time used to build a proper setup with the MacBook and Apple Thunderbolt Display 27 inch would pay dividends right away.
Google "USB-C to Thunderbolt 2 Adapter" and you can use the latest MacBooks with the Thunderbolt Display from 2011. If you have the even older Apple Cinema Display 27 or 24 inches [when the display has MagSafe 1, USB-A and Mini DisplayPort] you can buy a USB-C to Mini DisplayPort Adapter and it will also work. Notice you will still need to charge the MacBook with the normal charging brick. For a fully integrated solution where you only have to plug in one cable for the monitor and charging, I can recommend a Thunderbolt docking station.
Hope this helps.
No one would recommend Apple products unless they are an Apple fanboy lol. Windows can do pretty much everything Apple can do with lesser price point.
Look at where she has to reach to use her mouse, how has she not gotten one 😵💫
@@Ghostrider-ul7xn I use Microsoft Windows and at least once a week have a problem which requires me to restart my laptop. With my MacBook this is only true for updates. Also, Handoff and Continuity between iOS and macOS are so useful.
@@stefan-bayer I worked on both in large IT support for over a decade they BOTH have issues. Also, my personal desktop is a 4 year old Alienware and I only need to restart it for updates as well.
I can't count the amount of spinning wheels, sad Macs and kernel panics I've seen over the years.
With that said, both companies have moved closer to each other over the years. And with many things being cloud based, it's a personal preference now for most folks.
Omg we need to update her desk 😊
While testing o1(preview), after a few requests for improvements, I managed to reach a C++ code verbosity level that is acceptable, but unfortunately the memory feature seems to not be working with o1(preview). O1 seems more capable in understanding the nuances of the user's requests and more capable of consecutive improvements on the code without hallucinations or losing focus than GPT4o. The memory feature would be a great addition in order to give the user the ability to maintain the desired verbosity and other desired coding habits, throughout all the provided implementations in a specific language.
What if it still hallucinates?
What if you get the answer wrong?
It will 😂
@@HelamanGile I fail. Give you your money back.
@@toadlguy so when you fail on a job you refund your boss so it's not sunk costs?
Nobody will take it seriously; the model doesn't even know about the latest research papers and only knows broad and general things
Dr. Catherine Brownstein’s work with OpenAI o1 and genetics is truly fascinating to me. The way AI is being integrated into genetic research feels like the future unfolding before our eyes. It’s exciting to think about the possibilities-how AI can help unlock new insights into our genetic makeup, from disease prediction to personalized treatment. I’m genuinely inspired by the impact this could have on healthcare and innovation.
Beautiful Guys. Awesome job
Amazing use cases, brilliant
Someone send her a mouse, you might help speed up that research by 70%
I would recommend to the doctor another, more ergonomic desk first.
After that, she could start using o1.
Do you not see her Mac on a desk right next to the laptop ??
did they do this on purpose to annoy us? she has a standup desk right there with an external monitor but instead uses the laptop on top of books and trays.
And having to reach up to use the trackpad instead of having a mouse 🫥
"I go down a lot of rabbit holes that do not yield anything useful. And being able to increase the percentage of rabbit hole to useful information is killer."
yeah percent is probably not the right word, ratio would've been better
@@КалинВелчев-ю3щ I just like the sentiment, because I feel her. lol.
If you really listen to what she says, and the desk situation, you will realize this is an episode of The Office. “Increasing the percentage of rabbit holes to useful information is killer”
Wouldn’t you want to decrease the rabbit holes to useful information ratio?
This is amazing
I don't see how this is useful for her in the real world. This seems like a publicity stunt; she will never use it. Who even asks such a simple question? This won't improve her workflow at all.
agree.. really looked like she was being insisted upon.
She literally just explained in the video how it’s useful to her in a real world case, wtf are you talking about?
@@WearyTimeTraveler the way she said she would use it is improbable, laughably so, maybe a first year who just started uni would use it the way she said she would
@@pneumonoultramicroscopicsi4065 she said it is impossible to be an expert at 20k genomes are u dumb bro
Return of the King. Travis chief of dog in him round 2
this video is such a relief. natural science STILL can't be run by robots
That's not a relief, that's sad. It means progress will still be slow as usual and you get cures for stuff in the span of decades
Amazing stuff truly but please BRING BACK SKY AND RELEASE THE VOICE MODEL FOR ALL OF US!
WE WANT SKY AND THE VOICE MODEL!!
And also multimodality while we are at it
VOICE MODEL SHOULD BE A PRIORITY !!! I WANT TO USE VOICE IM TIRED OF TYPING
The way she trusts the answers is amazing
Fascinating look into the intersection of genetics and AI! Dr. Brownstein's work with 'N of One' cases is truly inspiring, and it's incredible to see how AI can assist in synthesizing information and solving complex genetic puzzles. The potential for AI to streamline research and reduce the time spent in 'rabbit holes' is remarkable. Kudos to the team for advancing the field and supporting those who are often medical refugees. Exciting times ahead for genetic research and AI! It really is impressive how AI can assist in such specialized and complex fields. It’s amazing to see technology making a tangible difference in areas like genetic research and helping to solve challenging medical cases.
I have tried to use GPT4 o for this type of work and have learned that if you don’t know how to supervise the results (such as whether citrate synthase is in fact expressed in the bladder) then you cannot trust the results.
In one case for work, it hallucinated that one type of depth-sensing camera (OAK-D) which I was considering for an application used a different technology (structured light) than what it actually uses (stereo vision). Both are common but it could have led me to make an incorrect engineering decision.
Another was very clever and scientific-sounding reasoning about why pressure rises in a shaken can of soda due to the temporary increase of surface area. It only backed down when I told it that it was wrong because of Henry’s law!
Given that I often see inconsistent behavior in the models in terms of things that have been supposedly fixed in the new model but are still happening when I use them, I would be careful.
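The Henry's-law point in the soda-can story above can actually be checked with a few lines: at fixed temperature in a sealed can, the equilibrium pressure is set entirely by the total CO2, the headspace and liquid volumes, and Henry's constant; shaking changes none of those inputs. A rough sketch with illustrative numbers, not real can specs:

```python
# Equilibrium pressure in a sealed can via Henry's law: dissolved CO2
# concentration is c = P / KH, headspace gas obeys the ideal gas law,
# and total CO2 is conserved:
#   n_total = P*v_gas/(R*T) + P*v_liq/KH  =>  solve for P.
# Shaking redistributes gas transiently but leaves every input the same,
# so the equilibrium pressure does not rise. Values are rough.

R = 0.08206   # L*atm/(mol*K), ideal gas constant
KH = 29.4     # L*atm/mol, Henry's constant for CO2 in water (~25 C)

def equilibrium_pressure(n_co2, v_gas, v_liq, temp_k):
    """Solve n_co2 = P*v_gas/(R*T) + P*v_liq/KH for P (atm)."""
    return n_co2 / (v_gas / (R * temp_k) + v_liq / KH)

# A 330 mL can with a 15 mL headspace and 0.03 mol of CO2 at 298 K:
p_before = equilibrium_pressure(0.03, v_gas=0.015, v_liq=0.330, temp_k=298)
# "After shaking": identical inputs, hence identical equilibrium pressure.
p_after = equilibrium_pressure(0.03, v_gas=0.015, v_liq=0.330, temp_k=298)
```

The model's surface-area story fails because surface area appears nowhere in the balance: it only affects how fast equilibrium is reached, not where it sits.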
this is so cool
Yes isn't it?🎉
I cannot attach anything in O1-preview mode, but Dr. Catherine can.
I'm sure this is why it was in a laptop. This was filmed way earlier than when the video came out.
It’s hilarious how corporations don’t think this will eventually take their power away and become their replacement.
Could this be related to the fall of the 23andMe DNA test kits?
I still don’t understand why OpenAI’s videos still have ads on them
you put us on hold again for the new sound mode.
Nice genetics, and a book of UNIX sweet!!! to Catherine happy hunting!!
Will o1 solve how to cure my eosinophilic oesophagitis?
I have that too, along with bloating, acid reflux and a hiatal hernia, and I'm very confident that A.I. will eventually help me cure or significantly alleviate all of them.
Omg I love her ❤
Whoever is making these videos is so good! Will AI eventually replace them? 👀
We truly are in the era of Courage the cowardly dog
and if chatgpt gets it wrong, then the patient is in serious trouble
If people get it wrong too
Wait is this fucking strawberry?
Yes, that is strawberry model and one of its many capabilities.
wow... it sucks so
@@JohnSnow32how lol
What do you mean strawberry? 😮
Q*?
OpenAI should give her an ergonomic computer setup
they can't pay for a monitor, so she doesn't have to put her laptop on top of books? talk about not investing in science
that sounds like too quick of an assumption. she might have arranged it that way for numerous reasons. all i wonder is: will OAI finally stop teasing and actually deliver for a change?
What makes you think that they cannot pay for a monitor? Perhaps the scientist doesn’t want or need one.
Leave the comedy for anyone else but you
Great video!
We want high purity in metastable states. Humans have had watered-down medications for the sake of equilibrium for too long.
I did not understand that wingwalking comparison at all. Wingwalking?
The most interesting thing about this video is her laptop on a stack of books 😂
I hope she gets a monitor.
It looked like old GPT; it was hard to see the value beyond summarizing
Best one
Super Interesting.
1:34 She has a normal monitor but it's disconnected or broken. She doesn't necessarily think that's the best way to use her laptop, folks. Relax and talk about something on-topic.
I DO NOT want my doctor to use AI to figure out my condition
While cool, I would still be somewhat skeptical of this model’s responses. Also, look at this nerd’s desk setup. If she won’t take 2 seconds to fix that, how careful is she in other areas?
Someone get this lady a mouse and keyboard before she becomes a carpal tunnel statistic
unix book usage approved
She is implicitly claiming that macOS is based on Unix.
Now we can make superhumans with gene editing
Finally i can get my superpowers
Awesome
Release the voice model
All we see is the fact she does not have a monitor.... Edit: look at those browser tabs, please tell her about arc browser
Chat gpt running on Unix
❤
Glaring screen, using the touchpad over a pile of books; the whole room is super bright and she has dark mode on
I respect those who are trying their best to solve today’s problems.
Humans will soon forget the concept of “thinking for ourselves”…. There will be very few individuals who will make decisions without having the advice or opinion of an AI agent…
And they will be the ones making the worst decisions
good observation, i too thought of that.
many people start to become lazy and just use AI instead of their creativity.
@@i_mjee_jay I see it as… people who are already lazy, yes, it will make them even more of an NPC…. But for those used to doing the hard work… this is just like a credit card… another tool… lazy people use a credit card to buy things they don’t need, and people that use it as a tool collect points and buy things 2-5% cheaper…
Please use a mouse, it will help you as much as AI 🙏
so how many months do we need to wait to use this?
It is already out, I'm testing it today!
@@m4dalex478 how? I can't see it in mine
The real issue is when these algorithms have biases and the new generation grows up without realising this, blindly trusting these AI models. The whole thing then becomes a shadow controlling force which nations like the US can utilise.
Sora video thrown in there
Hold on, AI can’t make me straight
cool
She knows UNIX!
Wow😮
🌻❤️🌻
🇧🇷🇧🇷🇧🇷🇧🇷👏🏻, , Amazing!
parm mentioned
be careful or it's gonna start noticing
We need to save civilization with reproductive medicine
Me after finding out that my son is gonna have 3 arms because chatgpt confused his 8312th gene with some other sht
would
Doctors also google symptoms?
yeah of course do you think they know everything? they are still human
Brazil! 🇧🇷
Oh yes having a cancer father truly very upsetting youngest son cells…
It’s whole life… 😢
Oh lord, they should not be using LLMS for accurate summaries because they can hallucinate. This is headed for disaster if used in this way in the medical field
She needs a proper working setup with a larger display!
This is so powerful
Can anybody tell me the name of the music playing in the background!!
Darude Sandstorm
@@pizzasteve205 Ain't no!
WE need PhD Level for ELECTRICAL ENGINEERS
Still just a surface-level literature review 😅
Is this Q*?
Her desk, OMG! She is doing ergonomics right! :D
where cancer cure?
This is dumb, it’s still gonna hallucinate. And I thought this video was gonna be something cool about genetics, but the demo just makes o1 look like the old GPT models and nothing much new or interesting
Disappointing..
It has zero relations to genetics..