Get EARLY ACCESS To My NEW Ai Course 👉 alexanderfyoung.com/7-day-ai-prompt-engineer
You're the first YouTuber I've ever had to set the playback speed to 75% for :)
I asked to describe a very common picture of Monument Valley and the answer was "I'm sorry for any confusion, but as an AI, I don't have the ability to directly see or interpret images. However, I can process and analyze image data using various techniques, such as object detection and image classification. If you could provide more context or ask a specific question about the image, I might be able to help more effectively." 🤔
I have uploaded exactly the same puppy image and I get this answer: "The image you've uploaded is 1200 pixels wide and 1197 pixels high. It is in RGB color mode, meaning it's a full-color image with red, green, and blue components.
As I mentioned previously, this only provides basic information about the image's properties. It doesn't provide information about the content of the image. Providing a human-like description of the content of an image would require advanced techniques such as a trained machine learning model, which I am unable to apply in the current environment without internet access.
If you need any further assistance or have other questions, please let me know!" so I don't know how you managed what you have described in the video.
I used GPT 4 with code interpreter activated of course...
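For context, the "basic information" the model reports there (pixel dimensions, RGB colour mode) is plain file metadata, readable straight from the image header without any vision model. A minimal sketch in stdlib-only Python, using the byte offsets from the PNG IHDR chunk (the function name and the fabricated header bytes are just for illustration):

```python
import struct

def png_dimensions(data: bytes):
    """Read width/height from a PNG's IHDR chunk, which is the first
    chunk after the 8-byte file signature."""
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    # Bytes 16-24 hold two big-endian 32-bit ints: width, then height.
    width, height = struct.unpack(">II", data[16:24])
    return width, height

# Fake minimal header for demonstration only:
# signature + IHDR chunk length + chunk type + width/height fields.
fake = (b"\x89PNG\r\n\x1a\n"
        + struct.pack(">I", 13) + b"IHDR"
        + struct.pack(">II", 1200, 1197))
print(png_dimensions(fake))  # (1200, 1197)
```

This is roughly the level of inspection the quoted reply describes: header-level properties, not image content, which is why it can report "1200 pixels wide" while still refusing to describe the puppy.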
I've been using Code Interpreter for the last couple of weeks coding trading indicators in Java for one of the trading platforms I use, and it's quite frustrating in its current form. It has the conversation memory of a gnat and sometimes just does its own random thing despite specific instructions.
For data analysis there are many folks getting quite excited about it, but for serious coding projects it's nowhere near ready yet, especially with the 25-prompts-every-3-hours cap that OpenAI has right now.
Hopefully when it comes out of beta they will have made some big improvements.
I ran into the same issues. You can do some things with it, but it will get stuck at some point.
Great video. Thanks!
Doesn't the uploaded file count towards the token caps?
You said "very large documents" which made me wonder if our uploaded docs can actually be larger than the cap.
I use the iPhone app. Can I upload things with that?
Hi, ChatGPT was unable to give me a description of the image. Did you do something beforehand to make it work?
It's still in beta and image description seems to be a little variable depending on your location.
Here's an example from a previous session to show the variability: chat.openai.com/share/238eac4b-642b-45f7-b8f6-da745ded52e0
@AlexanderFYoung Thank you for your help
Hi Dr Alex, can you use this with the Bing AI which uses ChatGPT 4?
It’s just on ChatGPT as a beta plugin at the moment
1. Who knows how corrupt or accurate Kaggle data sets are? 2. How difficult or easy is it to manipulate data sets and offer them to the general public as TRUE?
Audio to text, describe image, and summarize PDF do not work for me. Strange...
It's still in beta and some features seem to be a little variable depending on your location.
Here's an example from a previous session to show the variability: chat.openai.com/share/238eac4b-642b-45f7-b8f6-da745ded52e0
Do you have the plus subscription?
@oevers Yes
Video does not work
ChatGPT on my end cannot describe images
@AlexanderFYoung has already answered this question twice