Hey, there are several things in this video that need to be fixed, which I have to point out to viewers.
1) There's no need to modify the component to get the middle finger tip. That channel is already exposed through the CHOP data. You can just put down a Select CHOP, connect it to the first CHOP output of the Hand Tracking component, and select the channel h*:middle_finger_tip:* to get the data you're looking for. The data is normalized 0-1, so you can use a Math CHOP to scale the positions to the correct size you need for your display.
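For anyone who prefers scripting this instead of wiring it by hand, here's a minimal Python sketch of that Select + Math setup. The /project1 location, the hand_tracking/out1 output path, and the 1920 px display width are all my assumptions, so adapt them to your project:

```python
# Minimal sketch of the Select + Math setup described above.
# Assumed names: /project1 as the build location, 'hand_tracking/out1'
# as the Hand Tracking component's first CHOP output, 1920 px display width.
comp = op('/project1')

# Select CHOP pulling the middle finger tip channels from the CHOP data
sel = comp.create(selectCHOP, 'select_middle_tip')
sel.par.chop = 'hand_tracking/out1'
sel.par.channames = 'h*:middle_finger_tip:*'

# Math CHOP remapping the normalized 0-1 data to pixel coordinates
m = comp.create(mathCHOP, 'scale_to_display')
m.inputConnectors[0].connect(sel)
m.par.fromrange1 = 0   # input range: 0-1 (normalized)
m.par.fromrange2 = 1
m.par.torange1 = 0     # output range: 0-1920 (display width)
m.par.torange2 = 1920
```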
2) I wouldn't recommend putting the MediaPipe component in the Palette. It's very large, and we externalize it so it doesn't take forever to save the project. Make sure you have the toxes folder located next to where you save your TD project, i.e. local to your TD project. If it's not re-opening correctly, look in the Common tab of the MediaPipe component, check the path used for the External tox, and make sure the path points to the local MediaPipe.tox (currently yours is set to the wrong path). This path issue is also fixed in the newest version of the MediaPipe component, so the project should re-open correctly by default.
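If you'd rather check that path from the textport, a quick sketch (the /project1/MediaPipe path and the toxes folder location are my assumptions):

```python
# Inspect and fix the External .tox path on the MediaPipe component.
# Assumes the component lives at /project1/MediaPipe and the toxes
# folder sits next to the .toe file.
mp = op('/project1/MediaPipe')
print(mp.par.externaltox.eval())           # see where it currently points

# Point it at the local copy, relative to the project file
mp.par.externaltox = 'toxes/MediaPipe.tox'
```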
3) You can turn off the green points by turning off "Show Overlays" on the MediaPipe component. No need for SpoutCam for that.
I'm really excited to see you experiment with MediaPipe, and I'd love to see you make a new tutorial with the updated info!! I think you do a great job of explaining your process and making things very approachable. Hope to see new tutorials from you soon!
Oh, thank you for clarifying these elements Torin, I should indeed have mentioned it is possible to get the data out of the box. And thank you for explaining my path issue, which I'll be sure to take note of for my next tutorial. Best
Kurt Cobain
cool af dude thanks
Cool tutorial. Why did you dive into the MediaPipe operator instead of using a Select on the outgoing data? But maybe it reduces load on the computer. Nice to see the insides. Thank you.
You’re right, that was my thinking but I can’t really say if it makes it more efficient.
Selecting the data directly is totally possible without having to change the component. I should have been more clear on that.
Thank you so much
Really helpful tutorial! Could you make a tutorial about creating our own gestures to be recognized, and maybe controlling some motion graphics at the same time? Thanks!
Love the idea. Will work on it.
I have worked with basic gestures so far, like X, Y, Z, but there are infinite ways to associate motion patterns with specific commands. What did you mean by motion graphics?
Thank you sooooo much bro~
Thank you for the clear tutorial. I am a TD beginner. Can you make a video on using MediaPipe to make hand-attracted particles? I tried it but it didn't work.
Yes sure
Would this also work with the Kinect too? I still haven't started using TouchDesigner, but I know that the Kinect doesn't have hand tracking natively. You are a godsend for this video, I love you.
Yes, hand tracking is limited to MediaPipe here.
So you can use your Kinect as a simple camera, but a webcam would do the job.
Just ask me if you have any questions! :)
I need some help. Currently I am working on sign language detection using MediaPipe (it's a Python project), but I want to add a real-time text-to-audio feature. Can you please show how we can achieve that? It would be really appreciated.
That's a great idea for a project. Do you need a text-to-speech AI? I will look into it, but for now I think you have to pay for an API key.
This is so cool
In the Hand_Tracking Node, there is a white dot in the top left corner. Can you tell me how to remove it?
I think that might be the second hand, packed into one corner. Maybe try reducing to 1 detected hand if that's all you need, in the MediaPipe -> Hand Tracking parameters (number of detected hands).
how do I use mediapipe with an uploaded video instead of using webcam?
You need to:
1 - Go to this GitHub page:
github.com/leadedge/SpoutCam/releases
2 - Download and extract the zip on your PC
3 - Execute "SpoutCamSettings.exe"
4 - Set the frame rate of your video
5 - Set the resolution
6 - Name it "TDSyphonSpoutOut2" and register it
7 - Reopen your saved project with MediaPipe, and select SpoutCam instead of your webcam.
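On the TouchDesigner side, a rough sketch of feeding the uploaded video out over Spout so SpoutCam can pick it up. The clip path is made up, and I'm assuming the Syphon Spout Out TOP's sender name parameter, so double-check it against your build:

```python
# Sketch: play the uploaded video and send it out via Spout so
# SpoutCam can serve it as a virtual webcam.
comp = op('/project1')                      # assumed build location

movie = comp.create(moviefileinTOP, 'moviefilein1')
movie.par.file = 'movies/clip.mp4'          # hypothetical clip path

spout = comp.create(syphonspoutoutTOP, 'spoutout1')
spout.inputConnectors[0].connect(movie)
spout.par.sendername = 'TDSyphonSpoutOut2'  # must match the name registered in SpoutCamSettings
```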
MediaPipe connected to a particle system here:
www.patreon.com/posts/attracted-102011982?Link&
Followed your tutorial and was trying to move a fluid simulation with my hands, but they are mirrored. Do you know how to flip it and set the boundaries of the screen to match the hand movements in X/Y? Thank you for your time explaining this amazing art.
If you want your data to decrease instead of increase, we use a Math CHOP to invert everything and shift it back into the same range.
Like I showed, change the Math CHOP's Channel Pre OP to Negate,
and on the second page (Mult-Add) increase the Post-Add to the value you need.
Or, if you followed the tutorial, you probably just need to remove the Negate and it should work.
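A tiny sketch of that Negate + Post-Add fix, assuming an existing Math CHOP named math1 and 0-1 normalized data (both assumptions on my part):

```python
# Mirror fix: negating flips 0..1 into -1..0, and a post-add of 1
# shifts it back to 0..1, now running in the opposite direction.
m = op('math1')             # assumed name
m.par.preop = 'negate'      # Channel Pre OP -> Negate
m.par.postoff = 1           # Post-Add on the Mult-Add page
```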
@@outsandatv this helped!!!! I am now able to move it on the X how it's supposed to work, looking at the projection in front of it. Do you know how to change the value of Y using a Math CHOP? Thank you for your help!!!!
@@yomi0ne for Y you probably won't need to negate. Maybe post-add a bit.
@@outsandatv I appreciate your help!!! I'm currently doing the particles tutorial now. I need to play with the add as you say, because the particles are in the upper corner of the screen and not centered. Thank you for taking the time to respond and to show us these tutorials.
Hey do you have any idea why the Y axis on one of my hands does not seem to be reading any data?
Is it occurring from the moment you load your component?
Worked it out, accidentally had the index finger selected lol. Thanks for the reply tho :)
@@outsandatv
How can I replace the inputs from the mouse with something else?
I want to replace them with other coordinate data.
@@VeraArt-wt4nl I don't understand your problem, could you explain in more detail please?
Mouse In CHOP to get all mouse data - MediaPipe to get webcam data - Kinect to get depth data
If you mean the MediaPipe cursor by "mouse", we can use other Pose data like the elbows or foot tip. So you can skip modifying the model and use the data directly: MediaPipe -> Pose Tracking -> select the body part you want. Then create as many Math CHOPs as you have coordinates (X, Y, Z) to shape the data into what you need.
@@outsandatv I used a Rename CHOP to rename the MediaPipe data to tx, ty so I can replace the Mouse In CHOP in my projects. Thank you.
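For reference, that Rename CHOP trick looks roughly like this. The select1 upstream operator and the wrist channel names are my assumptions:

```python
# Relabel MediaPipe wrist channels as tx/ty so patches built around
# a Mouse In CHOP accept them without other changes.
ren = op('/project1').create(renameCHOP, 'rename_to_mouse')
ren.inputConnectors[0].connect(op('select1'))  # assumed upstream Select CHOP
ren.par.renamefrom = 'wrist_x wrist_y'
ren.par.renameto = 'tx ty'
```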
Is there any way to work around the Kinect and achieve the hand particle video with MediaPipe?
I saw your first particle video and wanted to do it without a Kinect.
The Kinect doesn't work well with macOS.
Sure! To replace the Kinect hand X, Y with the MediaPipe X, Y you need to:
- add MediaPipe to the particle project
- add the Hand or Pose Tracking component
- use the normalized data from it (as I show in this video: place a Null then a Select CHOP after the component output labeled "normalized data", choose "wrist_x" and "wrist_y" in the Select, then drag and drop them onto the transform X, Y of the Metaball SOP)
So basically follow both tutorials, and when I reference the positions, use MediaPipe instead of a Kinect. I will upload the project to my Patreon now that you're saying it would benefit Mac users. I appreciate your feedback in that regard!
www.patreon.com/posts/attracted-102011982?Link&
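If drag-and-drop exporting feels fiddly, the same wiring can be done with expressions, something like this sketch. The operator and parameter names here are my assumptions, so adapt them to your project:

```python
# Drive the Metaball SOP's transform from the wrist channels
# instead of drag-and-drop exports.
mb = op('metaball1')                         # assumed Metaball SOP
mb.par.tx.expr = "op('select1')['wrist_x']"  # assumed Select CHOP with wrist channels
mb.par.ty.expr = "op('select1')['wrist_y']"
```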
@@outsandatv Oh yes, I previously subscribed to your Patreon. Different name though. One last question: how do you address the distance and angles when you may be like 10 meters away with a 2K web camera? Is there any way to have MediaPipe still recognize my body and movements?
@@robertjohnson4051 If the detection is sloppy, you can zoom into the picture by changing the scale in a Transform TOP, if you're standing in a specific part of the image. If the subject is moving from left to right, say, at a far distance, you could use the subject's spine position X, Y to always keep yourself upscaled in the center.
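A sketch of that auto-centering idea, assuming a pose1 CHOP carrying 0-1 normalized spine_x/spine_y channels and a transform1 Transform TOP (all names are assumptions):

```python
# Zoom in 2x and translate so the subject's spine stays centered.
# With 0-1 normalized data, 0.5 is the center of the frame.
t = op('transform1')
t.par.sx = 2                                    # zoom factor, tune to taste
t.par.sy = 2
t.par.tx.expr = "0.5 - op('pose1')['spine_x']"  # shift spine back to center
t.par.ty.expr = "0.5 - op('pose1')['spine_y']"
```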
@@outsandatv thanks for your awesome help. You saved my show!
On a separate note, if I were you, I would start charging a small fee for your Patreon and uploading the projects you make online, making them accessible to people like me. I like learning from the tutorials, however at this time I'm pressed because my show is starting very soon, and to buy projects and slightly adjust them would be wonderful for beginners like me. Just putting that out there and hoping for your success. I'm one of your patrons!
Can I exchange the circle for an image?
Sure, message me on insta if you want me to show you @outsanda
Just drop your picture into the project, add a Transform TOP, copy the position references from the circle into the Transform, and plug it in instead of the circle.
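Roughly, in script form. I'm assuming the circle is a Circle TOP named circle1 whose center was driven by expressions; if it was driven by CHOP exports instead, re-export to the Transform TOP rather than copying expressions:

```python
# Swap the circle for an image: load the picture, then reuse whatever
# drove the circle's center on a Transform TOP instead.
comp = op('/project1')                      # assumed build location
img = comp.create(moviefileinTOP, 'picture')
img.par.file = 'images/mypicture.jpg'       # hypothetical image path

t = comp.create(transformTOP, 'transform1')
t.inputConnectors[0].connect(img)
t.par.tx.expr = op('circle1').par.centerx.expr  # copy the circle's expressions
t.par.ty.expr = op('circle1').par.centery.expr
```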
How can I use a vertical resolution camera?
No clue, I have to try it out. Thanks for the idea :)
@@outsandatv Thanks, I've tried everything since then and MediaPipe recognized me even when the camera was physically upside down or tilted 90 degrees, so I solved the problem by setting the Channel Pre OP of the Math CHOP after the Select CHOP to Negate! :)
I wonder how you achieved this, because MediaPipe works directly with the video device. But I would imagine it's possible with Spout, using a Fit and a Transform TOP to control the rotation of the camera. @@user-lx8if5xy9r
hmmm