Please integrate it into DaVinci as a plug-in
There's no need. Blackmagic will probably create their own version and blow this out of the water.
DaVinci will produce a similar built-in tool in the near future, there is no doubt about it.
The problem with a standalone app is that it's hard to sync the color of all the footage and put it back together in editing. It's good for short clips, but for long, multi-sequence projects it's hard.
You're grading in two different pieces of software.
Is it possible to have this as a plugin, though?
Thanks for the feedback. We're working on introducing plugins 🥰 You can still, of course, use fylm.ai to build show LUTs.
@@fylmai It would be amazing as a plugin. It would definitely help with the workflow. Cheers
In Premiere Pro you can save your project under a different name, then collect all the media into one folder. If your file naming allows it (if not, you can assign new filenames before collecting the shots in Premiere), you can put the files in alphabetical order, which I would assume puts them more or less in order of filming location. Then you can batch color grade with this tool, and when you are finished, save the files with exactly the same filenames. Premiere Pro will then load the project, but this time the videos will already be graded...
oh, wait, this app does not export the media, only LUTs?
@@Rob-ym That is correct
Interesting. I'm not sold yet because I need to see more integration with Resolve, but you've definitely got my attention and subscription with this demonstration.
Bro, if you simply made a plugin that reliably color balances footage to a neutral place (WB/tint) and sold that as a standalone product, it would KILL. You could make a more advanced product alongside it, but for most people the first chunk of time in color grading is color correction, before they can actually grade. If you took care of color correction, I could spend so much more time getting creative in the grade. I think nailing that with a super intuitive and reliable DaVinci plugin would do so well. You can build upon it later, but I know so many people dream of that type of product.
Thanks for the feedback!
@@fylmai🙏🏼
I also agree, I would totally buy that.
Yes, and an option for DCTLs or PowerGrades would be amazing.
It's definitely going to be a game changer.
We want it inside Resolve, and with a lifetime option, please.
Make a lifetime DaVinci plugin for $200-300 with lifetime updates and you will be killing it!
Capture One profile export is a game changer! Big time!
DeHancer has a great CST built into it. That might solve the problem you're trying to solve with multiple color space options.
I like working in ARRI LogC4 sometimes, ARRI LogC3 other times.
I've been working in Cineon for the past few projects.
It leaves me great flexibility to experiment.
Thanks for the feedback. fylm.ai uses the ACES color management framework, specifically the ACEScct subset. A color-managed approach makes it easy to manage different color spaces and accommodate different workflows. For color-managed applications such as DaVinci or Baselight, you can set your pipeline to ACEScct. For non-color-managed applications, you can bake the pipeline into the LUT.
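For anyone curious what "setting your pipeline to ACEScct" means in practice: ACEScct is a log encoding of linear ACES values with a linear "toe" near black. The sketch below follows my reading of the published ACES S-2016-001 specification; the constants come from that spec, not from fylm.ai's code.

```python
import math

# ACEScct constants from the ACES S-2016-001 specification
A, B = 10.5402377416545, 0.0729055341958355  # linear-segment slope/offset
X_BRK, Y_BRK = 0.0078125, 0.155251141552511  # segment break point

def lin_to_acescct(x: float) -> float:
    """Encode a linear ACES (AP1) value into ACEScct."""
    return A * x + B if x <= X_BRK else (math.log2(x) + 9.72) / 17.52

def acescct_to_lin(y: float) -> float:
    """Decode an ACEScct value back to linear ACES (AP1)."""
    return (y - B) / A if y <= Y_BRK else 2.0 ** (y * 17.52 - 9.72)
```

Mid-gray (linear 0.18) lands around 0.41 in ACEScct, which is why scopes show it near the middle of the signal range in a color-managed timeline.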
I subscribed! I would love to see a short movie made with this tool. 🎉😮
@@sleeping_ghxst Thanks for your support. We'll be uploading such a video soon. If you'd like any assistance, feel free to reach out to our support.
I registered for a free account, uploaded two RAW photos from my Canon taken under different lighting conditions, and experimented with 2-3 models. However, I couldn't achieve a filmic or cinematic look: the colors were exaggerated, and the whites looked strange. I think we need some tutorials to get the results we want. For now, I can say that I get a more cinematic look using a RAW converter. Subscribed, though.
Hi Ian, thanks for the feedback. Our support would love to assist. Please email them at support@fylm.ai.
I noticed the PRO membership gives 10GB of cloud storage... Does that mean I can only upload 10GB each month? My project in Apple ProRes is more than 140GB! Do I need to upload 10GB per month, or is it 10GB max per grade? Thanks. On second thought, this app just exports LUTs, not the graded footage, correct? (So 10GB should be enough...)
That is correct. You do not need to upload the full-res footage. You can simply import ungraded stills, create a LUT, and then export the LUT and use it anywhere.
Great video, but I hope exported LUTs will look the same when I apply them to my video, because I've had experiences in the past with LUTs not matching once applied. It would be nice if you allowed video export.
Thank you for the comment. If they don't match it's most likely due to something incorrect in the pipeline. Please feel free to contact our support, we'll be more than happy to help.
This is just color grading with some effects. It doesn't do anything about bad colors.
Can I import a shot from another movie that I want to use as a reference and grade all my other footage using that shot? (I suppose it will be analyzed and used as a candidate, right?)
Yes, you can do that. You need to save the reference as a Match and then you can use that Match in either the Technical Color Match tool or the AI Color Extract.
Nice. The only thing stopping me from using this tool daily is that quick grading is already quite easy and I'm too lazy to open another app. But it can sometimes help if I get stuck.
We are working on integrations with other software, and next week we will be dropping a new and exciting tool 🙂
@@fylmai 🤩that would be so cool.
If you guys make this a plugin you'll make a killing.
I'd get ahead of this before Blackmagic and Adobe decide to do it.
Interesting, but for my use cases it only becomes an option once it's possible to export a node tree for DaVinci Resolve.
Thanks for the feedback.
If we were to implement this, corrections such as primaries or curves are pretty straightforward to implement in a node tree, but what about the more complex parts of the grade that are "film emulations" in nature? Would it be OK to include that part as a LUT? What's your take on this? Thanks!
A LUT is always problematic because it needs a specific working color space to function properly. A DCTL instead of a LUT would be much more flexible.
@@vivoices But a DCTL is also color space specific. It could perhaps have some sort of color space detection so it adapts the math to whatever the user has selected as the timeline color space, but one set of math cannot work for all color spaces. So if the LUT is adapted to, or built for, a specific color space, say ACEScct or DWG (assuming you are using a color-managed approach, which I believe you are), why wouldn't that be a valid approach? I'm not negating your approach, I'm just trying to understand your pipeline. Do you ever use print film emulation LUTs, for example?
I personally always work color managed and never use LUTs, only tools that are more flexible. Most DCTLs I use work in DWG or have a dropdown to select the working color space. Look development is not extremely important to me at this time. If anything, it's a split-tone curve, some very specific contrast adjustment, and perhaps a little grain and halation.
does this export whole videos though or just images?
It doesn't export videos. For videos you export a LUT which you can then use in your NLE. For still images, you export the image.
Sincerely, is it possible to create a Resolve plugin? I'm pretty sure BM is working on something similar; you should hurry, guys, and win this one.
Creating a Resolve plugin with all the required functionality is not as straightforward as one would think, since DaVinci's API/SDK can be quite limiting. We are working on a plugin though...
Tried this. System was overloaded for days, so I gave up.
That shouldn't happen. You probably encountered some sort of glitch, which we'd be happy to help with if you send our support an email.
Looks pretty simple! Would you be interested in working together on a mini-review or test on some BMPCC OG footage?
Sure, please email us at support@fylm.ai
What was the data set?
Does it work with RED RAW?
The LUT that you create with fylm.ai will work with any video or still format. You cannot directly load a RED RAW file into fylm.ai, but you don't need to: a simple screen grab is sufficient to create the look.
Gamechanger. Looking forward to a Resolve plugin.
I would love to see this as a plugin integration with Premiere Pro. Can I export the photo LUTs to Premiere?
Yes, you can export .cube LUTs for Premiere Pro. A plugin for Premiere should be coming later.
This is amazing
Can I download it as an app after subscribing?
Just create a Davinci Resolve plug-in, and take my money!
How does it achieve the look? That is the question. It looks like basic grading, so I bet it sucks overall.
Either the big companies are already working on this, or they're going to buy a similar startup to integrate.
Does this AI application store our data and use it to train on our images and videos without our knowledge and consent?
Absolutely not. No training in any way or form is performed with user-supplied content. If you use fylm.ai Lite, no data ever leaves your machine. If you use fylm.ai Pro, images are stored in the cloud to enable the collaboration functionality and to have your projects available from any machine, but again, and we cannot emphasize this enough, we do not use users' data for any training, improvements or analysis. Users' data is users' data, and we don't access it in any way or form for training.
I got an issue when I export directly from your app. Even in TIFF or PNG I get weird pixelation when I zoom to 100% or 200%; there's none when I do the same on the original raw file before processing.
Are you on the free plan? On the free plan, image size is limited to 2K on the long edge. Otherwise, please send our support an email; they'll be able to troubleshoot it for you. It's hard to do through YouTube comments.
@@fylmai Ah yes, I am. I didn't know that, thanks!
It's not color grading, it's just filters. Maybe better than the old ones, but it still has nothing to do with grading.
How would this work with raw footage, where WB and ISO data can be manipulated in post?
Raw files from still cameras can be read directly in fylm.ai and manipulated. For raw footage, you would open the file in the software where you will edit or finish your film anyway (remember, fylm.ai is primarily a look creator), export ungraded stills (with or without corrections), and use those in fylm.ai to create the look. The biker footage in the video actually comes from BRAW files from a Blackmagic camera.
Is there no monthly payment option instead of annual billing? I'd like to test the appeal, but even for that I have to pay for a year's use?
If you go to fylm.ai/pricing and select the free option you will be able to test all tools for free. While the free account doesn’t allow LUT download, after you create a grade you can contact our support and ask them to export one LUT for you.
Does this work with S-Log, HLG, log footage, etc.?
Yes, of course. When you load an asset in fylm.ai, you first select the Input and Output transforms. If you look at the bottom of the app throughout the video, you'll see different inputs being used depending on the asset, such as BMD Film Gen 5, Sony S-Log3 and more. For more details, feel free to send us an email.
I was only able to see a loading graphic saying "initializing", nothing more.
Please write to our support stating your browser. It's possible that something in your browser is blocking the download. Are you by any chance using Edge?
@@fylmai No, I'm using Chrome, Firefox, Opera and Brave, and I've tried in all of them.
Is there a free trial that doesn't have tons of limitations and watermarks? I'm debating between this and Dehancer.
If you go to fylm.ai/pricing there's an option to start a free trial. With the free trial you can use all the tools but cannot export a LUT. Saved images are limited to 2K. To test a grade you create, apart from visual inspection, you can email our support so that they export a LUT for you.
@@fylmai 🙏 thank you for sharing this
Isn't that what a LUT does?
A LUT always does the same thing. If your image is overexposed, the LUT will not correct it. If it already has too much contrast and the LUT adds contrast, it will become even more contrasty. These are just a few examples.
It would be cool if you collaborated with Topaz Video AI and integrated it.
This is INSANE
Is this possible on Premiere Pro and Lightroom?
Yes, you can export a LUT in .cube format for Premiere Pro, and you can export the grade as an XMP profile which you can use in Lightroom.
You guys could have just integrated it into DaVinci Resolve, Adobe Premiere, or Final Cut Pro as a plugin.
Thanks for the feedback. Integrations are in the making, but it's not always as straightforward as one might think. You are limited by the host's API/SDK, and some compromises are needed.
Nice but not another subscription for me.
same here )))
Agreed. I’m so done with them.
I don't see any difference from traditional presets.
Have a look at the thumbnail image, for example. You will see that the "after" image is clearly darker than the ungraded one. The same goes for the closeup of the biker's hands in the video. If this was a preset or LUT, it would make ALL images you apply it to darker. Always. A preset always makes the exact same changes to the image regardless of the image's content, which would mean that correctly exposed images would become way too dark.

This is not the case for NeuralToneAI. If your image is correctly exposed it will remain correctly exposed, and if it's too bright it will be darkened. You can see that in the grades created for the person standing in the shallow water: the image is correctly exposed, so it doesn't become brighter or darker. Again, if this was a LUT or preset, and that "preset" contained the "look" from the thumbnail image, the image of the person standing in the shallow water would become extremely dark.
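The fixed-versus-adaptive distinction described above can be shown with a toy example. A "preset" applies the same transform to every image; an adaptive grade measures the image first. This is purely illustrative and is not NeuralToneAI's actual algorithm:

```python
def preset_grade(pixels, gain=0.6):
    """Preset/LUT-style operation: the same transform regardless of content."""
    return [min(1.0, p * gain) for p in pixels]

def adaptive_grade(pixels, target_mean=0.45):
    """Content-aware operation: scale exposure toward a target mean."""
    mean = sum(pixels) / len(pixels)
    gain = target_mean / mean
    return [min(1.0, p * gain) for p in pixels]

overexposed = [0.8, 0.9, 0.85]   # too bright
well_exposed = [0.4, 0.5, 0.45]  # already at the target brightness

# The preset darkens both images identically, so the well-exposed one
# becomes too dark; the adaptive grade darkens only the overexposed one.
```

Running both functions on the two sample "images" shows the preset pushing the correctly exposed frame well below its original brightness, while the adaptive version leaves it essentially untouched.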
We need it for DaVinci!
If you cannot integrate it into a professional workflow, like a Premiere or FCPX plugin, it is totally useless.
And how exactly are Premiere and FCPX considered "professional" for colourists?
@@ethelquinn They are not, and that is exactly why somebody would use a plugin like this.
@@ethelquinn How is using a plugin or an online grading program like this considered professional? LMAO
Give it some more updates and integrate it with DaVinci, and I'll give it a review ❤
This is a game changer.
Intrigued...
Really? Taking creative control away from the grading artist? What's the point? It's the most satisfying process ever. I want ML to do tedious tasks, like masking and segmentation, not take away my paintbrush...
Also, I would like to know how the presets were created in the first place. Show your work.
It's not about taking anything away from you; it's about giving you more choice. It's a tool that can offer some inspiration if you feel stuck, or get you to a starting point more quickly, the same as a LUT, a preset or a DCTL. None of these takes anything away from you; they can help you achieve your result more quickly.
@@fylmai Right, but a LUT or a DCTL was created by an artist; this was not. Like I said before, are these made by someone, or is it trained on data that's opaque to the end user and could be of questionable origin? It's a bit silly to compare a crafted look to something an ML model spits out that we know nothing about.
@@its.bonart The grades NeuralToneAI trained on were created by us. The way it works, in simplified terms, is this: you take an ungraded image and grade it. The graded image is now the "ground truth", while the ungraded image is the sample NeuralToneAI needs to learn how to grade. At this point you have two adversarial networks working "against" each other: one generates grades while the other tells it whether it's right or wrong. You set a certain threshold after which a grade is considered "valid" or "similar". Once that threshold is passed, the discriminator network tells the generator network "listen man, the last grade you did is the correct one", so the model learns it.

Now you repeat the process for many images. Over time there are similarities between the images: the generator learns that this blue-ish range should look like so, and that green-ish range should look like so. So you don't need millions of images for training, but you do need variety: high key, low key, contrasty, less contrasty, warm, cold, etc. The same applies to exposure and contrast; the model learns that this exposure level should become that exposure level. This is oversimplified, but it illustrates the point. You can also create grading preferences: for example, if all the grades you create are cold, contrasty and dark, that's the only thing the model will know how to create. The training data creates weights in the model which it uses to create grades. Yes, the grades NeuralToneAI generates are made by the machine, but the machine learned from us 🙂 I hope this explains it better.
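The generator/discriminator loop described above can be reduced to a toy, deterministic sketch: a "generator" proposes a gain, a "discriminator" scores the proposal against the ground-truth grade and accepts it once the error falls below a threshold. Real GAN training uses neural networks and stochastic gradient descent; this is purely illustrative and is not NeuralToneAI's code.

```python
def train_toy_grader(ungraded, graded, threshold=1e-3, lr=0.5, steps=1000):
    """Learn a single gain mapping ungraded pixels to the 'ground truth'
    graded pixels, mimicking the generator/discriminator feedback loop."""
    gain = 1.0  # generator's initial guess
    for _ in range(steps):
        proposal = [p * gain for p in ungraded]
        # "discriminator": mean error between the proposal and the ground truth
        error = sum(g - p for g, p in zip(graded, proposal)) / len(graded)
        if abs(error) < threshold:   # grade accepted as "valid"
            break
        gain += lr * error           # generator nudged toward the truth
    return gain
```

With a ground truth that simply multiplies every pixel by 1.5, the loop converges to a gain near 1.5 in a few dozen iterations, which is the "repeat until the discriminator accepts" idea in miniature.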
@@fylmai Sure, thanks for the explainer on adversarial networks, which is not what I asked. If that is the case, why is this information not mentioned anywhere? If it was done in-house, I failed to find a reference to it on the website. What I did find, however, were examples of other artists' work using the same footage from YouTube, and the grades looked eerily similar. It doesn't inspire confidence, and again, I prefer to see what's happening in my tools rather than relegate my creative decisions to 3 sliders and millions of biased parameters that I have no control over.
I can't for the life of me understand why anyone would need this. Colour grading is already a tiny amount of the work.
Thanks for the feedback. You would be surprised how many filmmakers and photographers struggle with color grading. By the way, the feedback we have received over time is that many use NeuralToneAI as a source of inspiration or for its suggestions. I'm sure you have found yourself at some point during a color grading session looking at the monitor and asking yourself "in which direction should I take this footage?"
No it's not... maybe for you if you have been grading for years, but beginners, especially those who make YouTube content, will love this.
This could also help photographers and cinematographers who are colorblind and struggle with grading their images, even if it's just used to get a good starting idea.
Nah, it's a cloud system, not local.
The UI is awful... I couldn't even get a video to load, let alone see it work.
Thank you for the feedback. At this stage, fylm.ai only supports videos in .mp4 format. Mostly, though, using videos in fylm.ai isn't necessary: fylm.ai is a look/LUT builder, and you can easily build the look on a still image. Then you export your LUT and use it elsewhere.
@@fylmai Ok, thanks. I will try that.
Oh, another super duper color grade plugin that will make you look like a pro... LOL
No Thanks, Free Palestine!
Nope
Another job in the industry threatened by AI. This is really cool tech, but at the same time, as a colourist, extremely frightening.
A colour grading AI tool can never exist without the person/artist behind it. You don't hire a colorist because they know how to do A or B; you hire them for the human input they bring to the project. NeuralToneAI is a tool that helps a colorist, photographer or director achieve their goal quicker. It can offer inspiration when you're stuck, and it can help you get to a nice starting point more quickly. Personally, we don't believe tech can replace anything that requires a human touch.
@@fylmai That sounds beautiful, but a lot of colorists will be losing jobs, especially when the AI gets better, which usually happens within a couple of years.
broo tff
another subscription, no ty 🤮