Bruce and Peter - THANK YOU! I created a darktable module preset and called it scenic, with a star icon. I moved it to the front and included the modules recommended and suggested in the user manual link which Peter provided. I created a second group of "use with care" modules. Thanks to you both, I am good to go with modules I don't have to worry about. PS: I am also now online with the most up-to-date manual, again thanks to Peter - forgot that was all online.
Hi Bruce. Firstly, I have to say I love your channel and presentation style. As a teacher of Lightroom, Camera Raw, Photoshop and RawTherapee though, I have to say that I struggle to see the relevance of this argument. Yes, a display-referred workflow is nuts - totally. But this whole scene-referred vs display-referred argument only arises when people are discussing DARKTABLE. This has led to a seemingly widespread belief that Lightroom for instance (so also Camera Raw) and RawTherapee somehow have a display-referred (or at least 'inferior'-referred) workflow - which is indeed 100% WRONG. The crux of the argument is "should I chuck all my captured colours and tones away, apart from those that my monitor can display (that would constitute display-referred) - or not?". Lightroom/ACR has a fixed working space of ProPhoto RGB, as does RawTherapee. Lightroom/ACR and RawTherapee allow the user to opt for Adobe effects input profiles, custom .icc or .dcp input profiles, or OEM input profiles in either .icc, .icm or .dcp formats. Therefore, they allow the user the maximum flexibility of 'sensor faithfulness' - or, if you like, 'scene-referred' tone and colour flexibility. darktable ships with a working space of Rec2020, which is considerably smaller than ProPhoto - what's that all about? It does not offer the facility to deploy ANY .dcp input profile - this omission isn't the best idea the developers ever had. All it does offer as input profile options are a 'mystery meat' standard colour matrix, or sometimes an advanced colour matrix, or the option to use a custom .icc profile - not as versatile as a .dcp profile. Lightroom/ACR and RawTherapee all conform to what might be referred to as a 'scene-referred workflow', and anyone using them is adopting this style of workflow, perhaps without realizing it. As someone who's been at this photography game for a living since 1978, I can honestly say that before looking at DT, I thought this entire argument had been put to bed decades ago. Why it's being resurrected now, and only with reference to DT, I can't begin to comprehend. It's leading to a shed-load of confusion and misdirection/misinformation being peddled around the internet etc. about other raw processors, and frankly that needs to stop. Other than the omissions etc. mentioned though, and a few general GUI gripes, I have to say that DT is very good, possibly the most flexible raw processor out there. Thanks for all the work you do on your channel Bruce; I wouldn't have made it past first base on DT without your content!
Andy, thanks so much for your comments. I saw this yesterday, but wanted to have the time to sit down, re-read it, and try and put a well-constructed answer together. I will confess that a lot of the information I put forward in this video came from comments from one particular developer, a guy named Aurélien Pierre (he also has a YouTube channel), who seems to know his stuff when it comes to colour science. I have a very limited understanding of some of the stuff that Aurélien talks about, but I AM trying to absorb more information on the subject. It's been a while since I recorded this video, so I will admit that I don't recall everything I said in it. Hopefully, I didn't imply that other RAW processors were using a display-referred workflow. As for why this argument, which to quote you should have "been put to bed decades ago", is resurfacing now, and primarily in relation to darktable (by the way, darktable is ALWAYS spelled lower case... the devs specifically mention this on darktable.org), the answer is probably that some of the older pixel-processing modules used such a workflow. Aurélien spent hundreds of hours over the last few years trying to replace all of the modules that did use display-referred algorithms with new modules which use scene-referred math. His principal argument (and here I am paraphrasing, so shoot ME if I have misquoted him) seemed to be that doing any extreme pixel-pushing in a display-referred module would invariably lead to nasty artefacts. And I can attest to that, because I've done exactly that... pushed some of those old modules too hard, and have seen some very nasty side effects because of it. Cheers.
@@audio2u Hi Bruce! As far as I'm concerned, you yourself have never said a word on the subject that I would disagree with. But there are other DT "trainers" on YouTube that peddle the false info for whatever reason. This has spread onto a number of forums, and seriously, it makes my blood boil - it's very much as you mention at the beginning of the video when you hear people talking about audio! So I actually get a bit of a downer whenever I see this question/discussion raised. Bruce F, Martyn E, Dan Margulis and Digital Dog all covered the madness of display/device-referred workflows years ago. Yes, a scene-referred workflow MUST terminate with conversion to a device-referred output, be it an 8-bit sRGB JPEG or a colour-managed print, but that's usually done via a relative colorimetric conversion engine, like ACE, or by soft proofing. Every time the question/discussion arises, less knowledgeable folk get confused - and in reality the whole subject shouldn't really be mentioned at all. But DT makes a very distinct point of invoking it all over again, so it's a bit of a Catch-22! In a nutshell, if your working space and input space/profile are large enough to cope with the trillions of colours that can be recorded by a 12, 14 or 16-bit sensor, and all your tools are capable of handling that level of data - which Lr/ACR/RT/COP etc. can all do - then you are using a 'scene-referred' workflow of some form or other. Rest assured Bruce, in my eyes you've said nothing wrong; you just omitted to point this out. BTW, I found out what the 'mystery meat' standard colour matrix actually is yesterday - it's equal to/a clone of the Adobe Standard input profile - go figure! Thanks very much for taking the time to reply Bruce - stay well Guv'nor.
Thanks Bruce, I switched to a scene-referred workflow because I got the impression that it was "better", and now I understand why that is (kind of). The issue I've come across is that I always used to use high- and low-pass filters with overlays: high pass for sharpening, and low pass with a soft-light overlay for accentuating the "roundness" of objects. In the scene-referred workflow these tended to cause some horrible blow-outs of highlights and similar problems, and I couldn't tell why. From the content of the email, I think I now know why at least, even if I don't have a replacement module figured out yet.
Yeah, I'm in the same boat. Certain modules that I like are coded in a display-referred style, and now I need to find alternate ways of working. But there is an old post that Aurélien wrote which has suggestions for which modules to avoid, and what to use in their place. I'll have to dig that out again and re-read it.
When you follow Harry Durgin's workflow in his weekly edits, you get addicted to it... I really love the way he edits, but I guess DT has come a long way since he last did a weekly edit, so some of his workflow may be dated. (Newer, better modules like filmic + tone equalizer really do a lot of justice to our edits these days... along with the tone curve.)
The thing that frustrated me with Harry's videos was the lack of any monologue to explain WHY he reached for a given module or blend mode. But yes, he achieved great things with darktable.
Same problem here. Low- and high-pass filters are applied by default before filmic rgb, and seem to interfere with the dynamic range scaling of that module. If you move them up in the list of active modules (Ctrl+Shift and drag with the mouse; that changes the module order), they seem to work properly.
Yes, you can change the pipeline order, but that's something that only experienced users should do, which is why I don't mention it often. Unless you understand WHY you would want to alter the order (and clearly, you do), it's best not to mess with the default order.
Thanks for the interesting explanation, I can only agree with your words at the beginning and your style is very appropriate. Cool idea to include the email.
Thank you, and thank Aurélien. I always wondered about the two, and never really gave it much thought beyond the basics: one would be what a screen does with colour and luminance information, and the other would be what light does in the scene, manipulated as if you were still capturing the scene. I should also thank my physics teacher who, all those years ago, said light has some weird properties but is interesting. Between the three of you, and some reading and repeated reading, I stand a chance of understanding, rather than just thinking it makes sense.
"everyone knows that corrections done hardware on scene always look more organic" struck a chord with me. I'm retired now but used to teach photography and adobe Photoshop in further education. One principal I used to stress was to get as much right in the shoot (in the camera) as the end result after digital editing will be easier to achieve and more convincing. Aurélien seems (if I am understanding him correctly) to concur with that view. I guess scene referred should be more forgiving to the less experienced in post shoot editing. I agree with your comment that (paraphrased) not going completely overboard with any given module will keep most of us on the straight and narrow so to speak. Prior to this excellent discourse, I had a vague idea what scene referred was bout but this has clarified it and you have translated it intro layman's terms .superbly and I applaud your courage in tackling this rather technical aspect of dt. Many thanks.
Thanks Berny. I too was encouraged by an earlier mentor to do everything possible to get the image right "in-camera". That really does make the job of post-production so much easier! GIGO.
Great video once again. Thanks for that! What mostly scares me about scene-referred is that whenever I change cameras, I get different results from my editing. As an example, using a preset on an A7 camera will give different results on an A7III, even more so on an A7RIV, and much more again when I use another manufacturer, just because of the difference in dynamic range that most cameras have. To my understanding this is less so with display-referred, which may explain why LR works display-referred: consistency of results. In saying that, I love the concepts behind scene-referred. I am more of a creative person, and I have to read Aurelien's explanations 6 times before I am able to understand a bit of it LOL. He is such an inspiration. Every time there is something to learn from him. He doesn't waste any words. They are all important and useful. Anyway, thanks again for your tutorials. Cheers
Thanks Stefano! The thing about different cameras is interesting, although I wouldn't have expected it to mean you need a different approach for images from different bodies/sensors. At the end of the day, EVERY camera is going to produce RAW files which far and away exceed the capabilities of our monitors and display devices. So really, in all instances, you should be producing images with far greater range than needed, and then the output colour profile module would be delivering you the rendered version that will fit the intended display device.... right? Or am I missing something here?
Right timing. I was just going through the darktable manual and was looking at what modules to use and avoid with the scene-referred workflow. It's so confusing, and your explanation makes some sense of it.
@@audio2u I had to pause the video in between to read the letter, to understand what was meant. Yes, I will read it again. When I said what you said makes sense, I meant your interpretation of what these two workflows mean, and the contents of the letter. But I fail to understand the reason behind having both of these workflows. I am currently reading the manual. I am a slow learner and have to write down each and every word to learn anything. But correct me if I am wrong: this is supposed to be related to color spaces and different devices. So if that's the case, then why not keep the working profile the same as the output profile and do the image processing? For example, most of the monitors and devices used to view images on social media are using sRGB. Or am I missing something else?
No, you're on the right track! But sRGB is a limited color space. As I've said elsewhere, what we want is to work with the largest range of data available, right through the entire workflow, and only compress down to suit the final delivery format/device right at the last moment. Here's another analogy for you... Some movie producer shoots a film in 8k/32-bit colour. But today's cinemas can only display 4k/24-bit colour (I don't know if that's right; this is just a hypothetical scenario). Is he/she going to edit in 4k/24 and archive the final version of the movie in that format? Hell no! You would hope that they edit and colour grade at 8k/32, so that when we have devices in our homes which can read and display that range of data, they can pull out the archived version of the final edit, and create a new SFA-DVD (that's the Super F***ing Awesome Digital Video Disk, coming to a big box retailer near you real soon! 😃) from that high-res master. This is what plagued Blu-ray when it first appeared: all those movies shot at 720p for DVD being upscaled to 1080p. But that's just resolution. This discussion is about colour space. Hope this helps!
Thank you. I think I now understand the concept behind both terms. From the practical point of view… well, I can only say I will be paying attention to it (it's a hobby for me, and in fact I mostly edit the camera's JPGs, so go figure!). But if darktable is moving to scene-referred and linear processing by changing modules and suggesting modified workflows (as Aurélien seems committed to), maybe it's a great idea for your channel to explore those changes and help "re-educate" old users while introducing new ones. Sort of "if you've trained yourself to use tone curves, this is how you get along with this new tool; this is the same as that; this is how you (now) achieve that". It seems to me that the exploration involved in that "task" will not only be very useful to your audience, but will also put you in a much more comfortable position to keep on teaching it.
@@audio2u IT WAS a great idea then, just not new nor mine! xD (sorry, I can't be very subtle in English). I was trying to emphasise the "re-education" part, which is the subtext I got from Aurélien's email. What I can say I know about that is (1) that it's not that easy at our age, but (2) having to teach or explain to others helps a lot in getting a deeper understanding of the subject, which it seems to me we will need in order to track and appreciate the darktable of the future.
As I got it: display-referred is like processing the JPEG the camera offers, and scene-referred is genuine RAW processing in the background, while each step displayed is rendered to the color space of your display. What astonished me is the historical background, which made it a bit clearer. E.g. I had learned that a DSLR could only process 8 EV, so I'm surprised to hear that sensors are nowadays capable of 12-13 EV; a value range I had associated with film until now. So my understanding now is that scene-referred means developing and processing the 'old-fashioned' way of film. And that makes the processed image portable to any kind of display, including ones not thought of yet. Thx Bruce
Yeah, kind of. Using jpeg and RAW to describe this (SR v DR) could lead to confusion among less-experienced practitioners, though. It's quite a bit more nuanced than that.
@@audio2u Indeed, it's quite a bit more nuanced. My objective was finding a short key, an analogy, to get the knack of it. I expect less-experienced practitioners to keep asking on and on; as mentioned on Sesame Street: ask, ask, ask, .... :-))
Hi, it's really important to differentiate color and light terminology in more detail when talking about "a scene" and the image processing leading to a display and our eyes, to properly convey the concepts of luminance (absolute and relative luminance), chrominance and so on. I am writing my thesis on color management and ACES, and I strongly recommend checking out the International Lighting Vocabulary from the CIE, as well as "Colour Appearance Issues in Digital Video, HD/UHD, and D‑cinema" from Charles Poynton, and his FAQs on Colour and Gamma. It's a hard topic; glad people like you try to understand it and reach out to talk about it, while still being clear about your knowledge level. Best wishes
@@audio2u Yes, this is the only way - otherwise we wouldn't have any real progress and would just be lying to ourselves in the end, so I totally get your introduction point. Really sad that YouTube has so much of the content you described. Best wishes!
Hello Bruce. Thank you for this video, which has the merit of posing the problem. My darktable practice has evolved over the last 2-3 years. I am using the scene-referred workflow and I have automated a lot of modules (white balance, color calibration, filmic RGB, local contrast, exposure, denoise (profiled)). All this has two effects: having an acceptable image quickly, and being able to concentrate on a few modules to modify. I appreciate filmic for the recovery of blacks, whites and contrast... but I do not modify much more. I use the filmic companions such as the tone equalizer, but I have trouble with the color balance... in that case I continue to use modules such as color zones or the contrast equalizer (sorry Aurélien). With the automation I really feel like I'm efficient, and with the settings I'm getting closer to the result I expect. I just wanted to testify to a practice, and thank you again for your videos, which make me progress both in DT and in English.
Thank you for your work and knowledge, Bruce. I must admit that this subject is really confusing... At the end of the day, even if you work using the scene-referred workflow, every decision you make (brightness, colors, saturation, etc.) as you edit the image in Darktable is based on what you see on your screen, or, in other words, on the limited color space of your display... The equivalent in audio, I believe, would be to mix a song using small computer monitors with a limited frequency range instead of flat response near field monitors; but then again, you never know what people will use to listen to your song. Will it be mono or stereo? A high end audio system or an iPhone? Maybe, what matters is the visual end results on your display assuming it is properly calibrated. Also, if you want to print your photo, you still have to worry about the printer's color space to make sure it matches what you see on your screen. Anyway, this is a pretty difficult topic. Kudos for giving it a shot!
Hi Bruce! When I was using Lightroom (display-referred process), I was quite satisfied with the results, and so, it seems, are a lot of pro photographers worldwide who are still using Lightroom or Capture One Pro and so on... Are they silly? Maybe... I don't know. To be honest, I have nothing to compare to a pro; my photographic production has more to do with crap rather than "Art". But because it's a hobby (I have no clients and no turnover to achieve), it suits me. Today, I use darktable without fundamentally questioning my whole process. Coming from Lightroom, the learning curve seemed steep enough without adding the constraints of this new "scene-referred" process. *** By the way, I want to thank you Bruce, because your videos were a great help to me! For sure, I'm not alone in that case. *** So, as long as it is possible to use the "display-referred" process, I will be a happy user of darktable. My credo is (and will remain): less time wasted behind the computer screen is more time available to take pictures (or better: more time to take care of your family and friends)! But I can understand that some people see it another way.
Thanks for the kind words, Jerome. Yeah, as I said (in reply to someone else's comment), some photographers want to be technically correct, and some just want to create images that look pleasing to the eye, without regard for technical niceties. And that's the subjective nature of what we call art. No right or wrong here.
Disclaimer: I'm using dt 3.4.0. I'm using the scene-referred defaults, but the default pixel-pipe appears to be loading either non-scene-referred modules, or modules in the wrong order (??). For example, Sharpen loads. We're supposed to avoid it. And Highlight Reconstruction (6) and Color Calibration (7) are in the pipe before Filmic RGB (8). Exposure is (9) and White Balance (12), way past Filmic. Is that normal??? From the manual, I would have thought that the order should be Exposure, White Balance, Filmic, etc. Do I simply need to wait for 3.6? Is there a 3.5??? Wow, so many questions today... Thanks in advance!
First, version numbers are always even for stable releases, and odd for development builds. So, the current dev version is 3.5, and anyone can download and install it if they understand how to compile from source. As for module order, that was decided a long time ago by people much smarter than me. If there was an issue, it would have been found and fixed by now. Why would you expect to do exposure before white balance? That seems odd to me, because wb happens before demosaic. Things like sharpen, which are still based on display-referred algorithms, sit late in the pipeline for exactly that reason. By then, all the heavy lifting has been done (by those modules which do use scene-referred math), and it's not a problem.
@@audio2u First of all: good god man! Why are you answering messages at 6:00 a.m.??? :-) Secondly, I placed exposure before white balance based on some of Aurélien's comments and also the manual. I don't have the technical knowledge to understand why it's one way and not the other. Thanks for your answer, as always. www.darktable.org/usermanual/en/overview/workflow/edit-scene-referred/
Bruce, thanks for your "pop" explanation which certainly helps with the intellectual understanding of Aurélien's email. In my mind, much of the value of your darktable videos comes from the pragmatic approach that you take and the examples that you provide. In this case there are no examples, leaving me with a feeling of "That's nice, but so what?". It's probably me being a bit slow; but if not I would appreciate your feedback.
Yeah, I agonized over that. Whether there was anything that I could really demonstrate, and in the end, I determined that there wasn't. If there was, it would be something outside of my understanding. I feel like this whole DR/SR debate is quite academic, to be honest. But again, that might just be because of my limited understanding. Thing is, I absolutely could talk you under the table about the audio equivalent! High bit rate audio is something I DO understand, and I could talk confidently about that topic. And I know that there are parallels here, but the differences are so important, that I don't want to misrepresent them. Put it in the category of Rumsfeldian "known unknowns".
@@audio2u Thanks Bruce. You are obviously not one of those audio specialists who tells me what I should do to make a marginal improvement to the high frequencies from my sound equipment, without recognizing that I am no spring chicken, and my hearing is not as acute as it used to be. This discussion of display-referred vs scene-referred is helpful, despite its lack of a clear conclusion. Before putting it to bed, it would be useful to ask the theoreticians where the difference would be most obvious. Perhaps in rendering extremely high-contrast scenes? Or for low-contrast images? Or perhaps in pictures that have a large colour range?
As Aurélien stated in his email, things like blending modes, adding blur, and other alpha compositing techniques tend to get the most benefit from working with a larger range of data. I'll have to take him at his word.... It's above my pay grade! 😃 As for hearing loss, yep, that's one of the unavoidable downsides of getting older. Nothing you can do to reverse that, sadly. But high bit depth audio is not about "including more frequencies"; it's about maintaining greater fidelity in the really small variations in amplitude during the production workflow. In much the same way as Aurélien talks about those parts of image manipulation which benefit from a scene-referred workflow, keeping high bit depth to your audio during production means we can avoid horrible artifacts like truncation distortion (which can be minimized through the addition of dither), and so on and so forth. Like I said, I know this topic well, and could bore you senseless with it! Anyways, it's 02:30 now, and I'd like to get back to sleep! Later!
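To make the dither point concrete, here's a minimal Python sketch (my own toy example with made-up numbers, not anything from the video): quantize a very quiet sine wave to 8 bits with and without TPDF dither, and compare how the error behaves.

import numpy as np

fs = 48000
t = np.arange(fs) / fs
signal = 0.01 * np.sin(2 * np.pi * 440 * t)         # a very quiet 440 Hz tone

step = 2.0 / 2 ** 8                                  # quantisation step at 8 bits

truncated = np.floor(signal / step) * step           # truncation: no dither

tpdf = (np.random.rand(fs) - np.random.rand(fs)) * step   # triangular-PDF dither
dithered = np.floor((signal + tpdf) / step) * step

# The truncation error tracks the signal (audible distortion); the dithered
# error is close to uncorrelated with it (benign, noise-like).
print(np.corrcoef(signal, truncated - signal)[0, 1])
print(np.corrcoef(signal, dithered - signal)[0, 1])

The exact numbers vary from run to run, but the truncation error stays visibly correlated with the signal, while the dithered error hovers near zero.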
Haha! I would probably become engrossed and NOT get to sleep! I loved chemistry as a teenager. At 15, I could recite the first 50 elements of the periodic table... In order! Couldn't now, though.
Hi Bruce, I got a question. In order to get the full advantage of the color space and dynamic range in a scene-referred workflow, should I shoot my photos (in camera) in Adobe RGB mode or sRGB mode? Cheers from Chile!
Another analogy for the difference between display- and scene-referred workflows could be comparing an SLR, where you decide the ISO of every image when buying the film, to a DSLR, where every image can be shot with an individual ISO. Just my two cents.
The way I understand it, you have to limit the amount of data you export in the end to be able to display it on the screen, as the screen can't show more colors than that. The difference with scene referred is that the limitation happens later in the pipeline, meaning all the modules you use before that happens will be able to work with somewhat more precise math. If you are doing a lot to your image, or if you're working with a wide dynamic range, it might make a difference and you might get a little more from your images. But for most cases, if you prefer the "old way", that should be fine. I'd also expect what you said about mixing scene and display referred modules to be the case, especially if the scene referred modules are applied before the display referred ones. BUT, seeing the part about 0% black, 50% grey and 100% white, it's possible that some display referred modules will expect that distribution of values and if it isn't true, they might render unexpected results in the final image. I know that combining some of the newer scene referred modules with shadows and highlights will render out a purely black image in some cases. :D
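A tiny Python sketch of that "limitation happens later" point (my own toy numbers, not darktable code): the same -1 EV exposure edit, with the clamp to the display's [0, 1] range applied either before or after it.

import numpy as np

scene = np.array([0.2, 0.9, 1.8, 3.5])    # linear scene values; > 1.0 are bright highlights

clip_first = np.clip(scene, 0, 1) * 0.5   # display-referred order: clamp, then -1 EV
clip_last  = np.clip(scene * 0.5, 0, 1)   # scene-referred order: -1 EV, then clamp

print(clip_first)   # [0.1  0.45 0.5  0.5 ] - the two highlights have become identical
print(clip_last)    # [0.1  0.45 0.9  1.  ] - their difference survived the edit

Clamp early and the two highlight values are gone for good; clamp late and the edit still had real data to work with.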
Thanks for a great explanation. I do astrophotography and stack then process in a program called SIRIL in "fits" 32 bit colour. Then convert to "tiff" 32 bit for a little denoise and final touch up in Darktable. However I have found that I have to "dumb down" my images to at least 16 bit to display them or share with friends via the internet. If I save them in 32 bit they look terrible in other display programs that only use 8 or 16 bit colour. I now have a better understanding of why I have to do this.
Bruce, I would strongly suggest looking up a video by Blender Guru about the Filmic module in Blender. Yes, the 3D tool Blender has a similar approach to dynamic range as darktable, using a filmic, scene-referred workflow. I've been using darktable for about 3 years and couldn't really visualize what filmic actually does; I had more or less an idea, but could not see it in reality. Blender Guru's filmic explanation made everything clear for me....
Thanks for all the great videos Bruce. I've just seen this one and it does help to clarify the differences between display and scene referred. One question if I may. I notice that if you hover the mouse over each module in Darktable it says it is either display or scene referred. Is it best to try and match the modules used to whichever of the two I am using? Thanks
Thanks Alan. Not necessarily. You really want to try to use modules which use a scene-referred algorithm as much as possible. If you do use display-referred, try not to make radical adjustments, as you will more than likely see strange artifacts in your image.
ChatGPT gave me this: Let me explain the difference between display-referred and scene-referred in the context of Darktable software.

1. Display-referred in Darktable: When you choose display-referred in Darktable, it means the software is adjusting the image based on how it will look on your computer screen or other display devices. It's like Darktable is wearing special glasses to make sure the picture looks good on your screen. The adjustments are made with the display in mind, and sometimes details in very bright or very dark areas may be sacrificed to make the overall image look pleasing on your display.

2. Scene-referred in Darktable: On the other hand, when you choose scene-referred in Darktable, the software is aiming to process the image based on the actual scene and the data captured by your camera. It's like Darktable is trying to understand the magical world in your photo without worrying too much about how it will look on your specific display. This mode often preserves more details in both bright and dark areas, giving you more control over the adjustments.

In Darktable, you can choose between these two approaches based on your preferences and the specific requirements of your editing workflow. If you want to prioritize accurate representation of the captured scene, you might lean towards scene-referred. If you want the image to look good on your display, you might opt for display-referred. Each option has its advantages, and it's like choosing between different ways of viewing the magical world your camera has captured.
It's not wrong, but the whole point of scene-referred editing is that you are working with higher bit-depth data, which reduces the potential for aliasing, and other visual artifacts.
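A quick sketch of that bit-depth point in Python (my own illustrative example): apply the same strong brightening curve to a dark, smooth gradient kept in floating point, versus one that was first stored as 8-bit integers.

import numpy as np

gradient = np.linspace(0, 0.1, 1000)                     # dark, smooth ramp

lifted_float = gradient ** 0.3                           # strong lift at full precision
lifted_8bit  = (np.round(gradient * 255) / 255) ** 0.3   # same lift after 8-bit storage

print(len(np.unique(lifted_float)))   # 1000 distinct levels: still smooth
print(len(np.unique(lifted_8bit)))    # ~27 distinct levels: visible banding

The 8-bit version only ever had about 26 codes to describe that dark ramp, so pushing it hard exposes the steps.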
Part 2: Bruce W. 2. So assuming that one has now learnt how to take decent photos that are not significantly overexposed or underexposed, one has to either read through the darktable manual, or watch YouTube videos. I must add, before YouTube, we had manuals, and I recall in the good old days we did not have to watch videos to learn, or rifle through blogs. In the good old days - WordPerfect, Corel Draw, etc. - all we had to do was read the manual, or attend a class taught by people like me who had read the manual enough to be able to teach others. I recognise that we all have different skills: some are good at software development like Aurelien (I also come from a software development career - but not in C++ or C like Aurelien), others like you are great at videos, and some are excellent at writing manuals. In spite of all the prior efforts with darktable's manual, which is a great reference, I can deduce that people like you and I, and any others who share this interest, need to contribute to the darktable manual to include a simple section that explains to beginners how to go from point A to B, with all the key steps involved in editing a photo from RAW to amazing. This way anyone who needs the help finds it where it should be - right there in the manual, rather than scrounging all over the Internet trying to make sense of it all. This simple tutorial, with maybe no more than the 2 basic workflow approaches in darktable - scene-referred or display-referred - could also be accompanied by one or two videos, posted on a YouTube channel associated with darktable - ideally one owned and managed by the darktable community, and contributed to by people such as you and I. This way, every major release, we can refresh the beginner's guide and the associated videos. These one or two videos would be part of the "official" darktable release - published not long after the software is released. Nothing extensive - just the basics - but enough to ensure anyone can take a picture, import the RAW version, and arrive at an amazing result, in no more than 5 minutes (with practice). Which still gives room for people like you to create more elaborate or involved tutorials of the same basics, or extended material on advanced topics, on your own YouTube channels. If we can provide this kind of much needed support, I can see darktable becoming the most used picture editing tool in the world. Once anyone can read the introductory sections of the manual, and maybe also watch these one or two videos that are part of the "official" release - i.e. investing no more than about 1 hour of education - and always come out with fantastic results, we should find that many more people will use darktable, cos a few of us who have been successful with darktable have paved the way for others. As people like you and Aurelien have made it easier (not easy) for me to become proficient in darktable, I would like to make a small contribution, working with people like you, to: 1. Improve the manual by adding a much needed, effective tutorial. 2. Work with video content creators like you, to create a video version of the tutorial, to be posted on a darktable official YouTube channel and/or on Vimeo, each time a major release of darktable is due.
I hope we can achieve this together, "translating" darktable from what is viewed as a complex tool to one which people can use with ease - not by changing darktable, but by improving the educational material that is released with darktable, and producing it to the same high standard as the remainder of each darktable release (the web site, the software, and the rest of the reference manual) - these things being already pretty good. I think there is also room for some of us to get involved to solve some of the quality control issues with darktable's releases, especially on popular operating systems like Windows. I use Windows exclusively - it's a lot easier than tinkering with Linux (for which I do have the skills and abilities, if I was hell bent on giving myself pain - my education to degree level was in Computer Science and I have managed a few Unix servers in my time). I observed that in release 3.4 the manual was out of sync with the software, with changes to the naming of modules which were not reflected in the manual. It took me about an hour to figure out - by accident - that some modules I was familiar with in version 3.0 had been renamed in version 3.4, but there was no mention of this in the release notes on the darktable web site, or in the version 3.4 manual. None. These are the kinds of inconsistencies which give darktable a bad name. The modules I refer to are Denoise - Non Local and Denoise - Bilateral, which were renamed to Astrophoto Denoise and Surface Blur - something you can independently verify. So in closing, darktable is a great tool. In a similar manner to the great work done by Aurelien, who had been a user for many years before learning to code and contribute to darktable, we too can become that final glue, fill in some of the gaps in darktable, and make it appeal to a much wider audience, especially on popular platforms like Windows and Mac. My early background was in computer education, teaching people how to use software. I do hope we can do something together, as described above. I must say Aurelien has already done a great job of explaining to techies like us; now we would do well to simplify and explain darktable to the rest of the world, and newbies in particular. Simple, easy, hand-holding, no complex vocabulary. Look forward to your response.
Wow, now I've forgotten what you covered in your FIRST post! 😃 As for the concept of an official darktable channel... I don't see that happening, simply because the core development team for darktable is about 6 people. That's it! There's a bunch of other contributors, sure, but the core is only half a dozen people. And most of them work other jobs to keep a roof over their heads (like I do). I would love to make enough money from this to go full time, but that's unlikely to happen any time soon. So, until then, the community is stuck with the likes of yours truly and Rico and Boris and anyone else who wants to throw their hat in the ring. As for the beginner's videos, I aim to do a new video along those lines with each major release. I'm not perfect, but I do what I can. As I've said in the past, this is my way of giving back to the community.
Bruce has done those basic workflows... Frank Walsh has done them, and there is a channel run by a guy named Hal, I think, called darktable a-z, that has run through every recent module in a fair bit of detail. I get that it is not easy. FWIW I use the pixls.us forum as a great resource. If you want to stay on top of it, that is the place to be. And about 5 YouTube content providers pretty much cover all that you could need IMO, from creative to technical. I also use Rawpedia for basic concepts... it's a manual for RT, but more like what you are asking for in DT. I don't think the dev team is rushing anytime soon to create a how-to guide. I think if you get grounded a bit in color theory, then the tools in DT make far more sense and are easier to pick up, but in the end it's a technical tool made by technical people for technical people, and I don't see a major shift from that.....
@@audio2u I installed version 3.4 a few days ago, on a Windows 10 laptop with limited performance - only 8GB RAM, an old spinning hard drive and an i5 dual core, bought new in 2013 - having been on a few months' break from photography. Compared to version 3.0, which was a bit of a nightmare if I am honest, 3.4 is so much more polished, and pretty stable - only one crash so far. What I am so pleased with is that as long as my composition and exposure were good, within a few minutes I could take any photo and end up with a good result using the scene-referred approach, and thereafter any further adjustments were just polishing and artistic liberty. Scene-referred with filmic rgb is such a phenomenal way to develop the image in just a few minutes, using only a few tweaks to about 4 or 5 modules.
1. Exposure module for overall brightness/contrast (setting the black point).
2. Filmic RGB to "develop" the image.
3. Color balance to add much needed saturation - which seems essential for my tastes - I use the output saturation slider for this.
4. Tone equalizer to shape highlights and shadows as needed, or to lighten or darken mid-tones; typically point and click on an area of the image and adjust by dragging right on the image.
5. Then back to color balance, especially to address things like the look and feel (am I going for contrasting colors or complementary? - a lot of this is akin to the kind of thing that the HSL tool in Adobe Lightroom does), plus exposure and filmic RGB for a second round of final tweaks.
That has now become my entire workflow, or 90% of it. Sometimes contrast equalizer and color zones for any extra look and feel and polish. The following two links have really helped me. The 1st is the English version of Aurélien's seminal dissertation on the scene-referred workflow - this is the bible, or the Genesis chapter. It all starts here. pixls.us/articles/darktable-3-rgb-or-lab-which-modules-help/ The next really important tool was understanding how best to use color in an efficient and effective way, to create my own style in images. For that, this video is the best I have ever seen. Phenomenal information which, once digested over about a week of several listens, totally transformed how I think about images: from just the purely scientific, to appreciating that the best images are not the scientifically accurate ones, but ones which follow rules well known by all the great painters, about how certain color combinations are either more pleasing to the eye or make us pay far too much attention to the conflicts in the image - and one can use this knowledge to define the final look desired. Once I knew the rules, it was much easier to apply them in the relevant darktable modules like color balance and color zones; some may also explore the color correction module, which I tend not to use anymore cos it's a rather broad-strokes approach to achieving split toning (albeit there is also a dedicated split-toning module in darktable, which I never use). ua-cam.com/video/RfQX4w711MI/v-deo.html I'll do my best to keep in touch
The darktable user manual already has a page describing how to use the scene referred workflow and what modules should or shouldn't be used with it. You can then click on the names of those modules to get further information on using that specific module.
What I get from this is not an easy answer that makes me able to easily decide to use one or the other. Rather, it makes me think (learn) about the whole digital editing process, and makes me think about what the end product should be. Will my picture primarily be presented on a screen (choose display-referred, despite the fact that displays vary), or as a print (choose scene-referred, after having learned how my printer works, or how the print shop I prefer works)? These are elements in a learning process, not a quick fix for a specific editing problem.
I think one aspect that Aurelien comes back to is that scene-referred edits will hold up when HDR monitors are the norm, and they are knocking at the door. Expensive now, but that may change quickly, as technology does. So if you have a way to edit that accommodates that now, why not use it, rather than edit to SDR and then perhaps have to do it all over? On the other hand, that may not have any utility for some people, while for others it has merit.
I just thought about a good analogy for dynamic range. Imagine you have a fish eye lens on your camera. The angle compares to the full dynamic range of your camera. Now switch the lens with a tele lens. You won't see the same as you saw with the fish eye. There is stuff missing on the top, the bottom, to the left and right. Now there would be two things you can do.
1. Move the camera around until you have what is important to you in the frame. You are still missing out on the surroundings, but you have your main object. That would map to moving in your dynamic range without compressing your dynamic range into the smaller one of your monitor.
2. You grab your camera and move further away until you are far enough away to cover the same scene as with the fish eye. That would mean you compress the dynamic range to the one of your output device.
Great explanation Bruce. Much appreciated how open minded you are, also with regard to your own knowledge. Still not completely sure what scene referred means to me when using darktable :)
I guess to summarise, I'd say you don't process differently from a mindset point-of-view... But the modules you will use ARE behaving slightly differently, and giving you a much better end result because of it.
The reason why I said applying a display-referred mindset to a scene-referred workflow doesn't work has to do with the fact that "white" and "black" have a fluid meaning in scene-referred. I have read many people who did not understand why, in the scene-referred workflow, exposure has by default a +0.5 EV boost, and why they are encouraged to tweak exposure to brighten midtones when "they exposed correctly in camera". Somehow, in their head, exposure in camera has some standard reference value, while it's really just like a microphone gain: set it as high as possible to limit noise, but low enough to prevent clipping. There is no right or wrong here, only different priorities when setting the trade-off. This has nothing to do with right or wrong exposure, but everything to do with managing both ends of the dynamic range at the same time in the current conditions, which is kinda new, because display-referred has fixed bounds (0-100%) and all the changes happen in the middle. The fluidity of the values of black (as in the lowest bound of the DR) and white (as in the highest bound of the DR), and the fact that we anchor middle-grey instead (because we know all displays can display at least middle-grey), has confused many users, especially the most experienced. And exposure (in software) is meant to match all middle-greys across the pipe, while camera exposure is a trade-off to fit within the sensor DR. Display-referred is more rigid, and perhaps users feel more guided (or constricted) by it because of that.
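To see how small that idea really is in code, here's a minimal sketch (my own illustration, not darktable's implementation): in a linear, scene-referred pipe, exposure is just a gain of 2^EV, exactly like the microphone preamp in the analogy, and nothing pins white to any particular number.

import numpy as np

def exposure(rgb, ev):
    # scene-referred exposure: a pure gain, no clamping, no fixed white point
    return rgb * (2.0 ** ev)

grey = np.array([0.18, 0.18, 0.18])   # linear middle grey
print(exposure(grey, +0.5))           # ~0.255: the default +0.5 EV boost
print(exposure(grey, +3.0))           # 1.44: values above 1.0 are fine, "white" is fluid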
This video and your comment here are the best explanations I've read/heard, but this is all abstract and theoretical. I still have no idea what I'm supposed to do with this information. I think I need practical demonstrations before I'll ever fully understand it. On top of that, I do IR photos, and my workflow generally involves completely changing the order of modules and using unorthodox settings, so I'm not sure if any of this even applies to me.
@@qwertyasdf66
1. On-camera, don't set the exposure such that the preview looks good (the preview is a processed JPEG; the histogram is that of the preview), but to avoid highlights clipping. Meaning don't hesitate to underexpose. You will need trial and error between camera and raw editing software to guess how the lightmeter works with highlights.
2. In software, set global exposure such that middle-tones look bright enough, with no care for highlights clipping.
3. In filmic, set the white exposure to the maximum value of your dynamic range such that highlights get unclipped.
4. In general, stop looking for an absolute definition of exposure.
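A toy Python sketch of step 3 (my own illustration; real filmic also applies an S-shaped contrast curve, so this is only the skeleton): the white relative exposure decides how many EV above middle grey get squeezed into the display's [0, 1].

import numpy as np

def filmic_like(rgb, grey=0.18, white_ev=4.0, black_ev=-8.0):
    # log-encode EV distance from middle grey, then normalise to [0, 1]
    ev = np.log2(np.maximum(rgb, 1e-9) / grey)
    return np.clip((ev - black_ev) / (white_ev - black_ev), 0.0, 1.0)

pixels = np.array([0.18, 1.5, 2.8])            # grey plus two bright highlights
print(filmic_like(pixels, white_ev=2.5))       # both highlights clip to the same 1.0
print(filmic_like(pixels, white_ev=4.5))       # more headroom: they stay distinct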
@ Oooh thank you for that! It makes so much sense now. Does this mean that we _have_ to use filmic then, and it needs to be quite late in the module stack? That's normally the first module I turn off...
More like this... watch this... it's in the context of Blender, but it is what all this is about: staying as faithful to the original light as possible... ua-cam.com/video/m9AT7H4GGrA/v-deo.html
First, fantastic video, and it finally clears up device/scenic for me!! Second, is there a list of which modules are which? I can easily stay away from device; as a newbie to dt I have no "device habits" that need to be broken.
Aurélien wrote a post over at discuss.pixls.us back around the time that 3.2 came out, and that post outlined which modules to avoid, and which modules to use in their place. If I can find the link, I'll add it to the video description here.
As a sound engineer, you can easily make a parallel with sound editing. When you work, you always want to work on the best quality sound possible (in terms of bandwidth, kHz, etc... I'm not into sound, sorry, but I hope you get the point), so that when you "export" the final result for a medium (CD, tape, radio...), you always get the best result. You might then tweak for some specific medium, though. But your edit will use as much data as the original can give. This is, for me, the same as scene-referred vs display-referred. It is also the same for video: when you want to display on an HDTV set, you will want to edit on 4k or 8k footage, not on a 1080p file... This is a bit simplified, but I guess this is the general direction.
I think the explanation makes a lot of sense. However, the pragmatic side of this is not helpful (up to version 3.4). There are a few modules that only work in display (I'm looking at you, local contrast), so even if I want to go one way, it's not 100% doable until all modules are on the same page. I have a question on this subject, I've noticed that version 3.4 has added a "blend scene" option, so even "display" modules can have "scene" blends, but all blend modes change completely. What's the deal with that?
@@audio2u I think it relates to what the module can accept as input, and perhaps what it outputs... they do change... Just looking at what the default channels are in a parametric mask - Lab for some modules and jzh for others - so I believe it is tied to that...
@@emrg777 I was thinking the same. But "thinking" and having Bruce (and maybe Aurelien) actually explain it properly are very different things. Thanks @Bruce! Hope it bugs you enough to have new material for new videos :D
Many blending modes rely on the assumption that grey is anchored at 50% and white at 100%, meaning RGB is display-referred and non-linear. For example: overlay, soft light, hard light and screen. (See en.wikipedia.org/wiki/Blend_modes for the equations; you don't need to understand them to see that they do something different for values greater or lower than 0.5, and that whenever they do 1 - a, you end up with a negative RGB output if the input > 1, which is bad.) Blending in non-linear is dangerous in the first place (it's very difficult to mask seamlessly; you will always get halos and harder transitions), so these don't really make sense from the beginning. But, obviously, in scene-referred the 50% and 100% threshold assumptions are voided fair and square, so you can't use them at all in general. In scene-referred, all you can do is basic arithmetic operations.
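Those equations are short enough to write out (these are the standard per-channel definitions from the Wikipedia page above; the test values are my own):

def screen(a, b):
    return 1 - (1 - a) * (1 - b)

def overlay(a, b):
    # branches at 0.5: it assumes grey is anchored there
    return 2 * a * b if a < 0.5 else 1 - 2 * (1 - a) * (1 - b)

print(screen(0.4, 0.6))   # 0.76: well-behaved inside [0, 1]
print(screen(3.0, 2.0))   # -1.0: scene-referred highlights produce negative "light"
print(overlay(1.8, 0.6))  # 1.64: and the 0.5 grey threshold no longer means anything

Feed them display-referred values and they behave; feed them unbounded scene-referred values and you get results that are meaningless as light.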
Thing is, it's very much down to each photographer to determine for themselves whether they want to produce art that is technically correct, or simply pursue a creative approach and "science be damned". And I'm ok with whichever choice someone makes. That's the beauty of art!
darktable has so many modules. It is therefore not always easy to choose the right ones. I limit myself to just a few and get along very well with that. I was not previously aware that there is a difference between display-referred and scene-referred. So thank you very much for your explanations!
@@audio2u I try it in English. Darktable has so many modules. It is therefore not always easy to choose the right ones. I also limit myself to a few and get along very well with it. I was not yet aware that there is a difference between display-referred and scene-referred. Therefore, thank you very much for your explanations! :-)
Ah! Yes, the number of modules available is impressive. And I'm glad if I helped you understand the difference between display-referred and scene-referred!
Okay Bruce, have a laugh... I set YT to a playback speed of 0.5 to read AP's email. I set a time point to start a bit before that started, which means I caught a bit of you speaking at half speed... let's just say it's a pretty good simulation for if you ever do a video after some pretty serious drinking :)
Nice video Bruce. In the end, I think one important point you made is that many people, unless they use a million modules, push them hard, and perhaps do a lot of masking and blending/blurring, maybe won't be impacted. On the other hand, setting that aside, the scene-referred workflow does have more data, as you stress, and what that "more" data does, especially if you work with it linearly, is keep more of the light - and thus a more accurate representation of the scene - longer into your workflow. As I understand it, by that same notion you are making your edits on a more accurate depiction of the "scene" as you edit. I think this video, for me, was a good explanation of why you might do that, and it introduces the concept of filmic tone mapping as a component of managing this light, this scene-referred data. ua-cam.com/video/m9AT7H4GGrA/v-deo.html
Hi, you probably look younger. Clearly, Aurelien's explanation doesn't leave a professional audio engineer confident about having understood. It just boils down to the numbers. Display-referred: interval [0,1]. Scene-referred: interval [0,inf), proportional to light intensity. The consequences are not obvious. Numbers behave differently in those intervals: if you multiply two numbers in [0,1], you get a smaller number, whereas in [1,inf) you get a bigger number, much bigger and meaningless as light intensity, so some blending modes make no sense. All you can do with those numbers is multiply by values which are not measures of light, but scaling factors. And if an operation guarantees that 0.5 in the input stays at 0.5, that's meaningless there. I don't understand a lot more than this, but I think the only way to go deeper is to see examples of what operations make some modules display-referred. I actually understand more about audio too. In audio, we look at variations in air pressure; we don't care about the absolute air pressure. But the [-1,1] vs [-32768,32767] (or whatever) difference also has consequences. For instance, if you multiply two signals in [-1,1] you get a really crazy effect, amplitude modulation, Darth Vader type voice, because those numbers you multiplied, having an absolute value smaller than 1, all moved closer to 0, and you lost all the voice and kept only the consonants, breathing, noise and crazy high clicks. If the signals weren't represented in [-1,1], the effect would sound different, however you scaled it.
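The interval point is easy to check numerically (my own throwaway example):

import numpy as np

a = np.array([0.9, 0.5, 0.1])
print(a * a)    # [0.81 0.25 0.01] - in [0, 1], products shrink toward 0

b = np.array([9.0, 5.0, 1.1])
print(b * b)    # [81.   25.    1.21] - above 1, the same operation explodes

Same multiplication, completely different behaviour, purely because of where the encoding puts the numbers.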
The fact that you are always transparent about your own knowledge is a big part of why I love your videos.
Thanks! I've always assumed that if it bugs me, it probably bugs others as well (when youtubers try to make out that they know things they don't).
Totally agree.
Hi Bruce. Thanks for putting more of these videos online. I for one am glad that you are finding new topics. And I appreciate that you tell us when you are not familiar with or do not necessarily understand something within darktable. I don't feel as dumb that way! This video was REALLY useful. The soccer analogy worked for me! Although I am French-Canadian and have watched some of Aurélien's original videos in French, admittedly I do not have enough understanding of the concepts to benefit fully from them. This is where your videos are really useful. Keep up the great work. I look forward to the next one.
Thanks Patrice!
Love how clearly you broke it down while staying transparent and not touching the areas you are not too knowledgeable in. Big ups!
Thanks. I try to be honest about what I know and what I don't.
I feel like your football analogy can also accurately describe what your words are doing to that email, but the goal posts are my ears. Your honest explanations are what originally drew me to this channel and are always much appreciated. Thanks a bunch for tackling such a complicated subject.
Thanks Dennis.
That is one of the best explanations of this topic I have seen so far. Thank you for including Aurélien's email as it points out some of the historic aspects why display-referred was even a thing in the first place.
Thanks! I'm not sure if it was my explanation or Aurélien's email (probably the latter!) but glad it helped! 😃
Thank you! Now I have at least some understanding about the difference between these two approaches. And as I understand it, it is important to stay with the scene-referred workflow until the end of the editing because the moment you use a display-referred module, you go "through the bottle neck" and continue your work with a reduced set of data.
Yep, exactly. You want to stay with the wider data set for as long as possible throughout the production workflow.
I agree about Aurelien, he appears to be a great guy and super smart. Unfortunately there are times I feel like I am 6 years old trying to grasp all of his concepts. You made an analogy to audio work a while back about keeping your work in a format with the most dynamic capabilities all the way to the very end of the work flow at which point the song is played on AM radio. With my take on your analogy give the final conversion all the information possible to still give the cleanest product possible. Thanks Bruce.
At the end of the day, I think this is still the right mindset. Keep the highest quality data set available during the production/workflow, so that the final (archival) master contains the full range of values. Then, you can derive as many export versions as you need at any time you like, without penalty.
@@audio2u Bruce, for me the difference or the advantage of scene over display is your explanation about raw and jpeg.
And how cool it is to work with a raw years later, when the overall technology or possibilities have improved. You gave one example with one of your cameras, where you worked on an old raw, somewhere in the beginning or middle of your series.
What Jim said also explains it for me.
If I understand it correctly, the benefit of scene-referred is more or less an overall standard, which gives us the possibility to fit a new module into an old workflow without messing everything up, because it is all based on one idea which is independent of a specific viewport (DVD, monitor, film ...)
Yeah, you're referring to episode 018 where I put the raw file up for people to process, and a guy (whose full name I don't recall right now, but I know it had Elvis in there somewhere!) created a fantastic result out of what I believed to be a supported RAW file.
Scene-referred, as previously stated, gives us the ability to work in a high bit depth right through the editing process, so we can work with the greatest quality data right up until the point of export.
Bruce and Peter - THANK YOU! I created a darktable module preset and called it scenic, with a star icon. I moved it to the front and included the modules recommended and suggested in the user manual link which Peter provided. I created a second group of "Use with care" modules. Thanks to you both, I am good to go with modules I don't have to worry about. PS I am also now online with the most up-to-date manual, again thanks to Peter - forgot that was all online.
Great stuff!
Hi Bruce. Firstly, I have to say I love your channel and presentation style.
As a teacher of Lightroom, CameraRaw, Photoshop and Raw Therapee though, I have to say that I struggle to see the relevance of this argument. Yes, a display referred workflow is nuts - totally.
But this whole scene referred vs display referred argument only arises when people are discussing DARKTABLE.
This has led to a seemingly widespread belief that Lightroom for instance (so also Camera Raw) and Raw Therapee somehow have a display-referred (or at least 'inferior'-referred) workflow - which is indeed 100% WRONG.
The crux of the argument is "should I chuck all my captured colours and tones away, apart from those that my monitor can display (that would constitute display-referred), or not".
Lightroom/ACR has a fixed working space of ProPhotoRGB, as does Raw Therapee.
Lightroom/ACR and Raw Therapee allow the user to opt for Adobe effects input profiles, custom .icc or .dcp input profiles, or OEM input profiles in either .icc, .icm or .dcp formats.
Therefore, they allow the user the maximum flexibility of 'sensor faithfulness' - or, if you like, 'scene referred' tone and colour flexibility.
Darktable ships with a working space of Rec2020, which is considerably smaller than ProPhoto - what's that all about?
It does not offer the facility to deploy ANY .dcp input profile - this omission isn't the best idea the developers ever had.
All it does offer as input profile options are 'mystery meat' standard colour matrix, or sometimes advanced colour matrix, or the option to use a custom .icc profile - not as versatile as a .dcp profile.
Lightroom/ACR and Raw Therapee all conform to what might be referred to as a 'scene referred workflow', and anyone using them is adopting this style of workflow, perhaps without realizing it.
As someone who's been at this photography game for a living since 1978, I can honestly say that before looking at DT, I thought this entire argument had been put to bed decades ago.
Why it's being resurrected now, and only with reference to DT, I can't begin to comprehend.
It's leading to a shed load of confusion and misdirection/misinformation being peddled around the internet etc about other raw processors, and frankly that needs to stop.
Other than the omissions etc mentioned though, and a few general GUI gripes, I have to say that DT is very good, possibly the most flexible raw processor out there.
Thanks for all the work you do on your channel Bruce; I wouldn't have made it past first base on DT without your content!
Andy,
Thanks so much for your comments.
I saw this yesterday, but wanted to have the time to sit down, re-read it, and try and put a well-constructed answer together.
I will confess that a lot of the information I put forward in this video came from comments from one particular developer, a guy named Aurélien Pierre (he also has a youtube channel), who seems to know his stuff when it comes to colour science. I have a very limited understanding of some of the stuff that Aurélien talks about, but I AM trying to absorb more information on the subject.
It's been a while since I recorded this video, so I will admit that I don't recall everything I said in it. Hopefully, I didn't imply that other RAW processors were using a display-referred workflow.
As for why this argument, which to quote you should have "been put to bed decades ago", is resurfacing now, and primarily in relation to darktable (by the way, darktable is ALWAYS spelled lower case... the devs specifically mention this on darktable.org): it's probably because some of the older pixel-processing modules used such a workflow.
Aurélien spent hundreds of hours over the last few years trying to replace all of the modules that did use display-referred algorithms, with new modules which used scene-referred math.
His principal argument (and here I am paraphrasing, so shoot ME if I have misquoted him) seemed to be that doing any extreme pixel-pushing in a display-referred module would invariably lead to nasty artefacts. And I can attest to that, because I've done exactly that... pushed some of those old modules too hard, and have seen some very nasty side effects because of it.
Cheers.
@@audio2u Hi Bruce! As far as I'm concerned, you yourself have never said a word on the subject that I would disagree with. But there are other DT "trainers" on UA-cam who peddle the false info for whatever reason. This has spread onto a number of forums, and seriously, it makes my blood boil - it's very much as you mention at the beginning of the video when you hear people talking about audio!
So I actually get a bit of a downer whenever I see this question/discussion raised.
Bruce F, Martyn E, Dan Margulis and Digital Dog all covered the madness of display/device-referred workflows years ago. Yes, a scene-referred workflow MUST terminate with conversion to a device-referred output, be it an 8-bit sRGB jpeg or a colour-managed print, but that's usually done via a relative colorimetric conversion engine, like ACE, or by soft proofing.
Every time the question/discussion arises, less knowledgeable folk get confused - and in reality the whole subject shouldn't really be mentioned at all. But DT makes a very distinct point of invoking it all over again, so it's a bit of a Catch-22!
In a nutshell, if your working space and input space/profile are large enough to cope with the trillions of colours that can be recorded by a 12, 14 or 16-bit sensor, and all your tools are capable of handling that level of data - which Lr/ACR/RT/COP etc can all do - then you are using a 'scene' referred workflow of some form or other.
Rest assured Bruce, in my eyes you've said nothing wrong; you just omitted to point this out.
BTW, I found out what the 'mystery meat' standard colour matrix actually is yesterday - it's equal to/a clone of the Adobe Standard input profile - go figure!
Thanks very much for taking the time to reply Bruce - stay well Guv'nor.
Thanks for the science lesson, Andy! Much appreciated.
Thanks Bruce, I switched to a scene-referred workflow because I got the impression that it was "better", and now I understand why that is (kind of). The issue I've come across is that I always used to use high- and low-pass filters with overlays: high pass for sharpening, and low pass with a soft-light overlay for accentuating the "roundness" of objects. In the scene-referred workflow these tended to cause some horrible blow-outs of highlights and similar problems, and I couldn't tell why. From the content of the email, I think I now know why at least, even if I don't have a replacement module figured out yet.
Yeah, I'm in the same boat. Certain modules that I like are coded in a display-referred style, and now I need to find alternate ways of working. But there is an old post that Aurélien wrote which has suggestions for which modules to avoid, and what to use in their place. I'll have to dig that out again and re-read it.
When you follow Harry Durgin's workflow in his weekly edit, you get addicted to it. I really love the way he edits, but I guess DT has come a long way since he last did a weekly edit, so some of his workflow may be dated. (Newer, better modules like filmic and the tone equalizer really do a lot of justice to our edits these days, along with the tone curve.)
The thing that frustrated me with Harry's videos was the lack of any monologue to explain WHY he reached for a given module or blend mode. But yes, he achieved great things with darktable.
Same problem here. Low- and high-pass filters are applied by default before filmic rgb, and seem to interfere with the dynamic range scaling of that module. If you move them up in the list of active modules (CTRL+SHIFT and drag with the mouse; that changes the module order), they seem to work properly.
Yes, you can change the pipeline order, but that's something that only experienced users should do, which is why I don't mention it often. Unless you understand WHY you would want to alter the order (and clearly, you do), it's best not to mess with the default order.
Thanks for the interesting explanation, I can only agree with your words at the beginning and your style is very appropriate. Cool idea to include the email.
Cheers
The combination of your discussion and his written information was helpful. Thanks!
You're welcome!
Thank you, and thank Aurélien. I always wondered about the two and never really gave it much thought beyond the basics: one is what a screen does with colour and luminance information, and the other is what light does in the scene, manipulated as if you were still capturing the scene. I should also thank my physics teacher who, all those years ago, said light has some weird properties but is interesting. Between the three of you, and some reading and repeated reading, I stand a chance of understanding rather than just thinking it makes sense.
You'll probably get there before ME! :)
"everyone knows that corrections done hardware on scene always look more organic" struck a chord with me. I'm retired now but used to teach photography and adobe Photoshop in further education. One principal I used to stress was to get as much right in the shoot (in the camera) as the end result after digital editing will be easier to achieve and more convincing. Aurélien seems (if I am understanding him correctly) to concur with that view.
I guess scene referred should be more forgiving to the less experienced in post shoot editing.
I agree with your comment that (paraphrased) not going completely overboard with any given module will keep most of us on the straight and narrow so to speak.
Prior to this excellent discourse, I had a vague idea what scene-referred was about, but this has clarified it, and you have translated it into layman's terms superbly. I applaud your courage in tackling this rather technical aspect of dt. Many thanks.
Thanks Berny. I too was encouraged by an earlier mentor to do everything possible to get the image right "in-camera".
That really does make the job of post-production so much easier! GIGO.
@@audio2u Garbage in... 😂
Great video once again. Thanks for that!
What scares me most about scene-referred is that whenever I change cameras, I get different results from my editing. As an example, using a preset on an A7 camera will give different results on an A7III, even more so on an A7RIV, and much more again when I switch to another manufacturer, just because of the difference in dynamic range that most cameras have. To my understanding this happens less with display-referred, which may explain why LR works display-referred: consistency of results. That said, I love the concepts behind scene-referred. I am more of a creative person, and I have to read Aurelien's explanations 6 times before I am able to understand a bit of it LOL He is such an inspiration. Every time there is something to learn from him. He doesn't waste any words. They are all important and useful.
Anyway, thanks again for your tutorials. Cheers
Thanks Stefano! The thing about different cameras is interesting, although I wouldn't have expected it to mean you need a different approach for images from different bodies/sensors. At the end of the day, EVERY camera is going to produce RAW files which far and away exceed the capabilities of our monitors and display devices. So really, in all instances, you should be producing images with far greater range than needed, and then the output colour profile module would be delivering you the rendered version that will fit the intended display device.... right? Or am I missing something here?
@@audio2u I guess you are right. Maybe I am overthinking here. I need to do more work and tests on my side. Cheers and again great work
Right timing. I was just going through the darktable manual and was looking at what modules to use and avoid with the scene-referred workflow. It's so confusing, and your explanation makes some sense of it.
Really? I'm not sure it even made sense to ME! Maybe you need to watch it again! 😃
@@audio2u I had to pause in between to read the letter to understand what was meant. Yes, I will read it again. When I said what you said makes sense, I meant your interpretation of what these two workflows mean and the contents of the letter. But I fail to understand the reason behind having both of these workflows. I am currently reading the manual. I am a slow learner and have to write down each and every word to learn anything. But correct me if I am wrong: this is supposed to be related to colour space and different devices. So if that's the case, then why not keep the working profile the same as the output profile and do the image processing? For example, most of the monitors and devices used to view images on social media are using sRGB. Or am I missing something else?
No, you're on the right track! But sRGB is a limited color space. As I've said elsewhere, what we want is to work with the largest range of data available, right through the entire workflow, and only compress down to suit the final delivery format/device right at the last moment.
Here's another analogy for you....
Some movie producer shoots a film in 8k/32 bit colour.
But today's cinemas can only display 4k/24-bit colour (I don't know if that's right; this is just a hypothetical scenario).
Is he/she going to edit in 4k/24 and archive the final version of the movie in that format?
Hell no!
You would hope that they edit and colour grade at 8k/32, so that when we have devices in our homes which can read and display that range of data, they can pull out the archived version of the final edit and create a new SFA-DVD (that's the Super F***ing Awesome Digital Video Disk, coming to a big box retailer near you real soon! 😃) from that high-res master.
This is what plagued Blu-Ray when it first appeared. All those movies shot at 720p for DVD being upscaled to 1080p. But that's just resolution. This discussion is about colour space.
Hope this helps!
Thank you. I think I now understand the concept behind both terms. From the practical point of view… well, I can only say I will be paying attention to it (it's a hobby for me and in fact I mostly edit the camera's jpg, so go figure!). But if Darktable is moving to scene-referred and linear processing by changing modules and suggesting modified workflows (as Aurélien seems committed to), maybe it's a great idea for your channel to explore those changes and help "re-educate" old users while introducing new ones. Sort of "if you've trained yourself to use tone curves, this is how you get around with this new tool; this is the same as that; this is how you (now) achieve that". It seems to me that the exploration involved in that "task" will not only be very useful to your audience but will also put you in a much more comfortable position to keep on teaching it.
Well, that's pretty much my aim anyway... to keep on creating new videos that address whatever changes have recently occurred in the software. :)
@@audio2u IT WAS a great idea then, just not new nor mine! xD (sorry, I can't be very subtle in English). I was trying to emphasise the "re-education" part, which is the subtext I got from Aurélien's email. What I can say I know about that is (1) that it's not that easy at our age, but (2) having to teach or explain to others helps a lot in getting a deeper understanding of the subject, which it seems to me we will need in order to track and appreciate future Darktable.
As I got it: display-referred is like processing the JPEG the camera offers, and scene-referred is genuine RAW processing in the background, while each displayed step is rendered to the colour space of your display.
What astonished me is the historical background, which made it a bit clearer. E.g. I learned that a DSLR could only process 8 EV, so I'm surprised to hear that sensors are nowadays capable of 12-13 EV; a value range I had associated with film until now. So my understanding now is that scene-referred means developing and processing the 'old-fashioned' way, like film. And that makes the processed image portable to any kind of display, including ones not thought of yet. Thx Bruce
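To put rough numbers on those EV figures (a back-of-the-envelope illustration of my own, not from the video): each EV, or stop, is a doubling of light, so the contrast ratio a sensor or display can represent grows as a power of two.

```python
# Illustration only: dynamic range in EV (stops) vs. contrast ratio.
# One stop = one doubling of light, so N stops span a 2**N : 1 ratio.
for stops in (8, 12, 13):
    print(f"{stops} EV ~ {2 ** stops}:1 contrast ratio")
# 8 EV  ->  256:1  (the rough display/JPEG ballpark mentioned above)
# 12 EV -> 4096:1
# 13 EV -> 8192:1  (modern sensors)
```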
Yeah, kind of. Using jpeg and RAW to describe this (SR v DR) could lead to confusion among less-experienced practitioners, though. It's quite a bit more nuanced than that.
@@audio2u Indeed, it's quite a bit more nuanced. My objective was finding a short key, an analogy, to get the knack.
I expect less-experienced practitioners to keep asking on and on; as mentioned on Sesame Street: ask, ask, ask, .... :-))
Hi, it's really important to differentiate color and light terminology in more detail when talking about "a scene" and the image processing leading to a display and our eyes, to properly convey the concepts of luminance (absolute, relative luminance), chrominance and so on. I am writing my thesis on color management and ACES, and I strongly recommend checking out the International Lighting Vocabulary from the CIE, as well as "Colour Appearance Issues in Digital Video, HD/UHD, and D‑cinema" by Charles Poynton and his FAQs on Colour and Gamma. It's a hard topic; glad people like you try to understand it and reach out to talk about it while still being clear about your knowledge level. Best wishes
Yeah, I don't claim to understand ALL of this stuff. I'm doing my best to describe what I know and be transparent about the bits I don't know.
@@audio2u Yes, this is the only way - otherwise we wouldn't have any real progress and would just be lying to ourselves in the end, so I totally get your introduction point. Really sad YouTube has so much of the content you described. Best wishes!
Hello Bruce.
Thank you for this video that has the merit of posing the problem.
My darktable practice has evolved over the last two or three years. I am using the scene-referred workflow, and I have automated a lot of modules (white balance, color calibration, filmic RGB, local contrast, exposure, denoise (profiled)). All this has two effects: I get an acceptable image quickly and can concentrate on a few modules to modify. I appreciate filmic for the recovery of blacks, whites and contrast... but I do not modify much more. I use the filmic companions such as the tone equalizer, but I have trouble with color balance... in that case I continue to use modules such as color zones or the contrast equalizer (sorry Aurélien). With the automation I really feel efficient, and with the settings I get closer to the result I expect. I just wanted to testify to a practice, and thank you again for your videos, which make me progress both in DT and in English.
Thanks Jean-Marc!
Thank you for your work and knowledge, Bruce. I must admit that this subject is really confusing... At the end of the day, even if you work using the scene-referred workflow, every decision you make (brightness, colors, saturation, etc.) as you edit the image in Darktable is based on what you see on your screen, or, in other words, on the limited color space of your display... The equivalent in audio, I believe, would be to mix a song using small computer monitors with a limited frequency range instead of flat response near field monitors; but then again, you never know what people will use to listen to your song. Will it be mono or stereo? A high end audio system or an iPhone? Maybe, what matters is the visual end results on your display assuming it is properly calibrated. Also, if you want to print your photo, you still have to worry about the printer's color space to make sure it matches what you see on your screen. Anyway, this is a pretty difficult topic. Kudos for giving it a shot!
Spot on with the analogy! When I created this video, I didn't own a calibrated monitor, but now I do.
Hi Bruce!
When I was using Lightroom (display-referred process), I was quite satisfied with the results, and so, it seems, are a lot of pro photographers worldwide who are still using Lightroom or Capture One Pro and so on... Are they silly? Maybe... I don't know.
To be honest, I am nothing compared to a pro; my photographic production has more to do with crap than with "Art". But because it's a hobby (I have no clients and no turnover to achieve), it suits me.
Today, I use darktable without fundamentally questioning my whole process. Coming from Lightroom, the learning curve seemed steep enough without adding the constraints of this new "scene-referred" process.
*** By the way, I want to thank you Bruce because your videos were a great help to me ! For sure, I'm not alone in that case. ***
So, as long as it is possible to use the "display-referred" process, I will be a happy user of darktable.
My credo is (and will remain): less time wasted behind the computer screen is more time available to take pictures (or better: more time to take care of your family and friends)! But I can understand that some people see it another way.
Thanks for the kind words, Jerome. Yeah, as I said (in reply to someone else's comment), some photographers want to be technically correct, and some just want to create images that look pleasing to the eye, without regard for technical niceties. And that's the subjective nature of what we call art. No right or wrong here.
Thank you Bruce!
No problems. Got your message. Will be in touch later.
Disclaimer: I'm using dt 3.4.0. I'm using the scene-referred defaults, but the default pixel-pipe appears to be loading either non-scene-referred modules or modules in the wrong order(??). For example, sharpen loads. We're supposed to avoid it. And highlight reconstruction (6) and color calibration (7) are in the pipe before filmic RGB (8). Exposure is (9) and white balance (12), way past filmic. Is that normal??? From the manual, I would have thought that the order should be exposure, white balance, filmic, etc. Do I simply need to wait for 3.6? Is there a 3.5??? Wow, so many questions today... Thanks in advance!
First, version numbers are always even for stable releases, and odd for development builds. So, the current dev version is 3.5, and anyone can download and install it if they understand how to compile from source.
As for module order, that was decided a long time ago by people much smarter than me. If there was an issue, it would have been found and fixed by now.
Why would you expect to do exposure before white balance? That seems odd to me, because wb happens before demosaic.
Things like sharpen, which are still based on display-referred algorithms, sit late in the pipeline for exactly that reason. By then, all the heavy lifting has been done (by those modules which do use scene-referred math), and it's not a problem.
@@audio2u First of all: good god, man! Why are you answering messages at 6:00 a.m.??? :-) Secondly, I placed exposure before white balance based on some of Aurélien's comments and also the manual. I don't have the technical knowledge to understand why it's one way and not the other. Thanks for your answer, as always. www.darktable.org/usermanual/en/overview/workflow/edit-scene-referred/
Bruce, thanks for your "pop" explanation which certainly helps with the intellectual understanding of Aurélien's email. In my mind, much of the value of your darktable videos comes from the pragmatic approach that you take and the examples that you provide. In this case there are no examples, leaving me with a feeling of "That's nice, but so what?". It's probably me being a bit slow; but if not I would appreciate your feedback.
Yeah, I agonized over that. Whether there was anything that I could really demonstrate, and in the end, I determined that there wasn't. If there was, it would be something outside of my understanding. I feel like this whole DR/SR debate is quite academic, to be honest. But again, that might just be because of my limited understanding.
Thing is, I absolutely could talk you under the table about the audio equivalent! High bit rate audio is something I DO understand, and I could talk confidently about that topic. And I know that there are parallels here, but the differences are so important, that I don't want to misrepresent them. Put it in the category of Rumsfeldian "known unknowns".
@@audio2u Thanks Bruce. You are obviously not one of those audio specialists who tells me what I should do to make a marginal improvement to the high frequencies from my sound equipment, without recognizing that I am no spring chicken and my hearing is not as acute as it used to be.
This discussion of display-referred vs scene-referred is helpful, despite its lack of a clear conclusion. Before putting it to bed, it would be useful to ask the theoreticians where the difference would be most obvious. Perhaps in rendering extremely high-contrast scenes? Or for low-contrast images? Or perhaps in pictures that have a large colour range?
As Aurélien stated in his email, things like blending modes, adding blur, and other alpha compositing techniques tend to get the most benefit from working with a larger range of data. I'll have to take him at his word.... It's above my pay grade! 😃
As for hearing loss, yep, that's one of the unavoidable downsides of getting older. Nothing you can do to reverse that, sadly. But high bit depth audio is not about "including more frequencies"; it's about maintaining greater fidelity in the really small variations in amplitude during the production workflow. In much the same way as Aurélien talks about those parts of image manipulation which benefit from a scene-referred workflow, keeping high bit depth to your audio during production means we can avoid horrible artifacts like truncation distortion (which can be minimized through the addition of dither), and so on and so forth. Like I said, I know this topic well, and could bore you senseless with it! Anyways, it's 02:30 now, and I'd like to get back to sleep! Later!
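For the curious, here is a minimal sketch of the truncation-vs-dither point (my own illustration, using NumPy; the names and values are mine, not from any particular audio tool). Rounding a quiet signal to a coarse grid produces error that is correlated with the signal, which is audible as distortion; adding triangular (TPDF) dither before rounding turns that error into benign, signal-independent noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# A very quiet sine in floating point (the high-bit-depth production domain).
t = np.linspace(0.0, 1.0, 48000, endpoint=False)
signal = 0.001 * np.sin(2 * np.pi * 440 * t)

def quantize(x, bits, dither=False):
    """Snap x to a grid of 2**bits levels spanning [-1, 1)."""
    step = 2.0 / (2 ** bits)
    if dither:
        # TPDF dither: triangular noise of +/- one step, added before rounding.
        x = x + (rng.random(x.shape) - rng.random(x.shape)) * step
    return np.round(x / step) * step

plain = quantize(signal, 16)           # error correlated with the signal
dithered = quantize(signal, 16, True)  # error decorrelated into flat noise
```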
@@audio2u if you like I could help you get to sleep by explaining some obscure corner of Chemistry :-)
Haha! I would probably become engrossed and NOT get to sleep! I loved chemistry as a teenager. At 15, I could recite the first 50 elements of the periodic table... In order! Couldn't now, though.
Hi Bruce, I got a question. In order to get the full advantage of the color space and dynamic range in a scene refered worflow, should I shoot my photos (in camera) in adobe rgb mode or sRGB mode? Cheers from Chile!
If you are shooting RAW, I don't think it matters. But just for safety, I'd set your camera colour space to Adobe RGB because it's a larger space.
Its why your videos are the best! You admit when you don't know! No BS
Hehe... Just over here keepin' it real! 😃
@@audio2u of course!
I can't wait for 3.6! I think we're fast approaching the time when DT is ahead of Lightroom in terms of what it will be able to do.
Mate, seriously, darktable was ahead of Lightroom when I started using it in 2016! The masking alone made it a one-horse race! 😃
Another analogy for the difference between display- and scene-referred workflows could be comparing an SLR, where you decide the ISO of every image when buying the film, to a DSLR, where every image can be shot at an individual ISO. Just my two cents.
Interesting concept.
The way I understand it, you have to limit the amount of data you export in the end to be able to display it on the screen, as the screen can't show more colors than that.
The difference with scene referred is that the limitation happens later in the pipeline, meaning all the modules you use before that happens will be able to work with somewhat more precise math. If you are doing a lot to your image, or if you're working with a wide dynamic range, it might make a difference and you might get a little more from your images. But for most cases, if you prefer the "old way", that should be fine.
I'd also expect what you said about mixing scene and display referred modules to be the case, especially if the scene referred modules are applied before the display referred ones.
BUT, seeing the part about 0% black, 50% grey and 100% white, it's possible that some display referred modules will expect that distribution of values and if it isn't true, they might render unexpected results in the final image. I know that combining some of the newer scene referred modules with shadows and highlights will render out a purely black image in some cases. :D
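A tiny numeric sketch of that "limit the data as late as possible" idea (my own illustration, not darktable code): clamp and quantize early, and a later edit can no longer tell two bright highlights apart; keep the data in unbounded float and quantize last, and the same edit preserves them.

```python
import numpy as np

def to_display(x):
    # What export (or a display-referred stage) effectively does:
    # clip to [0, 1] and quantize to 256 levels.
    return np.round(np.clip(x, 0.0, 1.0) * 255) / 255

scene = np.array([0.02, 0.5, 1.8, 4.0])  # linear scene-referred; > 1.0 is allowed

gain = 0.25  # some later edit, e.g. pulling exposure down two stops

late = to_display(scene * gain)       # edit in float, clamp/quantize last
early = to_display(scene) * gain      # clamp/quantize first, then edit

print(late)   # [0.0039 0.1255 0.451  1.    ] - the two highlights stay distinct
print(early)  # [0.0049 0.1255 0.25   0.25  ] - both highlights collapsed to 0.25
```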
Great points!
Thanks for a great explanation. I do astrophotography and stack then process in a program called SIRIL in "fits" 32 bit colour. Then convert to "tiff" 32 bit for a little denoise and final touch up in Darktable. However I have found that I have to "dumb down" my images to at least 16 bit to display them or share with friends via the internet. If I save them in 32 bit they look terrible in other display programs that only use 8 or 16 bit colour. I now have a better understanding of why I have to do this.
Great stuff! Is 24-bit not an option for export? I'd have thought that would display everywhere without issue.
Thank you for your relentless work making these videos! They made me decide to do the switch from Lightroom... Regards... Bengt from Sweden
Thanks for the kind words!
Bruce, I would strongly suggest looking up a video by Blender Guru about the Filmic module in Blender. Yes, the 3D tool Blender has a similar approach to dynamic range as darktable, using a filmic, scene-referred workflow. I've been using darktable for about 3 years and couldn't really visualize what filmic actually does; I had more or less an idea, but could not see it in reality. Blender Guru's filmic explanation made everything clear for me...
Will do.
Thanks for all the great videos Bruce. I've just seen this one and it does help to clarify the differences between display and scene referred. One question if I may. I notice that if you hover the mouse over each module in Darktable it says it is either display or scene referred. Is it best to try and match the modules used to whichever of the two I am using? Thanks
Thanks Alan.
Not necessarily. You really want to try to use modules which use a scene-referred algorithm as much as possible.
If you do use display-referred, try not to make radical adjustments, as you will more than likely see strange artifacts in your image.
@@audio2u Thanks for the quick response. Keep up the good work!
ChatGPT gave me this:
Let me explain the difference between display-referred and scene-referred in the context of Darktable software.
1. Display-referred in Darktable:
When you choose display-referred in Darktable, it means the software is adjusting the image based on how it will look on your computer screen or other display devices. It's like Darktable is wearing special glasses to make sure the picture looks good on your screen. The adjustments are made with the display in mind, and sometimes details in very bright or very dark areas may be sacrificed to make the overall image look pleasing on your display.
2. Scene-referred in Darktable:
On the other hand, when you choose scene-referred in Darktable, the software is aiming to process the image based on the actual scene and the data captured by your camera. It's like Darktable is trying to understand the magical world in your photo without worrying too much about how it will look on your specific display. This mode often preserves more details in both bright and dark areas, giving you more control over the adjustments.
In Darktable, you can choose between these two approaches based on your preferences and the specific requirements of your editing workflow. If you want to prioritize accurate representation of the captured scene, you might lean towards scene-referred. If you want the image to look good on your display, you might opt for display-referred. Each option has its advantages, and it's like choosing between different ways of viewing the magical world your camera has captured.
It's not wrong, but the whole point of scene-referred editing is that you are working with higher bit-depth data, which reduces the potential for aliasing, and other visual artifacts.
Thank you, very helpful. It's all falling into place now.
Cheers!
Part 2: Bruce W.
2. So assuming that one has now learnt how to take decent photos that are not significantly overexposed or underexposed, one then has to either read through the Darktable manual or watch UA-cam videos. I must add, before UA-cam we had manuals, and I recall that in the good old days we did not have to watch videos to learn, or rifle through blogs. In the good old days - WordPerfect, Corel Draw, etc. - all we had to do was read the manual, or attend a class taught by people like me, who had read the manual enough to be able to teach others.
I recognise that we all have different skills: some are good at software development like Aurelien (I also come from a software development career - but not in C++ or C like Aurelien), others like you are great at videos, and some are excellent at writing manuals.
In spite of all the prior efforts with Darktable's manual, which is a great reference, I can deduce that people like you and I, and any others who share this interest, need to contribute to the Darktable manual a simple section that explains to beginners how to go from point A to B, with all the key steps involved in editing a photo from RAW to amazing. This way, anyone who needs the help finds it where it should be - right there in the manual - rather than scrounging all over the Internet trying to make sense of it all. This simple tutorial, covering maybe no more than 2 of the basic workflow approaches in Darktable - scene-referred or display-referred - could also be accompanied by one or two videos, posted on a UA-cam channel associated with darktable, ideally one owned and managed by the darktable community and contributed to by people such as you and I. This way, every major release, we can refresh the beginner's guide and the associated videos. These one or two videos would be part of the "official" darktable release - published not long after the software is released. Nothing extensive - just the basics - but enough to ensure anyone can take a picture, import the RAW version, and arrive at an amazing result in no more than 5 minutes (with practice).
Which still gives room for people like you to create more elaborate or involved tutorials of the same basics or extended material on advanced topics on your own UA-cam channels.
If we can provide this kind of much-needed support, I can see Darktable becoming the most used picture-editing tool in the world. Once anyone can read the introductory sections of the manual, and maybe also watch these one or two videos that are part of the "official" release - i.e. investing no more than about 1 hour in education - and always come out with fantastic results, we should find that many more people will use Darktable, because a few of us who have been successful with darktable will have paved the way for others.
As people like you and Aurelien have made it easier (not easy) for me to become proficient in darktable, I would like to make a small contribution, working with people like you, to:
1. Improve the manual by adding a much needed effective tutorial
2. Work with video content creators like you to create a video version of the tutorial, to be posted on an official darktable UA-cam channel and/or on Vimeo, each time a major release of darktable is due.
I hope we can achieve this together, "translating" darktable from what is viewed as a complex tool to one which people can use with ease - not by changing darktable, but by improving the educational material that is released with it, and producing that material to the same high standard as the rest of each darktable release (the website, the software, and the reference manual), these things being already pretty good.
I think there is also room for some of us to get involved in solving some of the quality-control issues with darktable's releases, especially on popular operating systems like Windows. I use Windows exclusively - it's a lot easier than tinkering with Linux (for which I do have the skills and abilities, if I were hell-bent on giving myself pain - my education to degree level was in Computer Science and I have managed a few Unix servers in my time). I observed that in release 3.4 the manual was out of sync with the software, with changes to the naming of modules not reflected in the manual. It took me about an hour to figure out - by accident - that some modules I was familiar with in version 3.0 had been renamed in version 3.4, but there was no mention of this in the release notes on the darktable website, or in the version 3.4 manual. None. These are the kinds of inconsistencies which give darktable a bad name. The modules I refer to are Denoise - Non Local and Denoise - Bilateral, which were renamed to Astrophoto Denoise and Surface Blur - something you can independently verify.
So in closing, darktable is a great tool. In a similar manner to the great work done by Aurelien - who had been a user for many years before learning to code and contribute to darktable - we too can become that final glue, fill in some of the gaps in darktable, and make it appeal to a much wider audience, especially on popular platforms like Windows and Mac. My early background was in computer education, teaching people how to use software. I do hope we can do something together, as described above. I must say Aurelien has already done a great job of explaining to techies like us; now we would do well to simplify and explain darktable to the rest of the world, and newbies in particular. Simple, easy, hand-holding, no complex vocabulary. Look forward to your response.
Wow, now I've forgotten what you covered in your FIRST post! 😃
As for the concept of an official darktable channel... I don't see that happening, simply because the core development team for darktable is about 6 people. That's it! There's a bunch of other contributors, sure, but the core is only half a dozen people. And most of them work other jobs to keep a roof over their heads (like I do).
I would love to make enough money from this to go full time, but that's unlikely to happen any time soon.
So, until then, the community is stuck with the likes of yours truly and Rico and Boris and anyone else who wants to throw their hat in the ring.
As for the beginner's videos, I aim to do a new video along those lines with each major release. I'm not perfect, but I do what I can. As I've said in the past, this is my way of giving back to the community.
Bruce has done those basic workflows... Frank Walsh has done them, and there is a channel run by a guy named Hal, I think, called darktable a-z, that has run through every recent module in a fair bit of detail. I get that it is not easy. FWIW, I use the pixls.us forum as a great resource. If you want to stay on top of it, that is the place to be. And about 5 UA-cam content providers pretty much cover all that you could need IMO, from creative to technical. I also use Rawpedia for basic concepts... it's a manual for RT, but more like what you are asking for in DT. I don't think the dev team is rushing anytime soon to create a how-to guide. I think if you get grounded a bit in color theory, then the tools in DT make far more sense and are easier to pick up, but in the end it's a technical tool made by technical people for technical people, and I don't see a major shift from that...
@@audio2u I installed version 3.4 a few days ago on a Windows 10 laptop with limited performance - only 8GB RAM, an old spinning hard drive and a dual-core i5, bought new in 2013 - having been on a few months' break from photography. Compared to version 3.0, which was a bit of a nightmare if I am honest, 3.4 was so much more polished, and pretty stable - only one crash so far.
What I am so pleased with is that as long as my composition and exposure were good, within a few minutes I could take any photo and end up with a good result using the scene-referred approach, and thereafter any further adjustments were just polishing and artistic liberty. Scene-referred with filmic rgb is such a phenomenal way to develop the image in just a few minutes, using only a few tweaks to about 4 or 5 modules.
1. Exposure module for overall brightness/contrast (setting the black point)
2. Filmic RGB to "develop" the image.
3. Color balance to add much needed saturation - which seems essential for my tastes - I use output saturation slider for this.
4. Tone Equaliser to shape highlights and shadows as needed, or to lighten or darken mid-tones; typically point and click on an area of the image and adjust by dragging right on the image.
5. Then back to Color Balance, especially to address the look and feel (am I going for contrasting or complementary colors? - a lot of this is akin to what the HSL tool in Adobe Lightroom does), plus Exposure and Filmic RGB for a second round of final tweaks.
That has now become my entire workflow, or 90% of it. Sometimes Contrast Equaliser and Color Zones for any extra look and feel and polish.
The following two links have really helped me. The first is the English version of Aurélien's seminal dissertation on the scene-referred workflow - this is the bible, or the Genesis chapter. It all starts here.
pixls.us/articles/darktable-3-rgb-or-lab-which-modules-help/
The next really important thing was understanding how best to use color in an efficient and effective way, to create my own style in images. For that, this video is the best I have ever seen. Phenomenal information which, once digested over about a week of several listens, totally transformed how I think about images: from the purely scientific to appreciating that the best images are not the scientifically accurate ones, but the ones which follow rules well known to all the great painters about how certain color combinations are either more pleasing to the eye or make us pay far too much attention to the conflicts in the image - and one can use this knowledge to define the final look desired. Once I knew the rules, it was much easier to apply them in the relevant darktable modules like Color Balance, Color Zones, and perhaps the Color Correction module, which I tend not to use anymore because it's a rather broad-strokes approach to achieving split toning (albeit there is also a dedicated Split Toning module in darktable, which I never use).
ua-cam.com/video/RfQX4w711MI/v-deo.html
I'll do my best to keep in touch
The darktable user manual already has a page describing how to use the scene referred workflow and what modules should or shouldn't be used with it. You can then click on the names of those modules to get further information on using that specific module.
What I get from this is not an easy answer that lets me simply decide to use one or the other. Rather, it makes me think (learn) about the whole digital editing process, and about what the end product should be. Will my picture primarily be presented on a screen (choose display-referred, despite the fact that displays vary), or as a print (choose scene-referred, after having learned how my printer, or the print shop I prefer, works)? These are elements in a learning process, not a quick fix for a specific editing problem.
Yeah, it's a deep topic, and I'm still not sure I covered it adequately. :(
I think one aspect that Aurelien comes back to is that scene-referred edits will hold up when HDR monitors are the norm, and they are knocking at the door. Expensive now, but that may change quickly, as technology does. So if you have a way to edit that accommodates that now, why not use it, rather than edit for SDR and then perhaps have to do it all over? On the other hand, that may not have any utility for some people, while for others it has merit.
Exactly. See my comment elsewhere about movie production, and archiving the final edit in the highest resolution and colour space possible.
ua-cam.com/video/m9AT7H4GGrA/v-deo.html
I just thought of a good analogy for dynamic range. Imagine you have a fish-eye lens on your camera. The angle of view compares to the full dynamic range of your camera. Now swap the lens for a tele lens. You won't see the same as you saw with the fish-eye: there is stuff missing at the top, the bottom, and to the left and right. Now there are two things you can do.
1. Move the camera around until you have what is important to you in the frame. You are still missing out on the surroundings, but you have your main subject. That would map to moving within your dynamic range without compressing it into the smaller one of your monitor.
2. You grab your camera and move further away, until you are far enough to cover the same scene as with the fish-eye. That would mean you compress the dynamic range to that of your output device. (Both options are sketched numerically below.)
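If it helps to see the analogy in numbers, here is a crude sketch (my own, with made-up values; the log squeeze merely stands in for what filmic's curve does far more gracefully):

```python
import numpy as np

scene = np.array([0.01, 0.18, 1.0, 4.0, 16.0])  # linear, middle grey at 0.18

# Option 1: "aim the tele lens" - keep a window of the range as-is;
# whatever falls outside the window clips away.
window = np.clip(scene, 0.0, 1.0)   # keeps shadows and mids; both highlights clip

# Option 2: "step back until everything fits" - compress the whole range.
ev = np.log2(scene / 0.18)                      # stops above/below middle grey
fit = (ev - ev.min()) / (ev.max() - ev.min())   # squeeze every stop into [0, 1]

print(window)  # [0.01 0.18 1.   1.   1.  ] - 4.0 and 16.0 are indistinguishable
print(fit)     # all five values spread across [0, 1], nothing clipped
```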
I guess that makes sense.
Great explanation Bruce. Much appreciated how open minded you are, also with regard to your own knowledge. Still not completely sure what scene referred means to me when using darktable :)
The video didn't help?
I guess to summarise, I'd say you don't process differently from a mindset point-of-view... But the modules you will use ARE behaving slightly differently, and giving you a much better end result because of it.
The reason why I said applying a display-referred mindset to a scene-referred workflow doesn't work has to do with the fact that "white" and "black" have a fluid meaning in scene-referred. I have read many people who did not understand why, in the scene-referred workflow, exposure has by default a +0.5 EV boost and they are encouraged to tweak exposure to brighten midtones, because "they exposed correctly in camera". Somehow, in their head, exposure in camera has some standard reference value, while it's really just like a microphone gain: set it as high as possible to limit noise, but low enough to prevent clipping. There is no right or wrong here, only different priorities when setting the trade-off. This has nothing to do with right or wrong exposure, but everything to do with managing both ends of the dynamic range at the same time in the current conditions, which is kinda new, because display-referred has fixed bounds (0-100%) and all the changes happen in the middle. The fluidity of the values of black (as in the lowest bound of the DR) and white (as in the highest bound of the DR), and the fact that we anchor middle-grey instead (because we know all displays can display at least middle-grey), has confused many users, especially the most experienced. And exposure (in software) is meant to match all middle-greys across the pipe, while camera exposure is a trade-off to fit within the sensor DR. Display-referred is more rigid, and perhaps users feel more guided (or constricted) in it because of that.
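To make the gain analogy concrete, here is a minimal sketch (my own numbers; 0.18 is the usual linear middle-grey convention): in scene-referred, exposure is just a multiplier, and values above 1.0 are perfectly legal because "white" is not fixed yet.

```python
import numpy as np

scene = np.array([0.05, 0.18, 0.90])  # linear values; 0.18 = middle grey

# The default +0.5 EV boost is simply a gain of 2**0.5, like a mic preamp:
# it brightens the midtones and lets the top end land where it lands.
boosted = scene * 2 ** 0.5
print(boosted)  # [0.0707 0.2546 1.2728] - 0.90 sails past 1.0, which is fine
                # here; what becomes "white" is decided much later (by filmic)
```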
Thanks for the clarification (again!), Aurélien. 😃
Aurélien, can you answer wido's comment/question about blend modes?
This video and your comment here are the best explanations I've read/heard, but this is all abstract and theoretical. I still have no idea what I'm supposed to do with this information. I think I need practical demonstrations before I'll ever fully understand it.
On top of that, I do IR photos and my workflow generally involves completely changing the order of modules and unorthodox settings, so I'm not sure if any of this even applies to me.
@@qwertyasdf66
1. on-camera, don't set the exposure such that the preview looks good (the preview is a processed JPEG; the histogram is that of the preview), but set it to avoid highlight clipping. Meaning don't hesitate to underexpose. You will need trial and error between camera and raw editing software to work out how the lightmeter treats highlights.
2. in software, set global exposure such that middle-tones look bright enough, with no care for highlights clipping.
3. in filmic, set the white exposure to the maximum value of your dynamic range such that highlights get unclipped (see the sketch after this list).
4. In general, stop looking for an absolute definition of exposure.
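Step 3 can be put in numbers (a sketch of the idea only, not darktable's actual code; the helper name is mine): filmic's white relative exposure is expressed in EV above middle grey, so it follows directly from the brightest value you want to keep unclipped.

```python
import numpy as np

def white_relative_exposure(scene_max, middle_grey=0.18):
    # EV distance from middle grey up to the brightest value to preserve.
    return np.log2(scene_max / middle_grey)

print(white_relative_exposure(4.0))  # ~ +4.47 EV for a specular peak at 4.0
print(white_relative_exposure(1.0))  # ~ +2.47 EV for a scene topping out at 1.0
```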
@ Oooh thank you for that! It makes so much sense now.
Does this mean that we _have_ to use filmic then, and it needs to be quite late in the module stack? That's normally the first module I turn off...
So like the difference between 8- and 16-bit editing in e.g. Photoshop then? Or having many more audio tracks before mixing down to stereo?
More so the difference between 8- and 16-bit. The audio analogy is quite different from just having more sources to mix, though!
More like this... watch this... it's in the context of Blender, but it is what all this is about: staying as faithful to the original light as possible... ua-cam.com/video/m9AT7H4GGrA/v-deo.html
First, fantastic video, and it finally clears up display/scene-referred for me!! Second, is there a list of which modules are which? I can easily stay away from display-referred; as a newbie to dt I have no display habits that need to be broken.
Aurélien wrote a post over at discuss.pixls.us back around the time that 3.2 came out, and that post outlined which modules to avoid, and which modules to use in their place. If I can find the link, I'll add it to the video description here.
If you want to post the link, I'll approve it.
No, I've been slammed all day and haven't had a chance to look. I'll do it now.
As a sound engineer, you can easily draw a parallel with sound editing. When you work, you always want to work on the best quality sound possible (in terms of bandwidth, kHz, etc. - I'm not into sound, sorry, but I hope you get the point), so that when you "export" the final result for a medium (CD, tape, radio...), you always get the best result. You might then tweak for some specific medium, though. But your edit will use as much data as the original can give. This is, for me, the same as scene-referred vs display-referred. This is also the same for video: when you want to display on an HDTV set, you will want to edit 4k or 8k footage, not a 1080p file... This is a bit simplified, but I guess it's in that direction.
Yep, makes sense!
I think the explanation makes a lot of sense. However, the pragmatic side of this is not helpful (up to version 3.4). There are a few modules that only work display-referred (I'm looking at you, local contrast), so even if I want to go one way, it's not 100% doable until all modules are on the same page.
I have a question on this subject: I've noticed that version 3.4 has added a "blend scene" option, so even "display" modules can have "scene" blends, but all the blend modes change completely. What's the deal with that?
Woah, that's above my pay grade! 😃 I don't know. I'll see if Aurélien can comment here.
@@audio2u I think it relates to what the module can accept as input, and perhaps what it outputs... they do change... just looking at what the default channels are in a parametric mask... Lab for some modules and jzh for others... so I believe it is tied to that...
@@emrg777 I was thinking the same. But "thinking" and having Bruce (and maybe Aurelien) actually explain it properly are very different things. Thanks @Bruce! Hope it bugs you enough to make new material for new videos :D
Many blending modes rely on the assumption that grey is anchored at 50% and white at 100%, meaning RGB is display-referred and non-linear. For example: overlay, soft light, hard light and screen. (See en.wikipedia.org/wiki/Blend_modes for the equations; you don't need to understand them to see that they do something different for values greater or lower than 0.5, and whenever they do 1 - a, you end up with a negative RGB output if input > 1, which is bad.) Blending in non-linear is dangerous to begin with (it's very difficult to mask seamlessly; you will always get halos and harder transitions), so these don't really make sense from the start. But, obviously, in scene-referred the 50% and 100% thresholds assumption is voided fair and square, so you can't use them at all in general. In scene-referred, all you can do is basic arithmetic operations.
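A quick numeric check of that point, using the screen blend equation from the Wikipedia page linked above (the test values are mine): screen is built to lighten, but once inputs exceed 1.0 the (1 - a) terms go negative and the result can even get darker.

```python
def screen(a, b):
    # Classic display-referred "screen" blend; assumes a and b live in [0, 1].
    return 1.0 - (1.0 - a) * (1.0 - b)

print(screen(0.4, 0.5))  # 0.7  - lightens both inputs, as designed
print(screen(1.8, 0.5))  # 1.4  - already outside the display range
print(screen(1.8, 1.2))  # 0.84 - two bright scene values DARKEN each other:
                         #        both (1 - x) factors flipped sign
```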
I’m with you Bruce it’s photography not computer science
Thing is, it's very much down to each photographer to determine for themselves whether they want to produce art that is technically correct, or simply pursue a creative approach and "science be damned". And I'm ok with whichever choice someone makes. That's the beauty of art!
Darktable has so many modules. It is therefore not always easy to choose the right ones. I limit myself to just a few and get along very well with them. I wasn't aware that there is a difference between display-referred and scene-referred, so thank you very much for your explanations!
I have no idea what you said! 😃
@@audio2u I try it in English. Darktable has so many modules. It is therefore not always easy to choose the right ones. I also limit myself to a few and get along very well with it. I was not yet aware that there is a difference between display-referred and scene-referred. Therefore, thank you very much for your explanations! :-)
Ah! Yes, the number of modules available is impressive. And I'm glad if I helped you understand the difference between display-referred and scene-referred!
Okay Bruce, have a laugh... I set YT to a playback speed of 0.5 to read AP's email. I set a time point to start a bit before that began, which means I caught a bit of you speaking at half speed... let's just say it's a pretty good simulation of if you ever do a video after some pretty serious drinking :)
Bwahaha!
Nice video Bruce. In the end, I think one important point you made is that many people won't be impacted unless they use a million modules, push them hard, and perhaps do a lot of masking and blending/blurring. Setting that aside, the scene-referred workflow does have more data, as you stress, and what that "more" data does, especially if you work with it linearly, is keep more of the light, and thus a more accurate representation of the scene, longer into your workflow. As I understand it, by that same notion you are making your edits on a more accurate depiction of the "scene" as you edit... I think this video was, for me, a good explanation of why you might do that, and it introduces the concept of filmic tone mapping as a component of managing this light, this scene-referred data. ua-cam.com/video/m9AT7H4GGrA/v-deo.html
Hi, you probably look younger. Clearly, Aurelien's explanation doesn't leave a professional audio engineer confident of having understood. It just boils down to the numbers: display-referred, interval [0,1]; scene-referred, interval [0,inf), proportional to light intensity. The consequences are not obvious. Numbers behave differently in those intervals: if you multiply two numbers in [0,1], you get a smaller number, whereas in [1,inf) you get a bigger number - much bigger, and meaningless as a light intensity - so some blending modes make no sense. All you can do with those numbers is multiply by values which are not measures of light, but scaling factors. If an operation takes 0.5 in the input and keeps it at 0.5, that's meaningless.
I don't understand a lot more than this, but I think the only way to go deeper is to see examples of the operations that make some modules display-referred.
I actually understand more about audio too. In audio, we look at variations in air pressure; we don't care about the absolute air pressure. But [-1,1] vs [-32768,32767] (or whatever) also has consequences. For instance, if you multiply two signals in [-1,1] you get a really crazy effect - amplitude modulation, Darth Vader type voice - because those numbers you multiplied, having an absolute value smaller than 1, all moved closer to 0, and you lost all the voice and kept only the consonants, breathing, noise and crazy high clicks. If the signals weren't represented in [-1,1], the effect would sound different, however you scaled it.
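Both halves of that comment can be checked in a few lines (my own sketch; the frequencies are arbitrary): multiplication behaves completely differently depending on which interval the numbers live in, and multiplying two [-1, 1] audio signals is exactly the amplitude/ring modulation effect described above.

```python
import numpy as np

# In [0, 1], multiplication always shrinks; above 1, it explodes.
print(0.5 * 0.5)  # 0.25 - smaller than either factor
print(8.0 * 8.0)  # 64.0 - far bigger, and meaningless as a light intensity

# Audio version: the product of two [-1, 1] signals stays in [-1, 1]
# and collapses toward 0 except where both signals peak together.
t = np.linspace(0.0, 1.0, 48000, endpoint=False)
voice = np.sin(2 * np.pi * 220 * t)   # stand-in for a voice
carrier = np.sin(2 * np.pi * 30 * t)
ring = voice * carrier                # amplitude modulation / "Darth Vader"
print(np.max(np.abs(ring)) <= 1.0)    # True - the product never grows
```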
Thank you, I am a little more knowledgeable now.
Cheers!