Chris Long
United Kingdom
Joined 11 Dec 2010
Videos about making things - Sketchup, woodwork, maybe some Arduino/electronics in the future. I have three children so you can expect to see a lot of kids' toys being made and/or repaired.
3D Printing from Below #3dprinting
Messing about with my 3D printer... what does the extrusion process look like from underneath? Printing on to glass (actually clear acrylic) to find out...
Views: 163
Videos
The best 3D printer I've ever used
434 views · 3 months ago
My experiences with the Creality Ender-3 S1 Plus 3D printer - the first and therefore the best 3D printer I've ever used. This is more a review of my experiences getting started in 3D printing than a review of the printer itself.
Weird POV Effect - trying out OKLab, hidden line removal, blurring and IRL
3.9K views · 4 months ago
More explorations of the weird POV effect (which really needs a better name) prompted by comments on earlier videos. This one looks at: - how different colours affect the effect if the lightness is held constant according to the OKLab colour model, - how hidden line removal affects the ease of seeing complex 3D shapes, - how blurring the image affects the effect, - and whether the effect still ...
More Weird POV Effect Experiments
37K views · 4 months ago
More experiments inspired by comments on earlier videos, this time looking at: - the effect of mixing pure red, green, blue and white with each other - how the complexity of the wire-frame shape affects the effect - how the maths involved in rotating the shape can lead to it erasing itself. You can download the software and play with it yourself - visit my GitHub at github.com/ChrisBLong/POV an...
Weird POV Effect - Follow Up
36K views · 5 months ago
A much longer video digging in to some of the more popular comments on the original Weird POV Effect video, covering dinosaur vision, migraines, various screen resolutions, text and colour effects. There's also a ten-minute demo of how the XOR operation works with colours in computer graphics systems. See the playlist for the earlier videos in this series. The custom-written/hacked-together sof...
Bad Apple!! but it's not there when you pause, version 2 #badapple
4.5K views · 5 months ago
My version of Bad Apple based on the 'weird POV effect' technique. I've made two versions, this one is based on the full black-and-white frames of the Bad Apple original, which results in severe strobing - DO NOT WATCH if you are photosensitive. Although you can clearly see the animation when the video is playing, when you pause you just see random(-ish) noise. How it's done: starting with a bl...
Bad Apple!! but it's not there when you pause, version 1 #badapple
5K views · 5 months ago
My version of Bad Apple based on the 'weird POV effect' technique. I've made two versions, this one is based on an edge-detected version of the Bad Apple original, which results in some flickering effects - CAUTION ADVISED if you are photosensitive. Although you can clearly see the animation when the video is playing, when you pause you just see random(-ish) noise. How it's done: starting with ...
Weird POV effect 2 - What's my password?
47K views · 5 months ago
Following up last week's 'Cube or no cube?' video, many comments asked about rendering text against the random noise background, so here's a video exploring that. As with the previous video, every frame is just random noise and if you pause it that's all you'll see. The spinning cube and the text are only visible as changes from one frame to the next. I've embedded a fictional username and pass...
Weird POV effect
487K views · 6 months ago
A short video demonstrating an interesting persistence of vision (POV) effect where objects being animated on a random field of black and white pixels are only visible when they're moving. The graphics were generated by a simple Win32/C app that you can find at github.com/ChrisBLong/POV EDIT: Some related stuff that has come up in the comments: Bad Apple but it disappears if you pause it: ua-ca...
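The graphics come from a simple Win32/C app on GitHub, but the exact drawing scheme isn't spelled out here, so the following Python sketch is only a hypothetical illustration of the general idea the descriptions suggest: keep one persistent noise field and XOR the moving shape's pixels into it each frame, so any single frame is statistically plain noise.

```python
import random

W, H = 64, 48

# Persistent random field: any individual frame is statistically pure noise.
field = [[random.randint(0, 1) for _ in range(W)] for _ in range(H)]

def render(shape_pixels):
    # XOR the shape's pixels into the field and return a snapshot.
    # The background never changes while the shape's pixels flip every
    # frame, so the shape is only perceivable as change between frames:
    # pause on any frame and all you see is noise.
    for x, y in shape_pixels:
        field[y][x] ^= 1
    return [row[:] for row in field]
```

Because XOR is its own inverse, drawing the same pixel twice restores its original value, which is also why overlapping lines can erase each other, as the "More Weird POV Effect Experiments" description mentions.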
Seiko Kinetic Capacitor/Battery Swap
1K views · 8 months ago
Watch a complete amateur attempting to fix an old Seiko Kinetic watch, replacing the old capacitor with a rechargeable battery. Everything does not go smoothly.
Laptop RAM upgrade
155 views · 10 months ago
This old Lenovo Yoga only had 4 GB of RAM and was getting a bit slow, so follow me as I upgrade it to 16 GB. It's a pretty simple thing to do if you're careful and take your time.
A look inside an £85 Honeywell HR92 wireless radiator valve.
836 views · 1 year ago
A look inside an £85 Honeywell HR92 wireless radiator valve.
Seiko Dancing Hands repair and demo
3.9K views · 3 years ago
Seiko Dancing Hands repair and demo
Lava Lamp - How It Works and Time Lapse
2.1K views · 5 years ago
Lava Lamp - How It Works and Time Lapse
Making a really simple catapult / slingshot
100 views · 7 years ago
Making a really simple catapult / slingshot
Tape measures have loose hooks - why?
224 views · 7 years ago
Tape measures have loose hooks - why?
Constructing a fidget spinner template with a compass and ruler
6K views · 7 years ago
Constructing a fidget spinner template with a compass and ruler
Modeling a shuriken fidget spinner in Sketchup
3.8K views · 7 years ago
Modeling a shuriken fidget spinner in Sketchup
How did you deal with the gray pixels in the video?
This is very interesting. I came across this on "The Useless Web" and it fascinated the heck out of me. They linked this video on the website, so here I am. Great video, by the way!
Very informative video. Well detailed. Thanks
i was literally just thinking about this
Definitely frustrating.
Where is the repair part? You sent it away and you call it "repair" when you only remount the hands...?
It's so weird when I pause it and try to imagine the moving animation in the noise, just to be super off when I unpause it. It's like when I close my eyes and try to imagine where certain things are and what the general image looks like before I open my eyes again.
Any references to celebrities and glass coffee tables will be deleted.
You said glass. Then acrylic.
I know, having glass in the thumbnail gets the point across more efficiently, I thought. No deception intended!
You know, I have a huge square of transparent acrylic I have not used yet, except a little corner. Could turn out to be a good thing! No worries buddy! Thanks.
I like it. I'd suggest a clamp to hold down the towers. Seems like a tight gap between the bed and the platform beneath it, but it should be doable from the sides without the towers. The bed and printing surface look flexible, which could cause issues too. I don't think that tape is going to work even short term for the pillars.
Yes, all good points. The tower design is all wrong - they can be shorter, they should be entirely under the acrylic, and I'll add a way to clamp the bases down to the base.
@@ChrisBLong The Ender 3 is V-Slot based, can't you attach to it?
@@InfiniteCoder01 I guess that would be possible, I didn't want to mess with removing the bed though.
Neuroscientist here. This is called second-order motion, and many insects can see this kind of motion too.
The print quality you show is not actually that good, and your machine can do noticeably better if you adjust the wheels on the Z axis. Also, it would be a good idea to level your bed lower (so that all four springs are tighter). Also, did you check the flatness of your bed cold, or at printing temperature? There should be quite a difference.
That's interesting, what defects are you seeing that I'm not? The bed was initially lower, but as far as I remember I had to raise it nearly all the way because the Z-axis compensation only goes to -5 mm or so - I should re-check this. The bed was cold when I filmed it, but the dishing is at least as bad when hot. I realised I may have bent the bed when I crashed the head into it - is that possible/likely? If so, I may just be able to bend it back. It doesn't seem to be made of a particularly bend-friendly material though.
@@ChrisBLong For one, the rose close-up around 5:28 shows quite an uneven surface; I'm pretty sure that is not the limit for your machine. I didn't quite get the Z-compensation part. Your printer homes Z via the CR Touch probe on the print head, correct? If so, wherever your bed is, that's your zero. If you have a separate endstop for Z and it's too high, move the endstop. A bed that is not particularly flat is not uncommon, especially at this size. The corner knobs are not just there to level the bed - they actually bend it into shape slightly. But if you have a ditch in the middle, you'll probably have to make do with automatic bed mesh compensation, or replace the print surface with glass.
Thanks, I'll keep playing with it. Re: the Z compensation, I don't understand it either! I'm probably doing something wrong. It homes the Z-axis using the probe, but then moving the head to zero on the Z axis leaves it several mm above the bed, unless I set the Z adjustment thing to about -3.40mm. I'm going to have to RTFM at some point, I think.
@@ChrisBLong We all know that reading the manual is just a form of surrender B-)))))))) In all seriousness, does the Z-offset (that's what it's called) change when you change the actual position of your print bed? It's supposed to be just the difference between the height your probe measures and the actual position of the nozzle.
Bro why did you buy an S1 Plus in 2024? Even the $200 V3 SE prints over twice as fast and doesn't make you manually adjust the bed.
A. It doesn't print twice as fast. It is a bit faster, mainly because B. it has a smaller print volume (so a smaller bed, thus a lighter bed). I would agree that the guy made a typical beginner's mistake by going with a bigger print volume, but it's not the end of the world. C. Manually adjusting the bed is pretty much the only way of getting it straight. If you are printing something slightly more functional than benchys and dragons, you'd rather have your bed adjustable. Spending 5 minutes once or twice a year readjusting the bed is hardly a significant waste of time. B-)))
I feel like you are conflating good value with a good printer. While the Ender will work, it's far from the best. Bed slingers are old news now. Mechanical switch type bed probes are dinotech these days. You'll want a Beacon soon enough. You say you're not interested in speed. That sounds like something someone who's never waited 3 hours for that last 3% to print would say. Speed is king now. Get those prints printed. Heating that bed isn't free, after all. Lots to learn, lots to mod. Hope you have fun.
Thanks for the comment. I knew the Ender-3 wasn't the best printer but I wanted to start with something basic and common so there's lots of support and information out there. Having been a noob in lots of different fields over the years, I like to start at or near the bottom and learn the basics first, that way I'll have a better understanding of which aspects of the machine are limiting me and what my priorities will be for my next printer. It's been fun so far which is the main goal, I'm not running a business where productivity/reliability would be crucial.
4:25 Come on, the Ender 3 is not as bad as something like the Anet A8 (which I had for 1.5 years; the mainboard seems to be damaged now, so I'm planning to upgrade to a KP3S). Compared to an Ender 3, which even has auto bed leveling, the Anet A8 seems like a massive pain to work with.
Yeah, I'm very happy with it. It was only after I ordered it that I started seeing loads of videos saying the Ender-3s are overrated, poor quality etc, but based on a sample size of 1, they're great :-)
Welcome to the 3D printing community!
Having the ability to spot camouflaged predators and prey from movement is a valuable capability for a species.
I believe the white works so well because you're XORing the colours. When you XOR a color with white using binary RGB values, you get the exact opposite of that color, creating a big contrast. The more contrast there is the better you can see it. Edit: This may be why green is good too. The human eye is most sensitive to green color, so when it changes to the exact opposite value you see a lot of contrast. To test this, you can try using colors that have the green value of 128 or 127 where there should be almost no green contrast.
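That reasoning checks out arithmetically: XOR with 0xFF is a bitwise NOT of an 8-bit channel, which equals the per-channel complement 255 − v. A quick sanity check (hypothetical helper, not the app's code):

```python
def xor_rgb(c1, c2):
    # Channel-wise XOR of two 8-bit RGB triples.
    return tuple(a ^ b for a, b in zip(c1, c2))

WHITE = (255, 255, 255)

# XOR with all-ones flips every bit of a channel, so v becomes 255 - v:
# the exact opposite colour, i.e. maximum contrast in every channel.
assert all((v ^ 255) == (255 - v) for v in range(256))
assert xor_rgb((200, 30, 99), WHITE) == (55, 225, 156)
```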
Could you please add a red dot in the middle? I'd like to test whether you can still see the cube when there's something to focus on, to follow the screen shake properly.
I was wondering if the effect is still there if the object isn't moving. That is, if you keep the cube stationary but still update the pixels at the same rate along the lines. This should of course be done on a background that is already noise, or the cube will just flash on and off.
Yes, that would work like in the Bad Apple videos I made with this effect - look at my other videos to find them. A 'static' object appears as a shimmering outline against the background of noise, but still disappears when you pause the video.
Since you are looking for a name for this effect, this might help you find something: I described this effect to my friends as 'motion without body'.
With the word bouncing around on the screen, if I close one of my eyes I can make the text out better. What I think is happening is that with both eyes, the pixels significantly mess with depth perception for tightly packed images such as words.
amazing
this would make a sweet xscreensaver screensaver
suzanne
Really curious to know how this would look paired with motion extraction: ua-cam.com/video/NSS6yAMZF78/v-deo.htmlsi=Hv2kC5-q54HkH_GC
Someone processed one of my earlier videos using that 'Posy' technique and posted a comment about it... I'll try to find it again...
When paused on pc use the: < > keys to really show off the effect
I think an issue with the more complex shapes is that IRL they don't have any "lines". (As in, actual objects aren't made of polygons.) Think of an actual sphere: by its definition it doesn't have any edges, but this effect relies on them, so you end up with a weird die with many faces (which in this case is easy to see as an imperfect sphere, but with other objects it's not so simple - like with Suzanne). You could try to approximate the shape more closely to show the round edges, but it'd eventually be too jumbled to tell the individual lines apart. All this to say: I think that's a limitation of the effect - objects have to have enough hard edges to make sense of.
I think the flip book can't work because the image shakes overall. Nothing against your book flipping; it just requires a much more stable image. Each pixel needs to stay in the same place between pages. Even watching computer animations, they get blurry if you shake your head fast enough. I think we can compensate a little for shaking. But if you want to give the flip book a second try, maybe add a solid frame around the picture, so we have reference points and can compensate better? I think in fullscreen, without a reference, I lose the ability to see it; it's easier with smaller, slower head shakes.
On my phone, if I try to track the circle with my eyes I see the corners a bit; if I don't, it's completely gone.
1. The blurred version is actually clearer. 2. What happens if you do the operation on the pixels which are empty instead?
I think it would aid visualization if only "important" lines were visualized. That is, for standard polyhedra like cubes, tetrahedra, icosahedra, etc. all of their edges are visualized. For Suzanne, only the outline of its head, mouth, and eyes are visualized. For the teapot, only the outline of its shape, spout, or handle are visualized. Front-facing vs. back-facing lines may be handled separately. I can already kind of picture the algorithm in my mind: 1. Render all faces in white in an off-screen buffer filled with black 2. Edge detection (Edges in white, all other pixels are black) 3. XOR the off-screen buffer with the normal frame It may be possible to render the face normals and run an edge detection on them to generate additional pixels to render.
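The three steps above could be sketched like this with NumPy (a hypothetical sketch, not the channel's actual Win32/C code): step 1 yields a boolean mask of filled pixels, step 2 keeps filled pixels that have at least one empty 4-neighbour, and step 3 XORs the result into the noise frame.

```python
import numpy as np

def silhouette_edges(mask):
    # mask: 2-D bool array, True where a face was rasterised (step 1).
    # Step 2: keep filled pixels with at least one empty 4-neighbour.
    padded = np.pad(mask, 1, constant_values=False)
    all_neighbours = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                      & padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~all_neighbours

def compose(noise, mask):
    # Step 3: XOR the detected edge pixels into the noise frame.
    return noise ^ silhouette_edges(mask)
```

A filled 3×3 square, for example, keeps its 8 border pixels and drops the single interior pixel, so only the outline reaches the noise.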
This is the sort of Jiggle Physics i like
what would happen if the noise pixels are wayyyy smaller
Are you sure the implementation of the OKLab-to-RGB conversion is correct in your demo? Clipping might occur at extreme a/b values: if you ask for colours that can't be represented in RGB, the nearest representable colour will be used, and the actual luminance of the pixels will be different. It would be great if there were some check that lab->rgb produced valid sRGB results; otherwise it doesn't actually show that blue is less noticeable than red, because the colours got clipped.
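The gamut check that comment asks for could look like this (a sketch using the OKLab → linear-sRGB matrices from Björn Ottosson's published reference implementation; this is not the code from the videos):

```python
def oklab_to_linear_srgb(L, a, b):
    # OKLab -> LMS -> linear sRGB, per the published reference matrices.
    l_ = L + 0.3963377774 * a + 0.2158037573 * b
    m_ = L - 0.1055613458 * a - 0.0638541728 * b
    s_ = L - 0.0894841775 * a - 1.2914855480 * b
    l, m, s = l_ ** 3, m_ ** 3, s_ ** 3
    return (
        +4.0767416621 * l - 3.3077115913 * m + 0.2309699292 * s,
        -1.2684380046 * l + 2.6097574011 * m - 0.3413193965 * s,
        -0.0041960863 * l - 0.7034186147 * m + 1.7076147010 * s,
    )

def in_srgb_gamut(L, a, b, eps=1e-6):
    # A colour is only displayable unmodified if every linear component
    # lands in [0, 1]; anything outside gets clipped, so its on-screen
    # luminance no longer matches the requested OKLab L.
    return all(-eps <= c <= 1 + eps for c in oklab_to_linear_srgb(L, a, b))
```

Running the comparison only over in-gamut (L, a, b) points would rule out clipping as the explanation for blue looking different from red.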
You could try rendering only the outline of meshes, something like what Blender does with its Freestyle drawing for edge outlines.
I love how the blurred frame just looks like noise when you pause it. I'm wondering what it would look like if the background moved instead of the object. You create a random pattern on the left side of the frame and just continually shift the columns to the right. Or instead of a 'random' pattern, you wrap the far-right column to the left side.
The OKLab luminance doesn't seem to quite correspond to perceived brightness for me - with constant "luminance", blue > red > green > grey. The theory of "brighter perceived colour = easier to see" does still seem to hold, though. And subjectively it makes sense: trying to make it out in the grey feels like the same kind of difficult as seeing in the dark.
Yes, it's the same for me - I'm prepared to believe that's because I've messed up the conversions, but reading around this subject also makes clear that perceived brightness is strongly affected by other factors, like the background colour, other surrounding colours, the brightness of the room you're in, and others. It seems very hard/impossible to generate perfectly equal perceived brightness via any kind of simple mathematical transformation.
so many mouth noises ._.
A thing I've noticed: when moving my mouse cursor around the video in a smooth circular motion, my eyes can track the cursor steadily. With increasing speed of the motion, the cube becomes less and less visible. This might be related to the flip-book motion you've mentioned. Maybe one could either put a white dot on the screen that moves smoothly with variable speed, or the image could move by itself, with a fixed crosshair in the middle of the screen used to anchor the view.
To easily compare how strong the effect is with different colors, it might be good to update the image monochromatically and color each frame with the selected color as it's displayed.
Is there any chance you missed a conversion from a linear color space to sRGB which causes the dark gray and the green to look darker than the blue and the red at the same L value? Btw, OKLCH is to OKLab what HSL is to RGB.
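For reference, the standard sRGB transfer pair from IEC 61966-2-1; writing linear-light values to the screen without the encode step would indeed make mid-tones look darker, as the comment suggests:

```python
def linear_to_srgb(u):
    # Encode linear light (0..1) to an sRGB display value (0..1).
    return 12.92 * u if u <= 0.0031308 else 1.055 * u ** (1 / 2.4) - 0.055

def srgb_to_linear(v):
    # Decode an sRGB display value back to linear light.
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

# Mid-grey in linear light encodes to roughly 0.735 in sRGB; skipping the
# encode therefore displays 0.5 directly, which looks noticeably darker.
```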
What about bluring the wire-frame?
what if you make the resolution a lot higher?
I think the best way to make more complex shapes more visible is, instead of rendering the wireframe, to render hard edges: those would include the outline/silhouette of the mesh as well as lines separating any areas where there is a change of depth. Sorry, I'm not good at explaining, but basically imagine looking at a donut from the side, not quite 90° but slightly from the top so you can see part of the hole. In that example you would, at least in real life, make out the dip because there is a visible edge around the bottom part of the hole. This effect can be achieved by just drawing lines where there is a great change in distance; it should be fairly easy to implement by storing each face's distance from the camera and drawing edges when a certain threshold of change occurs. Also, another thing about the OKLab implementation and the irregular colour intersection chart: the reason some colours just suddenly get bright is that the XOR operation is still happening in RGB. It would be cool to see if there was a way to swap the XOR operation to use the OKLab L, a, b values instead.
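The depth-threshold idea in the first half of that comment can be sketched straight from a depth buffer (hypothetical NumPy sketch; the threshold value is arbitrary):

```python
import numpy as np

def depth_edges(depth, threshold=0.1):
    # depth: 2-D array of per-pixel camera distances (a depth buffer).
    # Flag a pixel where depth jumps by more than `threshold` against its
    # left or upper neighbour: silhouettes and features like the rim of a
    # donut hole are flagged, while smoothly varying surfaces are not.
    edges = np.zeros(depth.shape, dtype=bool)
    edges[:, 1:] |= np.abs(np.diff(depth, axis=1)) > threshold
    edges[1:, :] |= np.abs(np.diff(depth, axis=0)) > threshold
    return edges
```

The flagged pixels could then be XORed into the noise in place of the wireframe lines.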
Love this series! There's a popular horror game series called Faith that has a pixel art style, and it has some Very Scary cutscenes with a rotoscoped pixel art style. I wonder how some of those scenes might look with this effect.