I love the use of technical terms. "A sort of triangulary cylinder"
Otherwise known as a prism.
triangular prism*
Triangularly cylindrical prism.
2:20 "What's this?"
Topologist: a torus
Physicist: a point mass
Materials engineer: a group of crystal lattices
Biologist: a population of skin bacteria on a rock
Mechanical engineer: a part that is very difficult to manufacture
Graphic designer: an ugly font
Photographer: bokeh with artifacts
Programmer: an object
Surrealist: a horse galloping on a tomato
Chemist: A ceramic, probably silicate, with a mostly organic substance applied to the surface.
Lawyer: An object that appears to be consistent with a description of a mug.
The office's single IT guy: My fuel for the day.
Running ubuntu and then a "WPF in C#" book in the background. Seems like a jack of all trades. Good stuff!
I love these videos of Mike Pound, so interesting!
That took a long time to explain a really simple concept. There's plenty of interesting stuff on the 3D modeling topic to talk about, though; now is a great time to jump onto the theme too.
I could listen to Dr Mike all day...
A good way to illustrate that the hulls can't be understood by the camera would be to imagine the object made from a vantablack material. You'd only see the silhouette, you'd understand the convex hull, yet wouldn't be able to tell wether there's really a hull in there. (Apart from your intuition about the object, as they mentioned haha)
+Adelar Scheidt *whether
That is a really cool way to help wrap one's head around the problem. Thank you.
*Edit button
This reminds me of the CVPR paper "3D ShapeNets: A Deep Representation for Volumetric Shapes" which models the uncertainty of the 3D geometry of an object and tries to find the next-best view to minimize the uncertainty.
So basically, a silhouette tells you where the object definitely isn't and where it might be, but not where it definitely is.
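That sums it up well. A minimal sketch of that intersection logic in Python (the toy orthographic projections, grid layout, and function name here are my own illustration, not from the video):

```python
import numpy as np

def carve(voxels, views):
    """Keep only voxels whose projection lands inside every silhouette.

    voxels : (N, 3) integer voxel centre coordinates
    views  : list of (project, silhouette) pairs; project maps world
             points to (col, row) pixel coords, silhouette is a 2D
             boolean mask (True = object pixel)
    """
    keep = np.ones(len(voxels), dtype=bool)
    for project, silhouette in views:
        px = project(voxels)
        h, w = silhouette.shape
        inside = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = silhouette[px[inside, 1], px[inside, 0]]
        keep &= hit  # outside any one silhouette => definitely not object
    return voxels[keep]

# Toy scene: a 4x4x4 grid, object is a 2x2x2 cube in one corner,
# seen from the top (x-y plane) and the front (x-z plane).
grid = np.stack(np.meshgrid(*[np.arange(4)] * 3, indexing="ij"), -1).reshape(-1, 3)
sil_top = np.zeros((4, 4), bool); sil_top[:2, :2] = True
sil_front = np.zeros((4, 4), bool); sil_front[:2, :2] = True
carved = carve(grid, [(lambda v: v[:, :2], sil_top),
                      (lambda v: v[:, [0, 2]], sil_front)])
```

Two views are enough here only because the shape is convex and axis-aligned; real scenes need many more, and no number of views ever carves a concavity like the inside of the mug.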
It's amazing how much progress we've made on this in the last 3 years. Look up Meshroom or photogrammetry if you haven't seen it yet. I don't know if it works the same way as this, but man, new software is becoming so powerful it's incredible
+1 for pronouncing Wageningen correctly
You forgot to write beneath the Rubik's cube >.>
No
+MMMIK13 Bit of a Parker Square...
+MMMIK13 maybe they meant :-P
+MMMIK13 maybe it is a loosely defined derivative of XML and hexagon is defined to not need a closing tag.
+BrickOfDarkness in that case it's missing the SGML doctype
"Voxels are quite popular these days due to a certain piece of software called Minecraft"
That got me 😂
You said 'Wageningen' with the proper 'g' sound. You're awesome! :D
3:29 Basically the rule for all youtube videos.
+Victor P. thought the same :D
read this comment at the exact moment he said it lol
false.
Love this guy! Clear and to the point.
I see dr. Mike Pound in the thumbnail.
I click.
I once implemented space carving using POV-Ray, because it's very easy to use it to code extrusion of 2D paths and intersections of 3D objects. The drawback is that you don't get a point mesh that can easily be imported into modeling software.
It's like a 3D CT scan. You measure how well X-rays pass through an object from a number of angles in a 2D plane to get a 'slice' image. Expand this to 3D.
+TheBigBigBlues Only that with a silhouette either the whole ray gets blocked or none of it; there's no value in between
+TheJoshinils proof ?
Similar idea, but CT is way more powerful because you have way more information related to depth.
+TheJoshinils Yeah true.
***** I thought we were talking about X-rays, in which case I am pretty sure "there's no value in between" doesn't apply
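The distinction this thread is circling can be shown in a few lines: an X-ray detector records a line integral of density along the ray, while a camera silhouette only records whether the ray hit anything at all. A toy 1D illustration (the numbers are made up):

```python
import numpy as np

# A 1D "slice" of material along one ray: a hollow mug wall seen
# edge-on (material, air gap, material).
density = np.array([0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
solid = np.ones(8)

# X-ray style measurement: the detector records the *sum* of densities
# along the ray, so the hollow interior lowers the reading.
xray_hollow = density.sum()   # 4.0
xray_solid = solid.sum()      # 8.0

# Silhouette style measurement: the camera only records whether the ray
# hit *any* material, so hollow and solid give the same answer.
sil_hollow = float(density.any())  # 1.0
sil_solid = float(solid.any())     # 1.0
```

CT can distinguish the two objects (4.0 vs 8.0); the silhouette cannot (1.0 vs 1.0), which is exactly why space carving can never hollow out the mug.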
About halfway through the video, when it was asked "how would you know that something's hollow", an idea started pulsating in my mind and just won't leave. Light!
An additional light source should be a huge help there, because you can base a lot of predictions about your shape on shadows, right? Capture an image without that extra light, then let your additional light source rotate around the object or gradually get closer to it, capture a shadow, and get definite evidence that your mug isn't stretching a mile behind -- even without rotating it!
Even just turning some static light source on and off for every picture should help -- if we know its exact position, for instance. Can't you use that in this whole scheme?
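For what it's worth, the idea in this comment can be phrased as a carving rule: if the light's position is known and a surface point is observed to be directly lit, everything on the segment between them must be empty space. A toy voxel sketch (the function name and setup are my own, assuming known light and lit-point positions):

```python
import numpy as np

def carve_lit_rays(occupied, light, lit_points, step=0.25):
    """If a surface point is directly lit by a point light at a known
    position, everything strictly between the light and that point must
    be empty, so those voxels can be carved away -- even ones inside
    the visual hull, such as the inside of a mug.

    occupied   : 3D boolean voxel grid (True = might be object)
    light      : (3,) light position in voxel coordinates
    lit_points : (N, 3) surface points observed to be directly lit
    """
    light = np.asarray(light, float)
    for p in np.asarray(lit_points, float):
        ray = p - light
        dist = np.linalg.norm(ray)
        ray /= dist
        # March from the light toward the lit point, stopping one voxel
        # short so the lit surface voxel itself survives the carve.
        for t in np.arange(step, dist - 1.0, step):
            i, j, k = np.floor(light + t * ray).astype(int)
            if all(0 <= c < s for c, s in zip((i, j, k), occupied.shape)):
                occupied[i, j, k] = False
    return occupied

# A solid 4x4x4 block that silhouettes alone could never hollow out:
vox = np.ones((4, 4, 4), dtype=bool)
# A light off to the side directly illuminates a point at x = 3.5, so the
# straight line to it must pass through empty space.
carve_lit_rays(vox, light=(-2.0, 1.5, 1.5), lit_points=[(3.5, 1.5, 1.5)])
```

This is only a sketch; a real system would also have to decide robustly which pixels are "directly lit", which is its own hard problem.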
Another alternative is to use narrow-spectrum lights, each emitting from a 60-degree offset, and a shadow/shader for each color to determine vectors to cleave. (Interestingly, you could possibly use the color-mixing gradients for depth and opposing surface shape, especially if you compare the object's texture color profile to a full-spectrum-lit one.)
That tomato seedlings footage looked awesome! Would love to see more of that! Is there any additional YouTube footage available of the high speed use of space carving?
You should make a time-lapse of him just drawing lines all day
it's great when you watch these videos and they help with your revision!
Voxel ambient occlusion would make the 3d models a lot more pleasant to look at, it's easy to implement and it can be zero overhead for the GPU.
The way he pronounces Wageningen! Fantastic. Wacheningen
ubuntu and sublime in the background, you have my thumbs up, from a fellow brother in arms
Very great video, proud of your work:)
If the camera had a light source directly behind it illuminating everything it sees (and always aligned with the camera's PoV), so shadows are dependent on the camera position, you could calculate depth from shadows.
Can you release a 10-hour video loop of you pulling a sheet off a stack, drawing lines on it, rinse and repeat?
Purpose? just to see who would watch you draw lines for (almost) all day? XP
there used to be some software, way back in the day, that let you take a toy or something small, stick it in front of the lens of your webcam, and manually mark out the edges in each frame; then it would run through its algorithms and generate a painted 3D mesh of the thing.
This might well explain how that software worked.
@Computerphile and the person in the movie.
What if you move the camera sideways? Eventually the center of the camera lens will line up with the edge of the Rubik's Cube and then you can see that it actually is a straight edge.
Sion Yeah, that requires you to get lucky with how you place your camera.
13:09 "And finally, our rubik's cube, which is almost, in some sense the worst of our reconstructions. Because it's cube-ish, right? But it's not particularly a cube. I mean, that's cube-kind-of." haha!
Shout out from The Hague, a mere 15 minutes from the Westland, where at night the sky is bright with light from the greenhouses.
What if you also used translational instead of only rotational motion to get different camera angles? Wouldn't that carve away the sides of the cube more accurately?
+David Clemens That's what I was wondering when he described space carving: how much translating and rotating would improve the results as compared to the rotational approach. That, and "why not use an orthographic lens", but he answered that one; I never realized just how much they cost.
This channel is cool as hell.
I will never trust mugs to be mugs again
a mug with an ugly mug who likes to mug mugs.
Congratulations, you have begun your training to be a Certified Fair Witness.
Morgan Yu is that you?
This technique was used to produce the crude "holograms" in Steven Spielberg's film "Minority Report."
4:00 the key is to avoid the computer to think it´s a #parkercube
false.
0:05 Ubuntu. And is that Python in the right hand window?
It's pretty easy to know if something is hollow or not... cameras have to focus to take a picture and if they focus in different places then you have your answer
You can't always do that, but that's an interesting idea
The same as what's called "backprojection", which is widely used in medical imaging, especially for semi-transparent objects.
1:21 Is that Oskar's Treasure Chest Rubik's cube on the shelf I see?
In a single video, Mike has single-handedly upset every geometer in a 500 km radius of Nottingham
What if you always assumed that all shapes are hollow, and removed parts of that shape based on pattern recognition (but you would need to have a database of shapes) and see what that blob would look like, if I removed "this" part (whatever that is) would it be closer to some other part in my DB via heuristics?
That's really cool. I hadn't heard of space carving before, but it makes a lot of sense. I wonder if there's a sort of hybrid between space carving and photogrammetry that'd solve a bunch of the problems associated with each.
shouldn't you be able to get rid of the bulges by moving the cube to the side so that the edge would be in the center?
+Dieze TA Good point.
+littlebigphil Depends how much harder doing everything else becomes after translating the camera location; should work though
why not use a distance sensor for measuring distance and hence hollowness??
How far backwards do we have to go to have cameras that cannot detect distance (autofocus)? You simply need multiple focus points to measure the hollowness of the mug. A bit of extra software will communicate this to your computer.
There has to be some elegant way to detect a 90 degree corner and set the camera up such that one of its peripheral lines lines up perfectly with one of the sides of such a corner..
That's the point of this method: You can't know that very well (if at all) from the outset, you can only optimise so far. You'd need pretty much to know the shape beforehand to set the cameras up. And when you know the shape already, why would you then still bother going through the space carving?
How is this different to what is done with computed tomography reconstruction?
Netherlands represent, woo! Wageningen, haha, solid effort on trying to pronounce that :)
My takeaway from this video is that, with low-resolution space carving that can't take depth into account, the jungle plant is functionally indistinguishable from Sideshow Bob's head.
I'm thinking, could you use sound to help in fine-tuning the hollow shape, via the frequency change?
thanks for the info on optical 3D systems. (I bought a Lytro after watching that video.) can you discuss how fringe projection 3D systems work? I don't get the phase wrapping and unwrapping that is explained in the white papers. thanks again for the great series.
I am quite lost on this subject. Which is the playlist (or first video) one can watch to understand computer vision better?
12:40 youtube compression why
photogrammetry programs like Agisoft PhotoScan and Reality Capture seem to be way ahead of what they are doing, or is space carving just one of the methods used during photogrammetry?
@COMPUTERPHILE: Regarding the hollow shape thing... Why don't they make use of an external light source shining on the scene at a different angle than the camera? The shadow that's being cast could be used to compute some depth information, couldn't it?
But it wouldn't help you with the inside of the mug
This reminded me of the puzzle game Picross 3D.
I completed that game a few months ago. I've never had a happier moment that that
+Watch The World Burn *than that
Hah! Me too, had it for 5 years or so!
There's a sequel on 3DS but Japan only :/
What about using stereography to develop depth information? Just a thought...
Would it not be possible to have 3 cameras. One aimed at the centre and the other two aimed at the edges instead of buying an orthographic lens?
good luck on that project mate!
Hi, where are you located?
Can we get the voxel data that you generate?
Why are some of the green cubes Bigger?
I was sitting here for at least a minute thinking about why he's talking about Vauxhalls...
+mellanslag HAHAHAHA you killed me
Haha this must be the most British comment I've read in a while
I never could figure out how the heck you pronounced that.
Because it looks like a square... With wheels.
Hah! I was trying to remember when he *was* after reading this. "Voxhauls, who the heck is that? I don't remember him talking about someone named Voxhauls...."
so are depth cameras just like, 3D cameras? Or do they have some sort of sensor like infrared or something?
+Soda POP 67 The two I know of (because they are cheap and easy to find information on) are the kinect and intel realsense cameras.
Details vary, but the first generation Kinect is an ordinary 2D camera, an infrared camera, and an infrared projector.
The projector projects a specific pattern of infrared light, and the infrared camera picks up the pattern on the environment. The way the pattern is distorted gives the depth, and then everything else about a picture comes from the standard visible light camera.
There's other ways of doing it, different kinds of patterns, and so on, but at the moment, infrared cameras seem to be involved in most of the 'cheap' depth camera setups...
KuraIthys ah ok, thanks ^-^
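For the curious, the geometry behind such pattern-shift systems is ordinary stereo triangulation: depth is focal length times baseline divided by the measured shift (disparity) of each projected dot. A sketch (the numbers below are rough illustrative figures, not official Kinect specs):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth from the measured shift of a projected pattern dot.

    Structured-light depth cameras treat the projector and the IR
    camera as a stereo pair: a dot's shift (disparity) relative to a
    reference pattern gives depth via z = f * b / d.
    """
    if disparity_px <= 0:
        raise ValueError("dot not matched; no depth estimate")
    return focal_px * baseline_m / disparity_px

# Illustrative ballpark numbers only (roughly Kinect-scale):
z = depth_from_disparity(disparity_px=20.0, focal_px=580.0, baseline_m=0.075)
# A larger disparity means a closer object.
z_near = depth_from_disparity(disparity_px=40.0, focal_px=580.0, baseline_m=0.075)
```

Since depth falls off as 1/disparity, these cameras lose precision quickly at range, which is one reason they are usually short-range devices.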
Is this a Team Viewer popup on the screen near the end of the video? ^^
Interesting stuff. Moving on, from what I can understand, what this computer is missing is color and lighting. As humans we can see how flat a surface is because of how the color and lighting change across an object's surface, giving depth values or a sense of roundness. If a computer had the same concepts, could it make better results?
Why was Ubuntu in the background?
Link to the full Tomato-Seedling-Footage?
I'm waiting for the follow up video :)
Have they made a video on octrees yet?
I wonder, aren't Kinect cameras very cheap right now? Does anyone know how hard it is to use several?
I love this channel, but so much of it goes way over my head, can anyone recommend a channel like this dedicated to programming and software? Thanks.
Couldn't you use a single light source with a known position in space, to cast a shadow, and post process the resultant pixel intensities to approximate hollow features?
"It's not hollow, because how would we know?"
Can you not use an rgbd camera to do it? I found it odd that you started talking about how it's impossible to tell that a mug is hollow immediately after discussing rgbd, but never mentioned why/why not use that to make space carving more accurate.
When you're so early it's still in 360p
Oct-tree video! We want the oct-tree video! Today!
Is that Vim with molokai colorscheme on the monitor?
+Tommy Ip I didn't take a close look but I think it's Sublime
+Tommy Ip pretty sure it's sublime text
+FreekyMage yep definitely Sublime.
+Tommy Ip sublime
+Tommy Ip yeah its sublime text. i've taught him well.
There were his fingers behind the mug
9:15 Am sorry, but here you are mixing visual hull with photo hull... the algorithm you are describing is called shape from silhouette, which is based on binary segmented images (background/foreground). Space carving is a different algorithm.
Why are you not factoring light into the equation? Light affects the shades of color and thus gives depth.
Use monocular SLAM and you get a point cloud that you can mesh easily.
7:56 A voice edit?
Can't you get some depth information from the way the colours change as you rotate the object?
+Bart Stikkers Yes, but presumably that's not space carving. Carving is safer considering the object *could* be painted deceptively.
can't you just move the camera a bit to either side so it is looking right along the edge and carving it as it should be?
+Mateo Vozila Imagine you are moving the camera (or the subject) 360 degrees for the full angles.
Then you also do pitch for another 360 degrees.
Then, you move the camera left/right 30 centimeters in .25 centimeter increments for a total of 120 steps.
And another 120 steps up/down.
Multiply them all together... 360*360*120*120 = 1,866,240,000 or 1.8 billion images taken.
Now do you understand the problem? Given infinite time, one could move the camera around forever and get a better representation of the object. But they don't have infinite time now do they?
And that is before you consider that 360 degrees is not that great a resolution, nor is 120 by 120. You would still end up with a coarse looking object, just ever so slightly more detailed.
If you really want to scan something you need depth measurements and generalizations / averaging.
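The multiplication in the comment above is easy to sanity-check:

```python
yaw_steps = 360    # full rotation around the object in 1-degree steps
pitch_steps = 360  # full pitch rotation in 1-degree steps
x_steps = 120      # 30 cm of sideways travel in 0.25 cm increments
y_steps = 120      # 30 cm of vertical travel in 0.25 cm increments

images = yaw_steps * pitch_steps * x_steps * y_steps  # 1,866,240,000
```

Roughly 1.9 billion views, so brute-forcing camera positions really is off the table.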
+Cadde but you would only need the six pictures that go along the sides and each pic. eliminates all the empty space from that side of the cube.
Mateo Vozila Cube yes. But this system is meant to work on any shape, remember?
If you design your camera movement around the particular shape then you might as well model the shape in a 3D editor and fill it with voxels.
It's hexagonal, the Rubik's cube, when viewed from a side where you see three faces.
How about using the texture / reflecting properties (BRDF and stuff) to infer the shape more precisely? IIRC, the Université de Poitiers (France) has been doing this for quite some time now. At least that's what they told their students. :)
+Ceelvain Photogrammetry does this and is widely available. Under the right conditions the results are spectacular. I doubt it is as fast, though, so in light of the end of this video, where he mentions space carving working rapidly in a factory setting, it would not be practical. Check out the Sketchfab website; there are literally tens of thousands of uploaded photogrammetry results which you can view in your browser.
+ian pretorius Absolutely awesome.
Is that a rhombus?
0:04 At a guess I’d say that’s Python code on the right half of the screen.
Great vid! Thanks
i also like to carve if you know what i mean
No ZimoNitrome, I do not know what you mean
ok?
This is basically an actual version of the joke:
How do you make a statue of an elephant?
Take a piece of stone, and cut away everything that doesn’t look like an elephant.
*ba dum tss* xD I’ll be here all week.
Can someone tell me why they don't use triangles rather than squares?
+James H I think it's because pixels are square, so when they wanted to push them into 3D they just turned into cubes.
Ah, Ubuntu's Unity, that is a blast from the semi-recent past.
"They can't tell us that it's hollow because... How would they know?" Deep learning
You could use a moving lamp to detect holes.
Want more on orthographic camera!!
Yes! Same here, all I (think I) know is it just zooms a lot, which matches with Dr Mike saying it gives a small image and also the smaller the image, the smaller the angles and more parallel the "outer lines" of the camera, giving you an orthographic view! Sorry for going off lol you might already know this
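That intuition can be made concrete with a toy pinhole model (entirely illustrative, not from the video): as the camera backs away and zooms, points at different depths project to nearly the same place, which is exactly the orthographic limit.

```python
def perspective_x(x, z, d):
    """Pinhole projection of a point at lateral offset x and extra
    depth z, seen by a camera d units from the reference plane,
    scaled so that a point at z = 0 keeps its lateral position."""
    return x * d / (d + z)

# Front and back edge of a cube face: same lateral offset, different depth.
near = perspective_x(1.0, 0.0, d=2.0)  # front edge
far = perspective_x(1.0, 1.0, d=2.0)   # back edge projects narrower

# Back the camera way off and "zoom": the difference almost vanishes,
# approximating an orthographic (parallel-ray) view.
near_tele = perspective_x(1.0, 0.0, d=1000.0)
far_tele = perspective_x(1.0, 1.0, d=1000.0)
```

This is also why a true orthographic (telecentric) lens has to be at least as wide as the object it images, which is part of what makes them expensive.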
I think you can throw away that WPF in c# 2010 book
+MarcBot Nope I was looking for a book on modern MVVM/WPF techniques recently and there isn't very much out there that is any good. Even the popular frameworks tend to fall short in other areas, like doing dependency injection wrong, and a lot of the books have terrible mass 1* reviews. I wouldn't be surprised if this book is just as good as any other.
+ghelyar It's really not anything specific to WPF. Most of how to do things right with WPF is how to do things right with C# in general. As long as you follow good practice guidelines, WPF is just a really wild mix of C#'s best features.
+MarcBot Nope Maybe in 1000 years it will buy you a kingdom, after WPF 2010 turns out to have been the pinnacle of human technological achievement before it all went to hell. I think he should keep it and pass it on to his descendants to preserve this wisdom.
Alan Hunter There are things to know which are specific to WPF though e.g. doing MVVM well is something that general purpose C# does not teach you. Unfortunately, I have found these books tend to lack anything useful. They might teach you some standard controls, how to use data binding and how to write a value converter but that's about it. Most of them still treat WPF as if it's WinForms, and some of them don't even get the C# basics right.