Sean Parent speaks so much like a professor/teacher. I love his tone and pace!
This is a really good talk; no idea what everybody below is complaining about. It's not even particularly slow compared to other talks.
I really enjoy his speaking rate. For anyone who doesn't, that's why YouTube has playback-rate adjustment. I use it to slow certain presenters down; others can use it to speed him up.
This is one of the best talks I've heard in a while. You can set playback to 1.25x or 1.5x if you think he speaks at too slow a pace for you.
Love the statement at 20:50. It's something I've been arguing about with big-O notation purists for a while now.
Just look at how std::sort for random-access iterators falls back to an O(n^2) algorithm when the range is small enough.
Great talk! The tree representation outlined in 44:25 looks like an Euler Tour Tree, introduced in 1984.
In the window-hierarchy example, if a unique pointer were used instead of a shared pointer, that would make it a composite object, no?
That would then make class hierarchies perfectly fine. Am I missing something?
Just chiming in against the "too slow" complaints. I prefer the considered approach; it's x10,000,000 better than _some_ talks where they have death by PowerPoint (too many slides) and so much to say that they announce, "Comments and questions at the end, please" and then often never get to the end. Just compare this talk to any Lakos talk. Heaven!
Very well done. I really enjoyed this talk.
'Let us form some happy little algorithms, okay?'
35:10
when are we going to have more chapters, can't wait lol : )
When the moon hits your eye like a big pizza pie, that's a rotate
11:17 What if I told you 4 is actually greater than 3?
It's funny because we don't use greater-than in standard-library predicates; it's always less-than, used to define a strict weak ordering.
Does this book exist yet?
How about using some of these good data structures in Photoshop, so it doesn't take forever to load while doing no work that's useful to the user on startup?
Great insights shared by Sean Parent. A must-watch.
The talk is exactly 64 min long.
Sean, what a beautiful speaker.
What is the `trailing_of_begin()` at 51:05? Hm. Apple's UIKit has a window "hierarchy" and detects a tap by passing the tap down through the hierarchy.
Each node has two in-edges: a leading edge (in the picture, the left side, pointing down) and a trailing edge (the right side, pointing up). `begin(f)` points to the leading edge of `A`; `trailing_of(begin(f))` returns an iterator pointing to the trailing edge of `A`. After inserting `B, C, D`, the result is the image shown.
Are there transcripts of CppCon videos somewhere?
It would have been clearer with more code examples, like the dos and don'ts. But perhaps I'm just too tired for my brain to work properly.
So I feel a bit dumb after watching this. I understood most of his ideas, but only up to a point in each case. Something gives up in my brain and I lose the feeling of comprehensive understanding.
Yeah, his use of vocabulary was a bit bothersome in some places. I'll have to re-watch with Google in another tab... it will be a good thing, though, and bring me to another level.
Rewatch at 1.5x. You'll be surprised.
Reassuring that I'm not the only one constantly feeling a little bit stupid.
These types of things aren't understood for free. He has done as he recommends: he has dedicated hours to thinking about STL algorithms and their implementations, and he references an entire book he has carefully read as the source of many of the concepts he brought up. People often see a challenge and give up if things don't immediately click. The only way to be like him is through hard work. Many people who picked up programming on their own never studied data structures and algorithms for hours. I'd recommend getting an introductory book on those topics and diving right in if you are one of these people. It takes a good chunk of time and effort.
Digital Imaging.
+Evgeniy Zheltonozhskiy Transcripts are under ". . . More".
At 21:00: log2(1'000'000'000'000) ≈ 40 tests per search, and each of these is 200 times slower than a cache-friendly linear test. So 8'000 is still faster than 500'000'000'000 (the average number of tests in a linear search). Big-O notation does make sense, or else I didn't get the idea of the example.
He meant that a cache-friendly n log n algorithm might run faster than a cache-unfriendly linear one:

linear, cache-unfriendly (the 200x comes from slow memory access):
1'000'000'000'000 * 200 = 200'000'000'000'000

linearithmic, cache-friendly (the 40x comes from the log factor):
1'000'000'000'000 * 40 = 40'000'000'000'000

200'000'000'000'000 / 40'000'000'000'000 = 5, so the n log n version could end up being about 5 times faster (or roughly half that on average, if the linear search only scans half the data).
tl;dr use vector for everything.
I liked this talk, but apparently it goes against many well-established and useful design patterns. Self-referential classes are really nice sometimes, and I don't think we should make an explicit effort to avoid them when the abstract model of the problem is clearly self-referential.
19:33: an accidental lookup in a map became n*log(n).
watched it at 2x speed, still too slow
zZZZZZZZZZZZZZZ
And this, boys and girls, is what happens when you are given one hour to talk and have (if that) only 30 minutes of material to fill it with.
And this, boys and girls, is what happens when you have nothing to say but you make a comment saying it anyway
Is this guy in slow motion? lol
Pro tip: use YouTube's 2x speedup... I can't watch any talks without it.
M M, pro tip: you can't on mobile.
My favorite part is where he says he must have talked faster than he was expecting to.
Now you can
ok?
God that guy loves to listen to himself
Your perspective is bizarre. He most likely makes a million or more a year programming. His viewpoint that he has developed over decades from real-world experience, reading books, studying code beyond what he needed, etc. is both highly valuable and justified. In a situation like this, people jealously associate success from hard work with stuff like arrogance. Sometimes, a person just knows what they are talking about.
Terribly slow and exhausting.