Beautiful explanation and great idea for a series. I had looked at several papers and strongly thought this is what was going on, but all of them overcomplicated the basic idea.
Love it, please please keep the 5 min explanation. amazing
Thanks for not shouting.
Thank you for an easy explanation
Like the vibe! Cool vid!
This sounds a lot like particle swarm optimization, other than the weighting factor.
Not really, it is a proper recursive Bayes’ Filter
@@CyrillStachniss i’m just saying it bears resemblance to me. Great lecture!
Please discuss feedback particle filter
perfect thank you for the video :)
Enough said.
So it basically depends on an obscure model to define its weights. What if the model is wrong?
Can it update the model logic, given that maybe we want a certain reward?
If the model is wrong, the filter will not converge (meaning your estimate has very low certainty or confidence).
Correct, if you use a wrong observation model, you cannot expect your filter to converge to the right solution.
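To see concretely how the observation model defines the weights, here is a minimal 1D particle filter sketch (my own illustration, not code from the lecture; the Gaussian likelihood and all noise parameters are assumptions). Each correction step weights particles by how well they explain the measurement, so a wrong observation model directly produces wrong weights:

```python
import random
import math

def particle_filter_step(particles, motion, observation, obs_noise):
    """One predict/correct cycle of a toy 1D particle filter."""
    # Predict: propagate each particle through the motion model with noise.
    particles = [p + motion + random.gauss(0.0, 0.1) for p in particles]
    # Correct: weight each particle by the observation likelihood
    # (here an assumed Gaussian sensor model) -- this is where the
    # observation model defines the weights.
    weights = [math.exp(-0.5 * ((observation - p) / obs_noise) ** 2)
               for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample proportionally to the weights.
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
n = 500
particles = [random.uniform(-5.0, 5.0) for _ in range(n)]
true_pos = 0.0
for _ in range(20):
    true_pos += 0.5                        # robot moves 0.5 per step
    obs = true_pos + random.gauss(0, 0.2)  # noisy sensor reading
    particles = particle_filter_step(particles, 0.5, obs, 0.2)

estimate = sum(particles) / len(particles)
print(round(estimate, 1))  # close to true_pos = 10.0
```

If you replace the Gaussian likelihood with a model that does not match the sensor, the weights no longer reflect reality and the estimate drifts away from the true pose, which is the non-convergence described above.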
@@CyrillStachniss
Thanks.
So, does the pose graph algorithm (least squares ...) not care about the observation model?
Thanks. I needed that explanation.
One comment - please try to speak more clearly, as some of the words are inaudible