Kasper Green Larsen
Sublinear Time Shortest Path in Expander Graphs
Computing a shortest path from a source s to a destination t in an undirected, unweighted graph is a basic algorithmic problem, and the classic Breadth First Search (BFS) algorithm solves it optimally in linear time. But is it possible to solve the problem in sublinear time, that is, without even looking at the entire graph? It is not hard to show that this is impossible in general. A common practical heuristic used to speed up BFS is to run it simultaneously from both the source and the destination. While this has no asymptotic benefit in the worst case, it has been shown to improve the running time to around the square root of the number of vertices when the input graph is chosen randomly, e.g. as an Erdős–Rényi random graph.
In this work, which was presented at the MFCS'24 (Mathematical Foundations of Computer Science) conference, we identify a natural "deterministic" property of a graph that also implies sublinear time shortest paths via bidirectional BFS. This property is the well-studied notion of being a (spectral) expander graph. The video introduces all these concepts and gives a proof of the sublinear running time on expanders.
Views: 112
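To make the heuristic concrete, here is a minimal Python sketch of bidirectional BFS (illustrative code, not from the paper or video; the graph is assumed to be a dict mapping each vertex to a list of its neighbors):

```python
from collections import deque

def bidirectional_bfs(graph, s, t):
    """Shortest s-t distance in an unweighted graph, growing a BFS ball
    from each endpoint and stopping once the two balls touch."""
    if s == t:
        return 0
    dist_s, dist_t = {s: 0}, {t: 0}
    frontier_s, frontier_t = deque([s]), deque([t])
    while frontier_s and frontier_t:
        # Expand the smaller frontier; on expanders each ball only needs
        # to reach roughly sqrt(n) vertices before the balls intersect.
        if len(frontier_s) <= len(frontier_t):
            frontier, dist, other = frontier_s, dist_s, dist_t
        else:
            frontier, dist, other = frontier_t, dist_t, dist_s
        next_frontier, best = deque(), None
        for u in frontier:
            for v in graph[u]:
                if v in other:  # the two search balls meet at v
                    cand = dist[u] + 1 + other[v]
                    best = cand if best is None else min(best, cand)
                elif v not in dist:
                    dist[v] = dist[u] + 1
                    next_frontier.append(v)
        if best is not None:  # finish the whole level before answering
            return best
        if frontier is frontier_s:
            frontier_s = next_frontier
        else:
            frontier_t = next_frontier
    return None  # s and t lie in different components
```

Plain BFS explores one ball of radius d(s,t) around s; the bidirectional variant explores two balls of radius about d(s,t)/2 each, which is where the square-root savings on random and expander graphs come from.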

Videos

The NFA Acceptance Hypothesis: Non-Combinatorial and Dynamic Lower Bounds
Views: 230 • 7 months ago
In this video, I present a paper from the ITCS'24: Innovations in Theoretical Computer Science conference. The main contribution of the work is to introduce a new hardness conjecture to the landscape of fine-grained complexity. The conjecture asserts that the naive algorithm is optimal for determining whether a given string is accepted by a Nondeterministic Finite Automaton (NFA). Using this conjecture...
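For reference, the naive algorithm in question is just subset simulation, sketched below in Python (illustrative representation: transitions as a dict from (state, symbol) to a set of successor states; epsilon moves omitted):

```python
def nfa_accepts(transitions, start_states, accept_states, word):
    """Naive NFA acceptance: maintain the set of currently reachable
    states and push it through the word one symbol at a time."""
    current = set(start_states)
    for symbol in word:
        current = {q2 for q in current
                       for q2 in transitions.get((q, symbol), ())}
        if not current:  # no run can continue, reject early
            return False
    return bool(current & set(accept_states))

# Toy NFA accepting binary strings that end in "01".
delta = {
    (0, "0"): {0, 1}, (0, "1"): {0},
    (1, "1"): {2},
}
assert nfa_accepts(delta, {0}, {2}, "10101")
assert not nfa_accepts(delta, {0}, {2}, "110")
```

Each symbol is processed in time proportional to the number of transitions, giving roughly (string length) × (automaton size) total; the conjecture asserts this is essentially the best possible.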
Bagging is an Optimal PAC Learner
Views: 338 • 9 months ago
Determining the optimal sample complexity of PAC learning in the realizable setting was a central open problem in learning theory for decades. Finally, the seminal work by Hanneke (2016) gave an algorithm with a provably optimal sample complexity. His algorithm is based on a careful and structured sub-sampling of the training data and then returning a majority vote among hypotheses trained on e...
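The bagging heuristic itself is simple to state; here is a generic sketch in Python (assuming NumPy arrays and any scikit-learn-style base learner with fit/predict and labels in {-1, +1}; this is classical bagging, not Hanneke's more structured sub-sampling):

```python
import numpy as np

def bagging_predict(base_learner, X_train, y_train, X_test,
                    n_voters=25, rng=np.random.default_rng(0)):
    """Train one hypothesis per bootstrap sample of the training data,
    then take a majority vote of their predictions on each test point."""
    n = len(X_train)
    votes = np.zeros(len(X_test))
    for _ in range(n_voters):
        idx = rng.integers(0, n, size=n)  # bootstrap: sample with replacement
        h = base_learner().fit(X_train[idx], y_train[idx])
        votes += h.predict(X_test)        # labels assumed in {-1, +1}
    return np.sign(votes)                 # majority vote (odd n_voters avoids ties)
```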
Machine Learning 59: Recurrent Neural Nets - Considerations
Views: 337 • 9 months ago
We conclude our discussion of Recurrent Neural Nets (RNNs) by discussing how to adapt them to different setups. We also discuss the choice of the hyperbolic tangent as activation function and argue why vanishing gradients are a problem for RNNs.
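A quick numeric illustration of the vanishing-gradient issue (illustrative parameters, not from the video): the Jacobian of h_T with respect to h_0 is a product of T factors diag(tanh'(z_t)) W, and since |tanh'| <= 1 its norm typically decays geometrically in T:

```python
import numpy as np

rng = np.random.default_rng(1)
d, T = 20, 50
W = rng.normal(scale=0.5 / np.sqrt(d), size=(d, d))  # recurrent weights
h = rng.normal(size=d)
J = np.eye(d)                       # Jacobian dh_T/dh_0, built step by step
for t in range(T):
    z = W @ h
    h = np.tanh(z)
    # Chain rule through one step: J <- diag(tanh'(z)) @ W @ J
    J = np.diag(1.0 - np.tanh(z) ** 2) @ W @ J
    if (t + 1) % 10 == 0:
        print(f"step {t+1:3d}: ||dh_T/dh_0|| = {np.linalg.norm(J):.3e}")
# The norm shrinks geometrically: gradient signal from distant steps vanishes.
```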
Machine Learning 58: Recurrent Neural Nets - Training
Views: 330 • 9 months ago
We continue our discussion of Recurrent Neural Nets (RNNs) and show how to use Stochastic Gradient Descent together with backpropagation to train them.
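As a concrete reference, here is a minimal sketch of one SGD step with backpropagation through time for a many-to-one vanilla RNN (illustrative architecture and names, not the exact network from the video):

```python
import numpy as np

def rnn_sgd_step(params, xs, y, lr=0.01):
    """One SGD step for a minimal many-to-one vanilla RNN:
    h_t = tanh(W h_{t-1} + U x_t), prediction v . h_T, squared loss.
    Gradients are computed by backpropagation through time (BPTT)."""
    W, U, v = params
    hs, zs = [np.zeros(W.shape[0])], []
    for x in xs:                              # forward pass, storing activations
        z = W @ hs[-1] + U @ x
        zs.append(z)
        hs.append(np.tanh(z))
    err = 2.0 * (v @ hs[-1] - y)              # dL/d(prediction)
    dv = err * hs[-1]
    dh = err * v                              # dL/dh_T
    dW, dU = np.zeros_like(W), np.zeros_like(U)
    for t in reversed(range(len(xs))):        # backward pass through time
        dz = dh * (1.0 - np.tanh(zs[t]) ** 2) # through the tanh nonlinearity
        dW += np.outer(dz, hs[t])             # hs[t] is h_{t-1}
        dU += np.outer(dz, xs[t])
        dh = W.T @ dz                         # send the gradient one step back
    return W - lr * dW, U - lr * dU, v - lr * dv

# Toy usage on one random sequence of five 3-dimensional inputs.
rng = np.random.default_rng(0)
d, k = 8, 3
params = (rng.normal(scale=0.3, size=(d, d)),  # W: recurrent weights
          rng.normal(scale=0.3, size=(d, k)),  # U: input weights
          rng.normal(scale=0.3, size=d))       # v: output weights
xs = [rng.normal(size=k) for _ in range(5)]
params = rnn_sgd_step(params, xs, y=1.0)
```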
Machine Learning 57: Recurrent Neural Nets - Sequence Modeling and Motivation
Views: 388 • 9 months ago
We motivate and introduce Recurrent Neural Nets (RNNs) for supervised machine learning on sequence data.
Super-Logarithmic Lower Bounds for Dynamic Graph Problems
Views: 155 • 10 months ago
In this video, I present a paper from FOCS'23 on proving lower bounds for dynamic graph problems. I mainly survey previous results and techniques for proving data structure lower bounds and argue why those techniques fail to address graph problems. I then present our new lower bounds and discuss the barriers we overcome to prove them.
Machine Learning 56: Cluster Analysis - K-Means++
Views: 248 • 10 months ago
We present the k-means++ extension to Lloyd's algorithm for k-means clustering. Along the way, we motivate the algorithm and conclude by discussing some theoretical guarantees it provides.
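The seeding step is short enough to sketch in full (illustrative Python; X is an n×d array of points). Each new center is drawn with probability proportional to the squared distance to the nearest center chosen so far, the D² sampling that yields the expected O(log k)-approximation guarantee of Arthur and Vassilvitskii:

```python
import numpy as np

def kmeans_pp_init(X, k, rng=np.random.default_rng(0)):
    """k-means++ seeding: first center uniform at random, each further
    center sampled proportionally to squared distance to the nearest
    already-chosen center."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        # Squared distance from every point to its nearest chosen center.
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)
```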
Machine Learning 55: Cluster Analysis - Lloyd's/K-Means Algorithm
Views: 560 • 10 months ago
We present the k-means clustering problem as well as Lloyd's algorithm, sometimes called the k-means algorithm, for heuristically computing a good clustering. We then discuss some shortcomings of Lloyd's algorithm that motivate the k-means++ algorithm that we cover in the following video.
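A minimal sketch of the alternating two-step structure (illustrative Python; `centers` would typically come from the k-means++ seeding sketched above):

```python
import numpy as np

def lloyds(X, centers, n_iters=100):
    """Lloyd's algorithm: alternately (1) assign each point to its nearest
    center and (2) move each center to the mean of its assigned points.
    Each step can only decrease the k-means cost, so the iteration
    converges, but only to a local optimum."""
    for _ in range(n_iters):
        # Assignment step: index of the closest center for every point.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Update step: recompute each center as the mean of its cluster.
        new_centers = np.array([X[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(len(centers))])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels
```

The output quality depends heavily on the initial centers, which is exactly the shortcoming the k-means++ seeding addresses.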
Machine Learning 54: Cluster Analysis - Motivation
Views: 208 • 10 months ago
In this video, we give a gentle introduction to the purpose of cluster analysis. This video also serves as a warm-up for the two following videos describing Lloyd's algorithm and the k-means algorithm for k-means clustering.
Anniversary of Computer Science at Aarhus University - Algorithms and Data Structures
Views: 541 • 1 year ago
A humorous historic account of some of the research (not all mine) carried out in the Algorithms and Data Structures group at Computer Science, Aarhus University (AU). Made on the occasion of the "5225" anniversary: 52 years as a research field at AU, 25 years as an independent department at AU. Thanks to Sebastian Krog Knudsen for editing!
Lower Bounds for Multiplication via Network Coding
Views: 185 • 1 year ago
Multiplication is one of the most fundamental computational problems, yet its true complexity remains elusive. The best known upper bound, by Harvey and van der Hoeven, shows that two n-bit numbers can be multiplied via a Boolean circuit of size O(n log n). In this video, we show that if a central conjecture in the area of network coding is true, then any constant degree Boolean circuit for multipl...
Machine Learning 53: Skip-Gram
Views: 3.2K • 1 year ago
We present Skip-Gram for representing words as vectors. We present it as an alternative to Continuous Bag of Words (CBOW) and discuss the few differences between the two.
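The difference to CBOW is easiest to see in how the training pairs are generated; a small illustrative sketch:

```python
def skipgram_pairs(tokens, window=2):
    """Skip-Gram training pairs: each center word predicts every context
    word within +/- `window` positions, the reverse of CBOW, where the
    whole context predicts the center word."""
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                yield center, tokens[j]

# e.g. list(skipgram_pairs("the cat sat on the mat".split(), window=1))
# -> [('the', 'cat'), ('cat', 'the'), ('cat', 'sat'), ...]
```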
Machine Learning 52: CBOW - Continuous Bag of Words
Views: 4.1K • 1 year ago
We present the Continuous Bag of Words, or CBOW, approach for learning representations of words to be used for machine learning with text data. We discuss how to construct a supervised learning problem from a collection of documents and how to extract word embeddings from a shallow neural network trained on the text data.
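Here is a minimal sketch of one training step of such a shallow network (illustrative Python with hypothetical names: E is the vocabulary-size × dimension embedding matrix whose rows become the word vectors, Wout is the output layer):

```python
import numpy as np

def cbow_step(E, Wout, context_ids, target_id, lr=0.05):
    """One SGD step of a minimal CBOW network: the hidden vector is the
    average of the context word embeddings, and a softmax over the
    vocabulary predicts the center word. After training, the rows of E
    are the word embeddings. (Assumes the context ids are distinct.)"""
    h = E[context_ids].mean(axis=0)               # average context embeddings
    scores = Wout @ h
    p = np.exp(scores - scores.max())
    p /= p.sum()                                  # softmax over the vocabulary
    p[target_id] -= 1.0                           # dL/dscores for cross-entropy
    dh = Wout.T @ p
    Wout -= lr * np.outer(p, h)
    E[context_ids] -= lr * dh / len(context_ids)  # gradient splits over context
    return E, Wout
```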
Machine Learning 51: Co-Occurrence Matrix
Views: 2.9K • 1 year ago
We present the notion of a co-occurrence matrix for representing text as vectors for machine learning. We discuss its advantages compared to bag of words and tf-idf representations. We also discuss how to reduce the number of features in the vector representations of different words by using an eigendecomposition, as in principal component analysis (PCA).
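A compact sketch of the whole pipeline (illustrative Python; `docs` is assumed to be a list of token lists):

```python
import numpy as np

def cooccurrence_embeddings(docs, window=2, dim=2):
    """Build a word-word co-occurrence matrix from tokenized documents,
    then keep the top eigenvector directions (PCA-style) to get
    low-dimensional word embeddings."""
    vocab = sorted({w for doc in docs for w in doc})
    idx = {w: i for i, w in enumerate(vocab)}
    C = np.zeros((len(vocab), len(vocab)))
    for doc in docs:
        for i, w in enumerate(doc):
            for j in range(max(0, i - window), min(len(doc), i + window + 1)):
                if j != i:
                    C[idx[w], idx[doc[j]]] += 1
    # C is symmetric, so an eigendecomposition plays the role of PCA here.
    eigvals, eigvecs = np.linalg.eigh(C)
    top = eigvecs[:, np.argsort(eigvals)[::-1][:dim]]  # top-`dim` directions
    return vocab, C @ top                              # project rows of C
```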
Machine Learning 50: Feature Hashing
Views: 3K • 1 year ago
Machine Learning 49: tf-idf, Term Frequency Inverse Document Frequency
Views: 1.1K • 1 year ago
Machine Learning 48: Bag of Words
Views: 548 • 1 year ago
Machine Learning 47: Random Projections
Views: 1.6K • 1 year ago
Machine Learning 46: Autoencoders
Views: 554 • 1 year ago
Machine Learning 45: Principal Component Analysis - Practical Considerations
Views: 433 • 1 year ago
Machine Learning 44: Principal Component Analysis - Minimizing Loss in Projection
Views: 661 • 1 year ago
Machine Learning 43: Principal Component Analysis - Maximizing Variance
Views: 698 • 1 year ago
Machine Learning 42: Principal Component Analysis - Motivation
Views: 541 • 1 year ago
Optimal Weak to Strong Learning
Views: 385 • 1 year ago
Machine Learning 41: Support Vector Machines - SMO Algorithm
Views: 3.2K • 1 year ago
Machine Learning 40: Support Vector Machines - Non Separable Data
Views: 565 • 1 year ago
Machine Learning 39: Support Vector Machines - Kernels
Views: 674 • 1 year ago
Machine Learning 38: Support Vector Machines - Dual Problem
Views: 1.4K • 1 year ago
Machine Learning 37: Support Vector Machines - Convex Optimization
Views: 1.1K • 1 year ago