Thanks for your talk! This is the first time I've seen linear types in a talk. Thanks for clarifying the difference between linear types and quantitative type theory (yes, I watched every Idris2 video I could find). Your example and the open questions in the field reminded me a lot of how things are done in the programming language Rust. Have you by any chance had a look at it? If so, how would you describe the gap between Rust (which is used in production) and the current state of the art in linear types research (which is what you are doing)?
I'm not the person you asked, but I'll give my two cents anyway. I would say the gap between Rust and "real" linear types is that in Rust you can't make a type that _must_ be handled. There are error types that the linter tells you to handle, but _Rust doesn't have a way to make a type that can't simply be thrown away._ There is the Drop trait, which makes a value follow a special procedure when it is thrown away, but that's different from not letting the value be discarded in the first place. For instance, say you had a structure that encapsulates an OS resource with multiple ways of being returned to the OS, and no good default that is correct in every situation. There's no way for Rust to stop you from either leaking this resource or letting the Drop happen naturally, which might lead to the wrong kind of cleanup. At least, not beyond asking the linter to shout at you and making Drop panic at runtime, and I'd say this *should* be something the compiler can yell at you for, not just the linter. As for more advanced linear-type things: I think you can combine dependent types with linear types, which would allow you to interleave proofs (such as a proof that memory is allocated, i.e. a guarantee that you won't segfault) with code. I believe that would give even better runtime performance, since many runtime checks (such as making sure an index is less than the length of a vector) could be done at compile time.
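To make that concrete, here is a minimal Rust sketch of the workaround described above (the names are illustrative, not from the talk): a handle with several explicit ways to be consumed, whose Drop panics if none of them ran. Note this is only a runtime check, not a compile-time one, and `std::mem::forget` can still leak the handle silently.

```rust
// A handle that *should* be linear: it must be consumed by exactly one
// of its explicit release methods. Rust can't enforce that statically,
// so we settle for a panic in Drop as a runtime approximation.
struct Resource {
    armed: bool, // true until one of the explicit finalizers runs
}

impl Resource {
    fn new() -> Self {
        Resource { armed: true }
    }

    // One of several "correct" ways to return the resource to the OS.
    fn release_gracefully(mut self) {
        self.armed = false;
        // ... the graceful OS call would go here ...
    }

    // A different, also valid, way to return it.
    #[allow(dead_code)]
    fn release_forcefully(mut self) {
        self.armed = false;
        // ... a different OS call would go here ...
    }
}

impl Drop for Resource {
    fn drop(&mut self) {
        // Runtime check only: the compiler will not stop a plain drop,
        // and std::mem::forget skips this entirely (a silent leak).
        if self.armed {
            panic!("Resource dropped without an explicit release");
        }
    }
}

fn main() {
    let r = Resource::new();
    r.release_gracefully(); // fine: explicitly consumed

    // let oops = Resource::new();
    // dropping `oops` at end of scope would panic at runtime
}
```

With a true linear type, the commented-out case would be a compile error rather than a panic.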
I think I am missing something here. You are splitting a string of length N into two parts of length N/2, then inserting a bar, resulting in a string of length N+1. So how can you possibly mutate the buffer in place, since it is too small? In the C example the buffer has length 32, so it happens to work for this specific case, but in general it won't, and would cause a buffer overrun. Or was it more of a theoretical example?
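My reading of the situation (a reconstruction, since I don't have the slide in front of me): the in-place update is only safe because the fixed 32-byte buffer has spare capacity beyond the string's length. A sketch in Rust of what the check would have to look like:

```rust
// Insert a '|' between the two halves of the first `len` bytes of a
// fixed 32-byte buffer, in place. This needs len + 1 bytes of room,
// so it can fail: returning None here is exactly the check whose
// absence in C would mean a buffer overrun.
fn insert_bar_in_place(buf: &mut [u8; 32], len: usize) -> Option<usize> {
    if len + 1 > buf.len() {
        return None; // no spare capacity: refuse rather than overrun
    }
    let mid = len / 2;
    // Shift the second half right by one byte to make room for '|'.
    buf.copy_within(mid..len, mid + 1);
    buf[mid] = b'|';
    Some(len + 1)
}

fn main() {
    let mut buf = [0u8; 32];
    let s = b"hello world!";
    buf[..s.len()].copy_from_slice(s);
    let new_len = insert_bar_in_place(&mut buf, s.len()).unwrap();
    println!("{}", std::str::from_utf8(&buf[..new_len]).unwrap());
}
```

So the example works for N up to 31, but a general-purpose version would need the capacity check (or a reallocation), which I assume is the point being questioned.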
Nice talk. Just curious as to whether you might have looked at the Formality language much (also Cedille / Juvix to some extent) and if you had any thoughts on it? It appears to be a reasonable attempt to build a performant proof language. I’d also be interested to hear what you think of their claim of “No Garbage Collection”. The language appears to be based on affine lambdas and I’m very keen to find out what the programming experience would be like writing large-ish programs using only those.
static reference counter? :)
Affine Lambdas + lazy copying during reduction