I showed your channel to my logic prof and he said “Mark Jago is legit.”
Haha, that made my day!
The fact that you sound Like David Tennent makes all of your videos seem like an Episode of Doctor Who. Much easier to learn this way
Thanks! I should do a video on philosophy of time - or "timey-wimey stuff" - soon
I hadn't seen proof trees before; they look super helpful! I like how they can help you build a countermodel.
Exactly! They're very close to the semantics, so great for building models. And for the same reason, completeness proofs are easy too: if you can't prove something, you can build a counter-model. So (contraposing): if it's valid, it's provable. That's completeness!
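To put the shape of the argument in one line (roughly): no proof of A => a finished open branch for ~A => a countermodel to A => A isn't valid. Contrapose that: A valid => A provable.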
@@AtticPhilosophy Great! I'll check out the other video about proof trees.
Very insightful. Thank you for the great content! Could you recommend some readings for getting a better grasp of the meaning of such systems?
Hi, a great book on modal logic (and lots more) is:
Graham Priest, An Introduction to Non-Classical Logic
Thank you!
Very good. I learned modal logic from the Carneades.org channel, but he never explained how to number all the possible worlds along the proof. Very interesting.
Now I can turn proofs into diagrams of possible worlds, and have a more visual sense of what is going on.
Thanks
Yes exactly! Finished open branches essentially build a possible worlds model.
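For example (my own quick sketch, not one from the video), testing whether ◇P entails □P:
◇P, 0
~□P, 0
◇~P, 0 (from ~□P, 0)
0 > 1
P, 1 (from ◇P, 0)
0 > 2
~P, 2 (from ◇~P, 0)
The branch is finished but open, so read off the countermodel: worlds 0, 1, 2 with 0 > 1 and 0 > 2, and P true at 1, false at 2. At world 0, ◇P is true and □P is false.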
It seems like you could show examples with more possible worlds and branching.
Are we already fiddling with the accessibility relation by adding the arrow 1 > 2? Isn't that from an axiom which allows you to introduce worlds from worlds?
Thanks for the great video! After practicing on some different examples I'm a bit confused.
I have a premise and the negation of the conclusion in the 'main' branch. Because of the rules, I have to split right away. I continue on the left branch and have to apply the diamond rule, so I introduce world 1. If I then apply a rule to the negation of the conclusion, which is still in world zero, does that automatically change to world one because I'm in the left branch? Or only if it starts with a box?
It's something like:
premise: (A v B),0
conclusion !(C-->D),0
Split A and B in left and right
Apply diamond rule 0>1
If I now add the conclusion to my left branch, will that automatically be in world 1?
Great question! The rules for and, or, -> and ~ don't change the world. So applying the negation rule to ~A, 0 will add new sentences to world 0. If you're applying that rule to a sentence *before* the tree splits, you should apply the results to *both* branches.
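For instance, a quick sketch (my own, filling in your example a little): suppose on the left branch you have A, 0 plus a diamond premise ◇A, 0, and the negated conclusion ~(C --> D), 0 from the trunk. The diamond rule applied to ◇A, 0 gives 0 > 1 and A, 1. The negated-conditional rule applied to ~(C --> D), 0 then gives C, 0 and ~D, 0, still in world 0, even though you write them below the 0 > 1 line. Only a box rule, used together with 0 > 1, would put anything into world 1.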
Does KT4 form a category?
3:47 This seems to assume that accessibility is serial. Otherwise, it could be the case that world n doesn't access any other worlds.
Is that the box rule? It doesn’t assume seriality, as the n > m line is a premise. So if there’s no n > m in the branch, you can’t infer anything from []A.
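To spell the rule out (just restating it): from []A, n and n > m on the branch, you may add A, m. If no n > m appears on the branch, the rule simply can’t be applied, so nothing follows from []A, n; that matches the semantics, since if n accesses no worlds, []A is vacuously true at n.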
@@AtticPhilosophy Oops, I misread and thought the n > m was after the vertical line
In terms of how the rules operate, it just seems to behave like predicate logic? How about this?
{1, w} 1. □(P --> Q) (A for CP)
{2, w} 2. □P (A for CP)
{1, w'} 3. P --> Q (1, □E)
{2, w'} 4. P (2, □E)
{1, 2, w'} 5. Q (3, 4, MP)
{1, 2, w} 6. □Q (5, □I)
{1, w} 7. □P --> □Q ((2), 6, CP)
{/, w} 8. □(P --> Q) --> (□P --> □Q) ((1), 7 CP)
Keeping track of the world in the set of dependencies, and □E and □I follow the same rules as universal elim/intro. How the worlds are tracked will probs need some work, but it seems like it should work w/o any difficulties?
Yes, you can do modal logic proofs like that, essentially using 1st order natural deduction. Since modalities are semantically first-order quantifiers over worlds, the usual quantifiers rules can be used. However, that’s a very semantic perspective (same goes for proof trees, really). You can do modal natural deduction *without* any mention of worlds, and that’s typically something proof theorists are happier about. Can you work out how to do it? Hint: You need a new type of assumption to deal with intro and elim for the Box.
@@AtticPhilosophy, hmm, using the dependency numbers is enough to keep track of things? We can basically assume for the purposes of the rules that we're working with arbitrary worlds that we can set to the same unless we run up against exceptions, namely if A is a wff and it has a dependency number specifically attached to it. The other exception would be pointing at ◇A within ◇E.
Edit: it would need to be any specific A within the set of dependencies. If I'd assumed P for the proof, I could have only concluded Q?
@@AtticPhilosophy, well that idea breaks with multiple quantifiers and it does weird things with passage rules, like ◇(P) --> Q does something weird when I try and bring Q within the scope of the ◇
New thought: I can't get rid of the worlds entirely, but I can make the relation part of the mix and, I think, I can keep chaining relations:
□(P --> Q) = Ax[ Rwx --> Ay[ (Py --> Qy) ]]
□□(P --> Q) = Ax [ Rwx --> Ay[ Rxy --> Az[ Pz --> Qz ]]], etc, if more □ even have meaning lol
www.umsu.de/trees/#~6x(Rwx~5~6y(Py~5Qy))|=~6x(Rwx~5~6y(Py))~5~6x(Rwx~5~6y(Qy))
www.umsu.de/trees/#~7x(Rwx~2~7y(Py))~5Q|=~6x(Rwx~5~6y(Py~5Q))
Sort of. You need a special kind of assumption, which blocks info from other assumptions getting through. The idea is, within the new ‘strict’ assumption, you work just with what you previously proved. Since that stuff is necessary, if you derive A in the strict assumption, you infer []A outside the assumption. Similarly, if you have []A outside (proved or assumed), you can add A inside the assumption.
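A rough sketch of what I mean, in that style (my own informal layout; the notation varies between presentations), proving [](P --> Q), []P |- []Q with no mention of worlds:
1. [](P --> Q) (premise)
2. []P (premise)
3. | open a strict subproof
4. | P --> Q (from 1, []-elim into the strict subproof)
5. | P (from 2, []-elim)
6. | Q (4, 5 MP)
7. []Q (3-6, close the strict subproof, []-intro)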
@@AtticPhilosophy, that's pretty much where I was trying to go with my first attempt. If the dependency number was pointing at □P, then it was within the scope of □, but I ran into some issues. If you have a link to something, or a name I can search for more info, that would be awesome. Most resources I've come across seem focused on truth trees