AI & Logical Induction - Computerphile
- Published 27 Dec 2024
- Continuing to address the challenges of AI safety, Rob Miles discusses a paper from the Machine Intelligence Research Institute (MIRI).
Read the paper for yourself here: bit.ly/LogicalI...
More from Rob Miles: bit.ly/Rob_Mile...
/ computerphile
/ computer_phile
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: bit.ly/nottsco...
Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com
Definitely check out the paper for this one (link in the description). Even in a video this long there's so much cool stuff in there that we didn't have time for! I might make a more technical follow-up video myself if people want that
Absolutely
That would be great!
Please make a followup video, this is a really interesting subject
I am really looking forward to extra bits on this paper on your channel.
You need to eat more and do some exercise to gain some muscle.
Then people say philosophy is useless... this is pure formal epistemology, and it really shows in the references section of the paper BTW (Carnap, Priest, Hintikka, etc...)
Yes, Kant talks about the edges of computability. Except the philosophers of today can't compete or converse on the mathematical level. Which is to say philosophy isn't useless, but philosophers are... Lolz.
@@DJjakedrake how did you come to be so confident in such a false statement? Burnt out from that Williamson?
@@DJjakedrake we tend to be specialists nowadays. In the past, people could be mathematicians, philosophers and even artists all at the same time.
@@Icthi Calm down. It's a "Truthism".
@@w花band in the future we will be able to be none of those.
This video explained so much more than the title promised it would.
I just learned a lot. Thank you!
This is what’s been missing from recent computerphile videos. Rob!
He's great, and looks like a real-life Alex Kidd, too.
A young Terence McKenna...
Although I think this is the worst he's ever done, you're still right ;-)
The guy writes on toilet paper sheets and all.. but then he rolls the dice, and out of nowhere, the dice turns green! We can see its trajectory in slow-mo UNDER THE CUP! Blew my mind! Direct thumbs up!
>"spherical chickens in a vacuum"
I always heard this expressed as "spherical cows on a frictionless surface".
I heard it as an anecdote about physics.
A rich man came to a biologist, a statistician and a physicist and asked them to predict the outcome of a horse race. The biologist looked at the body structure and physical health of the horses and named the probable winners. The statistician looked at the outcomes of past races and named the probable winners. Then came the physicist's turn. He was still busily writing and calculating. The rich man got impatient and asked what he was doing. The physicist answered, "I am working on a model of spherical horses in a vacuum"... :)
I know it as "spherical cow in a vacuum"
It turns out that physicists don't like being placed on a frictionless surface in a vacuum.
Get this genius a glass of water when you interview him next.
or make him a cup of tea.
@Tomasz
and of course make sure, if at all possible, that it's made by a safe AGI agent. And probably with no vases or children around, just for extra caution
He did, you can see it 14 seconds in
P.S.A.: There's an abridged version (from 131 pages down to 20) of the paper on logical induction. The link to it is given in the original article (see Description)
I love this guy. More of him please. Computerphile used to do videos on real-world stuff like cross-site scripting (Tom Scott is dope) and more feet-on-the-ground, real-world programming things, not so ethereal. This guy is more in the weeds, which I like.
Great video.
One small caveat: the agents must be risk neutral and have a discount factor equal to one, for the conclusions in the video to be right (otherwise, for instance with risk-averse rational agents with a discount factor smaller than one, a 50% bet would be traded at LESS than 0.5, and vice-versa).
Really love all Rob Miles' videos!
Wouldn't such agents go bankrupt in the limit, though?
@@Ockerlord I don't think so, but they would progress rather slowly compared to other agents.
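To make that caveat concrete, here's a minimal sketch (my own toy example, not from the paper: it assumes zero initial wealth and a hypothetical square-root utility) of how a risk-averse agent ends up pricing the same 50% bet below $0.50:

import math

def certainty_equivalent(p_win, payout, utility, inverse_utility):
    # Highest price the agent would pay: the sure amount whose utility equals
    # the expected utility of the bet (assuming zero initial wealth).
    expected_utility = p_win * utility(payout) + (1 - p_win) * utility(0.0)
    return inverse_utility(expected_utility)

# Risk-neutral agent (linear utility): a 50% bet on $1 is worth its expected value, $0.50.
print(certainty_equivalent(0.5, 1.0, lambda x: x, lambda u: u))      # 0.5
# Risk-averse agent (concave sqrt utility): the same bet is only worth $0.25.
print(certainty_equivalent(0.5, 1.0, math.sqrt, lambda u: u ** 2))   # 0.25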
Computerphile needs infinitely more theory/math videos.
That is what the Numberphile channel is for
This is a topic I've thought about for a long time. I'm excited to learn the theory.
Sounds like you need a device to throw ink at the page, to bypass friction with the delivery device. ink-jet-pen?
Was going to suggest a laser-pointer-based pen, but then checked up on how laser printers actually work again.. turns out the laser basically draws the image by discharging the drum's electrostatic charge, so the toner powder only sticks to the parts that should be printed, and the heat that fuses the powder to the paper comes from a separate fuser. TIL. Probably (I'll likely forget again soon)
"we're not going to get too far into it" (looks at the video length) -rrr-right
The paper is 131 pages
@@RobertMilesAI wow! I will check out the paper. Btw, I'm shaking right now :D I read superintelligence upon your advice and watched all your videos! Thank you, you're awesome!
FYI here's a full lecture where one of the co-authors of the paper talks about it in more depth: watch?v=UOddW4cXS5Y
It's a great talk, I highly recommend watching it before trying to read the paper, which is quite technical.
It is now my life's goal to qualify all of my initial thoughts on solving a problem as "in a 'spherical chickens in a vacuum' sort of way"
So what you're saying is now's a good time to buy Bitcoin?
No that's just the induction hypothesis
It was when you wrote that comment
Can't disprove that
always buy. trust me.
You should be doing arbitrage between all alt coins, silly.
Rob Miles! My favorite presenter.
Little correction: The price of futures does not actually depend on the expected future price. It is only a function of the current price and interest rates. That is the case because the predicted future price of the good is already reflected in the current price. If you would predict the price to go up in the future, you could also buy the good now and sell it in the future. By "no arbitrage" assumption, the expected value of doing this and selling a futures contract must be the same. As such the price in a futures contract will just be the current price plus interest for the time period.
Doesn't that assume completely durable goods? Buying strawberries could be different than strawberry futures, because that option for arbitrage wouldn't be available.
Edit: not trying to dispute you, just asking a question
@@toast_recon The future contract is durable - that you'd buy the current contract and then sell it later. Of course, real strawberries do begin to decline immediately but they were already purchased months or years ago in a futures market.
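For concreteness, the cost-of-carry formula the correction above is describing, as a minimal sketch (it assumes a storable, non-perishable asset with no storage costs or dividends, which is exactly the assumption that breaks for real strawberries):

import math

def no_arbitrage_futures_price(spot, risk_free_rate, years_to_delivery):
    # Buy the good now at the spot price, finance it at the risk-free rate,
    # and deliver it at maturity; any other futures price admits arbitrage.
    return spot * math.exp(risk_free_rate * years_to_delivery)

# A good at $2.00 spot with 5% rates: the one-year futures price should sit near
# $2.10, regardless of anyone's forecast of where the spot price will actually go.
print(round(no_arbitrage_futures_price(2.00, 0.05, 1.0), 2))   # 2.1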
17:50 According to Gödel there might be statements which can't be proved or disproved at all within a given formal system. An example could be Riemann's hypothesis about the roots of his zeta function. So the confidence value would be 1 - but w/o provability.
Probably one of my favorite computerfile videos.
When you say probability theory doesn't include a framework for including beliefs, that may be true for traditional frequentist probability theory, but it is absolutely a big part of Bayesian probability theory.
For a *really* good treatment of Bayesian probability theory I'd highly recommend Jaynes' book "Probability Theory: The Logic of Science". He makes a big point of pointing out that probabilities should be treated as degrees of belief which absolutely depend on a person's knowledge and he lays out all the mathematics needed for "updating one's belief" when you get more information or discover something by analyzing it (like in your square root example). This rule is simply Bayes theorem.
I think you misinterpreted him. He wasn’t saying that probability theory doesn’t describe how to update one’s subjective probability based on new evidence (rather, he says the opposite, that it does. He is talking about Bayesian probability.), but that it doesn’t describe how to update one’s subjective probability over time based purely on one taking more time to reason out the logical implications of the things one already knows (or already thinks likely).
He says that most probability theory assumes “logical omniscience”. E.g. if X and Y are two statements that turn out to be logically equivalent, standard probability theory requires that P(X)=P(Y), but determining if two statements are equivalent takes time and computation, possibly very large amounts of it.
And if you haven’t had time to check yet, then it seems like your probabilities for X and Y have to have the potential to be different, even though X and Y might turn out to be logically equivalent.
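A toy sketch of that point (my own example, not the paper's algorithm; the statements and the initial numbers are made up): a bounded reasoner can assign different probabilities to two logically equivalent statements right up until it spends the compute to resolve them.

import math

# X and Y are logically equivalent statements about N = 2**31 - 1, but a reasoner
# that hasn't done the arithmetic yet may price them differently.
N = 2**31 - 1
beliefs = {
    "X: N is prime": 0.5,                                # initial guesses, deliberately
    "Y: N has no divisor d with 1 < d <= sqrt(N)": 0.7,  # inconsistent with each other
}
print(beliefs)

def has_small_divisor(n):
    # Trial division is enough at this size; it just takes some work, which is the point.
    return any(n % d == 0 for d in range(2, math.isqrt(n) + 1))

is_prime = not has_small_divisor(N)   # True: 2**31 - 1 is a Mersenne prime
for statement in beliefs:
    beliefs[statement] = 1.0 if is_prime else 0.0
print(beliefs)   # consistent now, as logical omniscience would have required from the start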
The graphics were cool in this video
One way or another, I don't think the predicted jet fuel prices from last year are holding up right now ;)
Also, great video!
Miles is so smart, it literally hurts me to listen with full attention for too long at a time.
What's on that bookcase?
Sapiens - Yuval Noah Harari
Soonish - Zach Weinersmith
Run Program - Scott Meyer
Very, very interesting, great video! It is pure gold. Since the term is dropped multiple times: rational choice theory has its limits. Individually rational actions can, in aggregate, lead to irrational outcomes.
25:47 I'd point out that by "loads of money" you mean infinite money (as time approaches infinity). You can in fact make loads of money for very high but finite values of "loads", because this efficient trader is very slow. It does a lot of things "in a timely manner" based on a definition in the paper, but the definition of "timely manner" is not very timely.
For example, you could make a lot of money by buying a lot of "The thousandth digit of pi is 9" and selling a lot of shares for every other digit. You couldn't get *unboundedly* high amounts of money because the inductor would eventually learn that the thousandth digit is 9, but you could probably get a lot in the meantime because until it ends up figuring out you're right it'll value all of those at $0.10.
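A back-of-the-envelope version of that trade (illustrative numbers only, not the paper's trader formalism; it assumes every digit-claim is priced at $0.10 and that you have already computed the digit yourself):

def digit_trade_profit(true_digit, shares=100):
    # Market prices for the ten claims "the thousandth digit of pi is d".
    prices = {d: 0.10 for d in range(10)}
    profit = 0.0
    for d, price in prices.items():
        settles_at = 1.0 if d == true_digit else 0.0
        position = shares if d == true_digit else -shares   # long the truth, short the rest
        profit += position * (settles_at - price)
    return profit

print(digit_trade_profit(true_digit=9))   # 180.0 per 100-share block: large, but bounded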
The Dutch invented shipping insurance. It's a Dutch Book because if you set the numbers right the sponsor makes a profit regardless - either the trades work or they claim it from the insurer. The original was 'I always win', not 'I always lose', but in probability that's kind of the same concept, just flipped.
This is exactly why some Bayesians say that no probability is without prior. The only way to deal with probabilities on this sort of level is to be very explicit about what you know and don't know when you model your current state of knowledge.
Super interesting from a mathematical and computer scientist's point of view.
A problem I already see is that with those contracts, people blow up a bubble of unrealistic prices for real goods by gambling. That leads to increasing prices for food and other basic resources, so that people who already have hardly enough to live on (in developing countries, for example) have to pay even more just to survive.
7:53 Time
10:10 Seeing part of the process eliminates some wrong answers.
17:20 Well-Calibrated
19:02 Prediction Markets
One technical correction. "Most" futures contracts (as in most kinds, not most contracts) actually settle. If you forget to close out your contract (purchase an offsetting contract), you could find yourself the proud owner of a tank car of orange juice. Joy! Now what?* Of course, the way out is to close out your contract before the settlement date.
* That's basically the evidence the SEC (CFTC?) used in their case against the Hunt brothers in 1980 -- they held their contracts (LOTS of them) to maturity.
I literally cannot imagine that the computers we use are the result of countless transitions, and that one of those transitions is just pure mathematics.
This is so interesting, wish the video was a bit longer. I think he was about to talk about how the algorithm reacts to paradoxes
I have a problem with the probability relationships as presented, compare 13:30:
P(A), P(B), and P(A and B)
What's hilarious about this is that instantly being able to process the stuff around you is the semi-possible superpower of Sherlock Holmes.
There are not enough serious videos about computer intelligence
*Bringing the 'wisdom of markets' into AI research (and codifying it) intuitively seems like something that will be a game changer. It just makes sense, as it's how the world (and many natural systems within it) work.*
You're sort of looking at it backwards. Mathematics informs the structure and implementation of markets, as well as models of how they behave in the real world. Mathematics also informs computery stuff in exactly the same way. It's like saying "bringing Maxwell's equations into the structure of computers, the way it structures everything else, might just be a game changer." And what do you think people have been doing all along?
Look to the natural world instead. Things like evolutionary algorithms. Will they, in the end, be all that useful? I don't know but you'd be adding something to the algorithmist's toolbox that isn't already there.
What a legend! He has Soonish on his bookshelf! AI isn't my area but this paper looks really interesting. Thanks for bringing it to my attention CP.
Just wanna say thanks for your videos, always look forward to them!
Dutch was the term used for most Germanic communities 100-300 years ago in the USA. This is still the case with the Pennsylvania Dutch, who in turn refer to all that is outside their community as "English".
So, would the AI become a Laplace's demon if you let it know more?
Going into 2021, where are we at with this?
Very difficult topic to explain. Well done!
20:00 I now understand why futures markets are a thing.
Mathematician vs doubling cube: "Oh, powers of two".
Very happy to find this video!
Amazing channel! Keep up the great work!
Had a brain freeze watching this. At one point my brain slammed the door and shouted through the letter box, come back Tomorrow.
As a fan of Mass Effect, this explanation reminded me a LOT of how the Geth build a "Consensus" among them, and the more there are that communicate, the better they work.
Let's just hope we don't create something exactly similar and they exile us from the planet :D
Ngl, getting exiled is the fifth-best-case scenario, and probably won't happen
Thanks Rob I enjoy listening to your explanations
Is there an implementation of the explained algorithm?
The non-dogmatism property is interesting; nice word to use instead of agnosticism.
class Task extends Goal {
// have the robot inherit your overall goals, so that it doesn't obstruct those while pursuing a specific task
}
Maxwell Jann // TODO: make AI a human
Where is the video where he talks about "should" and "is" as ways to describe what is and what we want - he was talking about always needing two "should"s in order to express a wish, in order to state a logical construct for coming up with that wish!?? Would really like to watch that video again. It got stuck with me.
27 quality minutes!
SUMMARY OF THE PAPER courtesy of CHATGPT
Here are some notes on the "Logical Induction" paper:
Introduction: The paper introduces the concept of Logical Induction as a new framework for reasoning about uncertain or unknown information.
Background: The authors provide a brief overview of traditional induction methods based on probabilities and their limitations.
Logical Induction Framework: The framework is described in detail, including its mathematical foundations and the rules for making predictions based on past observations.
Advantages of Logical Induction: The authors argue that Logical Induction provides several advantages over traditional induction methods, including greater accuracy and flexibility.
Applications: The authors discuss potential applications of Logical Induction in fields such as machine learning and artificial intelligence.
Conclusion: The paper concludes by summarizing the benefits of Logical Induction and its potential for advancing the field of reasoning about uncertain information.
---------------
HOW IS THE LOGICAL INDUCTION FRAMEWORK IN THE PAPER DIFFERENT FROM TRADITIONAL LOGICAL INDUCTION
---------------
The Logical Induction framework proposed in the paper is different from traditional logical induction in several ways.
Integration of Induction and Deduction: Traditional logical induction is based solely on logical deduction, while the Logical Induction framework proposed in the paper combines elements of both induction and deduction.
Mathematical Rules: The Logical Induction framework is based on a set of mathematical rules, which define how to make predictions based on past observations. This is different from traditional logical induction, which relies on formal logic and inference rules.
Flexibility: The Logical Induction framework is more flexible than traditional logical induction, as it allows for the incorporation of uncertain or unknown information into predictions.
Accuracy: The authors argue that Logical Induction provides a more accurate approach to induction compared to traditional methods, as it takes into account both the logical relationships between variables and the empirical evidence from past observations.
Overall, the Logical Induction framework in the paper represents a departure from traditional logical induction, offering a more comprehensive and flexible approach to reasoning about uncertain information.
Can you do a video on honeypots, honeynets, and honeyfarms?
Just a question: why do they always write on that dot-matrix paper with the green lines on it? I've never seen that paper before.
Now I am a few percent less confident that artificial intelligence will necessarily be rational and predictable.
Isn't converging on a probability of 0 or 1 in contradiction to nondogmatism as it is essentially an approximation? To make an estimation of what an estimation is would seem to indicate an inaccuracy rather than an imprecision or a problem with applicability not application.
If you can't use 1 or 0 probability, then it's ensuring the logic will fail because of the Halting Problem, instead of having limiting "Renormalization" boundaries, (I guess that's the logical objective of calculating probabilities in the first place). Very interesting discussion, thank you.
This was really well explained!
Do I understand correctly? The minds which are thinking towards AGI safety engineering are using the paradigm of neoliberal commodity markets as the mathematical instrument for 'value alignment' in a formal system of reason, i.e. a Google AdWords algorithm trading cheap reasons instead of cheap adverts. What are the other predicates of this system that aren't philosophically grounded in classical economic theory?
Does having a bunch of algorithms trading their predictions on a market have any better or worse or different consequences than doing Bayesian inference on their validity?
what does this say about the nature of uncertainty, i mean is it an artifact of our ignorance, or is there something inherent in the physical processes that makes it so?
It's the first one
I'm skeptical this approach works well in the presence of randomness. Take a problem where a nondeterministic Turing machine is much faster than a regular one. Now consider betting on such a problem in the presence of randomness. In such a case you can't have a condition about the market not being beatable, since someone could always get lucky and verify the answer.
22:00 Almost made me choke from laughing while eating...
I love Rob's tangents!
23:24 Super cool if every trader affected the system equally. In reality, the ones who bet more affect the price more, and if their prediction is off, then humanity's prediction is off.
Anyway, the video was great, thanks :)
Depends on your timeline length. Over a 30 year length, I'm not sure if that's true.
Why isn't there a transcriber?
I've never forgotten about this guy since the "difference between a difficult problem and a very difficult problem...".
Perhaps the last criterion, that things which cannot be proven do not take values of 0 or 1, is not that obvious. It's my understanding that Goedel's incompleteness theorem implies that there exist some statements that cannot be proven to be true/false.
It seems to me like this supports the criterion, because if something is not provable then you don't know whether it is true or false, and therefore cannot assign definite 0/1 values to the statement.
It is controversial, because there are some meta-statements about logic itself that are unprovable. For example, you can't prove that memory is valid and the past exists. Or that logical deduction is actually logically valid. So even statements like "1=1" should have probability less than 1.
"Goedel's incompleteness theorem implies that there exist some statements that cannot be proven to be true/false."
true
"this supports the criteria because if something is not provable then you don't know"
false,
because anything that is not-false is true, but that does not mean it is useful or meaningful.
Great video!
Make a video on how to think in recursion
What about undecidability? I assume an AGI would be Turing Complete. There is in general no decision procedure for a Turing Complete system's operation, so all the talk about proving theorems about an AGI's behavior seems vacuous.
Just because something is Turing complete doesn't mean you can't prove theorems about it. (If it did, then we wouldn't be able to prove that anything was Turing complete, now would we? :P)
There is no general decision procedure which will always tell us whether a program halts. This is true.
However, this does not mean that it is impossible to make a program which takes as input a program, and either correctly says that the input program halts, correctly says that it doesn't halt, or says "I 'unno". It is possible to make programs that do this.
Also, it is a little unclear what you mean when you describe the AI as "being Turing complete". The AI is not a programming language. It doesn't take as input a source code and run it. Now, yes, the AI could simulate a Turing machine, just as you or I could with pencil and paper.
Basically, that the halting problem is undecidable implies "you can't get an answer to *all* questions of this particular form", it doesn't mean "you can't get an answer to *any* question of this particular form" (for some particular form).
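A minimal sketch of that middle point (the generator-based program encoding here is my own toy setup, not anything standard): a halting checker that is allowed to answer "unknown" can be sound without contradicting the halting problem.

def partial_halting_checker(make_program, step_budget=10_000):
    # Programs are represented as generators that yield their complete state once per
    # step. Returns "halts", "loops", or "unknown" - never a wrong answer.
    program, seen = make_program(), set()
    for _ in range(step_budget):
        try:
            state = next(program)
        except StopIteration:
            return "halts"
        if state in seen:
            return "loops"    # a deterministic program that revisits a state never terminates
        seen.add(state)
    return "unknown"          # budget exhausted: refuse to guess rather than answer wrongly

def counts_down():            # halts after five steps
    n = 5
    while n > 0:
        yield n
        n -= 1

def ping_pongs_forever():     # cycles through the same two states forever
    n = 0
    while True:
        yield n
        n = 1 - n

def counts_up_forever():      # never halts, but also never repeats a state
    n = 0
    while True:
        yield n
        n += 1

for prog in (counts_down, ping_pongs_forever, counts_up_forever):
    print(prog.__name__, "->", partial_halting_checker(prog))
# counts_down -> halts, ping_pongs_forever -> loops, counts_up_forever -> unknown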
and that's why u need to remember... remember as much as u can anyway
But which number was under the cup?
Maybe I’ll try to peek at that paper (my old brain is hurting already). But it seems to me that formal specifications and rational choice theory are good but limited; they only work to the extent of one’s understanding of the universe of discourse (even if you were logically omniscient). But it seems like the unaccounted for possibilities are a pretty serious concern in these cases - I guess I’m just restating the basic problem of induction. Which isn’t to say that it’s not useful to try and optimize what can be done re. what we do know and the limited resources available to process that knowledge (I am a card carrying Bayesian) just that a dose of humility re. any conclusions seems useful.
I guess logical induction must be perfected before they continue developing self-driving cars. There are so many variables, and so little time in which it will have to do the computations.
Where can I ask questions??
At school
Lol
1 and 0 are not really probabilities: when you try to convert them to fractions you get 1/infinity or 1/1. True certainty (or 1 in this case) isn't really a prediction or a relation to the real world any more, because there's literally no possible observation you could make which could change it, not even direct observation.
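One standard way to make that precise (a log-odds framing, which may or may not be what the comment above means by fractions): Bayesian evidence adds up on the log-odds scale, and on that scale probabilities 0 and 1 sit infinitely far away, so no finite amount of evidence ever reaches them.

import math

def log_odds(p):
    return math.log(p / (1 - p))

for p in (0.5, 0.9, 0.99, 0.999999):
    print(p, round(log_odds(p), 2))   # 0.0, 2.2, 4.6, 13.82
# log_odds(1.0) raises ZeroDivisionError: certainty isn't a point on the evidence scale at all.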
Could this algorithm apply to algorithmic stable coins?
As a framework for producing safe AI (and I do not know how to code this) how about making the primary goal of the AI "minimize the impact of actions on the world in the process of achieving a task." This might make its utility more difficult to realize, but it should be safer. By "impact" I mean an increase of entropy. A human body is more organized than a smear on the floor (stepping on a baby to make tea). Converting a human body into crystalline forms of its constituents produces more entropy in the form of heat than leaving it alone.
Yeah, people have looked at these kind of information-theoretic impact metrics for AI Safety. I made a video about it a while ago. I think links don't work well in comments but just go to my channel and/or search for "Avoiding Negative Side Effects: Concrete Problems in AI Safety part 1" and "Empowerment: Concrete Problems in AI Safety part 2"
Thank you.
Love you Rob!
What if your agent has the ability to spend enough time to become so omniscient that it can simulate the whole universe, play it backwards and figure out that the die was actually a 2? Is that agent then omniscient?
What if WE are in a simulation conducted by an omniscient AI that's trying to determine which poker card its opponent has, and we'll all get shutdown at the very instant the AI has its answer? We only exist to help that AI win five bucks :'(
Is there any work going into preventing deliberately unsafe general AI being created? / Is there reason to believe there is such a threat?
Assuming currently public AIs are an indication of the current state of cutting-edge AI research, we are nowhere near general AI at the moment.
I love how Rob explains these papers.
in the part about the gathering evidence and accumulating enough beliefs to narrow your probability down, I actually sort of see it like a -- imagine a hologram to be your understanding of the thing. you see the hologram by perceiving many images of it from many different angles, each helping you to formulate a better understanding of what is being shown (for me, probabilities are not really like numbers, but more like blurry images that slowly make more and more sense with time), so even if you perceive it from all angles, you can never really "see" the hologram for what it really is. the more angles I see of it, it'll create a sort of meta-logical understanding of what it really is which I can now apply to things of similar nature.
so, how do you notice things of similar nature? well, I guess you could recognise aspects of details of the thing, but I personally "feel" it, and then the similar aspects start to show themselves.
what I'm trying to say is, while watching this video, I'm realising that I perceive things a bit backwards. like he said, the probability theory assumes logical omniscience and so therefore, if the pattern is not understood, it cannot be recognised. the brilliance to my approach is actually the assumption that never is it possible to perceive the whole thing at once (omniscience), and so therefore a gradually sharper understanding of what's happening in the image, allows for that continual revelation of what the probability really is, that logical omniscience assumes.
So, like, hindsight.
This reminds me of TRON where the various programs compete against each other to see which one is fittest or something.
I suppose I ought to actually watch a Rob video before I hit like, but I never do.
Try wearing Bluetooth headphones at max volume and listening to the ending without going deaf
If B occurring increases the chance of A occurring, then isn't it the case that the probability of A and B occurring is greater than that of just A occurring?
Yes, but read the fine-print. The rule applies for INDEPENDENT occurrences. Robert even mentions that when stating the rule.
It may be that P(A|B) > P(A), but even if this is the case, P(A&B) ≤ P(A) still holds.
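Spelled out in the same notation (a one-line derivation, nothing beyond standard probability): P(A) = P(A&B) + P(A & not-B) >= P(A&B), so the conjunction can never be more probable than A alone, no matter how much B raises the probability of A.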
The reason the English language is so hard on the Dutch is that the Dutch were the great advanced nation - then the English eventually caught up and overtook them. Hence the derision for the Dutch in the English language.
this method will work for 10+ years, then explode into something out of control.
That was my problem with probability in school. When we were taught that the probability of each outcome of a coin flip is 1/2, I argued that, given the physical laws, we could calculate with certainty which side it is going to fall on. I think my teacher didn't understand what exactly I meant, so instead of arguing I accepted his answer that a coin has two sides and only one will be pointing upwards at the end, hence 1/2. I still don't understand why such experiments are considered random. I mean, what is true randomness? Isn't there always an actual reason that things are as they are?
There is a kind of "true randomness" in quantum physics, but it's all what's called "indexical uncertainty"; ie. if I make an exact copy of you then you don't know whether you're you or the copy, not because you're not able to calculate it all the way through in detail but because there's genuinely two people that both qualify as "you", so you don't know which of your two brains is actually asking the question, ie. whether your index is 1 or 2, until I break the symmetry by telling you.
But most uncertainty is indeed "logical uncertainty" in the sense of this paper, where there is a true answer but you simply don't have the mental capacity to precisely arrive at it in an acceptable timespan.
I guess as you said true randomness exists only in quantum physics. But I don't see how logical uncertainty is random? Me not being able to calculate the outcome of an event shouldn't mean that the event is random. If I place a sheet of paper on one side no one is going to argue that the probability of paper being placed on one side is 50%. Isn't a coin toss the same thing? The only difference is that with paper it is easy for me to foresee how paper is going to be placed. Just because I cannot do the same for the coin doesn't mean that it is random.
@@uzeyirveli One of the research fellows at the institute that published the logical induction paper has written an essay on this topic called Probability Is in the Mind, might be worth reading.
Theoretically you could know it, but practically you don't. Calculations are based on knowledge you have, not on knowledge you could theoretically have. Therefore the coin flip is considered random for the purpose of calculating its outcome.
The thing is that the term "random" has several conflicting meanings. You can also view randomness as a relation of knowledge between you and the universe, where "random" is an outcome it's impossible for you to foresee or control. In this case, since there is no useful correlation between any part of your environment and the outcome of the coin toss, which is to say the actual interactions driving it are so complex as to approximate arbitrary (read: pseudorandom), the closest useful approximation you can work with is a representation of the world that models the outcomes as random.
It's not like "the coin toss has many paths, and about half of them based on an equal distribution of initial conditions land heads and the other land tails" is _wrong._ The coin can be modelled as a physical system with a precise outcome, yes, but it can also be modelled as a physical system with two random outcomes, and that model is generally more useful for non-omniscients. Models are always representations of reality, and the random 50/50 model represents a true fact about coins. As long as you cannot split the set of coinflips into head-flips and tail-flips, it's the most information about coinpaths you can practically use.
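A tiny sketch of that modelling point (toy physics, obviously: the flip-speed-to-outcome map below is made up): the coin can be a fully deterministic function of its initial conditions and still be best modelled as 50/50 by anyone who can't measure those conditions precisely enough.

import random

def deterministic_coin(flip_speed):
    # Fully determined by the initial condition, but extremely sensitive to it:
    # a change of 0.001 in flip speed flips the outcome.
    return "heads" if int(flip_speed * 1000) % 2 == 0 else "tails"

# An observer who only knows the flip speed to within +/- 0.5 gets no usable signal,
# so the 50/50 model is the best model available to them.
flips = [deterministic_coin(random.uniform(2.0, 3.0)) for _ in range(100_000)]
print(flips.count("heads") / len(flips))   # ~0.5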
The paper uses the qualifier "efficiently computable" so often that it doesn't really prove a whole lot if the class of efficiently computable trading strategies is too small. Also, the enumeration technique it uses is completely impractical.
I thought it just used "efficiently computable" to mean in polynomial time?
But yes, it isn't meant to be practically useful to run; it is meant to be *an* algorithm which performs the desired task, to show that there is any such algorithm.
Knowing that there is such an algorithm (even if the only one we know of is very slow), helps make us less confused about how uncertainty about logical statements works, and lets us reason about that idea better.
@@drdca8263 I don't remember entirely the context, but I think that wasn't my point. I am aware of the fact that "efficiently computable" generally means polynomial time. I probably meant that for a problem class where it is not known in general what polynomial-time algorithms are capable of (in this context that hasn't really been established), it is of limited value to prove results about them. It is actually a fairly common thing in math and computer science to prove things about trivial or empty sets without realising it at the time. Theoretically this is a fine paper, from what I remember of it. I think for further work on the subject it has a lot of useful insight. My complaint was in effect that none of it is directly applicable in practice without considerable additional theoretical effort. Again, not bad. It just makes me less than excited about the paper.
Amazing! Love it! This concept is just super duper cool; I'm so excited to read the paper now :3
Oooh, I see that copy of Iain M Banks' "Consider Phlebas" on the shelf... nice.
Come for the computer science, stay for the English-Dutch relations.