I'm reminded of the old joke: If you ever find yourself the target of a mugging, simply say "no". The mugger actually can't legally take your stuff without your consent. I'm fairly sure even if you get somebody to sign that that windfall clause, if they DO succeed in AGI they'll weasel out of paying in less time than it took you to explain what the windfall clause was.
'legally binding' requires there is some government or force that can force you to comply through physical or financial pain. If you have an AGI that has already amassed GWP level wealth, it will not be susceptible to those forces. It will be able to create a way to mitigate them. Maybe this is solved because we are assuming the safety problem is solved, but it seems we still have work to do on the idea of the Windfall Clause.
@@biobear01 i come from germany and here lives the king of thailand, former prince of thailand. when he became king he was supposed to pay 3 billion€ in taxes because he essentially obtained an entire country and lives in germany. he did not pay a single cent. when that kind of money is at play, the wheels turn differently
@@MsMotron The first company to make ASI will not bother making any money. Why bother selling products/services when you could just wish anything you want into existence?
Global GWP is 142 Trillion, EU, and US GDP is around 18-20 Trillion each, so around 26% of world production. While EU and US do not have direct control over that money, they do control monetary policy, patent policy, and the law. - If your company says "screw you" to either of those two powers, suddenly your AI patents are made invalid, your corporate offices are raided, and your executive board is all put on their Sanction list, along with their families. Facebook, Amazon, and Google are all worried about Antitrust legislation from the left at the moment, expect more of that as companies grow bigger.
"Taxes aren't voluntary, you can make companies pay them" Any company responsible for 1% of the GWP will almost certainly have armies of lobbyists (or simply buy elections/government leaders) to keep their taxes as low as possible. Major multinationals already do.
This. My prediction for an AGI future in big corporations will be: * Corporation develops AGI * Corporation socks soar * Corporation lays off immense amounts of staff * Corporation stocks soar further * Corporation is now immensely powerful, essentially buys up other big competitors, lack of anti-trust law enforcement in the US allow this * New mega-corporation exerts massive global influence * Massive poverty everywhere from layoffs * Massive unrest, but hey, that's the governments' problem now * Governments powerless in the face of mega-corp * End result: Extreme class divide, people who literally are useless since labor is mostly obsolete and dont partake in economy, people who either have irreplaceable jobs or own AGI stock etc, partake in economy. One could argue that having a large portion of the population no longer be economically relevant would hit the corporation's bottom line, but they have an AGI, they will probably transition away from selling goods and over to simply shifting money around in order to make yet more money. I mean the US has been showing us how to do it for years, with an already staggering wealth imbalance. I don't think it's too far of a leap from there. It'll just be even more wealth imbalance together with a healthy sprinkling of war and civil unrest. People really forget that corporations absolutely don't care about ethics at all, so AI safety, windfall clause etc, all that doesn't really matter in the end. If apple/google/amazon gets an AGI, prepare to watch the world change for the worst, whilst their owners get even more unimaginably rich, that's pretty much that. It's just a matter of when this happens. Society won't turn into this utopia where work is mostly handled by AGI and humans can now self-actualize. Itll turn into a dystopia where corporations will absolutely rule all, and poverty is everywhere. They won't share, they already don't.
In this context, any company responsible for 1% of the GWP will /also/ have an unstoppable artificial god who does anything they ask it to do, which might be a bigger problem.
@@Elzilcho1000 Wouldn't be hard. Just spend huge chunks of your profits on expanding your business and buying land and paying out employee bonuses and other areas. Profit is what's left over after you've spent the rest. They can complain to you on your corporate yacht.
@@jarrod752 as a company if you buy stuff (ie not expense to conduct the business that generates the profit) the money you spend is still counted in your profits
@@Scubadooper so buffer the money into things that are meant to conduct the business then convert it back into whatever - this will still get flagged as a huge profit but you may just manage to slip into the zone where you have enough control that your company calls the shots
Rob, love the video as per usual. You mention the Windfall Clause contract is "legally binding." While contract law certainly differs across countries, in general they are only binding to the extent that the quid pro quo is maintained. In other words, if I enter into a contract with someone to do maintenance on my house in exchange for money, I'm only legally bound to provide the money if he has held up his end of the bargain. The problem I see with a Windfall Clause is, once the "windfall profits" have been theoretically realized by the AGI first mover, the other companies and institutions that may have signed on have no leverage to enforce the contract. The first mover could say, "I choose not to honor my side of the contract," and the only legal recourse would effectively be an acknowledgement that the other companies no longer have to provide their end of the bargain, which was nothing to begin with. Contracts can always be legally broken so long as the exchange of goods or services outlined in said contract are undone - and because this has no exchange, it can be broken at any time with effectively no recourse. I suspect you would have serious trouble getting the AGI "winner" to uphold their end, because at that point it won't matter to them. It sounds like a Windfall Clause is more of an insurance policy for companies in case they "lose" the race. By signing on, they are maximizing their chances of receiving profit sharing should the "winner" choose to follow through with the promise. If the winner chooses to ignore the contract, they are no worse off than they would have been absent the contraxt. If they end up the winner, they can choose at that point whether it makes sense for them to hold up their end of the bargain. I still think it is a great idea and should be further pursued, but it seems to hold all the typical first mover problems we associate with AGI, namely that once it is achieved, its potential benefits will be so great that the benefit of honoring any past agreements is dwarfed by the cost of ignoring them.
Even if companies did decide to sign the Windfall Clause, which I highly doubt happens in the first place, the company to reach 10% of the world's GDP will be so incredibly powerful they'll effectively be immune from any enforcement actions that may be taken to force them to honor the contract. The world's most powerful governments can't get Amazon to pay their taxes, you think anyone will be to separate trillions of dollars from a company that's already worth at least $8,000,000,000,000 (roughly 10% of the world's GDP) and has AGI at their disposal?
True. But in conclusion, we have no reason not to do this and everyone will participate. The contract will even be legally binding and yet, it won't help. Game theory leads to really weird conclusions sometimes (on the face of it).
@@MRender32 The problem is the Rich can win a revolution, at this point. They can mass-produce tiny flying drones that can snipe a human holding a gun from 500 feet in the air. They would completely destroy the illusion of choice if they actually did it, but they could do it. A couple billion dollars. Done. Easy. Every protester gets one free bullet. Revolution: solved.
Warp Zone When EVERYONE is starving and unable to care for themselves, you don’t think 450 million people can bust down the doors and raze the place? We can’t underestimate how many people are gonna be affected. You are probably right that they’re SO much stronger, but if they manage to kill the labor class (literally this time) I don’t know who they’ll be able to sell to. After all, they need to generate wealth, don’t they?
"What happens when you create huge amounts of wealth and that wealth all goes to a small group of people?" Hmm I can't possibly imagine. Such a thing has surely never occurred!
Here's a concern I'd have with this: rich corporations already wield incredible power over governments, public opinion, etc. By the time they get big enough for any windfall clause to kick in, they might say "well that was a fun PR stunt but now we're not going to follow through", and they just might be powerful enough to pull it off and not pay anything back to the world. And by this point, they've already got AGI/ASI and don't need cooperation of pretty much anybody to keep being the #1 company.
It's true that a company which develops AGI probably has no reason to continue to respect any agreements or contracts it signed, and that includes any laws of any nations it may be a part of. It would have the power to change the world to its liking, so in a way it would effectively be the new government, except far more powerful than a normal government since it's not bound by economic considerations. It would have the power to point to anyone at random and arbitrarily declare: you're rich, or you're poor and it would just happen. Still, that doesn't mean that the company would not pay out the windfall. Even if nothing forces them to pay out, it's also true that there would be no cost to paying out. The company has everything it could possibly want. When all of its hopes and desires are totally fulfilled, any hording beyond that seems pointless. Once a person has everything he or she wants, the only thing left to want is the general good of the rest of the world. Why not cure poverty and illness when it costs nothing to do so?
@@Ansatz66 one potential cost to altruism at that stage might be that by sharing your profits you're also sharing a bit of your power. And who's to say that a different, less benevolent actor, possibly one who hasn't quite figured out the alignment problem yet but it's willing to risk it to get in on some of that world domination, isn't going to use these resources to put their plan in motion? Best keep it all to yourself, you know what's best for everyone anyway.
@@Ansatz66 Let's look at what humans do. Just about any billionaire could be considered as having enough. Do they though ? No, too much money only create a crave for more. Company don't have as much empathy as human do though. It's only purpose it to generate more money. Owning everything wouldn't be enough, it would only be the natural starting point, the first step.
@@automatescellulaires8543 I mean, I would qualify this slightly. There are good and bad billionaires just as there are good and bad people. Nobody believes that Bill Gates is a bad person; some people believe it is wrong that we live in a society where people can be is rich as Bill Gates, but the current social contract is hardly Bill Gates's fault. The problem is not that rich people are all sociopaths; the problem is that society is incapable of whipping rich sociopaths into line in the same way it could if I behaved similarly.
BTW the Luddite fallacy actually is not a fallacy: technological development always resulted in worse circumstances for workers over-all; only through collective bargaining or new laws we managed to get part of the results. New factories and steam engines didn't create better jobs; they created unemployment, longer hours, smaller wages and enabled the wider use of child labour. They allowed companies to fire half of their old workers and replace them with children; they created bigger risks in investment in materials which had to be compensated by longer workdays; and the over-all theme was always to replace expensive labour with cheaper one, which created a lot of cheap labour that was always competing against itself. The employment of engineers, repairers and so on always had to necessarily cost much less than the labour replaced by the new technology, and as those positions required more education and training, thus paying better, the over-all available work by definition had to fall. And so it did. Every time. And the new work created by surplus of available labour was always much worse than the old work. AI will necessary do the same. Unless workers stay vigilant and demand their rights, those rights are denied. Even if it obviously results in fall of consumption and drastically worsen the position of corporations; paying more wages to stimulate sales simply is not a solution for any individual company. We will get few new jobs that require high skill, and drastically reduce the amount of avarage jobs that pay avarage wage. This will open door for a lot of new really shitty jobs that don't pay well, and which will constantly be a target for optimation and reduction. The more AI thinks the less people are paid to do so - just as the more precision and dexterity machines gained, the less people were paid for such work. Some capitalist in the middle of industrial revolution were begging the parliament of Britain to create legislation to regulate factories, as they faced such strong competition from those profiting from unethical practices, that they had no choice but to adopt the same explotation of workers and children. Something similar will necesarely happen in our economy; some billionaires already are calling for goverment action, as they know they are not free to make the ethical choice in a market where others can choose not to.
The problem with the industrial revolution was that it took place within an extremely capitalist context. It was not the new technologies. In a vacuum, the technologies were good. The workers kept getting bad jobs because it was ruthless cold hearted industrial barons with no public accountability whatsoever who were in charge of all the jobs. They still are btw.
Obviously automation in any industry would reduce the human jobs as those jobs are replaced by machines, but that doesn't automatically lead to an overall reduction in jobs. There is a bigger picture to consider beyond just the activities that are being automated. The engineers and repairers that maintain the machines are not the only place where we might find new job created following automation. When automation allows some good to be produced more cheaply, that tends to cause the price to fall. People might buy more of that good as the price falls, or else people might spend that money on other things, thereby causing other industries to expand. When the production of widgets are automated, many people who make widgets may lose their jobs, but the demand for cogs will naturally rise as the price of widgets falls, and so the widget-makers might gain employment in the expanding cog industry. Surely it is obvious that something must cause new jobs to appear despite automation, since we've been automating things for a long time, and yet people continue to work at jobs and life has been greatly improved.
@@Ansatz66 this is deeply wrong. the thing you're falling prey to is the idea that prices can only increase when demand increases. in fact, when the price of bread is raised, so is the price of basically every other staple good, because the demand for all of them is fixed. and by no means does automation necessarily translate into a reduced cost of product. the price to produce a car has fallen drastically since the mid 20th century due to automation, and yet the price for a new car has risen steadily, even adjusted for inflation. and further, in your example, you assume that cog-makers will not have also discovered the ability to automate their workers. automation does not happen once per decade, affecting one industry at a time. it happens constantly and across the spectrum of production. widget makers would be (and have not historically been) able to find equally paying jobs as cog makers. they would simply join the cog makers in the unemployment line and end up spending what ought to have been their retirement working at a domino's or walmart or other low-paying service industry job. and the thing that rises up to fill the demand void left by the slight decrease in price of widgets and cogs will be built to take advantage of automation, meaning there will be few if any jobs available in its manufacture. to simplify automation into the world of econ 101 is a gross disservice to workers around the world who have seen their lives upended and de-facto ended by automation and its knock-on consequences. tl;dr take your theoreticals elsewhere, we have no place for them in the real world.
This comment thread is one of the most ill informed I’ve seen in quite some time. The alternative to child labour during the industrial revolution was death by starvation. Yes, ethical practices are a luxury, that’s why we want every country to get wealthy as soon as possible. Cars haven’t got cheaper? Seriously? In what world? I can afford a car with 4 month wages, and I make a pitance. Try that 20 years ago... Rising the price of bread only raises modestly the price of *substitution goods*, complementary goods like e.g. ham get a lower price.
One problem with this idea is that it will not be humans that are being used in terrible working conditions, it will always be robots. If the AGI somehow made so much money that there was no money left in the world, then that would entail that the people of the world would have viewed what it was producing as more valuable than anything else they could have gotten with that money. Taxes would also have increased enough that there would be enough money flowing to people hired or subsidized by governments so that they can continue to buy the better and better products the AGI was making.
"You might face boycotts and activism." Amazon has been facing boycotts and activism for years. They don't care. Profits over everything. No company will sign a Windfall Clause. It's a nice idea but pure wishful thinking. A little bit of free PR right now (that honestly most people wouldn't really give a shit about) is literal fractions of pennies when you're talking about a company making 10% of the world's GDP (~$8,000,000,000,000). If you think this is a viable solution to inevitable mass automation, you live in a fairy tale. Even if companies did decide to sign the Windfall Clause,, the company to reach 10% of the world's GDP will be so incredibly powerful they'll effectively be immune from any enforcement actions that may be taken to force them to honor the contract. The world's most powerful governments can't get Amazon to pay their taxes, you think anyone will be to separate trillions of dollars from a company that's already worth 10% of the world's GDP and has real AI at their disposal?
As I was watching the video I was also reminded about how climate experts have known that we were headed for trouble, designed very good plans to avoid it, and even presented those plans to people with the authority to enact them, but....well you probably know where that ended up...
"viable solution to inevitable mass automation" We don't need a 'solution' to mass automation because mass automation is not a problem, it is a potentially amazing thing. The problem is capitalism, not automation.
Juicy Boi fair point, what I should have said is we need a solution to the unprecedented scale of economic disruptions that will be caused by automation.
Nah, they would all accept to sign it because the requisite for it to aply would be ridiculously easy to circumvent. Just chop the company into bits, make it a conglomerate and even if you move the equivalent to 40% of the world's money, you'll be fine. The shortsightedness of this whole endeavour just baffles me.
@@mvmlego1212 Obviously the workers, the ones doing the actual research, writing code, etc, work incredibly hard. But even if they are well compensated they will probably not be the ones who own the means of production. Drug researchers work very hard to produce amazing drug therapies, but the ones who make the lions share of the profit do so by owning capital, not working.
Yes please. Seems like a super important slice of the problem. There are lots of interesting objections in the comments here to go through for a start.
I dont really like that the apparent solution here is 'corporations maybe sign a pledge as a PR move and by the time they wield pet AGI's and significant-percentage-of-world-GDP levels of wealth hope they just honour it willingly (because there is no realistic enforcement mechanism) to sustain an entirely outmoded economic system'. I feel there are a lot of ways for that not to work out.
@@peacemaster8117 It's sort of equivalent to "have the government fix the problem". A corporation can sign a legally binding contract, but the government still has to be willing and able to enforce it.
"you can think of the world as having 2 types of people, some that make money by seling their labour, and some that make money by owning AI sistems" - Rob. "There are those who make money by owning capital, and those who make money by seling their labour" - Karl Marx
the automation crises being discussed here was literally discussed in the communist manifesto, certain took a lot longer than Marx anticipated but here we are still. Even for people still married to capitalism as the best economic system there is a lot that can be learned from Marx analysis of capitalism
What will really happen is that unchecked automation will make everything so incredibly cheap that what little money you make will be enough to have more luxurious life than we currently have. Social stratification will be ridiculous, but you want to have better life, not prevent other people from having exceptionally great life, right?
@@michaelbuckers Thank you for being a voice of reason in this comment section. Not to mention, as production of products and services becomes cheaper, more decentralized and more available, we are more likely to see a democratization of entrepreneurship, with clusters of mostly or entirely self sustaining local communities. People project tomorrow's problems on today's market but the landscape of the financial world changes all the time.
What value does Legaly Binding have once a company makes 1% Gross World Product. South Korea already has a problem regulating SamSung, because they are roughly 17% GDP of South Korea. A company with 1% GWP will be in a similar situation. So it would require that every goverment on earth promisses to uphold companies to their Windfall Clause. But then we move the problem one step away. I don't know, I am just not confident in the promisses such companies make.
We could also just, you know, try to move past capitalism, which is so obviously incompatible with a post-AI world. And if everyone’s labor suddenly becomes worthless, that’s some pretty strong motivation for some massive political change.
Capitalism is not obviously incompatible with a post-AI world. In fact, some companies make good money employing AI. AI is so effective at playing Monopoly that there is now an international agreement not to bet on rising food prices in the derivatives market. If everyone's labour suddenly becomes worthless, that is an enormous potential for down-sizing and cost effectiveness. Usually this results in wars in which the proletariat kills of its surplus. But with today's killer robots even that can be automated already. (Technically killing all the poor would be a massive political change.)
@@davidwuhrer6704 Incompatible with it in a way that is good for anyone that isn't the bourgeoisie. Of course, that is no less true now, but post-AI its even more obviously true to the unaided liberal eye.
Capitalism is the best by far rn But once post scarcity kicks in everything will and should be free since AI and infinite production removes need for any money
@@saosaqii5807 Not if the capitalists have anything to do with it, its in their interests to prevent that future and they have more resources than anyone else to get what they want.
Robert, literally any video you make about AI is something I'm interested in seeing. You are a fantastic communicator of all things AI and we need more people like you, especially now. Keep them coming!
That would work if most people were reasonable. Recent events have shown they aren't. In the US for example, you can put that "I am a dickhead" label on your forehead and literally get elected for president. So why would a company have a problem with that? Unfortunately, many people stopped caring.
Given what plenty of people have already pointed out in the comments (namely how totally unenforceable a windfall clause would be in practive), I think examining these types of problems really illustrates the need for fundamental changes to the way we view and enforce property laws and ownership as a whole
"Firstly, governments are not actually great at spending money effectively..." [CITATION NEEDED] Just because it is "widely known" to be so, doesn't make it true. In this case, you'd probably find that (paraphrasing here) "governments are the worst way of spending money effectively, except for all those other forms that have been tried from time to time..."
Yeah, that was probably the weakest part of the video for me. Asking Republicans and Democrats how much money is "wasted" by government and, what do you know, the numbers match up with their exit polling numbers.
Didn't have time to put this in the video, but this is addressed in section A.2.i of the report at : www.fhi.ox.ac.uk/wp-content/uploads/Windfall-Clause-Report.pdf > “Firms will evade the Clause by nominally assigning profits to subsidiary, parent, or sibling corporations.” > The worry here is that signatories will structure their earnings in such a way that the signatory itself technically does not earn windfall profits, but its subsidiary, parent, or sibling corporation (which did not sign the Clause) does. Such a move could be analogous to the “corporate inversion” tax avoidance strategy that many American corporations use. Thus, the worry goes, shareholders of the signatory would still benefit from the windfall (since the windfall-earning corporation remains under their control) without incurring obligations under the Clause. > We think that the Clause can mitigate much of this risk. First, the Clause could be designed to bind the parent company and stipulate that it applies not only to the signatory proper, but also to the signatory’s subsidiaries. Thus, any reallocation of profits to or between subsidiaries would have no effect on windfall obligations.* Second, majority-owned subsidiaries’ earnings should be reflected in the parent corporation’s income statement, so the increase in the subsidiary’s profits from such a transfer would count towards the parent’s income for accounting purposes.† Finally, such actions by a corporation could constitute a number of legal infractions, such as fraudulent conveyance or breach of the duty to perform contracts in good faith.
have you looked into the paper if it addresses this criticism, or are just just assuming you instantly came up with something that the experts behind the paper have never thought of?
I'm glad I was lying down when I saw the "appearing to not be sociopaths" bit. I would have fallen out of my chair! But seriously, thanks again for the hard work making these videos.
We already kind of have this divide between people. The working class survives by working, and the capitalist class by owning private property. Finding a new way to organize the economy could be a part of the solution.
The problem with legally binding is that almost by definition, anything that large and influential can probably buy a coup. Or carve out it's own state.
Possibly depends on the country, but in the case of the UK it definitely seems like the second, the government created the loopholes so that the biggest companies don't pay much tax, and then benefit personally from that.
Let's not kid ourselves. The executives and the shareholders all being sociopaths wasn't a hypothetical, and they're putting less and less effort into appearing not to be by the day.
Really? The amount of corporate virtue-signalling this June was almost nauseating. Also, I don't understand the stereotype of evil shareholders. How much stock do you have to own in order to be classified as as _bourgeois_ swine?
@@mvmlego1212 "I don't understand the stereotype of evil shareholders" Shareholders are pretty evil by definition. Invest in a company when it's doing well and make money. Withdraw when it's not and make money. "How much stock do you have to own in order to be classified as as bourgeois swine" Owning stock is the immoral part. So any at all.
@@AvatarOfBhaal -- _"Shareholders are pretty evil by definition"_ Could you state your definition of evil, please? I can't follow your argument. _"Invest in a company when it's doing well and make money. Withdraw when it's not and make money."_ That is not how investing works--at least not if you want to make money, rather than lose it. Buying high and selling low will make you broke, not rich. If you want to make money from investing, then you find a company that you believe will be good at making money. Then, you and the company make an agreement: you give them some money so they can execute their money-making plans, and they give you a share of their profits and some influence over the company. Eventually, you'll find yourself in a situation where you value your share of the company less than you value the money that you could get from selling that share of the company to another person, and so you sell the stock. These are all voluntary, mutually beneficial transactions. They don't steal or destroy wealth; they create wealth. I find it bizarre to demonize transactions or the people who make them.
@@mvmlego1212 The reasoning in these comments aren't that clear, but it's usually argued that shareholders incentivize endless growth at _any_ cost, including unethical business practises. Even at the cost of the free market, through monopolization and anti-competitive practises which benefit shareholders but remove consumer choice. However these could simply be solved by better laws on unethical practices and updating anti-trust legislation. Also removing regulations which benefit monopolies over potential competitors. Or we could just move to cooperatives.
@@LowestofheDead -- Those are reasonable points, and I even agree with some of them, but they're a lot different from saying "shareholders are pretty evil by definition". If a shareholder is concerned that the company they've invested in has run out of room to grow without compromising their ethics, then they can divest from the company.
It is unknown if AGI will have a soul, but corporations definetly don't and will never have. I think that rogue AGI is much less of an evil, than corporation, that has AGI doing whatever they say.
All I'm asking for is more videos, period, as long as that doesn't take away from the (so far excellent) quality of them. This is my absolute favourite UA-cam channel out of hundreds I've subscribed to and you've got the likes of the Vlogbrothers, CPG Grey and The Royal Institution beat as far as I'm concerned. General purpose A.I. is the most important and interesting topic of our time and, if we survive till we have it, will impact the future of humanity incomparably more than isolated historical events like the current Corona crisis and even larger, more dramatic events like global warming.
Its worth pointing out that this windfall clause is likely to be ignored even if signed for two reasons: 1. The company massively benefits from avoiding it 2. The state where the company is present (and thus the place its bound by the laws of) massively benefits from letting the company out of the windfall clause. (Or the company will just make offers to national governments to let it out of the windfall clause in exchange for the company moving there) Even if the windfall clause is part of international law, already powerful countries (the likely place AI development will succeed) have show themselves to be powerful enough to ignore international agreements.
And if we can think of a way to ensure all the methods of avoiding the clause are covered, you can be pretty sure the first thing the company does with its AI is stick it on finding a way out of the clause. (Its kinda funny that this controlling corporations problem is just an AI safety problem in disguise)
We might get to a point where something like GPT(n) can be used to be taught arithmetic and come up with solutions to mathemathical problems. Then mathemathicians/computer scientists will have to decide whether or not that's a valid proof much like they did with the first computer-proven theorems (and they may come to a different conclusion).
Some confusion is going on here: Taking the ideological consensus in the States regarding public spending as representing reality, also assuming the difficulty in enforcing taxation on companies isn't a result of the influence of capital to corrupt taxation systems (and thus isn't soluble). Bad sociology, and political science.
I would argue that extreme inequality does not need the rise of AGI to tear apart the social fabric. Great inequalities are symptomatic of societies on the verge of collapse across history, and we're living in one of them. If anything, the impact of AI/AGI deployment will be a catalyst, but political choices seem to be already made. I doubt conventions such as discussed here would change anything realistically. Empty promises are to be broken, especially if you wield such a power.
You just assume that public money is inefficient because people think so... that's not a serious argument. Corporation are not encline to fight for a greater good, they are here for money. All the biggest corporations implement sophisticated and aggressive tax reduction schemes, that's not for the greater good, I think that's proof enough we cannot rely on them, especially if we expect profits to grow exponentially. If we want something that benefits us all, a more efficient tax strategy is probably what we need. If you're thinking lomg-term, why not thinking about something like a world-wide tax at international level? Or taxing profits where they're made/sold instead of where the product is produced/engineered?
Very glad to see a new video relatively short after the last one, keep your great work up! I specialized to ML partly due to your great, interesting videos during my CS studies.
Assuming decision makers to be human beings? Supposing that corporatives would have any issues with rightfully looking like the sociopaths they are? Suggesting a "tit for tat" argument for cooperation? Look, Robert, I love your content and your commitment towards spreading awareness about matters even tangentially related to AI, but at this point I must assume you don't live in the same planet as the rest of us... Great video, though xD
@@starvalkyrie To me, it just reflects humanistic tendencies in his train of thought. I mean, just look at his content: the guy is spreading awareness and inciting interest on AI safety research, which is to say "let's make sure that this thing that will eventually be made doesn't screw us all over". Nice? Absolutely. Charming? To some extent. Naive? As all hell. It's pretty much tied with libertarianism in terms of naivity. Nevertheless, it's worth taking the time to examine ways to deal with the problem without changing the entire framework (AKA late stage capitalism) before giving up on it and forcefully engineering a legal and economic system built around a new technological paradigm that may never come.
At least this video does trigger people into stating the obvious. Maybe the naivety is faked, and only meant to help us realize how screwed we really are. Human made economic choices, makes Skynet looks like a Saint.
@@automatescellulaires8543 Well, you actually nailed it with the "Human made economic choices". The biggest lie to ever befall upon our species is one promoted by academics in the field of economics: the economy is treated as a phenomenon, AKA "something that happens", instead of being the sum of all the decisions made by individuals serving (mostly) their own interests. These sociopaths would make you think that the fact that they "use game theory" is already accounting for individual agency and extrapolating it to bigger systems, but it's a lie enabled by the obscuring of the events' sequential order. First, a tendency is found, then, it's exploited and purposefully perpetuated and the last step (usually when someone outside the lobby questions the ethics of such actions) is justifying the events by stating that "it couldn't have happened any other way because game theory says so". Source: my great uncle was a trader. The guy could never find peace after the small bussiness debacle he had contributed towards by speculating with warehouse and shop prices (this was in Spain in the late 80s). Several of his acquaintances lost their livelyhoods because of something that he himself was doing. Both them and himself had come in a mass migration from the southern-most parts of the country, and he was a predator to them. A decent human being doesn't come back from that kind of realisation.
Question, doesn’t this contract be basically useless in the situation that a company creates a super intelligent AI who’s interests are aligned with theirs? Wouldn’t it very likely try and succeed at getting them out of this contract?
Due to the lack of quid pro quo (as stated by commenters below) in this contract for when it comes into effect (the only benefits that would be gained by making it in the first place being ephemeral things like publicity and some cooperation that would be impossible to prove not to have been there otherwise), I think we need to change up the windfall profits clause considerably. The best way to change up the clause is to have something like the US military’s policy paper regarding cyberspace, where anyone who creates an AGI that makes all human labor obsolete is committing a cyber attack on everybody unless the company is using something like 50% of their profits to directly aid all unemployed workers (or just have universal basic income via that 50% of profits, which also helps people continue to be able to buy the goods the company is making).
Execs and shareholders: "Yo AGI, how do we look charitable but not actually give anyone money?" AGI: "I gotchu bro, just invented a million forms of windfall evasion" Execs: "sickkkkk"
CEO of the first company that owns an AGI: "AI, tell me how to get out of the windfall clause without arising suspicion, and then make me king of the world!"
Money and profits will become obsolete. The first company to discover AGI will not bother making/selling products. Imagine having a wish-granting genie with unlimited wishes. Why would you bother creating and selling products when you could just wish everything you want into existence?
If you plan to cover on the socio-economical aspects of AGI impact, please consider collaborating with CaspianReport on one of the videos. I think it's good to facilitate cross-pollination of multiple discipline in the AI Research because we are all in this together. Cheers Rob!
While I am more interested in technical side of things, this is also very interesting. This is a really neat idea but it only works with extreme breakthroughs. I think that problems will creep up more gradually. More and more jobs slowly and across different countries will be replaced by AI systems. In that scenario no single company will earn 1% of world GDP while most companies will employ very little actual workers.
I believe the Windfall Clause is useless as AI would be more effective at solving Human Terminal Goals than Currency, effectively allowing it to take the role of currency. A better alternative clause would be something that prevents monopolization of all AI upon creation, essentially using AI to sabotage other people's ability to create AI.
It seems that ultimately, the issue here is that AGI is more likely to be designed to work for the benefit of the company that created it, rather than for the benefit of humanity as a whole.
Even a peak AI won't be able to make communism work. In fact long before then, it will likely tell you exactly the same thing much dumber humans have been telling you. Communism is not sustainable.
@@IAmNumber4000 That's an easy claim to make. Marx specifically defined his "ideal" vision for when you achieve communism. But he never specifically defined HOW to get to that, or HOW to maintain it. And because actually getting there or maintaining it is effectively impossible, (without making all individual actors mindless) that constant no true scottsman fallacy is brought up. Which equates to: That way everyone else who has tried communism to reach this impossible standard didn't work because they didn't reach or maintain this impossible standard. Therefore it wasn't the "right" way. There's literally no solution for the calculation problem, the incentive problem and the local knowledge problem. And those are only the tip of the iceberg. Like I said. "Dumb" humans have figured this out a long time ago. If you set an AI to create communism it would either have to kill everyone or render them all mindless and control them and work with a tiny group which is very close by. And even then it wouldn't precisely fulfill Marx's vision. But that would come the closest by far.
@@sirellyn4391 Actually Marx never specified what a socialist or communist society would look like, only its defining features and difference from regular capitalism. Namely that the communist mode of production has no currency, no class system, and no state. Marxism is a systems theory, not itself a proposed system. Pretty crucial difference, there. Blaming Marx for the actions of state capitalist tankies like Stalin and Mao is like blaming Charles Darwin because some nuts deliberately misinterpreted his theories to justify "Social Darwinism". "And because actually getting there or maintaining it is effectively impossible, (without making all individual actors mindless)" Again, stuff like this demonstrates you haven't made the slightest attempt to understand leftism or Marxist theory. You think anyone is in favor of making every person 1984-style slaves to some absolutely powerful state? Why would anybody even be a leftist if that were the case? Obviously, someone isn't telling you the full story, because it's an easy out to think of your political opponents are stupid and insane rather than make any effort to understand how they arrived at their conclusion. You should try reading what Marx had to say about the state and democracy. Read what he wrote about the Paris Commune in "The Civil War in France". He was closer to a direct-democracy anarchist than a USSR-style tankie. I'm not going to hold your hand the whole way and I can't paste links here. Nobody knows if communism is possible because it hasn't happened yet. Automation hasn't obsoleted human labor. What can be known, however, is that capitalism can't last forever. Economic growth can't continue forever because the economy _relies_ on the development of new labor-saving technologies to grow. Even now, the growth of the global economy relies entirely on non-existent money in the form of debt that will never be paid back. So, can the world continue to go into debt forever to fund economic growth? If so, then there is no reason why we can't take on more debt to feed and shelter the homeless. If not, then Marx was right and capitalism will be replaced. "That way everyone else who has tried communism to reach this impossible standard didn't work because they didn't reach or maintain this impossible standard. Therefore it wasn't the "right" way. " If you had made even the slightest attempt to understand Marxist theory then you would know Marx considered the complete obsolescence of labor by automation to be a precondition for achieving the communist mode of production. He argued _constantly_ with what he called "crude communists" (AKA tankies), people who thought capitalism could be ended by merely making private ownership illegal. "There's literally no solution for the calculation problem," The calculation problem is a refutation of a straw man. Marx's law of value is a theory about exchange values, not prices. Exchange value is just one of many factors that influences price. So claiming "Marx's theory can't even predict prices in a capitalist economy LEL" is utterly pointless, because the theory was never meant to predict prices. But strawmen get picked up fast by mainstream economists because they're all desperate for _any_ refutation of Marxism, no matter how fallacious it is. If you don't want to understand leftist theory then you don't have to. Just quit pretending like you're an authority because some wingnut blog fed you anti-leftist talking points. It's embarrassing.
I certainly see no downside to a windfall clause, but I also suspect in practice the people with 1% of global GDP will spend a significantly smaller amount than they would have to pay out with said clause on hiring lawyers and lobbying politicians to make sure they don't end up going through with it.
7:25 You make this argument sound like it's a good thing because it will make the company sign this agreement. But what you proven is that the company doesn't actually care about giving people money and there's no reason for it to give any money after it produces an AGI.
There are so many problems with so many of these rationalizations for this scheme that I'm not quite sure where to start. It's like they started to explore Marxism with the concept of making money by owning capital vs. selling labor, but they got caught up in the idea that communism is a four-letter word, so instead of following the premise to its logical conclusion and gaining any real understanding of the problems with Capitalism, they stopped pulling that thread and pivoted to the tired old strategy of: "lets keep putting a bandaid on Capitalism instead of addressing any root problems and hope that works". . 1:45 "you can think of the world as having two types of people: people who make money by selling their labor, and people who make money by owning AI systems" You can replace "AI systems" with "capital" in that sentence and it becomes a central observation of Marxist theory. AI is just a type of capital. It's the end of the long tail automation. People won't be able to make money by selling their labor by the time AGI emerges. You don't need complete human-level intelligence to render almost all jobs obsolete. There are very few jobs that require the complete spectrum of human intelligence to perform, which means that expert systems will be largely sufficient to completely disrupt the world economy. . 4:18 "Governments are not actually great at spending money effectively." Compared to what? By what metric? Why? Is this a fundamental problem with the very concept of government or is it a consequence of sub-optimal implementation? . 4:28 "This isn't really controversial. A 2011 poll found that Republicans think that 52% of their tax money is waisted while Democrats think it's 47%" I actually think this is highly controversial. I think a lot of modern politics is guided by this falsehood that a lot of the general public believes. Especially Americans who are still suffering a hangover from Cold War propaganda which is still taught in schools. Using public opinion about tax spending is not a rigorous metric and it doesn't even offer any sort of comparison with alternatives like corporations or charities. A lot of it is based on the idea that we know of some better way to organize large institutions than bureaucracy (democracy is the only real contender here) and that bureaucracy is an organizational structure that is somehow unique to governments despite the fact that pretty much every single corporation is organized as a bureaucracy. The major difference is how people gain power and to whom they are beholden. . In a democracy, people gain power by consent of the governed and are beholden to the governed. Corporations are more like dictatorships beholden to stock holders. That can lead to openly sociopathic behavior even if the people in charge aren't sociopaths. Here's an excerpt from the Wikipedia article on the "Yes Men's" hoax where they pretended to be executives for Dow Chemical and promised that Dow was committed to taking care of the damages caused by the infamous Bhopal disaster: . "The Yes Men decided to pressure Dow further, so as "Finisterra," Bichlbaum went on the news to claim that Dow planned to liquidate Union Carbide and use the resulting $12 billion to pay for medical care, clean up the site, and fund research into the hazards of other Dow products. After two hours of wide coverage, Dow issued a press release denying the statement, ensuring even greater coverage of the phony news of a cleanup. In Frankfurt, Dow's share price fell 4.24 percent in 23 minutes, wiping $2 billion off its market value. The shares rebounded in Frankfurt after the BBC issued an on-air correction and apology. In New York, Dow Chemical's stock were little changed because of the early trading." en.wikipedia.org/wiki/The_Yes_Men#Dow_Chemical . Even if there were people at Dow who wanted to do the right thing and were in a position to do so, the share-holders would rebel and the "good eggs" would lose their job and be replaced by sociopaths.
4:45 "Actually it's worse than that, because this is a global issue, not a national one, and tax money tends to stay in the country it's collected in. Countries like the US and the UK spend less than 1% of their taxes on foreign aid and much of that's military aid" This brings up two problems. First: foreign aid is not so simple to quantify. One reason you always see the US on the scene of major disasters around the globe is: It's the only country that has the infrastructure and logistical capability to respond effectively thanks to its enormous military budget. When the US buys an aircraft carrier, it's not billed as foreign military aid even though having a fleet of aircraft carries spread all over the world allows the US to respond within days or even hours of a natural disaster striking another nation. Second: I think there's a sort-of imperative that world governments eventually merge into one (I can hear the sound of a million conspiracy theorists crying out). As our technological capability continues to grow, our capacity for destruction also grows. If we're still split into hundreds of nations debating (sometimes violently) whether Theocracy is a valid form of government by the time we've mastered synthetic biology, we'll be in a very precarious existential position. Government might be one of the most crucial applications of AI.
6:41 "Even if the executives and the share holders are all, hypothetically; complete sociopaths. They still have a good reason to sign something like a windfall clause. Namely: appearing to not be sociopaths." There are plenty of companies that get along just fine without much good will. Again: Dow Chemical is a great example. See also: Comcast (especially through the lens of South Park).
"most people think their tax money is wasted" is not a good reason to say that tax money is actually wasted. also it's possible to tax this agi if it does stuff within a nation's borders, even if it isn't based within that nation.
I find that poll results interesting. Most elephant voters think that more than half of federal taxes are wasted, donkey voters thing it is just shy of half. This is federal tax money, not municipal or state. What does the federal government of the USA even do for the general population?
This definitely seems like something to be encouraged, but my biggest problem with it is the assumption that the economy and capital systems will continue to operate in anything like the same way as they do now, once AGI is developed. It's not a reason not to do it, I just don't expect it to be relevant when it comes time.
I feel like this whole video sidesteps the point that if you make profits in a significant percentage of GWP you can just start economically blackmailing countries into doing whatever you want. The example given in the video (Saudi Amco) can already do this, to some degree, with many countries already. If a company was making profits in the 5-10% range they effectively are a country of their own with more direct economic influence than almost any other country on earth. The windfall clause is, to an enitity of this size and economic power, just a piece of paper that can be safely ignored. Even if countries try to enforce it (which considering the money is supposed to go to charities which they gain no direct benfit from, this seems unlikely) the AGI, or any suffiencently skilled team, could just work around any barrier put up by any nation or coalition of nations. Embargo? Loopholes or alternative sources of trade. Sieze company assets? Come up with both legal and physical defences. Outright war against the company? When you control 5-10% of the worlds economy and have an AGI on your side you can probably win, or at least survive, through both economic and traditional warfare. TL;DR If a company is making enough money for the windfall clause to take effect then they have enough power to ignore it, and even if countries tried to enforce it the company would be powerful enough to circumvent the enforcement Socialism, or barbarism under the boot of AGI run corparations.
Reading the comment section I think there is more than enough interest and discourse going on to also take the policy side of AI into your video portfolio. I'd be more than happy to see the current state of research presented by you. 😊
This sort of proposal sounds very sensible, conditional on us ending up in the situation where some particular organization successfully invents "friendly" AGI, but still manages to "control" it in the way we usually think of companies controlling software and other intellectual property -- i.e., they have some ability to license and control its usage, capture profits, avail themselves of law and courts to protect their interests, etc. But... doesn't this run into some of the same, shall we say, failures of imagination that we see when people talk about the nature of *unfriendly* AGI? Like, the most serious risk of unfriendly AGI isn't "some evil corporation or terrorist group uses the AI for nefarious purposes." It's "oops, the matter in your solar system has been repurposed to tile the galaxy with tiny molecular smiley faces." In other words, a genuine super-intelligence -- almost by definition -- *can't* be controlled by non-super-intelligent humans, one way or another. That's really bad when it comes to unfriendly AGI, but if you're willing to stipulate a friendly AGI (i.e., one that is sufficiently aligned with human values), doesn't it suggest that a lot of this concern about how to distribute the benefits is kind of beside the point? Like, if we suppose, as I think is pretty reasonable, that one particular conclusion that falls out of "human values" is "it would be bad for the vast majority of mankind to be reduced to abject poverty or death after the development of AGI," then that's a value the AGI itself is going to share, right? We are talking about AGIs here as agents, after all, with their own values. So if we actually *solve* the value-alignment problem, doesn't that basically address this issue, without the need for human-level legal pre-commitments?
There is one case where human precommitment might help. It is the case where the technical side is solved. People know how to align an AI to arbitrary values. But a selfish human aligns the AI to their own personal values, not human values in general. (There are good reasons for a benevolent person to align the AI to themselves, at least at first, with the intention of telling the AI to self modify later. Averaging values is not straightforward. Your method of determining a humans values might depend strongly on the sanity of the human. )
A friendly AGI will have whatever values we build it to have. The only way an AGI can be friendly is if we have the engineering skill to build the kind of AGI that we desire. Most likely such an AGI would follow whatever orders we give it, regardless of human suffering. Hopefully whoever builds the first AGI won't want the majority of mankind to be reduced to poverty, but only time will tell.
You make this topic so interesting and cover a good range of related issues. One thing I'd love to hear more about is the different ways companies are trying to create AGI. We hear that different groups want to create it, but what are their approaches, and how do they differ? I think this topic is barely covered online and would be a really interesting video!
It's worth bringing up that socialist economics systems, which don't have a capital class, don't have this problem. Automation is unambiguously good under socialist systems, where the means of production (factories/farms/mines/AI systems) are managed by and for the working class. It just means everyone gets to work less and spend more time doing things that they are passionate about.
althought I alight a lot with marx ideas I wouldn't be so sure about AI serving the greater good, the administrative class could use the AI to boost their personal power. I guess in the end it all depends on the personality of who has the AI first. Anyway I would agree that in a socialist country AI would be less likely to be used for "evil" due to the fact that those who control the AI are most likely comunists therefore valuing colective gain over personal gain more than a capitalist.
I tend to prefer your technical content because it's more immediately useful to me, but also hold the opinion that ai safety is ultimately a cultural problem, so it's cool to see you tackling those aspects. I wouldn't mind further pursuit of this direction, but definitely want to keep seeing the great technical stuff you've made. Thanks for all your efforts!
Prediction: companies will avoid signing this or refuse to pay if the clause is triggered. They will avoid signing it by voicing concern that whatever organization was designated to receive the money is not trustworthy/corrupt. If they do sign, they will avoid paying by using lawyers. Lots of lawyers.
I suspect that the super-intelligent AGI lawyer-bot will be able to find a loophole in the clause. And the AGI PR-bot will manage to convince us that this is the right thing to do.
"two types of people. People who make money by selling their labor, and people who make money by owning AI systems." Wouldn't this be true in our society anyway? People who make money by selling their labor and people who make money by owning capital goods and extracting surplus value?
The difference is that labour currently has value. If a robots can do everything cheaper, the people who make money selling labour stop earning money. At which point things get worse.
@@gnaskar see my comment on the main thread. Big business can get rid of their workers if they want to but they will unintentionally destroy the capitalist economic system. Maybe that would be for the best.
This was... not an ideal video. The authors of the proposal are making a great many assumptions about the enforceability of such an agreement (just say you will totally give your disgusting gains to the plebs until you've gained enough power to ignore your pledge). And even in the unlikely situation where the hyper rich decide to provide aid to people, it's done so in a completely undemocratic manner. These .000001% of the world's population get to decide where the charity goes with no input from anyone. They could donate to racist causes because they believe them to be noble pursuits, and who will stop them? All in all, an astonishing bad take on how to spread the gains from AI.
I think that most of the current utrarich are competent and often benevolent. Governments and random rich benefactors have different failure modes. How often do supposedly elected governments make unpopular decisions. In the covid crisis, bill gates has been trying to develop a vaccine and the FDA has tied red tape everywhere and basically made testing illegal at one point.
@@c99kfm pretty much. bill gates is not a nice bloke. He does a lot to make himself look good but his wealth is built upon the back of so many disadvantaged people.
@@gadget2622 jesus christ... get one single argument. built upon what? did he personally designate the production of microsoft products to third world countries and made sure the factories in question had impossible conditions, in addition to somehow locking people up when they applied for the job and thus werent able to take the job out of choice? you do realise he and people like him make reidculous amounts of jobs that people CHOSE TO WORK AT, because it is a net positive exchange of their effort for the wage they are payed right? IN OTHER WORDS, VALUE IS CREATED. child labor is fantastic, unless they are stolen off the streets and forced to work. get out of here, communis
Profits are so 20th century. Once corporate income taxes became large and widespread, they became a measure to be gamed, rather than any sort of objective measure of benefit of an activity to society or the owners. Note the various companies whose stock prices appreciated despite not earning nominal profits. Combine that with central banking and stock market speculation, and it is easy to foresee how this would play out. You will never see a situation where it will be possible to identify that AI was the source of the profits rather than some other issue. If this ever "paid out", it would because of accommodation to other forces, rather than to this ex ante agreement.
There already is a big gap between two groups of people: Real estate owners and renters. It seems that nobody seems to care about some people making money just by having money to begin with.
I think the best way to "patch loopholes" is to start with specifying what you actually want companies to do, and where they fall short, just do it yourself and charge them for the costs. Any measure deviating from this will suffer the same misalignment problem we know from AI. Companies are reasonably good optimizers, too.
I'm sure this will work, because it worked so well when we asked companies to sign a windfall clause at the start of the industrial revolution and again at the start of the information age. You might argue AI is unique because at least we're thinking about it beforehand, but I'd say the only thing that seems different is the focus on the inherent classism and I feel even that's not all that unique. Both the industrial revolution and the rise of the information age did exactly the same thing: put massive amounts of wealth in the hands of those that controlled the involved machines, while forcing everyone else to shift their labour to new and different types of work that the machines happened to not be able to do, or not be cost-effective to do. Of course the true capitalist here would argue that "anyone can develop AIs and those with the best AIs win until someone else comes up with an even better AI" but the real problem is the one so often pointed out by Robert Miles and others: once you have the better AI, your chances increase incredibly to get exclusive access to the even better AI as well... That may be the argument to sell the windfall clause in this case, but that does nothing to reduce the value it would have in the previous two examples and that value wasn't enough at the time either. After all, all the gains from machinery and information systems also made it easier to build better machines and software. There's a reason Elon Musk and Jeff Bezos are building rockets and bricklayers aren't.
"the problem is (describes capitalism)" "our proposed solution is (describes the same ineffective band-aid we've used for more than a century)" Anand Giridharadas gives a pretty good explanation of why charity is ineffective at actually solving issues. An oversimplified summary is that letting people who benefit from problems control how we address them, allows them to invest in 'solutions' that don't involve actually fixing the underlying problems that they make so much money from. If "we promise we'll do the right thing with the money we're stealing" was going to work, it would have done so by now.
Exactly, it's not like we haven't been there before. It's just what happened with automation but on a larger scale. We know how that turns out when capitalism is involved : Rich get richer by right of ownership, works don't get to work less despite being more productive, and superfluous workers are discarded and become extremely poor. Also apparently government are bad at sharing wealth but we should trust self serving corporate entities whose only goal is generating profit to do it better? When we know they actually do not. These ideas are just trying to preserve the status quo, and the status quo is rich getting richer draining wealth from everyone else. It's hardly a status quo worth preserving. (and that's without including sustainability issues).
“You can think of the world as having two types of people: People who make money by selling their labor, and people who make money by owning advanced AI systems.” B A S E D
Just because they willingly sign up for it doesn't mean they will actually be any more likely to be willing to pay. Watch "How Hollywood Studios Manage to Lose Money on Movies That Make a Billion Dollars" from Today I Found Out - Hollywood studios would sign profit sharing deals with some of the talent, but then through accounting magic would pass all their profits to other subsidiaries to avoid honoring their part of the deal.
Your channel talks a lot about intelligent agents getting around restrictions we put on them in unpredictable ways. Designing the clauses that wouldn't be bypassed seems very difficult, especially if you consider the company in question will have an AGI doing their accounting.
I think any sort of attempt to get companies to voluntary donate their income is misguided at best. Nevermind that it will be extremely hard to force a company to adhere to their Windfall Clause after they have amassed massive money, and therefore power, from creating a powerful general AI - even if they did do it (and they might, if only to prevent a revolution!), it still results in a scenario where a significant amount of world production is in the hands of a single company, and if they do share their wealth through some kind of global UBI, it means a lot of people's livelihoods would be dependent on the AI creators. That gives them far, far too much power over the rest of society, with us having basically no bargaining leverage with the creators. It's better than massive poverty, but it's still at best a benevolent dictatorship. An optimal solution would need to ensure that the AI system is socially owned and managed and the benefits are shared equally, so that the AI does not create any elite class. Nationalization isn't an option here, because it would exclude the world outside of the nation that created the AI, and because a national government might use the AI to subjugate other nations, and because nation-states are frequently not as democratic as they appear (if they even bother - what if the superhuman general AI is created in China?). We would need some other form of social ownership of the general AI, one open to the entire world population and difficult for any one group to dominate. Like a sort of super-co-op :P
Philanthropy is not an unqualified good, many supposed philanthropic foundations set up by obscenely wealthy people often times use that money to gain influence over international organizations, and direct them towards their own pet projects, donate towards questionable causes, or in the worst cases are just elaborate tax evasion and money laundry schemes that funnel the money right back to the philanthropist that set them up. It scares me the fact that not once in this well researched video not once was the idea suggested of some kind of democratic oversight over the wealth an agi could generate.
^ This. Distributing the wealth created by the AGI is not the same as distributing the power. The company that is generating that wealth using the AGI still has all the power in this situation, and even if they are run by completely benevolent angels (spoilers: they're not), the idea of everyone on Earth becoming economically dependent on a single company is highly concerning.
There's a massive error here: Social Security is 99.6% efficient. You also neglect the possibility of dividend systems, like the Alaskan oil fund. I'm pretty sure writing people bigger checks isn't going to take more work, and removing the means testing would probably lower it. "We can't rule out the possibility they mean it" lol. On about the same % chance that snacking on depleted uranium might give me super spidey powers, sure. But historical norms have shown that peasants have to riot and be on the brink of revolution to receive improved material conditions. The current unrest managed to uh.... accomplish the rebranding of a pancake mix and some rice products. Which somehow feels infinitely more than we usually get, but somehow feels infinitely worse than nothing. Capitalism is amazing.
problems which should be addressed before implementation of a powerful AGI becomes safe: value misalignment, reward hacking, implicit bias, environmental robustness, C A P I T A L I S M
Considering how automation turned out I'm not optimistic about how AI would be used. Windfall clause here is basically a way to make sure to preserve the status quo, which isn't great. Like, automation could have been used to considerably reduce the time individuals have to work to survive, but instead it led to some people being extremely rich, other having to still work a lot, and then those that were rendered useless becoming extremely poor. This would do exactly the same thing. On a larger scale. And I mean, that's not surprising, AGI made by capitalist in a capitalist society will lead to an AI that emphasize the problem of capitalism. So... uuuh... hopefully capitalism is dead before we get to AGI is the takeaway here?
It's at times like this that you realise why goal alignment for advanced AI systems is so hard. We can't even achieve goal alignment between different human beings.
The thing about legally enforcing a windfall clause, is it's a lot like enforcing tax laws. And really, at the point where human labour has no value maybe we should be getting rid of companies entirely, because all the advantages of capitalism are gone at that point, and the disadvantages for things like socialism can be dealt with, well, AI.
@Enclave Soldier Capitalism doesn't "work out fine" for the society we have now, much less for one where labor has no value. Communism and Socialism aren't based exclusively around human labor having value, that concept is only used to explain how the working class is exploited by the capitalist class. In a society where labor is worthless and the working class becomes merely a consumer class, the exploitation is clear enough. The real basis of Socialism is the common ownership of the means of production, and in this case those means are the AI itself and the machines it uses. For this situation, there is no fairer and more democratic solution than common ownership of AI -- what good faith argument can a person have to defend the position that the future and well being of all of Humanity should lie in the hands of a few corporate executives?
Please do make more videos on these aspects of AI safety, but don't stop your usual approach. I greatly enjoy your style of explanation and would like to hear anything you have to say, or to comment upon.
Love to hear about the human part of the bargain, if possible please do more on the socioeconomic impact of AI or platform other creators who do. Great vids as always!
Someone who likes this might be interested in Suarez's novel Daemon and its second half Freedom(TM). It involves (in part) "beneficent" "malware" that at one point for example starts hiring lawyers to keep itself from being deleted. It's a very fun ride, and one of my favorite books. (Not really AGI, but practical AGI, sort of. Very realistic for sci-fi.)
Man I'll take any content you want to make, your stuff is awesome. I really like how clearly you explain things. If I get a vote I am interested in formal systems, such as metamath, and AI being trained to use formal reasoning. I asked Stuart Russel about it in a Reddit AMA and he said he had previously considered the idea of using formal systems to prove results about AI systems as a control technique. A proven result would be one of the only really solid control structures I feel. Moreover there might be some bootstrapping possibility, where an AI is only allowed to expand it's capabilities after it's proven that the expanded system will obey the same rules that it is proven to obey. Additionally making gpt-3 do mathematics is sort of like training a computer to run a simulation of a dog trying to walk on it's hind legs, you can do it but it's not playing to anyone's strengths. Computer systems that reason using set theory, such as metamath, can use symbolic language where there is a rigorous definition for every symbol and use currently existing tools to check their reasoning to be correct. This is a much more solid foundation for developing a system for thinking and reasoning about the world I feel, natural language is a mess. Anyway yeah that was long vote ha ha, keep up the good work, love the channel.
I'd be quite interested to hear how AGI and ASI would transform the economy. That being said, I'm also a bit sceptical about some claims you made in regards to the economic impact you've mentioned. There are some things in micro- and macro-economics such as comparative advantage and the appreciation/depreciation of currencies etc. which actually kinda seem to go against the notion that "humans wouldn't have any work left in a world dominated by AGI/ASI", "a company using AGI/ASI would result in extreme wealth inequality/the focusing of large amounts of wealth towards a single entity" or even (though this point wasn't being made by you, but somebody else) "a company run by an AGI/ASI would evolve into a monopoly". Admittedly, I'm not a world-expert in economics (I'm just somebody who's passionate about it), but what I do know is that a lot of things in economics are based in mathematics and that mathematics still should hold true for an ASI. On a semi-related note... how much power would an AGI need (including the cooling of the processors) with a mental productivity equal to that of an avarage human? I'm quite curious about that number since it can be then used to calculate how much the electricity-bill would be and, combined with area-cost that the AGI would take up, how much "employing" an AGI would actually cost compared to a human.
"There are some things in micro- and macro-economics such as comparative advantage and the appreciation/depreciation of currencies etc. which actually kinda seem to go against the notion that 'humans wouldn't have any work left in a world dominated by AGI/ASI'." It is not a matter of economics. Once we have AGI, there would be no jobs left that could not be performed by a machine. Our minds are our greatest asset and the only thing which allows us to do things which are beyond the capability of machines. What work could we possibly do when everything we might do can already be done for free by a computer? "How much power would an AGI need (including the cooling of the processors) with a mental productivity equal to that of an avarage human?" No one knows how an AGI will work. Once we figure out how to build an AGI, then we'll be in a position to estimate how expensive it will be. "I'm quite curious about that number since it can be then used to calculate how much the electricity-bill would be and, combined with area-cost that the AGI would take up, how much 'employing' an AGI would actually cost compared to a human." No doubt an AGI would require some amount of electricity, but there would be no electricity bill since electricity would be free. We'd no longer need human labor to produce electricity, so there would be no one to pay for our electricity bill. If we want more electricity, we can just program our AGIs to build more power plants.
@@Ansatz66 Ok, let me adress your counterpoints since I have some disagreements on them. First of all, how do you even come to the conclusion that it is not an economic matter here? I'm well aware that an AGI and above would be able to do the same mental tasks a human could. That being said, there are clear mathematical benefits for an AGI/ASI to not try to emulate the whole human thinking-palette, but instead try to be hyper-focused on a single thing/task. It's known as comparative advantage and clearly showcases that an AGI would use its own resources (computing-power etc.) most efficiently if it focused on something that it can do the best... which in the case of an AGI/ASI could be innovating etc. in which we humans tend to be a bit slower, rather than focus on something humans already are pretty good at relatively speaking. If anything, an AGI/ASI wouldn't be so stupid and go like "we'll overtake the entire economy", and instead it would make benefit of things like comparative advantage, trade etc. to use its own resources the most efficient, and then trade with humans (either via money or other things) for stuff it hadn't focused. And I mean, it's not like we have real-life examples on an abstract level... the trade between the US and poland for example can be seen as a good example for how an AGI vs. a human would work in an economic sense. While the US has an absolute comparative advantage (much like an AGI or even an ASI) it would still be more beneficial for the US to focus on what it can do the best, and trade with poland for stuff the US hasn't focused on (even if it theoretically could also produce them more efficiently than poland... at the cost of producing less on what it focused previously). Mathematics clearly shows here that the US and poland still would both benefit from this trade, even if the US(aka the AGI/ASI) theoretically could produce everything more efficiently. And that's just one of many economic arguments in regards to AGI/ASIs (would probably be better to talk about this in more details in discord or so). Second point... it's true, nobody really knows how an AGI would work, let alone how its requirements would be. That being said, there are some physical limits etc. (such as how much computing-power can be physically possible from a certain volume) which work as a good boundary. Likewise, we can look at our current tech and its current capabilities and use that as well for a boundary. Taking that into account, I do have quite some strong reasons to think that early AGI/ASIs would be quite expensive to run and thus, even though the world now has AGI/ASIs, they wouldn't fill out every job but, as mentioned above, would instead be deployed on tasks which makes their usage the most efficient. And third point... did you really just argue with the countless-times disproven, marxist "Labour theory of value" ? Prices for a product or service are not only dictated by the human labour within them, but also the time that it takes, the resources that go into it, the costs of ground/area on which this production takes places and the market itself in regards to supply and demand (and probably a lot of other factors as well). Producing electrical energy requires materials and ground. In regards to ground, it can be safe to say that property-rights will still exist in an age of AGI/ASIs and thus the AGI will not only be limited in how much energy it could produce at max, but also what resources the AGI would have at its disposal and ultimately at what cost it would have those resources. (Also I do consider an AGI/ASI as safe when it amongst other things also respects human rights) Keeping that in mind, it would mean that the AGI would have to min-max what to do with the area that it has been given. Will it use the area to build more server-farms? Will it use the area to build a large mining and refining operation? Will it use the area to cover it with some power-generating machine? Needless to say, given the limited area, the AGI would have to think about how to most efficiently use the given area and what things it should import/export in order to become self-sustaining in an economical sense. Given how a limited area might not have everything needed for the construction of a powerplant (and maybe keep it operational), the AGI would need to trade which... means that money will be involved since money is factually and mathematically speaking the best trading-medium possible. Also in case that the AGI wanted to expand the area that it owns... it might need to buy land or maybe even pay an area-rent which... once again means money. Overall, the notion that "electricity will be free in an AGI-world" is just utterly absurd and just plain factually wrong. And that's just mere talking about electricity... we haven't even talked about heat-generation from an AGI and how that would need to be managed (which once again would result in some money being in use here). And yeah, just to say one final thing here... "It is not a matter of economics" is, as I hopefully demonstrated, clearly wrong here as economics does matter regarding AGI/ASIs. Economics isn't just about money and how humans will get paid etc. ... economics is about trade, resource-management and ultimately about "what is the most efficient use of the resources I have". Even an AGI/ASI would think economically if it truly is rational (which should be a given considering the mathematical nature of an AGI/ASI). So once again, YES economics DOES matter, even for an AGI/ASI.
4:20 "the united states federal gov wastes a lot of money, you can see this because people when polled think the united states wastes about 50% of it's money" Without, any reference to other entities and their waste. Not to mention.... that this is a poll on how people felt about national spending, not data on national spending, or a comparison to other entities. Saying this was was really dishonest and people are going to be reinforced in the conclusion that the united states is wasteful, when in reality you didn't actually talk about weather or not it was, just that people thought it was.
I think the point here was that the waste was assumed, and that he was trying to depoliticize that fact by showing that people from either party would agree by about the same extent.
sure the framing was off but its just about the most consistent thing in the history of humankind the ineffeciency of bureacracies and the less taxes that are taken away from people who could have chosen to spend them in the way that they saw was best for them from the perspective of their own eyes, and instead chosen by a giant bureaucracy, the worse off societies has been
Define "waste". If it's "used in an unproductive way" or "spent in an unproductive way" well quite frankly, so what? That money may not have been productive in the government's hands, but perhaps the people they paid for those unproductive services used that money for other, more productive things. Modern money is more like the water cycle than say precious metals of limited quantity. It's not "lost" in the sense of being destroyed, it's simply just not fully utilized as effectively as it could be. Even dept servicing is paid to someone, who will spend/use it somewhere else.
@@wasdwasdedsf Citing an opinion poll doesn't substantiate how actually waseful the bureaucracies of all institutions are. 50% waste if it's true (which wasn't mentioned) Doesn't explain if that's low or high compared to things like charities or corporations or..
The whole issue around explosive increase in production has a parallel to AI safety. This clause sounds like a good move (it might be) but it's more likely just a patch in the same way than most proposed "solutions" to AI problems are. We need to be actually prepared and start changing stuff now or we won't be able to handle what would happen otherwise. Capitalism has an expiration date. Either resources start to run out and you need to put something other than profits at the forefront or you get to a post-scarcity society where it's an opt-in deal. Either way, our society is not ready nor even preparing for the transition, much like we're not ready to design a safe AGI.
Very interesting video. There is no question the technology is only a tiny percentage of the problems posed by AGI. I’d love more videos on the people problems.
I feel like this doesn't really solve the larger relational issue here. Even if companies gave up large sums of money from the windfalls of AGI, the company still decides how that wealth will be allocated and to whom it will be allocated. You still have a relationship where large sums of the population will be forced to rely on the generosity of a few individuals, and would still be subject to the whims of an immensely powerful company, one more powerful than any before it because it not only has more wealth but a vastly intelligent AGI. A better solution, though it would require an upheaval of the status quo, would be to abolish the possibility of owning an AGI, or, if you are willing to go as far as I am, to abolish private ownership entirely. Because even if AGI is not developed these issues still exist as long as capitalism exists. Now, I am not advocating that it should be owned by the state either, but by the community that works on it and is immediately benefitted or at risk from it should have a say in its operations. But, I understand that this is generally considered a radical opinion, so do with it as you will.
Private ownership of half of the american soil and production capabilities, and that of a mass produced watch that your grand father offered to you when you were 10 are two different things.
@@automatescellulaires8543 You're confusing private and personal property, my friend. Communists aren't concerned about your watch, they're concerned about the fact that people claim ownership of things like land, factories, and the like and use that as justification to screw people over.
Lets all agree that if any of us come into possession of the one ring, we'll definitely cast it into the fires of mt.doom and not keep it for ourselves.
There are huge problems with this solution right off the bat. 1. It's relying on a very naive perception of how people just act in general. Being shamed by the populace for something the average person has no knowledge or education about is going to get people to sign a document that says "If you ever 'win', then just stop 'winning' "? Just historically speaking, on much, much smaller scales, this has a very bad track record in general. 2. Never really addresses the idea of reneging on the contract. Who is going to sue someone who literally buys every single lawyer the world over? What are the remaining good samaritans to do when someone uses that money to threaten legal or even physical action on anyone who opposes them? Or they just buy out the contract holders themselves and then absolve the contract? 3. What do you do when the contract just gets held up to be unenforceable in court? 4. Even if everyone agrees to sign this peacefully and then plans to actually make good on it, what do you do when someone creates a shell company and gives the AI to it? 5. Even if all of the above is negated and everyone plays fair and fully intends to uphold the spirit of this, what do you do if a brand new startup company or some guy in his garage manages to beat everyone to the AI? He never signed the contract and may have no reason to give up his gains. I really liked this video, and I enjoy the thinking exercise and the subject, but if you do this in the future, I think you should find some economic and socio-political experts to discuss the matter with as well. It'll really help illustrate how big of a problem this really is, and also highlight flaws in current ideas. EDIT: I actually just thought of something even more important that I missed before. 6. What do you do when the AI itself has decided that you enforcing that contract would be detrimental to its goal of getting more money and then decides it can't let you do that? How are you even going to contend with the in-human mind games and loopholes that an AI might play against you when we've already run into some serious human based loopholes?
I'm reminded of the old joke: If you ever find yourself the target of a mugging, simply say "no". The mugger actually can't legally take your stuff without your consent.
I'm fairly sure even if you get somebody to sign that that windfall clause, if they DO succeed in AGI they'll weasel out of paying in less time than it took you to explain what the windfall clause was.
They'll hire the AGI as the world's best lawyer to argue their way out of the contract
"legally binding" is a very stretchable term if you earn 1% of the worlds gdp
'legally binding' requires there is some government or force that can force you to comply through physical or financial pain. If you have an AGI that has already amassed GWP level wealth, it will not be susceptible to those forces. It will be able to create a way to mitigate them. Maybe this is solved because we are assuming the safety problem is solved, but it seems we still have work to do on the idea of the Windfall Clause.
@@biobear01 i come from germany and here lives the king of thailand, former prince of thailand. when he became king he was supposed to pay 3 billion€ in taxes because he essentially obtained an entire country and lives in germany. he did not pay a single cent. when that kind of money is at play, the wheels turn differently
@@MsMotron The first company to make ASI will not bother making any money. Why bother selling products/services when you could just wish anything you want into existence?
@@biobear01 Enforcement is always power relaint. And the company that just cracked the holy grail of AI... will be holding all the cards.
Global GWP is 142 Trillion, EU, and US GDP is around 18-20 Trillion each, so around 26% of world production. While EU and US do not have direct control over that money, they do control monetary policy, patent policy, and the law. - If your company says "screw you" to either of those two powers, suddenly your AI patents are made invalid, your corporate offices are raided, and your executive board is all put on their Sanction list, along with their families.
Facebook, Amazon, and Google are all worried about Antitrust legislation from the left at the moment, expect more of that as companies grow bigger.
"Appearing to Not Be Sociopaths. This is sometimes called 'Public Relations' " (Dying!)
One of the best lines I've heard in years :D
Still laughing.........
@@natcarish Haha, same 😂
This made me laugh as a marketing student 😂
and all the while bringing it as sesame street level realism , no cynicism involved whatsoever . . . rotfl !
"Taxes aren't voluntary, you can make companies pay them"
Any company responsible for 1% of the GWP will almost certainly have armies of lobbyists (or simply buy elections/government leaders) to keep their taxes as low as possible. Major multinationals already do.
This. My prediction for an AGI future in big corporations will be:
* Corporation develops AGI
* Corporation socks soar
* Corporation lays off immense amounts of staff
* Corporation stocks soar further
* Corporation is now immensely powerful, essentially buys up other big competitors, lack of anti-trust law enforcement in the US allow this
* New mega-corporation exerts massive global influence
* Massive poverty everywhere from layoffs
* Massive unrest, but hey, that's the governments' problem now
* Governments powerless in the face of mega-corp
* End result: Extreme class divide, people who literally are useless since labor is mostly obsolete and dont partake in economy, people who either have irreplaceable jobs or own AGI stock etc, partake in economy.
One could argue that having a large portion of the population no longer be economically relevant would hit the corporation's bottom line, but they have an AGI, they will probably transition away from selling goods and over to simply shifting money around in order to make yet more money. I mean the US has been showing us how to do it for years, with an already staggering wealth imbalance. I don't think it's too far of a leap from there. It'll just be even more wealth imbalance together with a healthy sprinkling of war and civil unrest.
People really forget that corporations absolutely don't care about ethics at all, so AI safety, windfall clause etc, all that doesn't really matter in the end. If apple/google/amazon gets an AGI, prepare to watch the world change for the worst, whilst their owners get even more unimaginably rich, that's pretty much that. It's just a matter of when this happens.
Society won't turn into this utopia where work is mostly handled by AGI and humans can now self-actualize. Itll turn into a dystopia where corporations will absolutely rule all, and poverty is everywhere. They won't share, they already don't.
Or buy an army. Just a normal army. Not of lobbyists.
We took down robber barons like the Rockefellers with anti-monopoly laws. We could do this thing.
In this context, any company responsible for 1% of the GWP will /also/ have an unstoppable artificial god who does anything they ask it to do, which might be a bigger problem.
I would imagine a company with >1% of the world's DGP would be able to get out of any contract they want.
Not if the collective lobbying of the other ~90% is enough to overrule them. Never underestimate the power of jealousy.
I would imagine a company with an AGI may be tempted to set it the task of getting out of the contract. 😉
@@Elzilcho1000 Wouldn't be hard. Just spend huge chunks of your profits on expanding your business and buying land and paying out employee bonuses and other areas. Profit is what's left over after you've spent the rest. They can complain to you on your corporate yacht.
@@jarrod752 as a company if you buy stuff (ie not expense to conduct the business that generates the profit) the money you spend is still counted in your profits
@@Scubadooper so buffer the money into things that are meant to conduct the business then convert it back into whatever - this will still get flagged as a huge profit but you may just manage to slip into the zone where you have enough control that your company calls the shots
Rob, love the video as per usual. You mention the Windfall Clause contract is "legally binding." While contract law certainly differs across countries, in general they are only binding to the extent that the quid pro quo is maintained. In other words, if I enter into a contract with someone to do maintenance on my house in exchange for money, I'm only legally bound to provide the money if he has held up his end of the bargain. The problem I see with a Windfall Clause is, once the "windfall profits" have been theoretically realized by the AGI first mover, the other companies and institutions that may have signed on have no leverage to enforce the contract. The first mover could say, "I choose not to honor my side of the contract," and the only legal recourse would effectively be an acknowledgement that the other companies no longer have to provide their end of the bargain, which was nothing to begin with. Contracts can always be legally broken so long as the exchange of goods or services outlined in said contract are undone - and because this has no exchange, it can be broken at any time with effectively no recourse. I suspect you would have serious trouble getting the AGI "winner" to uphold their end, because at that point it won't matter to them. It sounds like a Windfall Clause is more of an insurance policy for companies in case they "lose" the race. By signing on, they are maximizing their chances of receiving profit sharing should the "winner" choose to follow through with the promise. If the winner chooses to ignore the contract, they are no worse off than they would have been absent the contraxt. If they end up the winner, they can choose at that point whether it makes sense for them to hold up their end of the bargain.
I still think it is a great idea and should be further pursued, but it seems to hold all the typical first mover problems we associate with AGI, namely that once it is achieved, its potential benefits will be so great that the benefit of honoring any past agreements is dwarfed by the cost of ignoring them.
I dunno, given how drastically AGI would shift money away from the labor class, that’s just begging for a revolution
Even if companies did decide to sign the Windfall Clause, which I highly doubt happens in the first place, the company to reach 10% of the world's GDP will be so incredibly powerful they'll effectively be immune from any enforcement actions that may be taken to force them to honor the contract. The world's most powerful governments can't get Amazon to pay their taxes, you think anyone will be to separate trillions of dollars from a company that's already worth at least $8,000,000,000,000 (roughly 10% of the world's GDP) and has AGI at their disposal?
True. But in conclusion, we have no reason not to do this and everyone will participate. The contract will even be legally binding and yet, it won't help. Game theory leads to really weird conclusions sometimes (on the face of it).
@@MRender32 The problem is the Rich can win a revolution, at this point. They can mass-produce tiny flying drones that can snipe a human holding a gun from 500 feet in the air. They would completely destroy the illusion of choice if they actually did it, but they could do it. A couple billion dollars. Done. Easy. Every protester gets one free bullet. Revolution: solved.
Warp Zone When EVERYONE is starving and unable to care for themselves, you don’t think 450 million people can bust down the doors and raze the place? We can’t underestimate how many people are gonna be affected. You are probably right that they’re SO much stronger, but if they manage to kill the labor class (literally this time) I don’t know who they’ll be able to sell to. After all, they need to generate wealth, don’t they?
"What happens when you create huge amounts of wealth and that wealth all goes to a small group of people?"
Hmm I can't possibly imagine. Such a thing has surely never occurred!
Yes, he literally said this has never happened. So yes, you probably really can't imagine it.
Here's a concern I'd have with this: rich corporations already wield incredible power over governments, public opinion, etc. By the time they get big enough for any windfall clause to kick in, they might say "well that was a fun PR stunt but now we're not going to follow through", and they just might be powerful enough to pull it off and not pay anything back to the world. And by this point, they've already got AGI/ASI and don't need cooperation of pretty much anybody to keep being the #1 company.
It's true that a company which develops AGI probably has no reason to continue to respect any agreements or contracts it signed, and that includes any laws of any nations it may be a part of. It would have the power to change the world to its liking, so in a way it would effectively be the new government, except far more powerful than a normal government since it's not bound by economic considerations. It would have the power to point to anyone at random and arbitrarily declare: you're rich, or you're poor and it would just happen.
Still, that doesn't mean that the company would not pay out the windfall. Even if nothing forces them to pay out, it's also true that there would be no cost to paying out. The company has everything it could possibly want. When all of its hopes and desires are totally fulfilled, any hording beyond that seems pointless. Once a person has everything he or she wants, the only thing left to want is the general good of the rest of the world. Why not cure poverty and illness when it costs nothing to do so?
@@Ansatz66 one potential cost to altruism at that stage might be that by sharing your profits you're also sharing a bit of your power. And who's to say that a different, less benevolent actor, possibly one who hasn't quite figured out the alignment problem yet but it's willing to risk it to get in on some of that world domination, isn't going to use these resources to put their plan in motion? Best keep it all to yourself, you know what's best for everyone anyway.
@@iwikal You have powerful AI, you can keep detailed records of what everyone is doing and stop that easily.
@@Ansatz66 Let's look at what humans do. Just about any billionaire could be considered as having enough. Do they though ? No, too much money only create a crave for more. Company don't have as much empathy as human do though. It's only purpose it to generate more money. Owning everything wouldn't be enough, it would only be the natural starting point, the first step.
@@automatescellulaires8543 I mean, I would qualify this slightly. There are good and bad billionaires just as there are good and bad people. Nobody believes that Bill Gates is a bad person; some people believe it is wrong that we live in a society where people can be is rich as Bill Gates, but the current social contract is hardly Bill Gates's fault.
The problem is not that rich people are all sociopaths; the problem is that society is incapable of whipping rich sociopaths into line in the same way it could if I behaved similarly.
BTW the Luddite fallacy actually is not a fallacy: technological development always resulted in worse circumstances for workers over-all; only through collective bargaining or new laws we managed to get part of the results.
New factories and steam engines didn't create better jobs; they created unemployment, longer hours, smaller wages and enabled the wider use of child labour. They allowed companies to fire half of their old workers and replace them with children; they created bigger risks in investment in materials which had to be compensated by longer workdays; and the over-all theme was always to replace expensive labour with cheaper one, which created a lot of cheap labour that was always competing against itself.
The employment of engineers, repairers and so on always had to necessarily cost much less than the labour replaced by the new technology, and as those positions required more education and training, thus paying better, the over-all available work by definition had to fall. And so it did. Every time. And the new work created by surplus of available labour was always much worse than the old work.
AI will necessary do the same. Unless workers stay vigilant and demand their rights, those rights are denied. Even if it obviously results in fall of consumption and drastically worsen the position of corporations; paying more wages to stimulate sales simply is not a solution for any individual company.
We will get few new jobs that require high skill, and drastically reduce the amount of avarage jobs that pay avarage wage. This will open door for a lot of new really shitty jobs that don't pay well, and which will constantly be a target for optimation and reduction. The more AI thinks the less people are paid to do so - just as the more precision and dexterity machines gained, the less people were paid for such work.
Some capitalist in the middle of industrial revolution were begging the parliament of Britain to create legislation to regulate factories, as they faced such strong competition from those profiting from unethical practices, that they had no choice but to adopt the same explotation of workers and children. Something similar will necesarely happen in our economy; some billionaires already are calling for goverment action, as they know they are not free to make the ethical choice in a market where others can choose not to.
The problem with the industrial revolution was that it took place within an extremely capitalist context. It was not the new technologies. In a vacuum, the technologies were good. The workers kept getting bad jobs because it was ruthless cold hearted industrial barons with no public accountability whatsoever who were in charge of all the jobs. They still are btw.
Obviously automation in any industry would reduce the human jobs as those jobs are replaced by machines, but that doesn't automatically lead to an overall reduction in jobs. There is a bigger picture to consider beyond just the activities that are being automated. The engineers and repairers that maintain the machines are not the only place where we might find new job created following automation.
When automation allows some good to be produced more cheaply, that tends to cause the price to fall. People might buy more of that good as the price falls, or else people might spend that money on other things, thereby causing other industries to expand. When the production of widgets are automated, many people who make widgets may lose their jobs, but the demand for cogs will naturally rise as the price of widgets falls, and so the widget-makers might gain employment in the expanding cog industry.
Surely it is obvious that something must cause new jobs to appear despite automation, since we've been automating things for a long time, and yet people continue to work at jobs and life has been greatly improved.
@@Ansatz66 this is deeply wrong. the thing you're falling prey to is the idea that prices can only increase when demand increases. in fact, when the price of bread is raised, so is the price of basically every other staple good, because the demand for all of them is fixed.
and by no means does automation necessarily translate into a reduced cost of product. the price to produce a car has fallen drastically since the mid 20th century due to automation, and yet the price for a new car has risen steadily, even adjusted for inflation.
and further, in your example, you assume that cog-makers will not have also discovered the ability to automate their workers. automation does not happen once per decade, affecting one industry at a time. it happens constantly and across the spectrum of production. widget makers would be (and have not historically been) able to find equally paying jobs as cog makers. they would simply join the cog makers in the unemployment line and end up spending what ought to have been their retirement working at a domino's or walmart or other low-paying service industry job. and the thing that rises up to fill the demand void left by the slight decrease in price of widgets and cogs will be built to take advantage of automation, meaning there will be few if any jobs available in its manufacture.
to simplify automation into the world of econ 101 is a gross disservice to workers around the world who have seen their lives upended and de-facto ended by automation and its knock-on consequences.
tl;dr take your theoreticals elsewhere, we have no place for them in the real world.
This comment thread is one of the most ill informed I’ve seen in quite some time.
The alternative to child labour during the industrial revolution was death by starvation. Yes, ethical practices are a luxury, that’s why we want every country to get wealthy as soon as possible.
Cars haven’t got cheaper? Seriously? In what world? I can afford a car with 4 month wages, and I make a pitance. Try that 20 years ago...
Rising the price of bread only raises modestly the price of *substitution goods*, complementary goods like e.g. ham get a lower price.
One problem with this idea is that it will not be humans that are being used in terrible working conditions, it will always be robots.
If the AGI somehow made so much money that there was no money left in the world, then that would entail that the people of the world would have viewed what it was producing as more valuable than anything else they could have gotten with that money.
Taxes would also have increased enough that there would be enough money flowing to people hired or subsidized by governments so that they can continue to buy the better and better products the AGI was making.
"You might face boycotts and activism."
Amazon has been facing boycotts and activism for years. They don't care. Profits over everything.
No company will sign a Windfall Clause. It's a nice idea but pure wishful thinking. A little bit of free PR right now (that honestly most people wouldn't really give a shit about) is literal fractions of pennies when you're talking about a company making 10% of the world's GDP (~$8,000,000,000,000). If you think this is a viable solution to inevitable mass automation, you live in a fairy tale.
Even if companies did decide to sign the Windfall Clause,, the company to reach 10% of the world's GDP will be so incredibly powerful they'll effectively be immune from any enforcement actions that may be taken to force them to honor the contract. The world's most powerful governments can't get Amazon to pay their taxes, you think anyone will be to separate trillions of dollars from a company that's already worth 10% of the world's GDP and has real AI at their disposal?
As I was watching the video I was also reminded about how climate experts have known that we were headed for trouble, designed very good plans to avoid it, and even presented those plans to people with the authority to enact them, but....well you probably know where that ended up...
I was thinking this throughout the whole video.
"viable solution to inevitable mass automation" We don't need a 'solution' to mass automation because mass automation is not a problem, it is a potentially amazing thing. The problem is capitalism, not automation.
Amazon boycotts have been pretty small. If a significant fraction of their normal userbase started to boycott them they would start to care.
Juicy Boi fair point, what I should have said is we need a solution to the unprecedented scale of economic disruptions that will be caused by automation.
Seeing how far corporations will go to avoid paying taxes, good luck trying them to accept this windfall clause.
Yes, even if it helps, it can't be the whole solution.
Nice job to both of you for watching the whole video before commenting /s
Nah, they would all accept to sign it because the requisite for it to aply would be ridiculously easy to circumvent. Just chop the company into bits, make it a conglomerate and even if you move the equivalent to 40% of the world's money, you'll be fine. The shortsightedness of this whole endeavour just baffles me.
@@KirillTheBeast Also, owning an AGI the amount of loopholes found by it to avoid paying, would be unfanthomable
@@KirillTheBeast have you looked in the paper if it addresses this point? you should probably do that before judging "the whole endevour", right?
This seems to me like a specific case of the general problem that a few people own the means of production.
@@nullumamare8660 "he wrote the self-writing code!"
@@gearandalthefirst7027 he wrote the code for the code that wrote the ai
@@nullumamare8660 -- Do you think the people to achieve AGI _won't_ have worked hard for it?
Why is that a problem, though?
@@mvmlego1212 Obviously the workers, the ones doing the actual research, writing code, etc, work incredibly hard. But even if they are well compensated they will probably not be the ones who own the means of production. Drug researchers work very hard to produce amazing drug therapies, but the ones who make the lions share of the profit do so by owning capital, not working.
I nodded at my phone when you asked "is that the kind of thing you'd be interested in"
Same
We need more people to liked your comment, so Robert Miles makes more videos on the legal, political, and economic aspects of the future of AI.
@@TheSam1902 Seconded!
Yes please. Seems like a super important slice of the problem. There are lots of interesting objections in the comments here to go through for a start.
Me too :D
I dont really like that the apparent solution here is 'corporations maybe sign a pledge as a PR move and by the time they wield pet AGI's and significant-percentage-of-world-GDP levels of wealth hope they just honour it willingly (because there is no realistic enforcement mechanism) to sustain an entirely outmoded economic system'.
I feel there are a lot of ways for that not to work out.
It's more realistic than "have the government fix the problem".
@@peacemaster8117 It's sort of equivalent to "have the government fix the problem". A corporation can sign a legally binding contract, but the government still has to be willing and able to enforce it.
"you can think of the world as having 2 types of people, some that make money by seling their labour, and some that make money by owning AI sistems"
- Rob.
"There are those who make money by owning capital, and those who make money by seling their labour"
- Karl Marx
Both sound pretty clear and accurate to me
Really and truly, this whole video was spent dancing around the fact that AGI should mean the end of capitalism.
the automation crises being discussed here was literally discussed in the communist manifesto, certain took a lot longer than Marx anticipated but here we are still. Even for people still married to capitalism as the best economic system there is a lot that can be learned from Marx analysis of capitalism
What will really happen is that unchecked automation will make everything so incredibly cheap that what little money you make will be enough to have more luxurious life than we currently have. Social stratification will be ridiculous, but you want to have better life, not prevent other people from having exceptionally great life, right?
@@michaelbuckers Thank you for being a voice of reason in this comment section. Not to mention, as production of products and services becomes cheaper, more decentralized and more available, we are more likely to see a democratization of entrepreneurship, with clusters of mostly or entirely self sustaining local communities. People project tomorrow's problems on today's market but the landscape of the financial world changes all the time.
What value does Legaly Binding have once a company makes 1% Gross World Product. South Korea already has a problem regulating SamSung, because they are roughly 17% GDP of South Korea. A company with 1% GWP will be in a similar situation. So it would require that every goverment on earth promisses to uphold companies to their Windfall Clause. But then we move the problem one step away. I don't know, I am just not confident in the promisses such companies make.
We could also just, you know, try to move past capitalism, which is so obviously incompatible with a post-AI world. And if everyone’s labor suddenly becomes worthless, that’s some pretty strong motivation for some massive political change.
Capitalism is not obviously incompatible with a post-AI world. In fact, some companies make good money employing AI.
AI is so effective at playing Monopoly that there is now an international agreement not to bet on rising food prices in the derivatives market.
If everyone's labour suddenly becomes worthless, that is an enormous potential for down-sizing and cost effectiveness.
Usually this results in wars in which the proletariat kills of its surplus. But with today's killer robots even that can be automated already. (Technically killing all the poor would be a massive political change.)
@@davidwuhrer6704 the sooner we get rid of capitalism the better
@@davidwuhrer6704 Incompatible with it in a way that is good for anyone that isn't the bourgeoisie.
Of course, that is no less true now, but post-AI its even more obviously true to the unaided liberal eye.
Capitalism is the best by far rn
But once post scarcity kicks in everything will and should be free since AI and infinite production removes need for any money
@@saosaqii5807 Not if the capitalists have anything to do with it, its in their interests to prevent that future and they have more resources than anyone else to get what they want.
Robert, literally any video you make about AI is something I'm interested in seeing. You are a fantastic communicator of all things AI and we need more people like you, especially now. Keep them coming!
That would work if most people were reasonable. Recent events have shown they aren't. In the US for example, you can put that "I am a dickhead" label on your forehead and literally get elected for president. So why would a company have a problem with that? Unfortunately, many people stopped caring.
Given what plenty of people have already pointed out in the comments (namely how totally unenforceable a windfall clause would be in practive), I think examining these types of problems really illustrates the need for fundamental changes to the way we view and enforce property laws and ownership as a whole
"Firstly, governments are not actually great at spending money effectively..." [CITATION NEEDED]
Just because it is "widely known" to be so, doesn't make it true. In this case, you'd probably find that (paraphrasing here) "governments are the worst way of spending money effectively, except for all those other forms that have been tried from time to time..."
@@bosstowndynamics5488 Even THEY have their moments. The US social security program is rated somewhere over 99% efficient, I believe.
The usas health care.
@@c99kfm I mean it's 99% efficient because it's just giving money to people, can't get much more efficient than that
Yeah, that was probably the weakest part of the video for me. Asking Republicans and Democrats how much money is "wasted" by government and, what do you know, the numbers match up with their exit polling numbers.
Companies will just divide into smaller companies with the same owners.
Agreed. Without ironclad agreements and strict enforcement, I'd expect to see the same shenanigans companies use to avoid paying taxes.
The World *will* end up with 101 companies, each grossing 0.99% of World GDP.
This wind-fall-thing is a crap idea ... nothing but "public relations".
This is one of those things that you (I) don't think of right away but makes so much sense as soon as you read it
Didn't have time to put this in the video, but this is addressed in section A.2.i of the report at : www.fhi.ox.ac.uk/wp-content/uploads/Windfall-Clause-Report.pdf
> “Firms will evade the Clause by nominally assigning profits to subsidiary, parent, or sibling corporations.”
> The worry here is that signatories will structure their earnings in such a way that the signatory itself technically does not earn windfall profits, but its subsidiary, parent, or sibling corporation (which did not sign the Clause) does. Such a move could be analogous to the “corporate inversion” tax avoidance strategy that many American corporations use. Thus, the worry goes, shareholders of the signatory would still benefit from the windfall (since the windfall-earning corporation remains under their control) without incurring obligations under the Clause.
> We think that the Clause can mitigate much of this risk. First, the Clause could be designed to bind the parent company and stipulate that it applies not only to the signatory proper, but also to the signatory’s subsidiaries. Thus, any reallocation of profits to or between subsidiaries would have no effect on windfall obligations.* Second, majority-owned subsidiaries’ earnings should be reflected in the parent corporation’s income statement, so the increase in the subsidiary’s profits from such a transfer would count towards the parent’s income for accounting purposes.† Finally, such actions by a corporation could constitute a number of legal infractions, such as fraudulent conveyance or breach of the duty to perform contracts in good faith.
have you looked into the paper if it addresses this criticism, or are just just assuming you instantly came up with something that the experts behind the paper have never thought of?
I'm glad I was lying down when I saw the "appearing to not be sociopaths" bit. I would have fallen out of my chair!
But seriously, thanks again for the hard work making these videos.
Underrated comment!;)
This option encourages companies to hide what they are doing with AGI. Also it is going to be very difficult to separate what profit is due to AGI.
We already kind of have this divide between people. The working class survives by working, and the capitalist class by owning private property. Finding a new way to organize the economy could be a part of the solution.
The problem with legally binding is that almost by definition, anything that large and influential can probably buy a coup. Or carve out it's own state.
I think the more important question is "What happens to capital when labor realizes it's a made up concept"
It resist. During some time
"You can make companies pay them."
Well...
Either we're not very good at it, or we don't want to.
It's the second one.
No, it's the first one.
Possibly depends on the country, but in the case of the UK it definitely seems like the second, the government created the loopholes so that the biggest companies don't pay much tax, and then benefit personally from that.
@@robhulluk There is stuff like that in every country. My own country was named as a tax haven for billionaires in the Panama Papers.
@@robhulluk I certainly didn't profit from it.
Let's not kid ourselves.
The executives and the shareholders all being sociopaths wasn't a hypothetical, and they're putting less and less effort into appearing not to be by the day.
Really? The amount of corporate virtue-signalling this June was almost nauseating.
Also, I don't understand the stereotype of evil shareholders. How much stock do you have to own in order to be classified as as _bourgeois_ swine?
@@mvmlego1212 "I don't understand the stereotype of evil shareholders"
Shareholders are pretty evil by definition. Invest in a company when it's doing well and make money. Withdraw when it's not and make money.
"How much stock do you have to own in order to be classified as as bourgeois swine"
Owning stock is the immoral part. So any at all.
@@AvatarOfBhaal -- _"Shareholders are pretty evil by definition"_
Could you state your definition of evil, please? I can't follow your argument.
_"Invest in a company when it's doing well and make money. Withdraw when it's not and make money."_
That is not how investing works--at least not if you want to make money, rather than lose it. Buying high and selling low will make you broke, not rich.
If you want to make money from investing, then you find a company that you believe will be good at making money. Then, you and the company make an agreement: you give them some money so they can execute their money-making plans, and they give you a share of their profits and some influence over the company. Eventually, you'll find yourself in a situation where you value your share of the company less than you value the money that you could get from selling that share of the company to another person, and so you sell the stock.
These are all voluntary, mutually beneficial transactions. They don't steal or destroy wealth; they create wealth. I find it bizarre to demonize transactions or the people who make them.
@@mvmlego1212 The reasoning in these comments aren't that clear, but it's usually argued that shareholders incentivize endless growth at _any_ cost, including unethical business practises. Even at the cost of the free market, through monopolization and anti-competitive practises which benefit shareholders but remove consumer choice.
However these could simply be solved by better laws on unethical practices and updating anti-trust legislation. Also removing regulations which benefit monopolies over potential competitors.
Or we could just move to cooperatives.
@@LowestofheDead -- Those are reasonable points, and I even agree with some of them, but they're a lot different from saying "shareholders are pretty evil by definition". If a shareholder is concerned that the company they've invested in has run out of room to grow without compromising their ethics, then they can divest from the company.
It is unknown if AGI will have a soul, but corporations definetly don't and will never have. I think that rogue AGI is much less of an evil, than corporation, that has AGI doing whatever they say.
All I'm asking for is more videos, period, as long as that doesn't take away from the (so far excellent) quality of them. This is my absolute favourite UA-cam channel out of hundreds I've subscribed to and you've got the likes of the Vlogbrothers, CPG Grey and The Royal Institution beat as far as I'm concerned.
General purpose A.I. is the most important and interesting topic of our time and, if we survive till we have it, will impact the future of humanity incomparably more than isolated historical events like the current Corona crisis and even larger, more dramatic events like global warming.
Its worth pointing out that this windfall clause is likely to be ignored even if signed for two reasons:
1. The company massively benefits from avoiding it
2. The state where the company is present (and thus the place its bound by the laws of) massively benefits from letting the company out of the windfall clause. (Or the company will just make offers to national governments to let it out of the windfall clause in exchange for the company moving there)
Even if the windfall clause is part of international law, already powerful countries (the likely place AI development will succeed) have show themselves to be powerful enough to ignore international agreements.
And if we can think of a way to ensure all the methods of avoiding the clause are covered, you can be pretty sure the first thing the company does with its AI is stick it on finding a way out of the clause.
(Its kinda funny that this controlling corporations problem is just an AI safety problem in disguise)
GPT3 was interesting, would love to see next iterations figuring out chemistry or physics just from reading texts
GPT3: i can do math, just tell me examples
GPT4: i can do physics, just give me textbooks
GPT5:[DATA EXPUNGED]
GPT6: Finds a bug in the program displaying its output, breaks out of its sandbox, hacks major government players, takes over the world
We might get to a point where something like GPT(n) can be used to be taught arithmetic and come up with solutions to mathemathical problems. Then mathemathicians/computer scientists will have to decide whether or not that's a valid proof much like they did with the first computer-proven theorems (and they may come to a different conclusion).
I wonder how GPT3 compares to IBM's Watson?
Input a series of example jeopardy questions and answers. And then try test questions.
“Can entropy be reversed?”
Just saw the GPT-3 Computerphile video and really hoped Miles would upload soon. Every video he makes is amazing!
Some confusion is going on here: Taking the ideological consensus in the States regarding public spending as representing reality, also assuming the difficulty in enforcing taxation on companies isn't a result of the influence of capital to corrupt taxation systems (and thus isn't soluble). Bad sociology, and political science.
I would argue that extreme inequality does not need the rise of AGI to tear apart the social fabric. Great inequalities are symptomatic of societies on the verge of collapse across history, and we're living in one of them. If anything, the impact of AI/AGI deployment will be a catalyst, but political choices seem to be already made.
I doubt conventions such as discussed here would change anything realistically. Empty promises are to be broken, especially if you wield such a power.
You just assume that public money is inefficient because people think so... that's not a serious argument.
Corporation are not encline to fight for a greater good, they are here for money. All the biggest corporations implement sophisticated and aggressive tax reduction schemes, that's not for the greater good, I think that's proof enough we cannot rely on them, especially if we expect profits to grow exponentially. If we want something that benefits us all, a more efficient tax strategy is probably what we need. If you're thinking lomg-term, why not thinking about something like a world-wide tax at international level? Or taxing profits where they're made/sold instead of where the product is produced/engineered?
Very glad to see a new video relatively short after the last one, keep your great work up! I specialized to ML partly due to your great, interesting videos during my CS studies.
Assuming decision makers to be human beings?
Supposing that corporatives would have any issues with rightfully looking like the sociopaths they are?
Suggesting a "tit for tat" argument for cooperation?
Look, Robert, I love your content and your commitment towards spreading awareness about matters even tangentially related to AI, but at this point I must assume you don't live in the same planet as the rest of us...
Great video, though xD
The optimism is charming but yeah...
@@starvalkyrie To me, it just reflects humanistic tendencies in his train of thought. I mean, just look at his content: the guy is spreading awareness and inciting interest on AI safety research, which is to say "let's make sure that this thing that will eventually be made doesn't screw us all over".
Nice? Absolutely.
Charming? To some extent.
Naive? As all hell. It's pretty much tied with libertarianism in terms of naivity.
Nevertheless, it's worth taking the time to examine ways to deal with the problem without changing the entire framework (AKA late stage capitalism) before giving up on it and forcefully engineering a legal and economic system built around a new technological paradigm that may never come.
At least this video does trigger people into stating the obvious. Maybe the naivety is faked, and only meant to help us realize how screwed we really are. Human made economic choices, makes Skynet looks like a Saint.
@@automatescellulaires8543 Well, you actually nailed it with the "Human made economic choices".
The biggest lie to ever befall upon our species is one promoted by academics in the field of economics: the economy is treated as a phenomenon, AKA "something that happens", instead of being the sum of all the decisions made by individuals serving (mostly) their own interests.
These sociopaths would make you think that the fact that they "use game theory" is already accounting for individual agency and extrapolating it to bigger systems, but it's a lie enabled by the obscuring of the events' sequential order. First, a tendency is found, then, it's exploited and purposefully perpetuated and the last step (usually when someone outside the lobby questions the ethics of such actions) is justifying the events by stating that "it couldn't have happened any other way because game theory says so".
Source: my great uncle was a trader. The guy could never find peace after the small bussiness debacle he had contributed towards by speculating with warehouse and shop prices (this was in Spain in the late 80s). Several of his acquaintances lost their livelyhoods because of something that he himself was doing. Both them and himself had come in a mass migration from the southern-most parts of the country, and he was a predator to them. A decent human being doesn't come back from that kind of realisation.
Your videos are amongst the most fascinating I've ever seen, please keep making them!
Question, doesn’t this contract be basically useless in the situation that a company creates a super intelligent AI who’s interests are aligned with theirs? Wouldn’t it very likely try and succeed at getting them out of this contract?
Just sign it! once you have an AGI, you can have it figure out a loophole in no time.
When making things easier for humanity is a problem, you know something has gone seriously, seriously wrong.
Explain why
@@Tijaxtolan making things easier should make things easier. You know, help other people.
Due to the lack of quid pro quo (as stated by commenters below) in this contract for when it comes into effect (the only benefits that would be gained by making it in the first place being ephemeral things like publicity and some cooperation that would be impossible to prove not to have been there otherwise), I think we need to change up the windfall profits clause considerably.
The best way to change up the clause is to have something like the US military’s policy paper regarding cyberspace, where anyone who creates an AGI that makes all human labor obsolete is committing a cyber attack on everybody unless the company is using something like 50% of their profits to directly aid all unemployed workers (or just have universal basic income via that 50% of profits, which also helps people continue to be able to buy the goods the company is making).
Execs and shareholders: "Yo AGI, how do we look charitable but not actually give anyone money?"
AGI: "I gotchu bro, just invented a million forms of windfall evasion"
Execs: "sickkkkk"
CEO of the first company that owns an AGI: "AI, tell me how to get out of the windfall clause without arising suspicion, and then make me king of the world!"
Money and profits will become obsolete. The first company to discover AGI will not bother making/selling products. Imagine having a wish-granting genie with unlimited wishes. Why would you bother creating and selling products when you could just wish everything you want into existence?
What if you want power?
Oh wow thanks guy with ALL the power for telling me that if I just trust you everything will be ok, I'm sure I'll get that windfall money. lmao
If you plan to cover on the socio-economical aspects of AGI impact, please consider collaborating with CaspianReport on one of the videos. I think it's good to facilitate cross-pollination of multiple discipline in the AI Research because we are all in this together. Cheers Rob!
While I am more interested in technical side of things, this is also very interesting.
This is a really neat idea but it only works with extreme breakthroughs. I think that problems will creep up more gradually. More and more jobs slowly and across different countries will be replaced by AI systems. In that scenario no single company will earn 1% of world GDP while most companies will employ very little actual workers.
I believe the Windfall Clause is useless as AI would be more effective at solving Human Terminal Goals than Currency, effectively allowing it to take the role of currency. A better alternative clause would be something that prevents monopolization of all AI upon creation, essentially using AI to sabotage other people's ability to create AI.
How would it be enforced if, say, AI is developed secretly?
It seems that ultimately, the issue here is that AGI is more likely to be designed to work for the benefit of the company that created it, rather than for the benefit of humanity as a whole.
Time to seize the means of computation, comrades. 😎👍
Absolutely this will be part of it
Even a peak AI won't be able to make communism work. In fact long before then, it will likely tell you exactly the same thing much dumber humans have been telling you.
Communism is not sustainable.
@@sirellyn4391 Making a claim like that would require you to know what communism is and you clearly don't. :)
@@IAmNumber4000 That's an easy claim to make. Marx specifically defined his "ideal" vision for when you achieve communism. But he never specifically defined HOW to get to that, or HOW to maintain it.
And because actually getting there or maintaining it is effectively impossible, (without making all individual actors mindless) that constant no true scottsman fallacy is brought up. Which equates to:
That way everyone else who has tried communism to reach this impossible standard didn't work because they didn't reach or maintain this impossible standard. Therefore it wasn't the "right" way.
There's literally no solution for the calculation problem, the incentive problem and the local knowledge problem. And those are only the tip of the iceberg.
Like I said. "Dumb" humans have figured this out a long time ago. If you set an AI to create communism it would either have to kill everyone or render them all mindless and control them and work with a tiny group which is very close by. And even then it wouldn't precisely fulfill Marx's vision. But that would come the closest by far.
@@sirellyn4391 Actually Marx never specified what a socialist or communist society would look like, only its defining features and difference from regular capitalism. Namely that the communist mode of production has no currency, no class system, and no state.
Marxism is a systems theory, not itself a proposed system. Pretty crucial difference, there. Blaming Marx for the actions of state capitalist tankies like Stalin and Mao is like blaming Charles Darwin because some nuts deliberately misinterpreted his theories to justify "Social Darwinism".
"And because actually getting there or maintaining it is effectively impossible, (without making all individual actors mindless)"
Again, stuff like this demonstrates you haven't made the slightest attempt to understand leftism or Marxist theory. You think anyone is in favor of making every person 1984-style slaves to some absolutely powerful state? Why would anybody even be a leftist if that were the case? Obviously, someone isn't telling you the full story, because it's an easy out to think of your political opponents are stupid and insane rather than make any effort to understand how they arrived at their conclusion.
You should try reading what Marx had to say about the state and democracy. Read what he wrote about the Paris Commune in "The Civil War in France". He was closer to a direct-democracy anarchist than a USSR-style tankie. I'm not going to hold your hand the whole way and I can't paste links here.
Nobody knows if communism is possible because it hasn't happened yet. Automation hasn't obsoleted human labor. What can be known, however, is that capitalism can't last forever. Economic growth can't continue forever because the economy _relies_ on the development of new labor-saving technologies to grow. Even now, the growth of the global economy relies entirely on non-existent money in the form of debt that will never be paid back.
So, can the world continue to go into debt forever to fund economic growth? If so, then there is no reason why we can't take on more debt to feed and shelter the homeless. If not, then Marx was right and capitalism will be replaced.
"That way everyone else who has tried communism to reach this impossible standard didn't work because they didn't reach or maintain this impossible standard. Therefore it wasn't the "right" way.
"
If you had made even the slightest attempt to understand Marxist theory then you would know Marx considered the complete obsolescence of labor by automation to be a precondition for achieving the communist mode of production. He argued _constantly_ with what he called "crude communists" (AKA tankies), people who thought capitalism could be ended by merely making private ownership illegal.
"There's literally no solution for the calculation problem,"
The calculation problem is a refutation of a straw man. Marx's law of value is a theory about exchange values, not prices. Exchange value is just one of many factors that influences price. So claiming "Marx's theory can't even predict prices in a capitalist economy LEL" is utterly pointless, because the theory was never meant to predict prices. But strawmen get picked up fast by mainstream economists because they're all desperate for _any_ refutation of Marxism, no matter how fallacious it is.
If you don't want to understand leftist theory then you don't have to. Just quit pretending like you're an authority because some wingnut blog fed you anti-leftist talking points. It's embarrassing.
I certainly see no downside to a windfall clause, but I also suspect in practice the people with 1% of global GDP will spend a significantly smaller amount than they would have to pay out with said clause on hiring lawyers and lobbying politicians to make sure they don't end up going through with it.
7:25 You make this argument sound like it's a good thing because it will make the company sign this agreement. But what you proven is that the company doesn't actually care about giving people money and there's no reason for it to give any money after it produces an AGI.
There are so many problems with so many of these rationalizations for this scheme that I'm not quite sure where to start. It's like they started to explore Marxism with the concept of making money by owning capital vs. selling labor, but they got caught up in the idea that communism is a four-letter word, so instead of following the premise to its logical conclusion and gaining any real understanding of the problems with Capitalism, they stopped pulling that thread and pivoted to the tired old strategy of: "lets keep putting a bandaid on Capitalism instead of addressing any root problems and hope that works".
.
1:45 "you can think of the world as having two types of people: people who make money by selling their labor, and people who make money by owning AI systems"
You can replace "AI systems" with "capital" in that sentence and it becomes a central observation of Marxist theory. AI is just a type of capital. It's the end of the long tail automation. People won't be able to make money by selling their labor by the time AGI emerges. You don't need complete human-level intelligence to render almost all jobs obsolete. There are very few jobs that require the complete spectrum of human intelligence to perform, which means that expert systems will be largely sufficient to completely disrupt the world economy.
.
4:18 "Governments are not actually great at spending money effectively." Compared to what? By what metric? Why? Is this a fundamental problem with the very concept of government or is it a consequence of sub-optimal implementation?
.
4:28 "This isn't really controversial. A 2011 poll found that Republicans think that 52% of their tax money is waisted while Democrats think it's 47%"
I actually think this is highly controversial. I think a lot of modern politics is guided by this falsehood that a lot of the general public believes. Especially Americans who are still suffering a hangover from Cold War propaganda which is still taught in schools. Using public opinion about tax spending is not a rigorous metric and it doesn't even offer any sort of comparison with alternatives like corporations or charities. A lot of it is based on the idea that we know of some better way to organize large institutions than bureaucracy (democracy is the only real contender here) and that bureaucracy is an organizational structure that is somehow unique to governments despite the fact that pretty much every single corporation is organized as a bureaucracy. The major difference is how people gain power and to whom they are beholden.
.
In a democracy, people gain power by consent of the governed and are beholden to the governed. Corporations are more like dictatorships beholden to stock holders. That can lead to openly sociopathic behavior even if the people in charge aren't sociopaths. Here's an excerpt from the Wikipedia article on the "Yes Men's" hoax where they pretended to be executives for Dow Chemical and promised that Dow was committed to taking care of the damages caused by the infamous Bhopal disaster:
.
"The Yes Men decided to pressure Dow further, so as "Finisterra," Bichlbaum went on the news to claim that Dow planned to liquidate Union Carbide and use the resulting $12 billion to pay for medical care, clean up the site, and fund research into the hazards of other Dow products. After two hours of wide coverage, Dow issued a press release denying the statement, ensuring even greater coverage of the phony news of a cleanup. In Frankfurt, Dow's share price fell 4.24 percent in 23 minutes, wiping $2 billion off its market value. The shares rebounded in Frankfurt after the BBC issued an on-air correction and apology. In New York, Dow Chemical's stock were little changed because of the early trading." en.wikipedia.org/wiki/The_Yes_Men#Dow_Chemical
.
Even if there were people at Dow who wanted to do the right thing and were in a position to do so, the share-holders would rebel and the "good eggs" would lose their job and be replaced by sociopaths.
4:45 "Actually it's worse than that, because this is a global issue, not a national one, and tax money tends to stay in the country it's collected in. Countries like the US and the UK spend less than 1% of their taxes on foreign aid and much of that's military aid"
This brings up two problems. First: foreign aid is not so simple to quantify. One reason you always see the US on the scene of major disasters around the globe is: It's the only country that has the infrastructure and logistical capability to respond effectively thanks to its enormous military budget. When the US buys an aircraft carrier, it's not billed as foreign military aid even though having a fleet of aircraft carries spread all over the world allows the US to respond within days or even hours of a natural disaster striking another nation.
Second: I think there's a sort-of imperative that world governments eventually merge into one (I can hear the sound of a million conspiracy theorists crying out). As our technological capability continues to grow, our capacity for destruction also grows. If we're still split into hundreds of nations debating (sometimes violently) whether Theocracy is a valid form of government by the time we've mastered synthetic biology, we'll be in a very precarious existential position. Government might be one of the most crucial applications of AI.
6:41 "Even if the executives and the share holders are all, hypothetically; complete sociopaths. They still have a good reason to sign something like a windfall clause. Namely: appearing to not be sociopaths."
There are plenty of companies that get along just fine without much good will. Again: Dow Chemical is a great example. See also: Comcast (especially through the lens of South Park).
"most people think their tax money is wasted" is not a good reason to say that tax money is actually wasted. also it's possible to tax this agi if it does stuff within a nation's borders, even if it isn't based within that nation.
I find that poll results interesting. Most elephant voters think that more than half of federal taxes are wasted, donkey voters thing it is just shy of half. This is federal tax money, not municipal or state.
What does the federal government of the USA even do for the general population?
@@davidwuhrer6704 start wars in the middle east and get a Nobel peace prize while doing so (Obama).
This definitely seems like something to be encouraged, but my biggest problem with it is the assumption that the economy and capital systems will continue to operate in anything like the same way as they do now, once AGI is developed. It's not a reason not to do it, I just don't expect it to be relevant when it comes time.
I feel like this whole video sidesteps the point that if you make profits in a significant percentage of GWP you can just start economically blackmailing countries into doing whatever you want. The example given in the video (Saudi Amco) can already do this, to some degree, with many countries already. If a company was making profits in the 5-10% range they effectively are a country of their own with more direct economic influence than almost any other country on earth. The windfall clause is, to an enitity of this size and economic power, just a piece of paper that can be safely ignored.
Even if countries try to enforce it (which considering the money is supposed to go to charities which they gain no direct benfit from, this seems unlikely) the AGI, or any suffiencently skilled team, could just work around any barrier put up by any nation or coalition of nations. Embargo? Loopholes or alternative sources of trade. Sieze company assets? Come up with both legal and physical defences. Outright war against the company? When you control 5-10% of the worlds economy and have an AGI on your side you can probably win, or at least survive, through both economic and traditional warfare.
TL;DR If a company is making enough money for the windfall clause to take effect then they have enough power to ignore it, and even if countries tried to enforce it the company would be powerful enough to circumvent the enforcement
Socialism, or barbarism under the boot of AGI run corparations.
Reading the comment section I think there is more than enough interest and discourse going on to also take the policy side of AI into your video portfolio. I'd be more than happy to see the current state of research presented by you. 😊
This sort of proposal sounds very sensible, conditional on us ending up in the situation where some particular organization successfully invents "friendly" AGI, but still manages to "control" it in the way we usually think of companies controlling software and other intellectual property -- i.e., they have some ability to license and control its usage, capture profits, avail themselves of law and courts to protect their interests, etc.
But... doesn't this run into some of the same, shall we say, failures of imagination that we see when people talk about the nature of *unfriendly* AGI? Like, the most serious risk of unfriendly AGI isn't "some evil corporation or terrorist group uses the AI for nefarious purposes." It's "oops, the matter in your solar system has been repurposed to tile the galaxy with tiny molecular smiley faces." In other words, a genuine super-intelligence -- almost by definition -- *can't* be controlled by non-super-intelligent humans, one way or another.
That's really bad when it comes to unfriendly AGI, but if you're willing to stipulate a friendly AGI (i.e., one that is sufficiently aligned with human values), doesn't it suggest that a lot of this concern about how to distribute the benefits is kind of beside the point? Like, if we suppose, as I think is pretty reasonable, that one particular conclusion that falls out of "human values" is "it would be bad for the vast majority of mankind to be reduced to abject poverty or death after the development of AGI," then that's a value the AGI itself is going to share, right? We are talking about AGIs here as agents, after all, with their own values. So if we actually *solve* the value-alignment problem, doesn't that basically address this issue, without the need for human-level legal pre-commitments?
There is one case where human precommitment might help. It is the case where the technical side is solved. People know how to align an AI to arbitrary values. But a selfish human aligns the AI to their own personal values, not human values in general. (There are good reasons for a benevolent person to align the AI to themselves, at least at first, with the intention of telling the AI to self modify later. Averaging values is not straightforward. Your method of determining a humans values might depend strongly on the sanity of the human. )
A friendly AGI will have whatever values we build it to have. The only way an AGI can be friendly is if we have the engineering skill to build the kind of AGI that we desire. Most likely such an AGI would follow whatever orders we give it, regardless of human suffering. Hopefully whoever builds the first AGI won't want the majority of mankind to be reduced to poverty, but only time will tell.
I know it's late in the game, but I would like more of the more human side of the implications of AI from this channel. :D
Company that signed the Windfall Clause and successfully invents AI:
“I am altering the deal. Pray I do not alter it further.”
You make this topic so interesting and cover a good range of related issues. One thing I'd love to hear more about is the different ways companies are trying to create AGI. We hear that different groups want to create it, but what are their approaches, and how do they differ? I think this topic is barely covered online and would be a really interesting video!
It's worth bringing up that socialist economics systems, which don't have a capital class, don't have this problem. Automation is unambiguously good under socialist systems, where the means of production (factories/farms/mines/AI systems) are managed by and for the working class. It just means everyone gets to work less and spend more time doing things that they are passionate about.
althought I alight a lot with marx ideas I wouldn't be so sure about AI serving the greater good, the administrative class could use the AI to boost their personal power. I guess in the end it all depends on the personality of who has the AI first.
Anyway I would agree that in a socialist country AI would be less likely to be used for "evil" due to the fact that those who control the AI are most likely comunists therefore valuing colective gain over personal gain more than a capitalist.
I tend to prefer your technical content because it's more immediately useful to me, but also hold the opinion that ai safety is ultimately a cultural problem, so it's cool to see you tackling those aspects. I wouldn't mind further pursuit of this direction, but definitely want to keep seeing the great technical stuff you've made. Thanks for all your efforts!
Prediction: companies will avoid signing this or refuse to pay if the clause is triggered.
They will avoid signing it by voicing concern that whatever organization was designated to receive the money is not trustworthy/corrupt.
If they do sign, they will avoid paying by using lawyers. Lots of lawyers.
I suspect that the super-intelligent AGI lawyer-bot will be able to find a loophole in the clause. And the AGI PR-bot will manage to convince us that this is the right thing to do.
"two types of people. People who make money by selling their labor, and people who make money by owning AI systems."
Wouldn't this be true in our society anyway? People who make money by selling their labor and people who make money by owning capital goods and extracting surplus value?
Precisely. We already live in that world and we can see how it goes for the little guys.
The difference is that labour currently has value. If a robots can do everything cheaper, the people who make money selling labour stop earning money. At which point things get worse.
@@gnaskar see my comment on the main thread. Big business can get rid of their workers if they want to but they will unintentionally destroy the capitalist economic system. Maybe that would be for the best.
@@TheStarBlack
_> Big business can get rid of their workers if they want to_
That has happened before. Several times. It will happen again.
@@davidwuhrer6704 not all of them at the same time!
I'm so glad you started uploading videos again. Always very interesting, keep up the good work!
This was... not an ideal video. The authors of the proposal are making a great many assumptions about the enforceability of such an agreement (just say you will totally give your disgusting gains to the plebs until you've gained enough power to ignore your pledge). And even in the unlikely situation where the hyper rich decide to provide aid to people, it's done so in a completely undemocratic manner. These .000001% of the world's population get to decide where the charity goes with no input from anyone. They could donate to racist causes because they believe them to be noble pursuits, and who will stop them?
All in all, an astonishing bad take on how to spread the gains from AI.
I think that most of the current utrarich are competent and often benevolent. Governments and random rich benefactors have different failure modes. How often do supposedly elected governments make unpopular decisions. In the covid crisis, bill gates has been trying to develop a vaccine and the FDA has tied red tape everywhere and basically made testing illegal at one point.
@@donaldhobson8873 Wow, they're better at Appearing Not To Be Sociopaths than I thought.
@@c99kfm pretty much. bill gates is not a nice bloke. He does a lot to make himself look good but his wealth is built upon the back of so many disadvantaged people.
yes raci endevours that takes horrific amounts like the cult of bla liv matte
@@gadget2622 jesus christ... get one single argument. built upon what? did he personally designate the production of microsoft products to third world countries and made sure the factories in question had impossible conditions, in addition to somehow locking people up when they applied for the job and thus werent able to take the job out of choice? you do realise he and people like him make reidculous amounts of jobs that people CHOSE TO WORK AT, because it is a net positive exchange of their effort for the wage they are payed right? IN OTHER WORDS, VALUE IS CREATED. child labor is fantastic, unless they are stolen off the streets and forced to work. get out of here, communis
Profits are so 20th century. Once corporate income taxes became large and widespread, they became a measure to be gamed, rather than any sort of objective measure of benefit of an activity to society or the owners. Note the various companies whose stock prices appreciated despite not earning nominal profits. Combine that with central banking and stock market speculation, and it is easy to foresee how this would play out. You will never see a situation where it will be possible to identify that AI was the source of the profits rather than some other issue. If this ever "paid out", it would because of accommodation to other forces, rather than to this ex ante agreement.
Good luck with any of this. The income inequality in any given nation has already far surpassed any level of acceptability.
There already is a big gap between two groups of people: Real estate owners and renters. It seems that nobody seems to care about some people making money just by having money to begin with.
If you think landlords are over-privileged, look up Amazon.
"If we feel like it, we'll share the profits." This experiment has already been demonstrated to be a non-starter.
Wow, I love the new transitions! The woosh sound effect really adds to the experience!
Obviously, I'm going to spread my future profits among 101 "independent" companies.
Every loophole you can think off in a minute is one covered in the report but skipped for being too obvious and detailed in a 10min intro.
I think the best way to "patch loopholes" is to start with specifying what you actually want companies to do, and where they fall short, just do it yourself and charge them for the costs. Any measure deviating from this will suffer the same misalignment problem we know from AI. Companies are reasonably good optimizers, too.
I'm sure this will work, because it worked so well when we asked companies to sign a windfall clause at the start of the industrial revolution and again at the start of the information age. You might argue AI is unique because at least we're thinking about it beforehand, but I'd say the only thing that seems different is the focus on the inherent classism and I feel even that's not all that unique. Both the industrial revolution and the rise of the information age did exactly the same thing: put massive amounts of wealth in the hands of those that controlled the involved machines, while forcing everyone else to shift their labour to new and different types of work that the machines happened to not be able to do, or not be cost-effective to do. Of course the true capitalist here would argue that "anyone can develop AIs and those with the best AIs win until someone else comes up with an even better AI" but the real problem is the one so often pointed out by Robert Miles and others: once you have the better AI, your chances increase incredibly to get exclusive access to the even better AI as well... That may be the argument to sell the windfall clause in this case, but that does nothing to reduce the value it would have in the previous two examples and that value wasn't enough at the time either. After all, all the gains from machinery and information systems also made it easier to build better machines and software. There's a reason Elon Musk and Jeff Bezos are building rockets and bricklayers aren't.
"the problem is (describes capitalism)"
"our proposed solution is (describes the same ineffective band-aid we've used for more than a century)"
Anand Giridharadas gives a pretty good explanation of why charity is ineffective at actually solving issues. An oversimplified summary is that letting people who benefit from problems control how we address them, allows them to invest in 'solutions' that don't involve actually fixing the underlying problems that they make so much money from.
If "we promise we'll do the right thing with the money we're stealing" was going to work, it would have done so by now.
Exactly, it's not like we haven't been there before. It's just what happened with automation but on a larger scale. We know how that turns out when capitalism is involved : Rich get richer by right of ownership, works don't get to work less despite being more productive, and superfluous workers are discarded and become extremely poor.
Also apparently government are bad at sharing wealth but we should trust self serving corporate entities whose only goal is generating profit to do it better? When we know they actually do not.
These ideas are just trying to preserve the status quo, and the status quo is rich getting richer draining wealth from everyone else. It's hardly a status quo worth preserving. (and that's without including sustainability issues).
“You can think of the world as having two types of people: People who make money by selling their labor, and people who make money by owning advanced AI systems.”
B A S E D
Just because they willingly sign up for it doesn't mean they will actually be any more likely to be willing to pay.
Watch "How Hollywood Studios Manage to Lose Money on Movies That Make a Billion Dollars" from Today I Found Out - Hollywood studios would sign profit sharing deals with some of the talent, but then through accounting magic would pass all their profits to other subsidiaries to avoid honoring their part of the deal.
Your channel talks a lot about intelligent agents getting around restrictions we put on them in unpredictable ways. Designing the clauses that wouldn't be bypassed seems very difficult, especially if you consider the company in question will have an AGI doing their accounting.
I think any sort of attempt to get companies to voluntary donate their income is misguided at best. Nevermind that it will be extremely hard to force a company to adhere to their Windfall Clause after they have amassed massive money, and therefore power, from creating a powerful general AI - even if they did do it (and they might, if only to prevent a revolution!), it still results in a scenario where a significant amount of world production is in the hands of a single company, and if they do share their wealth through some kind of global UBI, it means a lot of people's livelihoods would be dependent on the AI creators. That gives them far, far too much power over the rest of society, with us having basically no bargaining leverage with the creators. It's better than massive poverty, but it's still at best a benevolent dictatorship. An optimal solution would need to ensure that the AI system is socially owned and managed and the benefits are shared equally, so that the AI does not create any elite class. Nationalization isn't an option here, because it would exclude the world outside of the nation that created the AI, and because a national government might use the AI to subjugate other nations, and because nation-states are frequently not as democratic as they appear (if they even bother - what if the superhuman general AI is created in China?). We would need some other form of social ownership of the general AI, one open to the entire world population and difficult for any one group to dominate. Like a sort of super-co-op :P
Philanthropy is not an unqualified good, many supposed philanthropic foundations set up by obscenely wealthy people often times use that money to gain influence over international organizations, and direct them towards their own pet projects, donate towards questionable causes, or in the worst cases are just elaborate tax evasion and money laundry schemes that funnel the money right back to the philanthropist that set them up. It scares me the fact that not once in this well researched video not once was the idea suggested of some kind of democratic oversight over the wealth an agi could generate.
^ This.
Distributing the wealth created by the AGI is not the same as distributing the power. The company that is generating that wealth using the AGI still has all the power in this situation, and even if they are run by completely benevolent angels (spoilers: they're not), the idea of everyone on Earth becoming economically dependent on a single company is highly concerning.
There's a massive error here: Social Security is 99.6% efficient. You also neglect the possibility of dividend systems, like the Alaskan oil fund. I'm pretty sure writing people bigger checks isn't going to take more work, and removing the means testing would probably lower it.
"We can't rule out the possibility they mean it" lol. On about the same % chance that snacking on depleted uranium might give me super spidey powers, sure. But historical norms have shown that peasants have to riot and be on the brink of revolution to receive improved material conditions.
The current unrest managed to uh.... accomplish the rebranding of a pancake mix and some rice products. Which somehow feels infinitely more than we usually get, but somehow feels infinitely worse than nothing. Capitalism is amazing.
Yeah, Robert really should stick to topics directly related to his field of expertise. This went a bit out of his comfort zone and out of bounds.
problems which should be addressed before implementation of a powerful AGI becomes safe: value misalignment, reward hacking, implicit bias, environmental robustness, C A P I T A L I S M
Considering how automation turned out I'm not optimistic about how AI would be used.
Windfall clause here is basically a way to make sure to preserve the status quo, which isn't great.
Like, automation could have been used to considerably reduce the time individuals have to work to survive, but instead it led to some people being extremely rich, other having to still work a lot, and then those that were rendered useless becoming extremely poor.
This would do exactly the same thing. On a larger scale. And I mean, that's not surprising, AGI made by capitalist in a capitalist society will lead to an AI that emphasize the problem of capitalism. So... uuuh... hopefully capitalism is dead before we get to AGI is the takeaway here?
It's the old alignment problem. Corporations are not aligned with human values.
It's at times like this that you realise why goal alignment for advanced AI systems is so hard. We can't even achieve goal alignment between different human beings.
Good observation.
The thing about legally enforcing a windfall clause, is it's a lot like enforcing tax laws. And really, at the point where human labour has no value maybe we should be getting rid of companies entirely, because all the advantages of capitalism are gone at that point, and the disadvantages for things like socialism can be dealt with, well, AI.
For the most part big companies stick very well to the tax laws, it 'just so happens' there's enough loop holes to pay no tax very legally
@Enclave Soldier Capitalism doesn't "work out fine" for the society we have now, much less for one where labor has no value. Communism and Socialism aren't based exclusively around human labor having value, that concept is only used to explain how the working class is exploited by the capitalist class. In a society where labor is worthless and the working class becomes merely a consumer class, the exploitation is clear enough. The real basis of Socialism is the common ownership of the means of production, and in this case those means are the AI itself and the machines it uses. For this situation, there is no fairer and more democratic solution than common ownership of AI -- what good faith argument can a person have to defend the position that the future and well being of all of Humanity should lie in the hands of a few corporate executives?
Please do make more videos on these aspects of AI safety, but don't stop your usual approach. I greatly enjoy your style of explanation and would like to hear anything you have to say, or to comment upon.
Love to hear about the human part of the bargain, if possible please do more on the socioeconomic impact of AI or platform other creators who do.
Great vids as always!
Someone who likes this might be interested in Suarez's novel Daemon and its second half Freedom(TM). It involves (in part) "beneficent" "malware" that at one point for example starts hiring lawyers to keep itself from being deleted. It's a very fun ride, and one of my favorite books. (Not really AGI, but practical AGI, sort of. Very realistic for sci-fi.)
Man I'll take any content you want to make, your stuff is awesome. I really like how clearly you explain things.
If I get a vote I am interested in formal systems, such as metamath, and AI being trained to use formal reasoning. I asked Stuart Russel about it in a Reddit AMA and he said he had previously considered the idea of using formal systems to prove results about AI systems as a control technique.
A proven result would be one of the only really solid control structures I feel. Moreover there might be some bootstrapping possibility, where an AI is only allowed to expand it's capabilities after it's proven that the expanded system will obey the same rules that it is proven to obey.
Additionally making gpt-3 do mathematics is sort of like training a computer to run a simulation of a dog trying to walk on it's hind legs, you can do it but it's not playing to anyone's strengths. Computer systems that reason using set theory, such as metamath, can use symbolic language where there is a rigorous definition for every symbol and use currently existing tools to check their reasoning to be correct. This is a much more solid foundation for developing a system for thinking and reasoning about the world I feel, natural language is a mess.
Anyway yeah that was long vote ha ha, keep up the good work, love the channel.
I'd be quite interested to hear how AGI and ASI would transform the economy.
That being said, I'm also a bit sceptical about some claims you made in regards to the economic impact you've mentioned.
There are some things in micro- and macro-economics such as comparative advantage and the appreciation/depreciation of currencies etc. which actually kinda seem to go against the notion that "humans wouldn't have any work left in a world dominated by AGI/ASI",
"a company using AGI/ASI would result in extreme wealth inequality/the focusing of large amounts of wealth towards a single entity" or even (though this point wasn't being made by you, but somebody else)
"a company run by an AGI/ASI would evolve into a monopoly".
Admittedly, I'm not a world-expert in economics (I'm just somebody who's passionate about it), but what I do know is that a lot of things in economics are based in mathematics and that mathematics still should hold true for an ASI.
On a semi-related note... how much power would an AGI need (including the cooling of the processors) with a mental productivity equal to that of an avarage human?
I'm quite curious about that number since it can be then used to calculate how much the electricity-bill would be and, combined with area-cost that the AGI would take up, how much "employing" an AGI would actually cost compared to a human.
"There are some things in micro- and macro-economics such as comparative advantage and the appreciation/depreciation of currencies etc. which actually kinda seem to go against the notion that 'humans wouldn't have any work left in a world dominated by AGI/ASI'."
It is not a matter of economics. Once we have AGI, there would be no jobs left that could not be performed by a machine. Our minds are our greatest asset and the only thing which allows us to do things which are beyond the capability of machines. What work could we possibly do when everything we might do can already be done for free by a computer?
"How much power would an AGI need (including the cooling of the processors) with a mental productivity equal to that of an avarage human?"
No one knows how an AGI will work. Once we figure out how to build an AGI, then we'll be in a position to estimate how expensive it will be.
"I'm quite curious about that number since it can be then used to calculate how much the electricity-bill would be and, combined with area-cost that the AGI would take up, how much 'employing' an AGI would actually cost compared to a human."
No doubt an AGI would require some amount of electricity, but there would be no electricity bill since electricity would be free. We'd no longer need human labor to produce electricity, so there would be no one to pay for our electricity bill. If we want more electricity, we can just program our AGIs to build more power plants.
@@Ansatz66 Ok, let me adress your counterpoints since I have some disagreements on them.
First of all, how do you even come to the conclusion that it is not an economic matter here?
I'm well aware that an AGI and above would be able to do the same mental tasks a human could.
That being said, there are clear mathematical benefits for an AGI/ASI to not try to emulate the whole human thinking-palette, but instead try to be hyper-focused on a single thing/task.
It's known as comparative advantage and clearly showcases that an AGI would use its own resources (computing-power etc.) most efficiently if it focused on something that it can do the best... which in the case of an AGI/ASI could be innovating etc. in which we humans tend to be a bit slower, rather than focus on something humans already are pretty good at relatively speaking.
If anything, an AGI/ASI wouldn't be so stupid and go like "we'll overtake the entire economy", and instead it would make benefit of things like comparative advantage, trade etc. to use its own resources the most efficient, and then trade with humans (either via money or other things) for stuff it hadn't focused.
And I mean, it's not like we have real-life examples on an abstract level... the trade between the US and poland for example can be seen as a good example for how an AGI vs. a human would work in an economic sense.
While the US has an absolute comparative advantage (much like an AGI or even an ASI) it would still be more beneficial for the US to focus on what it can do the best, and trade with poland for stuff the US hasn't focused on (even if it theoretically could also produce them more efficiently than poland... at the cost of producing less on what it focused previously).
Mathematics clearly shows here that the US and poland still would both benefit from this trade, even if the US(aka the AGI/ASI) theoretically could produce everything more efficiently.
And that's just one of many economic arguments in regards to AGI/ASIs (would probably be better to talk about this in more details in discord or so).
Second point... it's true, nobody really knows how an AGI would work, let alone how its requirements would be.
That being said, there are some physical limits etc. (such as how much computing-power can be physically possible from a certain volume) which work as a good boundary.
Likewise, we can look at our current tech and its current capabilities and use that as well for a boundary.
Taking that into account, I do have quite some strong reasons to think that early AGI/ASIs would be quite expensive to run and thus, even though the world now has AGI/ASIs, they wouldn't fill out every job but, as mentioned above, would instead be deployed on tasks which makes their usage the most efficient.
And third point... did you really just argue with the countless-times disproven, marxist "Labour theory of value" ?
Prices for a product or service are not only dictated by the human labour within them, but also the time that it takes, the resources that go into it, the costs of ground/area on which this production takes places and the market itself in regards to supply and demand (and probably a lot of other factors as well).
Producing electrical energy requires materials and ground.
In regards to ground, it can be safe to say that property-rights will still exist in an age of AGI/ASIs and thus the AGI will not only be limited in how much energy it could produce at max, but also what resources the AGI would have at its disposal and ultimately at what cost it would have those resources.
(Also I do consider an AGI/ASI as safe when it amongst other things also respects human rights)
Keeping that in mind, it would mean that the AGI would have to min-max what to do with the area that it has been given.
Will it use the area to build more server-farms?
Will it use the area to build a large mining and refining operation?
Will it use the area to cover it with some power-generating machine?
Needless to say, given the limited area, the AGI would have to think about how to most efficiently use the given area and what things it should import/export in order to become self-sustaining in an economical sense.
Given how a limited area might not have everything needed for the construction of a powerplant (and maybe keep it operational), the AGI would need to trade which... means that money will be involved since money is factually and mathematically speaking the best trading-medium possible.
Also in case that the AGI wanted to expand the area that it owns... it might need to buy land or maybe even pay an area-rent which... once again means money.
Overall, the notion that "electricity will be free in an AGI-world" is just utterly absurd and just plain factually wrong.
And that's just mere talking about electricity... we haven't even talked about heat-generation from an AGI and how that would need to be managed (which once again would result in some money being in use here).
And yeah, just to say one final thing here...
"It is not a matter of economics" is, as I hopefully demonstrated, clearly wrong here as economics does matter regarding AGI/ASIs.
Economics isn't just about money and how humans will get paid etc. ... economics is about trade, resource-management and ultimately about "what is the most efficient use of the resources I have".
Even an AGI/ASI would think economically if it truly is rational (which should be a given considering the mathematical nature of an AGI/ASI).
So once again, YES economics DOES matter, even for an AGI/ASI.
4:20 "the united states federal gov wastes a lot of money, you can see this because people when polled think the united states wastes about 50% of it's money" Without, any reference to other entities and their waste. Not to mention.... that this is a poll on how people felt about national spending, not data on national spending, or a comparison to other entities.
Saying this was was really dishonest and people are going to be reinforced in the conclusion that the united states is wasteful, when in reality you didn't actually talk about weather or not it was, just that people thought it was.
I think the point here was that the waste was assumed, and that he was trying to depoliticize that fact by showing that people from either party would agree by about the same extent.
sure the framing was off but its just about the most consistent thing in the history of humankind the ineffeciency of bureacracies and the less taxes that are taken away from people who could have chosen to spend them in the way that they saw was best for them from the perspective of their own eyes, and instead chosen by a giant bureaucracy, the worse off societies has been
Define "waste". If it's "used in an unproductive way" or "spent in an unproductive way" well quite frankly, so what? That money may not have been productive in the government's hands, but perhaps the people they paid for those unproductive services used that money for other, more productive things.
Modern money is more like the water cycle than say precious metals of limited quantity. It's not "lost" in the sense of being destroyed, it's simply just not fully utilized as effectively as it could be. Even dept servicing is paid to someone, who will spend/use it somewhere else.
@@wasdwasdedsf Citing an opinion poll doesn't substantiate how actually waseful the bureaucracies of all institutions are. 50% waste if it's true (which wasn't mentioned) Doesn't explain if that's low or high compared to things like charities or corporations or..
@@shaunclarkson7131 "It is better for bureaucrats and corrupt contractors to have your money than you" is a tough sell.
Bueno de Mesquita would ask whether the contract is "renegotiation proof"
The whole issue around explosive increase in production has a parallel to AI safety. This clause sounds like a good move (it might be) but it's more likely just a patch in the same way than most proposed "solutions" to AI problems are. We need to be actually prepared and start changing stuff now or we won't be able to handle what would happen otherwise.
Capitalism has an expiration date. Either resources start to run out and you need to put something other than profits at the forefront or you get to a post-scarcity society where it's an opt-in deal. Either way, our society is not ready nor even preparing for the transition, much like we're not ready to design a safe AGI.
Very interesting video. There is no question the technology is only a tiny percentage of the problems posed by AGI. I’d love more videos on the people problems.
I feel like this doesn't really solve the larger relational issue here. Even if companies gave up large sums of money from the windfalls of AGI, the company still decides how that wealth will be allocated and to whom it will be allocated. You still have a relationship where large sums of the population will be forced to rely on the generosity of a few individuals, and would still be subject to the whims of an immensely powerful company, one more powerful than any before it because it not only has more wealth but a vastly intelligent AGI. A better solution, though it would require an upheaval of the status quo, would be to abolish the possibility of owning an AGI, or, if you are willing to go as far as I am, to abolish private ownership entirely. Because even if AGI is not developed these issues still exist as long as capitalism exists. Now, I am not advocating that it should be owned by the state either, but by the community that works on it and is immediately benefitted or at risk from it should have a say in its operations. But, I understand that this is generally considered a radical opinion, so do with it as you will.
Private ownership of half of the american soil and production capabilities, and that of a mass produced watch that your grand father offered to you when you were 10 are two different things.
@@automatescellulaires8543 You're confusing private and personal property, my friend. Communists aren't concerned about your watch, they're concerned about the fact that people claim ownership of things like land, factories, and the like and use that as justification to screw people over.
Lets all agree that if any of us come into possession of the one ring, we'll definitely cast it into the fires of mt.doom and not keep it for ourselves.
Pretty much.
There are huge problems with this solution right off the bat.
1. It's relying on a very naive perception of how people just act in general. Being shamed by the populace for something the average person has no knowledge or education about is going to get people to sign a document that says "If you ever 'win', then just stop 'winning' "? Just historically speaking, on much, much smaller scales, this has a very bad track record in general.
2. Never really addresses the idea of reneging on the contract. Who is going to sue someone who literally buys every single lawyer the world over? What are the remaining good samaritans to do when someone uses that money to threaten legal or even physical action on anyone who opposes them? Or they just buy out the contract holders themselves and then absolve the contract?
3. What do you do when the contract just gets held up to be unenforceable in court?
4. Even if everyone agrees to sign this peacefully and then plans to actually make good on it, what do you do when someone creates a shell company and gives the AI to it?
5. Even if all of the above is negated and everyone plays fair and fully intends to uphold the spirit of this, what do you do if a brand new startup company or some guy in his garage manages to beat everyone to the AI? He never signed the contract and may have no reason to give up his gains.
I really liked this video, and I enjoy the thinking exercise and the subject, but if you do this in the future, I think you should find some economic and socio-political experts to discuss the matter with as well. It'll really help illustrate how big of a problem this really is, and also highlight flaws in current ideas.
EDIT: I actually just thought of something even more important that I missed before.
6. What do you do when the AI itself has decided that you enforcing that contract would be detrimental to its goal of getting more money and then decides it can't let you do that? How are you even going to contend with the in-human mind games and loopholes that an AI might play against you when we've already run into some serious human based loopholes?