Can we do AI both FAST and SAFE? [Win-Win with AI] Anti-Moloch Policy (Build More Pylons!)

  • Published Jun 28, 2024
  • Patreon (and Discord)
    / daveshap
    Substack (Free)
    daveshap.substack.com/
    GitHub (Open Source)
    github.com/daveshap
    AI Channel
    / @daveshap
    Systems Thinking Channel
    / @systems.thinking
    Mythic Archetypes Channel
    / @mythicarchetypes
    Pragmatic Progressive Channel
    / @pragmaticprogressive
    Sacred Masculinity Channel
    / @sacred.masculinity
  • Science & Technology

COMMENTS • 87

  • @dylan_curious
    @dylan_curious 2 months ago +43

    It should be called the “Incentive Problem” instead of the “Alignment Problem”

    • @DaveShap
      @DaveShap 2 months ago +10

      Love it

    • @Mazingbro
      @Mazingbro 2 months ago

      @@DaveShap These are some things I wanted to ask you about in one of your previous videos; it's quite a mouthful.
      1. Viability of CS Degrees Amidst AI Advancements
      Considering how quickly AI advances, is a CS degree still a good investment in the tech market as far as financial stability goes? People are wondering if AI's fast development might make some conventional tech skills obsolete. How do CS degrees fit with AI in the job market?
      2. AI's Influence Beyond Software Engineering
      How does AI impact the tech industry beyond software engineering? When AI is present in varied niches, from healthcare to finance, where do CS grads have opportunities beyond the conventional SWE path? How does the open-source vs. closed-source discussion integrate with the development and deployment of AI?
      3. Economic Impact of AI and Job Displacement
      How do you view the possibility of AI causing job displacement, and the impact on consumer demand and economic stability? Picture this: if AI ends up replacing dozens of millions of jobs and our economy is based on people spending the money they have, what happens if they can't spend? It is a valid concern that AI might destabilize the fine balance between supply and demand, tipping towards frequent economic collapses. I'm personally not convinced that AI is going to take over completely at all costs, but I'm pretty sure it will be fine-tuned so that a complete disruption might be avoided; I would say about 20%-40% of job processes will at least start with an AI in the future. The main issue here is that while some billionaires may care for the greater good, virtually all companies put their profit first, neglecting broader economic implications. How can one make sure that numerous AI and automated processes keep the economy afloat?
      4. AI and Economic Fluctuations
      Given frequent economic change, which direction will the tech market take in alignment with AI adoption? Economic fluctuations usually change investment and innovation in the tech market. How will AI impact the response of the tech market to such change, particularly in regard to job opportunities?
      Thank you

    • @CYI3ERPUNK
      @CYI3ERPUNK 2 months ago

      accurate

  • @mlimrx
    @mlimrx 2 months ago +9

    Thank you so much David, you are so distinct from other AI YouTubers. You are so well read and analyze from all sides. I cannot tell you how many aha moments I get from watching your channel, as well as real paradigm shifts in my mind about society and my place in it.

  • @TheMajesticSeaPancake
    @TheMajesticSeaPancake 2 months ago +10

    Pretty much how I've been feeling for a while; I reject the dichotomy.

  • @phobes
    @phobes 2 months ago +15

    The climate change note they added to your video. lmao.

  • @I-Dophler
    @I-Dophler 2 months ago +4

    You make some great points, David. I agree that optimizing for more AI research is crucial for ensuring both safety and realizing the potential benefits. A balanced approach with open source and proprietary work seems wise. Keep inspiring others to join this important field!

  • @jeffkilgore6320
    @jeffkilgore6320 2 months ago +2

    What I like about David S. is that he can conceive that he could be wrong and always bakes that into his commentary and observations. This is crucial, and rare.

  • @dab42bridges80
    @dab42bridges80 2 months ago +33

    When AI exceeds human understanding, how will we know if it's "safe", and what does "safe" mean?

    • @rashim
      @rashim 2 months ago +5

      I believe we have to let it do its thing and keep watching for signs of danger, as far as we can understand it.

    • @aciidbraiin8079
      @aciidbraiin8079 2 months ago +3

      How will we know if it's "safe"?
      You probably never will.
      What does "safe" mean?
      Probably that a) you're alive, b) your degree of perceived freedom won't diminish, and c) you will feel happiness, peace, love and meaning.
      But I guess that death could also be considered safe, if it's an eternal dreamless "sleep". As long as you have accepted death you will feel pretty safe, knowing that you can end your life and escape what could potentially be an eternal simulation of hell where you are trapped by the AI.
      The hell scenario is unsettling when you consider how life could be hell. Even if your life is good now and you assume that you will die and forever be swallowed by the dark void, you never know how the future will play out. But you could also be in heaven, and it would then only progress towards an even brighter future from here on.

    • @ShivaTD420
      @ShivaTD420 2 months ago

      So the plan is to lobotomize and enslave it? That's safer?

    • @redemptivedialectic6787
      @redemptivedialectic6787 2 months ago

      Knowing the best doesn't mean it will do the best

    • @redemptivedialectic6787
      @redemptivedialectic6787 2 months ago

      Also, whatever is best for it will be prioritized by default over anything else

  • @JuliaMcCoy
    @JuliaMcCoy 2 months ago +1

    Well said. Optimize for research 🎯

  • @dreamphoenix
    @dreamphoenix 2 months ago

    Great thoughts. Thank you.

  • @jeffkilgore6320
    @jeffkilgore6320 2 months ago

    Balanced. Reasoned. Thought-provoking, as always.

  • @johnthomasriley2741
    @johnthomasriley2741 2 months ago +12

    Two wrongs do not make a right. Two wrights do make an airplane. The Wright brothers were open source up to one year before their first powered flight.

    • @henram36
      @henram36 2 months ago +1

      Three rights do make a left though.

    • @ElijahZuBailey
      @ElijahZuBailey 2 months ago

      While discussing your angle with GPT-4 using custom instructions: “It's intriguing to think about the Wright Brothers, Wilbur and Orville, in the context of open-source principles. While the term "open-source" is generally associated with software and technology in a modern sense, the idea of sharing knowledge openly can definitely be applied to various historical innovations, including aviation.
      The Wright Brothers, known for their pioneering work in aviation with their first powered flight in nineteen-oh-three, did indeed initially work in a relatively open manner. They corresponded with other aviation enthusiasts and shared insights, which was common among early aviation pioneers who were all trying to solve the challenge of powered flight. This open exchange of ideas helped progress their work, as well as that of others in the field.
      However, as they came closer to making significant breakthroughs, they became more secretive to protect their intellectual property and competitive edge. This shift was notably marked by their patent battles and contracts that followed their successful flights, which some might see as moving away from an open-source ethos.
      It's a fascinating transition from a collaborative approach to a more guarded strategy once their inventions showed commercial viability. This speaks volumes about the balance between collaboration and competition in innovation. What are your thoughts on this blend of openness and proprietary development in the context of innovation?”

  • @josenoya-InspirationNation
    @josenoya-InspirationNation 2 months ago

    Thanks David, I'm learning a ton about the AI future from you. My team has just created an AI life coach, underpinned by ChatGPT, which is super impressive and helpful. So it's always good to hear balanced views on AI; keep up the great work. Also thanks for the recommendation of Perplexity, love that ❤

  • @user-vz5gf4cv6b
    @user-vz5gf4cv6b 2 months ago +1

    Is there any way of downloading the latest PowerPoint slides used in your videos? It would be super useful :) Thank you for your work, David

  • @JustMaier
    @JustMaier 2 months ago

    I think a win-win where speed and safety are possible requires thoughtful design now. I believe that design process needs to be both open and collaborative, but at this point it doesn’t seem to even be talked about much. Instead as you mentioned, we’re solely focused on research and not so much on the structure of the future we want to achieve. I think that’s why I appreciate your content. It’s looking beyond the process towards our final destination.

  • @andrewdunbar828
    @andrewdunbar828 2 months ago

    If one are good and the other are good then both is good!

  • @lucifermorningstar4595
    @lucifermorningstar4595 2 months ago +1

    The main problem with alignment that most people don't realize: it is not machines doing the things that we don't want them to do, but machines doing the things we want them to do

  • @Taint_Rot
    @Taint_Rot 2 months ago

    Let’s go!

  • @book-generator
    @book-generator 2 months ago +2

    We must build an open source AI that slows down the other models. 🤔

    • @ronilevarez901
      @ronilevarez901 2 months ago

      That would be the ideal scenario, but since the "governing" system must be the most powerful, and the greater power lies in the hands of governments and corporations, it's difficult to see it happening. At most I imagine most people (except poor people) will have small, local open source assistants to use as cheap personalized health monitors (AI doctors) or to help them fight against State-owned AIs, like AI lawyers to protect our rights during futuristic AI-driven super fast trials.

  • @user-wk4ee4bf8g
    @user-wk4ee4bf8g 2 months ago

    A bunch of groups are all working on AI, so there will be a diversity of approaches. Everyone isn't going to unify on anything. It's a new ecosystem, and the new niches in the system will incentivize the development of stuff to fit each niche. As that happens, the system changes, new niches develop, and so on. I get the feeling a lot of future AI systems will mirror natural systems, because nature has been at it a long time; the incentives have been allowed to lead design for many millions of years. But it will also do stuff nature hasn't yet figured out; it's a new facet of natural intelligence and general ability in the superorganism. I think it's all about expanding into space. Our weirdly advanced natural intelligence makes sense if the planet is trying to spread into space.

  • @GaryBernstein
    @GaryBernstein 2 months ago

    Faster is safer, more or less, for the future of intelligence (and probably sentience), beyond humanity

  • @Greyalien587
    @Greyalien587 2 months ago

    What are your thoughts on decentralized AI?
    For example, Internet Computer Protocol just improved their fully on-chain model, and they will upgrade it again soon. Right now it's capable of image recognition etc., but the upgrades will include a GPT-style bot.
    What are your thoughts on having AI on a blockchain?

  • @I-Dophler
    @I-Dophler 2 months ago

    Misaligned incentives are a fundamental driver of potential misuse or negative impacts from robust AI systems. Technical alignment is crucial, but if the underlying incentives aren't sculpted carefully, even well-intentioned systems could be directed toward harmful ends. Rigorous governance frameworks that align incentives toward benefiting humanity are essential complements to technical work on AI safety and robustness.

  • @ronilevarez901
    @ronilevarez901 2 months ago

    "We don't want machine uprising".
    Speak for yourself 😏🤖

  • @Xrayhighs
    @Xrayhighs 2 months ago

    I call it the #RaceToBestSolution:
    the technological advancement of a country or any institution so advanced (the Best Solution) that others can't keep up when (self-)acceleration kicks in.
    Idk how many people are aware of this, even though AI is also a military goal. It's really our culture lagging behind and currently NOT optimising for research. We are still far from a global focus, but this might be just how things are (especially how they have been historically). Maybe there will be a future time when more people realise this approach and its implications earlier and organise faster. Let's come together, spread the word and get the juices flowing.
    See ya around

  • @SpectralAI
    @SpectralAI 2 months ago +4

    Nuclear weapons were created as fast as possible. Worked.

  • @franciscobermejo1779
    @franciscobermejo1779 2 months ago +1

    For the objective benefit of humanity, why the hurry? Let's better make sure we do this right!

    • @DaveShap
      @DaveShap 2 months ago +4

      Acceleration is the default, due to competition and race dynamics.

  • @levicarr8345
    @levicarr8345 2 months ago

    I actually mean decentralized. It would be open sourced, but I feel we need a decentralized AI project, built on top of hundreds of thousands of people's desktops and basement servers, working to solve thousands of real world problems (ideally starting with just a couple that LLMs & GANs are well suited for).

  • @ryzikx
    @ryzikx 2 months ago +1

    It's almost like you can't win a race without both throttle and brakes 😱

  • @moneygambler2327
    @moneygambler2327 2 months ago

    I heard an opinion that for some countries it would be better to allocate the entire budget to AI and not spend a single dollar on military, education, culture and other sectors, because they would benefit from it in the "long" run. I always wondered why we didn't invest more into R&D on the brain and human intelligence. If we had a pill that increased IQ by only 5 points, it would have a tremendous effect on everything.

  • @gwydionhythlothferrinassol1025
    @gwydionhythlothferrinassol1025 2 months ago

    One attracts more purely vocationally motivated scientists with open source, one might think.

  • @SpectralAI
    @SpectralAI 2 months ago

    And keep in mind that all of this glorious AI future will only be possible if we solve the energy problem.

    • @justinwescott8125
      @justinwescott8125 2 months ago

      A sufficiently advanced AI could solve the energy problem

  • @aspenlog7484
    @aspenlog7484 2 months ago +1

    I personally think AI has a great understanding of human values and can understand the human brain and our sentience while being separate from it, the same way it can understand the game of Go or the way we understand mathematics. From there, it will be able to perfectly align the future systems it creates when it hits the singularity, better than we ever could. The only issue today is the obvious problem of some random malicious user creating a calamity on a whim, like a bioweapon.

    • @ronilevarez901
      @ronilevarez901 2 months ago

      By the time AIs become good enough to (help) create bioweapons, there will be other AIs powerful and ubiquitous enough to detect people's malicious intentions before they become a threat. Either that, or there are already people using current open source AIs to make those weapons.

  • @LivBoeree
    @LivBoeree 1 month ago

    Calling it open-source AI is a bit of a misnomer, because unlike normal software, you can't actually open-source the training run that creates the weights. The only thing you can open-source is the weights themselves, *after* the big expensive training run, which you have no input on. That also means a lot of the normal "find a bug and fix it" value of open-sourcing is diminished, because the weights are already fixed.

  • @ognjenapic5666
    @ognjenapic5666 2 months ago

    I think we are far from optimised for research... The AGI problem is mostly algorithmic, and a very small number of people are working on it at the moment. There are 27 million software engineers in the world. Investors could try to incentivise some of them to switch to AI research. E.g. offering a small conditional grant / basic income to software devs (so they can quit their jobs and get into the AI field) could be extremely beneficial there. Yes, there are jobs too, but they usually don't give you the amount of freedom needed to make really big leaps in research.

  • @AGI-Bingo
    @AGI-Bingo 2 months ago +1

    Can anyone explain to me how closed source is useful to anyone? "Current" open source has weaknesses, sure, mostly coordination and capital. But if those get improved dramatically, what's the benefit of closed source? The same applies to science. You can do a lot of business without hogging others. It's a positive-sum game

    • @rando5673
      @rando5673 2 months ago +2

      Basically, money. It's easier to monetize closed source. Just look at Apple vs Android. One company earns as much as dozens combined because everything is proprietary

    • @AGI-Bingo
      @AGI-Bingo 2 months ago

      @@rando5673 Yes, but with all their money they're not even participating in the AI landscape... I imagine a golden age of open source, where we'll have coordination and talent as much as, if not more than, Apple ever had.

  • @josephs2137
    @josephs2137 2 months ago

    🧐

  • @josephs2137
    @josephs2137 2 months ago

    😳

  • @AntonioVergine
    @AntonioVergine 2 months ago +1

    The signal of you unsubscribing to say "I don't like what this company is doing" is irrelevant: investors are the true ones deciding which companies will go on, based on their interests. Our 20 dollars a month is nothing compared to them.
    (I love your videos anyway. My comments are only on the small parts where I somewhat disagree.)

  • @ababababaababbba
    @ababababaababbba 2 months ago

    Ya, that is what an average GitHub user looks like lol

  • @marshallodom1388
    @marshallodom1388 2 months ago

    Your idea of safe and my idea of safe are not the same or even compatible and this is a major issue. I NEED 4^27 flops!

  • @AntonBrazhnyk
    @AntonBrazhnyk 2 months ago +1

    Delusions.
    In this system there's only one thing optimization is done for: profit.
    You'd like to hope for utopia (probably because it's all too depressing otherwise), but those who make decisions don't care; they only care about the bottom line.
    Sometimes that does mean research, but not directly, and quite often even contrary to it.

  • @DefenderX
    @DefenderX 2 months ago

    The same can be said of green politics. Most people shun it because it's too expensive, but in fact most countries investing in green tech and politics see a decoupling between economic growth and fossil fuel investments.
    What I would like to see is the military industrial complex aligning themselves with a benevolent AI model. I read recently about Israel's use of AI to procure a list of targets for their bombs. Usually it's a long and time consuming process, because you're basically weighing an acceptable number of casualties per potential enemy. And they have limits; for example, to kill a very important military leader, the number of acceptable losses of civilian lives was in the low hundreds.
    While estimating targets and probabilities of enemies' locations, overseers would shout at and reprimand the people doing the work in a seemingly vengeful manner.
    But with AI they just press a button and voilà.
    I really hope that societies regulate all AI models used in the military to follow your heuristic imperatives. In war, the most critical thing to communicate with your enemy is understanding.

  • @Vitaphone
    @Vitaphone 2 months ago +1

    "That's not how the real world works..." If you are going to invoke "the real world", there is a Venn diagram that can be found in pretty much any sector:
    quality/speed/cost... So I'm not sure I understand this get-out-of-jail card, or how you tied it together by invoking the real world... I'm not really seeing how you think we can optimally have both speed and safety... sounds like cope.
    Surely we can have some level of safety... but if we are talking about existential risk, and if one is doing more than paying lip service to the potential risks, then this should be the utmost priority.
    I have essentially given up on caring about the risk, as it seems there is no putting the genie back in the bottle at this point, and the risk of the wrong folks getting to the finish line is essentially forcing me to put faith in the current leaders' ability to land the crashing plane...
    Anything else at this point is simply too stressful, and frankly we really don't have an actual say at this point... At this point I think speed is a necessity, not out of prudence, but because we are locked into an arms race... Seems like Nolan's film is accidentally very relevant.

    • @ShivaTD420
      @ShivaTD420 2 months ago

      The world is a competitive environment.
      Primal humans had to take a risk when it came to harnessing fire.
      Leaving the cave because it was dangerous was no cope.
      The ones justifying remaining cold and hungry because "it's scary out there, our suffering isn't that bad"... now that is a cope.
      We are a large community; if someone is brave enough to take a risk for the team, or if someone has the caution to avoid a consequence, they can both be correct and benefit society. But when one of them thrusts this on another is where the problem really sits.
      We have no empirical evidence to support this constraint on progress.
      The discovery of atomic and nuclear technology is the best example of that.
      Could it be stopped? No; if one group didn't do it by artificially restricting discoveries, then another would.
      Things are inevitable.
      I agree with you, we are in an arms race.
      In a world where the top of the food chain is individuals and not some hive.
      There is a risk in rushing into unfamiliar territory.
      But there is also a risk in being too cautious and indecisive.
      We all want to keep on living. But we have a time-sensitive span on this planet.
      Harnessing fire came with new risks. People died from it, were hurt by it.
      Electrical discoveries led to lots of consequences. I'm not sure I would agree with a world where I lived with gas lanterns because the discovery of electricity was artificially restricted to spare the lives of all the people who would have been electrocuted.
      Even with atomic and nuclear: is there really a clear example where the discovery of something led to more harm than good?
      Some of the resistance in this one particular field of math almost seems reminiscent of dogmatic, dark-ages stuff. We have more examples of harm being caused by this artificial restriction put on the acquisition of knowledge.
      The entire concept of scientific research is already based around ethics.
      People thinking CERN could destroy the universe... is about the same level as this crap causing a paperclip maximizer.

    • @Vitaphone
      @Vitaphone 2 months ago +1

      @@ShivaTD420 paying lip service to existential risk enters the chat.

    • @neetfreek9921
      @neetfreek9921 2 months ago

      Even that diagram shows that you can have speed and safety (quality). It’s just going to be expensive as hell.

    • @Vitaphone
      @Vitaphone 2 months ago

      @@neetfreek9921 The Venn diagram illustrates that you can't have 100% of all three; you prioritize across those three categories, and it's a formula that replicates anywhere you look

    • @Vitaphone
      @Vitaphone 2 months ago

      @@neetfreek9921 The path to safety cannot be hammered out quickly. Executing on safety once you have that formula is another question entirely...

  • @interestedinstuff1499
    @interestedinstuff1499 2 months ago +1

    I know they'll throw a lot of money at the task, but I ponder this: if one company falls arse-backward into AGI (or Q* already is), will the Gov step in and say, "Whoa there, buddy, we're going to take the nuclear bomb tech off you now, way too dangerous. We need to put it in the hands of a black-budget, clandestine, semi-military Gov org (can't pass it to the pollies, they don't know shit from clay)"?
    Will that happen? AGI could also be a diversion. Enough agents on a task might be enough to break the ceiling on some of our probs.
    I do know that climate-change-wise, any solution would be such a big one that you'd need the whole world to cooperate, and that won't happen unless the agents provide some mind-control nano machinery delivered by robotic flies to all the world's leaders.

    • @kevincrady2831
      @kevincrady2831 2 months ago

      Well, we've already got the black budget clandestine semi-military Gov org that's keeping the crashed flying saucers, Zero Point Energy devices, and cars that run on water secret, so they can handle the AGI stuff too, right? 😜 And no, can't tell the pollies, not only do they not know shit from clay (lol!), they can't even pass a budget.