Large Language Models and The End of Programming - CS50 Tech Talk with Dr. Matt Welsh

Поділитися
Вставка
  • Опубліковано 5 лют 2025
  • The field of Computer Science is headed for a major upheaval with the rise of large AI models, such as ChatGPT, that are capable of performing general-purpose reasoning and problem solving. We are headed for a future in which it will no longer be necessary to write computer programs. Rather, I believe that most software will eventually be replaced by AI models that, given an appropriate description of a task, will directly execute that task, without requiring the creation or maintenance of conventional software. In effect, large language models act as a virtual machine that is “programmed” in natural language. This talk will explore the implications of this prediction, drawing on recent research into the cognitive and task execution capabilities of large language models.
    Matt Welsh is Co-founder and Chief Architect of Fixie.ai, a Seattle-based startup developing a new computational platform with AI at the core. He was previously head of engineering at OctoML, a software engineer at Apple and Xnor.ai, engineering director at Google, and a Professor of Computer Science at Harvard University. He holds a PhD from UC Berkeley.
    ***
    This is CS50, Harvard University's introduction to the intellectual enterprises of computer science and the art of programming.
    ***
    HOW TO SUBSCRIBE
    www.youtube.com...
    HOW TO TAKE CS50
    edX: cs50.edx.org/
    Harvard Extension School: cs50.harvard.e...
    Harvard Summer School: cs50.harvard.e...
    OpenCourseWare: cs50.harvard.e...
    HOW TO JOIN CS50 COMMUNITIES
    Discord: / discord
    Ed: cs50.harvard.e...
    Facebook Group: / cs50
    Faceboook Page: / cs50
    GitHub: github.com/cs50
    Gitter: gitter.im/cs50/x
    Instagram: / cs50
    LinkedIn Group: / 7437240
    LinkedIn Page: / cs50
    Medium: / cs50
    Quora: www.quora.com/...
    Reddit: / cs50
    Slack: cs50.edx.org/s...
    Snapchat: / cs50
    SoundCloud: / cs50
    Stack Exchange: cs50.stackexch...
    TikTok: / cs50
    Twitter: / cs50
    UA-cam: / cs50
    HOW TO FOLLOW DAVID J. MALAN
    Facebook: / dmalan
    GitHub: github.com/dmalan
    Instagram: / davidjmalan
    LinkedIn: / malan
    Quora: www.quora.com/...
    TikTok: / davidjmalan
    Twitter: / davidjmalan
    ***
    CS50 SHOP
    cs50.harvardsh...
    ***
    LICENSE
    CC BY-NC-SA 4.0
    Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License
    creativecommon...
    David J. Malan
    cs.harvard.edu...
    malan@harvard.edu

КОМЕНТАРІ • 2,1 тис.

  • @amansahani2001
    @amansahani2001 Рік тому +662

    "People, writing in C is a federal crime in 2023" is the most misleading statement, Man how you design low latency embedded systems without C? Lot of low level devices are depenedent on C. Even Tesla FSD or Autopilot uses C++. IOT devices use C.

    • @happywednesday6741
      @happywednesday6741 Рік тому +57

      No one cares bro

    • @anilgandhi
      @anilgandhi Рік тому

      Tesla is going to rewrite 300k lines of code using neural networks, no more C or C++.

    • @easygreasy3989
      @easygreasy3989 Рік тому +28

      I bet u I can get my gran to type that into GPT4 and would do better than what ur whole team could do 2 years ago. U better hold on bra, I don't think ur ready. 😶

    • @amansahani2001
      @amansahani2001 Рік тому +160

      @@easygreasy3989 bruh, go and ask your GPT Boi to write assembly code for newly designed chips from any vendor. Those LLMs can't generate code outside of the scope of training data. If you've written the LLMs from scratch or at least read the paper then you know what I'm talking about. Else I strongly suggest you go and study CS 182.

    • @happywednesday6741
      @happywednesday6741 Рік тому +18

      @@amansahani2001 God of the gaps my guy, soon an AI will be better at that too, why wouldn't they?

  • @HarpaAI
    @HarpaAI Рік тому +166

    🎯 Key Takeaways for quick navigation:
    00:00 🍕 Introduction and Background
    - Introduction of Dr. Matt Welsh and his work on sensor networks.
    - Mention of the challenges in writing code for distributed sensor networks.
    01:23 🤖 The Current State of Computer Science
    - Computer science involves translating ideas into programs for Von Neumann machines.
    - Humans struggle with writing, maintaining, and understanding code.
    - Programming languages and tools have not significantly improved this.
    04:04 🖥️ Evolution of Programming Languages
    - Historical examples of programming languages (Fortran, Basic, APL, Rust) with complex code.
    - Emphasis on the continued difficulty of writing understandable code.
    06:54 🧠 Transition to AI-Powered Programming
    - Introduction to AI-generated code and the use of natural language instructions.
    - Example of instructing GPT-4 to summarize a podcast segment using plain English.
    - Emphasis on the shift towards instructing AI models instead of conventional programming.
    11:26 🚀 Impact of AI Tools like CoPilot
    - CoPilot's role in aiding developers, keeping them in the zone, and improving productivity.
    - Mention of ChatGPT's ability to understand and generate code snippets from natural language requests.
    17:32 💰 Cost and Implications
    - Calculation of the cost savings in replacing human developers with AI tools.
    - Discussion of the potential impact on the software development industry.
    20:24 🤖 Future of Software Development
    - Advantages of using AI for coding, including consistency, speed, and adaptability.
    - Consideration of the changing landscape of software development and its implications.
    23:18 🤖 The role of product managers in a future software team with AI code generators,
    - Product managers translating business and user requirements for AI code generation.
    - Evolution of code review processes with AI-generated code.
    - The changing perspective on code maintainability.
    25:10 🚀 The rapid advancement of AI models and their impact on the field of computer science,
    - Comparing the rapid advancement of AI to the evolution of computer graphics.
    - Shift in societal dialogue regarding AI's potential and impact.
    29:04 📜 Evolution of programming from machine instructions to AI-assisted development,
    - Historical overview of programming evolution.
    - The concept of skipping the programming step entirely.
    - Teaching AI models new skills and interfacing with software.
    33:44 🧠 The emergence of the "natural language computer" architecture and its potential,
    - The natural language computer as a new computational architecture.
    - Leveraging language models as a core component.
    - The development of AI.JSX framework for building LLM-based applications.
    35:09 🛠️ The role of Fixie in simplifying AI integration and its focus on chatbots,
    - Fixie's vision of making AI integration easier for developer teams.
    - Building custom chatbots with AI capabilities.
    - The importance of a unified programming abstraction for natural language and code.
    39:14 🎙️ Demonstrating real-time voice interaction with AI in a drive-thru scenario,
    - Showcase of an interactive voice-driven ordering system.
    - Streamlining interactions with AI for real-time performance.
    44:55 🌍 Expanding access to computing through AI empowerment,
    - The potential for AI to empower individuals without formal computer science training.
    - A vision for broader access to computing capabilities.
    - Aspiration for computing power to be more accessible to all.
    46:49 🧠 Discovering the latent ability of language models for computation.
    - Language models can perform computation when prompted with specific phrases like "let's think step-by-step."
    - This discovery was made empirically and wasn't part of the model's initial training.
    48:17 💻 The challenges of testing AI-generated code.
    - Testing AI-generated code that humans can't easily understand poses challenges.
    - Writing test cases is essential, but the process can be easier than crafting complex logic.
    50:40 🌟 Milestones and technical obstacles for AI in the future.
    - The future of AI development requires addressing milestones and technical challenges.
    - Scaling AI models with more transistors and data is a key milestone, but there are limitations.
    54:23 🤖 The possibility of one AI model explaining another.
    - The idea of one AI model explaining or understanding another is intriguing but not explored in depth.
    - The field of explainability for language models is still evolving.
    55:44 🤔 Godel's theorem and its implications for AI.
    - The discussion about Godel's theorem's relevance to AI and its limitations.
    - Theoretical aspects of AI are not extensively covered in the talk.
    56:42 🔄 Diminishing returns and data challenges.
    - Addressing the diminishing returns of data and computation in AI.
    - Exploring the limitations of data availability for AI training.
    58:34 🚀 The future of programming as an abstraction.
    - The discussion on the future of programming where AI serves as an abstraction layer.
    - The potential for future software engineers to be highly productive but still retain their roles.
    01:04:12 📚 The evolving landscape of computer science education.
    - Considering the relevance of traditional computer science education in light of AI advancements.
    - The need for foundational knowledge alongside evolving programming paradigms.
    Made with HARPA AI

    • @ericamelodecarvalho5714
      @ericamelodecarvalho5714 Рік тому +1

      000p

    • @sitrakaforler8696
      @sitrakaforler8696 Рік тому

      Dam that's niiiice! ! It's like Merlin ?!

    • @xwdarchitect
      @xwdarchitect Рік тому

      @@sitrakaforler8696 better :)

    • @reasonerenlightened2456
      @reasonerenlightened2456 Рік тому +4

      Before thinking of AI use in the society we must agree who will Profit from it, who will own it and who will pay for the mistakes of the AI? Is it going to be like, "Oh well, bad luck" when AI ends someone's life?

    • @முரளி-ழ7த
      @முரளி-ழ7த Рік тому +5

      @@reasonerenlightened2456 you guys need to stop think AI as some conscious thing, it is just like a knife or gun. It is entirely about who is using it with what intent.

  • @donesitackacom
    @donesitackacom Рік тому +1102

    "AI will replace us all, anyway here's my startup"
    Exactly 8 days later, OpenAI released a single feature (GPTs) that solved the entire premise of his startup.

    • @CGiess
      @CGiess Рік тому +61

      So true hahahaha

    • @tomasurbonas5835
      @tomasurbonas5835 Рік тому +26

      Oh my god, thought exactly the same!

    • @miguelfernandes6533
      @miguelfernandes6533 Рік тому +78

      Funny thing is he said programming will die but it was exactly through programming that the new feature that solved the premise of his startup was created

    • @KP-sg9fm
      @KP-sg9fm Рік тому +93

      Which just further reaffirmed everything else he said. Too many people are coping right now, LLM's are gonna put a lot of people out of work, not just programmers. I work customer service and internally I am freaking out right now.

    • @ste1zzzz
      @ste1zzzz Рік тому +29

      so he was correct, AI will replace us all ))

  • @imba69420
    @imba69420 Рік тому +404

    LLMs are going to replace idiots doing stupid talks 100%.

    • @DarthKumar
      @DarthKumar Рік тому +2

      Lmao 😂😂😂

    • @DipeshSapkota-lo3un
      @DipeshSapkota-lo3un Рік тому +4

      natural language programming is a thing now accept it

    • @gdwe1831
      @gdwe1831 Рік тому +10

      ​@@DipeshSapkota-lo3un natural language is imprecise and makes a poor programming language.

    • @DipeshSapkota-lo3un
      @DipeshSapkota-lo3un Рік тому +2

      Yes i get it but which basically means we don't need to have software cycle anymore. all those clean code rules for dev to dev visibility is not required now since just need to understand what is the function doing and for that dev will be there 😉 what matters now is input output and definition of function and that's what the business wants too !

    • @imba69420
      @imba69420 Рік тому +10

      @@DipeshSapkota-lo3un Tell me you've never touched code without telling me.

  • @miraculixxs
    @miraculixxs Рік тому +364

    'See I don't know how it works and I'm ok with that' - that pretty much sums up the presentation.

    • @LarisaPetrenko2992
      @LarisaPetrenko2992 Рік тому +22

      Yeah, you don't have to know every detail of a Honda, just buy it and drive it

    • @rmsoft
      @rmsoft Рік тому +15

      Well, you can get pieces of code and I've done it already, chatting with chatgpt helps a lot to get inside once you ask right questions. This presentation is just babbling, I'm waiting for full useful application development presentation using AI.

    • @contanoiutube
      @contanoiutube Рік тому

      @@LarisaPetrenko2992but then don’t call yourself a car engineer

    • @CapeSkill
      @CapeSkill Рік тому

      @@LarisaPetrenko2992 you can drive it, but you cannot lecture people about how it works and how it's going to revolutionize the ''future''

    • @davidlee588
      @davidlee588 Рік тому

      @@LarisaPetrenko2992 but people who built Honda know every detail of a Honda.

  • @rohan2962
    @rohan2962 Рік тому +68

    He starts off with no one will code and he ends with his own programming language for AIs. lol

    • @znubionek
      @znubionek Рік тому +1

      Lmao

    • @rogerh3306
      @rogerh3306 4 місяці тому

      47:25 bashes the art of programming so he can sell his LLM service. Douchebag move.

  • @cruzjay
    @cruzjay Рік тому +90

    He called CSS "a pile of garbage" and that writing C should be a federal crime. I smell senior engineer burnout, that want's to just cash in on his startup and work on a farm.

    • @-BarathKumarS
      @-BarathKumarS Рік тому +14

      his startup flopped horribly btw lol.

    • @anthonyd4703
      @anthonyd4703 Рік тому +1

      Hahhaha even as a newbie, i kinda agree with you

    • @rogerh3306
      @rogerh3306 4 місяці тому

      47:25 Can he be more apparent w/ his motives? Douchebag move.

  • @fredg8328
    @fredg8328 Рік тому +360

    That reminds me when I was in middle school. My teacher had to teach us how to program in Basic but he really didn't want to. So he simply told us "in 2 or 3 years we will have speech recognition so you don't need to learn programming". That was 35 years ago... That's a bit bold to tell that programming languages have not improved the way we code in 50 years and to think AI will save us.

    • @dansmar_2414
      @dansmar_2414 Рік тому +15

      one day they will get it right

    • @vladimir945
      @vladimir945 Рік тому +37

      I remember one of my teacher, while not been bold enough to speak about speech recognition in the early 90-s, saying that there are _already_ only system programmers left, the application programmers have been made obsolete by - are you ready for it? - SuperCalc, a spreadsheet software for MS-DOS and such. Makes me wonder, now that I think of it, why would there still be a need for system programmers if MS-DOS was already a sufficient operating system for the only applied task that was left - the one of running SuperCalc...

    • @edmundkudzayi7571
      @edmundkudzayi7571 Рік тому +4

      You've clearly not used Grimoire. It's game over.

    • @IAAM9
      @IAAM9 Рік тому +8

      Most probably You have not used AI enough, its magical in some sense. Soon you will realize give it a year or two

    • @raylopez99
      @raylopez99 Рік тому +3

      But speech recognition is really good these days...it just took about 10-35 years, depending on how 'good' you think 'good' is (I recall speech recognition that was decent about 25 years ago).

  • @MarceloDezem
    @MarceloDezem Рік тому +254

    "If the dev is not using copilot then he's fired". Tell me you never worked in a commercial application without telling me you've never worked in a commercial application.

    • @jak3f
      @jak3f Рік тому +20

      What do you think hes writing? Personal pet projects? Lmao.

    • @tracyrreed
      @tracyrreed 10 місяців тому +5

      ​@@jak3fHe's marketing. Not writing.

    • @LarsRyeJeppesen
      @LarsRyeJeppesen 9 місяців тому

      I wager that Code Assist with Gemini 1.5 is much better than Copilot now.

    • @gaiustacitus4242
      @gaiustacitus4242 9 місяців тому +2

      @@jak3f Have you ever heard of copyright law? Are you seriously unaware that federal courts have already ruled that AI generated output is ineligible for copyright protection?

    • @jak3f
      @jak3f 9 місяців тому

      @@gaiustacitus4242 good luck proving that

  • @linonator
    @linonator Рік тому +540

    I get the clickbait title but it can be really discouraging to people who are thinking about getting into software engineering. “Like why even try if ai is gonna do it?”
    Mainly because it’s coming from an institution like this. I know it’ll take time to eventually get there but A lot of people have already lost hope and new students thinking about joining may just turn a different direction
    Note: I’m not speaking of myself here, I’m a senior engineer and I volunteer at coding camps on weekends and tutor online and I get this sentiment from the people I coach and teach. When you’re completely new to a field and you see things like this from a reputable institution along with all the hoopla of tech bloggers online, it does discourage many people from trying to enter this field.

    • @samk6170
      @samk6170 Рік тому +50

      perhaps, but such is reality.

    • @sineadward5225
      @sineadward5225 Рік тому +39

      Still, 'everyone should learn to code' is valid. Just do it anyway for your own intellectual development. No point in trying to blame a video title for not doing something. Just do it.

    • @Boogieeeeeeee
      @Boogieeeeeeee Рік тому +7

      It's the presentation name, bud. Don't get discouraged, presenters often put a clickbaity title but then debunk said title during the presentation. In any case, it's what this guy wanted to call his presentation, can't really fault Harvard for it.

    • @fintech1378
      @fintech1378 Рік тому +10

      we've got to face this 'harsh' reality head on, there is nothing you can do

    • @phsopher
      @phsopher Рік тому +76

      Somwhere in 1889: Welcome to my talk titled "Cars and the end of horse carriages".
      Someone in the audience: Very mean and dicouraging title, dude, what about all the people who want to become a horse carrage driver?

  • @ataleincolor
    @ataleincolor Рік тому +109

    Professor: Ai will replace all programmers
    Students who took student loans to become programmers: 👁️👄👁️

    • @NicholausC.McGee.
      @NicholausC.McGee. Рік тому +7

      Professor: Programing sucks lets let the robots do it!

    • @llothar68
      @llothar68 Рік тому +18

      I don't understand why people think Professors know anything about programming. They have not time to get real practice

    • @tomashorych394
      @tomashorych394 Рік тому +2

      yep. Pretty harsh reality

    • @lmnts556
      @lmnts556 Рік тому +4

      Not the case tho, at least not now lol. Ai is not even close to taking programmers jobs, AI is not very good at programming, just very basic functions and can't put the pieces together.

    • @tomashorych394
      @tomashorych394 Рік тому +7

      @@lmnts556 Are you sure? It can do a lot of stuff. Then, you have all the no code solutions. Then, you have all the SaaSs and libraries. In the end. You need 1 engineer to build a platform instead of a 100. "At least not now" can mean in 5 years (which is very realistic)

  • @firefiber8760
    @firefiber8760 Рік тому +195

    I genuinely cannot understand how humans are just... incapable of thinking of the future. Like, the idea of 'just 'cause you can, doesn't mean you should' is just so much the case, right now. But nope, because we can, we will.
    Okay, so we all slowly forget how to program, and we, generation after generation, depend more on language models writing code for us, and us just instructing the language models. Great, let's just, for a second, take this further shall we? First, the ways we communicate with language models are going to eventually become more like programming languages, because people are lazy, and the entire reason we have ANY symbols in mathematics PROVES this. We don't like to write more than we absolutely have to.
    (EDIT: To expand on this - what I'm trying to say is this: we use specific patterns of sound in our languages to wrap up concepts, or ideas. We do this so that more complex communication can happen, by building on top of the layer below. We create functions in programming to wrap up sets of actions so that we can build on top of that. This is how abstraction works. I've used mathematical symbols as an example, but the same concept applies pretty much anywhere you look. Condense repetition, so that we can build more complexity on top.)
    So we're going to get "AI" based programming dialects, you could say (look at the way image generation prompting has already evolved as an example).
    Then, as we also develop these language models, the models themselves are going to have free rein on the 'coding' part. We will obviously instruct these systems to create newer programming languages that will, after a while, become unreadable to us. And we will ask, well, why do we need to understand it? The machines are there to handle it (this is essentially what this guy is saying). So now we have dialects of humans telling machines what to do, and then we have machines telling other machines what to do in a language we don't understand.
    Does ANYONE see the issue with this? Like, even a little?
    Just because programming is hard does not mean that we have to eliminate it. What absolutely idiotic thinking is this? It must always be a constant pursuit of efficiency. That's the whole point. We always remain in control. We always ultimately KNOW what is happening. By literally INTENTIONALLY taking ourselves out of the equation, we write our own Skynet. I don't mean that in an apocalyptic sense, I mean that in a "we are so fucking dumb as a species, like literally what is the point of programming, or doing anything at all, if not for our own benefit?" kind of way.
    Sure, use these systems and tools to write better code, write better documentation, I mean these are the actual areas where AI systems can help us. Literally to write the documentation and help us write better, more efficient, cleaner code, faster than we ever could. But still code that WE READ, AND WE WRITE, for US.
    This guy literally called Rust and Python "god awful languages" and apparently we need to take the humans out of developing things. Who does he think development is for?
    What's weird is that this is on CS50?

    • @ChrisHarperKC
      @ChrisHarperKC Рік тому +38

      This will be lost on most people, especially academics who live in a fantasy world. Your comments are obvious to anyone who does regular old work.

    • @hamslammula6182
      @hamslammula6182 Рік тому +26

      I think your thinking is a bit biased and shortsighted. And I’m guessing it’s because like me you’re a programmer. What I think you’re wrong about is that once we move up the abstraction layer, we don’t simply forgot the stuff underneath. People can still understand assembly and write programs using it if they so choose to but it’s ultimately a waste of time.
      I don’t think people will simply forget how to program, instead they’ll focus on more important things like solving problems that people are willing to pay for.
      I’m sure if you wanted to, you could rig up a set of logic gates to do some addition and subtraction operations but is that a business problem people are willing to pay you for?
      Essentially ai will be a layer of abstraction which allows us to focus on more complex problems rather than having to focus on getting all the right packages before even attempting to solve the problems of the users.

    • @noone-ld7pt
      @noone-ld7pt Рік тому +18

      Dude, what are you on about? This is what coding has always been, a simplified version for us to convey ideas to computers. We don't write code in binary, we have compilers and interpreters that do that for us. The difference is that now instead of having to learn Python or Rust you can use English or Spanish or whatever to convey your ideas and have them be implemented. You can then ask the LLM directly questions about the implementation of different algorithms and optimize for whatever variable is relevant to your vision. Programming languages have been becoming more and more readable for decades now, this will just be the final step where we can finally interface with computers without having to learn a new language.

    • @gammalgris2497
      @gammalgris2497 Рік тому +9

      Language has its own issues. It's context sensitive and highly ambiguous. Our "experimentations" with programming languages was an exercise in formalized and more precise languages. On the lower levels it's just signal processing with circuits. We built different levels of abstractions on top of that. We can only hide the complexity but we cannot make it vanish. Language models are just another layer of abstraction with its own pitfalls. The best thing one can do is heed the scientific method. Maintain a suitable degree of transparency so that things can be verified by others. 'Others' may be other developers, scientists, AI based tools, etc.. Completely removing humans from the equation will violate the scientific method.

    • @draco4717
      @draco4717 Рік тому +11

      What if LLM write a buggy code in maybe 50 years from now and that code is only understandable this machine and it again writes another buggy code because it does not understand what it is doing and writes another buggy code till infinity 😅 the we as a human have to dust off those old BASIC books in order to start over and how cool is that 🙂

  • @anandiyer_iitm
    @anandiyer_iitm Рік тому +13

    That he stays away from addressing the "most important" problem as he puts it at the beginning of the talk (that of CS education in the future), makes it sound like just empty talk...Unfortunately, I had to watch the entire thing to realize this...

  • @alborzjelvani
    @alborzjelvani Рік тому +412

    The example with Conways game of life does no justice to the 50 years of programming language research he refers to. Also, Rust was designed to overcome the memory safety problems that plagued C and C++; it is a programming language that emphasizes performance and memory-safety. Programming languages like Fortran and C were designed the way they are for a very specific reason: They target Von Neumann architectures, and fall under the category of "Von Neumann programming languages". The goal of these languages is to provide humans with a language to specify the behavior of a Von Neumann machine, so of course the language itself will have constructs that model the von Neumann architecture. Programming languages like Rust or C do exactly what they were designed to do, they are not "attempts" to improve only code readability for Conways game of life when compared to Fortran.

    • @haniel_ulises
      @haniel_ulises Рік тому +8

      Totally agree your comment

    • @datoubi
      @datoubi Рік тому +12

      well they could become irrelevant though. Because the programming language of the future probably looks like minified JavaScript and will be designed by AI for AI.

    • @true_xander
      @true_xander Рік тому

      @@datoubi good luck with that, see you in 10 years. Humans should not loose control over their own life and things that life depends on. As soon as they do, they'll become slaves of their own technology. And despite there still won't be a cent of consciousness in a machine in 50 years, if humans will loose the ability to understand the software on their own without "AI" help, it could quickly become a tragedy because of 1000 other reasons than the comic-book 'machine revolt'.

    • @ruffianeo3418
      @ruffianeo3418 Рік тому +14

      If a natural language were such a SUPERIOR specification language, there would not be on going efforts to find working specification languages. What he claims is, that plain english is the best you can ever get :)

    • @wi2rd
      @wi2rd Рік тому +12

      True, yet non of that is an argument against his point.

  • @ryanxaiken
    @ryanxaiken Рік тому +43

    Do not be discouraged.
    Enjoy life and study what you are interested in. Everything else will fall into its rightful place. Tomorrow is not guaranteed, do not fret about things beyond your control.

    • @pradhyumansolanki6509
      @pradhyumansolanki6509 8 місяців тому

      correct because i thinks its dumb to think so far ahead when we don't even understand how ai work internally or how we are going to take data or if more computing is actually going to help, Dr. Matt Welsh does not know how the algorithm( the most important part) is going to be created ,there are a lot of other thing where he says i believes which is not so reliable (specially when choosing your career )

    • @SaurabhSingh-fr8yi
      @SaurabhSingh-fr8yi 3 місяці тому

      Story of the Chinese farmer........AlanWatts

  • @epajarjestys9981
    @epajarjestys9981 Рік тому +47

    I'm at 6:43 and all I've seen so far is that guy projecting his incompetence onto the rest of humanity.

    • @jzimmer11
      @jzimmer11 Рік тому +7

      Indeed! I mean WTF? Of course, you can always write programs in the least understandable way possible.

    • @Henry_Wilder
      @Henry_Wilder 10 місяців тому

      You call an Harvard Computer Science prof. incompetent?, you fool😂😂

    • @Henry_Wilder
      @Henry_Wilder 10 місяців тому

      Why don't you go ahead and answer the questions, since you're the competent one then🤨...ya'll just come on to the comment section talking trash, no sense🤧

    • @epajarjestys9981
      @epajarjestys9981 10 місяців тому

      @@Henry_Wilder Which questions?

    • @Henry_Wilder
      @Henry_Wilder 10 місяців тому

      @@epajarjestys9981 the questions posed at him that he couldn't answer. He kept saying "I don't know " remember?

  • @pjcamp-eq1mj
    @pjcamp-eq1mj Рік тому +135

    The talk was a perfect segway for AI startup ad

    • @joseoncrack
      @joseoncrack Рік тому +7

      Indeed.

    • @jimbobkentucky
      @jimbobkentucky Рік тому +13

      Seems like a lot of the invited speakers are hawking something.

    • @poeticvogon
      @poeticvogon Рік тому +1

      I am pretty sure it was all an ad.

    • @gaditproductions
      @gaditproductions 10 місяців тому

      @@poeticvogon this is cs50...its a class...they wont just do a add and risk loosing credibility...if this is coming from a institution like this...things are very very serious.

    • @poeticvogon
      @poeticvogon 10 місяців тому

      @@gaditproductions Of course they would. They just did.

  • @fayezhesham1057
    @fayezhesham1057 Рік тому +262

    I think it's time for Dr. Matt and his team to pivot away from fixie's custom chat GPT idea after OpenAI released GPTs.
    How unexpected!

    • @castorseasworth8423
      @castorseasworth8423 Рік тому +15

      I was thinking the same. It is basically the GPTs concept, although Fixie’s AI.JSX still offers seamless integration into a react app. Let’s see OpenAI’s response to that

    • @merridius2006
      @merridius2006 Рік тому +23

      @@rahxl while you are right it doesn't mean he's wrong

    • @brandall101
      @brandall101 Рік тому

      @@castorseasworth8423So you can just use their Assistant's API and create a React front-end on your own.

    • @TransgirlsEnjoyer
      @TransgirlsEnjoyer Рік тому +16

      @@rahxl whether he does it or somebody else, it is immaterial, openAI just proved his concept was right and worthy. he is already successful while u need to find a good job

    •  Рік тому

      @@merridius2006 ​ @TheObserver-we2co this is not scientifically correct, a program written for a given task X can be written (and exist in hardware) so its the theoretical most performant solution, while an AI can cost a million times more to run the same task, take for example "2+2", at the same time, a program is a crystallized form of ontology and intelligence, that means, instead of reasoning the solution on every execution, programs grow as a library of efficient solutions that dont need to be thought over and over again, in the future is programming languages what will remove the need to write code, as we aproach an objective description of computable problems that we will be able to write for the last time, in a way we already did this with libraries (in a disorganized way), and obviously we will use AI to help write these programs, but because we will solve these problems a single time for the infinite we will review and read and write them ourselves as a way of verification, just as today. After that we will use an optimized form of AI that maps these solved solutions on user request, but interfaces will also be mature enough (think of spatial gesture and contextual interfaces) to make speech obsolete. Current LLMs are more a trend of our current times than the ideal, efficient, unfallibe solution we need to standarize on all aspects of society from IT.
      If all the software thats already running in your computers would run using AI, it would cost thousands more in energy and time, software is already closer to the theoretical maximal efficiency, the ideal software is closer to solved math than to stochastic biology or random neuron dynamics. Training better a model wont solve any of these things.
      And AIs that evolve into more performant solutions are statistical models programmed into known subsets of the problem after the mathematical model of the problem is understood enough to do that, is the same as we have already done since forever, statistics like that used in modern LLM have always been used in computers and are part of what programs are required to do.
      Just imagine if every key we pressed were interpreted by AI just to reach your browser.
      Along all these, we still have a lot of work to do, i would say we have only written a third of all the software that we need in the world, and at the same time, almost all the software that already exists needs to be rewritten in new languages more closer to the new level of abstraction and ontological organization described here, given time all code in c++ will be moved to rust, and rust will be replaced by an even better language, and no institution will just let you do it with AI and not read or understand what it did.
      Just go study and stop being silly thinking you know what programming is without any real experience in the field, all these opinions come from marketers, hustlers, wannabes, teenager ai opiniologists and doomers.

  • @kpharck
    @kpharck Рік тому +53

    Law is written in plain English too. For reproducible results, the limit of input precision will lie where the modern legal jargon reaches it's least understandable form. You will be left with an input that is still as hard to comprehend as a programming language text, but much less precise. Good for UA-cam descriptions perhaps, but not for avionics.

    • @oldspammer
      @oldspammer Рік тому +1

      The constitution and most contracts are in legalese which looks like English but is strictly NOT. To know and appreciate fully what is said in legal documents, you must use a legal dictionary. Capitalization is often key. Amature researchers have uncovered much-hidden history by seeing what is said and meant in older legal documents. The world turns out to be more nuanced than I thought by the lectures by these legal scholars telling us what the elite have in store for us.
      Here is an example,
      London the strawman identity youtube
      You have a person, you are not a person. A person is a legal fiction--legal paperwork of identification issued by the government. Ergo, you have a person, you are not a person. That is why a corporation is considered a person and has personhood--it is all about legal fictions written in all capital letters--in the dead handwritten on an individual's tombstone.
      Some tricky legislation was at one time written in a hidden way in some foreign language so that the public would be much less likely to discover what trickery was being done by their so-called elected officials. This was in the 1600s in order to reduce the power of the church and increase that of the crown which turns out to be the inns of court of the crown temple in the City of London that is a separate state than England or UK similar to how the Vatigan in Rome is its own city-state, and that of Washington DC that is its own city-state.
      This was all explained years ago in a video on UA-cam that gave away many secrets so likely it is banned now. but few watched the entire video because of TLDR.
      I found a copy still on UA-cam:
      Ring of power - Empire of the city [Documentary] [Amen Stop Productions]

    • @mikecole2837
      @mikecole2837 11 місяців тому +3

      ie if Product Managers could specify what they wanted with enough precision to create a product, they would be coders.

    • @gaditproductions
      @gaditproductions 10 місяців тому

      law will be impacted heavily. But law has a human aspect - the motivational speaker and projection and questioning a witness with emotional appeal...that's the difference and why its safer.

    • @oldspammer
      @oldspammer 10 місяців тому

      @@gaditproductionsThere is a difference between a living individual, a machine, and an entity with personhood such as an immoral & immortal corporation who holds the debt of people, and nations that cannot be repaid due to usury compounded semi-annual interest charges.
      What if all money in existence was borrowed as debt into existence? Well, that is what has ended up happening as a trick of financial mathematics--the implications of which simple folk do not appreciate the implications, so vote for more government free stuff with their hands out waiting.
      Patrick Bet David of Valuetainment breaks down the information regarding the hyperinflation seen in Venezuela and what other countries did when they saw this same thing happening to them, namely Israel got rid of practically all its debt and so has one of the lowest rates of inflation.
      Lower standards of living are on the way if one is not careful who one has been representing them in Government.
      I had an epub formatted book. I used the ReadAloud Microsoft store app read it to me. It horribly mispronounced some specific word when reading back the material therein. The book was from 1992.
      Here are some of the epub formatted docs in my downloads folder.
      Lords of Creation - Frederick Lewis Allen
      The Contagion - Thomas S. Cowan
      The Gulag Archipelago, 1918-1956. Abridged (1973-1976), Aleksandr Solzhenitsyn
      Votescam of America (Forbidden Bookshelf) - James M. Collier
      Wall Street and the Russian Revolution, 1905-1925 by Richard B. Spence
      The individual voice types in the Windows TTS system determine how to break into syllables each word, and to pronounce well or badly any given word. The word that came out very badly, I believe, was "elephantine." Sometimes some of these TTS voices use online AI to assist in the pronunciations and smooth transitions between sentences, pitch of voice elevation during questions and so forth. Obviously, if there was a Nuke or EMP, the entire power grid would go down for decades unless the well intending people rebuild everything overnight without the build back better destroyers holding them back from doing so.
      As such, it might be better to have each computer holding a small chunk of civilization and enlightenment, lest it all be lost should a key datacenter be targeted directly.
      What safety precautions have your local officials done? How about your electric grid suppliers--what safeguards are in place to get everything back running after there has been no phones, no power grid, no gas station pumps working, no diesel truck fuel pumps running, no credit card transactions, no banking, and so on?
      I asked an AI about EMP precautions. I suggested wrapping spare electrical transformers and generators in metal wrap--thick aluminum foil layers, then burying them somewhat deep in the ground to reduce pulse damage. It said that the foil had better be thick enough and very well grounded to displace the electrical energy.

  • @ldandco
    @ldandco Рік тому +294

    Software Engineering will eventually be the role of just a few, not because of AI replacing jobs, but because of discouragment many people will feel and quitting before even starting the journey

    • @darylallen2485
      @darylallen2485 Рік тому +42

      One day, people may look at code the same way we look at the Pyramids. The knowledge of Pyramid making came and went.

    • @reasonerenlightened2456
      @reasonerenlightened2456 Рік тому +14

      we need 4 mechanical engineers and 2 electronic engineers for every software engineer, because software is easy.

    • @hungrygator4716
      @hungrygator4716 Рік тому

      @@reasonerenlightened2456 software is easy. Good software is hard.

    • @dwight4k
      @dwight4k Рік тому +1

      Or will we need coders for the lower levels?

    • @KienHoang-jc6gw
      @KienHoang-jc6gw Рік тому +30

      @@reasonerenlightened2456 you dont even know the difference between engineer and developer...

  • @MarkMusu92
    @MarkMusu92 Рік тому +21

    I’m legally mandated to pitch my startup… that’s all I needed to know.

  • @BanditHighwayMan
    @BanditHighwayMan Рік тому +61

    Me: Asks chat gpt to help me with a bug I am facing in my code.
    ChatGPT: Returns my exact same code
    (This was a joke)

    • @luckydevil1601
      @luckydevil1601 Рік тому +6

      Ahah yeh, same sh*t happens to me too 😂

    • @invysible
      @invysible Рік тому +3

      true broo... happend to me a few days ago

    • @mykytaso
      @mykytaso Рік тому +13

      In this way ChatGPT hints that the main bug in your code is you :)

    • @IntrospectiveMinds
      @IntrospectiveMinds Рік тому +8

      GPT 3.5 I'm guessing? Try 4. People keep coping by saying it doesn't work but are using the outdated model or have poor instructions.

    • @jbo8540
      @jbo8540 Рік тому +3

      Try 4, and if that doesnt improve things, you need to work on your prompt engineering.

  • @TheOriginalJohnDoe
    @TheOriginalJohnDoe Рік тому +229

    Dr. Welsh does make good statements I think we all can agree on, but as an AI student and Software Engineer for 10+ years, regarding what Welsh said: "People still program in C in 2023", well if you study AI you will even learn Assembly, very very low-level programming and since models have been written by programmers, we still need programmers to maintain and improve on these. AI is getting there, but it's still at a very immature level compared to the maturity we seem to desire as a humanity. We still need PhD students with a solid programming and AI background to do extensive research within the field of AI in order to help invent new technologies, specialized chips, improved algorithms etc. We are still far away from letting AI generate code that is as good as a programmer who has mastered it. Sure, it can write code, but there's still ton of scenarios where it fails to make things work.

    • @timsell8751
      @timsell8751 Рік тому +28

      2 more years should do the trick!

    • @reasonerenlightened2456
      @reasonerenlightened2456 Рік тому +20

      Before thinking of AI use in the society we must agree who will Profit from it, who will own it and who will pay for the mistakes of the AI? Is it going to be like, "Oh well, bad luck" when AI ends someone's life?

    • @LucidDreamn
      @LucidDreamn Рік тому +11

      I give it 5 more years before AI is super-intelligent

    • @headlights-go-up
      @headlights-go-up Рік тому +22

      @@LucidDreamnbased on what data?

    • @chuangcaiyan7114
      @chuangcaiyan7114 Рік тому +4

      I think the problem is about the purpose or the goal of the program that you are programing, in case of the Conway's Game of Life, the concept it self it is not easy to explain even with human language, we could get some ideas watching it performe but to be able to understand it completly, from logic to meaning or even to purpose and what coorelation it has with other topics such as math, physic or phylosophy, it is just not easy to understand, it won't be easy anyway

  • @rodrgds
    @rodrgds Рік тому +117

    🎯 Key Takeaways for quick navigation:
    01:23 🚀 The field of computer science is undergoing a major transformation where AI models like GPT-3 are being used to write code, marking a significant shift in programming.
    06:54 💻 Natural language is becoming a key tool in programming, allowing developers to instruct AI models to generate code without the need for traditional programming languages.
    14:47 📈 AI technology, like GPT-3, has the potential to significantly reduce the cost of software development, making it more efficient and cost-effective.
    20:52 🤖 The rise of AI in programming will likely change the roles of software engineers, with a shift towards product managers instructing AI models and AI-generated code.
    23:46 👁️ Code review practices will evolve to incorporate AI-generated code, requiring a different kind of review process to ensure code quality and functionality.
    24:41 🤖 Code maintainability may become less essential with AI-generated code, as long as it works as intended.
    25:58 📊 The rapid advancement of AI models like ChatGPT has transformed the computer science field and its societal expectations.
    29:04 🌐 Programming is evolving, with AI assisting humans in generating code, and the future may involve direct interaction with AI models instead of traditional programming.
    33:44 💬 The concept of a "natural language computer" is emerging, where AI models process natural language commands and perform tasks autonomously.
    45:52 💡 The model itself becomes the computer, representing a future where AI empowers people without formal computer science training to harness its capabilities.
    49:15 🤖 AI-generated tests are becoming more prevalent, but there's uncertainty about the role of humans in the testing process.
    51:07 🧩 The future of AI models relies on the increased availability of transistors and data, which may require custom hardware solutions.
    52:06 🤔 Formal reasoning about the capabilities of AI models is a significant challenge, and we may need to shift towards more sociological approaches.
    54:23 🤖 Exploring whether one AI model can understand and explain another model is an intriguing idea, but its feasibility remains uncertain.
    59:30 🧠 While AI may make software engineers more productive, certain human aspects, like ethics, may remain essential in software development.
    Made with HARPA AI

  • @caneridge
    @caneridge Рік тому +72

    The purpose of computer science in a nutshell was not to translate ideas into programs. The goal was to find higher levels of abstractions to enable describing and solving ever bigger problems. Programming and programming languages were emergent properties of that goal. The question for LLMs is if they will be able to continue the quest for higher and simpler levels of abstraction or forever get stuck in the mundane as most programmers did by their jobs.

    • @katehamilton7240
      @katehamilton7240 Рік тому +3

      Thanks, I'm saving this idea

    • @mriduldeka850
      @mriduldeka850 Рік тому +2

      Thats a deep thought. I feel purpose of comp science is to automate task which humans can do or think of doing. Programming is just one step for it. Instead of create models which can write code, humans should think of bigger ideas which can impact living beings. It may be accomplish by manual or automatic programming, does not matter

    • @switzerland
      @switzerland Рік тому +2

      Reality is near infinitely complex. As programmers we create a finite abstraction. AI will do it better yet can't solve exponential complexity. AI is not infinite and has not infinite compute. Infinite is usually a warning signal of a lack of knowledge. Infinity means everything starts to behave weird. There is also physics … latency, a set of fundamental problems

    • @aoeu256
      @aoeu256 11 місяців тому

      We have too many people doing software so software salaries are going to go down, we need to tell Indians & Chinese and Westerners to focus on swarm robotics, mini-robots, having the robot sworms build things etc... Take a robot-hand, make all of its parts like legos that it itself can assemble. Then make it so that it can either print out its parts, sketch out its parts, or mold its ports. Have it replicate itself in smaller and smaller until you hav e a huge swarm of robots, but you also need a lot of redundancy and "sanity checks". Swarm robots can do stuff like look for minerals/fossils/animals, look for crime, map out where everything is so you know where you put your cellphone, build houses/food/stuff/energy collectors/computers. @@mriduldeka850

    • @mriduldeka850
      @mriduldeka850 11 місяців тому

      @@aoeu256 That's a good point. Japaneese are good at building robots. Indians are good and abundant in software sector but lagging way behind in manufacturing and hardware industry. Chinese have strength in manufacturing sector so perhaps they can adopt to robotics growth more quickly than Indians.

  • @frankgreco
    @frankgreco Рік тому +18

    His startup is completely based on a Javascript framework. You don't have to use an LLM to tell you that was a bad idea.

    • @godismyway7305
      @godismyway7305 8 місяців тому

      Who said you can't use javascript for ML?

    • @frankgreco
      @frankgreco 8 місяців тому

      @@godismyway7305 No one did.

  • @frankgreco
    @frankgreco Рік тому +4

    46:36 "No one understands how large language models work"... back in 2008, no one understood how derivatives worked.

  • @abnabdullah
    @abnabdullah Рік тому +34

    I am amazed that students didn't ask about anything related to "security" because, right now, we are just seeing an innovation but what about the future, when, on a larger scale, if we say we want to build a public program like Facebook or any other platform. This is presuming to be a live programming or language model building whatsoever it is so how can we encrypt all of our data from building to running and so on.

    • @rookie_racer
      @rookie_racer Рік тому +6

      While security is something lacking I feel your focus is on the wrong aspect of it. You reference encryption which isn’t necessary for the source code so its ability to assist you to build won’t be impacted. I’m more concerned about the data you’re providing to the LLM. If I’m building a proprietary function and I need some insight from an LLM and I need to upload my source code for them to evaluate I am potentially sharing some seriously protected intellectual property. What happens to that? Can that code snippet show up in someone else’s code when trying to solve the same problem? Maybe your competitor?

    • @Invariel
      @Invariel Рік тому

      @@rookie_racer More importantly than that, he's already demonstrated in his talk that these LLMs have -- call it "undocumented" or "emergent" or whatever you want -- behaviour that gives the questioner control over how the answer is given. Recall the "my dear deceased grandmother" "attack" that let people ask about how to make napalm or pipe bombs or whatever. Giving LLMs unfettered access to proprietary data, and having those LLMs all be based on the same nugget/core/kernel vulnerable to the same attack vectors means giving attackers access to all of that proprietary data by "casually" using your interface.

    • @abnabdullah
      @abnabdullah Рік тому +3

      @@rookie_racer yes, you are right... actually what I was trying to highlight is "Data" and I mean how can we trust our confidential information to something that is "open source and a third party revolving around and across the internet.

  • @alphabee8171
    @alphabee8171 Рік тому +59

    It's not that gpt blew up because it was super good overnight. Well sort of but the real reason is it's ease of use. It's just like back when home computers became popular, when you introduce a computer as a marvel of engineering nobody cares about that but if you say "it's a box that lets you play some games and music etc with a bunch of clicks" you have everyone's attention. The idea of making it feasible for the masses that's what kicked it off, poured in billions of dollars and years of research to make computing better and better, same stuff happened with gpt and it's again on the same path but at a much much faster rate.

    • @reasonerenlightened2456
      @reasonerenlightened2456 Рік тому

      GPT-4 shows fake intelligence. For example, It struggles with fingers, and with drinking beer. LLM are a dead-end for AGI because they do not !(understand)! the implications of their outputs! Also, GPT-4 is designed by the Wealthy to serve their needs!

    • @brianallossery4628
      @brianallossery4628 Рік тому

      Computational power increases made gpt possible from what I understand

    • @LyricalMurderer1
      @LyricalMurderer1 Рік тому

      That and it was super good… understood that a lot has to do with data and compute but it really is very good as a product right now…

  • @snarkyboojum
    @snarkyboojum Рік тому +71

    I prefer this take - natural language isn't well suited for describing to computers what they should do, which is why programming languages were developed. LLMs can do some translation from natural languages to programming languages, but not very well and not as accurately as we would like (yet), so they're good for getting you part of the way there, and currently they'll likely generate less than accurate or reliable code, but if you're not trying to write reliable programs, they could be helpful :D

    • @Siroitin
      @Siroitin Рік тому +9

      Good to remember that rigorous symbolic notation for math is pretty modern idea in itself. One could argue that math is just "esoteric language" like Matt Welsh is implicating about programming language.

    • @restingsmirkface
      @restingsmirkface Рік тому +6

      I agree. AI can do things like computing Pi, finding factors, and other relatively trivial things which could just be bits of static data. It may not even be generating code - just returning the closest match. If it is generating code, it's not very useful yet unless you know exactly how to speak those sweet-nothings. I asked ChatGPT about a week ago to create a website in the style of Wikipedia with 4 page-sections relevant to simulation-theory. It gave me an HTML tag with 4 empty DIV elements - nothing else. No other structure, no content, no styling, no mock-up of interactive elements.

    • @Siroitin
      @Siroitin Рік тому

      @@restingsmirkface You might have to do some "prompt engineering".
      When I try ML and statistics related stuff, I often just copy text book formulas. The copied text is obscure for humans but somehow ChatGPT is able to understand it. Also it is really hard to ask python code for neural networks because it forces the use of external packages. C language doesn't have external packages so I often ask ChatGPT to write in C code and I translate the code to Python or Julia

    • @keiichicom7891
      @keiichicom7891 Рік тому +4

      Agree. I noticed, although AI chatbots like ChatGPT can write complex Python programs( I asked it to create simpler neural net chatbots in Tensorflow / Keras), it is often buggy, and it has a hard time fixing the bugs if you ask it.

    • @choc3732
      @choc3732 Рік тому

      @@Siroitinthis is very interesting, ChatGPT has a better hit rate when it comes to writing in C?
      I’ve only tried Python so far, will have to give this a go

  • @Hangglide
    @Hangglide Рік тому +3

    Great presentation! Thank you!
    One nitpick: 19:23 "average lines of code checked in per day ~= 100" I can tell you that is not the case for average SWEs in the silicon valley do. ~10 lines/day would already be pretty good.

  • @simonmeier
    @simonmeier Рік тому +31

    Dr. Matt Welsh points out the crucial point about AI in programming: The better it gets and the more we trust in it, without actively know how to code or without knowing how it does what it's doing, we lose power over our daily automatic routines. Imagine what a risk AI generated code would be in a nuclear power plant. I think this talk is rather a great wake up call for learning how to code and coding inside AI instead of just letting it go.

    • @randotkatsenko5157
      @randotkatsenko5157 Рік тому +1

      Humans are fundamentally lazy and default to the option which takes the least energy and effort. Meaning, most people will try to automate their own work as much as possible. AI learns from this and gets increasingly better, until human-in-the-loop is not needed anymore. Eventually, AI might be even better than humans at programming. As for nuclear power plant, I dont know, depends how reliable the system is.

    • @gordonramsdale
      @gordonramsdale Рік тому +5

      Except in 5 years, you might be saying the opposite. Humans introduce error inherently. Think how much better AI is now than it was programming 5 years ago, give it 5 more years, and writing human code will seem like the insecure risky option.

    • @Ivcota
      @Ivcota Рік тому +1

      @@gordonramsdale My take: A good chunk of software bugs exist because requirements were not refined well enough by the engineer breaking down the work. They make assumptions and write code that does something it shouldn't. With good testing no real bugs get into the system and we have modern compilers that remove the issues with syntax errors. AI coding will likely produces the same errors and make these types of assumptions humans make when working with poorly defined requirements.

    • @dblezi
      @dblezi Рік тому

      Nuclear power plants have a strict design and review process that is fully vetted. So i would not worry about this specialized software aka AI in this application.

    • @simonmeier
      @simonmeier Рік тому

      @@dblezi Hi, I think I understand what you are saying. But then again what does fully vetted mean in that context? We also have a review process where each Merge Request is fully vetted but still, errors can slip trough. AI MRs might slip through more easily.

  • @sortof3337
    @sortof3337 Рік тому +22

    surprise surprise, guy selling the shovel says gold rush is the best.

    • @ldandco
      @ldandco Рік тому

      Yep... noticed the same.

  • @cityofmadrid
    @cityofmadrid Рік тому +9

    Why hasn’t the “lecture” started saying “today we are gonna have my buddy which has an AI for programmers startup”, it would have saved me an hour of this info-commercial

  • @EyeIn_The_Sky
    @EyeIn_The_Sky Рік тому +7

    Guy introducing him: "Hey Kids, this guy is going to make sure that the cripppling debt that you and your parents undertook to send you to college was all for absolutely nothing thanks to his AI"

  • @ai_outline
    @ai_outline Рік тому +71

    Something I did not understand was how would Computer Science become obsolete? So okay, you replace programming with prompting. But who will develop all those magical models that you are prompting? Aren’t they built by computer scientists and SWEs?
    What I mean is, if you are bold enough to claim programming will become obsolete, then doesn’t that mean learning mathematics and physics would also become obsolete? Like I could just ask some AI model to develop what I need in the context of physics and mathematics… and won’t need to understand the dynamics of those sciences, I just need to know how to speak English and ask for something.
    Note: I actually can see programming becoming more automated. But Computer Science? I can’t see that happening… aren’t we supposed to understand how do computers and AI work? Should they be seen as black boxes in the future?
    Also, programming would still not be fully automated because it’s weird to believe that an ambiguous sequence of tokens (English language) can be mapped with precision to a deterministic sequence (code) without any proper revision by a human… what if AI starts to hallucinate and not align with human goals? At best we would create a new programming language that is similar to “Prompting”…
    What are your opinions on these?

    • @stefanbuica5502
      @stefanbuica5502 Рік тому +9

      My opinion is that before doing a ratinal action, there is an emotional action. So all decisions you can write on the prompt, cannot be accurate.
      My take is that technology will automate further and transform and humans will have the opportunity to use more of their creativity and thus becoming more human!

    • @algro9567
      @algro9567 Рік тому +8

      There are two main concepts that you need to wrap your mind around:
      1) Ease of use, 2) Programming as a tool
      When Welsh talks about 'the end' of programming, he means to future mass adoption of LLM models to program for them instead of programming themselves due to ease of use. Essentially, LLM's will be the new user interface for people to use programming languages, so the need for expert programmers will be limited to specialty roles in the future, like how can I write an API for LLMs to interact with or how can I make this LLM that checks that another LLM works properly?
      Obsolete is not the right word here, as you can see Welsh using copilot himself even though he is still technically a programmer. It's just the science of writing code by hand will be displaced by prompting to ask an AI to manipulate code for you. For now, you need to read the code the LLM wrote to use it, but in the future, it might as well be a magical black box that does x for you, testing and implementation included.
      Or in other words:
      LLM's are going to be easier to use than programming by hand, and LLM's will use coding as a tool instead of people. Computer science is then the art of getting better code from LLMs instead of getting humans to write code faster and better.

    • @tomashorych394
      @tomashorych394 Рік тому +3

      You are right. These people will still be needed. But AI might reduce the number of such positions down to

    • @jpcfernandes
      @jpcfernandes Рік тому +10

      Not only that, who develops all the connections between LLMs and all existing systems. Who will replace existing systems that nobody knows what are doing with systems that can use AI. In the short term at least, I foresee more programmers needed, not less.

    • @metadaat5791
      @metadaat5791 Рік тому +14

      I for one will be glad when the people who think that "programming sucks" and "no progress has been made in 50 years" will actually give up and leave the field, they have no idea what CS entails. Computer Science is about computer programming like Astronomy is about looking through telescopes.

  • @twoplustwo5
    @twoplustwo5 Рік тому +4

    🎯 Key Takeaways for quick navigation:
    00:00 🎙️ Introduction of Dr. Matt Welsh and his work
    - Introduction of Dr. Matt Welsh, who has worked on sensor networks and AI,
    - Discussion on the future of programming where AI writes code.
    02:16 💻 The problem with human programmers
    - Humans are not good at writing, maintaining, and understanding programs,
    - Despite 50 years of research, programming languages have not solved these problems.
    04:04 📜 Evolution of programming languages
    - Overview of the evolution of programming languages from Fortran to Rust,
    - Discussion on the difficulty of understanding and writing programs.
    06:24 🤖 The rise of AI in programming
    - Introduction of GPT 4 model and how it is used to write code,
    - Discussion on the potential of AI in replacing conventional programming.
    10:31 🔄 Shift in programming paradigm
    - Prediction of a shift in programming paradigm where language models replace conventional programming,
    - Introduction of CoPilot and its impact on programming.
    14:47 💰 The cost of replacing human developers with AI
    - Calculation of the cost of replacing a human developer with AI,
    - Discussion on the potential impact on the industry.
    20:24 🚀 The advantages of AI over human programmers
    - Discussion on the advantages of AI over human programmers,
    - Prediction of a radical change in the industry due to AI.
    22:48 🔄 The impact of cutting humans out of the loop
    - Discussion on the impact of cutting humans out of the programming loop,
    - Speculation on the future of software development and product management.
    23:18 🤖 The future of software teams
    - The potential structure of future software teams with AI code generators,
    - The changing role of humans in code review and maintenance.
    25:10 🌐 The sudden rise of AI
    - The sudden and unexpected rise of AI in programming,
    - Comparison of the evolution of AI to the evolution of computer graphics.
    27:46 📚 Changing perceptions of AI
    - The shift in societal dialogue about AI from a toy to a potential threat,
    - Discussion on the philosophical and moral questions posed by AI.
    29:04 💻 The evolution of programming
    - The evolution of programming from machine instructions to AI-assisted coding,
    - Prediction of a future where programming is skipped entirely in favor of direct computation by AI.
    33:44 🖥️ The natural language computer
    - Introduction of the concept of the natural language computer,
    - Discussion on the potential of this new computational architecture.
    35:09 🚀 Fixie startup pitch
    - Introduction of Fixie, a startup focused on making it easy to go from data to a live chat bot,
    - Discussion on the importance of integrating natural language and programming language.
    42:07 🎓 The future of computer science education
    - Speculation on the future of computer science education in light of AI advancements,
    - Discussion on the potential for AI to expand access to computing.
    45:52 🌐 The model is the computer
    - Introduction of the phrase "the model is the computer",
    - Acknowledgement of the challenges and unknowns in the field of AI.
    47:18 🤔 The mystery of AI computation
    - The discovery of AI's ability to perform computation,
    - The potential of AI to replace human programmers.
    48:17 ❓ Audience Questions: Testing AI-generated code
    - Discussion on the challenges of testing AI-generated code,
    - The potential of AI-generated tests and the role of humans in the process.
    50:40 🚧 Future Challenges and Milestones
    - Discussion on the future challenges and milestones in AI development,
    - The potential of custom hardware and the need for formal reasoning about AI capabilities.
    54:23 🔄 AI Models Explaining Each Other
    - The idea of one AI model explaining another,
    - The struggle to understand AI models and the potential for AI to provide insights.
    55:44 📈 The Limits of Data and Computation
    - Discussion on the limits of data and computation in AI development,
    - The potential of untapped data sources and the need for more transistors.
    58:34 🧑‍💻 The Future of Software Engineering
    - The potential for future software engineers to be more effective,
    - The need to change the relationship between humans and software development.
    01:02:20 🌌 AI in Algorithm Development
    - The potential of AI in developing unique algorithms,
    - The potential for a symbiosis between humans and AI in algorithm development.
    01:04:12 🎓 The Future of Computer Science Education
    - The relevance of current computer science education in the future of AI,
    - The need for a shift in computer science education to accommodate AI advancements.
    01:06:04 🎉 Closing Remarks
    - The importance of understanding the mechanics behind AI models,
    - The need for critical thinking in AI and the potential of AI as a "magical black box".
    Made with HARPA AI

  • @KaLaka16
    @KaLaka16 Рік тому +117

    If programmers will get replaced, who will not get replaced? Programming is one of the most difficult fields for humans. If most of it can be automated, most of everything else can be automated too. This AI revolution won't affect just programmers, it will affect everyone. Programmers are more aware of it than the average person though.
    It might still take 20 years for us to see AGI. Probably way less, but nobody really knows.

    • @BARONsProductions
      @BARONsProductions Рік тому +38

      Manual labour isn't going to be replaced. Nurses, waitress, handyman, plumber... shit like that

    • @KaLaka16
      @KaLaka16 Рік тому +17

      @@BARONsProductions Eventually it is, unless we specifically want humans for the roles. Machines will do everything better once we get to artificial superintelligence. We will probably get it before 2040, but who knows, it could take way longer. Also, people need time to adapt to technology. When something is invented, it doesn't get immediately applied on the practical level.

    • @ataleincolor
      @ataleincolor Рік тому +16

      @@BARONsProductionsif anything manual labour is going to be replaced faster due to the repetitiveness of their roles.

    • @Nobodylihshdheuhdhd
      @Nobodylihshdheuhdhd Рік тому +8

      ​@@BARONsProductionsthose jobs are more likely to be replaced than programmers

    • @dineshbs444
      @dineshbs444 Рік тому +22

      The physical labour will take more time. For that, actual physical robots should be built that won't be any good for like 10 years at least (I believe). Yeah the digital ones are ones that will take the hit first.

  • @alexforget
    @alexforget 10 місяців тому +1

    More data and transistors will help, but I think that better algorithm will help way more.
    We are continualy rebuilding the same thing and letting them unused.

  • @Rico.308
    @Rico.308 11 місяців тому +2

    Learning to code right now and I can definitely say this has not made me give up it only shows me the cool tools I will one day be able to build.

  • @manabukun
    @manabukun Рік тому +99

    Back in the real world, you still need to double check the code generated by copilot which often is wrong. I'm not sure if I'm bad at using copilot or the people using it are simply not checking what has been generated.
    Not to mention, none of the large companies are willing to use a version of copilot that allows it to send the learned data from their private repos back home for obvious reasons.

    • @Peter-bg1ku
      @Peter-bg1ku Рік тому +28

      that's the problem I find with AI generated code. You have to verify it, which is a task that takes as much, if not more effort that writing the code by hand.

    • @derekcarday
      @derekcarday Рік тому +1

      @@Peter-bg1kuwrong

    • @derekcarday
      @derekcarday Рік тому +2

      wrong

    • @Peter-bg1ku
      @Peter-bg1ku Рік тому +1

      @@derekcarday what do you mean?

    • @derekcarday
      @derekcarday Рік тому

      @@Peter-bg1ku that isn't the problem to worry about. We are so close to solving hallucinations.

  • @another_dude_online
    @another_dude_online Рік тому +9

    "The line, it is drawn, the curse, it is cast
    The slow one now will later be fast
    As the present now will later be past
    The order is rapidly fading
    And the first one now will later be last
    For the times, they are AI-changin'"

  • @sandrinjoy
    @sandrinjoy Рік тому +3

    That has been the most professional Ad Break I have ever seen in my life. HAHA

  • @MaxHeadroomGPT
    @MaxHeadroomGPT Рік тому +2

    I absolutely loved this presentation, however I will _vehemently disagree_ about his point at @47:25 ... *Programming does NOT SUCK; Programming IS FUN !!!* That is the difference between academic narcissists like this guy vs _Real Programmers._ (Yeah, I had to slap him in the face for that remark. He's not a true programmer. He's just a tool, like AI.)

  • @johnmamish3197
    @johnmamish3197 Рік тому +2

    The "analysis" that he does from about 17:50 - 20:00 is insanely naive. The idea that lines of code are something fungible and that 100 lines of GPT code are at all comparable to 100 lines written by an expert is asinine. Some 20 line snippets out there are worth $100k because they took weeks of sweat and tears from experts to work the kinks out of. Some 20-line snippets are the culmination of years of research that span disciplines. I've asked ChatGPT to write lots of C and Python; it isn't much better than a particularly stupid intern with an encyclopedic knowledge.
    This guy has a lot of very solid, intelligent work under his belt; he's too smart to be peddling this. I wonder if he has an ulterior motive....

    • @marioamatucci
      @marioamatucci Рік тому

      he runs an AI startup -somebdoy said in the comments idk

    • @johnmamish3197
      @johnmamish3197 Рік тому

      @@marioamatucci haha yeah I know; that last sentence was sarcasm

  • @manojbp07
    @manojbp07 Рік тому +11

    Time to shutdown CS50, since there is no point learning programming and the field in general anyway... Why will anyone pay taking an education loan to go through and get a useless degree...

  • @restingsmirkface
    @restingsmirkface Рік тому +22

    In almost all scenarios, AI represents an "it runs on my machine" approach to problem-solving - a "good enough", probabilistic mechanism.
    But maybe that is sufficient. We get by in the world despite uncertainty at the quantum level... maybe once _everything_ is AI-ified, the way we think about the truth will shift just enough, away from something absolute and concrete, to something probabilistic, something "good enough" even if we'll never be sure it's at 100% outside of the training-sets run on it.

    • @bens5859
      @bens5859 Рік тому +3

      > the way we think about the truth will shift just enough, away from something absolute and concrete, to something probabilistic, something "good enough"
      This is a deep insight. Many great minds of the western philosophical tradition have expressed this view in one way or another. In fact it's the school of thought known as American Pragmatism (which is known as the quintessentially "American" school, in philosophy circles) which most closely aligns with this view.
      Some pithy quotes about truth from the most notable figures in Pragmatism:
      - William James (active 1878-1910): “Truth is what works.”
      - Charles Sanders Peirce (1867-1914): “The opinion which is fated to be ultimately agreed to by all who investigate is what we mean by the truth.”
      - John Dewey (1884-1951): “Truth is a function of inquiry.”
      - Richard Rorty (1961-2007): “Truth is what your contemporaries let you get away with saying.”

    • @lubeckable
      @lubeckable Рік тому

      dockerize AI problem solved xd lmao

  • @GigaFro
    @GigaFro Рік тому +17

    I believe that in the short term there will be a shift in both time and focus from coding a solution to the architecture design, testing, and security of that solution.

    • @christislight
      @christislight Рік тому +1

      Architecture is KEY

    • @sourenasahraian2055
      @sourenasahraian2055 Рік тому +2

      Architecture is nothing but the applications of known patterns and reasoning/ tradeoffs . I use chatGPT for my architecture challenges all the time and I say though it’s not perfect, it’s already doing a decent job . It will get even better , exponentially better .

    • @Gauravkumar-jm4ve
      @Gauravkumar-jm4ve Рік тому

      agreed

  • @Harshhasteer
    @Harshhasteer 3 місяці тому +2

    He has a product to sell. He will say everything to persuade you. Learning any programming language will still be a Win -Win situation in future. Take a shot my friends. Do it for the sake of your curiosity and your interest in learning to code.

  • @ivan88buble
    @ivan88buble Рік тому +3

    Great sales presentation!

  • @ZaidMarouf-q9e
    @ZaidMarouf-q9e Рік тому +14

    That's a pretty funny and bold claim when a lot of AI systems can't count the number of words in a paragraph excerpt correctly.

    • @ksoss1
      @ksoss1 Рік тому +1

      Can you? All the time? What would it take for you to do it perfectly each time? What would it take for the AI system to do it perfectly every time? Interesting times ahead...

    • @ZaidMarouf-q9e
      @ZaidMarouf-q9e Рік тому

      @@ksoss1 As far as I'm aware, there looks to be a problem that chatbots seem to have where in terms of computational speed causes them to skip some instructions of code that's not too dissimlair when setting compiler execution speed to a certain level that results in some unwanted glitches like in assembly language programs via accidental instruction skips.

    • @juleswombat5309
      @juleswombat5309 Рік тому

      You are referring to simple LLMs, the proposed architecture is LLMs+ Compute Tools (c.f. Calculators etc) Just as an normal human can answer 3x 9 =27 off the top of their head, they would need pencil and paper, or just use a a calculator, to answer what is 4567 x 2382?

    • @ZaidMarouf-q9e
      @ZaidMarouf-q9e Рік тому

      ​@@juleswombat5309So, what does that make my testing of Bing AI's capabilites, built on top of OpenAI tech, in regards to a pretty simple task on a pretty short excerpt of word counting? Because I'm pretty sure Microsofts' proprietary AI app doesn't fall in the category of being powered by a simple LLM.

    • @juleswombat5309
      @juleswombat5309 Рік тому

      @@ZaidMarouf-q9e It means you have not tested against an LLM combined with access to relevant tools.

  • @beMUSICaI
    @beMUSICaI Рік тому +18

    The problem with LLM is that they cannot solve independently computationally irreductible problems. So there is interaction between classical computation and LLM in symbiosis. So I do not agree that computer languages should disappear completely. Also right now checking google is much more energy efficient than prompting chatgpt. So there are the energy efficiency issues. When you build apps with AI somebody has to pay the token bill.

    • @Fs3i
      @Fs3i Рік тому

      > The problem with LLM is that they cannot solve independently computationally irreductible problems
      It can write programs that do. For example, this is what the current GPT-4 can do on the normal openai chat website (can't post url to conversation because YT spam filter). I've asked "Hey there! Can you give me a word which has an MD5 hash starting with `adca` (in hex)?"
      I've chosen adca, because those were the first four hex letters in your name. This is likely not in its training set.
      The model was "analyzing" for a bit, and then replied
      > A word whose MD5 hash starts with adca (in hexadecimal) is '23456'. The MD5 hash for this word is adcaec3805aa912c0d0b14a81bedb6ff. ​​
      You can see how it answered it, it wrote a python program to solve it. I didn't need to prompt to do it, it knows - like a human! - that it should pass these classically computationally irreducible problems off to a classical computer.
      And yes, there's still programming involved, but like, my 16 years of experience with computer science didn't help me at all, except in terms of coming up with an example.

    • @BattleBrotherCasten
      @BattleBrotherCasten Рік тому +1

      No code applications getting better and A.I. getting better looks like a programless future is really close or a near programless one at least. Eventually A.I. will be better,faster and cheaper than any human by a large margian.

    • @icenomad99
      @icenomad99 11 місяців тому

      What you forgot to add is "YET".

  • @artemkotelevych2523
    @artemkotelevych2523 Рік тому +26

    The thing with LLMs is that it's just another level of abstraction. If you take a product documentation as a highest level of abstraction to describe how that product should behave, to have it correct you still need to describe all the corner cases and the way some things should be done, you can't just say "this page should show weekly sales report". And all this documentation might not be easy to understand. Code is just a very precise way to describe behavior.

    • @wi2rd
      @wi2rd Рік тому +1

      Do you trust close friends who know you well to give you a decent result when you ask them "this page should show weekly sales report"?

    • @artemkotelevych2523
      @artemkotelevych2523 Рік тому +3

      @@wi2rd you understand how documentation work right?

    • @MaiThanh-om5nm
      @MaiThanh-om5nm Рік тому +1

      From your logic, it's impossible for non-technical project manager to instruct developers on how the application should be programmed.

    • @MaiThanh-om5nm
      @MaiThanh-om5nm Рік тому +1

      AI can ask clarification questions to make the requirements clearer. It's can do long-term back-and-forth conversations with the whole context of the project.
      It's not just inputting a single prompt and the project is done

    • @marcelocruz7644
      @marcelocruz7644 Рік тому +2

      @@MaiThanh-om5nm Non-technical and people with low abstraction for the field usually will instruct on how something will behave instead of how something is to be programmed.
      Also project managers manage the team time etc, architects, developers and engineers with know-how to translate expected behaviour from clients to technical field are the ones who instruct how it's programmed. Lots of developers are able to understand what a client want without an intermediate, because developers are system users as well and know what could be better on apps and what they'd like to see, expect etc, also you can see freelancers and github projects all around without a project manager etc, confirming they would understand it anyway with or without those helpers.

  • @kenjimiwa3739
    @kenjimiwa3739 Рік тому +18

    There's SO much to SWE jobs aside from just coding, like collaborating with product and design, understanding business needs, convincing management that something is worthwhile. Additionally, someone will need to review the AI code, deal with legacy code, set up services, etc.. I view these AI tools as tools that will make everyone's job more productive but not necessarily replace.

    • @LupusMechanicus
      @LupusMechanicus Рік тому +4

      The cope is real.

    • @TomThompson
      @TomThompson Рік тому +10

      ​@LupusMechanicus Anyone who thinks an AI can help anyone write a program to solve problems hasn't worked in the field at all. More often than not a person will bring a problem and their ill conceived solution. Then the experienced software engineer will discuss the original problem, propose alternate solutions, ideas that still solve the problem but better make use of resources (memory, time, etc) and provide a useful and intuitive workflow. That IS part of being a SWE and if you think an AI is going to do that naturally and simply you are out of touch. Say others are "cope" if you want, but perhaps educate yourself more than watching a UA-cam video by a guy desperate to sell is product.

    • @LupusMechanicus
      @LupusMechanicus Рік тому

      @@TomThompson Bruh try to build a house profitably with just your fingers. You need a saw and air hammers, lifts and screw guns. Thusly you can now build a million dollar house with 8 people in 6 months instead of 40 in 1 year. This will eliminate alot of employees, thusly it is cope.

    • @TomThompson
      @TomThompson Рік тому +10

      @@LupusMechanicus You again miss the point. No one is saying the industry won't be affected, it will. What we are saying is it is uninformed to say the industry is "dead" because of AI. Just look at the history. The job has gone from being primarily hardware based (setting tons of switches) to using a machine level language (assembly). Then gradually to higher level languages (fortran, cobol, c, etc). Then we have gone through adding IDE and lint, and code sharing, and review systems. The introduction of AI will not replace everything and everyone. It will be a tool that will make the job easier. And yes, it could easily mean a company that currently has 100 engineers in staff can gradually cut back to 10. But it also means other jobs will open up in areas such as making these AI and making systems that make using but easier.
      The invention of the hammer didn't kill the home building industry.

    • @2011fallenstar
      @2011fallenstar Рік тому +1

      There won't be legacy code anymore, having a computer that writes code, so ppl will understand the computer's code sounds pointless. Do you need to know your router's code in order to use the wifi?

  • @markotikvic
    @markotikvic Рік тому +3

    14:08 is just incorrect. Look it up kids.
    This is a very dishonest presentation. To say that in 50 years there have been no advancements in programming languages, and then to cherry pick some ancient and esoteric languages and Rust...
    I mean, give me a f-ing break. Who is going to believe you that APL was ever designed for productivity and readability.
    Well, no one respectable would do that. And it just happens to be that he's got this new AI-based company... Do with this information what you want.

  • @suryamanian8492
    @suryamanian8492 Рік тому +45

    the ‘gotch’ in using AI is we need to know if the code is right or not
    so we need to know basic stuffs

    • @augustnkk2788
      @augustnkk2788 Рік тому +6

      For now, eventually it will be able to write perfect code on its own, reducing the need from 100 software engineers to 5-10

    • @Pavel-wj7gy
      @Pavel-wj7gy Рік тому +1

      What is the basic stuff in a pyramid of abstractions? Assembly code?

    • @tiagomaia5173
      @tiagomaia5173 Рік тому +4

      @@augustnkk2788 I don't think it'll replace all good software engineers so soon. And I really don't think it will get to a point of always generating perfect code.

    • @augustnkk2788
      @augustnkk2788 Рік тому

      @@tiagomaia5173 Itll replace maybe 90%, some still need to make sure its safe, but no one will work in wed dev f. ex; all tech work is gonna be about AI, unless the governemnt steps in. I give it 10 years before it can replace every software engineer

    • @dekooks1543
      @dekooks1543 Рік тому

      you have the confidence of someone who doesn't know what they're talking about

  • @1dosstx
    @1dosstx Рік тому +5

    38:17 what is considered kid safe? Based on what milestones? Emotional ? Psychological? Etc? You need to know what child development sources are peer reviewed , etc. yes you could ask the AI for those but then you’d need to ensure they were not hallucinations. Etc.

  • @regularnick
    @regularnick Рік тому +5

    19:26
    > "I've been coding whole day", but you threw away 90%
    Oh, that's pretty bold claim to say, that with chatGPT you will get correct code snippet first try, without any need to prompt it with like 20 more messages clarifying and making sure id doesn't confuse language, paradigm etc.
    You should not compare "clear code" of SWE with GPT tokens, because you are guaranteed to spend many more than ideal. Considering they are dirty cheap, this may not be the problem though

  • @DJPapzin
    @DJPapzin Рік тому

    🎯 Key Takeaways for quick navigation:
    00:00 🎙 *Introduction to Dr. Matt Welsh's background and focus on sensor networks,*
    - Dr. Matt Welsh's expertise in sensor networks and distributed systems,
    - Mention of various applications for sensor networks, including monitoring volcanoes and bridges,
    - Introduction to the idea of computers writing code.
    01:23 🖥 *The fundamental assumption of computer science and its limitations,*
    - Computer science's core idea of translating concepts into runnable programs,
    - The challenge of humans writing, maintaining, and understanding code,
    - Claim that 50 years of research in programming languages hasn't significantly improved the situation.
    03:38 🚀 *Historical overview of programming languages' complexity,*
    - Examples of code in Fortran, Basic, APL, and Rust,
    - Difficulty in understanding and comprehending code across different programming languages,
    - The idea that current programming languages are still hard to understand.
    06:54 🤖 *The emergence of AI-powered code generation,*
    - Introduction to using AI models like GPT-4 for generating code,
    - Example of using plain English to instruct GPT-4 to summarize a podcast transcript,
    - Emphasis on the ease of instructing AI models in natural language.
    09:08 💡 *Benefits and subtleties of AI assistance in coding,*
    - Benefits of AI models like CoPilot in keeping developers in the zone,
    - How AI can speed up code writing and reduce distractions,
    - Mention of ChatGPT's ability to understand APIs and programming libraries.
    11:55 💰 *Cost-effectiveness of AI versus human developers,*
    - Cost comparison between employing human software engineers and using AI,
    - Consideration of AI's potential to replace human developers,
    - Advantages of AI in terms of consistency and speed.
    16:39 🤯 *The potential impact of AI on the software development industry,*
    - The prediction that AI could radically change the software development industry,
    - The idea that AI might make programming more accessible,
    - Speculation about the future of employment in the software development field.
    23:18 🤖 *The future of software development may involve product managers translating requirements for AI code generators. Humans may review AI-generated code differently than traditional code.*
    - Product managers translating requirements for AI code generators.
    - Distinct review process for AI-generated code.
    - Evolution of software development practices.
    25:10 🧠 *The rapid advancement of AI models like ChatGPT has startled many due to the seemingly overnight transformation of the AI field.*
    - AI's rapid advancement, akin to an overnight transformation.
    - Contrast with gradual progress in other fields.
    - Impact on perception and expectations of AI.
    26:53 🌐 *The societal dialogue around AI has shifted from viewing it as a mere toy to recognizing its potential to significantly impact society and even pose existential risks.*
    - Shift in societal perception of AI.
    - Hubert Dreyfus's book and historical views.
    - Contemporary concerns about AI's societal impact.
    29:04 💻 *The evolution of programming, from early manual machine instructions to AI-assisted coding, is discussed, with a prediction that AI may ultimately replace traditional programming.*
    - Evolution of programming, from manual machine instructions to AI assistance.
    - Prediction of AI potentially replacing traditional programming.
    - Considerations for the future of software development.
    33:44 🏢 *The concept of a "natural language computer" is introduced, where programming becomes a matter of instructing AI models in natural language, leading to a new computational architecture.*
    - Introduction of the concept of a "natural language computer."
    - AI models as a new form of computational architecture.
    - Implications for the future of programming.
    35:09 🚀 *The speaker briefly discusses Fixie, a startup focused on enabling developer teams to create custom chatbots using AI, highlighting the need for good programming abstractions in natural language and programming languages.*
    - Introduction to Fixie and its focus on chatbot development.
    - The importance of programming abstractions for natural language and programming languages.
    - The goal of simplifying the process of building AI-driven applications.
    38:47 🎯 *AI.JSX, a framework for building LLM-based applications, is presented, emphasizing the ease of composing operations and integrating natural language and programming language.*
    - Introduction to AI.JSX as a framework for LLM-based applications.
    - Simplification of composing operations and integrating natural and programming languages.
    - Advantages of AI.JSX in building AI-driven applications.
    41:08 🎙 *A demonstration of real-time voice interactions with a chatbot is shown, highlighting the importance of streamlining data passing between AI systems for faster responses.*
    - Real-time voice interaction demonstration.
    - Streamlining data passing for faster responses.
    - The potential of voice interactions with AI-driven applications.
    44:25 🌍 *The speaker discusses the potential for AI to democratize access to computing, making it accessible to people without formal computer science training.*
    - Democratization of computing through AI.
    - Expanding access to the power of computing.
    - The role of AI in reducing barriers to technology.
    46:20 🧩 *Despite the optimism, the speaker acknowledges the challenges, stating that no one fully understands how language models like ChatGPT work.*
    - Acknowledgment of challenges and limitations.
    - The mysterious nature of language models.
    - The need for continued research and understanding.
    46:49 🤖 *Discovering latent abilities of language models*
    - Language models can perform computation when prompted with specific phrases like "let's think step-by-step."
    - The discovery of these latent abilities was empirical, not part of the model's training.
    - The potential for language models to handle computation tasks is a fascinating development in AI.
    48:17 🧪 *Testing AI-generated code*
    - Testing AI-generated code that humans may not fully understand poses challenges.
    - Writing tests can be easier than writing the code itself but still requires human expertise.
    - The role of humans in testing AI-generated code is an open question in the field.
    50:40 🌐 *The future of AI in software engineering*
    - Exploring the milestones and challenges in the future of AI in software engineering.
    - Emphasizing the need for more transistors and data to improve AI models.
    - Highlighting the importance of reasoning about AI capabilities in a formal way.
    55:44 🤖 *AI models explaining each other*
    - Discussing the possibility of one AI model explaining another.
    - Exploring the challenges and potential benefits of AI models understanding and explaining each other's processes.
    - Highlighting the ongoing research in explainability for language models.
    56:12 💾 *The limits of data and computation*
    - Addressing the issue of diminishing returns with more data in AI models.
    - Considering the challenges of obtaining vast amounts of data for training.
    - Speculating on potential solutions, such as custom hardware and leveraging untapped data sources.
    59:02 🧠 *The human aspect in software engineering*
    - Discussing the human qualities and knowledge that may remain essential in software engineering.
    - Reflecting on the unique aspects of human expertise that may not be captured by AI models.
    - Contemplating the future role of software engineers in a changing landscape.
    01:04:12 🎓 *Evolution of computer science education*
    - Considering the relevance of traditional computer science education in a future with advanced AI.
    - Emphasizing the need for academic programs to adapt to the changing landscape.
    - Encouraging critical thinking and understanding of AI models in computer science education.

  • @annoorange123
    @annoorange123 Рік тому +38

    Last week i was working on some rust code that had to deal with linux syscalls, chatgpt gave incorrect data on every single question. There are limits to how well trained it can be based on the amount of data it was trained on. It's good for common problems, not so in a niche environments that real SWE deal with daily. It just makes JS bootcamps obsolete.
    Now imagine if plane control computers were used to generate all the code, as he suggests, without a person in the loop. Good luck flying that. Until AGI is here, we can't talk about any of this

    • @danri9839
      @danri9839 Рік тому

      It's true but for now. What about the evolution of these models over 5, 10, or 15 years. BTW, no model yet receives data directly from the physical world. And sooner or later, it will heppen.

    • @annoorange123
      @annoorange123 Рік тому +2

      @@danri9839 it's a fuzzy black box system. Until we have AGI it's just marketing hype that they are smart, while in reality precision isn't there if there was little training data

    • @not_zafarali
      @not_zafarali Рік тому +1

      ​@@danri9839 The problem is that large language models get data from the world but can't figure out what's useful and what isn't, what's keep and drop on their own what's useful and what isn't. Right now, humans decide for them. If we want models to make their own choices, they need to understand what's right and wrong, which in itself is already complex even for humans in a lot of cases.

    • @dekooks1543
      @dekooks1543 Рік тому

      you're the 927483927839273 I've seen who wrote this comment. You sound like the crypto bros who promised an unprecedented economic crash and how the blockchain would revolutionise everything... and yet.

    • @josephp.3341
      @josephp.3341 11 місяців тому

      I tried to generate Rust code for a relatively trivial problem (8puzzle) and its solution was wrong and didn't compile. I fixed the compilation errors and the solution was still terrible because it used Box::new(parent.clone()) every time a child node was generated (very, very inefficient). I had already written the code myself so it was easy to spot these errors but I really can't see how chatgpt is supposed to write code better than humans...

  • @thomasr22272
    @thomasr22272 Рік тому +54

    My main question is: in which of the LLM ai startups is he an investor?

    • @RoyRope
      @RoyRope Рік тому +5

      crossed my mind lol

    • @rollotomasi1832
      @rollotomasi1832 Рік тому

      Please listen to the talk with an open mind, and face this was reality.

    • @hisham_hm
      @hisham_hm 8 місяців тому

      He literally says at the end: he's pitching his own AI startup.

  • @TheGamerDad82
    @TheGamerDad82 Рік тому +23

    Well, generative models might eventually replace some software engineering interns at companies but as a lead developer / architect I don't see my job endangered yet.
    Software development and designs is not only about writing code. Writing code is the easy part - understanding the problem, both functional and non-functional requirements, the operating circumstances and making design decisions and compromises when needed is a whole different dimension.
    I can already see a lot of startups failing miserably by trying to develop software with a few low cost developers armed with some generative AI tool. This is like "we don't need database experts, we have SQL generators" all over again... 😂

    • @bdjfw2681
      @bdjfw2681 Рік тому +2

      true dude

    • @sgramstrup
      @sgramstrup Рік тому +4

      Doctors are also claiming they can do more, but AI have already beaten top doctors in diagnosing certain illnesses. I think you'll wake up very soon. No offence oc..

    • @farzinfrank2553
      @farzinfrank2553 Рік тому

      I agree with you. Its making the coding much easier but analysis is still a challenge

    • @martinkomora5525
      @martinkomora5525 Рік тому +6

      @@sgramstrup so would you undergo surgery operated fully by AI tomorrow?

    • @Linters-uh1kk
      @Linters-uh1kk Рік тому +2

      These were my thoughts too... I recently started learning full stack. I don't think Dr. Welsh understood fully the way LLM works and how reliant they are on humans. Any reasonable business should feel worried if a "code monkey" was writing random lines without a way to know specifically what was happening. Problems of the future are likely related to security, not necessarily deploying code that works. We need developers with experience and actual understanding of the code and how it interplays with the system. Other comments above mention programming languages with specific use cases such as memory, NOT necessarily human readability. This reminds me futurists who believed teachers and instruction would be outright replaced by multimedia in the 60's and 70's. The Clark and Kozma debates are a famous example of this. I wonder how many people dreamed of being a teacher and gave it up from fearmongering? The fact is context is everything. Humans are making the context, and we will be doing so for a long time. A threat to this is AGI, not a brain in a jar which is generative AI. If I were in computer science I would take what Dr. Welsh says with a grain of salt. Instead, think about what kind of problems are going to be introduced with AI and understand it as deeply as possible. With every innovation, new problems are born.

  • @SINC0MENTARI0S
    @SINC0MENTARI0S Рік тому +4

    This reminds me when the clowns of decades ago made the prophecy that Lotus was going to replace COBOL developers. The argument "Oh, but now it's for real" just won't fly.

  • @davidsmind
    @davidsmind Рік тому +7

    "react for building llm applications"
    I cackled for about a minute

  • @Tetsujinfr
    @Tetsujinfr Рік тому +32

    We are not yet to the stage where one can ask chatGPT4 to write chatGPT5, at least as far as I know. Also, if you ask chatGPT4 to produce the model of the physical world unifying general relativity with the standard model, you will notice it struggles quite a bit and does not deliver. Those models cannot just create new knowledge, or not in a scientific proven way. Maybe through randomness they will to some extend though, but let's see.

    • @christislight
      @christislight Рік тому +5

      You need code to build. God coded humans, we code businesses. Just using language to create code doesnt mean coding is obsolete

    • @RateOfChange
      @RateOfChange Рік тому +5

      AIs are making some breakthroughs in science and math already. Look up the new matrix multiplication algorithm discovered by an AI.

    • @ingmarxhoftovningsr6144
      @ingmarxhoftovningsr6144 Рік тому

      Well, the code for chatGPT5, at least for the model as such, is likely not very complicated, so chatGPT4 might be able to write it. Someone has to tell it what the program should do, though. At this point, that would be a human.

    • @dblezi
      @dblezi Рік тому

      That’s because there has to be an overseer. Like someone else stated God created mankind and this ecosystem. Men manipulated and created based on this ecosystem. The creations of Men didn’t invent themselves. The best special software of AI can do is create derivates of digital data that is digital known to said AI model. Look at art for instance many AI models steal and scan what mankind created to make a model. An AI model would never create a Star Wars, blade runner or mass effect story/universe out base coding blocks which dictate how the software runs. AI needs to plagiarize to create. It’s just that these plagiarized derivates with procedural generation full many normies into thinking it’s so great.

    • @ingmarxhoftovningsr6144
      @ingmarxhoftovningsr6144 Рік тому

      @@dblezi could you please clarify "has to be"? Where does that knowledge come from? What's the logic explanation? What does "an overseer" mean? What does "an overseer" do, in practical terms?

  • @ScreenProductions
    @ScreenProductions Рік тому +8

    Since 8 days later GPTs killed his startup, I propose a few new titles for this video:
    Irony
    What goes around comes around
    Karma’s a GPT

  • @Zale370
    @Zale370 Рік тому +7

    LLMs aren't intelligent or autonomous AI; they have clear and significant limits. While they can improve productivity, the idea that they could replace a team of smart humans is unrealistic.This is coming from a guy who uses LLMs daily and extensively.

    • @vampyrkiller
      @vampyrkiller Рік тому

      i dont know everyone is hyped about agi and q* from open ai

    • @DarkStar666
      @DarkStar666 10 місяців тому

      Same experience here, super skeptical of ‘Devin’ also. This take is hot garbage. Maybe someday but I think we already know that the Transformer architecture is NOT going to get us there. Mamba/S6 alone won’t either. Lots of hard problems to solve yet.

    • @Zale370
      @Zale370 10 місяців тому

      @@DarkStar666 Devin is just crew ai with a fancy UI, and the company is just jumping on the hype train.

  • @CaptTerrific
    @CaptTerrific Рік тому +15

    The biggest red flag was there at the start: the beginning of the video description says that gpt can do general purpose reasoning. It's neither general purpose nor can it reason

    • @MinecraftN3rd
      @MinecraftN3rd Рік тому

      Hmmm I think It is both general purpose and can reason

    • @dekooks1543
      @dekooks1543 Рік тому

      then you should go to a mental health professional

  • @coltennabers634
    @coltennabers634 Рік тому +5

    19:00 Lines of code is a vanity metric that does not translate to value... this guy is definitely in management

  • @christislight
    @christislight Рік тому +3

    I’m an AI Business Owner - It’s great to know how to program even if programming is obsolete due to AI, you can use code as an asset. I created a model that uses Python to solve any math equation. Could’ve used Google, but using Python makes the solution more accurate and near instantaneous.

    • @aqf0786
      @aqf0786 Рік тому

      Can you share a reference to your model?

  • @aungthuhein007
    @aungthuhein007 Рік тому +37

    It's nice of David to let the students have a taste of silicon valley's sensationalism and the outlandish "predictions" of where the future is headed. "This is the only way everyone will ever interact with computers in the future." Even if that turns out to be true, it is soooo far away from the real world right now that it doesn't take a real computer scientist to realize this is delusional. That's not even to mention the question of whether or not we *should* be heading in that direction as a society. Not much more than silicon valley's way of raising funds for more products/services, the vast majority of which fade away after some time.

    • @bdjfw2681
      @bdjfw2681 Рік тому +2

      feel the same. i just think ai is dump and keep dump in at least 100 years, or longer , not in my life time or even not before human extinct will ai become that smart. maybe only advanced alien can actually build that levels of ai.

    • @hamzamalik9705
      @hamzamalik9705 Рік тому +10

      5 years down the line your comment will seem silly !

    • @bdjfw2681
      @bdjfw2681 Рік тому

      if 5 years later AI could be so powerful that my comment seem silly , i am actually happy with that. i do hope tech advanced fast but at the same time Very pessimistic about the speed of technological development@@hamzamalik9705

    • @devsquaredTV
      @devsquaredTV Рік тому +3

      what floored me was his claim that no one could write an algorithm in a programming languauge that is equivalent to his prompt string.

    • @user-oz4tb
      @user-oz4tb Рік тому +1

      For real, I am on my 2nd big tech job since the ChatGPT rise and of all my team members I am the only person who uses it.
      In production i saw some ML models in:
      - adtech for improving ads suggestions. They were there for more than last 6 years, long before the "AI will do everything soon" hypetrain. They were, as i've said, only improvements above the not ML written ad rotation core and didn't generate much money for the company at all.
      - security SIEM systems used for threat detection on users laptops, but in reality it was doing more harm than profit, like banning our git-lfs executables, lol.
      - I saw some LLAMA model, trained for a company internal domain (code, wiki etc), but its usefulness was a joke, to be honest.
      Also I saw an arise of infinite amount of startups with AI solutions for everything after the experts started to promote "Everything as a model" idea. They were trying to solve with ML such problems which never required an ml solution. Looked like every startup, which used to be a crypto startup now is an AI startup or has something from AI word cloud in its name.
      I see all the experts predicting obsoletion of software development as a job in 5-10 years, but I see literally close to none signs of GPT models in production, left alone profit from its usage. Maybe it is used widely in another tech domains? Maybe in 5 years situation will drastically change? Well, maybe, who knows. But now for me it does not look like more than another race for a venture capital.
      P.S.: oh, yeah, ChatGPT-4 is insanely good for catching missing Lisp parenthesis, btw.

  • @ChinchillaBONK
    @ChinchillaBONK Рік тому +18

    The problem with LLMs in Generative AI is that in 5 years time, the AI will be learning upon large percentage of data that other AI have generated and then even longer down the road, how do we know what is real or generated data?
    We still need humans to understand what is fake. The creativity from AI must make sense if the goal for that specific data requires such precision like in the medical industry or other industries for lives are at stake.

    • @verigumetin4291
      @verigumetin4291 Рік тому +2

      It's been established already that synthetic data is superior for training LLMs, compared to raw human data.
      I mean, think about it, does the open web not have data that is bad? Well ChatGPT was trained don it and it does pretty well. Synthetic data has been proven already to be superior to that, so simply by training the next iteration of the LLM on synthetic data is going to get us to the next step.

    • @ChinchillaBONK
      @ChinchillaBONK Рік тому

      @@verigumetin4291 What about fake news or lobbyist outlets? Or books/art generated on someone else's copyright? What if bad actors create fake generated data for their own nefarious purposes? Then these scammers or spammers constantly create these fake data? You can already make a fake Obama dancing "Livin' La Vida Loca". How would the AI know it's real or fake once these generative AI become more skilled? Years down the road, our newer AI LLM may not know the difference and use these data to train. We already got bad science news regarding mask wearing and vaccinations. This will become worse when the less than average intelligent human believes in nonsensical data in a world where such synthetic data will be practically spam.

    • @aligajani
      @aligajani Рік тому

      GPT 4 is getting dumber according to Stanford Research.@@verigumetin4291

    • @tybaltmercutio
      @tybaltmercutio Рік тому +3

      ⁠@@verigumetin4291Do you have any source for that? Preferably a peer-reviewed paper rather that some „research“ by Google or OpenAI published by themselves.
      I am asking because what you are saying does not make any sense to me.

    • @luzak1943
      @luzak1943 Рік тому

      ​@tybaltmercutio I think he is talking about the Orca 2 paper

  • @projectcontractors
    @projectcontractors Рік тому +3

    "it's funny you know all these AI 'weights'. they're just basically numbers in a comma separated value file and that's our digital God, a CSV file." ~Elon Musk. 12/2023

  • @kirills9637
    @kirills9637 8 місяців тому +1

    In short, the guy is Chat GPT seller. Remind me a travelling salesman explaining another housewife that she won't survive without his brand new vacuum cleaner 😂

  • @MaxNerius
    @MaxNerius Рік тому +3

    > It's 2023, and people are still coding in C -- that should be a federal crime
    Not because it's their language of choice, though. Think embedded systems: Even if you want to use Rust or any other language with training wheels on it (metaphorically speaking), the platform you're developing for may not be a targeted by it. Or worse, maybe your toolchain needs to meet certain criteria to pass a regulatory body of sorts.
    Disclaimer: I'm not writing this because of confirmation bias or me being an offended C programmer (I'm working with Java). Please don't get me wrong: I understand that Dr. Welsh didn't intend to oversimplify things, though he generalizes a bit too much imho. It's putting a whole industry in a really bad light and it's just like saying: "if using C is bad because bad behaving C programs have killed people, then, by this logic, we shouldn't be riding trains or going by car anymore".

  • @gaditproductions
    @gaditproductions 10 місяців тому +3

    Giving this lecture to room full of students paying 100k a year for a CS degree is insane...

    • @Loyal_Lion
      @Loyal_Lion 9 місяців тому

      Why?

    • @rogerh3306
      @rogerh3306 4 місяці тому

      @@Loyal_Lion Cus his lecture was a whole sales pitch bro. 47:25 bashing the hell out of programming to empower his own LLM service. F*cking criminal to say such a thing in front of CS students.

  • @ergestx
    @ergestx Рік тому +17

    The speaker here is pushing for a paradigm of “LLM’s as a compute substrate” and English as a programming language” which I definitely see the value of. Certain programs would be easy to express in English but nearly impossible to program using traditional languages. Of course the paradigm does happen to benefit his startup but to claim that this will spell the end of software engineering as we know it is absurd.
    First of all this requires disregarding decades of research into system design principles which call for modularization and separation of concerns, in order to make systems more legible, easier to debug, easier to maintain. I wouldn’t want key operational software that’s an inscrutable black box that requires “magical” phrases to do the right thing.
    Just because an LLM is writing the code this doesn’t invalidate the need for proper design. Software engineers are taught design principles for a reason; not just to make their code easier read, and understand by humans, but also to make it easy to debug, extend and adapt.
    Second, just because it’s easier to program now using just English it doesn’t mean that software engineers are no longer needed. How would you evaluate the correctness of the software generated by the LLM? How would you improve its performance? That requires understanding logic, probability, algorithmic complexity, algorithmic thinking, and a plethora of other software engineering skills taught in college.
    In my opinion it makes the need for highly trained engineers even more important

    • @janekschleicher9661
      @janekschleicher9661 Рік тому

      Indeed, especially as we already have at least 2 completely (very close to plain) english programming languages around for > 50 years that are widely used: SQL and COBOL.
      For small examples, both are great to write, understand and efficient.
      But for real world problems, both are complicated, hard to understand and need a computer science education (for at least to some extend), to get your job done.
      We even deprecated COBOL what is as close as possible to english, especially as it gets very verbose and so harder to understand again compared with more formal languages.
      The problem is not writing the code, but being explicit enough so you really get what you want. And independent of technical constraints, the requirements engineering still is engineering and even if the output is plain english, just read any formal document and you'll find out, it's not simple english. That's true even for non engineering, like law, standardization documents, pharmaceutical documents or to come back to programming RFCs.
      There's probably a reason, why the presenter didn't show the prompt to write Conway's Game of Life via ChatGPT that doesn't involve using external knowledge already. Once you have to define it accurately, it's probably not much shorter than the Fortran or BASIC example and might even be less readable than the Rust version he showed there. The usual text book description are either using images to explain what's going on (what won't work in general), or they just describe it mathematically and would be 1:1 to the APL version he presented. It just sounds so easy as we are used to the concept, but what is a cell, what is a neighbar, how big is the sheet, when does the game end, what does a round mean, what is the initial state, what does it mean to survive, or create new life, or how is it outputted and what do we optimize for? None of it is just trivial to explain unless the concepts are already known (Conway create a game for mathematicians), but in general for most programs, the concepts are not known.

  • @chenjus
    @chenjus Рік тому +31

    12:57 that's exactly right. The way I've been describing using GPT-4 for swe is that whereas I used to have to stop to look up error messages and read documentation, now I can ask GPT-4. GPT-4 smooths out all the road bumps for me so I can keep driving.

    • @reasonerenlightened2456
      @reasonerenlightened2456 Рік тому

      GPT-4 shows fake intelligence. For example, It struggles with fingers, and with drinking beer. LLM are a dead-end for AGI because they do not !(understand)! the implications of their output! Also, GPT-4 is designed by the Wealthy to serve their needs!

    • @miraculixxs
      @miraculixxs Рік тому +4

      Except when it doesn't. But sure spending an afternoon with Copilot can often safe 5 minutes of RTFM

    • @fappylp2574
      @fappylp2574 Рік тому +1

      @@miraculixxs "Hello Chat GPT, please read this F manual for me"

  • @SkyNhett
    @SkyNhett 3 місяці тому

    29:50 This is profound and hard for me to accept as a software developer, but I can envision that future, very clearly.

  • @ChetanVashistth
    @ChetanVashistth Рік тому

    Questions in this lecture are very interesting. Even better than the whole lecture.

  • @jonkbox2009
    @jonkbox2009 Рік тому +23

    I took a clip of the FORTRAN code and sent it to GPT-4 Vision and asked it what the code did but it could not tell me because the pictured code was incomplete. Understandable. I sent it the BASIC code and it got it right. I asked it if the name CONWAY helped with its answer. It said No. I started a new chat and sent the BASIC program without the program name. It got it right. I sent the APL program and it didn't recognize the language or understand it at all, even that it was a programming language. I told it the language was APL and it got it right. Pretty cool.

    • @reddove17
      @reddove17 Рік тому +4

      Because they are somewhere in the training set, the presenter got them from somewhere I would assume.

    • @elawchess
      @elawchess Рік тому

      @@reddove17 The best of them are good enough to recognize a program that was not directly in the training set. Of course something about the program is in the training set e.g the idea of Conways game of life (or whatever it was), but that piece of code itself doesn't need to be in the training data for it to be able recognise it.

    • @reasonerenlightened2456
      @reasonerenlightened2456 Рік тому

      GPT-4 shows fake intelligence. For example, It struggles with fingers, and with drinking beer. LLM are a dead-end for AGI because they do not !(understand)! the implications of their outputs! Also, GPT-4 is designed by the Wealthy to serve their needs!

  • @me_souljah
    @me_souljah Рік тому +24

    This feels like the Theranos equivalent of the future of software, it's all dreamville

    • @jwesley235
      @jwesley235 Рік тому +15

      Tell me you don't understand what's going on in AI without saying you don't know what's going on in AI.

    • @me_souljah
      @me_souljah Рік тому

      Sure, I know nothing JOn SNow.@@jwesley235

    • @AD-ox4ng
      @AD-ox4ng Рік тому +6

      ​@@jwesley235how about you explain it to us then?

    • @calliped1
      @calliped1 Рік тому +3

      ​@@AD-ox4nghow about you do your own research.

  • @bilalarain4632
    @bilalarain4632 Рік тому +5

    Welcome to the new erra of debugging.

  • @kostian8354
    @kostian8354 Рік тому +2

    About prompt program
    - Can you reason about it's performance and class of algorithmic complexity ?
    - Can you reason about the resources required to run it, like RAM ?
    - Can it process more data than fits into RAM ?
    One day it will, but not yet...

  • @EnglishGeekWahoo
    @EnglishGeekWahoo Рік тому +1

    This is a good video for high school students to be careful when they want to go to college, they might not only think not to approach CS but to go to something that wont be replaced by AI soon. Our era is tough, and it has never been any easier.

    • @rogerh3306
      @rogerh3306 4 місяці тому

      This is a good marketing video for selling his own software by bashing programming and calling it annoying (47:25).

  • @CasualViewer-t4f
    @CasualViewer-t4f Рік тому +16

    It’s a lot to expect everyone to know what they want to enter into a query. It will take some time for the query interface to truly be inviting. I’m also mildly concerned that AI will grow impatient with us end users and spit out something we may not want and will simply say “deal with it 😎”

    • @robbrown2
      @robbrown2 Рік тому +3

      Seems like an AI that is owned by a company that makes a profit would train it not to do as you describe, since that would drive people away. Chat GPT, in its current state, is incredibly patient, and that is one of its most striking and valuable features. I don't think that's an accident.

    • @metznoah
      @metznoah Рік тому

      @@robbrown2 It will literally return the statistically next most likely token as soon as it is physically able. What is your definition of patient for this to meet it?

    • @sgramstrup
      @sgramstrup Рік тому

      They won't write, but just discuss the final product with the AI while it builds it. No writing is needed/wanted for future programming.

    • @elawchess
      @elawchess Рік тому +2

      @robertfletcher8964 The way you've characterised it undersells it quite a bit by saying the stuff about "statistically likely". Don't forget RLHF (Reinforcement Learning with Human Feedback) where many undesirable styles the model might do are weeded out and the model is steered towards answering in a way humans prefer. You say it spits out statistically likely within user context but you seem to not be considering that part of that user context could be "patience", the very thing that you seem to be alleging that it can't do.

    • @reasonerenlightened2456
      @reasonerenlightened2456 Рік тому

      GPT-4 shows fake intelligence. For example, It struggles with fingers, and with drinking beer. LLM are a dead-end for AGI because they do not !(understand)! the implications of their outputs! Also, GPT-4 is designed by the Wealthy to serve their needs!

  • @sevilnatas
    @sevilnatas Рік тому +3

    Yikes, the guy that admits he can't read Rust code, managed a Rust based startup? Bold admission. Can he read the code of his new startup that professes to write code for you? This is a Harvard education, an infomercial?

  • @moonstrobe
    @moonstrobe Рік тому +13

    I didn't hear him get into the topic of consistency and feature updates. How about performance based programming for games and ultra efficiency? Or shower thought innovations that create entirely new paradigms and ways of approaching problems? AI might be able to do some of this eventually, but I doubt it will be as rosy as he imagines.

    • @fappylp2574
      @fappylp2574 Рік тому +1

      yeah, like 99% of people don't invent new paradigms or ways of approaching problems. The vast majority of people in software will be out of jobs, with maybe a few hyper-PhDs sticking around.

    • @dekooks1543
      @dekooks1543 Рік тому

      stay fappin, fappy. It's not going to happen. Maybe the soydev macbook in startbucks react bros will get replaced but true programming that actually requires deep knowledge ? not happening.

  • @matthewrummler
    @matthewrummler Рік тому +1

    I'm putting this here as a note for myself (I'll see if that works).
    POINTS REGARDING HIS "IMPOSSIBLE" ALGORITHM (no I don't think he literally means impossible):
    1. The AI is not a simple algorithm itself
    - The AI can not be summarized as an algorithm in the way someone would write one... the complexity is fairly expansive... even to setup the ML models
    2. Most of what he is asking would not be difficult for a reasonably simple program
    - Getting the title, etc...
    3. DO NOT "": This would be the default of a program
    - When he says DO NOT use any information about the world... does not mean do not utilize your predictive analysis it just means don't mix information in that is not in the transcript
    4. Summarizing is hard, a targeted predictive learning model IS probably the best algorithm for this
    - The only very difficult piece for a custom built program (including one or more algorithms to make this infinitely repeatable) IS the summarization
    So, my conclusion: Part of writing code well will, in the future, include targeted ML*
    (though my take is not monolithic, gargantuan systems like Open AI & Google produce... though those could be a good way to train a targeted ML model)

  • @aleksapex679
    @aleksapex679 Рік тому +2

    I have so many things to say so I just deleted my comment, because there are so many things I disagree on this video. Just one thing to say, as a year long copilot user this thing way to often behaves like junior dev with too many suggestions that are irrelevant and it's way to often makes me actually OUT OF a zone. Some times I feel like a driver without a copilot but with a backseat driver and I have to make constant decisions when to follow it's advices and when to ignore :D

  • @MikkoRantalainen
    @MikkoRantalainen Рік тому +7

    Great lecture! I've been writing code professionally for 20 years and I feel like Copilot is a the level of first year university student learning IT stuff. Not perfect co-worker, obviously, but much better than basic autocomplete in your IDE or some other tools you could use. I'm fully expecting to see Copilot rapidly improve so much that I write all my code with it. Right now, I feel that it can provide some support already and with fast internet connection, having it available is a good thing.
    Most of the time Copilot writes a bit worse code than I could do myself but it's much faster at it. As a result, I can do all the non-important stuff with a bit lower quality code that Copilot generates so I can focus my time on the important parts only. I'd love to see Copilot to improve even at the level that the easy stuff is perfect.

    • @ndic3
      @ndic3 Рік тому

      Copilot is terrible though. Gpt4 is 50x times better. In comparison co-pilot is unusable
      Edit: number is obv made up from what it feels like

    • @MikkoRantalainen
      @MikkoRantalainen Рік тому

      @@ndic3 Can get GPT-4 integrated in your code editor?

    • @LionKimbro
      @LionKimbro Рік тому +5

      I’ve been programming for 40 years of my life. Professionally for about 24 years. I absolutely coding with Chat-GPT. But what people don’t get is that architecture still matters. You are still accountable for the code working out. You still need a picture of the system as a whole. You still need to get what’s going on. You still need to understand algorithms, you still need to be able to perform calculations on performance and resources. You still have to know stuff. You have to put the pieces together into a working whole. And the appetite for software is near infinite.
      I don’t think people quite get that.
      Chat-GPT can’t do it all for you, by a long shot. Chat-GPT is a great intern. But you can’t make Excel with even two hundred interns. Not even a thousand interns can make Excel. There are other problems.
      And I am not saying that one day we won’t have AIs that can fully replace competent programmers. We probably will- one day. But that day is not today, and it is not even tomorrow.
      What I tell young people who are afraid, “but will there even be programmers in ten years?” I tell them, “maybe not, but I can tell you this: It has never been easier to learn programming, than it is today. You can ask anything of Chat GPT, and it will answer for you. If you know one programming language, you can now write in any programming language. The cost of learning to program has dropped incredibly. And the money is right- right over there.”

    • @edwardgarson
      @edwardgarson Рік тому

      ​@@ndic3Copilot is based on GPT-4

  • @smanqele
    @smanqele Рік тому +5

    I agree, the biggest problem with humans in programming is how we mentally map how to solve problems. Code reviews can be a huge waste of time if you don't have it in you to push back. It truly makes me wonder the ROI for companies to host a lot of the software development ceremonies today.

    • @jamesschinner5388
      @jamesschinner5388 Рік тому

      Code review is all about regression to the mean

    • @smanqele
      @smanqele Рік тому

      @@jamesschinner5388 But we probably haven't got a single methodology to arrive at the mean. Our individual Means are terribly diverse

  • @simulation5627
    @simulation5627 Рік тому +11

    It started interesting but it's just an ad for (another one) gpt wrapper.

  • @mrthanhca
    @mrthanhca Рік тому +1

    Thank you for the information; it's very useful.

  • @sebstream
    @sebstream Рік тому +1

    This video is bearable if you skip the first 1:06:56 HH:MM:SS of it.

  • @andrebatista8501
    @andrebatista8501 Рік тому +8

    If AI can write programs, it’d be able to substitute a lot of people, and not just on tech but on many fields, then we gonna have more efficient services but with so many people unemployed, who would pay for those services?

    • @compateur
      @compateur Рік тому +4

      This is a very interesting question. Take it to the extreme: LLMs are able to take over any job. What makes live worthwhile? Can ChatGPT enjoy the first sun ray that warms up its AI chip, does it enjoy the tranquility of Nature, can it enjoy the soft sea breeze, can it get excited about new discoveries? What makes the heart of ChatGPT tick? Does it have a heart? Sometimes we forget that we are multidimensional creatures. Maybe we have to come up with a complete new model for society. We have to redefine ourselves.

    • @-BarathKumarS
      @-BarathKumarS Рік тому +1

      @@compateur dude seriously,think about it! One of my friends works as a consultant and another one works as an accountant at top firm,i have personally looked at the kind of work they do which at the end of the day is the most brain numbing manual repetitive task that i have ever seen...to put it pluntly an high schooler can do their job well enough.
      What will happen to these people then?

  • @Rizhiy13
    @Rizhiy13 Рік тому +3

    Ok, here is the summary of the talk: Dr. Welsh has a programming skill issue.

  • @vinipoars
    @vinipoars Рік тому +14

    I'm wondering if Fixie (35:00) hasn't already become obsolete with OpenAI's announcement on November 7th... lol

    • @ltnlabs
      @ltnlabs Рік тому +3

      Exactly

    • @ranjancse26
      @ranjancse26 Рік тому

      AI.JSX, who needs to learn in the era of AI lol

  • @sandormiglecz1186
    @sandormiglecz1186 9 місяців тому +1

    I'm afraid that just out of pure laziness or greed, we'll build an addiction to LLMs that is impossible to quit. If anything happens to the LLM services, we'll not be able to get along without them.
    So it'll increase the level of abstraction and dependency of out everyday lives. One more thing to worry about, making life exposed.

  • @OswaldoDantas
    @OswaldoDantas Рік тому +1

    Thought-provoking talk that needs to be taken with a serious amount of critical thinking. I personally have a different view about how programming will evolve and by no means I would ever agree with adding "The End of Programming" in a title or main message unless the objective is in short click baiting to a sales talk.
    Just as photography didn't kill painting and ai generated images won't kill photography, if you have to write your instructions in English or whatever other language and you already expect to be following some specific patterns to get the expected results, with some try and error in between, well, you are basically programming :)
    Dr. Welsh raises valid concerns about the evolution of programming and the nature of being a programmer or software engineer, although I beg to differ in the specificities.

    • @aqf0786
      @aqf0786 Рік тому +2

      All I see is a English to targeted language compiler, where we don't know exactly how the compiler works... it doesn't seem like a good idea