"People, writing in C is a federal crime in 2023" is the most misleading statement, Man how you design low latency embedded systems without C? Lot of low level devices are depenedent on C. Even Tesla FSD or Autopilot uses C++. IOT devices use C.
I bet I could get my gran to type that into GPT-4 and it would do better than what your whole team could do two years ago. You better hold on, bro, I don't think you're ready. 😶
@@easygreasy3989 Bruh, go and ask your GPT boi to write assembly code for newly designed chips from any vendor. Those LLMs can't generate code outside the scope of their training data. If you've written an LLM from scratch, or at least read the paper, then you know what I'm talking about. Otherwise I strongly suggest you go and study CS 182.
Well, you can get pieces of code, and I've done it already; chatting with ChatGPT helps a lot to gain insight once you ask the right questions. This presentation is just babbling; I'm waiting for a presentation on full, useful application development using AI.
I get the clickbait title, but it can be really discouraging to people who are thinking about getting into software engineering: "Why even try if AI is gonna do it?" Mainly because it's coming from an institution like this. I know it'll take time to eventually get there, but a lot of people have already lost hope, and new students thinking about joining may just turn in a different direction. Note: I'm not speaking of myself here; I'm a senior engineer, I volunteer at coding camps on weekends and tutor online, and I get this sentiment from the people I coach and teach. When you're completely new to a field and you see things like this from a reputable institution, along with all the hoopla of tech bloggers online, it does discourage many people from trying to enter this field.
Still, 'everyone should learn to code' is valid. Just do it anyway for your own intellectual development. No point in trying to blame a video title for not doing something. Just do it.
It's the presentation name, bud. Don't get discouraged, presenters often put a clickbaity title but then debunk said title during the presentation. In any case, it's what this guy wanted to call his presentation, can't really fault Harvard for it.
Somewhere in 1889: Welcome to my talk titled "Cars and the end of horse carriages". Someone in the audience: Very mean and discouraging title, dude, what about all the people who want to become a horse carriage driver?
"AI will replace us all, anyway here's my startup" Exactly 8 days later, OpenAI released a single feature (GPTs) that solved the entire premise of his startup.
Funny thing is, he said programming will die, but it was exactly through programming that the new feature that solved the premise of his startup was created.
Which just further reaffirmed everything else he said. Too many people are coping right now; LLMs are gonna put a lot of people out of work, not just programmers. I work in customer service and internally I am freaking out right now.
That reminds me of when I was in middle school. My teacher had to teach us how to program in BASIC but he really didn't want to. So he simply told us, "In 2 or 3 years we will have speech recognition, so you don't need to learn programming." That was 35 years ago... It's a bit bold to claim that programming languages have not improved the way we code in 50 years and to think AI will save us.
I remember one of my teachers, while not being bold enough to speak about speech recognition in the early '90s, saying that there were _already_ only system programmers left, the application programmers having been made obsolete by, are you ready for it, SuperCalc, a spreadsheet program for MS-DOS and the like. Makes me wonder, now that I think of it, why there would still be a need for system programmers if MS-DOS was already a sufficient operating system for the only applied task that was left, that of running SuperCalc...
But speech recognition is really good these days...it just took about 10-35 years, depending on how 'good' you think 'good' is (I recall speech recognition that was decent about 25 years ago).
Yes, I get it, but that basically means we don't need the software development cycle anymore. All those clean-code rules for dev-to-dev readability aren't required now, since you just need to understand what the function is doing, and a dev will be there for that 😉 What matters now is the input, the output, and the definition of the function, and that's what the business wants too!
Do not be discouraged. Enjoy life and study what you are interested in. Everything else will fall into its rightful place. Tomorrow is not guaranteed, do not fret about things beyond your control.
Correct, because I think it's dumb to plan so far ahead when we don't even understand how AI works internally, how we are going to get the data, or whether more compute is actually going to help. Dr. Matt Welsh does not know how the algorithm (the most important part) is going to be created, and there are a lot of other things where he says "I believe," which is not so reliable (especially when choosing your career).
I was thinking the same. It is basically the GPTs concept, although Fixie’s AI.JSX still offers seamless integration into a react app. Let’s see OpenAI’s response to that
@@rahxl Whether he does it or somebody else does is immaterial; OpenAI just proved his concept was right and worthy. He is already successful, while you need to find a good job.
@@merridius2006 @TheObserver-we2co This is not scientifically correct. A program written for a given task X can be written (and exist in hardware) so that it is the theoretically most performant solution, while an AI can cost a million times more to run the same task; take "2+2" for example. At the same time, a program is a crystallized form of ontology and intelligence: instead of reasoning out the solution on every execution, programs grow as a library of efficient solutions that don't need to be thought through over and over again.

In the future it is programming languages that will remove the need to write code, as we approach an objective description of computable problems that we will be able to write for the last time. In a way we already did this with libraries (in a disorganized way), and obviously we will use AI to help write these programs, but because we will solve these problems a single time, forever, we will review, read, and write them ourselves as a form of verification, just as we do today. After that we will use an optimized form of AI that maps these solved solutions onto user requests, but interfaces will also be mature enough (think of spatial, gestural, and contextual interfaces) to make speech obsolete.

Current LLMs are more a trend of our times than the ideal, efficient, infallible solution we need to standardize on across all aspects of society and IT. If all the software already running on your computers ran on AI instead, it would cost thousands of times more in energy and time; software is already close to the theoretical maximum efficiency, and ideal software is closer to solved math than to stochastic biology or random neuron dynamics. Training a model better won't solve any of these things. AIs that evolve into more performant solutions are statistical models programmed onto known subsets of the problem after the mathematical model of the problem is understood well enough to do that; it is the same thing we have always done, and statistics like those used in modern LLMs have always been used in computers and are part of what programs are required to do. Just imagine if every key you pressed were interpreted by an AI just to reach your browser.

Along with all this, we still have a lot of work to do. I would say we have only written a third of all the software the world needs, and at the same time almost all the software that already exists needs to be rewritten in new languages closer to the new level of abstraction and ontological organization described here. Given time, all code in C++ will be moved to Rust, and Rust will be replaced by an even better language, and no institution will just let you do that with AI without reading or understanding what it did. Just go study and stop being silly, thinking you know what programming is without any real experience in the field; all these opinions come from marketers, hustlers, wannabes, teenage AI opiniologists, and doomers.
Law is written in plain English too. For reproducible results, the limit of input precision will lie where modern legal jargon reaches its least understandable form. You will be left with an input that is still as hard to comprehend as a programming-language text, but much less precise. Good for YouTube descriptions perhaps, but not for avionics.
The constitution and most contracts are in legalese, which looks like English but strictly is NOT. To know and appreciate fully what is said in legal documents, you must use a legal dictionary. Capitalization is often key. Amateur researchers have uncovered much hidden history by seeing what is said and meant in older legal documents. The world turns out to be more nuanced than I thought, judging by the lectures of these legal scholars telling us what the elite have in store for us. Here is an example: search YouTube for "London the strawman identity."

You have a person, you are not a person. A person is a legal fiction: legal paperwork of identification issued by the government. Ergo, you have a person, you are not a person. That is why a corporation is considered a person and has personhood; it is all about legal fictions written in all capital letters, like the name written on an individual's tombstone.

Some tricky legislation was at one time written in a hidden way, in a foreign language, so that the public would be much less likely to discover what trickery was being done by their so-called elected officials. This was in the 1600s, in order to reduce the power of the church and increase that of the crown, which turns out to be the Inns of Court of the Crown Temple in the City of London, a separate state from England or the UK, similar to how the Vatican in Rome is its own city-state, and how Washington DC is its own city-state. This was all explained years ago in a video on YouTube that gave away many secrets, so it is likely banned now, but few watched the entire video because of TLDR. I found a copy still on YouTube: Ring of Power - Empire of the City [Documentary] [Amen Stop Productions]
Law will be impacted heavily. But law has a human aspect: the motivational speaking, the projection, the questioning of a witness with emotional appeal... that's the difference and why it's safer.
@@gaditproductions There is a difference between a living individual, a machine, and an entity with personhood such as an immoral and immortal corporation who holds the debt of people and nations that cannot be repaid due to usury and compounded semi-annual interest charges. What if all money in existence was borrowed into existence as debt? Well, that is what has ended up happening as a trick of financial mathematics, the implications of which simple folk do not appreciate, so they vote for more free government stuff with their hands out. Patrick Bet-David of Valuetainment breaks down the information regarding the hyperinflation seen in Venezuela and what other countries did when they saw the same thing happening to them; namely, Israel got rid of practically all its debt and so has one of the lowest rates of inflation. Lower standards of living are on the way if one is not careful about who has been representing them in government.

I had an epub-formatted book. I used the ReadAloud Microsoft Store app to read it to me. It horribly mispronounced one specific word when reading back the material. The book was from 1992. Here are some of the epub-formatted docs in my downloads folder: Lords of Creation - Frederick Lewis Allen; The Contagion - Thomas S. Cowan; The Gulag Archipelago, 1918-1956, Abridged (1973-1976) - Aleksandr Solzhenitsyn; Votescam of America (Forbidden Bookshelf) - James M. Collier; Wall Street and the Russian Revolution, 1905-1925 - Richard B. Spence. The individual voice types in the Windows TTS system determine how each word is broken into syllables and how well or badly any given word is pronounced. The word that came out very badly, I believe, was "elephantine." Sometimes some of these TTS voices use online AI to assist with pronunciation, smooth transitions between sentences, raising the pitch of the voice during questions, and so forth.

Obviously, if there were a nuke or an EMP, the entire power grid would go down for decades unless well-intentioned people rebuilt everything overnight without the build-back-better destroyers holding them back. As such, it might be better to have each computer holding a small chunk of civilization and enlightenment, lest it all be lost should a key datacenter be targeted directly. What safety precautions have your local officials taken? How about your electric grid suppliers: what safeguards are in place to get everything running again after there have been no phones, no power grid, no gas station pumps working, no diesel truck fuel pumps running, no credit card transactions, no banking, and so on? I asked an AI about EMP precautions. I suggested wrapping spare electrical transformers and generators in metal wrap, thick aluminum foil layers, then burying them somewhat deep in the ground to reduce pulse damage. It said the foil had better be thick enough and very well grounded to displace the electrical energy.
The example with Conway's Game of Life does no justice to the 50 years of programming language research he refers to. Also, Rust was designed to overcome the memory-safety problems that plagued C and C++; it is a programming language that emphasizes performance and memory safety. Programming languages like Fortran and C were designed the way they are for a very specific reason: they target Von Neumann architectures and fall under the category of "Von Neumann programming languages". The goal of these languages is to provide humans with a language to specify the behavior of a Von Neumann machine, so of course the language itself will have constructs that model the Von Neumann architecture. Programming languages like Rust or C do exactly what they were designed to do; they are not "attempts" to improve only the readability of Conway's Game of Life code compared to Fortran.
Well, they could become irrelevant, though, because the programming language of the future probably looks like minified JavaScript and will be designed by AI, for AI.
@@datoubi Good luck with that, see you in 10 years. Humans should not lose control over their own lives and the things those lives depend on. As soon as they do, they'll become slaves of their own technology. And even though there still won't be a shred of consciousness in a machine in 50 years, if humans lose the ability to understand the software on their own, without "AI" help, it could quickly become a tragedy for 1000 reasons other than the comic-book 'machine revolt'.
If natural language were such a SUPERIOR specification language, there would not be ongoing efforts to find working specification languages. What he claims is that plain English is the best you can ever get :)
@@poeticvogon This is CS50... it's a class... they won't just run an ad and risk losing credibility... if this is coming from an institution like this... things are very, very serious.
I genuinely cannot understand how humans are just... incapable of thinking about the future. The idea of "just because you can doesn't mean you should" is so much the case right now. But nope: because we can, we will. Okay, so we all slowly forget how to program, and, generation after generation, we depend more on language models writing code for us while we just instruct the language models. Great, let's take this further for a second, shall we?

First, the ways we communicate with language models are eventually going to become more like programming languages, because people are lazy, and the entire reason we have ANY symbols in mathematics PROVES this. We don't like to write more than we absolutely have to. (EDIT: To expand on this, what I'm trying to say is: we use specific patterns of sound in our languages to wrap up concepts or ideas. We do this so that more complex communication can happen, by building on top of the layer below. We create functions in programming to wrap up sets of actions so that we can build on top of that. This is how abstraction works. I've used mathematical symbols as an example, but the same concept applies pretty much anywhere you look. Condense repetition so that we can build more complexity on top.) So we're going to get "AI"-based programming dialects, you could say (look at how image-generation prompting has already evolved as an example).

Then, as we develop these language models, the models themselves are going to have free rein over the 'coding' part. We will obviously instruct these systems to create newer programming languages that will, after a while, become unreadable to us. And we will ask: well, why do we need to understand it? The machines are there to handle it (this is essentially what this guy is saying). So now we have dialects of humans telling machines what to do, and then we have machines telling other machines what to do in a language we don't understand. Does ANYONE see the issue with this? Even a little?

Just because programming is hard does not mean that we have to eliminate it. What absolutely idiotic thinking is this? It must always be a constant pursuit of efficiency. That's the whole point. We always remain in control. We always ultimately KNOW what is happening. By literally INTENTIONALLY taking ourselves out of the equation, we write our own Skynet. I don't mean that in an apocalyptic sense; I mean it in a "we are so fucking dumb as a species; literally, what is the point of programming, or doing anything at all, if not for our own benefit?" kind of way.

Sure, use these systems and tools to write better code and better documentation; those are the actual areas where AI systems can help us: literally to write the documentation and help us write better, more efficient, cleaner code, faster than we ever could. But still code that WE READ, AND WE WRITE, for US. This guy literally called Rust and Python "god awful languages" and apparently we need to take humans out of developing things. Who does he think development is for? What's weird is that this is on CS50?
I think your thinking is a bit biased and shortsighted, and I'm guessing it's because, like me, you're a programmer. What I think you're wrong about is that once we move up the abstraction layer, we don't simply forget the stuff underneath. People can still understand assembly and write programs in it if they choose to, but it's ultimately a waste of time. I don't think people will simply forget how to program; instead they'll focus on more important things, like solving problems that people are willing to pay for. I'm sure if you wanted to, you could rig up a set of logic gates to do some addition and subtraction operations, but is that a business problem people are willing to pay you for? Essentially, AI will be a layer of abstraction which allows us to focus on more complex problems, rather than having to focus on getting all the right packages before even attempting to solve the users' problems.
Dude, what are you on about? This is what coding has always been, a simplified version for us to convey ideas to computers. We don't write code in binary, we have compilers and interpreters that do that for us. The difference is that now instead of having to learn Python or Rust you can use English or Spanish or whatever to convey your ideas and have them be implemented. You can then ask the LLM directly questions about the implementation of different algorithms and optimize for whatever variable is relevant to your vision. Programming languages have been becoming more and more readable for decades now, this will just be the final step where we can finally interface with computers without having to learn a new language.
Language has its own issues. It's context-sensitive and highly ambiguous. Our experimentation with programming languages was an exercise in building more formalized and precise languages. At the lower levels it's just signal processing with circuits. We built different levels of abstraction on top of that. We can only hide the complexity; we cannot make it vanish. Language models are just another layer of abstraction with its own pitfalls. The best thing one can do is heed the scientific method: maintain a suitable degree of transparency so that things can be verified by others. 'Others' may be other developers, scientists, AI-based tools, etc. Completely removing humans from the equation would violate the scientific method.
What if an LLM writes buggy code, maybe 50 years from now, and that code is only understandable by the machine, and it writes another buggy fix because it does not understand what it is doing, and keeps writing buggy code to infinity 😅 Then we as humans have to dust off those old BASIC books in order to start over, and how cool is that 🙂
Software engineering will eventually be the role of just a few, not because of AI replacing jobs, but because of the discouragement many people will feel, quitting before even starting the journey.
He called CSS "a pile of garbage" and that writing C should be a federal crime. I smell senior engineer burnout, that want's to just cash in on his startup and work on a farm.
I am amazed that students didn't ask anything related to security, because right now we are just seeing an innovation, but what about the future, when, on a larger scale, we want to build a public platform like Facebook? Whether this is live programming or language-model building or whatever it is, how can we encrypt all of our data, from building to running and so on?
While security is something lacking I feel your focus is on the wrong aspect of it. You reference encryption which isn’t necessary for the source code so its ability to assist you to build won’t be impacted. I’m more concerned about the data you’re providing to the LLM. If I’m building a proprietary function and I need some insight from an LLM and I need to upload my source code for them to evaluate I am potentially sharing some seriously protected intellectual property. What happens to that? Can that code snippet show up in someone else’s code when trying to solve the same problem? Maybe your competitor?
@@rookie_racer More importantly than that, he's already demonstrated in his talk that these LLMs have -- call it "undocumented" or "emergent" or whatever you want -- behaviour that gives the questioner control over how the answer is given. Recall the "my dear deceased grandmother" "attack" that let people ask about how to make napalm or pipe bombs or whatever. Giving LLMs unfettered access to proprietary data, and having those LLMs all be based on the same nugget/core/kernel vulnerable to the same attack vectors means giving attackers access to all of that proprietary data by "casually" using your interface.
@@rookie_racer Yes, you are right... actually, what I was trying to highlight is the data. I mean, how can we trust our confidential information to something that is open source and a third party, revolving around and across the internet?
It's not that GPT blew up because it was super good overnight. Well, sort of, but the real reason is its ease of use. It's just like back when home computers became popular: if you introduce a computer as a marvel of engineering, nobody cares, but if you say "it's a box that lets you play games and music and so on with a bunch of clicks," you have everyone's attention. The idea of making it feasible for the masses is what kicked it off and poured in billions of dollars and years of research to make computing better and better. The same thing happened with GPT, and it's again on the same path, but at a much, much faster rate.
GPT-4 shows fake intelligence. For example, it struggles with fingers, and with drinking beer. LLMs are a dead end for AGI because they do not !(understand)! the implications of their outputs! Also, GPT-4 is designed by the Wealthy to serve their needs!
Why don't you go ahead and answer the questions, since you're the competent one then 🤨... y'all just come into the comment section talking trash, no sense 🤧
The purpose of computer science, in a nutshell, was not to translate ideas into programs. The goal was to find higher levels of abstraction to enable describing and solving ever bigger problems. Programming and programming languages were emergent properties of that goal. The question for LLMs is whether they will be able to continue the quest for higher and simpler levels of abstraction or forever get stuck in the mundane, as most programmers have been by their jobs.
That's a deep thought. I feel the purpose of computer science is to automate tasks which humans can do or think of doing. Programming is just one step toward that. Instead of creating models which can write code, humans should think of bigger ideas which can impact living beings. Whether that is accomplished by manual or automatic programming does not matter.
Reality is near-infinitely complex. As programmers we create a finite abstraction. AI will do it better, yet it can't solve exponential complexity. AI is not infinite and does not have infinite compute. "Infinite" is usually a warning signal of a lack of knowledge; with infinity, everything starts to behave weirdly. There is also physics... latency, a set of fundamental problems.
We have too many people doing software, so software salaries are going to go down; we need to tell Indians, Chinese, and Westerners to focus on swarm robotics, mini-robots, having the robot swarms build things, etc... Take a robot hand and make all of its parts like Legos that it can assemble itself. Then make it so that it can either print out its parts, sketch out its parts, or mold its parts. Have it replicate itself smaller and smaller until you have a huge swarm of robots, but you also need a lot of redundancy and "sanity checks". Swarm robots can do stuff like look for minerals/fossils/animals, look for crime, map out where everything is so you know where you put your cellphone, and build houses/food/stuff/energy collectors/computers. @@mriduldeka850
@@aoeu256 That's a good point. The Japanese are good at building robots. Indians are good and abundant in the software sector but lagging way behind in the manufacturing and hardware industries. The Chinese have strength in the manufacturing sector, so perhaps they can adapt to the growth of robotics more quickly than Indians.
Dr. Welsh does make good points I think we can all agree on, but as an AI student and a software engineer of 10+ years, regarding what Welsh said, "People still program in C in 2023": well, if you study AI you will even learn assembly, very, very low-level programming, and since the models have been written by programmers, we still need programmers to maintain and improve them. AI is getting there, but it's still at a very immature level compared to the maturity we seem to desire as humanity. We still need PhD students with a solid programming and AI background to do extensive research within the field of AI in order to help invent new technologies, specialized chips, improved algorithms, etc. We are still far away from letting AI generate code that is as good as a programmer who has mastered it. Sure, it can write code, but there are still a ton of scenarios where it fails to make things work.
I think the problem is the purpose or the goal of the program that you are writing. In the case of Conway's Game of Life, the concept itself is not easy to explain even in human language. We can get some idea by watching it run, but understanding it completely, from logic to meaning to purpose, and what correlation it has with other topics such as math, physics, or philosophy, is just not easy. It won't be easy either way.
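In fact, the rules themselves only become unambiguous once you write them down as code. A minimal Python sketch of a single Game of Life step (my own illustration, assuming the live cells are kept in a set of (x, y) pairs):

```python
from collections import Counter

def step(live):
    """Advance one Game of Life generation; `live` is a set of (x, y) cells."""
    # Count how many live neighbours each cell has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbours,
    # or has 2 live neighbours and was already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
print(step(glider))
```

Ten lines of code pin down exactly what a paragraph of prose would leave ambiguous (which neighbours count, what happens at the same time, and so on).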
I prefer this take - natural language isn't well suited for describing to computers what they should do, which is why programming languages were developed. LLMs can do some translation from natural languages to programming languages, but not very well and not as accurately as we would like (yet), so they're good for getting you part of the way there, and currently they'll likely generate less than accurate or reliable code, but if you're not trying to write reliable programs, they could be helpful :D
Good to remember that rigorous symbolic notation for math is a pretty modern idea in itself. One could argue that math is just an "esoteric language," like Matt Welsh is implying about programming languages.
I agree. AI can do things like computing Pi, finding factors, and other relatively trivial things which could just be bits of static data. It may not even be generating code - just returning the closest match. If it is generating code, it's not very useful yet unless you know exactly how to speak those sweet-nothings. I asked ChatGPT about a week ago to create a website in the style of Wikipedia with 4 page-sections relevant to simulation-theory. It gave me an HTML tag with 4 empty DIV elements - nothing else. No other structure, no content, no styling, no mock-up of interactive elements.
@@restingsmirkface You might have to do some "prompt engineering". When I try ML and statistics-related stuff, I often just copy textbook formulas. The copied text is obscure to humans, but somehow ChatGPT is able to understand it. Also, it is really hard to ask for Python code for neural networks because it forces the use of external packages. The C language doesn't have external packages, so I often ask ChatGPT to write C code and I translate the code to Python or Julia.
Agree. I noticed that although AI chatbots like ChatGPT can write complex Python programs (I asked it to create simple neural-net chatbots in TensorFlow/Keras), the code is often buggy, and it has a hard time fixing the bugs if you ask it to.
@@Siroitin This is very interesting; ChatGPT has a better hit rate when it comes to writing in C? I've only tried Python so far, will have to give this a go.
Great presentation! Thank you! One nitpick: 19:23 "average lines of code checked in per day ~= 100". I can tell you that is not what average SWEs in Silicon Valley do; ~10 lines/day would already be pretty good.
"If the dev is not using copilot then he's fired". Tell me you never worked in a commercial application without telling me you've never worked in a commercial application.
@@jak3f Have you ever heard of copyright law? Are you seriously unaware that federal courts have already ruled that AI generated output is ineligible for copyright protection?
Dr. Matt Welsh makes the crucial point about AI in programming: the better it gets and the more we trust it, without actively knowing how to code or knowing how it does what it's doing, the more we lose power over our daily automated routines. Imagine what a risk AI-generated code would be in a nuclear power plant. I think this talk is rather a great wake-up call for learning how to code and coding inside AI instead of just letting it go.
Humans are fundamentally lazy and default to the option which takes the least energy and effort, meaning most people will try to automate their own work as much as possible. AI learns from this and gets increasingly better, until the human in the loop is not needed anymore. Eventually, AI might be even better than humans at programming. As for the nuclear power plant, I don't know; it depends on how reliable the system is.
Except in 5 years, you might be saying the opposite. Humans inherently introduce error. Think how much better AI is at programming now than it was 5 years ago; give it 5 more years, and human-written code will seem like the insecure, risky option.
@@gordonramsdale My take: a good chunk of software bugs exist because requirements were not refined well enough by the engineer breaking down the work. They make assumptions and write code that does something it shouldn't. With good testing, no real bugs get into the system, and we have modern compilers that remove the issues with syntax errors. AI coding will likely produce the same errors and make the same kinds of assumptions humans make when working with poorly defined requirements.
Nuclear power plants have a strict design and review process that is fully vetted. So I would not worry about this specialized software, a.k.a. AI, in this application.
@@dblezi Hi, I think I understand what you are saying. But then again, what does "fully vetted" mean in that context? We also have a review process where each merge request is fully vetted, but still, errors can slip through. AI MRs might slip through more easily.
In almost all scenarios, AI represents an "it runs on my machine" approach to problem-solving - a "good enough", probabilistic mechanism. But maybe that is sufficient. We get by in the world despite uncertainty at the quantum level... maybe once _everything_ is AI-ified, the way we think about the truth will shift just enough, away from something absolute and concrete, to something probabilistic, something "good enough" even if we'll never be sure it's at 100% outside of the training-sets run on it.
> the way we think about the truth will shift just enough, away from something absolute and concrete, to something probabilistic, something "good enough" This is a deep insight. Many great minds of the western philosophical tradition have expressed this view in one way or another. In fact it's the school of thought known as American Pragmatism (which is known as the quintessentially "American" school, in philosophy circles) which most closely aligns with this view. Some pithy quotes about truth from the most notable figures in Pragmatism: - William James (active 1878-1910): “Truth is what works.” - Charles Sanders Peirce (1867-1914): “The opinion which is fated to be ultimately agreed to by all who investigate is what we mean by the truth.” - John Dewey (1884-1951): “Truth is a function of inquiry.” - Richard Rorty (1961-2007): “Truth is what your contemporaries let you get away with saying.”
I believe that in the short term there will be a shift in both time and focus from coding a solution to the architecture design, testing, and security of that solution.
Architecture is nothing but the application of known patterns and reasoning about tradeoffs. I use ChatGPT for my architecture challenges all the time, and I'd say that though it's not perfect, it's already doing a decent job. It will get even better, exponentially better.
Last week I was working on some Rust code that had to deal with Linux syscalls, and ChatGPT gave incorrect data on every single question. There are limits to how well trained it can be, given the amount of data it was trained on. It's good for common problems, not so much in the niche environments that real SWEs deal with daily. It just makes JS bootcamps obsolete. Now imagine if all the code for plane control computers were generated this way, as he suggests, without a person in the loop. Good luck flying that. Until AGI is here, we can't talk about any of this.
It's true, but only for now. What about the evolution of these models over 5, 10, or 15 years? BTW, no model yet receives data directly from the physical world, and sooner or later that will happen.
@@danri9839 It's a fuzzy black-box system. Until we have AGI, it's just marketing hype that they are smart; in reality, the precision isn't there when there was little training data.
@@danri9839 The problem is that large language models get data from the world but can't figure out on their own what's useful and what isn't, what to keep and what to drop. Right now, humans decide for them. If we want models to make their own choices, they need to understand what's right and wrong, which in itself is already complex even for humans in a lot of cases.
You're the 927483927839273rd person I've seen write this comment. You sound like the crypto bros who promised an unprecedented economic crash and claimed the blockchain would revolutionise everything... and yet.
I tried to generate Rust code for a relatively trivial problem (the 8-puzzle) and its solution was wrong and didn't compile. I fixed the compilation errors and the solution was still terrible because it used Box::new(parent.clone()) every time a child node was generated (very, very inefficient). I had already written the code myself, so it was easy to spot these errors, but I really can't see how ChatGPT is supposed to write code better than humans...
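For what it's worth, the fix is easy to state in any language: each child node should hold a reference to its parent rather than a fresh copy of it. A minimal Python sketch of that node structure (my own illustration with hypothetical names; in Rust the analogue would be something like an Rc<Node> parent field instead of Box::new(parent.clone())):

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass(frozen=True)
class Node:
    board: Tuple[int, ...]            # 3x3 board flattened, 0 = blank tile
    parent: Optional["Node"] = None   # shared reference to the parent, not a copy

def expand(node: Node) -> List[Node]:
    """Generate child nodes; each child just points back at the same parent object."""
    children = []
    i = node.board.index(0)
    row, col = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = r * 3 + c
            b = list(node.board)
            b[i], b[j] = b[j], b[i]
            children.append(Node(tuple(b), parent=node))  # no deep copy of the whole path
    return children
```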
"The line, it is drawn, the curse, it is cast The slow one now will later be fast As the present now will later be past The order is rapidly fading And the first one now will later be last For the times, they are AI-changin'"
Back in the real world, you still need to double-check the code generated by Copilot, which is often wrong. I'm not sure if I'm bad at using Copilot or if the people using it are simply not checking what has been generated. Not to mention, none of the large companies are willing to use a version of Copilot that sends the data it learns from their private repos back home, for obvious reasons.
That's the problem I find with AI-generated code. You have to verify it, which is a task that takes as much, if not more, effort than writing the code by hand.
There's SO much to SWE jobs aside from just coding, like collaborating with product and design, understanding business needs, convincing management that something is worthwhile. Additionally, someone will need to review the AI code, deal with legacy code, set up services, etc. I view these AI tools as tools that will make everyone's job more productive, but not necessarily replace anyone.
@LupusMechanicus Anyone who thinks an AI can help just anyone write a program to solve their problems hasn't worked in the field at all. More often than not, a person will bring a problem and their ill-conceived solution. Then the experienced software engineer will discuss the original problem, propose alternative solutions, ideas that still solve the problem but make better use of resources (memory, time, etc.), and provide a useful and intuitive workflow. That IS part of being a SWE, and if you think an AI is going to do that naturally and simply, you are out of touch. Say others are "coping" if you want, but perhaps educate yourself with more than a YouTube video by a guy desperate to sell his product.
@@TomThompson Bruh, try to build a house profitably with just your fingers. You need a saw and air hammers, lifts and screw guns. Thus you can now build a million-dollar house with 8 people in 6 months instead of 40 people in 1 year. This will eliminate a lot of employees; thus it is cope.
@@LupusMechanicus You again miss the point. No one is saying the industry won't be affected; it will. What we are saying is that it is uninformed to say the industry is "dead" because of AI. Just look at the history. The job has gone from being primarily hardware-based (setting tons of switches) to using a machine-level language (assembly), then gradually to higher-level languages (Fortran, COBOL, C, etc.). Then we went through adding IDEs and lint, code sharing, and review systems. The introduction of AI will not replace everything and everyone. It will be a tool that makes the job easier. And yes, it could easily mean a company that currently has 100 engineers on staff can gradually cut back to 10. But it also means other jobs will open up in areas such as building these AIs and building systems that make using them easier. The invention of the hammer didn't kill the home-building industry.
There won't be legacy code anymore. If a computer writes the code, writing it so that people can understand the computer's code sounds pointless. Do you need to know your router's code in order to use the Wi-Fi?
If programmers get replaced, who will not get replaced? Programming is one of the most difficult fields for humans. If most of it can be automated, most of everything else can be automated too. This AI revolution won't affect just programmers; it will affect everyone. Programmers are just more aware of it than the average person. It might still take 20 years for us to see AGI. Probably way less, but nobody really knows.
@@BARONsProductions Eventually it is, unless we specifically want humans for the roles. Machines will do everything better once we get to artificial superintelligence. We will probably get it before 2040, but who knows, it could take way longer. Also, people need time to adapt to technology. When something is invented, it doesn't get immediately applied on the practical level.
Physical labour will take more time. For that, actual physical robots have to be built, and those won't be any good for like 10 years at least (I believe). Yeah, the digital jobs are the ones that will take the hit first.
🎯 Key Takeaways for quick navigation:
00:00 🍕 Introduction and Background
- Introduction of Dr. Matt Welsh and his work on sensor networks.
- Mention of the challenges in writing code for distributed sensor networks.
01:23 🤖 The Current State of Computer Science
- Computer science involves translating ideas into programs for Von Neumann machines.
- Humans struggle with writing, maintaining, and understanding code.
- Programming languages and tools have not significantly improved this.
04:04 🖥️ Evolution of Programming Languages
- Historical examples of programming languages (Fortran, Basic, APL, Rust) with complex code.
- Emphasis on the continued difficulty of writing understandable code.
06:54 🧠 Transition to AI-Powered Programming
- Introduction to AI-generated code and the use of natural language instructions.
- Example of instructing GPT-4 to summarize a podcast segment using plain English.
- Emphasis on the shift towards instructing AI models instead of conventional programming.
11:26 🚀 Impact of AI Tools like CoPilot
- CoPilot's role in aiding developers, keeping them in the zone, and improving productivity.
- Mention of ChatGPT's ability to understand and generate code snippets from natural language requests.
17:32 💰 Cost and Implications
- Calculation of the cost savings in replacing human developers with AI tools.
- Discussion of the potential impact on the software development industry.
20:24 🤖 Future of Software Development
- Advantages of using AI for coding, including consistency, speed, and adaptability.
- Consideration of the changing landscape of software development and its implications.
23:18 🤖 The role of product managers in a future software team with AI code generators
- Product managers translating business and user requirements for AI code generation.
- Evolution of code review processes with AI-generated code.
- The changing perspective on code maintainability.
25:10 🚀 The rapid advancement of AI models and their impact on the field of computer science
- Comparing the rapid advancement of AI to the evolution of computer graphics.
- Shift in societal dialogue regarding AI's potential and impact.
29:04 📜 Evolution of programming from machine instructions to AI-assisted development
- Historical overview of programming evolution.
- The concept of skipping the programming step entirely.
- Teaching AI models new skills and interfacing with software.
33:44 🧠 The emergence of the "natural language computer" architecture and its potential
- The natural language computer as a new computational architecture.
- Leveraging language models as a core component.
- The development of the AI.JSX framework for building LLM-based applications.
35:09 🛠️ The role of Fixie in simplifying AI integration and its focus on chatbots
- Fixie's vision of making AI integration easier for developer teams.
- Building custom chatbots with AI capabilities.
- The importance of a unified programming abstraction for natural language and code.
39:14 🎙️ Demonstrating real-time voice interaction with AI in a drive-thru scenario
- Showcase of an interactive voice-driven ordering system.
- Streamlining interactions with AI for real-time performance.
44:55 🌍 Expanding access to computing through AI empowerment
- The potential for AI to empower individuals without formal computer science training.
- A vision for broader access to computing capabilities.
- Aspiration for computing power to be more accessible to all.
46:49 🧠 Discovering the latent ability of language models for computation
- Language models can perform computation when prompted with specific phrases like "let's think step-by-step."
- This discovery was made empirically and wasn't part of the model's initial training.
48:17 💻 The challenges of testing AI-generated code
- Testing AI-generated code that humans can't easily understand poses challenges.
- Writing test cases is essential, but the process can be easier than crafting complex logic.
50:40 🌟 Milestones and technical obstacles for AI in the future
- The future of AI development requires addressing milestones and technical challenges.
- Scaling AI models with more transistors and data is a key milestone, but there are limitations.
54:23 🤖 The possibility of one AI model explaining another
- The idea of one AI model explaining or understanding another is intriguing but not explored in depth.
- The field of explainability for language models is still evolving.
55:44 🤔 Godel's theorem and its implications for AI
- The discussion about Godel's theorem's relevance to AI and its limitations.
- Theoretical aspects of AI are not extensively covered in the talk.
56:42 🔄 Diminishing returns and data challenges
- Addressing the diminishing returns of data and computation in AI.
- Exploring the limitations of data availability for AI training.
58:34 🚀 The future of programming as an abstraction
- The discussion on the future of programming where AI serves as an abstraction layer.
- The potential for future software engineers to be highly productive but still retain their roles.
01:04:12 📚 The evolving landscape of computer science education
- Considering the relevance of traditional computer science education in light of AI advancements.
- The need for foundational knowledge alongside evolving programming paradigms.
Made with HARPA AI
Before thinking about AI use in society, we must agree on who will profit from it, who will own it, and who will pay for the mistakes of the AI. Is it going to be, "Oh well, bad luck" when AI ends someone's life?
@@reasonerenlightened2456 You guys need to stop thinking of AI as some conscious thing; it is just like a knife or a gun. It is entirely about who is using it, and with what intent.
The problem with LLMs is that they cannot independently solve computationally irreducible problems. So there is an interaction between classical computation and LLMs, in symbiosis, and I do not agree that computer languages should disappear completely. Also, right now, checking Google is much more energy-efficient than prompting ChatGPT, so there are energy-efficiency issues too. When you build apps with AI, somebody has to pay the token bill.
> The problem with LLMs is that they cannot independently solve computationally irreducible problems

It can write programs that do. For example, this is what the current GPT-4 can do on the normal OpenAI chat website (I can't post the URL of the conversation because of the YT spam filter). I asked, "Hey there! Can you give me a word which has an MD5 hash starting with `adca` (in hex)?" I chose adca because those were the first four hex letters in your name. This is likely not in its training set. The model was "analyzing" for a bit, and then replied:

> A word whose MD5 hash starts with adca (in hexadecimal) is '23456'. The MD5 hash for this word is adcaec3805aa912c0d0b14a81bedb6ff.

You can see how it answered it: it wrote a Python program to solve it. I didn't need to prompt it to do so; it knows, like a human, that it should pass these classically computationally irreducible problems off to a classical computer. And yes, there's still programming involved, but my 16 years of experience with computer science didn't help me at all, except in terms of coming up with an example.
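For context, a minimal sketch of the kind of program the model presumably wrote and ran behind the scenes (my own guess, brute-forcing short digit strings with Python's hashlib):

```python
import hashlib
from itertools import count, product

def find_md5_prefix(prefix: str) -> str:
    """Brute-force a short digit string whose MD5 hex digest starts with `prefix`."""
    for length in count(1):
        for digits in product("0123456789", repeat=length):
            candidate = "".join(digits)
            if hashlib.md5(candidate.encode()).hexdigest().startswith(prefix):
                return candidate

word = find_md5_prefix("adca")
print(word, hashlib.md5(word.encode()).hexdigest())
```

A four-hex-digit prefix matches roughly one candidate in 65,536, so this finishes in well under a second.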
No-code applications getting better and AI getting better make it look like a program-less future is really close, or a nearly program-less one at least. Eventually AI will be better, faster, and cheaper than any human by a large margin.
The fact that he stays away from addressing the "most important" problem, as he puts it at the beginning of the talk (that of CS education in the future), makes it sound like just empty talk... Unfortunately, I had to watch the entire thing to realize this...
Not the case though, at least not now, lol. AI is not even close to taking programmers' jobs; it's not very good at programming, just very basic functions, and it can't put the pieces together.
@@lmnts556 Are you sure? It can do a lot of stuff. Then you have all the no-code solutions. Then you have all the SaaS products and libraries. In the end, you need 1 engineer to build a platform instead of 100. "At least not now" can mean in 5 years (which is very realistic).
Something I did not understand was how would Computer Science become obsolete? So okay, you replace programming with prompting. But who will develop all those magical models that you are prompting? Aren’t they built by computer scientists and SWEs? What I mean is, if you are bold enough to claim programming will become obsolete, then doesn’t that mean learning mathematics and physics would also become obsolete? Like I could just ask some AI model to develop what I need in the context of physics and mathematics… and won’t need to understand the dynamics of those sciences, I just need to know how to speak English and ask for something. Note: I actually can see programming becoming more automated. But Computer Science? I can’t see that happening… aren’t we supposed to understand how do computers and AI work? Should they be seen as black boxes in the future? Also, programming would still not be fully automated because it’s weird to believe that an ambiguous sequence of tokens (English language) can be mapped with precision to a deterministic sequence (code) without any proper revision by a human… what if AI starts to hallucinate and not align with human goals? At best we would create a new programming language that is similar to “Prompting”… What are your opinions on these?
My opinion is that before a rational action there is an emotional action, so the decisions you can write in a prompt cannot be fully accurate. My take is that technology will automate and transform further, and humans will have the opportunity to use more of their creativity and thus become more human!
There are two main concepts that you need to wrap your mind around: 1) ease of use, 2) programming as a tool. When Welsh talks about 'the end' of programming, he means the future mass adoption of LLMs by people who will have them program instead of programming themselves, due to ease of use. Essentially, LLMs will be the new user interface for people to use programming languages, so the need for expert programmers will be limited to specialty roles in the future, like "how can I write an API for LLMs to interact with?" or "how can I make an LLM that checks that another LLM works properly?" Obsolete is not the right word here; as you can see, Welsh uses Copilot himself even though he is still technically a programmer. It's just that the craft of writing code by hand will be displaced by prompting an AI to manipulate code for you. For now, you need to read the code the LLM wrote in order to use it, but in the future it might as well be a magical black box that does X for you, testing and implementation included. Or in other words: LLMs are going to be easier to use than programming by hand, and LLMs will use coding as a tool instead of people doing so. Computer science then becomes the art of getting better code out of LLMs instead of getting humans to write code faster and better.
Not only that, but who develops all the connections between LLMs and all the existing systems? Who will replace existing systems that nobody knows what they are doing with systems that can use AI? In the short term at least, I foresee more programmers being needed, not fewer.
I, for one, will be glad when the people who think that "programming sucks" and "no progress has been made in 50 years" actually give up and leave the field; they have no idea what CS entails. Computer science is about computer programming the way astronomy is about looking through telescopes.
The thing with LLMs is that they're just another level of abstraction. If you take product documentation as the highest level of abstraction for describing how a product should behave, then to have it correct you still need to describe all the corner cases and the way some things should be done; you can't just say "this page should show a weekly sales report". And all that documentation might not be easy to understand. Code is just a very precise way to describe behavior.
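For instance, even that one-line requirement hides decisions that code is forced to make explicit. A minimal Python sketch (my own illustration, all names hypothetical), assuming sales arrive as (date, amount) pairs:

```python
from collections import defaultdict
from datetime import date, timedelta

def weekly_sales_report(sales):
    """Total sales per week, given (date, amount) pairs.
    Corner cases the one-line requirement never mentioned: which day a week
    starts on, how refunds (negative amounts) count, and whether weeks with
    no sales should appear at all."""
    totals = defaultdict(float)
    for day, amount in sales:
        week_start = day - timedelta(days=day.weekday())  # decision: weeks start on Monday
        totals[week_start] += amount                      # decision: refunds reduce the total
    return dict(sorted(totals.items()))                   # decision: empty weeks are simply absent

print(weekly_sales_report([(date(2023, 11, 6), 120.0), (date(2023, 11, 7), -20.0)]))
```

Whether the prompt is English or code, somebody still has to make those three decisions.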
AI can ask clarifying questions to make the requirements clearer. It can have long back-and-forth conversations with the whole context of the project. It's not just inputting a single prompt and the project is done.
@@MaiThanh-om5nm Non-technical people, and people with little abstraction for the field, will usually describe how something should behave instead of how it is to be programmed. Also, project managers manage the team's time, etc.; architects, developers, and engineers with the know-how to translate expected behaviour from clients into technical terms are the ones who specify how it's programmed. Lots of developers are able to understand what a client wants without an intermediary, because developers are system users as well and know what could be better in apps, what they'd like to see, what to expect, etc. You can also see freelancers and GitHub projects all around without a project manager, confirming they would understand it anyway, with or without those helpers.
@@augustnkk2788 I don't think it'll replace all good software engineers so soon. And I really don't think it will get to a point of always generating perfect code.
@@tiagomaia5173 It'll replace maybe 90%; some will still be needed to make sure it's safe, but no one will work in web dev, for example; all tech work is gonna be about AI, unless the government steps in. I give it 10 years before it can replace every software engineer.
We are not yet at the stage where one can ask ChatGPT-4 to write ChatGPT-5, at least as far as I know. Also, if you ask ChatGPT-4 to produce the model of the physical world unifying general relativity with the Standard Model, you will notice it struggles quite a bit and does not deliver. Those models cannot just create new knowledge, or at least not in a scientifically proven way. Maybe through randomness they will to some extent, but let's see.
Well, the code for ChatGPT-5, at least for the model as such, is likely not very complicated, so ChatGPT-4 might be able to write it. Someone has to tell it what the program should do, though. At this point, that would be a human.
That's because there has to be an overseer. Like someone else stated, God created mankind and this ecosystem; men manipulated and created based on this ecosystem. The creations of men didn't invent themselves. The best that special software like AI can do is create derivatives of digital data that is digitally known to said AI model. Look at art, for instance: many AI models steal and scan what mankind created to build a model. An AI model would never create a Star Wars, Blade Runner, or Mass Effect story/universe out of the base coding blocks which dictate how the software runs. AI needs to plagiarize to create. It's just that these plagiarized derivatives, with procedural generation, fool many normies into thinking it's so great.
@@dblezi Could you please clarify "has to be"? Where does that knowledge come from? What's the logical explanation? What does "an overseer" mean? What does "an overseer" do, in practical terms?
I'm an AI business owner. It's great to know how to program even if programming becomes obsolete due to AI; you can use code as an asset. I created a model that uses Python to solve any math equation. I could've used Google, but using Python makes the solution more accurate and near-instantaneous.
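A minimal sketch of that pattern (my own illustration, not the commenter's actual model): the LLM only has to emit an equation string, and a small Python helper using sympy does the exact solving.

```python
import sympy as sp

def solve_equation(equation: str, variable: str = "x"):
    """Solve an equation like 'x**2 - 5*x + 6 = 0' exactly with sympy.
    Assumes a single '=' separating the two sides."""
    lhs, rhs = equation.split("=")
    x = sp.symbols(variable)
    expr = sp.sympify(lhs) - sp.sympify(rhs)
    return sp.solve(expr, x)

# The LLM produces the equation text; Python does the exact math.
print(solve_equation("x**2 - 5*x + 6 = 0"))  # [2, 3]
```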
12:57 That's exactly right. The way I've been describing using GPT-4 for SWE work is that whereas I used to have to stop to look up error messages and read documentation, now I can just ask GPT-4. GPT-4 smooths out all the road bumps for me so I can keep driving.
About the prompt program: can you reason about its performance and class of algorithmic complexity? Can you reason about the resources required to run it, like RAM? Can it process more data than fits into RAM? One day it will, but not yet...
I agree; the biggest problem with humans in programming is how we mentally map how to solve problems. Code reviews can be a huge waste of time if you don't have it in you to push back. It truly makes me wonder about the ROI for companies hosting so many of today's software development ceremonies.
I took a clip of the FORTRAN code and sent it to GPT-4 Vision and asked it what the code did but it could not tell me because the pictured code was incomplete. Understandable. I sent it the BASIC code and it got it right. I asked it if the name CONWAY helped with its answer. It said No. I started a new chat and sent the BASIC program without the program name. It got it right. I sent the APL program and it didn't recognize the language or understand it at all, even that it was a programming language. I told it the language was APL and it got it right. Pretty cool.
@@reddove17 The best of them are good enough to recognize a program that was not directly in the training set. Of course something about the program is in the training set, e.g. the idea of Conway's Game of Life (or whatever it was), but that piece of code itself doesn't need to be in the training data for the model to be able to recognize it.
🎯 Key Takeaways for quick navigation:
01:23 🚀 The field of computer science is undergoing a major transformation where AI models like GPT-3 are being used to write code, marking a significant shift in programming.
06:54 💻 Natural language is becoming a key tool in programming, allowing developers to instruct AI models to generate code without the need for traditional programming languages.
14:47 📈 AI technology, like GPT-3, has the potential to significantly reduce the cost of software development, making it more efficient and cost-effective.
20:52 🤖 The rise of AI in programming will likely change the roles of software engineers, with a shift towards product managers instructing AI models and AI-generated code.
23:46 👁️ Code review practices will evolve to incorporate AI-generated code, requiring a different kind of review process to ensure code quality and functionality.
24:41 🤖 Code maintainability may become less essential with AI-generated code, as long as it works as intended.
25:58 📊 The rapid advancement of AI models like ChatGPT has transformed the computer science field and its societal expectations.
29:04 🌐 Programming is evolving, with AI assisting humans in generating code, and the future may involve direct interaction with AI models instead of traditional programming.
33:44 💬 The concept of a "natural language computer" is emerging, where AI models process natural language commands and perform tasks autonomously.
45:52 💡 The model itself becomes the computer, representing a future where AI empowers people without formal computer science training to harness its capabilities.
49:15 🤖 AI-generated tests are becoming more prevalent, but there's uncertainty about the role of humans in the testing process.
51:07 🧩 The future of AI models relies on the increased availability of transistors and data, which may require custom hardware solutions.
52:06 🤔 Formal reasoning about the capabilities of AI models is a significant challenge, and we may need to shift towards more sociological approaches.
54:23 🤖 Exploring whether one AI model can understand and explain another model is an intriguing idea, but its feasibility remains uncertain.
59:30 🧠 While AI may make software engineers more productive, certain human aspects, like ethics, may remain essential in software development.
Made with HARPA AI
I wonder who is the bigger fool: those who listen to the speaker, or the speaker? ChatGPT is trained by the wealthy for the benefit of the wealthy.
Agreed. There's a lot of push-back against his message in the comments, but I'm already seeing it happen within tech companies where, for example, 10% of employees are let go and the ones staying are now doing several of those roles, along with their own, all by using AI.
Even if robots generate code, you would still want it to have less duplication and some abstractions, because it will lower the amount of context tokens required to modify the code. You would probably also want to keep interfaces between regenerations, because you would like to keep the tests from the older version...
No you don't; they can write optimized code. That's literally the whole point of AI: it's an optimization problem. Adjust the weights to reduce the cost function, and code duplication can be yet another parameter.
I didn't hear him get into the topic of consistency and feature updates. How about performance based programming for games and ultra efficiency? Or shower thought innovations that create entirely new paradigms and ways of approaching problems? AI might be able to do some of this eventually, but I doubt it will be as rosy as he imagines.
yeah, like 99% of people don't invent new paradigms or ways of approaching problems. The vast majority of people in software will be out of jobs, with maybe a few hyper-PhDs sticking around.
Stay fappin, fappy. It's not going to happen. Maybe the soydev MacBook-in-Starbucks React bros will get replaced, but true programming that actually requires deep knowledge? Not happening.
The biggest red flag was there at the start: the beginning of the video description says that gpt can do general purpose reasoning. It's neither general purpose nor can it reason
The problem with LLMs in generative AI is that in 5 years' time the AI will be learning from a large percentage of data that other AIs have generated, and even further down the road, how do we know what is real versus generated data? We still need humans to recognize what is fake. The creativity from AI must make sense when the goal for that specific data requires precision, like in the medical industry or other industries where lives are at stake.
It's been established already that synthetic data is superior for training LLMs compared to raw human data. I mean, think about it: does the open web not have data that is bad? Well, ChatGPT was trained on it and it does pretty well. Synthetic data has been shown to be superior to that, so simply training the next iteration of the LLM on synthetic data is going to get us to the next step.
@@verigumetin4291 What about fake news or lobbyist outlets? Or books/art generated from someone else's copyright? What if bad actors create fake generated data for their own nefarious purposes, and then these scammers or spammers constantly churn out this fake data? You can already make a fake Obama dancing to "Livin' La Vida Loca". How would the AI know what's real or fake once these generative AIs become more skilled? Years down the road, our newer LLMs may not know the difference and use this data for training. We already got bad science news regarding mask wearing and vaccinations. This will become worse when the less-than-average-intelligence human believes nonsensical data in a world where such synthetic data is practically spam.
@@verigumetin4291 Do you have any source for that? Preferably a peer-reviewed paper rather than some „research“ by Google or OpenAI published by themselves. I am asking because what you are saying does not make any sense to me.
It’s a lot to expect everyone to know what they want to enter into a query. It will take some time for the query interface to truly be inviting. I’m also mildly concerned that AI will grow impatient with us end users and spit out something we may not want and will simply say “deal with it 😎”
Seems like an AI that is owned by a company that makes a profit would train it not to do as you describe, since that would drive people away. Chat GPT, in its current state, is incredibly patient, and that is one of its most striking and valuable features. I don't think that's an accident.
@@robbrown2 GPT isn't patient, and it doesn't think. All it does is propose the most statistically likely word that should come next given a user-provided context. This isn't AGI; it's a predictive model. I'm not trying to be mean or critical, but you need to understand this if you want to use the tool efficiently.
@@robbrown2 It will literally return the statistically next most likely token as soon as it is physically able. What is your definition of patient for this to meet it?
@@robertfletcher8964 The way you've characterised it undersells it quite a bit with the "statistically likely" framing. Don't forget RLHF (Reinforcement Learning from Human Feedback), where many undesirable styles the model might produce are weeded out and the model is steered towards answering in a way humans prefer. You say it spits out the statistically likely continuation within the user's context, but you don't seem to be considering that part of that user context could be "patience", the very thing you seem to be alleging it can't do.
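For what it's worth, the "most likely next token" mechanic this thread is arguing about can be sketched with a toy bigram model in Python; the probability table below is invented purely for illustration and has nothing to do with how GPT-4 is actually trained or sampled (real models also sample rather than always taking the top token):

# Toy next-token predictor: pick the highest-probability continuation.
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "still": 0.1},
}

def next_token(prev):
    candidates = bigram_probs.get(prev, {})
    return max(candidates, key=candidates.get) if candidates else "<eos>"

tokens = ["the"]
while tokens[-1] != "<eos>" and len(tokens) < 6:
    tokens.append(next_token(tokens[-1]))
print(" ".join(tokens))  # the cat sat down <eos>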
This is a good video for high school students who need to be careful when choosing what to study in college; they might not only reconsider CS but also look for something that won't be replaced by AI soon. Our era is tough, and it has never been easy.
38:17 What is considered kid-safe? Based on what milestones? Emotional? Psychological? Etc.? You need to know which child-development sources are peer reviewed, etc. Yes, you could ask the AI for those, but then you'd need to ensure they were not hallucinations. Etc.
Great lecture! I've been writing code professionally for 20 years and I feel like Copilot is at the level of a first-year university student learning IT. Not a perfect co-worker, obviously, but much better than basic autocomplete in your IDE or some other tools you could use. I fully expect Copilot to improve so rapidly that I'll write all my code with it. Right now it can already provide some support, and with a fast internet connection, having it available is a good thing. Most of the time Copilot writes slightly worse code than I could myself, but it's much faster at it. As a result, I can do all the non-important stuff with the slightly lower-quality code Copilot generates and focus my time on the important parts only. I'd love to see Copilot improve to the point where the easy stuff is perfect.
I've been programming for 40 years of my life, professionally for about 24 years. I absolutely love coding with ChatGPT. But what people don't get is that architecture still matters. You are still accountable for the code working. You still need a picture of the system as a whole. You still need to get what's going on. You still need to understand algorithms; you still need to be able to reason about performance and resources. You still have to know stuff. You have to put the pieces together into a working whole. And the appetite for software is near infinite. I don't think people quite get that. ChatGPT can't do it all for you, by a long shot. ChatGPT is a great intern. But you can't make Excel with even two hundred interns. Not even a thousand interns can make Excel. There are other problems. And I am not saying that one day we won't have AIs that can fully replace competent programmers. We probably will, one day. But that day is not today, and it is not even tomorrow. What I tell young people who are afraid, "but will there even be programmers in ten years?", is this: "Maybe not, but I can tell you this: it has never been easier to learn programming than it is today. You can ask anything of ChatGPT and it will answer you. If you know one programming language, you can now write in any programming language. The cost of learning to program has dropped incredibly. And the money is right there, right over there."
The speaker here is pushing a paradigm of "LLMs as a compute substrate" and "English as a programming language", which I definitely see the value of. Certain programs would be easy to express in English but nearly impossible to program using traditional languages. Of course the paradigm happens to benefit his startup, but to claim that this spells the end of software engineering as we know it is absurd. First, it requires disregarding decades of research into system design principles, which call for modularization and separation of concerns in order to make systems more legible, easier to debug, and easier to maintain. I wouldn't want key operational software to be an inscrutable black box that requires "magical" phrases to do the right thing. Just because an LLM is writing the code doesn't invalidate the need for proper design. Software engineers are taught design principles for a reason: not just to make their code easier to read and understand by humans, but also to make it easy to debug, extend, and adapt. Second, just because it's easier to program now using plain English doesn't mean software engineers are no longer needed. How would you evaluate the correctness of the software generated by the LLM? How would you improve its performance? That requires understanding logic, probability, algorithmic complexity, algorithmic thinking, and a plethora of other software engineering skills taught in college. In my opinion it makes the need for highly trained engineers even more important.
Indeed, especially as we have already had at least two (very close to) plain-English programming languages around for more than 50 years that are widely used: SQL and COBOL. For small examples, both are great to write, easy to understand, and efficient. But for real-world problems, both become complicated, hard to understand, and require a computer science education (at least to some extent) to get the job done. We even deprecated COBOL, which is as close as possible to English, especially as it gets very verbose and so becomes harder to understand again compared with more formal languages. The problem is not writing the code, but being explicit enough so you really get what you want. And independent of technical constraints, requirements engineering is still engineering; even if the output is plain English, just read any formal document and you'll find it's not simple English. That's true even outside engineering, for law, standardization documents, pharmaceutical documents, or, to come back to programming, RFCs. There's probably a reason the presenter didn't show a prompt for writing Conway's Game of Life via ChatGPT that doesn't already rely on external knowledge. Once you have to define it accurately, it's probably not much shorter than the Fortran or BASIC example and might even be less readable than the Rust version he showed. The usual textbook descriptions either use images to explain what's going on (which won't work in general), or they describe it mathematically and map 1:1 to the APL version he presented. It only sounds easy because we already know the concept, but what is a cell, what is a neighbor, how big is the sheet, when does the game end, what does a round mean, what is the initial state, what does it mean to survive or to create new life, how is it output, and what do we optimize for? None of that is trivial to explain unless the concepts are already known (Conway created a game for mathematicians), and in general, for most programs, the concepts are not known.
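To make that point concrete, here is roughly how small one Game of Life step is once every one of those details is pinned down in code. This Python sketch makes one arbitrary set of choices (live cells as a set of coordinates, an unbounded board, the standard 2/3 survival rule), which is exactly the kind of decision a loose English description leaves open:

from collections import Counter

def step(live):
    # live: set of (x, y) coordinates of live cells on an unbounded board.
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbours,
    # or 2 live neighbours and was already alive.
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
print(step(glider))  # the glider, advanced by one generation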
More data and transistors will help, but I think better algorithms will help way more. We keep rebuilding the same things and then leaving them unused.
It’s very likely that AI startups will get replaced by OpenAI products for a while until the tech saturates. I think we could do most of the donut demo with what OpenAI announced a few days ago.
Give credit to the people who coded Copilot, ChatGPT, etc. It's now seamless to use these LLMs, but behind the scenes there are still the coders, the statisticians, the scientists, and the engineers optimizing these models. You have to know both: how to code and how to use the models.
Exactly what I think. A SWE needs to write code explicitly and build models to get solutions implicitly. Neither of these tasks seems likely to disappear in the future.
@@LuisFernandoGaido He never said they will. He just said the way software development happens will change drastically. It already has, actually; everyone at my job uses Copilot.
Well, generative models might eventually replace some software engineering interns at companies, but as a lead developer / architect I don't see my job endangered yet. Software development and design is not only about writing code. Writing code is the easy part; understanding the problem, both the functional and non-functional requirements and the operating circumstances, and making design decisions and compromises when needed is a whole different dimension. I can already see a lot of startups failing miserably by trying to develop software with a few low-cost developers armed with some generative AI tool. This is like "we don't need database experts, we have SQL generators" all over again... 😂
Doctors are also claiming they can do more, but AI has already beaten top doctors at diagnosing certain illnesses. I think you'll wake up very soon. No offence, of course...
These were my thoughts too... I recently started learning full stack. I don't think Dr. Welsh fully understood the way LLMs work and how reliant they are on humans. Any reasonable business should feel worried if a "code monkey" were writing random lines without a way to know specifically what was happening. Problems of the future are likely related to security, not just deploying code that works. We need developers with experience and actual understanding of the code and how it interplays with the system. Other comments above mention programming languages with specific use cases such as memory management, NOT necessarily human readability. This reminds me of futurists who believed teachers and instruction would be outright replaced by multimedia in the '60s and '70s. The Clark and Kozma debates are a famous example of this. I wonder how many people dreamed of being a teacher and gave it up because of fearmongering? The fact is, context is everything. Humans are making the context, and we will be doing so for a long time. A threat to this is AGI, not a brain in a jar, which is what generative AI is. If I were in computer science I would take what Dr. Welsh says with a grain of salt. Instead, think about what kinds of problems are going to be introduced with AI and understand it as deeply as possible. With every innovation, new problems are born.
I'm putting this here as a note for myself (I'll see if that works). POINTS REGARDING HIS "IMPOSSIBLE" ALGORITHM (no, I don't think he literally means impossible):
1. The AI is not a simple algorithm itself. The AI cannot be summarized as an algorithm in the way someone would write one... the complexity is fairly expansive, even just to set up the ML models.
2. Most of what he is asking would not be difficult for a reasonably simple program: getting the title, etc.
3. "DO NOT": this would be the default for a program. When he says DO NOT use any information about the world, it does not mean don't utilize your predictive analysis; it just means don't mix in information that is not in the transcript.
4. Summarizing is hard, and a targeted predictive learning model IS probably the best algorithm for this. The only very difficult piece for a custom-built program (including one or more algorithms to make this infinitely repeatable) IS the summarization.
So, my conclusion: part of writing code well will, in the future, include targeted ML* (though my take is not the monolithic, gargantuan systems OpenAI and Google produce... though those could be a good way to train a targeted ML model).
Can you? All the time? What would it take for you to do it perfectly each time? What would it take for the AI system to do it perfectly every time? Interesting times ahead...
@@ksoss1 As far as I'm aware, chatbots seem to have a problem where, for the sake of speed, they skip some instructions in code; it's not too dissimilar to pushing a compiler's settings to a level that results in unwanted glitches, like accidental instruction skips in assembly-language programs.
You are referring to simple LLMs; the proposed architecture is LLMs + compute tools (cf. calculators, etc.). Just as a normal human can answer 3 x 9 = 27 off the top of their head, but would need pencil and paper, or just a calculator, to answer what 4567 x 2382 is.
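A rough sketch of that routing idea in Python; ask_llm here is a hypothetical placeholder for whatever model API you happen to use, not a real library call, and the question parsing is deliberately naive:

import re

ARITH = r"[0-9+\-*/(). ]+"

def ask_llm(question):
    # Hypothetical stand-in for a real model call.
    return "(model answer would go here)"

def calculator(expression):
    # Evaluate plain arithmetic exactly; input is restricted to digits and operators.
    if not re.fullmatch(ARITH, expression):
        raise ValueError("only plain arithmetic is allowed here")
    return eval(expression)

def answer(question):
    expr = question.lower().rstrip("?").replace("what is", "").strip()
    if re.fullmatch(ARITH, expr):
        return str(calculator(expr))  # exact tool call instead of letting the model guess
    return ask_llm(question)

print(answer("What is 4567 * 2382?"))  # 10878594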
@@juleswombat5309 So, what does that make of my testing of Bing AI's capabilities, built on top of OpenAI tech, on a pretty simple word-counting task over a pretty short excerpt? Because I'm pretty sure Microsoft's proprietary AI app doesn't fall into the category of being powered by a simple LLM.
19:26 > "I've been coding the whole day", but you threw away 90%
Oh, that's a pretty bold claim, that with ChatGPT you will get a correct code snippet on the first try, without any need to prompt it with 20 more messages clarifying and making sure it doesn't confuse the language, paradigm, etc. You should not compare the "clean code" of a SWE with GPT tokens, because you are guaranteed to spend many more than ideal. Considering they are dirt cheap, this may not be a problem though.
Thought-provoking talk that needs to be taken with a serious amount of critical thinking. I personally have a different view of how programming will evolve, and by no means would I ever agree with putting "The End of Programming" in a title or main message unless the objective is, in short, click-baiting a sales talk. Just as photography didn't kill painting, and AI-generated images won't kill photography: if you have to write your instructions in English or whatever other language, and you already expect to follow some specific patterns to get the expected results, with some trial and error in between, well, you are basically programming :) Dr. Welsh raises valid concerns about the evolution of programming and the nature of being a programmer or software engineer, although I beg to differ on the specifics.
I think large language models are really cool, but they're too much of a black box. Sure, there are plenty of use cases, but as far as entirely replacing code goes, it needs to be customizable enough and consistent in its functionality. Not sure how that would be possible!
You are restricting your options to computers as we know them, operating on limited versions of ones and zeroes. We cannot have true AI until we have bio-chips that operate like real brains.
Most of tech is currently already a black box. I write mostly C++ and can't even begin to fathom how these modern optimizing compilers work (and I never will). Heck, even the V8-runtime is almost arcane to most people. Only very few exceptional human beings can understand and work on these systems, everyone else can start to look for toilet cleaning jobs.
If AI can write programs, it'd be able to substitute for a lot of people, and not just in tech but in many fields. Then we'd have more efficient services, but with so many people unemployed, who would pay for those services?
This is a very interesting question. Take it to the extreme: LLMs are able to take over any job. What makes life worthwhile? Can ChatGPT enjoy the first sun ray that warms up its AI chip, does it enjoy the tranquility of nature, can it enjoy the soft sea breeze, can it get excited about new discoveries? What makes the heart of ChatGPT tick? Does it have a heart? Sometimes we forget that we are multidimensional creatures. Maybe we have to come up with a completely new model for society. We have to redefine ourselves.
@@compateur Dude, seriously, think about it! One of my friends works as a consultant and another works as an accountant at a top firm. I have personally looked at the kind of work they do, which at the end of the day is the most brain-numbing, manual, repetitive work I have ever seen... to put it bluntly, a high schooler could do their job well enough. What will happen to these people then?
Why didn't the "lecture" start with "today we're going to have my buddy, who has an AI-for-programmers startup"? It would have saved me an hour of this infomercial.
We must move forward with the advanced computational and reasoning capabilities these software models afford us, but we cannot move forward with black-box models that have no formal method of verification or "instruction manual", so to speak. These models should be considered idle malware. I mean, imagine these advanced models, and models like them, in our appliances, our aircraft, and our ground transportation systems: they behave properly 99.99 percent of the time yet cannot actually be verified correct...
> It's 2023, and people are still coding in C -- that should be a federal crime
Not because it's their language of choice, though. Think embedded systems: even if you want to use Rust or any other language with training wheels on it (metaphorically speaking), the platform you're developing for may not be targeted by it. Or worse, maybe your toolchain needs to meet certain criteria to pass a regulatory body of sorts. Disclaimer: I'm not writing this out of confirmation bias or as an offended C programmer (I work with Java). Please don't get me wrong: I understand that Dr. Welsh didn't intend to oversimplify things, though he generalizes a bit too much imho. It puts a whole industry in a really bad light, and it's just like saying: "if using C is bad because badly behaving C programs have killed people, then, by this logic, we shouldn't be riding trains or going by car anymore."
Guy introducing him: "Hey kids, this guy is going to make sure that the crippling debt you and your parents took on to send you to college was all for absolutely nothing, thanks to his AI."
That's great as a hobby, like fishing. But if your boss cannot afford to employ you, because AI tools mean he only needs to hire a few staff, then you will not make a living from coding. Adapt to exploiting these tools if you still want to make a living in the computer industry.
This is why I minored in philosophy. Computer science is applied philosophy. The real ability is thinking logically, understanding the human mind, and knowing what it is you want to create. Thinking clearly. My personal opinion: when you create something and don't know why it does what it does, but it does so consistently, it's because you stumbled upon an equation of nature, some fundamental way nature works; in this case, human nature. Computer science has always been a funny term. How can there be a science of the computer, which is not a natural phenomenon? The science of computation, or of how to calculate. I find it fascinating that giving ChatGPT a personality, like you would an actor, and shaping a narrative, works. But we do this as people every day, going through the different aspects of ourselves depending on the circumstance. So excited for the future of the field.
It's nice of David to let the students have a taste of silicon valley's sensationalism and the outlandish "predictions" of where the future is headed. "This is the only way everyone will ever interact with computers in the future." Even if that turns out to be true, it is soooo far away from the real world right now that it doesn't take a real computer scientist to realize this is delusional. That's not even to mention the question of whether or not we *should* be heading in that direction as a society. Not much more than silicon valley's way of raising funds for more products/services, the vast majority of which fade away after some time.
Feel the same. I just think AI is dumb and will stay dumb for at least 100 years, or longer; not in my lifetime, maybe not even before humans go extinct, will AI become that smart. Maybe only an advanced alien civilization could actually build that level of AI.
If, 5 years from now, AI turns out to be so powerful that my comment seems silly, I'm actually happy with that. I do hope tech advances fast, but at the same time I'm very pessimistic about the speed of technological development. @@hamzamalik9705
For real, I am on my 2nd big tech job since the ChatGPT rise, and of all my team members I am the only person who uses it. In production I've seen some ML models in:
- adtech, for improving ad suggestions. They had been there for more than the last 6 years, long before the "AI will do everything soon" hype train. They were, as I've said, only improvements on top of a non-ML ad rotation core and didn't generate much money for the company at all.
- security SIEM systems used for threat detection on users' laptops, but in reality they did more harm than good, like banning our git-lfs executables, lol.
- some LLAMA model, trained on a company-internal domain (code, wiki, etc.), but its usefulness was a joke, to be honest.
Also I saw a rise of an infinite number of startups with AI solutions for everything after the experts started promoting the "everything as a model" idea. They were trying to solve with ML problems which never required an ML solution. It looks like every startup that used to be a crypto startup is now an AI startup or has something from the AI word cloud in its name. I see all the experts predicting the obsolescence of software development as a job in 5-10 years, but I see close to no signs of GPT models in production, let alone profit from their usage. Maybe it is used widely in other tech domains? Maybe in 5 years the situation will drastically change? Well, maybe, who knows. But right now, to me, it does not look like more than another race for venture capital.
P.S.: oh, yeah, ChatGPT-4 is insanely good at catching missing Lisp parentheses, btw.
Two things, speaking from 35 years of banking-software programming: 1) code reviewers are only as good as their expertise (and in years!) in the language (and business functionality). If AI removes all opportunity for experience in the language, where does this expertise come from? 2) The business function knowledge side of the business now subsumes the entire burden of the required specifications to the AI - an enormous effort. How long before we try to automate that? An infinite regress is arising here....
"People, writing in C is a federal crime in 2023" is the most misleading statement, Man how you design low latency embedded systems without C? Lot of low level devices are depenedent on C. Even Tesla FSD or Autopilot uses C++. IOT devices use C.
No one cares bro
Tesla is going to rewrite 300k lines of code using neural networks, no more C or C++.
I bet u I can get my gran to type that into GPT4 and would do better than what ur whole team could do 2 years ago. U better hold on bra, I don't think ur ready. 😶
@@easygreasy3989 bruh, go and ask your GPT Boi to write assembly code for newly designed chips from any vendor. Those LLMs can't generate code outside of the scope of training data. If you've written the LLMs from scratch or at least read the paper then you know what I'm talking about. Else I strongly suggest you go and study CS 182.
@@amansahani2001 God of the gaps my guy, soon an AI will be better at that too, why wouldn't they?
'See I don't know how it works and I'm ok with that' - that pretty much sums up the presentation.
Yeah, you don't have to know every detail of a Honda, just buy it and drive it
Well, you can get pieces of code and I've done it already, chatting with chatgpt helps a lot to get inside once you ask right questions. This presentation is just babbling, I'm waiting for full useful application development presentation using AI.
@@LarisaPetrenko2992 but then don't call yourself a car engineer
@@LarisaPetrenko2992 you can drive it, but you cannot lecture people about how it works and how it's going to revolutionize the ''future''
@@LarisaPetrenko2992 but people who built Honda know every detail of a Honda.
I get the clickbait title but it can be really discouraging to people who are thinking about getting into software engineering. “Like why even try if ai is gonna do it?”
Mainly because it’s coming from an institution like this. I know it’ll take time to eventually get there but A lot of people have already lost hope and new students thinking about joining may just turn a different direction
Note: I’m not speaking of myself here, I’m a senior engineer and I volunteer at coding camps on weekends and tutor online and I get this sentiment from the people I coach and teach. When you’re completely new to a field and you see things like this from a reputable institution along with all the hoopla of tech bloggers online, it does discourage many people from trying to enter this field.
perhaps, but such is reality.
Still, 'everyone should learn to code' is valid. Just do it anyway for your own intellectual development. No point in trying to blame a video title for not doing something. Just do it.
It's the presentation name, bud. Don't get discouraged, presenters often put a clickbaity title but then debunk said title during the presentation. In any case, it's what this guy wanted to call his presentation, can't really fault Harvard for it.
we've got to face this 'harsh' reality head on, there is nothing you can do
Somwhere in 1889: Welcome to my talk titled "Cars and the end of horse carriages".
Someone in the audience: Very mean and dicouraging title, dude, what about all the people who want to become a horse carrage driver?
"AI will replace us all, anyway here's my startup"
Exactly 8 days later, OpenAI released a single feature (GPTs) that solved the entire premise of his startup.
So true hahahaha
Oh my god, thought exactly the same!
Funny thing is he said programming will die but it was exactly through programming that the new feature that solved the premise of his startup was created
Which just further reaffirmed everything else he said. Too many people are coping right now, LLM's are gonna put a lot of people out of work, not just programmers. I work customer service and internally I am freaking out right now.
so he was correct, AI will replace us all ))
That reminds me when I was in middle school. My teacher had to teach us how to program in Basic but he really didn't want to. So he simply told us "in 2 or 3 years we will have speech recognition so you don't need to learn programming". That was 35 years ago... That's a bit bold to tell that programming languages have not improved the way we code in 50 years and to think AI will save us.
one day they will get it right
I remember one of my teacher, while not been bold enough to speak about speech recognition in the early 90-s, saying that there are _already_ only system programmers left, the application programmers have been made obsolete by - are you ready for it? - SuperCalc, a spreadsheet software for MS-DOS and such. Makes me wonder, now that I think of it, why would there still be a need for system programmers if MS-DOS was already a sufficient operating system for the only applied task that was left - the one of running SuperCalc...
You've clearly not used Grimoire. It's game over.
Most probably you have not used AI enough; it's magical in some sense. Soon you will realize it. Give it a year or two.
But speech recognition is really good these days...it just took about 10-35 years, depending on how 'good' you think 'good' is (I recall speech recognition that was decent about 25 years ago).
LLMs are going to replace idiots doing stupid talks 100%.
Lmao 😂😂😂
natural language programming is a thing now accept it
@@DipeshSapkota-lo3un natural language is imprecise and makes a poor programming language.
Yes, I get it, but that basically means we don't need the software cycle anymore. All those clean-code rules for dev-to-dev readability aren't required now, since you just need to understand what the function is doing, and for that a dev will be there 😉 What matters now is input, output, and the definition of the function, and that's what the business wants too!
@@DipeshSapkota-lo3un Tell me you've never touched code without telling me.
Do not be discouraged.
Enjoy life and study what you are interested in. Everything else will fall into its rightful place. Tomorrow is not guaranteed, do not fret about things beyond your control.
Correct, because I think it's dumb to plan so far ahead when we don't even understand how AI works internally, or how we are going to get the data, or whether more compute is actually going to help. Dr. Matt Welsh does not know how the algorithm (the most important part) is going to be created, and there are a lot of other things where he says "I believe", which is not very reliable (especially when choosing your career).
Story of the Chinese farmer... Alan Watts
I think it's time for Dr. Matt and his team to pivot away from fixie's custom chat GPT idea after OpenAI released GPTs.
How unexpected!
I was thinking the same. It is basically the GPTs concept, although Fixie’s AI.JSX still offers seamless integration into a react app. Let’s see OpenAI’s response to that
@@rahxl while you are right it doesn't mean he's wrong
@@castorseasworth8423So you can just use their Assistant's API and create a React front-end on your own.
@@rahxl Whether he does it or somebody else is immaterial; OpenAI just proved his concept was right and worthy. He is already successful, while you still need to find a good job.
@@merridius2006 @TheObserver-we2co This is not scientifically correct. A program written for a given task X can be written (and exist in hardware) such that it is the theoretically most performant solution, while an AI can cost a million times more to run for the same task; take "2+2" for example. At the same time, a program is a crystallized form of ontology and intelligence: instead of reasoning out the solution on every execution, programs grow as a library of efficient solutions that don't need to be thought through over and over again. In the future it is programming languages that will remove the need to write code, as we approach an objective description of computable problems that we will be able to write down for the last time. In a way we already did this with libraries (in a disorganized way), and obviously we will use AI to help write these programs, but because we will solve these problems a single time, forever, we will review, read, and write them ourselves as a form of verification, just as today. After that we will use an optimized form of AI that maps these solved solutions onto user requests, but interfaces will also be mature enough (think of spatial, gestural, and contextual interfaces) to make speech obsolete. Current LLMs are more a trend of our times than the ideal, efficient, infallible solution we would need to standardize on across all aspects of society, starting with IT.
If all the software already running on your computer ran through AI, it would cost thousands of times more in energy and time. Software is already close to the theoretical maximum efficiency; ideal software is closer to solved math than to stochastic biology or random neuron dynamics. Training a better model won't solve any of this.
And AIs that evolve into more performant solutions are statistical models programmed onto known subsets of the problem, after the mathematical model of the problem is understood well enough to do that. That is the same as what we have always done: statistics like those used in modern LLMs have always been used in computers and are part of what programs are required to do.
Just imagine if every key we pressed were interpreted by AI just to reach your browser.
Along with all of this, we still have a lot of work to do. I would say we have only written a third of all the software the world needs, and at the same time, almost all the software that already exists needs to be rewritten in new languages closer to the new level of abstraction and ontological organization described here. Given time, all code in C++ will be moved to Rust, and Rust will be replaced by an even better language, and no institution will just let you do that with AI without reading or understanding what it did.
Just go study and stop being silly, thinking you know what programming is without any real experience in the field. All these opinions come from marketers, hustlers, wannabes, teenage AI opinionologists, and doomers.
Law is written in plain English too. For reproducible results, the limit of input precision will lie where modern legal jargon reaches its least understandable form. You will be left with an input that is still as hard to comprehend as programming-language text, but much less precise. Good for YouTube descriptions perhaps, but not for avionics.
The constitution and most contracts are in legalese, which looks like English but strictly is NOT. To know and fully appreciate what is said in legal documents, you must use a legal dictionary. Capitalization is often key. Amateur researchers have uncovered much hidden history by seeing what is said and meant in older legal documents. The world turns out to be more nuanced than I thought, going by the lectures of these legal scholars telling us what the elite have in store for us.
Here is an example,
London the strawman identity youtube
You have a person; you are not a person. A person is a legal fiction: legal paperwork of identification issued by the government. Ergo, you have a person, you are not a person. That is why a corporation is considered a person and has personhood; it is all about legal fictions written in all capital letters, as on an individual's tombstone.
Some tricky legislation was at one time written, hidden away in a foreign language, so that the public would be much less likely to discover what trickery was being done by their so-called elected officials. This was in the 1600s, in order to reduce the power of the church and increase that of the crown, which turns out to be the Inns of Court of the crown temple in the City of London, a separate state from England or the UK, similar to how the Vatican in Rome is its own city-state, and Washington DC is its own city-state.
This was all explained years ago in a video on YouTube that gave away many secrets, so it is likely banned now, but few watched the entire video because of TLDR.
I found a copy still on YouTube:
Ring of power - Empire of the city [Documentary] [Amen Stop Productions]
I.e., if product managers could specify what they wanted with enough precision to create a product, they would be coders.
Law will be impacted heavily. But law has a human aspect: the motivational speaking, the projection, and questioning a witness with emotional appeal... that's the difference and why it's safer.
@@gaditproductions There is a difference between a living individual, a machine, and an entity with personhood, such as an immoral and immortal corporation that holds the debt of people and nations, debt that cannot be repaid because of usurious, semi-annually compounded interest charges.
What if all money in existence was borrowed into existence as debt? Well, that is what has ended up happening, as a trick of financial mathematics whose implications simple folk do not appreciate, so they vote for more free government stuff with their hands out waiting.
Patrick Bet David of Valuetainment breaks down the information regarding the hyperinflation seen in Venezuela and what other countries did when they saw this same thing happening to them, namely Israel got rid of practically all its debt and so has one of the lowest rates of inflation.
Lower standards of living are on the way if one is not careful who one has been representing them in Government.
I had an EPUB-formatted book. I used the ReadAloud Microsoft Store app to read it to me. It horribly mispronounced a specific word when reading back the material. The book was from 1992.
Here are some of the epub formatted docs in my downloads folder.
Lords of Creation - Frederick Lewis Allen
The Contagion - Thomas S. Cowan
The Gulag Archipelago, 1918-1956. Abridged (1973-1976), Aleksandr Solzhenitsyn
Votescam of America (Forbidden Bookshelf) - James M. Collier
Wall Street and the Russian Revolution, 1905-1925 by Richard B. Spence
The individual voice types in the Windows TTS system determine how each word is broken into syllables and whether any given word is pronounced well or badly. The word that came out very badly, I believe, was "elephantine." Sometimes some of these TTS voices use online AI to assist with pronunciation and smooth transitions between sentences, raising the pitch of the voice during questions, and so forth. Obviously, if there were a nuke or an EMP, the entire power grid would go down for decades unless well-intentioned people rebuilt everything overnight without the build-back-better destroyers holding them back.
As such, it might be better to have each computer holding a small chunk of civilization and enlightenment, lest it all be lost should a key datacenter be targeted directly.
What safety precautions have your local officials done? How about your electric grid suppliers--what safeguards are in place to get everything back running after there has been no phones, no power grid, no gas station pumps working, no diesel truck fuel pumps running, no credit card transactions, no banking, and so on?
I asked an AI about EMP precautions. I suggested wrapping spare electrical transformers and generators in metal wrap--thick aluminum foil layers, then burying them somewhat deep in the ground to reduce pulse damage. It said that the foil had better be thick enough and very well grounded to displace the electrical energy.
The example with Conways game of life does no justice to the 50 years of programming language research he refers to. Also, Rust was designed to overcome the memory safety problems that plagued C and C++; it is a programming language that emphasizes performance and memory-safety. Programming languages like Fortran and C were designed the way they are for a very specific reason: They target Von Neumann architectures, and fall under the category of "Von Neumann programming languages". The goal of these languages is to provide humans with a language to specify the behavior of a Von Neumann machine, so of course the language itself will have constructs that model the von Neumann architecture. Programming languages like Rust or C do exactly what they were designed to do, they are not "attempts" to improve only code readability for Conways game of life when compared to Fortran.
Totally agree with your comment
well they could become irrelevant though. Because the programming language of the future probably looks like minified JavaScript and will be designed by AI for AI.
@@datoubi good luck with that, see you in 10 years. Humans should not lose control over their own lives and the things their lives depend on. As soon as they do, they'll become slaves of their own technology. And although there still won't be an ounce of consciousness in a machine in 50 years, if humans lose the ability to understand their software on their own, without "AI" help, it could quickly become a tragedy for 1000 reasons other than the comic-book "machine revolt".
If natural language were such a SUPERIOR specification language, there would not be ongoing efforts to find working specification languages. What he claims is that plain English is the best you can ever get :)
True, yet none of that is an argument against his point.
The talk was a perfect segue into an AI startup ad.
Indeed.
Seems like a lot of the invited speakers are hawking something.
I am pretty sure it was all an ad.
@@poeticvogon This is CS50... it's a class... they won't just run an ad and risk losing credibility... if this is coming from an institution like this, things are very, very serious.
@@gaditproductions Of course they would. They just did.
I genuinely cannot understand how humans are just... incapable of thinking of the future. Like, the idea of 'just 'cause you can, doesn't mean you should' is just so much the case, right now. But nope, because we can, we will.
Okay, so we all slowly forget how to program, and we, generation after generation, depend more on language models writing code for us, and us just instructing the language models. Great, let's just, for a second, take this further shall we? First, the ways we communicate with language models are going to eventually become more like programming languages, because people are lazy, and the entire reason we have ANY symbols in mathematics PROVES this. We don't like to write more than we absolutely have to.
(EDIT: To expand on this - what I'm trying to say is this: we use specific patterns of sound in our languages to wrap up concepts, or ideas. We do this so that more complex communication can happen, by building on top of the layer below. We create functions in programming to wrap up sets of actions so that we can build on top of that. This is how abstraction works. I've used mathematical symbols as an example, but the same concept applies pretty much anywhere you look. Condense repetition, so that we can build more complexity on top.)
So we're going to get "AI" based programming dialects, you could say (look at the way image generation prompting has already evolved as an example).
Then, as we also develop these language models, the models themselves are going to have free rein on the 'coding' part. We will obviously instruct these systems to create newer programming languages that will, after a while, become unreadable to us. And we will ask, well, why do we need to understand it? The machines are there to handle it (this is essentially what this guy is saying). So now we have dialects of humans telling machines what to do, and then we have machines telling other machines what to do in a language we don't understand.
Does ANYONE see the issue with this? Like, even a little?
Just because programming is hard does not mean that we have to eliminate it. What absolutely idiotic thinking is this? It must always be a constant pursuit of efficiency. That's the whole point. We always remain in control. We always ultimately KNOW what is happening. By literally INTENTIONALLY taking ourselves out of the equation, we write our own Skynet. I don't mean that in an apocalyptic sense, I mean that in a "we are so fucking dumb as a species, like literally what is the point of programming, or doing anything at all, if not for our own benefit?" kind of way.
Sure, use these systems and tools to write better code, write better documentation, I mean these are the actual areas where AI systems can help us. Literally to write the documentation and help us write better, more efficient, cleaner code, faster than we ever could. But still code that WE READ, AND WE WRITE, for US.
This guy literally called Rust and Python "god awful languages" and apparently we need to take the humans out of developing things. Who does he think development is for?
What's weird is that this is on CS50?
This will be lost on most people, especially academics who live in a fantasy world. Your comments are obvious to anyone who does regular old work.
I think your thinking is a bit biased and shortsighted. And I'm guessing it's because, like me, you're a programmer. What I think you're wrong about is that once we move up an abstraction layer, we don't simply forget the stuff underneath. People can still understand assembly and write programs in it if they choose to, but it's ultimately a waste of time.
I don’t think people will simply forget how to program, instead they’ll focus on more important things like solving problems that people are willing to pay for.
I’m sure if you wanted to, you could rig up a set of logic gates to do some addition and subtraction operations but is that a business problem people are willing to pay you for?
Essentially ai will be a layer of abstraction which allows us to focus on more complex problems rather than having to focus on getting all the right packages before even attempting to solve the problems of the users.
Dude, what are you on about? This is what coding has always been, a simplified version for us to convey ideas to computers. We don't write code in binary, we have compilers and interpreters that do that for us. The difference is that now instead of having to learn Python or Rust you can use English or Spanish or whatever to convey your ideas and have them be implemented. You can then ask the LLM directly questions about the implementation of different algorithms and optimize for whatever variable is relevant to your vision. Programming languages have been becoming more and more readable for decades now, this will just be the final step where we can finally interface with computers without having to learn a new language.
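That layering is easy to see even from Python itself: the interpreter compiles source into bytecode that most people never read, and the standard dis module will show it.

import dis

def add(a, b):
    return a + b

# Prints the CPython bytecode the interpreter actually executes for add().
dis.dis(add)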
Language has its own issues. It's context sensitive and highly ambiguous. Our "experimentations" with programming languages was an exercise in formalized and more precise languages. On the lower levels it's just signal processing with circuits. We built different levels of abstractions on top of that. We can only hide the complexity but we cannot make it vanish. Language models are just another layer of abstraction with its own pitfalls. The best thing one can do is heed the scientific method. Maintain a suitable degree of transparency so that things can be verified by others. 'Others' may be other developers, scientists, AI based tools, etc.. Completely removing humans from the equation will violate the scientific method.
What if an LLM writes buggy code, maybe 50 years from now, and that code is only understandable by the machine, and it writes another buggy fix because it does not understand what it is doing, and keeps writing buggy code to infinity 😅 Then we as humans have to dust off those old BASIC books in order to start over, and how cool is that 🙂
Software engineering will eventually be the role of just a few, not because AI replaces jobs, but because of the discouragement many people will feel, quitting before even starting the journey.
One day, people may look at code the same way we look at the Pyramids. The knowledge of Pyramid making came and went.
we need 4 mechanical engineers and 2 electronic engineers for every software engineer, because software is easy.
@@reasonerenlightened2456 software is easy. Good software is hard.
Or will we need coders for the lower levels?
@@reasonerenlightened2456 you dont even know the difference between engineer and developer...
He called CSS "a pile of garbage" and said that writing C should be a federal crime. I smell senior-engineer burnout: someone who just wants to cash in on his startup and go work on a farm.
his startup flopped horribly btw lol.
Hahhaha even as a newbie, i kinda agree with you
47:25 Could he be any more obvious about his motives? Douchebag move.
I am amazed that students didn't ask anything related to security. Right now we are just seeing an innovation, but what about the future, when, on a larger scale, we want to build a public platform like Facebook or any other platform? Whether this is live programming or language-model building, how can we encrypt all of our data, from building to running and so on?
While security is something lacking I feel your focus is on the wrong aspect of it. You reference encryption which isn’t necessary for the source code so its ability to assist you to build won’t be impacted. I’m more concerned about the data you’re providing to the LLM. If I’m building a proprietary function and I need some insight from an LLM and I need to upload my source code for them to evaluate I am potentially sharing some seriously protected intellectual property. What happens to that? Can that code snippet show up in someone else’s code when trying to solve the same problem? Maybe your competitor?
@@rookie_racer More importantly than that, he's already demonstrated in his talk that these LLMs have -- call it "undocumented" or "emergent" or whatever you want -- behaviour that gives the questioner control over how the answer is given. Recall the "my dear deceased grandmother" "attack" that let people ask about how to make napalm or pipe bombs or whatever. Giving LLMs unfettered access to proprietary data, and having those LLMs all be based on the same nugget/core/kernel vulnerable to the same attack vectors means giving attackers access to all of that proprietary data by "casually" using your interface.
@@rookie_racer Yes, you are right... actually what I was trying to highlight is "data", and I mean: how can we trust our confidential information to something that is open source and a third party spread around and across the internet?
He starts off with no one will code and he ends with his own programming language for AIs. lol
Lmao
47:25 bashes the art of programming so he can sell his LLM service. Douchebag move.
It's not that GPT blew up because it was super good overnight. Well, sort of, but the real reason is its ease of use. It's just like back when home computers became popular: if you introduce a computer as a marvel of engineering nobody cares, but if you say "it's a box that lets you play some games and music with a bunch of clicks" you have everyone's attention. The idea of making it feasible for the masses is what kicked it off, poured in billions of dollars and years of research to make computing better and better. The same thing happened with GPT, and it's again on the same path but at a much, much faster rate.
GPT-4 shows fake intelligence. For example, it struggles with fingers, and with drinking beer. LLMs are a dead end for AGI because they do not !(understand)! the implications of their outputs! Also, GPT-4 is designed by the Wealthy to serve their needs!
Computational power increases made gpt possible from what I understand
That and it was super good… understood that a lot has to do with data and compute but it really is very good as a product right now…
I'm at 6:43 and all I've seen so far is that guy projecting his incompetence onto the rest of humanity.
Indeed! I mean WTF? Of course, you can always write programs in the least understandable way possible.
You call a Harvard Computer Science prof incompetent? You fool 😂😂
Why don't you go ahead and answer the questions, since you're the competent one then 🤨... y'all just come to the comment section talking trash, no sense 🤧
@@Henry_Wilder Which questions?
@@epajarjestys9981 the questions posed to him that he couldn't answer. He kept saying "I don't know", remember?
The purpose of computer science in a nutshell was not to translate ideas into programs. The goal was to find higher levels of abstractions to enable describing and solving ever bigger problems. Programming and programming languages were emergent properties of that goal. The question for LLMs is if they will be able to continue the quest for higher and simpler levels of abstraction or forever get stuck in the mundane as most programmers did by their jobs.
Thanks, I'm saving this idea
That's a deep thought. I feel the purpose of computer science is to automate tasks which humans can do or think of doing. Programming is just one step toward that. Instead of creating models which can write code, humans should think of bigger ideas which can impact living beings. Whether it's accomplished by manual or automatic programming doesn't matter.
Reality is near infinitely complex. As programmers we create a finite abstraction. AI will do it better, yet it can't solve exponential complexity. AI is not infinite and does not have infinite compute. "Infinite" is usually a warning signal of a lack of knowledge. Infinity means everything starts to behave weird. There is also physics... latency, a set of fundamental problems.
We have too many people doing software, so software salaries are going to go down. We need to tell Indians & Chinese and Westerners to focus on swarm robotics, mini-robots, having the robot swarms build things etc... Take a robot hand, make all of its parts like Legos that it itself can assemble. Then make it so that it can either print out its parts, sketch out its parts, or mold its parts. Have it replicate itself smaller and smaller until you have a huge swarm of robots, but you also need a lot of redundancy and "sanity checks". Swarm robots can do stuff like look for minerals/fossils/animals, look for crime, map out where everything is so you know where you put your cellphone, build houses/food/stuff/energy collectors/computers. @@mriduldeka850
@@aoeu256 That's a good point. The Japanese are good at building robots. Indians are good and abundant in the software sector but lagging way behind in manufacturing and hardware. The Chinese have strength in manufacturing, so perhaps they can adapt to robotics growth more quickly than Indians.
Dr. Welsh does make good statements I think we can all agree on, but as an AI student and Software Engineer for 10+ years, regarding what Welsh said, "People still program in C in 2023": well, if you study AI you will even learn Assembly, very, very low-level programming, and since models have been written by programmers, we still need programmers to maintain and improve on these. AI is getting there, but it's still at a very immature level compared to the maturity we seem to desire as a humanity. We still need PhD students with a solid programming and AI background to do extensive research within the field of AI in order to help invent new technologies, specialized chips, improved algorithms etc. We are still far away from letting AI generate code that is as good as a programmer who has mastered it. Sure, it can write code, but there are still tons of scenarios where it fails to make things work.
2 more years should do the trick!
Before thinking of AI use in society, we must agree on who will profit from it, who will own it, and who will pay for the mistakes of the AI. Is it going to be like, "Oh well, bad luck" when AI ends someone's life?
I give it 5 more years before AI is super-intelligent
@@LucidDreamn based on what data?
I think the problem is about the purpose or the goal of the program that you are programming. In the case of Conway's Game of Life, the concept itself is not easy to explain even with human language. We could get some ideas watching it perform, but to understand it completely, from logic to meaning or even to purpose, and what correlation it has with other topics such as math, physics or philosophy, is just not easy. It won't be easy either way.
I prefer this take - natural language isn't well suited for describing to computers what they should do, which is why programming languages were developed. LLMs can do some translation from natural languages to programming languages, but not very well and not as accurately as we would like (yet), so they're good for getting you part of the way there, and currently they'll likely generate less than accurate or reliable code, but if you're not trying to write reliable programs, they could be helpful :D
Good to remember that rigorous symbolic notation for math is a pretty modern idea in itself. One could argue that math is just an "esoteric language", like Matt Welsh is implying about programming languages.
I agree. AI can do things like computing Pi, finding factors, and other relatively trivial things which could just be bits of static data. It may not even be generating code - just returning the closest match. If it is generating code, it's not very useful yet unless you know exactly how to speak those sweet-nothings. I asked ChatGPT about a week ago to create a website in the style of Wikipedia with 4 page-sections relevant to simulation-theory. It gave me an HTML tag with 4 empty DIV elements - nothing else. No other structure, no content, no styling, no mock-up of interactive elements.
@@restingsmirkface You might have to do some "prompt engineering".
When I try ML- and statistics-related stuff, I often just copy textbook formulas. The copied text is obscure for humans, but somehow ChatGPT is able to understand it. Also, it is really hard to ask for Python code for neural networks because it forces the use of external packages. C doesn't have external packages, so I often ask ChatGPT to write C code and I translate the code to Python or Julia.
Agree. I noticed that although AI chatbots like ChatGPT can write complex Python programs (I asked it to create simple neural-net chatbots in TensorFlow/Keras), the code is often buggy, and it has a hard time fixing the bugs if you ask it to.
@@Siroitin this is very interesting, ChatGPT has a better hit rate when it comes to writing in C?
I’ve only tried Python so far, will have to give this a go
I’m legally mandated to pitch my startup… that’s all I needed to know.
Did you work for free?
Great presentation! Thank you!
One nitpick: 19:23 "average lines of code checked in per day ~= 100". I can tell you that is not what average SWEs in Silicon Valley do. ~10 lines/day would already be pretty good.
"If the dev is not using copilot then he's fired". Tell me you never worked in a commercial application without telling me you've never worked in a commercial application.
What do you think hes writing? Personal pet projects? Lmao.
@@jak3fHe's marketing. Not writing.
I wager that Code Assist with Gemini 1.5 is much better than Copilot now.
@@jak3f Have you ever heard of copyright law? Are you seriously unaware that federal courts have already ruled that AI generated output is ineligible for copyright protection?
@@gaiustacitus4242 good luck proving that
Dr. Matt Welsh points out the crucial point about AI in programming: the better it gets and the more we trust in it, without actively knowing how to code or knowing how it does what it's doing, the more we lose power over our daily automatic routines. Imagine what a risk AI-generated code would be in a nuclear power plant. I think this talk is rather a great wake-up call for learning how to code and coding inside AI instead of just letting it go.
Humans are fundamentally lazy and default to the option which takes the least energy and effort. Meaning, most people will try to automate their own work as much as possible. AI learns from this and gets increasingly better, until a human-in-the-loop is not needed anymore. Eventually, AI might even be better than humans at programming. As for the nuclear power plant, I don't know; it depends on how reliable the system is.
Except in 5 years, you might be saying the opposite. Humans introduce error inherently. Think how much better AI is at programming now than it was 5 years ago; give it 5 more years, and writing human code will seem like the insecure, risky option.
@@gordonramsdale My take: a good chunk of software bugs exist because requirements were not refined well enough by the engineer breaking down the work. They make assumptions and write code that does something it shouldn't. With good testing no real bugs get into the system, and we have modern compilers that remove the issues with syntax errors. AI coding will likely produce the same errors and make the same types of assumptions humans make when working with poorly defined requirements.
Nuclear power plants have a strict design and review process that is fully vetted. So I would not worry about this specialized software, aka AI, in this application.
@@dblezi Hi, I think I understand what you are saying. But then again, what does fully vetted mean in that context? We also have a review process where each Merge Request is fully vetted, but still, errors can slip through. AI MRs might slip through more easily.
In almost all scenarios, AI represents an "it runs on my machine" approach to problem-solving - a "good enough", probabilistic mechanism.
But maybe that is sufficient. We get by in the world despite uncertainty at the quantum level... maybe once _everything_ is AI-ified, the way we think about the truth will shift just enough, away from something absolute and concrete, to something probabilistic, something "good enough" even if we'll never be sure it's at 100% outside of the training-sets run on it.
> the way we think about the truth will shift just enough, away from something absolute and concrete, to something probabilistic, something "good enough"
This is a deep insight. Many great minds of the western philosophical tradition have expressed this view in one way or another. In fact it's the school of thought known as American Pragmatism (which is known as the quintessentially "American" school, in philosophy circles) which most closely aligns with this view.
Some pithy quotes about truth from the most notable figures in Pragmatism:
- William James (active 1878-1910): “Truth is what works.”
- Charles Sanders Peirce (1867-1914): “The opinion which is fated to be ultimately agreed to by all who investigate is what we mean by the truth.”
- John Dewey (1884-1951): “Truth is a function of inquiry.”
- Richard Rorty (1961-2007): “Truth is what your contemporaries let you get away with saying.”
dockerize AI problem solved xd lmao
I believe that in the short term there will be a shift in both time and focus from coding a solution to the architecture design, testing, and security of that solution.
Architecture is KEY
Architecture is nothing but the application of known patterns and reasoning about tradeoffs. I use ChatGPT for my architecture challenges all the time, and I'd say that though it's not perfect, it's already doing a decent job. It will get even better, exponentially better.
agreed
Last week I was working on some Rust code that had to deal with Linux syscalls, and ChatGPT gave incorrect data on every single question. There are limits to how well trained it can be based on the amount of data it was trained on. It's good for common problems, not so much in the niche environments that real SWEs deal with daily. It just makes JS bootcamps obsolete.
Now imagine if all the code for plane control computers were generated, as he suggests, without a person in the loop. Good luck flying that. Until AGI is here, we can't talk about any of this.
It's true, but only for now. What about the evolution of these models over 5, 10, or 15 years? BTW, no model yet receives data directly from the physical world. And sooner or later, it will happen.
@@danri9839 it's a fuzzy black-box system. Until we have AGI, it's just marketing hype that they are smart, while in reality the precision isn't there when there was little training data.
@@danri9839 The problem is that large language models get data from the world but can't figure out on their own what to keep and what to drop, what's useful and what isn't. Right now, humans decide for them. If we want models to make their own choices, they need to understand what's right and wrong, which in itself is already complex even for humans in a lot of cases.
you're the 927483927839273th person I've seen write this comment. You sound like the crypto bros who promised an unprecedented economic crash and claimed the blockchain would revolutionise everything... and yet.
I tried to generate Rust code for a relatively trivial problem (the 8-puzzle) and its solution was wrong and didn't compile. I fixed the compilation errors and the solution was still terrible because it used Box::new(parent.clone()) every time a child node was generated (very, very inefficient). I had already written the code myself so it was easy to spot these errors, but I really can't see how ChatGPT is supposed to write code better than humans...
"The line, it is drawn, the curse, it is cast
The slow one now will later be fast
As the present now will later be past
The order is rapidly fading
And the first one now will later be last
For the times, they are AI-changin'"
Learning to code right now and I can definitely say this has not made me give up it only shows me the cool tools I will one day be able to build.
Back in the real world, you still need to double check the code generated by copilot which often is wrong. I'm not sure if I'm bad at using copilot or the people using it are simply not checking what has been generated.
Not to mention, none of the large companies are willing to use a version of copilot that allows it to send the learned data from their private repos back home for obvious reasons.
that's the problem I find with AI-generated code. You have to verify it, which is a task that takes as much, if not more, effort than writing the code by hand.
@@Peter-bg1ku wrong
wrong
@@cardiderek what do you mean?
@@Peter-bg1ku that isn't the problem to worry about. We are so close to solving hallucinations.
There's SO much to SWE jobs aside from just coding, like collaborating with product and design, understanding business needs, convincing management that something is worthwhile. Additionally, someone will need to review the AI code, deal with legacy code, set up services, etc.. I view these AI tools as tools that will make everyone's job more productive but not necessarily replace.
The cope is real.
@LupusMechanicus Anyone who thinks an AI can help anyone write a program to solve problems hasn't worked in the field at all. More often than not a person will bring a problem and their ill-conceived solution. Then the experienced software engineer will discuss the original problem, propose alternate solutions, ideas that still solve the problem but make better use of resources (memory, time, etc) and provide a useful and intuitive workflow. That IS part of being a SWE, and if you think an AI is going to do that naturally and simply, you are out of touch. Call others "cope" if you want, but perhaps educate yourself more than watching a YouTube video by a guy desperate to sell his product.
@@TomThompson Bruh, try to build a house profitably with just your fingers. You need a saw and air hammers, lifts and screw guns. Thusly you can now build a million-dollar house with 8 people in 6 months instead of 40 in 1 year. This will eliminate a lot of employees, thusly it is cope.
@@LupusMechanicus You again miss the point. No one is saying the industry won't be affected; it will. What we are saying is it is uninformed to say the industry is "dead" because of AI. Just look at the history. The job has gone from being primarily hardware based (setting tons of switches) to using a machine-level language (assembly). Then gradually to higher-level languages (Fortran, COBOL, C, etc). Then we have gone through adding IDEs and lint, and code sharing, and review systems. The introduction of AI will not replace everything and everyone. It will be a tool that will make the job easier. And yes, it could easily mean a company that currently has 100 engineers on staff can gradually cut back to 10. But it also means other jobs will open up in areas such as making these AIs and making systems that make using them easier.
The invention of the hammer didn't kill the home building industry.
There won't be legacy code anymore; with a computer that writes the code, having people understand the computer's code sounds pointless. Do you need to know your router's code in order to use the Wi-Fi?
If programmers will get replaced, who will not get replaced? Programming is one of the most difficult fields for humans. If most of it can be automated, most of everything else can be automated too. This AI revolution won't affect just programmers, it will affect everyone. Programmers are more aware of it than the average person though.
It might still take 20 years for us to see AGI. Probably way less, but nobody really knows.
Manual labour isn't going to be replaced. Nurses, waitress, handyman, plumber... shit like that
@@BARONsProductions Eventually it is, unless we specifically want humans for the roles. Machines will do everything better once we get to artificial superintelligence. We will probably get it before 2040, but who knows, it could take way longer. Also, people need time to adapt to technology. When something is invented, it doesn't get immediately applied on the practical level.
@@BARONsProductions if anything manual labour is going to be replaced faster due to the repetitiveness of their roles.
@@BARONsProductions those jobs are more likely to be replaced than programmers
The physical labour will take more time. For that, actual physical robots should be built that won't be any good for like 10 years at least (I believe). Yeah the digital ones are ones that will take the hit first.
His startup is completely based on a Javascript framework. You don't have to use an LLM to tell you that was a bad idea.
Who said you can't use javascript for ML?
@@godismyway7305 No one did.
🎯 Key Takeaways for quick navigation:
00:00 🍕 Introduction and Background
- Introduction of Dr. Matt Welsh and his work on sensor networks.
- Mention of the challenges in writing code for distributed sensor networks.
01:23 🤖 The Current State of Computer Science
- Computer science involves translating ideas into programs for Von Neumann machines.
- Humans struggle with writing, maintaining, and understanding code.
- Programming languages and tools have not significantly improved this.
04:04 🖥️ Evolution of Programming Languages
- Historical examples of programming languages (Fortran, Basic, APL, Rust) with complex code.
- Emphasis on the continued difficulty of writing understandable code.
06:54 🧠 Transition to AI-Powered Programming
- Introduction to AI-generated code and the use of natural language instructions.
- Example of instructing GPT-4 to summarize a podcast segment using plain English.
- Emphasis on the shift towards instructing AI models instead of conventional programming.
11:26 🚀 Impact of AI Tools like CoPilot
- CoPilot's role in aiding developers, keeping them in the zone, and improving productivity.
- Mention of ChatGPT's ability to understand and generate code snippets from natural language requests.
17:32 💰 Cost and Implications
- Calculation of the cost savings in replacing human developers with AI tools.
- Discussion of the potential impact on the software development industry.
20:24 🤖 Future of Software Development
- Advantages of using AI for coding, including consistency, speed, and adaptability.
- Consideration of the changing landscape of software development and its implications.
23:18 🤖 The role of product managers in a future software team with AI code generators,
- Product managers translating business and user requirements for AI code generation.
- Evolution of code review processes with AI-generated code.
- The changing perspective on code maintainability.
25:10 🚀 The rapid advancement of AI models and their impact on the field of computer science,
- Comparing the rapid advancement of AI to the evolution of computer graphics.
- Shift in societal dialogue regarding AI's potential and impact.
29:04 📜 Evolution of programming from machine instructions to AI-assisted development,
- Historical overview of programming evolution.
- The concept of skipping the programming step entirely.
- Teaching AI models new skills and interfacing with software.
33:44 🧠 The emergence of the "natural language computer" architecture and its potential,
- The natural language computer as a new computational architecture.
- Leveraging language models as a core component.
- The development of AI.JSX framework for building LLM-based applications.
35:09 🛠️ The role of Fixie in simplifying AI integration and its focus on chatbots,
- Fixie's vision of making AI integration easier for developer teams.
- Building custom chatbots with AI capabilities.
- The importance of a unified programming abstraction for natural language and code.
39:14 🎙️ Demonstrating real-time voice interaction with AI in a drive-thru scenario,
- Showcase of an interactive voice-driven ordering system.
- Streamlining interactions with AI for real-time performance.
44:55 🌍 Expanding access to computing through AI empowerment,
- The potential for AI to empower individuals without formal computer science training.
- A vision for broader access to computing capabilities.
- Aspiration for computing power to be more accessible to all.
46:49 🧠 Discovering the latent ability of language models for computation.
- Language models can perform computation when prompted with specific phrases like "let's think step-by-step."
- This discovery was made empirically and wasn't part of the model's initial training.
48:17 💻 The challenges of testing AI-generated code.
- Testing AI-generated code that humans can't easily understand poses challenges.
- Writing test cases is essential, but the process can be easier than crafting complex logic.
50:40 🌟 Milestones and technical obstacles for AI in the future.
- The future of AI development requires addressing milestones and technical challenges.
- Scaling AI models with more transistors and data is a key milestone, but there are limitations.
54:23 🤖 The possibility of one AI model explaining another.
- The idea of one AI model explaining or understanding another is intriguing but not explored in depth.
- The field of explainability for language models is still evolving.
55:44 🤔 Godel's theorem and its implications for AI.
- The discussion about Godel's theorem's relevance to AI and its limitations.
- Theoretical aspects of AI are not extensively covered in the talk.
56:42 🔄 Diminishing returns and data challenges.
- Addressing the diminishing returns of data and computation in AI.
- Exploring the limitations of data availability for AI training.
58:34 🚀 The future of programming as an abstraction.
- The discussion on the future of programming where AI serves as an abstraction layer.
- The potential for future software engineers to be highly productive but still retain their roles.
01:04:12 📚 The evolving landscape of computer science education.
- Considering the relevance of traditional computer science education in light of AI advancements.
- The need for foundational knowledge alongside evolving programming paradigms.
Made with HARPA AI
000p
Damn that's niiiice!! It's like Merlin?!
@@sitrakaforler8696 better :)
Before thinking of AI use in society, we must agree on who will profit from it, who will own it, and who will pay for the mistakes of the AI. Is it going to be like, "Oh well, bad luck" when AI ends someone's life?
@@reasonerenlightened2456 you guys need to stop thinking of AI as some conscious thing; it is just like a knife or a gun. It is entirely about who is using it and with what intent.
The problem with LLMs is that they cannot independently solve computationally irreducible problems. So there has to be an interaction between classical computation and LLMs, in symbiosis. So I do not agree that computer languages should disappear completely. Also, right now checking Google is much more energy efficient than prompting ChatGPT. So there are energy-efficiency issues. When you build apps with AI, somebody has to pay the token bill.
> The problem with LLMs is that they cannot independently solve computationally irreducible problems
It can write programs that do. For example, this is what the current GPT-4 can do on the normal OpenAI chat website (can't post the URL to the conversation because of the YT spam filter). I asked: "Hey there! Can you give me a word which has an MD5 hash starting with `adca` (in hex)?"
I've chosen adca, because those were the first four hex letters in your name. This is likely not in its training set.
The model was "analyzing" for a bit, and then replied
> A word whose MD5 hash starts with adca (in hexadecimal) is '23456'. The MD5 hash for this word is adcaec3805aa912c0d0b14a81bedb6ff.
You can see how it answered: it wrote a Python program to solve it. I didn't need to prompt it to do that; it knows, like a human, that it should pass these classically computationally irreducible problems off to a classical computer.
And yes, there's still programming involved, but like, my 16 years of experience with computer science didn't help me at all, except in terms of coming up with an example.
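(For anyone curious what such a helper looks like: here is a minimal sketch in Python of the kind of brute-force script the model presumably generated. The function name and the lowercase-alphanumeric search alphabet are my own assumptions, not what GPT-4 actually produced.)

import hashlib
import itertools
import string

def find_word_with_md5_prefix(prefix: str) -> str:
    # Try short alphanumeric strings until one's MD5 digest starts with the given hex prefix.
    alphabet = string.ascii_lowercase + string.digits
    for length in range(1, 6):  # widen the search if no short match exists
        for chars in itertools.product(alphabet, repeat=length):
            candidate = "".join(chars)
            if hashlib.md5(candidate.encode()).hexdigest().startswith(prefix):
                return candidate
    raise ValueError("no match found in the search space")

print(find_word_with_md5_prefix("adca"))

A 4-hex-character prefix matches roughly 1 in 65,536 candidates, so a search like this typically finishes in well under a second.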
No-code applications getting better and AI getting better makes it look like a program-less future is really close, or a near program-less one at least. Eventually AI will be better, faster and cheaper than any human by a large margin.
What you forgot to add is "YET".
The fact that he stays away from addressing the "most important" problem, as he puts it at the beginning of the talk (that of CS education in the future), makes it sound like just empty talk... Unfortunately, I had to watch the entire thing to realize this...
Professor: Ai will replace all programmers
Students who took student loans to become programmers: 👁️👄👁️
Professor: Programming sucks, let's let the robots do it!
I don't understand why people think professors know anything about programming. They have no time to get real practice.
yep. Pretty harsh reality
Not the case tho, at least not now lol. AI is not even close to taking programmers' jobs; AI is not very good at programming, just very basic functions, and it can't put the pieces together.
@@lmnts556 Are you sure? It can do a lot of stuff. Then you have all the no-code solutions. Then you have all the SaaSes and libraries. In the end, you need 1 engineer to build a platform instead of 100. "At least not now" can mean in 5 years (which is very realistic).
Something I did not understand was how would Computer Science become obsolete? So okay, you replace programming with prompting. But who will develop all those magical models that you are prompting? Aren’t they built by computer scientists and SWEs?
What I mean is, if you are bold enough to claim programming will become obsolete, then doesn’t that mean learning mathematics and physics would also become obsolete? Like I could just ask some AI model to develop what I need in the context of physics and mathematics… and won’t need to understand the dynamics of those sciences, I just need to know how to speak English and ask for something.
Note: I actually can see programming becoming more automated. But Computer Science? I can’t see that happening… aren’t we supposed to understand how do computers and AI work? Should they be seen as black boxes in the future?
Also, programming would still not be fully automated because it’s weird to believe that an ambiguous sequence of tokens (English language) can be mapped with precision to a deterministic sequence (code) without any proper revision by a human… what if AI starts to hallucinate and not align with human goals? At best we would create a new programming language that is similar to “Prompting”…
What are your opinions on these?
My opinion is that before any rational action there is an emotional action. So the decisions you can write in a prompt cannot all be accurate.
My take is that technology will automate further and transform, and humans will have the opportunity to use more of their creativity and thus become more human!
There are two main concepts that you need to wrap your mind around:
1) Ease of use, 2) Programming as a tool
When Welsh talks about 'the end' of programming, he means the future mass adoption of LLMs by people to program for them instead of programming themselves, due to ease of use. Essentially, LLMs will be the new user interface for people to use programming languages, so the need for expert programmers will be limited to specialty roles in the future, like "how can I write an API for LLMs to interact with" or "how can I make this LLM check that another LLM works properly?"
Obsolete is not the right word here, as you can see Welsh using copilot himself even though he is still technically a programmer. It's just the science of writing code by hand will be displaced by prompting to ask an AI to manipulate code for you. For now, you need to read the code the LLM wrote to use it, but in the future, it might as well be a magical black box that does x for you, testing and implementation included.
Or in other words:
LLM's are going to be easier to use than programming by hand, and LLM's will use coding as a tool instead of people. Computer science is then the art of getting better code from LLMs instead of getting humans to write code faster and better.
You are right. These people will still be needed. But AI might reduce the number of such positions down to
Not only that, who develops all the connections between LLMs and all the existing systems? Who will replace existing systems that nobody knows what they are doing with systems that can use AI? In the short term at least, I foresee more programmers being needed, not fewer.
I for one will be glad when the people who think that "programming sucks" and "no progress has been made in 50 years" will actually give up and leave the field, they have no idea what CS entails. Computer Science is about computer programming like Astronomy is about looking through telescopes.
The thing with LLMs is that they're just another level of abstraction. If you take a product's documentation as the highest level of abstraction describing how that product should behave, to have it correct you still need to describe all the corner cases and the way some things should be done; you can't just say "this page should show a weekly sales report". And all this documentation might not be easy to understand. Code is just a very precise way to describe behavior.
Do you trust close friends who know you well to give you a decent result when you ask them "this page should show weekly sales report"?
@@wi2rd you understand how documentation works, right?
From your logic, it would be impossible for a non-technical project manager to instruct developers on how the application should be programmed.
AI can ask clarification questions to make the requirements clearer. It can do long-term back-and-forth conversations with the whole context of the project.
It's not just inputting a single prompt and the project is done
@@MaiThanh-om5nm Non-technical and people with low abstraction for the field usually will instruct on how something will behave instead of how something is to be programmed.
Also project managers manage the team time etc, architects, developers and engineers with know-how to translate expected behaviour from clients to technical field are the ones who instruct how it's programmed. Lots of developers are able to understand what a client want without an intermediate, because developers are system users as well and know what could be better on apps and what they'd like to see, expect etc, also you can see freelancers and github projects all around without a project manager etc, confirming they would understand it anyway with or without those helpers.
the 'gotcha' in using AI is that we need to know if the code is right or not
so we need to know the basic stuff
For now, eventually it will be able to write perfect code on its own, reducing the need from 100 software engineers to 5-10
What is the basic stuff in a pyramid of abstractions? Assembly code?
@@augustnkk2788 I don't think it'll replace all good software engineers so soon. And I really don't think it will get to a point of always generating perfect code.
@@tiagomaia5173 Itll replace maybe 90%, some still need to make sure its safe, but no one will work in wed dev f. ex; all tech work is gonna be about AI, unless the governemnt steps in. I give it 10 years before it can replace every software engineer
you have the confidence of someone who doesn't know what they're talking about
My main question is: in which of the LLM ai startups is he an investor?
crossed my mind lol
Please listen to the talk with an open mind, and face this as reality.
He literally says at the end: he's pitching his own AI startup.
We are not yet at the stage where one can ask ChatGPT-4 to write ChatGPT-5, at least as far as I know. Also, if you ask ChatGPT-4 to produce the model of the physical world unifying general relativity with the standard model, you will notice it struggles quite a bit and does not deliver. Those models cannot just create new knowledge, or at least not in a scientifically proven way. Maybe through randomness they will to some extent, but let's see.
You need code to build. God coded humans, we code businesses. Just using language to create code doesn't mean coding is obsolete.
AIs are making some breakthroughs in science and math already. Look up the new matrix multiplication algorithm discovered by an AI.
Well, the code for chatGPT5, at least for the model as such, is likely not very complicated, so chatGPT4 might be able to write it. Someone has to tell it what the program should do, though. At this point, that would be a human.
That's because there has to be an overseer. Like someone else stated, God created mankind and this ecosystem. Men manipulated and created based on this ecosystem. The creations of Men didn't invent themselves. The best a specialized piece of AI software can do is create derivatives of the digital data already known to said AI model. Look at art, for instance: many AI models steal and scan what mankind created to make a model. An AI model would never create a Star Wars, Blade Runner or Mass Effect story/universe out of the base coding blocks which dictate how the software runs. AI needs to plagiarize to create. It's just that these plagiarized derivatives with procedural generation fool many normies into thinking it's so great.
@@dblezi could you please clarify "has to be"? Where does that knowledge come from? What's the logic explanation? What does "an overseer" mean? What does "an overseer" do, in practical terms?
I love Prof. Malan for maintaining such a badass YouTube channel!
I’m an AI Business Owner - It’s great to know how to program even if programming is obsolete due to AI, you can use code as an asset. I created a model that uses Python to solve any math equation. Could’ve used Google, but using Python makes the solution more accurate and near instantaneous.
Can you share a reference to your model?
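(Regarding the comment above about having a model use Python to solve math equations: here is a minimal sketch of the general idea, assuming the sympy library as the solver; the equation is just an illustrative example of mine, not anything from that commenter's product. The point is that delegating the math to a symbolic library gives exact answers instead of a language model's token-by-token guess.)

import sympy as sp

# Solve x^2 - 5x + 6 = 0 exactly, rather than asking the LLM to "predict" the roots.
x = sp.symbols("x")
equation = sp.Eq(x**2 - 5*x + 6, 0)
print(sp.solve(equation, x))  # [2, 3]

# Exact rational arithmetic avoids floating-point and hallucination errors alike.
print(sp.Rational(1, 3) + sp.Rational(1, 6))  # 1/2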
12:57 that's exactly right. The way I've been describing using GPT-4 for swe is that whereas I used to have to stop to look up error messages and read documentation, now I can ask GPT-4. GPT-4 smooths out all the road bumps for me so I can keep driving.
Except when it doesn't. But sure, spending an afternoon with Copilot can often save 5 minutes of RTFM.
@@miraculixxs "Hello Chat GPT, please read this F manual for me"
That has been the most professional Ad Break I have ever seen in my life. HAHA
Thank you CS50 team for sharing this with all of us
About a prompt program:
- Can you reason about its performance and class of algorithmic complexity?
- Can you reason about the resources required to run it, like RAM?
- Can it process more data than fits into RAM?
One day it will, but not yet...
I agree; the biggest problem with humans in programming is how we mentally map how to solve problems. Code reviews can be a huge waste of time if you don't have it in you to push back. It truly makes me wonder about the ROI for companies hosting a lot of the software development ceremonies today.
Code review is all about regression to the mean
@@jamesschinner5388 But we probably haven't got a single methodology to arrive at the mean. Our individual means are terribly diverse.
I took a clip of the FORTRAN code and sent it to GPT-4 Vision and asked it what the code did but it could not tell me because the pictured code was incomplete. Understandable. I sent it the BASIC code and it got it right. I asked it if the name CONWAY helped with its answer. It said No. I started a new chat and sent the BASIC program without the program name. It got it right. I sent the APL program and it didn't recognize the language or understand it at all, even that it was a programming language. I told it the language was APL and it got it right. Pretty cool.
Because they are somewhere in the training set, the presenter got them from somewhere I would assume.
@@reddove17 The best of them are good enough to recognize a program that was not directly in the training set. Of course something about the program is in the training set e.g the idea of Conways game of life (or whatever it was), but that piece of code itself doesn't need to be in the training data for it to be able recognise it.
Great sales presentation!
Me: Asks chat gpt to help me with a bug I am facing in my code.
ChatGPT: Returns my exact same code
(This was a joke)
Ahah yeh, same sh*t happens to me too 😂
true broo... happened to me a few days ago
In this way ChatGPT hints that the main bug in your code is you :)
GPT 3.5 I'm guessing? Try 4. People keep coping by saying it doesn't work but are using the outdated model or have poor instructions.
Try 4, and if that doesn't improve things, you need to work on your prompt engineering.
🎯 Key Takeaways for quick navigation:
01:23 🚀 The field of computer science is undergoing a major transformation where AI models like GPT-3 are being used to write code, marking a significant shift in programming.
06:54 💻 Natural language is becoming a key tool in programming, allowing developers to instruct AI models to generate code without the need for traditional programming languages.
14:47 📈 AI technology, like GPT-3, has the potential to significantly reduce the cost of software development, making it more efficient and cost-effective.
20:52 🤖 The rise of AI in programming will likely change the roles of software engineers, with a shift towards product managers instructing AI models and AI-generated code.
23:46 👁️ Code review practices will evolve to incorporate AI-generated code, requiring a different kind of review process to ensure code quality and functionality.
24:41 🤖 Code maintainability may become less essential with AI-generated code, as long as it works as intended.
25:58 📊 The rapid advancement of AI models like ChatGPT has transformed the computer science field and its societal expectations.
29:04 🌐 Programming is evolving, with AI assisting humans in generating code, and the future may involve direct interaction with AI models instead of traditional programming.
33:44 💬 The concept of a "natural language computer" is emerging, where AI models process natural language commands and perform tasks autonomously.
45:52 💡 The model itself becomes the computer, representing a future where AI empowers people without formal computer science training to harness its capabilities.
49:15 🤖 AI-generated tests are becoming more prevalent, but there's uncertainty about the role of humans in the testing process.
51:07 🧩 The future of AI models relies on the increased availability of transistors and data, which may require custom hardware solutions.
52:06 🤔 Formal reasoning about the capabilities of AI models is a significant challenge, and we may need to shift towards more sociological approaches.
54:23 🤖 Exploring whether one AI model can understand and explain another model is an intriguing idea, but its feasibility remains uncertain.
59:30 🧠 While AI may make software engineers more productive, certain human aspects, like ethics, may remain essential in software development.
Made with HARPA AI
scary accurate summary ...
Well played
I wonder who is the bigger fool, those that listen to the speaker or the speaker?
chatGPT is trained by the wealthy for the benefit of the wealthy.
That's scary.
Thanks!
A great lecture/talk, illuminating and informative. As a practitioner, I find it very true and relevant.
Agreed!
Agreed. There's a lot of push-back against his message in the comments, but I'm already seeing it happen within tech companies where, for example, 10% of employees are let go and the ones staying are now doing several of those roles, along with their own, all by using AI.
surprise surprise, guy selling the shovel says gold rush is the best.
Yep... noticed the same.
Even if robots generate code, you would still want it to have less duplication and some abstractions, because it will lower the amount of context tokens required to modify the code.
You would probably also want to keep interfaces between regenerations, because you would like to keep the tests from the older version...
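(To make that concrete, here is a minimal sketch in Python of what "keep the interface, keep the tests" could look like; the weekly_sales_report function and its data shape are hypothetical examples of mine, not anything from the talk. The signature and the test stay fixed while an LLM is free to regenerate the body between versions.)

# Interface kept stable across regenerations; only the body may be rewritten.
def weekly_sales_report(orders: list[dict]) -> float:
    # Return total revenue for the given orders.
    return sum(order["price"] * order["quantity"] for order in orders)

# Test written once against the interface, reused to validate every regenerated body.
def test_weekly_sales_report():
    orders = [
        {"price": 10.0, "quantity": 2},
        {"price": 5.0, "quantity": 1},
    ]
    assert weekly_sales_report(orders) == 25.0

As long as the generator keeps that signature, the old test suite can keep vetting each regenerated version, and the stable interface also keeps down the amount of context you have to feed back into the model.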
You’ll need to code the robot, or code a solution to code into the robot. It’s deeper than these people understand
No you don't, they can write optimized code. That's literally the whole point of AI: it's an optimization problem, "adjust my weights to reduce the cost function", and code duplication can be yet another parameter.
I didn't hear him get into the topic of consistency and feature updates. How about performance based programming for games and ultra efficiency? Or shower thought innovations that create entirely new paradigms and ways of approaching problems? AI might be able to do some of this eventually, but I doubt it will be as rosy as he imagines.
yeah, like 99% of people don't invent new paradigms or ways of approaching problems. The vast majority of people in software will be out of jobs, with maybe a few hyper-PhDs sticking around.
stay fappin, fappy. It's not going to happen. Maybe the soydev MacBook-in-Starbucks React bros will get replaced, but true programming that actually requires deep knowledge? Not happening.
The biggest red flag was there at the start: the beginning of the video description says that gpt can do general purpose reasoning. It's neither general purpose nor can it reason
Hmmm I think It is both general purpose and can reason
then you should go to a mental health professional
The problem with LLMs in generative AI is that in 5 years' time, the AI will be learning from a large percentage of data that other AIs have generated, and even further down the road, how do we know what is real data and what is generated?
We still need humans to recognize what is fake. The creativity from AI must make sense when the goal for that specific data requires precision, like in the medical industry or other industries where lives are at stake.
It's been established already that synthetic data is superior for training LLMs, compared to raw human data.
I mean, think about it, does the open web not have data that is bad? Well, ChatGPT was trained on it and it does pretty well. Synthetic data has been proven already to be superior to that, so simply training the next iteration of the LLM on synthetic data is going to get us to the next step.
@@verigumetin4291 What about fake news or lobbyist outlets? Or books/art generated on someone else's copyright? What if bad actors create fake generated data for their own nefarious purposes, and these scammers or spammers constantly create this fake data? You can already make a fake Obama dancing to "Livin' La Vida Loca". How would the AI know what's real or fake once these generative AIs become more skilled? Years down the road, our newer LLMs may not know the difference and use this data to train. We already got bad science news regarding mask wearing and vaccinations. This will become worse when the less-than-average-intelligence human believes in nonsensical data in a world where such synthetic data will be practically spam.
@@verigumetin4291 GPT-4 is getting dumber according to Stanford research.
@@verigumetin4291 Do you have any source for that? Preferably a peer-reviewed paper rather than some „research“ by Google or OpenAI published by themselves.
I am asking because what you are saying does not make any sense to me.
@tybaltmercutio I think he is talking about the Orca 2 paper
It’s a lot to expect everyone to know what they want to enter into a query. It will take some time for the query interface to truly be inviting. I’m also mildly concerned that AI will grow impatient with us end users and spit out something we may not want and will simply say “deal with it 😎”
Seems like an AI that is owned by a company that makes a profit would train it not to do as you describe, since that would drive people away. Chat GPT, in its current state, is incredibly patient, and that is one of its most striking and valuable features. I don't think that's an accident.
@@robbrown2 GPT isn't patient, and doesn't think. All it does is propose the most statistically likely word that should come next given a user provided context.
This isn't AGI, it's a predictive model. I'm not trying to be mean or critical, but you need to understand this if you want to use the tool efficiently.
@@robbrown2 It will literally return the statistically next most likely token as soon as it is physically able. What is your definition of patient for this to meet it?
They won't write, but just discuss the final product with the AI while it builds it. No writing is needed/wanted for future programming.
@@robertfletcher8964 The way you've characterised it undersells it quite a bit by saying the stuff about "statistically likely". Don't forget RLHF (Reinforcement Learning with Human Feedback) where many undesirable styles the model might do are weeded out and the model is steered towards answering in a way humans prefer. You say it spits out statistically likely within user context but you seem to not be considering that part of that user context could be "patience", the very thing that you seem to be alleging that it can't do.
It started out interesting, but it's just an ad for (yet another) GPT wrapper.
Welcome to the new era of debugging.
This is a good video for high school students to watch so they're careful about what they choose when they go to college; they might not only think twice about CS but go for something that won't be replaced by AI soon. Our era is tough, and it isn't getting any easier.
This is a good marketing video for selling his own software by bashing programming and calling it annoying (47:25).
Over time this professor is absolutely correct. I have been a developer since the late 1980s.
Maybe it takes a developer from the late 1970s or the early 2020s to understand how this professor is wrong.
I've been doing it since the 90s and I disagree with him.
38:17 What is considered kid-safe? Based on what milestones? Emotional? Psychological? Etc.? You need to know which child development sources are peer reviewed, etc. Yes, you could ask the AI for those, but then you'd need to ensure they were not hallucinations. Etc.
Great lecture! I've been writing code professionally for 20 years and I feel like Copilot is at the level of a first-year university student learning IT. Not a perfect co-worker, obviously, but much better than basic autocomplete in your IDE or some other tools you could use. I'm fully expecting to see Copilot rapidly improve so much that I write all my code with it. Right now, I feel that it can provide some support already, and with a fast internet connection, having it available is a good thing.
Most of the time Copilot writes slightly worse code than I could write myself, but it's much faster at it. As a result, I can do all the non-important stuff with the slightly lower-quality code that Copilot generates so I can focus my time on the important parts only. I'd love to see Copilot improve to the point where the easy stuff is perfect.
Copilot is terrible though. GPT-4 is 50x better. In comparison, Copilot is unusable.
Edit: number is obv made up from what it feels like
@@ndic3 Can you get GPT-4 integrated in your code editor?
I've been programming for 40 years of my life, professionally for about 24 years. I absolutely love coding with ChatGPT. But what people don't get is that architecture still matters. You are still accountable for the code working out. You still need a picture of the system as a whole. You still need to get what's going on. You still need to understand algorithms, you still need to be able to perform calculations on performance and resources. You still have to know stuff. You have to put the pieces together into a working whole. And the appetite for software is near infinite.
I don’t think people quite get that.
Chat-GPT can’t do it all for you, by a long shot. Chat-GPT is a great intern. But you can’t make Excel with even two hundred interns. Not even a thousand interns can make Excel. There are other problems.
And I am not saying that one day we won’t have AIs that can fully replace competent programmers. We probably will- one day. But that day is not today, and it is not even tomorrow.
What I tell young people who are afraid, “but will there even be programmers in ten years?” I tell them, “maybe not, but I can tell you this: It has never been easier to learn programming, than it is today. You can ask anything of Chat GPT, and it will answer for you. If you know one programming language, you can now write in any programming language. The cost of learning to program has dropped incredibly. And the money is right- right over there.”
@@ndic3 Copilot is based on GPT-4
The speaker here is pushing for a paradigm of "LLMs as a compute substrate" and "English as a programming language", which I definitely see the value of. Certain programs would be easy to express in English but nearly impossible to program using traditional languages. Of course the paradigm does happen to benefit his startup, but to claim that this will spell the end of software engineering as we know it is absurd.
First of all this requires disregarding decades of research into system design principles which call for modularization and separation of concerns, in order to make systems more legible, easier to debug, easier to maintain. I wouldn’t want key operational software that’s an inscrutable black box that requires “magical” phrases to do the right thing.
Just because an LLM is writing the code doesn't invalidate the need for proper design. Software engineers are taught design principles for a reason; not just to make their code easier to read and understand by humans, but also to make it easy to debug, extend and adapt.
Second, just because it’s easier to program now using just English it doesn’t mean that software engineers are no longer needed. How would you evaluate the correctness of the software generated by the LLM? How would you improve its performance? That requires understanding logic, probability, algorithmic complexity, algorithmic thinking, and a plethora of other software engineering skills taught in college.
In my opinion it makes the need for highly trained engineers even more important
Indeed, especially as we already have at least 2 (very close to plain) English programming languages that have been around for > 50 years and are widely used: SQL and COBOL.
For small examples, both are great to write, understand and efficient.
But for real-world problems, both are complicated, hard to understand, and need a computer science education (at least to some extent) to get your job done.
We even deprecated COBOL, which is as close to English as possible, especially as it gets very verbose and so becomes harder to understand again compared with more formal languages.
The problem is not writing the code, but being explicit enough so you really get what you want. And independent of technical constraints, the requirements engineering is still engineering, and even if the output is plain English, just read any formal document and you'll find out it's not simple English. That's true even outside engineering, in law, standardization documents, pharmaceutical documents, or, to come back to programming, RFCs.
There's probably a reason why the presenter didn't show a prompt for writing Conway's Game of Life via ChatGPT that doesn't already lean on external knowledge. Once you have to define it accurately, it's probably not much shorter than the Fortran or BASIC example, and it might even be less readable than the Rust version he showed there. The usual textbook descriptions either use images to explain what's going on (which won't work in general), or they just describe it mathematically and would map 1:1 to the APL version he presented. It just sounds easy because we are used to the concept, but what is a cell, what is a neighbor, how big is the sheet, when does the game end, what does a round mean, what is the initial state, what does it mean to survive or to create new life, how is it output, and what do we optimize for? None of that is trivial to explain unless the concepts are already known (Conway created a game for mathematicians), and in general, for most programs, the concepts are not known.
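(To make that point concrete, here is a minimal sketch in Python of a single Game of Life generation, assuming a finite grid where off-grid cells count as dead; every one of those choices, grid shape, neighbourhood, boundary rule, survive/birth rule, has to be pinned down explicitly somewhere, whether in code or in a sufficiently precise prompt.)

# One Game of Life generation on a fixed-size grid, treating off-grid cells as dead.
def step(grid: list[list[int]]) -> list[list[int]]:
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the 8 surrounding (Moore) neighbours that lie inside the grid.
            neighbours = sum(
                grid[r + dr][c + dc]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if (dr or dc) and 0 <= r + dr < rows and 0 <= c + dc < cols
            )
            # A live cell survives with 2 or 3 neighbours; a dead cell is born with exactly 3.
            if grid[r][c]:
                nxt[r][c] = 1 if neighbours in (2, 3) else 0
            else:
                nxt[r][c] = 1 if neighbours == 3 else 0
    return nxt

Even this toy version quietly makes decisions (a finite board with dead borders instead of a wrapping or infinite one, synchronous updates, no end condition) that a plain-English prompt would also have to spell out to be unambiguous.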
Questions in this lecture are very interesting. Even better than the whole lecture.
More data and transistors will help, but I think that better algorithms will help way more.
We are continually rebuilding the same things and then leaving them unused.
46:36 "No one understands how large language models work"... back in 2008, no one understood how derivatives worked.
It’s very likely that AI startups will get replaced by OpenAI products for a while until the tech saturates.
I think we could do most of the donut demo with what OpenAI announced a few days ago.
Give credit to the people that coded Copilot, ChatGPT, etc.; it's now seamless to use these LLMs, but behind the scenes there are still the coders, the statisticians, the scientists, the engineers optimizing these models. You have to know both: how to code and how to use the models.
Exactly what I think. A SWE needs to write code explicitly and build models to get solutions implicitly. Neither of these tasks seems likely to disappear in the future.
@@LuisFernandoGaido he never said they will. He just said the way software development happens will change drastically. It already has, actually; everyone at my job uses Copilot.
Well, generative models might eventually replace some software engineering interns at companies but as a lead developer / architect I don't see my job endangered yet.
Software development and design is not only about writing code. Writing code is the easy part; understanding the problem, the functional and non-functional requirements, and the operating circumstances, and making design decisions and compromises when needed, is a whole different dimension.
I can already see a lot of startups failing miserably by trying to develop software with a few low cost developers armed with some generative AI tool. This is like "we don't need database experts, we have SQL generators" all over again... 😂
true dude
Doctors also claim they can do more, but AI has already beaten top doctors at diagnosing certain illnesses. I think you'll wake up very soon. No offence, ofc.
I agree with you. It's making coding much easier, but analysis is still a challenge.
@@sgramstrup So would you undergo surgery performed entirely by an AI tomorrow?
These were my thoughts too... I recently started learning full stack. I don't think Dr. Welsh fully accounted for the way LLMs work and how reliant they are on humans. Any reasonable business should feel worried if a "code monkey" were writing random lines with no way to know specifically what was happening. The problems of the future are likely related to security, not just deploying code that works. We need developers with experience and an actual understanding of the code and how it interplays with the system. Other comments above mention programming languages with specific use cases such as memory, NOT necessarily human readability. This reminds me of the futurists who believed teachers and instruction would be outright replaced by multimedia in the 60s and 70s; the Clark and Kozma debates are a famous example. I wonder how many people dreamed of being a teacher and gave it up because of fearmongering? The fact is, context is everything. Humans are making the context, and we will be doing so for a long time. The threat to this is AGI, not the brain-in-a-jar that is generative AI. If I were in computer science I would take what Dr. Welsh says with a grain of salt. Instead, think about what kinds of problems are going to be introduced with AI and understand them as deeply as possible. With every innovation, new problems are born.
I'm putting this here as a note for myself (I'll see if that works).
POINTS REGARDING HIS "IMPOSSIBLE" ALGORITHM (no I don't think he literally means impossible):
1. The AI is not a simple algorithm itself
- The AI cannot be summarized as an algorithm in the way someone would write one... the complexity is fairly expansive... even just to set up the ML models
2. Most of what he is asking would not be difficult for a reasonably simple program
- Getting the title, etc...
3. The "DO NOT use any information about the world" instruction: this would be the default for a program anyway
- When he says DO NOT use any information about the world, it does not mean don't use your predictive abilities; it just means don't mix in information that is not in the transcript
4. Summarizing is hard; a targeted predictive learning model IS probably the best algorithm for this
- The only very difficult piece for a custom built program (including one or more algorithms to make this infinitely repeatable) IS the summarization
So, my conclusion: Part of writing code well will, in the future, include targeted ML*
(*though what I have in mind is not the monolithic, gargantuan systems that OpenAI and Google produce... though those could be a good way to train a targeted ML model)
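For what it's worth, "targeted ML" for the summarization piece doesn't have to mean a gargantuan chatbot. A minimal sketch, assuming the Hugging Face transformers library is available; the checkpoint name and input file are illustrative choices of mine, not anything from the talk:

```python
# Minimal sketch: a small, single-purpose summarization model instead of a general chatbot.
# Assumes the `transformers` package; "facebook/bart-large-cnn" is just one commonly used
# summarization checkpoint, and "episode_transcript.txt" is a made-up input file.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

with open("episode_transcript.txt") as f:
    transcript = f.read()

# Keep the input within the model's context limit; the length bounds here are arbitrary.
summary = summarizer(transcript[:3000], max_length=120, min_length=40, do_sample=False)
print(summary[0]["summary_text"])
```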
This feels like the Theranos equivalent of the future of software, it's all dreamville
Tell me you don't understand what's going on in AI without saying you don't know what's going on in AI.
@@jwesley235 Sure, I know nothing, Jon Snow.
@@jwesley235 How about you explain it to us then?
@@AD-ox4ng How about you do your own research.
That's a pretty funny and bold claim when a lot of AI systems can't count the number of words in a paragraph excerpt correctly.
Can you? All the time? What would it take for you to do it perfectly each time? What would it take for the AI system to do it perfectly every time? Interesting times ahead...
@@ksoss1 As far as I'm aware, chatbots seem to have a problem where, in the name of speed, they skip some of the instructions they're given. It's not too dissimilar to setting a compiler's optimization level too aggressively and ending up with unwanted glitches, like accidental instruction skips in assembly-language programs.
You are referring to plain LLMs; the proposed architecture is LLMs + compute tools (cf. calculators etc.). Just as a normal human can answer 3 × 9 = 27 off the top of their head but would need pencil and paper, or a calculator, to answer what 4567 × 2382 is.
@@juleswombat5309 So what does that make my testing of Bing AI's capabilities, built on top of OpenAI tech, on a pretty simple word-counting task over a pretty short excerpt? Because I'm pretty sure Microsoft's proprietary AI app doesn't fall into the category of being powered by a simple LLM.
@@ZaidMarouf-q9e It means you have not tested against an LLM combined with access to relevant tools.
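For anyone following along, the "LLM + tools" division of labour being described is roughly: exact tasks (counting, arithmetic) go to ordinary code, fuzzy tasks go to the model. A rough sketch in Python; ask_llm is a hypothetical stand-in for whatever model API you use, not a real library call:

```python
# Illustrative sketch of the "LLM + compute tools" idea: exact tasks are handled by
# plain functions, and only fuzzy questions fall through to the language model.
def count_words(text: str) -> int:
    return len(text.split())

def ask_llm(prompt: str) -> str:
    # Placeholder: stand-in for a real model call (OpenAI, a local model, etc.).
    return "(model answer would go here)"

def answer(question: str, excerpt: str = "") -> str:
    # A real agent would let the model pick the tool; a trivial keyword check fakes
    # the routing here, just to show which part does the counting.
    if "how many words" in question.lower():
        return f"The excerpt contains {count_words(excerpt)} words."
    return ask_llm(question)

print(answer("How many words are in this excerpt?", "to be or not to be"))
# -> "The excerpt contains 6 words."
```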
19:26
> "I've been coding whole day", but you threw away 90%
Oh, that's a pretty bold claim, that with ChatGPT you'll get a correct code snippet on the first try, without needing to prompt it with like 20 more messages clarifying things and making sure it doesn't confuse the language, paradigm, etc.
You should not compare the "clear code" of a SWE with GPT tokens, because you are guaranteed to spend many more tokens than the ideal. Considering they are dirt cheap, though, that may not be a problem.
Thought-provoking talk that needs to be taken with a serious amount of critical thinking. I personally have a different view of how programming will evolve, and by no means would I ever agree with putting "The End of Programming" in a title or main message, unless the objective is, in short, clickbaiting people into a sales talk.
Just as photography didn't kill painting, AI-generated images won't kill photography. And if you have to write your instructions in English or whatever other language, and you already expect to follow specific patterns to get the expected results, with some trial and error in between, well, you are basically programming :)
Dr. Welsh raises valid concerns about the evolution of programming and the nature of being a programmer or software engineer, although I beg to differ in the specificities.
All I see is an English-to-target-language compiler where we don't know exactly how the compiler works... it doesn't seem like a good idea.
I think large language models are really cool, but they're too much of a black box. Sure, there are plenty of use cases, but as far as entirely replacing code goes, it would need to be customisable enough and consistent in its functionality. Not sure how that would be possible!
I mean we've already almost got there. Won't be long. Context window is huge now
You are restricting your options to computers as we know them, operating on limited versions of ones and zeroes. We cannot have true AI until we have bio-chips that operate like real brains.
Most of tech is already a black box. I write mostly C++ and can't even begin to fathom how modern optimizing compilers work (and I never will). Heck, even the V8 runtime is almost arcane to most people. Only a very few exceptional human beings can understand and work on these systems; everyone else can start looking for toilet-cleaning jobs.
I'm wondering if Fixie (35:00) hasn't already become obsolete with OpenAI's announcement on November 7th... lol
Exactly
AI.JSX, who needs to learn in the era of AI lol
If AI can write programs, it will be able to replace a lot of people, and not just in tech but in many fields. Then we'll have more efficient services, but with so many people unemployed, who would pay for those services?
This is a very interesting question. Take it to the extreme: LLMs are able to take over any job. What makes life worthwhile? Can ChatGPT enjoy the first sun ray that warms up its AI chip, does it enjoy the tranquility of nature, can it enjoy the soft sea breeze, can it get excited about new discoveries? What makes the heart of ChatGPT tick? Does it have a heart? Sometimes we forget that we are multidimensional creatures. Maybe we have to come up with a completely new model for society. We have to redefine ourselves.
@@compateur Dude, seriously, think about it! One of my friends works as a consultant and another as an accountant at a top firm. I've personally looked at the kind of work they do, and at the end of the day it's the most brain-numbing, manual, repetitive work I've ever seen... to put it bluntly, a high schooler could do their jobs well enough.
What will happen to these people then?
Why didn't the "lecture" start with "today we're gonna have my buddy, who has an AI-for-programmers startup"? It would have saved me an hour of this infomercial.
Love this video, he's thinking ahead of the curve.
Good presentation. I particularly like when he used Rust as an example of bad language design.
We must move forward with the advanced computational and reasoning capabilities these models afford us, but we cannot move forward with black-box models that have no formal method of verification, no "instruction manual" so to speak. These models should be considered idle malware. I mean, imagine these advanced models, and models like them, in our appliances, our aircraft, and our ground transportation systems: they behave properly 99.99 percent of the time, yet they cannot actually be verified correct...
Sir... This is a Dr. Donut.
> It's 2023, and people are still coding in C -- that should be a federal crime
Not because it's their language of choice, though. Think embedded systems: even if you want to use Rust or any other language with training wheels on it (metaphorically speaking), the platform you're developing for may not be targeted by it. Or worse, maybe your toolchain needs to meet certain criteria to pass a regulatory body of sorts.
Disclaimer: I'm not writing this out of confirmation bias or as an offended C programmer (I work with Java). Please don't get me wrong: I understand that Dr. Welsh didn't intend to oversimplify things, though he generalizes a bit too much imho. It puts a whole industry in a really bad light, and it's like saying: "if using C is bad because badly behaving C programs have killed people, then, by this logic, we shouldn't be riding trains or going by car anymore".
I have tried it for a few days, and a job that would normally take 2-3 days came down to 4 hours for the first-pass code. Very nice.
Thank you for the information; it's very useful.
"react for building llm applications"
I cackled for about a minute
i NEED a timestamp please
Guy introducing him: "Hey kids, this guy is going to make sure that the crippling debt that you and your parents took on to send you to college was all for absolutely nothing, thanks to his AI"
Programming is challenging, beautiful, fun, and makes you think like a machine.
That's great as a hobby, like fishing. But if your boss can't afford to employ you, because AI tools mean he only needs to hire a few staff, then you won't make a living from coding. Adapt to exploiting these tools if you still want to make a living in the computer industry.
This is why I minored in philosophy. Computer science is applied philosophy. The real ability is thinking logically and understanding the human mind and what it is you want to create. Thinking clearly. My personal opinion: when you create something and don't know why it does what it does, yet it does so consistently, it's because you've stumbled upon an equation of nature, some fundamental way nature works; in this case, human nature. Computer science has always been a funny term. How can there be a science of the computer, which is not a natural phenomenon? It's really the science of computation, of how to calculate. I find it fascinating that giving ChatGPT a personality, like you would an actor, and shaping a narrative actually works. But we do this as people every day, moving through the different aspects of ourselves depending on the circumstance. So excited for the future of the field.
2.5 years professional software dev here, currently developing Trichotillomania.
19:00 Lines of code are a vanity metric that doesn't translate to value... this guy is definitely in management.
It's nice of David to let the students have a taste of silicon valley's sensationalism and the outlandish "predictions" of where the future is headed. "This is the only way everyone will ever interact with computers in the future." Even if that turns out to be true, it is soooo far away from the real world right now that it doesn't take a real computer scientist to realize this is delusional. That's not even to mention the question of whether or not we *should* be heading in that direction as a society. Not much more than silicon valley's way of raising funds for more products/services, the vast majority of which fade away after some time.
Feel the same. I just think AI is dumb and will stay dumb for at least 100 years, maybe longer; not in my lifetime, maybe not even before humans go extinct, will AI become that smart. Maybe only advanced aliens could actually build that level of AI.
5 years down the line your comment will seem silly!
@@hamzamalik9705 If in 5 years AI is so powerful that my comment seems silly, I'd actually be happy with that. I do hope tech advances fast, but at the same time I'm very pessimistic about the speed of technological development.
What floored me was his claim that no one could write an algorithm in a programming language that is equivalent to his prompt string.
For real. I am on my 2nd big tech job since the rise of ChatGPT, and of all my team members I am the only person who uses it.
In production, I have seen ML models in:
- adtech, for improving ad suggestions. They had been there for more than the last 6 years, long before the "AI will do everything soon" hype train. They were, as I said, only improvements on top of a non-ML ad-rotation core and didn't generate much money for the company at all.
- security SIEM systems used for threat detection on users' laptops, though in reality they did more harm than good, like banning our git-lfs executables, lol.
- a LLaMA model trained on a company's internal domain (code, wiki, etc.), but its usefulness was a joke, to be honest.
I also saw the rise of an infinite number of startups offering AI solutions for everything, after the experts started promoting the "everything as a model" idea. They were trying to solve with ML problems that never required an ML solution. It looked like every startup that used to be a crypto startup is now an AI startup, or has something from the AI word cloud in its name.
I see all the experts predicting the obsolescence of software development as a job in 5-10 years, but I see close to no signs of GPT models in production, let alone profit from their usage. Maybe they are widely used in other tech domains? Maybe in 5 years the situation will change drastically? Well, maybe, who knows. But right now it looks to me like nothing more than another race for venture capital.
P.S.: oh yeah, ChatGPT-4 is insanely good at catching missing Lisp parentheses, btw.
Best talk in CS so far in 2023!
Two things, speaking from 35 years of banking-software programming: 1) code reviewers are only as good as their expertise (measured in years!) in the language and the business functionality. If AI removes all opportunity to gain experience in the language, where does this expertise come from? 2) The business-knowledge side of the house now takes on the entire burden of supplying the required specifications to the AI, an enormous effort. How long before we try to automate that? An infinite regress is arising here....
Dry audience; really enjoyed the talk and the gags.