I've been a developer for decades, and while I agree with you in my soul, I think the reality is that two years from now every developer will be competing against people who are good at using these tools, and the corporate machine will simply expect a productivity level that can only be achieved by being an AI supervisor.
I agree that speed will be highly prioritised above all else, but if there is something the AI can't do (at least for a period of time) that requires developers to work with the underlying concepts still (e.g. AI can do 90% of tasks with JS, but there are 10% of tasks that still require a human who is knowledgeable in JS), then I think the need for someone who does understand the underlying concepts will be forced upon them. If most devs are largely depending on AI, then very few will have practiced the skills required to be knowledgeable in these areas. Maybe this means that in the future we have a larger number of devs who work at a higher level directing AI and never really interact with the underlying concepts (e.g. JavaScript or whatever other language), and a smaller number of devs who can still do the tasks that require deeper knowledge.
@@JoshuaMorony This is kind of a paradox. If AI can do 90% of your work, but there are still 10% left which can only be done by humans, every developer is still automatically forced to be knowledgeable in those 10% to handle them. That's why there is no reason not to let the AI do, or help you with, the 90% of your work it can easily do. And the 10% the AI isn't able to do, you have to understand anyway in order to handle them, until the AI is able to do those too. That's why there won't be a knowledge gap, except for the part the AI can easily handle anyway. So I think the dependence on AI will grow more equally among developers, instead of there being a small group doing the 10% all the others can't do.
@@JoshuaMorony That's very much like it is today. We have a large number of high-level devs who work with high-level languages, and a small number of devs with deeper, more specialized knowledge. I don't have to know how the C# code I write is parsed and compiled to MSIL, and then how it is ultimately compiled to native code, to be a good C# developer. Coding with AI is just a level up from this. I'm leveraging AI tools to help me code, because I know it will keep me marketable and productive. I won't be replaced by AI (at least not yet), but I could easily be replaced by a developer who uses AI, if I don't use it.
@@pvanukoff Fair points by both you and Hydro - this is all at least partially me thinking out loud, and I definitely don't want to claim the way I'm thinking is definitely right. I think there are some differences in the move to AI though vs other progressions in the past (e.g. higher level languages, the calculator, the typewriter, etc.). For example, in your C# vs MSIL vs native code example, you can do everything you need at the C# layer, and the MSIL layer is not your concern. With AI, it will kind of be like AI is this layer above C#, but as it stands today you can't do everything you need on the AI layer, so you still need to be reaching down to the C# layer (until AI is good enough at C# itself). My gut feeling is that for the parts where we still need to know the layer beneath, for people heavily relying on AI, that 10% will likely just be brute forced or ignored wherever possible by a lot of people (i.e. with a get-it-done-with-duct-tape sort of mentality), because having little need to work at that layer generally will make that work harder. But I also acknowledge that maybe it won't be that way - maybe people do just focus on learning what they need to know for the parts AI can't do yet, and it all works out fine.
As someone interested in programming with no background in computer science: these tools can only help so much, and they lack the big-picture thinking we have. I think the video demonstrates that if you know what you're doing it's a great tool, but without a fundamental understanding of how to achieve the goal, the tool is useless.
I've found GPT-4 to be excellent at explaining programming, not just writing code. It can explain a lot of concepts better than the docs for most projects. It may be worth asking it questions instead of asking it to write code. This is especially helpful when you're writing code in a language that you don't use often: your programmer mind knows what you want to do, but perhaps not the right syntax. It can remind you of the syntax and also teach you how to use it. It can also be very useful for one-off scripts and things you wouldn't normally invest time in programming. If it can write the code for you quickly and also teach you something along the way, you're still building your own neural networks and getting things done at the same time.
It's true, but a lot of people, me included, learn and memorise by doing. Same for coding: you learn by doing, by your mistakes, by debugging and so on. This is where you build real understanding. You can read 10 documentations per day if you want, but if you build nothing with them then you will forget them in a few days, and that will be pointless. The same applies with GPT-3 and 4: if you rely on it to write code and explain key concepts, your brain will start to avoid memorising concepts, since it doesn't need to, and you will end up staying at the same stage. However, you could use GPT-3 and 4 as a personal mentor, asking questions while you try to do it yourself like a good student would, and improve yourself through a more involved kind of shadow programming. We still need to be aware of how to use it. It's a skill, like searching on Google was a skill before.
@@xavierpierre5586 You don't need to memorise anymore. You have an external memory. Once there are neural links, your memory will be the ability to access the body of work on the internet. This is why school is so bad: you don't need to memorise junk that you can easily access. In the future, you won't need to learn other languages; it will all be translated in real time using APIs.
@@defaultdefault812 That's still memorising, kind of. For your skills you definitely need to memorise, and that's why we call it a skill. Even using ChatGPT requires some level of "skill" - you can't ask ChatGPT how to use ChatGPT, because that doesn't make sense. So it doesn't mean you no longer need to memorise. Unless AI replaces humanity and human culture is destroyed completely, as long as there are humans, memorisation will always have a place.
100% agree. I find it helps me learn concepts much more easily and points me to the right answer more quickly than searching through the opinions on Stack Overflow. Not to mention, coding is more enjoyable because I spend more time accomplishing the objective rather than digging through the internet for answers.
I've been mainly just using it to learn. I've always hated docs and much preferred just asking my lecturers a direct question, and that's how I like to use GPT-4: it can point me in the right direction of a topic to look more into. Really great for beginners who feel lost and don't know what they should be learning.
Sooner or later people will be bored of AI and will go back to pen and paper - and to actually meeting in person, since video calls can be faked using deepfakes.
Yup, I'm an undergrad. Learned about the event loop, examples of callbacks, promises, async-await and many confusing topics so clearly. It's fun conversing with a personal teacher and asking the doubts I would be too shy to ask in public.
AI is not a software engineer, it's a "coder". It can do the tasks that usually junior engineers do in a company: supervised, instructed code writing. Not software design. By using AI you free yourself from stuff like writing boilerplate code, reading framework documentation, googling function specifications and looking for solutions that are already well known. This frees up a lot of time to focus on the actual software design task. In the end it's up to you to decide whether you learn something from the AI-generated code or just take it as given.
The problem being that this will, in time, greatly reduce the number of entry-level "coder" jobs available and slowly drain the pool of new talent coming in. And make no mistake, it will eventually reach the upper levels of the expertise chain too.
@@antedeguemon1194 It will simply raise the bar, and the usual "cheap coders", who think they are software engineers, will learn that just attending classes at university is not enough. So yes, people will need to put way more into it. And guess what: I am a software engineer and I learned 99% by myself. I learned the stuff that distinguishes me from the 99% of people who can hack together some Python code and write some web apps. AI isn't even close to doing my job. They might be out of a job, but I will not. And neither will the people I work with. At the point we are replaced by AI, humanity is fucked anyway. But that is probably gonna take a few more decades, lol.
@@timokreuzer1820 I mean, you're not wrong, but it is worrysome in general that entry level jobs are going away. Great thing it won't affect you in the short term! My point still stands, it's going to do a number on the entry levels, not only for IT and dev but for a lot other things, and I don't see that as an objectively good thing. And it will, in fact, going to affect you a lot earlier than you expect, unless you're actual super rich, I doubt you will be be insulated from the general mess just because your job will last longer. And, note, I'm not saying you or anyone should "feel bad" or anything, and I also understand there's no stopping this train. But at the very minimum I'd hope people at least acknowledge the issue, and perhaps discuss and vote for policies that could help mitigate this, instead of having an attitude that amounts to "not my problem"
@@antedeguemon1194 The people who can quickly learn how to use the latest technology to get things done will be fine. Those who can't might have a hard time. Fair or not, it is what it is. That's how I became a programmer in a few years, and that's why I'm learning to use AI now.
Within about 10 years we will be at a point where writing in a high-level language like C or JavaScript or PHP will be akin to writing assembly code in 2023. Something like a GPT-5 will definitely be able to consider an entire codebase - how it functions and interacts with itself and with outside APIs - and develop, maintain, and patch it almost autonomously from English-language instructions from the "developer".
I agree this is a likely endgame - my sense is that we will either discover some sort of hard limit of the LLM approach for coding, or it will continue to evolve quite quickly and mostly everyone will code in their spoken language (a kind of useless guess, but I think there is a decent chance of this happening on a much shorter time frame, maybe even a couple of years, just based on how quickly things have been improving). But there is this sort of weird in between part with AI. When C became available, you can just use C for everything, so there is no need to understand the layer underneath at all. With AI assisted coding, we still need to understand that underneath layer to some degree until AI can *completely* do everything we ask of it in that underneath layer. That's where I think over relying on AI might lead to problems with devs not understanding things they still need to understand.
@@JoshuaMorony There certainly is great benefit in understanding the layer below C, though a general conceptual understanding is enough. You are right that you don't need to be able to write assembler, but for any serious C programmer it is essential to understand the machine model C builds on, which corresponds tightly with how the CPU and memory work. C appears deterministic and linear on the surface, but because modern CPUs are doing many things in parallel, there is a risk of nasty behaviour or performance surprises in how your code works if you lack an elementary understanding of that.
Do you know how absolutely complex large software systems are? These are not LeetCode-style problems. What you're essentially saying is: GPT-5 is going to be AGI, which is, frankly, an insanely stupid thing to say (even renowned AI researchers rarely make such assumptions).
@@walidchtioui9328 I do. I do realize how complex large software systems are. Even if Moore's law is dead, it seems to be living on in AI and LLM development. I don't think it will be GPT-5, but maybe a few iterations down the line. If a sufficiently advanced LLM had access to some specialized tools to actually run and lint the code / test stuff, it could very easily manage a pretty robust codebase. But you're right, you will still need really experienced software engineers and architects to actually review and optimize everything, for now.
How the hell can you state that "GPT-5 will definitely do" this or that? lol, you're delusional. You remind me of those who said "GPT-4 will have 100 trillion parameters" :D Nobody knows what the future holds, or how well transformers will handle inference. Not to mention inference costs could surpass the cost of having humans do the work, for example. There are too many variables to take into account; it makes absolutely no sense to say "definitely" in the context of AI, and it proves you know very little.
"If you think without writing, you only think your thinking"-Leslie Lamport This video hits the nail on the head for me. I don't use GPT for blogging either; my goal when writing isn't to create content and move on, it is to see what my own thoughts are. It also needs to be far, far more trustworthy than it currently is. After 30+ years, I code rapidly with an error rate close to zero. Moreover, in my projects my code is usually manipulating disparate other parts of my own code, or calling a library. I write a lot of financial apps, so my code needs to be correct. Rarely do I write code that does something to the values themselves, like a pure function. The AI doesn't know my codebase and how I designed it to work or be used, reassembled to implement new features. Whenever AI has done something for me, I have been left feeling in the dark. There's a psychological bias called The Illusion of Explanatory Depth. It's when we mistake what's familiar to us, or what we've read, with what we truly understand, and that this illusion is only broken when we try to explain it ourselves in full depth. Personally, I find it takes longer to understand code I did not write than it takes to just write it myself, which is why I often refactor someone else's code in order to grok it. Our brains need thorough concrete knowledge in order to recombine the raw material and make new ideas. AI writing code reminds me of very large frameworks. The alluring demo always shows how easy it is to make some part of an app, I fall for it, and then it turns out my needs are just enough outside the "norms" that the framework has painted me in a corner, led me astray, or hidden things from me, or made me dumb and helpless when I'm stuck, like Microsoft Webforms hid the simple, elegant reality of HTTP. This all reminds me of the bizarreness of programmer interviews. People rarely test me on the things that matter, which are at the macro level and are rooted in good design, good choices, things that are verging on issues of "taste" and art. I suspect a Gartner peak of inflation expectations followed by a sobering trough of disillusionment before we really understand how best to work with it.
“It will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.” That was Plato explaining why writing is bad for you. You can walk everywhere if you want to, but I'm taking the bus.
Seriously. I've been using GPT to help with my job for two days now, and already I'm like... if you're not using it, you might as well be choosing to dig a ditch with a spoon. If that makes you feel better about yourself, good for you I guess, but if you actually want to be maximally effective... I still have to understand what it's telling me and know how to ask for what I want... It's no substitute for actually having knowledge and skill, at least not yet... But the amount of time it saves is staggering.
I've been coding for more than 10 years now and have been using ChatGPT since it came out, and I think my coding has improved a lot since then. I like to read ChatGPT's explanation of the code, which has improved my understanding of code in general and is probably a good way for new developers to learn to code. Also, the solutions I get from the AI are often different from what I would have done, and after adapting them I keep using them in the future. So I have to disagree that it hinders your improvement. Maybe it will for 10x developers, but not for 99% of the rest.
I've found the opposite: after about 2 weeks of working with it, it takes me twice as long - fighting with GPT-4, fixing errors, cleaning up output, punching up output - to do what would have taken half the time without ChatGPT.
It's like an easier-to-use Stack Overflow. Even though what you need is likely there, you have to understand the concepts and vocabulary in order to express the desired outcome in sufficient detail. You also have to know how to tweak the result to your use case. At that point, you're still an engineer, just with a more sophisticated tool, allowing you to solve much larger problems. Anything short of full stack will soon be a joke.
So, so vastly better than SO. I ask GPT "why am I getting this error?" and it explains clearly what the error means and gives me useful suggestions. SO gives me 10 answers all marked as duplicates of a thread that explains nothing and is irrelevant to my situation.
Stack Overflow is worse than Reddit lmao, one of the most toxic sites I've ever gazed upon - it's Kiwi Farms levels of bad. I've never had a single question I searched get answered properly; it's flame wars, sass, or people making up their own unrelated question and then answering that.
As a professional developer, 99% of the code I write consists of intuitively obvious solutions made difficult by the fact that I need to make calls to library functions I'm not familiar with. The idea that every problem is a learning experience is ridiculous. Especially when something is being written for a client, it's very, very rare to encounter a genuinely novel problem. I've rarely ever felt like I've improved as a programmer by doing the sorts of tasks I've actually been paid to do. Using AI to solve programming challenges, which are *designed* to be learned from, is dumb because you're using the problem wrong. Believing typical coding is like one of those problems is even dumber, though.
Amen. Coding is essentially a made-up role in society that we could do without, because it's honestly not a great human experience. The sooner we can get rid of having to do it ourselves the better, so we can get on with learning more useful, intuitive problem-solving skillsets instead of tedious, repetitive labour. I see software development as basically what digging with a shovel was before diggers existed. Far too many people are losing sight of the zoomed-out picture of creatively building stuff, which is the end goal, because their life is so zoomed into the coding aspect that they had to focus on for most of their life. I've personally never seen the fun in coding. Building something with code is fun, but the coding part is zero fun. Much like how I imagine shovelling 2 tons of dirt to build a house foundation was no fun.
Great points! I was using Copilot for a couple of months, and as much as I liked the ability to just TAB in boilerplate code, it made me a lazy developer. At some point, I realized I had no idea what I was writing, why, or how these code blocks fit together. After disabling Copilot I am writing much more boilerplate code, but at the same time I am also much more secure in my codebase and can take full responsibility for it.
This is pretty much what I envisioned for myself, but it's great hearing someone's actual experience. I had similar experiences using (non-AI) extensions/generators, where I felt like I just wasn't grokking the big picture as well. I also know that some people have a totally different experience with AI, but this fits, I think, with my style.
This is just growing pains. It will require a whole new way of thinking/working. Develop alongside AI: make it question your thinking, analyze for security risks, optimize code, explain code, explain different ways of doing things, summarize team members' commits to fully understand their impact, etc. Hitting Ctrl+Enter to add the full suggestion from Copilot is probably not the optimal way. Considering the free market, it will be a rough road ahead for those avoiding this new alien assistant.
@@MrTPGuitar I agree, but Copilot (also being paid now) is lacking a lot too, and many times it introduces extremely shitty code that legitimately pisses me off. I guess, as ThePrimeagen said: build your logic and let Copilot build upon it. I think that is, for now at least, the only use case.
Interesting, I've been shocked at how bad it is. I've spent a lot of time just telling it its code is wrong and ending up in an almost infinite loop of "I apologise for my earlier confusion...". It's great for things that involve words and great for ideas, but for particular coding problems I find it just as time-consuming as looking for needles in hayoverflow. Concepts are a good thing to ask it for, and examples, but I don't get any of the magic results others do.
As I mentioned I'm generally avoiding it for coding, but - although I've seen some impressive GPT coding examples - I think where most people are seeing success is more so with Copilot, e.g. starting to type out the function name and having Copilot auto-complete the function for you. That's the sort of thing I am most concerned about incorporating into my workflow.
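To illustrate the kind of flow I mean, here's a made-up TypeScript sketch (not output from any real Copilot session): you type the name and signature, and the tool offers to fill in a plausible body.

// You type this much...
function debounce(fn: (...args: unknown[]) => void, ms: number) {
  // ...and the assistant completes something like the following:
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: unknown[]) => {
    clearTimeout(timer); // restart the countdown on every call
    timer = setTimeout(() => fn(...args), ms); // only the last call within `ms` runs
  };
}

That's the convenience, and also the risk: the body appears without you ever having reasoned through it.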
Toxaq I tried it and it gave me the perfect result all three times. I think you are writing the prompt incorrectly; if you write better prompts, the AI will understand better.
I have 30 years of programming experience, so I am going to program with AI now. I am very bored of the kinds of programming I have seen happen over and over again, and very happy that I no longer need to spend hours on boilerplate code, so I can focus on the part of the problem I like to solve rather than all the parts of the code. That being said, it will be interesting to see how the next generation of programmers will relate to the code. Will it just be a jungle for the vast majority of them, undecipherable without the help of AI, or will they find their way in even if they hardly ever write their own code?
Same here. Also, the teams and companies building the libraries and languages we use are constantly adding new features that can make coding easier and more efficient. But given the constraints of daily life, many of us cannot devote the time to keep up with all of these changes in addition to the changes in the business processes we support. So, if an AI augmentation system can help us create better, more efficient code quickly, we should use it.
5:43 I get your analogy perfectly! I have consistently avoided GPS all my life, and I now understand how everything is interconnected, since I have plotted it all out in my brain during my many years of travel. Roads have crossed, I sometimes borrow a piece from one trip for another, and I now intimately "feel" this web of roads and the sites it connects in my own internal "GPS".
Good point. But the reality is that humans naturally gravitate to what's simpler and requires less energy to get their work done, and that's a good thing. AI tools provide just that and are constantly improving. Developers can try to avoid them for a little longer, but eventually most devs, by a large margin, will be very dependent on AI tools to get their work done.
I agree in general, but I think this only works (well) if AI can do everything / there is no need to understand the underlying concepts anymore. E.g. people give the typewriter -> computer example; that works because there is never any need to use a typewriter anymore. Same with machine code/assembly: if you're a JS dev there is no need for you to ever touch machine code. But if you're a JS dev and AI can do 90% of your coding for you, I'm suggesting that this case is dangerous, because I think people are going to struggle to be effective with that required 10% if AI is doing most of the work most of the time, because they aren't building the required skills. If AI can do all of the JS (or replace whatever language you like) work, then I think it becomes mostly unnecessary to ever have to know what the JS is doing.
I suppose, in some sense, it is similar to how these days you would (and usually should) use a library to sort an array, rather than implement the sort yourself - but it is still important and useful to know how sorting algorithms are implemented and what the differences between them are. Especially since they are often not 'intrinsically' better or worse compared to one another, but rather more or less optimal, given a particular set of circumstances and/or priorities (speed vs. memory usage, etc.).
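To make that concrete, here's a throwaway TypeScript sketch (my own illustration) of the tradeoff: the library sort is the right default, but knowing how an alternative behaves is what lets you choose it when the circumstances favour it.

const values = [5, 1, 4, 2, 3];

// The everyday choice: delegate to the built-in sort.
const sorted = [...values].sort((a, b) => a - b);

// Insertion sort: O(n^2) in the worst case, but close to O(n) on
// nearly-sorted input, stable, and simple - occasionally the better
// fit despite the worse general bound.
function insertionSort(xs: number[]): number[] {
  const a = [...xs];
  for (let i = 1; i < a.length; i++) {
    const key = a[i];
    let j = i - 1;
    while (j >= 0 && a[j] > key) {
      a[j + 1] = a[j]; // shift larger elements one slot right
      j -= 1;
    }
    a[j + 1] = key; // drop the key into its place
  }
  return a;
}

console.log(sorted, insertionSort(values)); // both log [1, 2, 3, 4, 5]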
A friend of mine came to me seeking advice on how to code. I gave them the advice, but also stressed the fact that AI can write code but it won't teach you problem solving. That is why I turn it off when learning new concepts and turn it on when writing boilerplate code.
I use it with Gherkin, plus lots of unit testing and fitness functions. I find I can spend more time on code quality, and my creativity is unleashed, as I need to pay less attention to boilerplate and syntax and more to systems thinking.
Not only that. It is also very likely that one will unlearn learning and thinking. After all, they say "use it or lose it"! Outsourcing thinking will lead to a decreasing IQ.
I totally see your point for experienced developers, though for beginners I'm not so sure. By the time the beginner starts to become more advanced, I feel AI will have developed more, to the point where it truly doesn't matter, making those skills that were learned useless...
Yeah it's an interesting situation, I can see that playing out - the hard thing is betting on 1) Will AI be able to do all of the work at some point? or will there end up being some plateau with the LLM/GPT approach? 2) If AI will be good enough, will it be in 1-2 years (prob wouldn't matter in this case) or in 10 years (then it would because that's a big chunk of a career) or more?
It's important to realize that LLMs are adding machines. They take the past 10 years of internet data and regurgitate it using weights and biases. It's great for synthesizing this heap of knowledge and serving it to you clearly and concisely (albeit with a varying degree of incorrectness). It's not intelligent. It's an advanced adding machine / search engine.
Very thought provoking. This makes me wonder if people who use these tools heavily will actually be less productive. Maybe they’ll be quick when starting a greenfield project but eventually steer themselves into a mess they can’t get out of. Or they’ll become less capable of critical thought and dealing with rare tools and scenarios.
I'm using GPT purely to cut through the bullshit when learning or validating my questions. It is much easier to ask it "show me an example of using Observables with an RxJS WebSocket subject to pull in data and filter it with RxJS operators". It will give me the example I ask for, and I save time on looking up a few imports and figuring out how to pull them together. Then I go and read the docs and experiment, having already seen an example that I can relate to. I will never use it to produce code for my job; it helps me with learning or prototyping faster. It will never replace my understanding, because if I don't understand the solution then I can't fix it or be sure we are providing what our clients paid us for.
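For reference, the kind of example that prompt gets back looks roughly like this (a sketch only - the endpoint and message shape are made up):

import { webSocket } from 'rxjs/webSocket';
import { filter, map } from 'rxjs/operators';

// Hypothetical message shape and endpoint, purely for illustration.
interface PriceTick {
  symbol: string;
  price: number;
}

// A WebSocketSubject is an Observable of incoming messages that is
// also an Observer you can push outgoing messages into.
const socket$ = webSocket<PriceTick>('wss://example.com/prices');

socket$
  .pipe(
    filter((tick) => tick.symbol === 'AAPL'), // keep only the ticks we care about
    map((tick) => tick.price) // project down to the value we need
  )
  .subscribe((price) => console.log('AAPL price:', price));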
As a self-learner with a moderate amount of skill/experience, I tend to use the AI as though it were a tutor. I do my own work and ask it for tips on the specific errors I encounter. Using it to build things from scratch when you have little experience is more or less worthless. It MAY build something that works, but the user needs the insight to steer its direction in design and functionality, as well as the ability to pick out and troubleshoot errors, or they'll get nowhere.
I've used it to explain somebody else's uncommented and convoluted code and it did a great job. I've also found it convenient for code optimization and even adding comments
Good comparison to GPS. If you haven't seen them, check out the "GPS is bad for your brain" articles. I find programming to be super boring, but the one thing I like about it is using my brain to solve the problem at hand. If I just farm that out to AI, I'm losing the very thing I like about programming. I'm very much into healthy aging and have been for quite some time. At the top of my list of things I don't want as I age is dementia/Alzheimer's. I actually consider programming part of my strategy to avoid that (along with diet, exercise and sleep). I'm OK with getting rid of some boilerplate code. I'm really hesitant to just let something else do all the work for me.
You seem like a proper software ENGINEER, where a lot of others are software developers and use a lot of stackoverflow copy/paste anyway. So using the AI is no different for them because they don't code to learn. And tbh, most coding is tedious and in the realm of shitty, laborious, chore-like work that we have to do instead of being serially creative and entrepreneurial continuously. AI will streamline the human experience to do more of the things we love which is to play, create and experience.
Really liked the reasoning behind this. As someone who's far newer to coding: since a lot of the "co-pilot" aspect comes from the LLM being trained on what is basically previous coders' work, wouldn't that teach or transfer some of their skills as you work alongside the LLM?
Last month I learned a lot about programming thanks to ChatGPT. When I ask it to do something I already know how to do, the AI does it in a way that I didn't know, forcing me to learn it and step out of my comfort zone. Then I try something bigger, and the AI code gets some little mistakes, forcing me to learn more about the generated code so that I can fix it. I'm a better programmer now because of this, and I can use these new ways of coding without any help from forums or AI, because the AI forced me to learn. AI use is good or bad depending on how you use it.
Totally agree. That's why I'm using it mostly to help me with syntax, as I'm horrible at it... but I'm forcing myself to memorize it, since I forbid myself from just copying and pasting.
I like your comparison to GPS navigation. I've been using Tabnine for a while now, and GPS navigation came to mind too, just a little differently than for you. Every time I drive by sat nav, I feel like I have no idea which way I'm going; I can't learn the route. If I wanted to drive the same longer route the next day without navigation, I'd probably be completely screwed. And I feel the same way about using AI completion for programming. It adds a bunch of stuff that I would otherwise write by hand, stuff I've written a thousand times before... but then I just have trouble navigating my own project. I can't remember where things are when I look at it the next day. Even though I structure my code in a similar way, I can't remember where things are, how they are named, etc. That didn't happen to me when I was writing everything by hand.
Thanks for sharing this perspective - this is one of the things I feel like would happen to me as well, I've found this in the past when I've relied more heavily on things like generators - I just have a tougher time making sense of where everything is and how it connects. I feel like that would be compounded by Copilot since it's basically generators taken to the next couple of levels.
We usually put in more effort into resolving blockers. Using AI, the definition of blockers changes some, and a subset of problems are no longer problems... You'll still have to learn to better handle the things that actually become issues. But, to each their own.
I mostly agree here. I've developed a few AI side projects and something interesting I've noticed is that, despite my constant exposure to AI, I barely ever get the chance to use it. Another interesting thing I've noticed is that while it *seems* like it would be tempting to use AI for complex problems, I don't often find myself tempted to use it at all. As I write code, I'm always thinking of what the code I'm writing will force me to do down the line, and it's really difficult to predict that when I'm not the one writing the code, exponentially more so the more important and/or more complex the problem is. I'm most tempted to use it (and do use it) in situations where I basically already have the code I want in my head, and just want the AI to do the actual writing.
What's the difference of using an AI to write code to importing library x? Most people will simply assume that library x just works, they never look at the code, they never debug it, and they will never learn how it works internally. Following your reasoning, people should stop using libraries and implement std::string themselves.
What a terrible take. It’s more than obvious. Because you can validate the library works and see exactly what happens. There is no hidden black box with it.
@@TheStickofWar Huh? If you use a library, it's more of a black box than the code from an AI. The library is there and you either understand it or not. Nobody will explain it to you. Nobody will change it for you to suit your needs better. The AI will explain the code to you, if you ask, and even adjust it to whatever you desire. The only black box is the AI itself, which is probably less of a black box than the people who wrote the library. Or do you know what hidden motivation some guy (maybe with an @nsa.gov email) might have had when "improving" some encryption code in a way that you don't understand? Your take is also extremely ignorant of factual reality. Nobody validates that the code from a library works. People will trust everything that works. Most people will never look at a single line of the library, while they are forced to at least copy/paste the AI code into a file and compile it. And trusting code from anonymous people on the internet is worse than trusting code from an AI.
AI + Low Skills = Improved skills through learning leading to higher skills AI + High Skills = Substantially improved skills through learning leading to "Taking the Lead" Incorporating AI and the tools we use to code/analyse will be a way to improve our skills overall I think. You hit the nail on the head with do not rely totally on AI as skills will erode over time.
20+ year developer here. Developing programs is simply too complex to understand for the majority of people. It's a job that requires a tremendous amount of focus and creativity, AND NOT something you can do with AI. I do believe that 90% of "so-called" developers will find themselves in a very difficult spot, but they were merely copy-paste people who produced more errors than GPT-n will, so yes... in the day-to-day grind I can only say WATCH OUT for the future, as it will peel off your lazy approach to work.
The only reason we have this advantage is that the context window of current models is smaller than the average codebase, including all its dependencies. There's no apparent fundamental limitation here; it's just a matter of hardware improvements. If that's the only advantage we have over AI, then that advantage is not going to last more than a couple of years.
In the near term, these tools will be just like calculators. Are we creating value by understanding, instead of solving? More time should be spent on solving needs, instead of thinking about, is this code the best it could be. Better isn't always better... Beta vs VHS, ATM vs Ethernet... the list goes on. In the end what works? What's available? What's cheaper?
Haven't people had this same argument every time a programming language became more abstract? You lose some control when abstraction comes in, but at the end of the day you have to understand the underlying principles and structures of the system you're using. Probably these LLMs will be used for debugging and code-testing purposes, and for the sort of thing you'd just pull from Stack Overflow anyway. You don't ask it to write random code; you feed it your code to provide context, but you still need to specify things very well to get working code. Of course this has to happen locally for any proprietary stuff, but big houses can afford that. If it makes developing code faster and easier, there's some tipping point somewhere for how many mistakes it can make and still be considered economically viable. Correct me if I'm wrong.
The key distinction for me is that it isn't just this higher level of abstraction (or at least it's not a complete one). If a higher level language is introduced, generally you can do all of the tasks you need to do with it (as you mention perhaps with some lesser level of control). But since AI can't do all of the coding for you (at least not yet) then it's kind of like a higher level language being introduced, but for X% of tasks you still need to use the lower level language. I'm positing that if you are mostly working at that higher level, but are then forced to understand what is going on at the lower level, most people will probably have a hard time because they won't have the knowledge/context required for the lower level.
This reminds me of a Matisyahu song in which he sings "I can tell you where to dig and what to dig for, but the digging you must do yourself" in this case the human is deciding what to dig for and where to dig. It's ok if AI does the actual digging. But if the human can't even tell what to dig for and where to dig, then that human isn't providing value.
Agree in general with the points made. But I would argue that using Copilot and GPT-4 or not (or how much to use them) also depends on how much time one wants to spend becoming a really good software developer. If one wants to stay in the field of software development and isn't interested much in other skills, sure. But for generalists, for whom coding is just one of many skills that are awesome to learn for building projects, those AI tools can be even more useful. Personally I don't plan on becoming a world-class software developer. I just want to quickly write code that works, integrates with the other skills I have, and runs reliably and decently safely. Also, at least in my GPT-4-supported workflow so far, it has always been an iterative process, where (because of GPT-4) I keep learning about new libraries and ways to write code I wasn't previously aware of. So one still keeps learning more about coding.
This is a great point and something I agree with totally - I think this kind of distinction exists in software developers today already, there are devs more on the "software engineering" side where that is their career and they often work with large/complex codebases. And there are other types of software developers who are mostly coding as a way to achieve their own goals, launch their own projects/startups etc, and larger scale software design concepts aren't really a concern (and I think AI is going to unequivocally be a massive boon for this). The things I am talking about in the video definitely apply much more to the software engineering side of things.
Relying on unreliable AI to write "decent" and "safe" code is an absolutely bad idea for non-experienced programmers. Take me as an example: GPT could somewhat easily convince me about how a certain system works, since it almost always presents text in a simplified and convincing manner. Anyway, reading a Stack Overflow post may be "less productive" (idk why people are like: OmG tHiS wIlL iNcReAsE mY pRoDuCtIviTy - as if most problems in software engineering are related to "productivity" and not to writing absolutely garbage code), but it is much better to dive deep into something and understand it.
@@walidchtioui9328 In the end it's a tool. A very useful tool. But agreed, one shouldn't just blindly copy & paste everything and assume that it works perfectly, because it won't. If one can directly test whether the code does exactly what it's supposed to, then I don't see much of a problem with it, and there is still a lot of learning involved in that process. But using it to write critical infrastructure... yeah... hell no. Maybe more for trying to find bugs in existing code, and creating more unit tests, for example.
I've followed you for a long time but I disagree with this line of thinking. In the traditional way of thinking, doing = learning. But this isn't intentional learning. It's learning tangentially through the work you are doing. And there's nothing wrong with that. But adding a tool like GPT makes it such that when you need to accomplish a task fast, you don't need to also prioritize the learning. You can do that in a separate project/session. Just like any tool, there are places and situations that are ideal for using it, and for not using it. So I personally disagree with a blanket decision to not use GPT in my workflow because there are certainly workflows in which GPT actually might increase my learning by giving me that time back that I can then use to learn. But regardless, this was a good video to watch because it made me think about why and how I use GPT in my workflow. Great content, as always.
Definitely don't disagree with this, I am going with a general blanket not using it for my coding workflow at the moment (as in no Copilot auto-complete, I still chat with my GPT mate separately about some things) but my intent is to re-evaluate this as I go and likely incorporate it to some degree eventually as part of my every day workflow. I'm just preferring to err on the side of underusing it than overusing it at the moment.
@@JoshuaMorony Totally understand that, I suppose I didn't quite understand the full context until you clarified it with your comment. Thanks for the reply and looking forward to more great content from you!
Great video! Your newsletter recently linked a tweet from Max Lynch about using GPT-4 for Ionic. I used it to create a log-in page and it was not perfect; debugging the AI output took an hour. Debugging my own log-in page probably would have taken longer. However, you are quite right that we should do the hard work ourselves, because at the very least, debugging AI-generated code is a solely human task.
I agree with you about telling an AI where to go for what. However, what I'd offer up as thoughtful consideration is the fact that AI will likely cut out the middle-man. If you're programming and want stuff generated for your code, you ask for certain things; but imagine you just ask it for an entire app? It will know the exact context and, most likely, the best way to do it. You have started from A and arrived at B without a middle-man programmer. Which is the ultimate issue with AI: not that it can code better, but that it can replace us.
I also agree that this is a plausible scenario - i.e. it removing the coding step entirely. In that case my point in the video doesn't matter, because you don't need any kind of coding skills if you are operating completely at a higher level (e.g. describing requirements in English).
@@JoshuaMorony It is, indeed. It is also a peculiar situation to be in. Where are we on this curve of AI advancement? Are we to have governments to step in to classify what exactly humans and AI can do? Or are we to have whichever entity that makes the better product--in every aspect? This is truly concerning to those who are just now getting in the field, professionally. Where are their jobs going to go? Or are they going to become different--such as code reviewers? Or, will we become obsolete? Becoming obsolete implies there is a better way. If there is a better way, are we involved? If not, what do we do? If we don't need to do anything, then, what do we think/feel? That is my biggest concern; as a societal member. I am completely content programming, I love it, but a love for something doesn't pay you; and something that pays you doesn't have to mean it is your means of survival. Where are we going to fit in? Are we going to fit in? If not, what are we going to do, and, thus, what are we going to think/feel?
@@TW-lt1vr I think these are all great questions - I feel like people tend toward AI being just another advancement, like agriculture, or the industrial revolution, or the typewriter, or the computer (and they might be right, because I don't know any more than they do). My guess is that AI might just be fundamentally different, or maybe like 5 or 10 jumps all at once. I think probably nobody has the ability to comprehend or predict what this change might look like, because there are just so many factors involved that could all change in such big ways. Or maybe we reach some sort of plateau with the LLM/GPT approach soon, and we don't see any significant advancements in other avenues of AI research for a while.
@@JoshuaMorony Your individual attention means a lot to me. I greatly appreciate the time you've given to respond to my responses. Thank you! I would absolutely love to see you make a video diving into these possibilities and their respective (probable) consequences, if at all possible. This is such a hot topic that I feel needs more attention: the topic of what we should be asking, thinking, and/or feeling about various probable outcomes from AI.
While I do agree with you, there are a lot of issues with trying to do everything yourself. The first is time, I guess; sure, if you're not hurting for money you're good. For example, I'm using ChatGPT to help me plan out a book. Now, I'm not familiar with the framework of a book in general, and looking at other books just makes my head hurt. On top of that, very few people I know have written anything, so since Google was no help I went to ChatGPT, and it gave me an example of the framework, which I will use in the book's creation. I have also thought about developing a horror game in the style of Resident Evil using Unreal 5, and the amount of just BS out there on how to do this amazed me. I then looked at hiring it out, but most people were out of my price range, so I figured I would at least do the basics, and that's where ChatGPT came in: it gave me an overview and next steps. I'm not bad at explaining things or talking with people in general, yet trying to get things done in this day and age seems to take more than working hard, if that makes sense?
Isn't that a bit like saying you won't use a calculator because you will lose the skill of multiplication, division and so on? I guess some might avoid calculators for that reason, but most probably do use calculators, even if this means they are no longer practiced at multiplication.
The difference here I think is that AI can't (at least yet) do the entire task of coding for you (and there might be other areas besides coding where this applies too). So I guess it would be like having a calculator that could multiply a lot but not all numbers, so the loss of multiplication skills here would actually matter since it can't be entirely delegated to the calculator.
I think what you are describing is akin to moving from an actual programmer position to a team leader, where you let others do the actual programming. Your programming skills will suffer, of course, but as long as you do not have to program again and are solely responsible for the direction of your part of the project - and keep basic knowledge - it is not a problem. I work at a university and usually let my students write the annoying code I don't like to write myself, supervising them through the planning. My Python skills got rusty; nonetheless I got the code and results I wanted. Personally, nowadays, after 10 years of programming all kinds of stuff in C++ and Python, I think programming is a rather boring task and automating it is actually great. I will only dig into PyTorch and co. again to make myself more familiar with state-of-the-art machine learning tools.
That's an interesting analogy - I like that, though I think I take different meaning from it. Moving to a team lead/manager position makes sense, and the people you are managing should be capable of doing all the tasks you need them to do. My concern here is that with AI at the moment, it can't do all the tasks for you - so perhaps it would kind of be like moving to a management position, except your team can only code certain things and so you would still need to step in and do the coding yourself some of the time. In this case, the lack of practice/context from generally not coding on that project might cause a problem.
On the other hand, the developers that become really good at using AI will be the ones companies hire because they'll be faster. Embrace your robot overlords because that's what companies will do.
You barely need to learn AI at all; it's just "writing" - even a baby could do it. There are only a handful of AI writing techniques, which you can master in about 2 hours, or 8 if you're slow at learning. Learning a programming language, an algorithm, or an IDE is way harder; prompt writing is dead easy. You're not as special as you think you are. AI won't make you special - giving ChatGPT to a monkey won't transform the monkey into a human. Remember this: just because someone is hating on AI doesn't mean they didn't use it; on the contrary, they may be way better than you at both programming and prompt writing. There is no AI camp vs. anti-AI camp; everyone, hater or supporter, uses it. Don't wear your AI hat with pride; it doesn't make you superior, and it's dorky.
In Russian there is a proverb, "trust, but verify" (доверяй, но проверяй). In this case it means that AI can do work for you (a lot of work), but you still need to understand what's happening there - whether it's bullshit or not. So you still need knowledge, as we needed it before. Sometimes it's easier and faster to adjust something yourself rather than explaining it to the AI, and for that you need readable code, which the AI should produce correctly. I think we will be doing more code review than we did before, but from now on more of the code will be produced by AI. Yes, it might mean that we will need fewer developers but more code reviewers (still professional ones, though).
What work do you do that AI can help with? I can't even get it to look over a bash script. I have to explicitly explain a bunch of stuff, or cut out absolutely all noise, for it to have any idea what's going on. I basically ask it questions that are googleable, and sometimes it's better than the first results. Other times it's junk.
Just like GPS, people choose not to pay attention to what they passed on the way. Jim commented before as well about asking GPT questions about the code, and that makes the most sense to me. I still have code reviews on my team as well, so at some point multiple humans are going to go through my code. At the same time, I know what you are saying, because I also believe Uncle Bob that devs need to regulate themselves a lot of the time, and I have worked with plenty of people who don't use SOLID, or write tests, or do code reviews. So even if I do feel good that I will ask most of these questions and learn the how and why, I think a lot of people won't, and it will instead be a crutch for them.
Correct me if I'm wrong, given I don't know much about how AI coding works. But to my understanding from the video, the concern is that GPT-4 and future versions alike have the potential to make us lazy in our work by relying on them too much, which could reduce our critical thinking skills?
Sort of, the specific concern here is that yes in a sense we could rely on it a lot and lose a lot of context of what is going on and likely some ability to do the task - this isn't really a problem if the AI can do all of the task for us, because we don't need that knowledge/context then. But with the current state of AI assisted coding it can't complete all of the tasks for us, so my thinking is that if we rely on it too much, when we come to those tasks we still need to do we will be lacking a lot of relevant knowledge and context that we have passed off to AI.
I've gone back and forth on predictions for AI, but I have settled on thinking that it will not be a replacement. Coding is about specifying a system, and there are so many subtle decisions that go into it. Any half-decent programmer understands this. English simply isn't a good language for specifying a system precisely, and we need the ability for precise understanding and specification to be able to create quality products. You can't just ask an AI to "create a self-driving car", say, because what does that actually mean? Someone needs to decide on the details, and the best way to understand a system, often, is to actually implement it. I also wonder about the theoretical ability of AI to even create quality code. How does it know what is quality code and what is not, if its training data is filled with non-quality code?
I have wanted to create a mobile app for the last 10 years. No one else has made it, so there's probably no demand for something like this. I have a vague idea about programming; the last time I was paid money for programming was 15 years ago. When I asked devs how much they wanted to create the app, the lowest estimates were around 5000 dollars. It's a pretty simple app. Now I've decided to write the app myself using GPT-4 and the programming language I know best: English. I'm pretty sure I'll get it done in 50 to 60 hours. This means that I make/save 5000/60 = 83 dollars per hour. Not a huge amount, but if I had to buy it, I would have to do my day job for 20 to 30 more hours. Seems like a win. Sure, right now you can only do this with simple coding projects. But this too will pass; in a year or three, English will be all you need.
I think the effect is not that you will stagnate, but the opposite. If ChatGPT solves something differently than you expected, you can do your research and learn from that...
About the video scripts you turn into articles: I wonder why you don't just write the blog post first and then, when recording the video, simply read the post. I sometimes do that, though I have to change some parts while reading.
I feel like the style (at least my style) for YouTube is just very different, and the algorithm is a lot less forgiving of content than someone who arrives at your blog post, so I would prefer to optimise the content for YouTube.
Your logic is not convoluted at all. Your reasoning is straightforward and accurate. You are entirely correct in stating that AI's ability to expand our knowledge through self-learning is limited, as it may slow down the learning process.
Human development was taken for granted because life made it so that you had to make efforts, physical, moral and intellectual. Machines have long taken most of our physical burden (yes car, I am looking at you), and we generalized sports to counter that. Machines are going to take most of our intellectual burden, and we have to find a new reason to make efforts and solve problems. Making it enjoyable to learn for instance. As for the moral battleground, I don't know ...
There's nothing to fear; it's like playing games with cheat codes - yes, you can take advantage while using it, but it will get boring faster. Seeing these tools as assistants, or even a 24/7 teacher, is a better approach than relying on them to do your job. Like in any industrial revolution, the workers must evolve or die. If anyone feels that this will replace their problem-solving skills, it certainly will...
I don't believe that AI would cause ambitious developers to stagnate. You can always use it to assist in your current job and save time to learn new things. So, it depends on what you want to achieve. If you want to free up more time to play video games during remote work, you will definitely be at risk of burning out quickly.
My sense is that the more experienced you are, the less danger there is in potential over-reliance on AI, so probably a good bet for you to make, I reckon - I guess we have diminishing returns the later we are in our career, focusing on more and more niche things. Maybe there are far greater gains to be had by diving into AI in this case. It will be interesting to see how it plays out for people more on the beginner end though. Say, coming out of a coding bootcamp or university, being able to be quite productive straight away with AI, but perhaps missing out on some fundamental learning whose importance only becomes apparent later. Or, perhaps not!
My biggest fear about this A.I revolution isn't so much about productivity demands or whether or not AI can replace current developers. I think at least in the short to medium term, if you're somewhat established in this field, you will be in demand, and you will be using AI to improve your results and productivity. The problem is precisely new people coming in. Unlike many here, I don't really work for an IT company, but instead for small marketing agencies. Code is not their end product; it's a means to an end. They will sell a basic WordPress site or OpenCart store just to give more "value" and grab customers for their actual marketing and design products. They don't require highly specialized devs and don't even want them - they're too expensive after all. The moment they can justify downsizing their front end team from, say, 3 people to 1 + AI, they will. And these jobs are some of the best entry-level work available for front end developers, for example. I can honestly see these advances being extremely damaging for those starting in this field, lowering the job opportunities for getting into it. And then it will just go UP the expertise chain from there.
Exactly. Frankly speaking, I don't even want to hire juniors anymore because I know their tasks could be 90% automated or fast-tracked with these tools.
At the moment ChatGPT is Stack Overflow, but personalized for your problem; it's still far away from solving real-world problems. I can give ChatGPT easy-to-medium problems, and I use it only for that purpose. But real problems need much more context and are often very deep, requiring you to think constantly about the infrastructure of the project, which I think ChatGPT forgets after 4-5 queries. I feel I do the same as ChatGPT, or ChatGPT does the same as me, but the difference in depth of understanding is huge!
I can assure you, I have used GPT-4 to solve multiple _real-world_ problems per day, every day for the past two weeks. And these are coding problems, and they are not simple ones.
@@marczhu7473 Still, it's dumb when it comes to specific problems; it's just a good tool for generic questions, the same ones you can find on Google, though sometimes very well adapted and sometimes not working, so it's not comparable with human-level ability. But in some respects, like speed and practicality, it's great. I'm not saying bad stuff about GPT-4 or any other LLM, but what we are dealing with is not replacing thinking; it's replacing repetitive thinking, with the addition of adaptability, which search engines are lacking. In some months we will get back to good old human-written algorithms :)
I can see the problem you're explaining, and for sure some will definitely rely on it too much without thinking. However, I do like it when I solve a problem myself and then prompt the AI to beat it under different limitations. With that I was able to optimize my code quite a few times already. Another use case where it came in handy was looking up how to use a particular tool in the context of my problem. So essentially finding alternative solutions and generating leetcode-style exercises feels pretty good to me.
Or we would free up memory (from remembering syntax etc.), in the same way books did (i.e., we don't go around singing/telling entire epics anymore to remember them; we have them written down). I'm not sure this has made us dumber. I'm sure someone said the same about the calculator ;)
Yeah, it will make things simpler by giving us syntax. My worry is that we won't be able to work independently without tools like Copilot if we depend on them for syntax. Anyhow, people have mixed feelings, and that's OK.
AI can be used to learn as well. If you solve a problem with a good-enough solution you already know, it may be easy to leave it that way and learn nothing. Prompting the AI to refactor it can suggest something brand new to you. It is hard to learn something you don't even know you don't know. Other people can introduce you to such subjects, but so can AI.
The new programming language is English. The skill you’ll need for coding or any other job in the near future is knowing how to use AI to maximize your productivity.
I appreciate your perspective. You worry that relying on AI for coding will affect your coding skills. While that is certainly likely, those skills will diminish in value once programming is much easier and more accessible through AI. Resisting change while the world evolves is risky. If those skills are only worth 10% of what they are now, why wouldn't you just retrain or move up? Be the one who masters AI to focus on what only humans can do, or do better. What we should focus on is interesting and challenging problems for the satisfaction of it, but there is the complication of how to pay the bills.
I hope you are too, because I put Adelaide references in my video every now and then on the off chance someone else from Adelaide sees it! (we never get mentioned lol)
@@JoshuaMorony I was born in Whyalla and live(d) close to the Adelaide Oval for many years. I became a Software Developer at TAFESA😜. Right now I'm in Europe for some time, working as a full-stack Angular/.NET Core dev, and I am constantly learning good stuff from you. So good to hear that you're from Adelaide!! My god! Your work on YouTube is amazing. Keep it up Joshua
I think it's about time software engineers started working at a higher abstraction level, rather than rebuilding microservices and UIs in new frameworks over and over again.
Idk bro, I used GPS every day to get back and forth to work. Then one day I didn't need it. Still used it, just didn't need it. I see much of the same here.
The sooner AI learns to output binary or some bytecode that implements a given task, the better. As long as it's correct, safe, and reasonably fast, it does not matter how it's produced. Coding skills will be deemed useless and, like any other skill in history that became useless, will go away. And that's a good thing. That does not mean nobody will code anymore, though. Coding skills and programming languages are just tools that help us in times when we don't have access to competent AI. When that happens, everything programming-related will go away...forever. I personally can't wait.
I kind of disagree with your thesis on the issue. Just because there are caveats to using GPT-4 as a powerful coding tool (the ones you mention) doesn't mean it can't be used wisely in a way that mitigates those caveats. With the time saved creating a piece of code from scratch, one could audit and review the code to give it a good quality check and make sure it satisfies the coding standard. The time savings would also allow one to learn about any techniques one is not familiar with. Also, with experience you can decide which parts of the code GPT-4 would be good at, and which parts it isn't, and do those completely from scratch. At this stage, I think it's fair to say GPT-4 is useful to any project, if used wisely. And if it isn't, just wait for GPT-5 or GPT-6. It's clear where things are headed, so everyone best get on board.
Your ending argument explained that you should use the AI in areas that won't affect your quality. So it's an argument for limited AI use rather than what the title implies (that you shouldn't use it at all).
Yes I agree with that - my position for the video isn't that it should never be used, the intent with the title is basically to juxtapose the concept of this thing being clearly amazing, and then highlighting reasons why you shouldn't use it (not that you should never use it). But yes, "when you shouldn't use it" would perhaps be more accurate
Ask yourself why you even code in the first place. Why do you need to work? Imagine 10 years from now, when the prompter just asks the AI to make the full software they need, given the available hardware specs and infrastructure.
I think this becomes a pseudo-problem, especially when you use AI to be more productive and also to learn things you didn't know how to do, so you learn more and also produce more. And yes, there are people who don't use it to learn, just to do the work.
I do agree with your concerns about integrating AI into your everyday coding workflow. The temptation to rely on AI to solve complex problems might indeed hinder one's ability to learn and grow as a software developer. Your analogy with GPS navigation was an interesting way to explain this concept. I think it's essential to strike a balance between utilizing AI for productivity gains and maintaining our skills as developers. Perhaps we can use AI to help with mundane tasks or as a reference when we're stuck, but still focus on honing our problem-solving skills and understanding the broader context of our projects. Thanks for sharing your thoughts on this topic. It's crucial that we, as a community, discuss the implications of AI integration in our work and carefully consider the trade-offs. Looking forward to more content like this! Keep up the great work. 👍
Definitely agree a balance can be struck, and as I mentioned in the video I think people like ThePrimeagen are doing that well. For me, at least for now, it seems easier to just mostly not use it (or use ChatGPT where I specifically have to go out of my way to use it, it's not integrated into my workflow). Maybe we can have self enforced "AI budgets" where we can only use it a limited time each day like restricting the time we spend on social media lol
But the argument can also be made that human coding is getting worse and worse because we're choosing to write "neat" code instead of "good" code... so how would this be any different? And unrelated to that, I feel like tools like ChatGPT and Copilot would be a lot more helpful to me, personally, if they would just "line complete" for me so I have to type less... and when a method is complete, give me a little Clippy popup going "hey, instead of doing that try/finalize thing you did there, you should just do a try() instead". So it's more like an actual "copilot" than doing the job for you... analyzing the code as you go and suggesting improvements that you'd probably have to do later when you refactor anyway...
Copilot X is way better, which is because it was built on GPT-4 to begin with. It shouldn't be the pilot of your code adventures, unless you want it to be. Which is what Copilot X is.
I bet using AI to solve problems will have its own unique quirks, and you may use the same AI to optimize the code or learn what is best for the given application and why. Some knowledge might be lost, but much more will be brought to the table. I don't really agree, but time will tell whether we will degrade by using AI or will be able to be experts in way more fields. Could be both at the same time.
Good point indeed! I'm worried about all the possibilities that Junior/entry-level developers will go extinct due to corporate "spending optimizations". Learning to write code is one thing, learning to build systems, especially large enterprise systems is something completely different, and that requires work in the field and gaining knowledge through experience. If Juniors are no longer hired, or very few of them are hired, our profession will die in the next several decades, due to natural causes - lack of new people practicing it.
As someone interested in programming with no background in computer science, I find these tools can only help so much, and they lack the big-picture thinking we have. I think the video demonstrates that if you know what you're doing it's a great tool, but without a fundamental understanding of how to achieve the goal, the tool is useless.
I've found Chat GPT-4 to be excellent at explaining programming and not just writing code. It can explain a lot of concepts better than the docs for most projects. It may be worth asking it questions instead of asking it to write code. This is especially helpful when you're writing code in a language that you don't use often. Your programmer mind knows what you want to do, but perhaps not the right syntax. It can remind you of the syntax and also teach you how to use it. It can also be very useful for one-off scripts and things you wouldn't normally invest time in programming. If it can write the code for you quickly and also teach you something along the way, you're still building your own neural networks and getting things done at the same time.
It's true, but a lot of people, me included, learn and memorise by doing. Same for coding: you learn by doing, by your mistakes, by debugging and so on... this is where you build real understanding. You can read 10 pieces of documentation per day if you want, but if you build nothing with them, you will forget them in a few days, and it will be pointless. The same applies with GPT-3 and 4: if you rely on it to write code and explain key concepts, your brain will start to avoid memorising concepts since it doesn't need to, and you will end up staying at the same stage.
However, you could use GPT-3 and 4 as a personal mentor, asking questions while you try to do it yourself like a good student would, and improve yourself through a more involved kind of shadow programming. We still need to be aware of how to use it. It's a skill, like searching on Google was a skill before.
@@xavierpierre5586 You don't need to memorise anymore. You have an external memory. Once there are neural links, your memory will be access to the body of work on the internet. This is why school is so bad: you don't need to memorise junk that you can easily access. In the future, you won't need to learn other languages; it will all be translated in real time using APIs.
@@defaultdefault812 That's still memorising, kind of. For your skills you definitely need to memorise, and that's why we call it a skill. Even using ChatGPT requires some level of "skill"; you can't ask ChatGPT how to use ChatGPT because that doesn't make sense. Therefore it doesn't mean you no longer need to "memorise". Unless AI replaces humanity and human culture is destroyed completely, as long as there are humans, memorising will always take place.
@@kelvinchin5942 have you *tried* asking ChatGPT how to interact with it? It's surprisingly insightful
100% agree. I find it helps me learn concepts much more easily and more quickly points me to the right answer, rather than searching through the opinions on Stack Overflow. Not to mention, coding is more enjoyable because I spend more time accomplishing the objective rather than digging through the internet for answers.
I've been mainly just using it to learn. I've always hated docs and much preferred just asking my lecturers a direct question. That's how I like to use GPT-4: it can point me in the right direction of a topic to look more into. Really great for beginners who feel lost and don't know what they should be learning.
Sooner or later people will be bored of AI and will go back to pen and paper. And actually meeting the person, since video call can be faked using deepfake.
Yup, I am in UG. Learned about the event loop, examples of callbacks, promises, async-await and many confusing topics so clearly. It's fun conversing with a personal teacher and asking the doubts I would be shy to ask in public.
AI is not a software engineer, it's a "coder". It can do the tasks that usually junior engineers do in a company: supervised, instructed code writing. Not software design. By using AI you free yourself from stuff like writing boilerplate code, reading framework documentation, googling function specifications and looking for solutions that are already well known. This frees up a lot of time to focus on the actual software design task. In the end it's up to you to decide, whether you learn something from the AI generated code or just take it as given.
The problem being that this will, in time, greatly reduce the number of those low-experience "coder" jobs available and slowly drain the pool of new talent coming in. And make no mistake, it will eventually reach the upper levels of the expertise chain too.
@@antedeguemon1194 It will simply raise the bar, and the usual "cheap coders", who think they are software engineers, will learn that just attending classes at university is not enough. So yes, people will need to put way more into it. And guess what: I am a software engineer and I learned 99% by myself. I learned the stuff that distinguishes me from the 99% of people who can hack together some Python code and write some web apps. AI isn't even close to doing my job. They might be out of a job, but I will not. And neither will the people I work with. At the point we are replaced by AI, humanity is fucked anyway. But that is probably gonna take a few more decades, lol.
@@timokreuzer1820 I mean, you're not wrong, but it is worrisome in general that entry-level jobs are going away. Great thing it won't affect you in the short term!
My point still stands: it's going to do a number on the entry levels, not only for IT and dev but for a lot of other things, and I don't see that as an objectively good thing. And it will, in fact, affect you a lot earlier than you expect; unless you're actually super rich, I doubt you will be insulated from the general mess just because your job will last longer.
And, note, I'm not saying you or anyone should "feel bad" or anything, and I also understand there's no stopping this train. But at the very minimum I'd hope people at least acknowledge the issue, and perhaps discuss and vote for policies that could help mitigate this, instead of having an attitude that amounts to "not my problem"
@@antedeguemon1194 The people who can quickly learn how to use the latest technology to get things done will be fine. Those who can't might have a hard time. Fair or not, it is what it is. That's how I became a programmer in a few years, and that's why I'm learning to use AI now.
Within about 10 years we will be at a point where writing in a high-level language like C or JavaScript or PHP will be akin to writing assembly code in 2023.
Something like a GPT-5 will definitely be able to consider an entire codebase, how it functions and interacts with itself and with outside APIs, and to develop, maintain, and patch it almost autonomously with English-language instructions from the "developer".
I agree this is a likely endgame - my sense is that we will either discover some sort of hard limit of the LLM approach for coding, or it will continue to evolve quite quickly and mostly everyone will code in their spoken language (a kind of useless guess, but I think there is a decent chance of this happening on a much shorter time frame, maybe even a couple of years, just based on how quickly things have been improving).
But there is this sort of weird in between part with AI. When C became available, you can just use C for everything, so there is no need to understand the layer underneath at all. With AI assisted coding, we still need to understand that underneath layer to some degree until AI can *completely* do everything we ask of it in that underneath layer. That's where I think over relying on AI might lead to problems with devs not understanding things they still need to understand.
@@JoshuaMorony There certainly is great benefit in understanding the layer below C, though a general conceptual understanding is enough. You are right to the point of not needing to be able to write assembler, but for any serious C programmer it is essential to understand the machine model C builds on, which corresponds tightly with how the CPU and memory work. C appears deterministic and linear on the surface, but because of the way modern CPUs do many things in parallel, there is a risk of nasty behavior or performance surprises in how your code works if you lack an elementary understanding of that.
Do you know how absolutely complex large software systems are? These are not LeetCode-style problems. What you're essentially saying is that GPT-5 is going to be AGI, which is, frankly, an insanely stupid thing to say (even renowned AI researchers rarely make such assumptions).
@@walidchtioui9328 I do. I do realize how complex large software systems are.
Even if Moore's law is dead, it seems to be living on in AI and LLM development.
I don't think that will be GPT-5, but maybe a few iterations down the line.
If a sufficiently advanced LLM had access to some specialized tools to actually run and lint the code / test stuff, it could very easily manage a pretty robust codebase.
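To make that concrete, here's a minimal sketch of the kind of run-lint-fix loop I mean, in TypeScript. `askModel` is a hypothetical stand-in for whatever chat-completion client you'd wire in, and the eslint/tsc commands assume an ordinary Node/TypeScript project; this is illustrative only, not any real product's interface:

```typescript
import { execSync } from 'node:child_process';
import { readFileSync, writeFileSync } from 'node:fs';

// Hypothetical stand-in for any chat-completion API call.
type AskModel = (prompt: string) => Promise<string>;

async function fixUntilClean(
  file: string,
  askModel: AskModel,
  maxRounds = 5
): Promise<boolean> {
  for (let round = 0; round < maxRounds; round++) {
    try {
      // The "specialized tools": a real linter and type checker give the
      // model concrete feedback instead of letting it guess.
      execSync(`npx eslint ${file} && npx tsc --noEmit`, { stdio: 'pipe' });
      return true; // no diagnostics left, stop iterating
    } catch (err) {
      // execSync throws on a non-zero exit; the tool output is on stdout.
      const diagnostics = String((err as { stdout?: Buffer }).stdout ?? err);
      const source = readFileSync(file, 'utf8');
      const patched = await askModel(
        `Fix these diagnostics without changing behaviour:\n${diagnostics}\n\n${source}`
      );
      writeFileSync(file, patched); // apply the rewrite and re-run the tools
    }
  }
  return false; // gave up: a human still has to look at it
}
```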
But you're right, you will still need really experienced software engineers and architects to actually review and optimize everything, for now.
how the hell can you state that "GPT-5 will definitely do" this or that? lol you're delusional. You remind me of those who said "GPT-4 will have 100 trillion parameters" :D
Nobody knows what the future holds, or how well transformers will handle inference. Not to mention inference costs could surpass the cost of having humans do the work, for example. There are too many variables to take into account; it makes absolutely no sense to say "definitely" in the context of AI, and it proves you know very little.
"If you think without writing, you only think your thinking"-Leslie Lamport
This video hits the nail on the head for me. I don't use GPT for blogging either; my goal when writing isn't to create content and move on, it is to see what my own thoughts are.
It also needs to be far, far more trustworthy than it currently is. After 30+ years, I code rapidly with an error rate close to zero. Moreover, in my projects my code is usually manipulating disparate other parts of my own code, or calling a library. I write a lot of financial apps, so my code needs to be correct.
Rarely do I write code that does something to the values themselves, like a pure function. The AI doesn't know my codebase and how I designed it to work, to be used, or to be reassembled to implement new features.
Whenever AI has done something for me, I have been left feeling in the dark. There's a psychological bias called The Illusion of Explanatory Depth.
It's when we mistake what's familiar to us, or what we've read, with what we truly understand, and that this illusion is only broken when we try to explain it ourselves in full depth.
Personally, I find it takes longer to understand code I did not write than it takes to just write it myself, which is why I often refactor someone else's code in order to grok it.
Our brains need thorough concrete knowledge in order to recombine the raw material and make new ideas.
AI writing code reminds me of very large frameworks. The alluring demo always shows how easy it is to make some part of an app, I fall for it, and then it turns out my needs are just enough outside the "norms" that the framework has painted me in a corner, led me astray, or hidden things from me, or made me dumb and helpless when I'm stuck, like Microsoft Webforms hid the simple, elegant reality of HTTP.
This all reminds me of the bizarreness of programmer interviews. People rarely test me on the things that matter, which are at the macro level and are rooted in good design, good choices, things that are verging on issues of "taste" and art.
I suspect a Gartner peak of inflated expectations followed by a sobering trough of disillusionment before we really understand how best to work with it.
“It will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.”
That was Plato explaining why writing is bad for you.
You can walk everywhere if you want to, but I'm taking the bus.
Seriously. I've been using GPT to help with my job for two days now, and already, I'm like... if you're not using it, you might as well be choosing to dig a ditch with a spoon. If that makes you feel better about yourself, good for you I guess, but if you actually want to be maximally effective... I still have to understand what it's telling me and know how to ask for what I want... it's no substitute for actually having knowledge and skill, at least not yet... but the amount of time it saves is staggering.
I have been coding for more than 10 years now, and I have been using ChatGPT since it came out, and I think my coding has improved a lot since then. I like to read the explanation of the code from ChatGPT, which has improved my understanding of code in general and is probably a good way for new developers to learn to code. Also, the solutions I get from the AI are often different from what I would have done, and after adapting them I keep using them in the future. So I have to disagree that it hinders your improvement. Maybe it will for 10x developers, but not for 99% of the rest.
I've found the opposite: after about 2 weeks of working with it, it takes me twice as long fighting with GPT-4, fixing errors, cleaning up output, and punching up output, to do what would have taken half the time without ChatGPT.
It's like an easier-to-use Stack Overflow. Despite what you need likely being there, you have to understand the concepts and vocab in order to express the outcome in sufficient detail. You also have to know how to tweak it to your use case. At that point, you're still an engineer, just with a more sophisticated tool, allowing you to solve much larger problems.
Anything short of full stack will soon be a joke.
full stack has been a joke all the time
So, so vastly better than SO. I ask GPT "why am I getting this error?" and it explains clearly what the error means and gives me useful suggestions. SO gives me 10 answers all marked as duplicates of a thread that explains nothing and is irrelevant to my situation.
Stack Overflow is worse than Reddit lmao, one of the most toxic sites I've ever gazed upon; it's Kiwi Farms levels of bad. I've never had a single question I searched for get answered properly without flame wars, sass, or people making up their own unrelated question and then answering that.
As a professional developer, 99% of the code I write consists of intuitively obvious solutions made difficult by the fact that I need to make calls to library functions I'm not familiar with. The idea that every problem is a learning experience is ridiculous. Especially when something is being written for a client, it's very, very rare to encounter a genuinely novel problem. I've rarely ever felt like I've improved as a programmer by doing the sorts of tasks I've actually been paid to do. Using AI to solve programming challenges, which are *designed* to be learned from, is dumb because you're using the problem wrong. Believing typical coding is like one of those problems is even dumber, though.
Amen. Coding is essentially a made-up role in society that we could do without, because it's honestly not a great human experience. The sooner we can get rid of having to do it ourselves the better, as we can get on with learning more useful, intuitive problem-solving skillsets instead of tedious, repetitive labour. I see software development as basically what digging with a shovel used to be before diggers existed. Far too many people are losing sight of the zoomed-out picture of creatively building stuff, which is the end goal, because their life is so zoomed into the coding aspect that they had to focus on for most of their life.
I've personally never seen the fun in coding. Building something with code is fun, but the coding part is 0 fun. Much like how I imagine shovelling 2 tons of dirt to build a house foundation in the past was no fun.
Tbh this video seems like clickbait; none of the arguments even remotely make sense.
@@heruka111 I think that's an exaggerated statement. They do make sense and are relevant, but they are shoehorned into making sense.
Great points! I was using Copilot for a couple of months, and as much as I liked the ability to just TAB in boilerplate code, it made me a lazy developer. At some point, I realized I had no idea what I was writing, why, and how these code blocks fit together. After disabling Copilot I am writing much more boilerplate code, but at the same time I am also much more secure in my codebase and can take full responsibility for it.
This is pretty much what I envisioned for myself, but it's great hearing someone's actual experience. I had similar experiences with using extensions/generators (non-AI) where I felt like I just wasn't grokking the big picture as well. I also know that some people have a totally different experience with AI, but this fits I think with my style.
This is just growing pains. It will require a whole new way of thinking/working. Develop alongside AI: make it question your thinking, analyze for security risks, optimize code, explain code, explain different ways of doing it, summarize teammates' commits to fully understand their impact, etc. Ctrl+Enter to add the full suggestion from Copilot is probably not the optimal way. Considering the free market, it will be a rough road ahead for those avoiding this new alien assistant.
@@MrTPGuitar I agree, but Copilot (now also paid) is also lacking a lot, and many times it introduces extremely shitty code that legitimately pisses me off.
I guess it's as ThePrimeagen stated:
- build your logic and let Copilot build upon it
I think this is, for now at least, the only use case.
Interesting, I've been shocked at how bad it is. I've spent a lot of time just telling it its code is wrong and ending up in an almost infinite loop of "I apologise for my earlier confusion...". It's great for things that involve words, and great for ideas, but for particular coding problems I find it just as time-consuming as looking for needles in hayoverflow. Concepts are a good thing to ask it for, and examples, but I don't get any of the magic results others do.
I agree. Even with writing, there is a shallow depth to it all.
As I mentioned I'm generally avoiding it for coding, but - although I've seen some impressive GPT coding examples - I think where most people are seeing success is moreso with CoPilot, e.g. starting to type out the function name and having CoPilot auto complete the function for you. That's the sort of thing I am most concerned about incorporating into my workflow
I see a lot of coping. 😂 It isn't perfect, but it's a lot better than a lot of people give it credit for.
@@mattizzle81 it’s another tool to be used. The power is in the wielding.
I tried it and it gave me the perfect result all three times. I think you are writing the prompt incorrectly; if you write better prompts, the AI will understand better.
I have 30 years of programming experience, so I am going to program with AI now. I am very bored of the kinds of programming I have seen happen over and over again, and very happy that I no longer need to spend hours on boilerplate code, so I can focus on the part of the problem I like to solve versus all the other parts of the code. That being said, it will be interesting to see how the next generation of programmers will relate to the code. Will it just be a jungle for the vast majority of them, undecipherable without the help of AI, or will they find their way in even if they hardly ever write their own code?
Same here. Also, the teams and companies building the libraries and languages we use are constantly adding new features that can make coding easier and more efficient. But given the constraints of daily life, many of us cannot devote the time to keep up with all of these changes in addition to the changes in the business processes we support. So, if an AI augmentation system can help us create better, more efficient code quickly, we should use it.
5:43 I get your analogy perfectly! I have consistently avoided GPS all my life, and I now understand how everything is interconnected, since I have plotted it all out in my brain during my many years of travel. Roads have crossed, I sometimes borrow a piece from one trip for another, and I intimately "feel" this web of roads and the sites it connects in my own internal "GPS".
Good point. But the reality is that humans naturally gravitate toward whatever is simpler and requires less energy to get their work done, and that's a good thing. AI tools provide just that and are constantly improving. Developers can try to avoid them for a little longer, but eventually most devs, by a large margin, will be very dependent on AI tools to get their work done.
I agree in general but I think this only works (well) if AI can can do everything/there is no need to understand the underlying concepts anymore, e.g. people give the typewriter -> computer example, that works because there is never any need to use a typewriter anymore. Same say with machine code/assembly - if you're a JS dev there is no need for you to ever touch machine code. But if you're a JS dev and AI can do 90% of your coding for you, I'm suggesting that in this case it's dangerous because I think people are going to struggle to be effective with that required 10% if AI is doing most of the work most of the time because they aren't building those skills required. If AI can do all of the JS (or replace whatever language you like) work, then I think it becomes mostly unnecessary to ever have to know what JS is doing.
Yes. Just wait for IDE integration and 100% of devs will use it.
I suppose, in some sense, it is similar to how these days you would (and usually should) use a library to sort an array, rather than implement the sort yourself - but it is still important and useful to know how sorting algorithms are implemented and what the differences between them are. Especially since they are often not 'intrinsically' better or worse compared to one another, but rather more or less optimal, given a particular set of circumstances and/or priorities (speed vs. memory usage, etc.).
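To make that concrete, here's a quick illustrative sketch (mine, nothing from the video): you'd normally just delegate to the library sort, but knowing how something like insertion sort behaves tells you when the "worse" algorithm is actually the right fit:

```typescript
// The everyday case: delegate to the library.
const scores = [42, 7, 19, 3];
scores.sort((a, b) => a - b); // engines typically use a TimSort-style hybrid

// The "know the layer beneath" case: insertion sort is O(n^2) in general,
// but on small or nearly-sorted inputs it approaches O(n) with very little
// overhead, which is why hybrid sorts fall back to it for short runs.
function insertionSort(xs: number[]): number[] {
  for (let i = 1; i < xs.length; i++) {
    const current = xs[i];
    let j = i - 1;
    while (j >= 0 && xs[j] > current) {
      xs[j + 1] = xs[j]; // shift larger elements one slot right
      j--;
    }
    xs[j + 1] = current;
  }
  return xs;
}
```

The point isn't to ship your own sort; it's that knowing the trade-offs is what lets you pick the library call sensibly.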
A friend of mine came to me seeking advice on how to code. I gave them the advice, but also stressed the fact that AI can write code, but it won't teach you problem solving.
And that is why I turn it off when learning new concepts and turn it on when writing boilerplate code.
The way the argument was presented appeared both convincing and sophisticated to me.
I use it with Gherkin, plus lots of unit testing and fitness functions. I find I can spend more time on code quality, and my creativity is unleashed as I spend less attention on boilerplate and syntax and more on systems thinking.
Not only that. It is also very likely that one will unlearn learning and thinking. After all, they say "use it or lose it"! Outsourcing thinking will lead to a decreasing IQ.
I totally see your point for experienced developers, though for beginners I'm not so sure. By the time the beginner starts to become more advanced, I feel AI will have developed more, to the point where it truly doesn't matter, making those skills that were learned useless...
Yeah it's an interesting situation, I can see that playing out - the hard thing is betting on 1) Will AI be able to do all of the work at some point? or will there end up being some plateau with the LLM/GPT approach? 2) If AI will be good enough, will it be in 1-2 years (prob wouldn't matter in this case) or in 10 years (then it would because that's a big chunk of a career) or more?
It's important to realize that LLMs are adding machines. They take the past 10 years of internet data and regurgitate it using weights and biases. It's great for synthesizing this heap of knowledge and serving it to you clearly and concisely (albeit with a varying degree of incorrectness). It's not intelligent. It's an advanced adding machine / search engine.
Very thought provoking. This makes me wonder if people who use these tools heavily will actually be less productive. Maybe they’ll be quick when starting a greenfield project but eventually steer themselves into a mess they can’t get out of. Or they’ll become less capable of critical thought and dealing with rare tools and scenarios.
I'm using GPT purely to cut through the bullshit for learning or validating my questions. It is much easier to ask it "show me an example of using Observables with an RxJS websocket subject to pull in data and filter it with RxJS operators". It will give me the example I ask for, and I save time on looking up a few imports and figuring out how to pull them together.
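For reference, the kind of example that prompt produces looks roughly like this (the endpoint URL and message shape are made-up placeholders; `webSocket`, `filter`, and `map` are real RxJS APIs):

```typescript
import { webSocket } from 'rxjs/webSocket';
import { filter, map } from 'rxjs/operators';

// Hypothetical message shape, for illustration only.
interface Tick {
  symbol: string;
  price: number;
}

// A WebSocketSubject is both an Observable of incoming messages
// and an Observer you can push outgoing messages into.
const socket$ = webSocket<Tick>('wss://example.com/feed');

// Pull in data and filter it with RxJS operators, as in the prompt above.
const applePrices$ = socket$.pipe(
  filter(tick => tick.symbol === 'AAPL'),
  map(tick => tick.price)
);

applePrices$.subscribe(price => console.log('AAPL price:', price));
```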
Then I go and read the docs and experiment having already seen an example that I can relate to.
I will never use it to produce code for my job. It helps me learn or prototype faster, but it will never replace my understanding, because if I don't understand the solution then I can't fix it or be sure we are providing what our clients paid us for.
As a self-learner with a moderate amount of skill/experience I tend to use the AI as though it were a tutor. I do my own work and ask it for tips on specific errors I encounter.
Using it to build things from scratch when you have little experience is more or less worthless. It MAY build something that works, but the user needs the insight to steer its design and functionality, as well as the ability to pick out and troubleshoot errors, or they'll get nowhere.
I've used it to explain somebody else's uncommented and convoluted code and it did a great job. I've also found it convenient for code optimization and even adding comments
Good comparison to GPS. If you haven't seen them, see the "GPS is bad for your brain" articles. I find programming to be super boring, but the one thing I like about it is using my brain to solve the problem at hand. If I just farm that out to AI, I'm losing the very thing I like about programming. I'm very much into healthy aging and have been for quite some time. At the top of my list of things I don't want as I age is dementia/Alzheimer's. I actually consider programming to be part of my strategy to avoid that (along with diet, exercise and sleep). I'm OK with getting rid of some boilerplate code. I'm really hesitant to just let something else do all the work for me.
You seem like a proper software ENGINEER, where a lot of others are software developers and use a lot of stackoverflow copy/paste anyway. So using the AI is no different for them because they don't code to learn. And tbh, most coding is tedious and in the realm of shitty, laborious, chore-like work that we have to do instead of being serially creative and entrepreneurial continuously. AI will streamline the human experience to do more of the things we love which is to play, create and experience.
Really liked the reasoning behind this. As someone who's far newer at coding: since a lot of the "co-pilot" aspect comes from the LLM being trained on what is basically previous coders' work, wouldn't that teach or translate some of their skills as you work alongside the LLM?
Last month I learned a lot of things about programming thanks to ChatGPT.
When I ask it to do something I already know how to do, the AI does it in a way that I didn't know, forcing me to learn it and step out of my comfort zone. Then I try something bigger, and the AI code gets some little mistakes, forcing me to learn more about the generated code so that I can fix it.
I'm a better programmer now because of this, and I can use these new ways of coding without any help from forums or AI now, because the AI forced me to learn.
AI use is good or bad depending on how you use it.
Totally agree. That's why I'm using it mostly to help me with syntax, as I'm horrible at it... but I'm forcing myself to memorize it, since I forbid myself to just copy & paste.
I like your comparison to GPS navigation.
I've been using Tabnine for a while now and GPS navigation came to mind too, just a little differently than you.
Every time I drive by the sat nav, I feel like I have no idea which way I'm going. I can't learn the way. If I wanted to drive the same longer route the next day without navigation, I'd probably be completely screwed.
And I feel the same way about using AI completion for programming. It adds a bunch of stuff that I would otherwise write by hand, that I've written a thousand times before... but then I have trouble navigating my own project; I can't remember where things are when I look at it the next day. Even though I structure my code in a similar way, I can't remember where things are, how they are named, etc. That didn't happen to me when I was writing everything by hand.
Thanks for sharing this perspective - this is one of the things I feel like would happen to me as well, I've found this in the past when I've relied more heavily on things like generators - I just have a tougher time making sense of where everything is and how it connects. I feel like that would be compounded by Copilot since it's basically generators taken to the next couple of levels.
We usually put in more effort into resolving blockers. Using AI, the definition of blockers changes some, and a subset of problems are no longer problems... You'll still have to learn to better handle the things that actually become issues.
But, to each their own.
How do you find a 6-digit prime number with no repeated digits, where the difference between the prime before it and the prime after it is 180?
Using Python.
Spot on! We still have the ability to imagine, create, invent... reminds me of that conversation between HAL and the astronaut.
I mostly agree here. I've developed a few AI side projects and something interesting I've noticed is that, despite my constant exposure to AI, I barely ever get the chance to use it.
Another interesting thing I've noticed is that while it *seems* like it would be tempting to use AI for complex problems, I don't often find myself tempted to use it at all. As I write code, I'm always thinking of what the code I'm writing will force me to do down the line, and it's really difficult to predict that when I'm not the one writing the code, exponentially moreso the more important and/or more complex the problem is.
I'm most tempted to use it (and do use it) in situations where I basically already have the code I want in my head, and just want the AI to do the actual writing.
What's the difference of using an AI to write code to importing library x? Most people will simply assume that library x just works, they never look at the code, they never debug it, and they will never learn how it works internally. Following your reasoning, people should stop using libraries and implement std::string themselves.
What a terrible take. It’s more than obvious. Because you can validate the library works and see exactly what happens. There is no hidden black box with it.
@@TheStickofWar Huh? If you use a library, it's more of a black box than the code from an AI. The library is there and you either understand it or not. Nobody will explain it to you. Nobody will change it for you to suit your needs better. The AI will explain the code to you, if you ask, and even adjust it to whatever you desire. The only black box is the AI, which is probably less of a black box than the people who wrote the library. Or do you know what hidden motivation some guy (maybe with an @nsa.gov email) might have had when "improving" some encryption code in a way that you don't understand? Your take on it is also extremely ignorant of factual reality. Nobody validates whether the code from a library works; people will trust everything that works. Most people will never look at a single line of the library, while they are forced to at least copy/paste the AI code to a file and compile it. And trusting code from anonymous people on the internet is worse than trusting code from an AI.
Prompt engineer is the new data scientist.
AI + Low Skills = Improved skills through learning leading to higher skills
AI + High Skills = Substantially improved skills through learning leading to "Taking the Lead"
Incorporating AI and the tools we use to code/analyse will be a way to improve our skills overall I think.
You hit the nail on the head with "do not rely totally on AI", as skills will erode over time.
AI + XX Skills + Discipline, but the stagnation (and even degradation) risk is still there
20+ year developer here: developing programs is simply too complex to understand for the majority of people. It's a job that requires a tremendous amount of focus and creativity, and NOT something you can do with AI.
I do believe that 90% of "so-called" developers will find themselves in a very difficult spot, but they were merely copy-paste people who produced more errors than GPT-n will, so yes... in the day-to-day grind I can only say WATCH OUT for the future, as it will expose your lazy approach to work.
The only reason we have this advantage is that the context window of current models is smaller than the average codebase, including all its dependencies. There's no apparent fundamental limitation here; it's just a matter of hardware improvements. If that's the only advantage we have over AI, then that advantage is not going to last more than a couple of years.
In the near term, these tools will be just like calculators. Are we creating value by understanding, instead of solving? More time should be spent on solving needs, instead of thinking about, is this code the best it could be. Better isn't always better... Beta vs VHS, ATM vs Ethernet... the list goes on. In the end what works? What's available? What's cheaper?
You make an excellent point!!!
Haven't people had this same argument every time a programming language became more abstract? You lose some control when abstraction comes in, but at the end of the day you have to understand the underlying principles and structures of the system you're using. Probably these LLMs will be used for debugging and code-testing purposes, and for the kind of thing you'd just pull from Stack Overflow anyway. You don't ask it to write random code; you feed it your code to provide context, but you still need to specify it very well to get working code. Of course this has to happen locally for any proprietary stuff, but big houses can afford that. If it makes developing code faster and easier, there's a tipping point somewhere for how many mistakes it can make and still be considered economically viable. Correct me if I'm wrong.
The key distinction for me is that it isn't just this higher level of abstraction (or at least it's not a complete one). If a higher level language is introduced, generally you can do all of the tasks you need to do with it (as you mention perhaps with some lesser level of control). But since AI can't do all of the coding for you (at least not yet) then it's kind of like a higher level language being introduced, but for X% of tasks you still need to use the lower level language. I'm positing that if you are mostly working at that higher level, but are then forced to understand what is going on at the lower level, most people will probably have a hard time because they won't have the knowledge/context required for the lower level.
This reminds me of a Matisyahu song in which he sings "I can tell you where to dig and what to dig for, but the digging you must do yourself" in this case the human is deciding what to dig for and where to dig. It's ok if AI does the actual digging. But if the human can't even tell what to dig for and where to dig, then that human isn't providing value.
Agree in general with the points made. But I would argue that using Copilot and GPT-4 or not (or how much to use them) also depends on how much time one wants to spend becoming a really good software developer. If one wants to stay in the field of software development and isn't much interested in other skills, sure. But for generalists, for whom coding is just one of many skills that are awesome to learn for building projects, those AI tools can be even more useful. Personally, I don't plan on becoming a world-class software developer. I just want to quickly write code that works, integrates with the other skills I have, and runs reliably and reasonably safely. Also, at least in my GPT-4-supported workflow so far, it has always been an iterative process where (because of GPT-4) I keep learning about new libraries and ways to write code I wasn't previously aware of. So one still keeps learning more about coding.
This is a great point and something I agree with totally - I think this kind of distinction exists in software developers today already, there are devs more on the "software engineering" side where that is their career and they often work with large/complex codebases. And there are other types of software developers who are mostly coding as a way to achieve their own goals, launch their own projects/startups etc, and larger scale software design concepts aren't really a concern (and I think AI is going to unequivocally be a massive boon for this). The things I am talking about in the video definitely apply much more to the software engineering side of things.
Relying on unreliable AI to write "decent" and "safe" code is an absolutely bad idea for non-experienced programmers. Take me as an example: GPT could somewhat easily convince me about how a certain system works, since it almost always presents text in a simplified and convincing manner. Anyway, reading a Stack Overflow post may be "less productive" (idk why people are like: OmG tHiS wIlL iNcReAsE mY pRoDuCtIviTy. As if most problems with software engineering are related to "productivity" and not to writing absolutely garbage code), but it is much better to dive deep into something and understand it.
@@walidchtioui9328 In the end it's a tool. A very useful tool. But agreed, one shouldn't just blindly copy & paste everything and assume it works perfectly, because it won't. If one can directly test that the code does exactly what it's supposed to, then I don't see much of a problem with it. And there is still a lot of learning involved in that process. But using it to write critical infrastructure... yeah... hell no. Maybe more for trying to find bugs in existing code, and creating more unit tests, for example.
Totally agree with this. Great one as always
I've followed you for a long time but I disagree with this line of thinking. In the traditional way of thinking, doing = learning. But this isn't intentional learning. It's learning tangentially through the work you are doing. And there's nothing wrong with that. But adding a tool like GPT makes it such that when you need to accomplish a task fast, you don't need to also prioritize the learning. You can do that in a separate project/session. Just like any tool, there are places and situations that are ideal for using it, and for not using it. So I personally disagree with a blanket decision to not use GPT in my workflow because there are certainly workflows in which GPT actually might increase my learning by giving me that time back that I can then use to learn. But regardless, this was a good video to watch because it made me think about why and how I use GPT in my workflow. Great content, as always.
Definitely don't disagree with this, I am going with a general blanket not using it for my coding workflow at the moment (as in no Copilot auto-complete, I still chat with my GPT mate separately about some things) but my intent is to re-evaluate this as I go and likely incorporate it to some degree eventually as part of my every day workflow. I'm just preferring to err on the side of underusing it than overusing it at the moment.
@@JoshuaMorony Totally understand that, I suppose I didn't quite understand the full context until you clarified it with your comment. Thanks for the reply and looking forward to more great content from you!
Great video!
Your newsletter recently linked a tweet from Max Lynch about using GPT-4 for Ionic. I used it to create a log-in page and it was not perfect. Debugging the AI output took an hour. Debugging my own log-in page probably would have taken longer. However, you are quite right that we should do the hard work ourselves, because at the very least, debugging AI script is a solely human task.
I agree with you about telling an AI where to go for what. However, what I'd offer up for some thoughtful consideration is the fact that AI will likely cut out the middle-man. If you're programming and want stuff generated for your code, you ask for certain things; but imagine you just ask it for an entire app? It will know the exact context and, most likely, the best way to do it. You have started from A and arrived at B without a middle-man programmer. Which is the ultimate issue with AI: not that it can code better, but that it can replace us.
I also agree that this is a plausible scenario - i.e. it removing the coding step entirely. In that case my point in the video doesn't matter, because you don't need any kind of coding skills if you are operating completely at a higher level (e.g. describing requirements in English).
@@JoshuaMorony It is, indeed. It is also a peculiar situation to be in. Where are we on this curve of AI advancement? Are governments going to step in to classify what exactly humans and AI can each do? Or will it be decided by whichever entity makes the better product - in every aspect?
This is truly concerning for those who are just now getting into the field professionally. Where are their jobs going to go? Are the jobs going to become something different - such as code reviewers? Or will we become obsolete?
Becoming obsolete implies there is a better way. If there is a better way, are we involved in it? If not, what do we do? If we don't need to do anything, then what do we think and feel?
That is my biggest concern, as a member of society. I am completely content programming - I love it - but a love for something doesn't pay you, and something paying you now doesn't mean it will remain your means of survival.
Where are we going to fit in? Are we going to fit in? If not, what are we going to do, and thus, what are we going to think and feel?
@@TW-lt1vr I think these are all great questions - I feel like people tend toward seeing AI as just another advancement, like agriculture, or the industrial revolution, or the typewriter, or the computer (and they might be right, because I don't know any more than they do). My guess is that AI might be fundamentally different, or maybe like 5 or 10 of those jumps all at once. I think probably nobody has the ability to comprehend or predict what this change might look like, because there are just so many factors involved that could all change in such big ways.
Or maybe we reach some sort of plateau with the LLM/GPT approach soon, and we don't see any significant advancements in other avenues of AI research for a while.
@@JoshuaMorony Your individual attention means a lot to me. I greatly appreciate the time you've given to respond to my responses. Thank you! I would absolutely love to see you make a video diving into these possibilities and their respective (probable) consequences, if at all possible. This is such a hot topic that I feel needs more attention: the topic of what we should be asking, thinking, and/or feeling about various probable outcomes from AI.
While I do agree with you, there are a lot of issues with trying to do everything yourself. The first is time - I guess if you're not hurting for money, you're good. For example, I'm using ChatGPT to help me plan out a book. I'm not familiar with the framework of a book in general, and looking at other books just makes my head hurt. On top of that, very few people I know have written anything, so since Google was no help I went to ChatGPT, and it gave me an example of the framework, which I will use in the book's creation.
I have also thought about developing a horror game in the style of Resident Evil using Unreal 5, and the amount of just BS out there on how to do this amazed me. I then looked at hiring out, but most people were out of my price range, so I figured I would at least do the basics, and that's where ChatGPT came in: it gave me an overview and next steps. I'm not bad at explaining things or talking with people in general, yet trying to get things done these days seems to take more than just working hard, if that makes sense?
Isn't that a bit like saying you won't use a calculator because you will lose the skill of multiplication, division and so on? I guess some might avoid calculators for that reason, but most probably do use calculators, even if it means they are no longer practiced at multiplication.
The difference here I think is that AI can't (at least yet) do the entire task of coding for you (and there might be other areas besides coding where this applies too). So I guess it would be like having a calculator that could multiply a lot but not all numbers, so the loss of multiplication skills here would actually matter since it can't be entirely delegated to the calculator.
I think what you are describing is akin to moving from an actual programmer position to a team leader, where you let others do the actual programming. Your programming skills will suffer of course, but as long as you do not have to program again and are solely responsible for the direction of your part of the project - and keep basic knowledge - it is not a problem. I work at a university and usually let my students write the annoying code I don't like to write myself, supervising them through the planning. My Python skills got rusty; nonetheless I got the code and results I wanted. Personally, nowadays, after 10 years of programming all kinds of stuff in C++ and Python, I think programming is a rather boring task and automating it is actually great. I will only dig into PyTorch and co. again to make myself more familiar with state-of-the-art machine learning tools.
That's an interesting analogy - I like that, though I think I take different meaning from it. Moving to a team lead/manager position makes sense, and the people you are managing should be capable of doing all the tasks you need them to do. My concern here is that with AI at the moment, it can't do all the tasks for you - so perhaps it would kind of be like moving to a management position, except your team can only code certain things and so you would still need to step in and do the coding yourself some of the time. In this case, the lack of practice/context from generally not coding on that project might cause a problem.
PyTorch would need a huge refactoring, btw.
On the other hand, the developers that become really good at using AI will be the ones companies hire because they'll be faster. Embrace your robot overlords because that's what companies will do.
You barely need to learn AI at all - it's just "writing"; even a child could do it.
There are only a handful of AI writing techniques, which you can master in about 2 hours, or 8 if you're a slow learner.
Learning a programming language, an algorithm, or an IDE is way harder.
Prompt writing? Dead easy.
You're not as special as you think you are.
AI won't make you special - giving ChatGPT to a monkey won't transform the monkey into a human.
Remember this:
just because someone is hating on AI doesn't mean they didn't use it.
On the contrary, they may be way better than you at both programming and prompt writing.
There is no AI vs. anti-AI -
everyone, hater and supporter alike, uses it.
Don't wear your AI use as a badge of pride; it doesn't make you superior, and it's dorky.
In Russian there is a proverb, "trust, but verify" (доверяй, но проверяй). In this case it means that AI can do work for you (a lot of work), but you still need to understand what is happening there - whether it's BS or not. So you still need knowledge, just as we needed it before.
Sometimes it's easier and faster to adjust something yourself rather than explaining it to the AI. If so, you need readable code, which the AI has to produce correctly in the first place.
I think we will be doing more code review than we did before, but more of the code will be produced by AI. Yes, it might mean that we will need fewer developers and more code reviewers (but still professional ones).
What work do you do that AI can help with? I can't even get it to look over a bash script. I have to explicitly explain a bunch of stuff, or cut out absolutely all noise, for it to have any idea what's going on.
I basically ask it questions that are googleable, and sometimes it's better than the first results. Other times it's junk.
Just like GPS, people choose not to pay attention to what they passed along the way. Jim commented before as well about asking GPT questions about the code, and that makes the most sense to me. I still have code reviews on my team, so at some point multiple humans are going to go through my code. At the same time, I know what you are saying, because I also believe Uncle Bob that devs need to regulate themselves a lot of the time, and I have worked with plenty of people who don't use SOLID, or write tests, or do code reviews. So even if I feel good that I will ask most of these questions and learn the how and why, I think a lot of people won't, and it will instead be a crutch for them.
Correct me if I'm wrong, given I don't know much about how AI coding works, but my understanding from the video is that the concern is that GPT-4 and future versions alike have the potential to make us lazy in our work by relying on them too much, which could reduce our critical thinking skills?
Sort of, the specific concern here is that yes in a sense we could rely on it a lot and lose a lot of context of what is going on and likely some ability to do the task - this isn't really a problem if the AI can do all of the task for us, because we don't need that knowledge/context then. But with the current state of AI assisted coding it can't complete all of the tasks for us, so my thinking is that if we rely on it too much, when we come to those tasks we still need to do we will be lacking a lot of relevant knowledge and context that we have passed off to AI.
I've gone back and forth on predictions for AI, but I have settled on thinking that it will not be a replacement. Coding is about specifying a system, and there are so many subtle decisions that go into it. Any half-decent programmer understands this. English simply isn't a good language for specifying a system precisely, and we need the ability for precise understanding and specification to be able to create quality products. You can't just ask an AI to "create a self-driving car", say, because what does that actually mean? Someone needs to decide on the details, and the best way to understand a system, often, is to actually implement it.
I also wonder about the theoretical ability of AI to even create quality code. How does it know what is quality code and what isn't, if its training data is filled with low-quality code?
I have wanted to create a mobile app for the last 10 years. No one else has made it, so there's probably no demand for something like this. I have a vague idea about programming; the last time I was paid money for programming was 15 years ago.
When I asked devs how much they wanted to create the app, the lowest estimates were around 5000 dollars - and it's a pretty simple app. So I decided to write the app myself using GPT-4 and the programming language I know best: English.
I'm pretty sure I'll get it done in 50 to 60 hours. This means I make/save 5000/60 ≈ 83 dollars per hour. Not a huge amount, but if I had to buy it, I would have to do my day job for 20 to 30 more hours.
Seems like a win.
Sure, right now, you can do this with only simple coding projects. But this too will pass, in a year or three, English will be all you need.
Exactly my thoughts. Thank you for the video!
I use AI to assist me in making software for my company. My advice: take advantage of this profound technology or get left in the dust.
I think the effect is not that you will stagnate, but the opposite. If ChatGPT solves something differently than you expected, you can do your research and learn from that...
I think it's more accurate to say that AI will replace high-level programming languages, just as assembler was replaced.
About the video scripts you turn into articles - I wonder why you don't just write the blog post first and then, when recording the video, simply read the post? I sometimes do that, though I have to change some parts while reading.
I feel like the style (at least my style) for YouTube is just very different, and the algorithm is a lot less forgiving than someone who arrives at your blog post, so I would prefer to optimise the content for YouTube.
Your logic is not convoluted at all - your reasoning is straightforward and accurate. You are entirely correct in stating that relying on AI limits our ability to expand our knowledge through self-learning, as it may slow down the learning process.
Human development was taken for granted because life made it so that you had to make efforts, physical, moral and intellectual.
Machines have long taken most of our physical burden (yes car, I am looking at you), and we generalized sports to counter that.
Machines are going to take most of our intellectual burden, and we have to find a new reason to make efforts and solve problems. Making it enjoyable to learn for instance.
As for the moral battleground, I don't know ...
There's nothing to fear - it's like playing games with cheat codes: yes, you can take advantage while using them, but it will get boring faster. Seeing these tools as assistants, or even as a 24/7 teacher, is a better approach than relying on them to do your job. Like any industrial revolution, the workers must evolve or die. If anyone feels that this will replace their problem-solving skills, it certainly may…
I will start using it for awesome PowerShell scripts.
I don't believe that AI would cause ambitious developers to stagnate. You can always use it to assist in your current job and save time to learn new things. So, it depends on what you want to achieve. If you want to free up more time to play video games during remote work, you will definitely be at risk of burning out quickly.
My approach is to dive all the way in and see what happens :)
I can always brush up rusty skills
My sense is that the more experienced you are, the less danger there is in potential over-reliance on AI, so that's probably a good bet for you to make I reckon - I guess we kind of have diminishing returns the later we are in our career, as we focus on more and more niche things. Maybe there are far greater gains to be had by diving into AI in this case.
It will be interesting to see how it plays out for people more on the beginner end though. Say someone coming out of a coding bootcamp or university, able to be quite productive straight away with AI, but perhaps missing out on some fundamental learning that becomes important later. Or, perhaps not!
My biggest fear about this AI revolution isn't so much about productivity demands or whether AI can replace current developers. I think, at least in the short to medium term, if you're somewhat established in this field, you will be in demand, and you will be using AI to improve your results and productivity.
The problem is precisely new people coming in. Unlike many here, I don't work for an IT company, but instead for small marketing agencies. Code is not their end product; it's a means to an end. They will sell a basic WordPress site or OpenCart store just to add more "value" and grab customers for their actual marketing and design products. They don't require highly specialized devs and don't even want them - they're too expensive, after all. The moment they can justify downsizing their front-end team from, say, 3 people to 1 + AI, they will. And these jobs are some of the best entry-level work available for front-end developers. I can honestly see these advances being extremely damaging for those starting in this field, lowering the job opportunities for getting into it. And then it will just go up the expertise chain from there.
Exactly.
Frankly speaking, I don't even want to hire juniors anymore, because I know 90% of their tasks could be automated or fast-tracked with these tools.
Just carefully review the code that GPT writes - it will become yours, and there will be no problems with making programs.
At the moment ChatGPT is Stack Overflow, but personalized for your problem; it's still far away from solving real-world problems. I can give ChatGPT easy-to-medium problems, and I use it only for that purpose. But real problems need much more context - they are often very deep and require you to think constantly about the infrastructure of the project, which I think ChatGPT forgets after 4-5 queries. I feel I do it the same way ChatGPT does, but the difference in depth of understanding is huge!
I can assure you, I have used GPT-4 to solve multiple _real-world_ problems per day, every day for the past two weeks. And these are coding problems, and they are not simple ones.
GPT-4 has a bigger context window than ChatGPT and less risk of forgetting things.
@@marczhu7473 Still, it's dumb when it comes to specific problems. It's just a good tool for generic questions - the same ones you can answer with Google - and while the answers are sometimes very well adapted, sometimes they just don't work, so it's not comparable to human level. In some respects, like speed and practicality, it's great. I'm not saying bad things about GPT-4 or any other LLM, but what we are dealing with is not something that replaces thinking - it replaces repetitive thinking, with an adaptability that search engines lack. In a few months we will be back to good old human-written algorithms :)
I can see the problem you're explaining, and for sure some will rely on it too much without thinking. However, I do like to solve a problem myself first and then prompt the AI to beat it under different constraints. With that, I have already been able to optimize my code quite a few times. Another use case where it comes in handy is looking up how to use a particular tool in the context of my problem. So essentially finding alternative solutions and generating leetcode-style practice feels pretty good to me.
I agree - in trying to save time, we would lose our thinking capability.
Or we would free up memory (from remembering syntax etc.), in the same way books did (i.e. we don't go around singing/reciting entire epics to remember them anymore; we have them written down). I'm not sure this has made us dumber. I'm sure someone said the same about the calculator ;)
Yeah, it will make things simpler by providing syntax. My worry is that we won't be able to work independently of tools like Copilot if we depend on them for syntax. Anyhow, people have mixed feelings, and that's OK.
I will use AI for all my future coding.
If you won't use it, others will, and they'll outcompete you. If you want to remain relevant, these tools will be essential.
AI can be used for learning as well. If you solve a problem with a solution you already know is good enough, it may be easy to leave it that way and learn nothing. Prompting the AI to refactor it can suggest something brand new to you. It is hard to learn something you don't even know you don't know. Other people can introduce you to such subjects, but so can AI.
The new programming language is English. The skill you’ll need for coding or any other job in the near future is knowing how to use AI to maximize your productivity.
Good observation.
I appreciate your perspective. You worry that relying on AI for coding will affect your coding skills. While that is certainly likely, those skills will diminish in value once programming is much easier and more accessible through AI. Resisting change while the world evolves is risky. If those skills are only worth 10% of what they are now, why wouldn't you just retrain or move up? Be the one who masters AI, so you can focus on what only humans can do, or do better. What we should focus on is interesting and challenging problems, for the satisfaction of it - but there is the complication of how to pay the bills.
OH my god, you're from Adelaide???!!!
I hope you are too, because I put Adelaide references in my video every now and then on the off chance someone else from Adelaide sees it! (we never get mentioned lol)
@@JoshuaMorony I was born in Whyalla and live(d) close to the Adelaide Oval for many years. I became a software developer at TAFESA😜. Right now I'm in Europe for a while, working as a full-stack Angular/.NET Core dev, and I am constantly learning good stuff from you. So good to hear that you're from Adelaide!! My god! Your work on YouTube is amazing. Keep it up Joshua
@@joeyvico Nice! and thanks for the kind words!
I think it's about time software engineers started working at a higher abstraction level, rather than rebuilding microservices and UIs in new frameworks over and over again.
Idk bro, I used GPS every day to get back and forth to work. Then one day I didn't need it. Still used it, just didn't need it. I see much of the same here.
The sooner AI learns to output binary or some bytecode that implements a given task, the better. As long as it's correct, safe, and reasonably fast, it does not matter how it's produced. Coding skills will be deemed useless and, like any other skill in history that became useless, will go away. And that's a good thing. That does not mean nobody will code anymore, though. Coding skills and programming languages are just tools that help us while we don't have access to competent AI. When that happens, everything programming-related will go away... forever. I personally can't wait.
I use ChatGPT for things that are not programming.
I know how to write code, not how to fix printers.
I kind of disagree with your thesis on the issue. Just because there are caveats to using GPT-4 as a powerful coding tool (the ones you mention) doesn't mean it can't be used wisely in a way that mitigates those caveats. With the time saved creating a piece of code from scratch, one could audit and review the code to give it a good quality check and make sure it satisfies the coding standard. The time savings would also allow one to learn about any techniques one is not familiar with. Also, with experience you can decide which parts of the code GPT-4 would be good at, and which parts it isn't, and do those completely from scratch. At this stage, I think it's fair to say GPT-4 is useful to any project, if used wisely. And if it isn't, just wait for GPT-5 or GPT-6. It's clear where things are headed, so everyone had best get on board.
Your ending argument explained that you should use the AI in areas that won't affect your quality. So it's an argument for limited AI use rather than what the title implies (that you shouldn't use it).
Yes I agree with that - my position for the video isn't that it should never be used, the intent with the title is basically to juxtapose the concept of this thing being clearly amazing, and then highlighting reasons why you shouldn't use it (not that you should never use it). But yes, "when you shouldn't use it" would perhaps be more accurate
Don't forget to comment your code and make documentation, then you'll be good 👍
Ask yourself why you even code in the first place. Why do you need to work? Imagine 10 years from now, when the prompter just asks the AI to make the full software he needs, given the hardware specs and infrastructure available.
I think this becomes a pseudo-problem, especially when you use AI both to be more productive and to learn new things you didn't know how to do - so you learn more and you also produce more.
And yes, there are people who don't use it to learn, just to do the work.
I do agree with your concerns about integrating AI into your everyday coding workflow. The temptation to rely on AI to solve complex problems might indeed hinder one's ability to learn and grow as a software developer. Your analogy with GPS navigation was an interesting way to explain this concept.
I think it's essential to strike a balance between utilizing AI for productivity gains and maintaining our skills as developers. Perhaps we can use AI to help with mundane tasks or as a reference when we're stuck, but still focus on honing our problem-solving skills and understanding the broader context of our projects.
Thanks for sharing your thoughts on this topic. It's crucial that we, as a community, discuss the implications of AI integration in our work and carefully consider the trade-offs. Looking forward to more content like this! Keep up the great work. 👍
Definitely agree a balance can be struck, and as I mentioned in the video I think people like ThePrimeagen are doing that well. For me, at least for now, it seems easier to just mostly not use it (or use ChatGPT where I specifically have to go out of my way to use it, it's not integrated into my workflow). Maybe we can have self enforced "AI budgets" where we can only use it a limited time each day like restricting the time we spend on social media lol
It can't solve simple problems, let alone complex ones. I tried using it; it just gives basic Google search results without a hint of understanding.
But the argument can also be made that human coding is getting worse and worse because we're choosing to write "neat" code instead of "good" code... so how would this be any different?
And unrelated to that, I feel like tools like ChatGPT and Copilot would be a lot more helpful to me, personally, if they would just "line complete" for me so I have to type less... and, when a method is complete, give me a little Clippy popup going "hey, instead of that try/finally thing you did there, you should just use try() instead".
So it's more like a "copilot" than actually doing the job for you - analyzing the code as you go and suggesting improvements that you will probably have to make later when you refactor anyway...
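For reference, here's a rough sketch of the kind of refactor such a popup might suggest (in Java, since try() reads like try-with-resources; the class, method names, and file-reading scenario are made up for illustration):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ClippyHint {
    // What I'd typically write by hand: manual try/finally cleanup.
    static String firstLineBefore(String path) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(path));
        try {
            return reader.readLine();
        } finally {
            reader.close(); // easy to forget without the finally block
        }
    }

    // What the hypothetical popup would suggest: try-with-resources,
    // which closes the reader automatically when the block exits.
    static String firstLineAfter(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        }
    }
}
```

Same behavior either way; the suggestion just trades boilerplate for a construct that can't leak the resource.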
Copilot X is way better, which makes sense since it's built on GPT-4 to begin with. It shouldn't be the pilot of your code adventures unless you want it to be - which is what Copilot X allows.
Which are those areas where humans still have an advantage over the bots?
I bet using AI to solve problems will have its own unique quirks, and you may use the same AI to optimize the code, or to learn what is best for a given application and why. Some knowledge might be lost, but much more will be brought to the table. I don't really agree, but time will tell whether we will degrade by using AI or become experts in way more fields. Could be both at the same time.
It's basically a debate between using your human brain vs. AGI.
Good point indeed! I'm worried about the possibility that junior/entry-level developers will go extinct due to corporate "spending optimizations". Learning to write code is one thing; learning to build systems, especially large enterprise systems, is something completely different, and that requires work in the field and gaining knowledge through experience. If juniors are no longer hired, or very few of them are, our profession will die in the next several decades due to natural causes: a lack of new people practicing it.