The fact that AI can't help junior developers should tip you off that even with AI, a normal person cannot write production-level code. That's a massive blow to anyone thinking software devs will get replaced.
One LLM doesn't prove anything. Copilot sucks. It really does. I'm a software dev with a good amount of experience. Using Claude, I've developed small applications without hand-writing any code, just by describing the problems clearly. Small problems, like having it build UI controls that have been built a billion times, it gets right the first time, every time. The thing about that is, if you're doing a good job writing code, you have a lot of little black boxes that don't have a ton of connections to other parts of the software.

Obviously that doesn't describe all the types of problems that need solving, and as stuff gets more complicated, hands-off auto-generation ceases to be a thing. At that point it becomes more of a way to jumpstart a problem. Where it REALLY fails, however, is architecture, which requires systems thinking that LLMs are not all that good at yet. That's compounded by the fact that the context size of LLMs is limited, and for projects that hit real-world needs you simply can't fit it all in the context.

Even so, you can still speed up code generation if you're using the tools intelligently. But again, Copilot sucks; it consistently gives weak and wrong answers. I've only used those and local versions of Llama, which I haven't really put through their paces, so I don't know how ChatGPT stacks up.
@@the_derpler Maybe it's the way you describe it. I have no issue working in Rust with Claude, and Rust is more annoying than C and has fewer examples and libraries out there compared to C.
@@RealTwiner Even then, I don't think that will ever happen. There's no way AI can reach the point where it understands context and architecture at as high a level as a human. We're talking about replicating the power of the human brain. We simply do not have the power to create that.
The only people who think AI can replace programmers are people who don't code and don't understand the intricacies of software development. They see AI spit out some gobbledygook code and think it's magic, because they don't even know what they are looking at. Those of us who have used AI know that it's more of a calculator for programmers than anything else; we understand that the controversy surrounding AI is just venture-capital-driven hype.
I'm a programmer, and I think Copilot is very unhelpful; even chatting with ChatGPT about high-level architecture is questionable, because it sends me down the wrong path very often. Even still, I think it's plausible that AI could replace programmers pretty soon. They can *almost* write useful code, and they are improving quite rapidly.
He says he finds symmetry. I mean, highlighting everything is also symmetrical. But I mostly listen rather than watch, so I didn't even know until he mentioned it.
As a German dev, my impression is that Germans are a lot more skeptical when it comes to the promises of new technologies, and a little bull-headed about doing things themselves. Sometimes that preserves and deepens meaningful craftsmanship; sometimes it leads to trailing behind the state of the art.
State of the art is overrated. The U.S. was the first to do a lot of things, and good for us. But because we were state of the art back then, now we're locked into older infrastructure and it's harder to move to what is considered state of the art now.
@@Jabberwockybird Someone's got to blaze the trail. Europe wouldn't use 230V (it still varies) power if they hadn't learned from widespread electrification in the US.
On one hand you have offices that don't get anything done because the workflows were developed for typewriters and don't make sense for computer systems. On the other hand, and I'm being dramatic here, the reason our education system still works at all is that people with dumb ideas haven't been able to tear it apart yet. Generally there are many, many things that need to be fixed or optimised in any given institution, but you also can't allow the wrong people to sell you on stupid things. Actually recognising the helpful opinions from the right people seems to be the real problem, especially when the ones who have the say on the topic hate to hear them.
Don't you guys think that if AI COULD replace developers, it could replace all the other jobs, like HR departments, managers, all these other jobs where people just produce "text" and "talk"? I mean, developers translate real-world models into logical statements that a computer can run. If you can automate that, you're going to automate literally everything. So people who cheer that developers will be replaced should really worry about their own positions first.
The big difference I can see is that you can run tests on code to see if it is objectively working, or right/wrong. Many office jobs don't have those well-defined boundaries, i.e. there is a lot of unquantifiable bullshit that creates a moat of mystification around the role. That makes them harder to automate.
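To make the contrast concrete, here's a trivial sketch (the function and its tests are made up) of the objective pass/fail boundary code has and most office work lacks:

```python
# A made-up function with tests: the code either passes or it doesn't,
# which is the kind of objective boundary the comment describes.
def slugify(title: str) -> str:
    """Turn a title into a lowercase, dash-separated slug."""
    return "-".join(title.lower().split())

def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  spaced   out ") == "spaced-out"

if __name__ == "__main__":
    test_slugify()
    print("all tests passed")
```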
@@calmhorizons I absolutely agree about the unquantifiable nature of their work. But I would argue that LLMs are currently BEST at doing and understanding exactly this unquantifiable work. IMO LLMs are less good at mathematical and hard logical tasks. For sure, human knowledge is necessary to get the job done RIGHT; this is where LLMs are dubious - we never know if they are generating the right output unless we test it well enough. So I believe that IF LLMs one day get perfect at "writing code", that would surely cover the non-tech jobs as well. I mean, an important step in programming is requirements engineering, which is a task that can only be done by a human at the moment. Imagine an LLM that listens to a client's description and yields a perfect requirements report. That is as hard as the unquantifiable bullshit, I think. Overall, I think we are going to adopt LLMs more and more in every task we do - and end up checking and correcting their output and losing time that way :P
@@calmhorizons Yeah, that's like the opposite of true. If you're writing code that can be defined in black and white, you're making a notepad app. Highly intelligent engineers spend years debating foundational architecture for a reason. And if you think that's just dev-side stuff that doesn't provide any value to the business, then I'm convinced you only see a codebase through a Jira board.
This was interesting. You described my situation to a 'T'. I'm about to graduate, but throughout my program we've been encouraged to always use ChatGPT and Copilot. One night driving home, I got hit with this feeling of existential dread: I'm going to graduate knowing all the words and phrases, but I'm not going to know how to code. That Copilot pause. You couldn't have hit the nail more on the head.
You can solve this fairly easily. Set yourself some small tasks to code, with clear and defined bounds. Practice that every day. Do it without AI. Soon you won't even have to think about the easy stuff, like taking IO from the user, reading a file, etc.
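As an illustration, a minimal sketch of one such practice task (the task itself is made up): take input from the user, read a file, and report something about it.

```python
# Practice task: user IO plus file reading, written without AI help.
from pathlib import Path

def count_word(path: str, word: str) -> int:
    """Count case-insensitive whole-word occurrences of `word` in a file."""
    text = Path(path).read_text(encoding="utf-8")
    return text.lower().split().count(word.lower())

if __name__ == "__main__":
    filename = input("File to search: ")
    word = input("Word to count: ")
    print(f"{word!r} appears {count_word(filename, word)} time(s)")
```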
How about, after getting your answer from ChatGPT, actually typing it out instead of copy-pasting? The process of typing it out makes you think through what you are doing, and helps develop fluency and muscle memory.
Fantastic hot take. 24 years under my belt as a programmer. The exposure and repetition build the muscle memory and mental models needed to almost feel a program. You know what would work and what wouldn't. You know how code being contributed will have an impact, and whether it's going to move things forward or hold them back. It's doing the work. It can't be fast-tracked. So fall in love with the journey and you'll enjoy each new challenge.
I don't know if Prime reads these, but often I don't like videos because my goddamn TV just closes them as soon as they are done, and then I can't like them anymore. So either I have to like videos before I've watched them completely, or I have to find the video again to like it, which is a pain. This is why so many big channels have a song or something else that is always the same at the end of their videos.
@@ThePrimeTimeagen Yeah, YouTube doesn't really consider the user-experience differences between watching on a computer and watching on a TV. Having an outro makes a big difference for TV viewers.
Just like it during the long sign-off... agen. Or disable autoplay on your TV, and it should just sit at the end of the video and wait for you to choose the next one.
@@georgehelyar My TV does (after playing some ads) stay at the end of the video if I'm not on a playlist, but there is no way to like the video on that screen on my TV. Your TV's player might be different.
It's worth noting that Germany has incredibly strict overtime and holiday rules, and staff have the legal right to ignore work communications outside of work hours unless they're being paid to be on call. A quick Google search tells me that Brazil also has laws stating that mandatory overtime is a maximum of 2 hours per day and must be paid at time and a half, and that it also grants 30 days of paid vacation per year. I wouldn't be surprised if the differences between the listed countries in terms of perceived impact of AI on their code quality came from some countries having programmers who are working 60-80 hour weeks (not counting whatever they get asked to do while they're at home) and maybe getting as many as two weeks of holiday time, and other countries having programmers working 40-hour weeks, not being contacted outside of work hours, and getting plenty of paid holiday time. I know if I were working 60+ hours per week in some corporate programming gig, my code quality would be absolute dogshit, because tired programmers write dogshit code.
Also: language barrier. These tools are optimized for communication in English. Quality decreases significantly once you use a different language than English while working with these tools.
There might be laws to protect against overtime, but there might also be a lot of people who willingly don't pursue the protection of such laws - willingly working overtime without proper remuneration. It happens for different reasons, depending on the company and the position.
@@Vindolin In that case you've worked in *very* different places than I have. From state agencies to insurance companies to banks to wholesale to medium-sized engineering companies, there are plenty of places where *everything* related to business and technology rules is specified in German, using very specific industry- and company-internal terminology that is distinctly not found in most public dictionaries. Source: 30 years of work experience in Germany and the UK in the above industries.
@@totalermist So far I have mainly worked in web agencies and thank god everyone there programmed in English. Of course, you sometimes stumble across completely untranslatable German words that mess up the whole naming of your database schema and symbols :)
The amount of bugs seems like a very hard thing to measure. I believe that most codebases have tons and tons of bugs in them that the companies don't even know about. So you could get the same kind of effects as you get in crime statistics, where numbers sometimes go up not because there is actually more crime, but because more of it is discovered/reported.
That, and there's not really an easy way to quantify the "seriousness" of a bug. You could reference a wrong variable somewhere, or you could introduce a memory leak in some very specific, hard-to-notice edge case. Technically both of those could count as "1 bug".
Using Copilot to write code feels like explaining what code I want over Slack to a person who has like 2 months of programming practice, and trying to get them to write it for me instead. In the amount of time you spend giving paragraphs of clarifying details and correcting mistakes, you could just write the code yourself many times over.
It is very useful if you are doing something that only needs those 2 months of experience, like your random bash script or magic sed/awk command. If you come across a Tcl file in the depths of your ancient codebase, copy-pasting it into an LLM can save you tons of time asking what the foobar an "lindex" is, or discovering that "[string totitle $baz]" is a command that returns the string in baz with its first letter capitalized. Even if you RTFM and man up, searching manually takes just as much time when you just need some flags. I think to use LLMs well, you need to honestly try to scrutinize what every line of code it generates does. That way, you can make those edits yourself during the read-through, and you won't have to ask the LLM the second time you have to do it.
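For anyone hitting the same Tcl wall, a rough sketch of what those two idioms do, expressed in Python (the sample data is made up; note that Tcl's `string totitle` only title-cases the first character of the string):

```python
# Rough Python equivalents of the Tcl idioms mentioned above
# (hypothetical data, just to show what the commands do).
baz = "hello tcl world"
words = ["alpha", "beta", "gamma"]

# Tcl: [lindex $words 1] indexes into a list.
second = words[1]  # "beta"

# Tcl: [string totitle $baz] upper-cases the first character and
# lower-cases the rest, much like str.capitalize().
titled = baz.capitalize()  # "Hello tcl world"

print(second, titled)
```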
In my experience it really doesn't save time, because any time savings are offset by subtle errors it can generate that take time to debug. Until it can generate code that is trustworthy and correct, it is not worth the headache.
LLMs like Copilot don't care about structure or facts; they care about the statistical correlation of text snippets. Copilot can't reliably generate functional code. You would need a different neural network, one that also weighs the validity of outputs or understands logic logically, in order to do what you ask. Which is cool, but that's gonna take another half century.
The most annoying part is that those Copilot ramblings take precedence over suggestions from an LSP that are based on actual type information. To be fair, I don't even expect my own code to be trustworthy and correct, though that could also be fully attributed to using TypeScript. I expect even less from an LLM that keeps imagining enum members and non-existent keys because 'it reads nice'.
You are not using it right if it's not saving you time. A simple win is auto completing predefined schemas. It's like finding Excel autocomplete to not be useful 😅
@@ThePlayerOfGames This is missing the forest for the trees. They care about the statistical correlation of text snippets *with your prompt*. The thing that statistically correlates most with prompts asking "implement X" is *an actual implementation* of X, and LLMs internally develop complex structures to model the mechanisms that get there. Now, obviously they can still be bad. But they're not just outputting whatever is *in general* most strongly correlated, but rather what is most strongly correlated with the given prompt. I asked ChatGPT to write me a for loop in C++ where the index variable is "o", and it correctly wrote it, even though there are millions upon millions of examples of the index variable being "i" and probably two at best of it actually *being* `int o`.
I think it takes a lot of experience working with the tools. Over time you get a good feel for things it will do well and things it will fuck up completely.
My experience with coding assistants:
- Wasted time reviewing generated stuff. It usually comes with errors/hallucinations. To be fair, I think this is expected, since it is trying to "guess" what you need, but it breaks my concentration. So it's just faster to write it myself.
- Most of the boilerplate it eliminates was already baked into IDEs a long time ago, so it becomes kinda redundant.
- Junior developers using code assistants is a mistake IMO. I've also seen colleagues talking about their companies banning said tools for non-senior developers. So much time is wasted during code review because these developers don't even understand what the generated code does. And don't get me started on how many times I've reviewed code that doesn't even work.

I've used AI to help me learn some stuff outside of work. But I take everything with a grain of salt and always double-check the provided information. But as a coding assistant? Eh, not really worth the hassle. Maybe I'm getting old lol.
I don't understand this logic. Should we ban Google then, too? At the end of the day, people who are good engineers will be trying to understand what they are outputting…
@@MRM.98 I think the difference is that when you google and then code it yourself, you are applying knowledge (which deepens understanding and memory of that coding topic - more so than reading code). But with AI-generated code you're just reading what it outputted; you aren't in the same thought patterns of trying to implement something in code (be it wholly your own code, or integrating something you copy-pasted, which still requires some level of active thought). So, in my opinion, you don't improve your coding skill as much when using AI - you can read a textbook (or AI-generated code) as much as you want, but to get truly good at a subject, you need to practice applying it.
I'm responsible for reviewing code before it goes to production. The amount of code I have to review since AI became a thing is insane. Right now I just check whether the code may break something outside its own scope or has security issues; otherwise, "LGTM". Unless I see someone put real effort into the code - then I'll provide good feedback. What I'm expecting is for the problems to pile up to the point where the AI can no longer solve them, and for developers to realize they need to use their actual brains. Their contributions are localized by design, with multiple guards in place, so I'm not as concerned about their code impacting the overall production environment.
@@cypher_302 That's a fair assessment. However, my original point is that bad engineers are bad engineers; don't blame the tool. I think you can learn effectively with AI by simply asking for further clarifications or explanations. It can also be good at pointing you in directions you may not have thought of before. But obviously you need to understand its limitations. So I do agree with you that it can be detrimental if someone isn't actively trying to learn when using it. That applies to really any tool.
15:20 Here's one big mentality difference between US-Americans and Germans: the former figure things out as they go; the latter figure things out first and then go. That's a pretty fundamental cultural difference, one you can even notice during vacations. But obviously this is a generalization, so you will always meet people who aren't like this.
I disagree. I can't speak to Germans, but I assure you Americans don't figure things out as they go. They make assumptions, believe they have figured it out, and go around proud of themselves for it.
@@brettshearerme The lexeme “literally” has both a literal meaning and a separate non-literal meaning in the modern North American English dialect. Please don’t literally pretend that the latter doesn’t exist.
I've been using it for 3 years (so even before the ChatGPT and LLM hype) in exactly that way, and I don't know why people have been saying otherwise, since autocomplete on steroids is already extremely useful.
I think Copilot increases developer satisfaction because, when they encounter a stupid bug, they can go "OMG, this AI thing is so dumb!" instead of "OMG, I'm so dumb!" Feels better.
AI is great for a senior dev who really knows what he is doing. But for a junior? No. A junior doesn't have the comprehension/experience to know when AI produces good code and when it produces garbage.
Yesterday I saw a comment on some video about AI from a person saying they were using AI to make a mobile app WITHOUT knowing programming or even the syntax 💀 If that's not bad enough, that person also said that AIs can replace programmers.
Bruh, I'm a junior dev, and I successfully used Claude Sonnet 3 to create a full-stack audio visualizer website with SvelteKit that can run without internet after the user first visits it, embedded into a Rust Axum backend as one binary. I also created a Rust audio visualizer graphics and computation library, compiled it to WebAssembly, and imported it into the SvelteKit app. The result is an amazing audio visualizer with very fast, smooth, and responsive animation. No lag hehe. I also created an Android APK in Kotlin - just a keyboard, but not a simple keyboard: a full-featured keyboard that has themes, can ask an AI directly from the keyboard, can translate using AI from the keyboard, and has a Ctrl key. It's the first time I've coded Kotlin, bruh.
Then some bright person in sales suggests they just tell customers to expect a wider material thickness tolerance on their sheets. You may have ordered 11ga, but you'll never know which parts of your sheet are actually 14ga or 3/16". Better yet, have a customer service person that stonewalls any hard questions and never admits to any process change on their end at all.
Don't you think it's shocking that it works 60 percent of the time? It shows that there is more potential there. No one reasonable is saying that AI is replacing programmers now, but over a large enough time frame it seems very likely.
Going from an average of 10 defects per 1000 feet of rolled steel to 14 defects per 1000 isn't that big of a jump. Even less so in an industry like coding, where, if the average were 10, the existing variability of a given dev would be something like 5 to 50 per 1000 lines of code.
That section about errors is spot on. I work with a group of devs who won't, or can't, read the error messages. They ask for my help, and I see the error message and the problem spelled out for them, but they just can't see it. And it's not just coding; it's the same when a software installation fails. Maybe I do it because I coded in C++ for part of my early career?
That's been driving me crazy for years. It doesn't matter how clear you make the errors and how easy it is to look up fixes. Most people just don't read them.
It's useful for stuff like "generate compose YAML for this and that" and giving you a quickstart, nothing more... definitely not worth the hype, but it can save time.
@@zhacks_admin Yeah, mindless stuff. If you start actually letting it write business logic for you, then you'll quickly find yourself unable to be productive without an AI... and the code is trash, so you and your colleagues will pay for it later fixing/debugging stuff...
@@nicolaska1761 I tried pretty much all the AIs a couple of months ago when the hype was off the charts. I'm not sure how much has changed, but what I said applies to all of them: they do not actually reason. If you are doing work that has tons and tons of example code out there, then you might mistakenly think it can reason and give you good responses, but it really can't; it's a sophisticated autocomplete that had enough data (not that our brains aren't). For UI and very popular codebases it may be useful, but I still wouldn't let it write actual business logic for me... that's a very quick way to degrade your skills and get yourself replaced.
Front-end wise, it has been helping me a lot with repetitive stuff. It definitely doesn't work well if you don't understand what Copilot is suggesting, or if you use something very opinionated like Tailwind.
My satisfaction comes from not having to remember stupid random JavaScript functions, and not having to search and accept cookies just to find out it was all BS.
Thank you for challenging facts vs. opinion & feelings! And then there are flawed surveys, a lack of clarity around who responded, and misrepresentation of results...
I started programming with JavaScript and Python. AI has been helpful for understanding how to do similar things in other languages, getting a brief overview of better practices or flows in different languages and tools, and finding other services to leverage. When GPT went public, I said it felt like googling a question and then having the LLM parse the results for you. Happy to report that still feels pretty accurate. A good productivity tool for a competent dev, a terrible crutch for the naive junior.
And this is the problem. It feels like googling the question, but it is fundamentally different. Googling (at least a few years back) gave you an index of pages with potential answers to your query, which you then sifted through to find an answer written by a human. Now you get an out-of-context summation of an average that may or may not be accurate, which itself sums up and averages dozens and dozens of other AI-generated messes. And if it can be so absurdly wrong as to suggest eating rocks, do you think you will notice when it is just subtly, but confidently, wrong?
23:45 Hey Prime, I think what dev satisfaction might capture is the feeling that when you're writing and/or deleting a lot of code, it *feels* like you're doing a lot of work, whereas if you're stumped on a particularly hard problem and just writing and rewriting potential solutions (especially if they don't work), then you feel much worse, like you're spinning your wheels, even though you are learning and becoming better overall.
Yes! I for sure experience this. I once spent almost an entire day trying to reproduce a CI failure, until I eventually found out the CI was running sh instead of bash, while I had bash in my local environment. I felt really stupid, but I learned a lesson that day xD
@@lordkekz4 At one of my first jobs, problems like that were a very common issue - mostly different lib versions, but sometimes entire packages. If I cannot develop directly on the production server or a dev clone, I like to run as close to the same distro/package set/config as possible locally. It can be the main install, a multi-boot, or a VM.
And how much better would they be if, instead of playing a guitar game, they had played a real guitar for the same number of hours? The analogy is not that you cannot learn and get better using Copilot or Guitar Hero; it states that you would be considerably better if you just did the "real thing".
@marcola80 I agree with you; the answer to my question is none - no one learns to play guitar by playing a game. It's like playing Surgeon Simulator and thinking you can perform surgery. Those are totally different skills, and playing a game instead of practicing the actual skill will not bring you any further. 🦾
@DemiSupremi "MC Lars, is an American record producer, rapper, cartoonist, podcaster and educator. He is one of the self-proclaimed originators of "lit-hop",[4] and is the founder and CEO of the independent record label Horris Records." Man did a lot more than just play guitar hero.
My experience with using AI in coding is that I don't use it directly. I use it as a search-engine replacement for querying documentation. I explicitly programmed my LLM client to tell ChatGPT not to put code samples in its output. I run my LLM client in a separate tmux window I have to tab into. If I am learning new high-level concepts, I try to have a dialogue with it after reading some Wikipedia articles on the subject. I ask questions like "My understanding is [technology] does [thing] through [methods] - is this correct?" or "When you say [term], is that similar to [analogous concept]?" I think it forces me to think about prompts, ask questions, and then parse the responses. It removes the sort-through-search-engine-results part of research. IDK, anyone have any critical feedback on this approach? I am open to being wrong.
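A minimal sketch of that kind of client, assuming the OpenAI Python SDK (the model name and system-prompt wording here are illustrative, not the commenter's actual setup):

```python
# Docs-querying client that asks the model to answer in prose only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a documentation assistant. Answer in prose only. "
    "Do not include code samples in your output."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("My understanding is TCP does flow control through "
              "sliding windows - is this correct?"))
```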
Same. It's also a really good "fuzzy match" search engine when I know how to describe the thing I am doing, but can't recall the technical term or pattern directly. It then gets me started in the right direction, and I can go look at official docs to make sure I do it right.
The researchers who conduct studies, and the people who are making links that they want you to click on, have entirely different objectives.
I am using Copilot, and I think it's great at writing plumbing code, but yes, you need to read the code it generates carefully (especially the part that executes the main logic). That's fine, since reading code is much faster than writing it. Junior developers, on the other hand, will benefit little from Copilot, exactly because they will tend to rely blindly on the generated code and won't know which part the LLM got wrong - because LLMs do make mistakes, and often.
When it comes to AI and coding, I find that if you ever need to prompt the AI to write code for you, you're not doing it right. The time taken to understand, review, and bug-fix an AI's work should not be underestimated. AI is more useful for enhancing your own skills, not replacing them. Therefore, I limit my AI use to a locally hosted model for autocomplete on crack.
8:00 That comment on founders is spot on. I have a family member who was an architect and loved it, but realized that to make any money he'd have to become a principal and/or start his own company, and then he would end up no longer doing what he loved.
I founded a company and realized that even though "I can Excel", I can't be the business guy. Mainly because I'm an introvert, and cold-calling people I'd met once to ask for something took half a day of amping up, and the rest of the day doing something pleasant to calm down. A real business guy makes a dozen calls before I've dialed the first one. In fact, that's how he calms down after something unpleasant.
This is me. Multiple times it has generated code that looks fine at first glance, but then I would waste hours debugging because of that exact piece of code. So I just turn it off by default now, and I have a keyboard shortcut to turn it back on temporarily to generate a small snippet I don't want to write. It's good for really simple stuff sometimes, but don't keep it on at all times.
Every time I use Copilot or GPT, the flow is the same:
1. Provide the problem.
2. Get a wrong solution.
3. Fight with GPT to get a proper solution.
4. Run the code and it fails.
5. Say fk it, and write it myself.
6. Use GPT for mail content and Jira messages.
I'm learning to program in x86 assembly, and I recently ran into an issue understanding why a specific program loop functioned the way it did. I spent about two hours chatting back and forth with Google Gemini. It did a decent job of explaining the different steps to me, but in the end it took looking up some Intel manuals, as well as a university cheat sheet, for that lightbulb moment to go off. Now that I've passed that hurdle, it's starting to become a lot easier. Nothing beats "doing the hard work" and really wrestling with a problem until you actually understand it.
I have yet to find an AI that can explain the ancient assembler routine for converting a byte to a hex digit (add 90h, daa, adc 40h, daa). On a good day they don't mess up the arithmetic completely.
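For the curious, here is what that routine does - a minimal Python emulation of the sequence (my own sketch; DAA is simplified to the cases these inputs can actually hit):

```python
# Emulates ADD AL,90h / DAA / ADC AL,40h / DAA for a nibble in 0..15,
# the classic 8086 trick for converting a nibble to an ASCII hex digit.
def nibble_to_hex_ascii(n: int) -> str:
    assert 0 <= n <= 15

    al = n + 0x90               # ADD AL, 90h
    cf = al > 0xFF
    al &= 0xFF

    # DAA (simplified; AF omitted, it never triggers for these inputs)
    if (al & 0x0F) > 9:
        al = (al + 0x06) & 0xFF
    if al > 0x99 or cf:
        al = (al + 0x60) & 0xFF
        cf = True

    al = al + 0x40 + (1 if cf else 0)   # ADC AL, 40h
    cf = al > 0xFF
    al &= 0xFF

    # Second DAA folds the result into '0'-'9' / 'A'-'F'
    if (al & 0x0F) > 9:
        al = (al + 0x06) & 0xFF
    if al > 0x99 or cf:
        al = (al + 0x60) & 0xFF

    return chr(al)

print("".join(nibble_to_hex_ascii(n) for n in range(16)))
# -> 0123456789ABCDEF
```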
Regarding how people use playlists instead of subscribing: there are actually a bunch of channels I really enjoy, but I don't subscribe because the YouTube algorithm already pushes them enough, and I don't want the stuff I've actually subscribed to to get drowned out even more than it is already.
Subscribing doesn't mean you receive all notifications. That's only if you click the notification bell. Otherwise, I think you'd find that it doesn't really affect your recommendations much, if at all, relative to a channel you already watch frequently.
My experience is that when I'm fixing a bug or making a small modification to existing code, copilot is very helpful. But when I'm starting something from scratch (so there's less context), copilot can introduce subtle bugs if I try to go too fast. Which makes sense; it's a generative AI, so it needs context to push it in the right direction. Also, it's really good at finishing code from online examples (to the point that I have to turn it off when I'm trying to follow along in a getting started doc or a book).
As a senior dev it absolutely works and absolutely saves me a lot of time. But you have to be a good coder regardless and use it as a tool to increase your productivity. You can't just expect it to make the code for you and have the code be of high quality as-is.
I've been a software engineer for over 7 years and have been programming for around 15 years. I've worked across all levels: chips, embedded systems, drivers, and OS-based software (at this moment, I have multiple virtual machines of different OSes I'm publishing to). Copilot doesn't help me much. It often slows me down, because the code it generates rarely follows SOLID principles. I mean, at first glance it always seems solid, but if you really start making it SOLID in context, it very often has details that shouldn't even matter. So... I mainly use it for autocompleting comments and basic, repetitive code. For tasks that require deep understanding, like working with chips and embedded systems, Copilot isn't useful at all from a whole-implementation perspective. Recently, I used it to recreate a product I had already built. With all the context in place, if my original product was a 7/10, the AI's version was only a 4/10. In my experience, fixing the AI's output to reach a 7 takes just as much effort as building it properly from scratch using SOLID principles. Not trying to undermine your experience or anything; I'm just sharing what value it had for me, as another senior dev.
Having used it for roughly 2 years now, it does feel like it is getting worse. But it is handy for repetitive or small-scope stuff. When it tries to generate a big block, it will often be wrong. I wish I could limit it to only prompting me for small-scope stuff. I'm relying on it less and less.
Copilot is like Guitar Hero: more people think they can play guitar, when in fact, when handed a real guitar, they can't. So in the end more junior coders are attempting to code more complex things, and they think they've done it right, when in fact they've introduced more bugs.
I mean, this study is pretty obvious, even though 800 people is not really a statistically significant sample. You would need many studies over a longer period of time to truly understand the impact of AI on programming and code quality.

My hunch, which is of course anecdotal, is that it really depends on the domain, and that for the most part AI is pretty good at helping out with boilerplate and other repetitive operations, but it's not great at helping you build something that is new/unique. People seem to forget that AI is limited by the data it's trained on, as well as by the quality of the prompts it's being fed. A junior developer without much experience is unlikely to understand the details they might need to look out for, and they likely cannot use the AI as efficiently as somebody who fully understands the problem they're trying to solve. Also, while AI might be able to help with general development because of the sheer amount of data it was trained on, it probably can't help you with something much more niche.
I agree with not using feelings as measurements in most cases, but it's worth noting that in some cases it's necessary. For example, studies on how people feel when they have a specific illness (or rather, what they would usually report) are the closest you're going to be able to get when trying to gather diagnostic data for medical triage.
I find that it's really only useful when I know exactly what I want and it's just a more sophisticated tab-complete, or when I'm so tired/migrained that I can't put together coherent code, so that I can make some progress even on bad days.
@@triplea657aaa I have migraines too, so I kind of get what you mean (I think). If you can stop your short-term "task memory" from completely resetting during a bad migraine day, it at least makes it easier to continue the next morning.
This might be a very dense read of what you said, but feelings are very important in things outside of quantitative data, and even in analysing quantitative data, feelings are important for contextualising it. Sorry if this is a bit nitpicky.
Cursor's boosted Copilot-style autocomplete does the thing 10X better! I've been subbed for 4 months now; it's very good to have Claude 3.5 with your codebase, custom docs, and websites in context, plus quick inline fixes.
AI coding assistants are a fantastically useful tool for young and ambitious mid-level programmers who want to climb the corporate ladder. They allow them to mock up cardboard cutouts of systems in a week and demonstrate something resembling a system to upper management. They can get their hit of praise for minimal effort, and be seen as the truly 100x engineer their parents always wished them to become. Since these sorts of people get the most say in many organisations, the overall message coming out of the industry as a whole is that AI is the future.
I learn with AI too, but as a better Google. You don't need to find that one keyword to search on Google; you can just ask. But at work, to get the job done fast, I use it... sometimes.
The single biggest benefit of AI for me has been helping with my RSI, especially while re-learning to type on an ergo keyboard. It's great for filling in boilerplate for function signatures and stubbing things out; then I turn it off when it comes to the actual implementation.
The question isn't if coding assistants will increase the productivity of current developers. The question is how much not having them is going to decrease the productivity of developers from the post-coding assistant era.
I never used AI assistant tools. I simply prefer to ejaculate code by myself rather than having to check whatever hallucination the AI comes up with. And it's easier and faster for me. For one, not using Copilot allows me to enter a flow state, where I know exactly what I am doing and am able to produce code (which mostly works, ignoring the stupid mistakes everyone makes when coding lol). And also, waiting 1 second for text to appear is just stupid lol. In 1 second, you could type 1 or 2 words at least.
As a junior-to-mid dev, I do use AI - mostly when I've been stuck on something for more than half an hour, Stack Overflow research included. First I ask general questions about how to do the general thing I'm trying to do, and see if that helps me understand. Then I add more context and ask questions, trying to understand better. The last thing I do is give it my code (anonymized if need be; I change names and variables to generic ones) and ask questions specific to my code. Usually all these steps help me understand better and give me the answer I was looking for. I record it in a kind of journal so it's available for next time and to hammer it into my head.
Working on a side project gives you time to switch gears mentally, and it also has less pressure because there's probably no deadline. Working on work means you haven't switched off and have just powered through. Which is fine now and then to get something solved, but when it becomes the norm, and you have no time to walk away, and the deadline is looming, it all comes crushing in on you...
I basically hate it in my IDE, but I'm fine with it as a chat, mostly because I usually go to it when I know what output I want and can then just quickly read it to make sure it's right. E.g., today I just wanted it to write a bash loop that imported certain files and such, and I keep forgetting bash syntax, so it's something I can easily check logic-wise, and running it does the syntax check. Would I ask it to generate more than 10-20 lines of code? Not really; its context sucks at that point. Other than that, I'd probably type something like os.read and see what the autocomplete gives me as an option.
I try to like every video I watch as soon as I start watching it, because I have ADHD and I will forget to do it after watching, even if I don't use a playlist (I hate playlists and autoplay). The main reason is that YouTube seems to reset the watch-time indicator (that red "progress bar" line on the bottom of the thumbnails) after about one year, and then the Algorithm keeps trying to push videos I've already watched over and over.

A few years ago I was recommended a video from one of the channels I watch everything from, and I was surprised that the lack of a progress bar indicated that I hadn't watched it yet; but then everything felt weirdly familiar, and when I scrolled down I realised I had already liked the video, despite supposedly not having watched it according to the indicator on the thumbnail. I've had weird deja vu feelings watching videos before too, but back when I didn't compulsively "like" every video, I had no indication of having watched one unless I remembered (and I watch thousands of videos every year, so I can't remember every video; besides, they often change the title and thumbnail).

So now I "like" every video once I start watching it; if I really hate the video, I change it to a dislike. It's only if I start watching on my phone (or some other device without tabs) and realise I don't have time to watch it all (which is always the case when I realise the video I started watching on the toilet is 30+ minutes) that I save it to "watch later" and "unlike" it. That way I know that if there is a like on the video, I've either watched it all or it's already half-watched in an open tab on my computer.
I'm not really sure 41% is even as high as it's going to get. So many newbies are leaning on AI, and schools are permitting its use in education, that I worry that too significant a portion of the people entering the workforce and writing public-facing code will cause not only a major increase in bugs but also a surge in incompetence, feeding extra trash back into the models.
Oh man, you don't even know. I'd wager that at least half of my uni class in computer engineering wouldn't know how to write a fizzbuzz if they didn't have ChatGPT to hold their hand. The worst part is having people like that in a team assignment. I guess those were the people who used to just write the reports and documentation, but now they're actively ruining the codebase the rest are working on.
@ Unfortunately, I'm wholly familiar with how bleak it's gotten. I'm a private tutor in programming and in for my BS in CS as well. Both my classmates and my students just expect answers on tap, and it's crazy that basically nobody's doing anything about it.
LLMs are powerful tools, like past innovations. Coding errors existed long before AI, though no studies were done to compare. These models boost productivity but require skilled use - bugs persist, and the tools for addressing them have simply evolved.
I think this is probably only true if programming is a significant part of your job (Which is presumably what you meant, I just felt like uh, I felt like it would be funny to say this even though it probably isn’t. Still going to hit send.)
At the risk of sounding rude to the author, which I do not intend: is it just me, or does that article read like an ad for Uplevel?

Also, I am honestly glad I took the time to learn C++ error messages. They're actually pretty "nice". They give you somewhat of a decent insight into all the things the compiler is trying, especially with overload resolution.

Like, I don't know what to do or feel about AI in programming. Should I be "scared" that it will get exponentially better and replace me, and keep up-skilling (somehow)? Some say it's foolish to be scared. Others say AI isn't good and anyone fearing it isn't a good dev, for reasons like "if you can think, you are better than AI" and "you fear what you don't understand; if you understood it, you'd see it isn't going to replace you", and so on. But others say that not internalizing that AI will replace programmers is "coping", just like horse-carriages were eventually overtaken (lol) by cars. And both sides seem equally loud. So I'm here, just trying to do my thing, using AI for non-coding things.
23:15 Prime straight out of Taken: "What I do have are a very particular set of skills, skills I have acquired over a very long career, skills that make me a nightmare for people like you"
16:24 I'm one of that 3%. I have been learning coding for 6 years, I have 2 years of commercial experience, and I have no drive to test this. I've had the experience of reviewing guys' code that was generated by AI, and it was distasteful: the amount of bugs, the poor design, the poor quality - they used really weird libraries, like ast in Python for transforming dicts. Quite frankly, I'm disgusted. They lied to me many times that it wasn't AI, and I was forced to review AI output every day and act like nothing happened, because my management doesn't care. I never want to see this shit again in my life.
Stability is rooted very deep within German culture in many areas. I think that's what makes it efficient long-term once something is running well, but it also really slows things down when it comes to adapting to future issues. Things rarely change here unless there's a direct trigger (e.g. regulation changing after an accident).
True. I train AI to do coding as a side job almost every day. The code generated by AI is still full of bugs. It takes hours to fix the code generated by the model in just one conversation (one with only 2-3 prompts).
I don't like this language "41%" Remamber this one contraception drug that gave one person in a million side-effects, and then there were headlines like "the new and improved version has a 100% higher rate of side effects" This language is missleading. What is the base line? What is the deffinition of a bug? etc...
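A quick arithmetic sketch of the base-rate point (the numbers are made up, echoing the rolled-steel comment above):

```python
# A relative increase like "41% more bugs" means nothing without a baseline.
baseline = 10          # bugs per 1000 lines (made-up figure)
increase = 0.41

print(baseline * (1 + increase))       # 14.1 per 1000: modest in absolute terms

rare_baseline = 0.001  # one bug per million lines (made-up figure)
print(rare_baseline * (1 + increase))  # 0.00141: the same 41%, near-invisible
```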
The funny thing is that AI could probably replace the CEOs before the programmers. Someone tell the shareholders that; suddenly CEOs will feel what the "little guy" feels.
Oh, satisfaction - you nailed it! I love to play Escape from Tarkov (EFT), and the same thing happened there: they added a PvE mode, and basically I'm nowhere near as good as the great streamers. But since that mode lets you play vs. AI only, it basically allows you to feel really good, like you are on that level, and then you have their equipment - you get the feeling you are at that level. INTERESTING
Whenever I get AI-created code suggested I basically cannot trust it, and have to quadruple-check everything. It's like having a junior whom I cannot trust to actually improve or do the thing we need.
Most programmers are smart and might prefer not to disclose to their project managers that they're leveraging AI to boost their productivity. This could be to manage expectations or to avoid being assigned additional tasks
It enrages me so fucking much when I am reviewing code and I see a snippet that is just incomprehensible for something so basic, and I realize that the bullshit developer asked ChatGPT for code that he doesn't understand and just copy-pasted it into the project. THIS IS HAPPENING ALL THE FUCKING TIME.
I really like coding too; what I don't like is mapping 200 fields of business logic specified in an Excel spreadsheet between an incoming CSV, the internal app model, and multiple database tables. AI can do that in 5 minutes, and I take 15 minutes to test it. Without AI it would take me more than a day of tedious and very error-prone work. So yes - AI, when used right, prevents burnout for people who like to code and design but hate the part that's thankless busywork.
This is underappreciated. I do a lot of work in SQL, and AI has saved me hours by doing a lot of the repetitive typing you inevitably have to do when working between different layers of an ETL/ELT architecture: very similar tables, views, stored procs, etc. that you constantly have to go back to again and again, to tweak some typo or Ctrl-Find-Replace with convoluted regex. LLMs are good when I teach one the pattern and then have it replicate the process, so I can just run my test scripts at the end. The simple fact is most developers are not getting paid because they are innovating amazing new solutions or inventing algorithms; they get paid because the average person has no patience for programming or technical architecture.
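As a sketch of how mechanical that pattern-replication is (the table and column names below are hypothetical), the same kind of boilerplate can even be stamped out by a short script once the pattern is explicit:

```python
# Generate a staging-to-target SELECT list from a column spec, so only
# the spec needs hand-review. Names below are hypothetical.
columns = [
    ("order_id",   "INT"),
    ("order_date", "DATE"),
    ("amount",     "DECIMAL(10,2)"),
]

select_list = ",\n    ".join(
    f"CAST(src.{name} AS {sql_type}) AS {name}" for name, sql_type in columns
)
print(f"SELECT\n    {select_list}\nFROM staging.orders AS src;")
```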
I find AI tools really good at generating commit messages, especially when the changes are fairly small. You can adjust the prompt to fit your commit style as well. It won't work well if your style requires saying why lines are changing rather than what is changing (it will often get the motivation wrong due to missing the bigger picture). Another OK use is generating simple tests based on an existing one to cover more corner cases, or giving it a well-defined class/function and having it generate a complete test suite. You obviously need to go through the test cases and fix the errors, but I find it still saves time.
I want to support Prime's idea about outside-of-work programming. Matt "Creator of the Parker Square" Parker calls it "recreational maths": the concept of interacting with maths in your free time, for fun and interest. I think the same can be said about programming. "Recreational programming" can happen even for us fellow developers. The work environment (incl. the language you use) is different enough that interacting with code from a different perspective can still be enjoyable. My company will not switch to functional programming in the next 10 years, but this does not mean I can't have fun with Elixir or Elm on my personal projects.
LLMs by definition work not by saying the right things, but by saying things that sound right - or, in this case, producing code that looks correct. A perfect way to generate hard-to-find bugs.
Copilot mileage really varies. I have dev buddies who hate on it. I love it for the PHP Laravel stack. I don't rely on it to write code, but to autocomplete what I was going to write anyway, if that makes sense. There are some rare occasions where I am doing something unfamiliar, and I will ask it to do X, Y, Z, and it really does it right most of the time. I think it's really a net benefit for me.
To be honest, I was learning to code on CS50x. After completing that, I moved on to CS50G, which is no longer an active course. I used Copilot to help me understand the code we were given in the course, as the videos are too in-depth, and I then tried to adjust the code to get an understanding of what each section was actually doing, and found errors that needed fixing. With Copilot's help I managed to make it worse, fix it, make it worse, fix it, before finally understanding what the code needed and deleting all the Copilot stuff I had implemented. My code finally works as intended, and the process gave me a deeper understanding of what was actually going wrong. I keep telling my partner: I don't understand why they think Copilot is good; I spend most of my days arguing simple logic that it gets wrong xD
OK, here's my relationship with AI (namely ChatGPT, nothing else). If I want to learn new stuff, I read books, and sometimes I get stuck on topics that I don't fully understand, so I ask the AI to explain them to me. As I don't like to rely on third-party libraries, I write everything I need myself, and AI really accelerates that by providing and summarizing documentation, since it was trained on that data too. Most of the time, in books, there is a bunch of questions after each chapter, and I tell the AI what the question is and provide my answer in the same prompt. This has really helped me learn more efficiently. However, the AI's lack of creative and critical thinking makes it unsuitable for producing production-level code.
The AI plugins are useful to help you write boilerplate code, but if you release that without checking it and making sure it does what you want, you deserve everything you get. The secret is to use it to write small, discrete chunks instead of entire pages of code
I've been going to school for programming, and one thing I've noticed myself doing is wanting to write everything out myself, specifically thinking about how I want the code in a function to work - effectively a dump of my mental state onto the program. Being in the middle of learning to code, the mental structure of how a project should run is being built in real time, instead of being fully known beforehand like it is for a veteran coder, whose mental model probably thinks in whole segments and how each relates to the rest. So when Copilot does the thinking-in-segments bit, those junior devs or whoever don't realize how those segments fit into the full structure of everything, and they have to go back and read the entire structure to finally get it. It might be a good learning tool, by example and testing, but for full work like that, it probably won't make things more efficient, because those people still need to learn what exactly they need.
7:00 There is another factor to consider. Jira sprints are planned and time-boxed. Most developers who finish their sprint early are NOT going to go looking for more work. IF Copilot makes devs go faster, but their sprint says they're done, they aren't going to go faster. And the code they just made with Copilot sucks 10 times as much.
There are ways to collect good data through questionnaires, but they need a bunch of data to estimate how much people are lying to you, which is very hard in any field that hasn't already had a ton of study. I still prefer the 'analyse landfill contents' methods of figuring out how people behave, rather than asking them. I'd love to see an analysis of abandoned GitHub projects, mapping Copilot content put into them. Knowing whether it's rookie devs or big teams getting frustrated, whether the issues with GPT spam have tipped the scales on maintaining projects, and so on could make for a really interesting analysis.
The thing I always keep coming back to when considering AI coding assistants is the situation where you're working on something so proprietary or otherwise secret (like a government contract) that you are not allowed to paste any code into an LLM for fear of having that code stolen. How many modern programmers would simply crumble instantly? When you can't use any AI to help you, are you still able to find solutions to problems by yourself?

Also, the only code that LLMs generally have access to is public, which means a majority of it is amateur grade. Most proper professional-grade code is private and not accessible to LLMs. So whenever you ask an LLM for help, it's very likely you'll get an amateur's implementation. Most of my experience is in video game development, and some of the code for Unity that LLMs spit out is like something straight from a baby's first gamedev tutorial, where it's obvious the person who wrote it had no idea how to program a game for expandability, performance, or readability - stuff that will immediately break if you need to add anything related to it.
For me, AI saves time by decreasing aimless link-clicking, especially when I have no clue what to search for. It's good for boilerplate and for introductions to patterns, solutions, and even algos - just filling the potholes on my road.
It's already been said that it depends on the experience of the developer. I would say I am an experienced SQL developer and an intermediate TypeScript developer. When coding in TypeScript, I ask Copilot for suggestions for what I require and then ask a more experienced TypeScript developer on our team to review them, to ensure they're good rather than blindly trusting Copilot. For SQL, I make changes to Copilot's code if I think it would be better done a different way, or fact-check it for performance, because some SQL can cause performance issues. Our team uses Copilot, and its suggestions seem to reduce code-completion time, therefore saving some time overall.
We have been doing interviews at my work all week for a few positions. It's amazing how many people now have years of experience as software developers but cannot even code a simple fizzbuzz when asked. It's really depressing.
Yeah. The quality of candidates had been steadily dropping even before ChatGPT. Some can do leetcode-type questions, because that's what they think people want, so they train on those. But even then, you can tell they don't really get it, and with any deviation from that, they buckle. All I want is enthusiasm for the field, some creativity, some fundamentals, and potential to grow. I'm not looking for additional baggage to carry.
I think the hard part about this is that Scrum teams won't notice, or will actually like it, because this way they are pushing PRs and pumping metrics at a higher rate. All they care about is metrics, so with faster PRs and more bugs to fix (and therefore more PRs), they will be happy and write it off as a win.
@@RealTwiner I describe my problems clearly every day, with C :)
@@my_online_logs yeah, nobody cares about your hobby project, we're talking about the real world here.
It has saved humans a lot of hours processing JSON, but I don't think it will ever pay back the cost of training. EVER!
The calculator comparison is apt. I often use the same analogy.
Tbf there are a lot of "developers" that are basically glorified copy pasters.
Ok? But this isn't about replacing programmers. It's about programmers using it as a tool, and it not working as it should.
I really need to know why Prime refuses to highlight the first and last character of a given paragraph. There has to be a reason, right?
He thinks that's cool.
@@nikarmotte Cool! It doesn't bother me or anything, it's just something I always notice
Because there's a chance, depending on website formatting, that you could select entire paragraphs/pages/unrelated text/ads.
That way he won't accidentally highlight the entire text on the page
Using AI to write your code is basically pair programming with a junior developer, but without helping someone get better in their career.
Well in a weirdly roundabout way you are helping it get better
@@kesky6363 Not really. If you use the shitty code it gives you to write more shitty code, you just make it worse
It's more like pair programming with a really smart goldfish that pops hallucinogens.
💯
That is available 24/7 and has infinite patience
As a fellow German, I think this is partially a cultural thing. German people just really like stability and predictable, reliable outcomes.
@@Jabberwockybird Someone's got to blaze the trail. Europe wouldn't use 230V (it still varies) power if they hadn't learned from widespread electrification in the US
On one hand you have offices that don't get anything done because the workflows were developed for typewriters and don't make sense for computer systems.
On the other hand, and I'm being dramatic here, the reason our education system still works at all is that people with dumb ideas haven't been able to tear it apart yet.
Generally there are many, many things that need to be fixed or optimised in any given institution, but you also can't let the wrong people sell you on stupid things. Actually recognising the helpful opinions from the right people seems to be the real problem, especially when the ones who have the say on the topic hate to hear them.
Americans always give us legit reasons to dislike them, right? I mean, they legitimately believe the sh!t they say. It's so pathetic.
The days pass and people start to realize that the whole "AI will substitute dev workers" thing was just another excuse to fire people.
Exactly.
No one wants an excuse to fire someone who actually produces; that is what a worker is
I think it's really about excusing the many millions and billions invested into something that will not have the same return... "AI"
waiting for the pendulum swing
Not many places fired people because of AI
Don't you guys think that if AI COULD replace developers, then it COULD replace all the other jobs, like HR departments, managers, all these other jobs that just produce "text" and "talk"? I mean, developers translate real-world models into logical statements that a computer can run. If you can automate that, you're going to automate literally everything. So people who cheer that developers will be replaced should really worry about their own positions first.
The big difference I can see is that you can run tests on code to see if it is objectively working, or right / wrong. Many office jobs don't have those well-defined boundaries i.e. there is a lot of unquantifiable bullshit that creates a moat of mystification around the role. Makes them harder to automate.
@@calmhorizons I absolutely agree about the unquantifiable nature of their work. But I would argue that LLMs are currently BEST at doing & understanding exactly this kind of unquantifiable work. IMO LLMs are less good at mathematical and hard logical tasks.
For sure, human knowledge is necessary to get the job done RIGHT; this is where LLMs are dubious - we never know if they are generating the right output unless we test it well enough. So I believe that IF LLMs one day get perfect at "writing code", that would surely cover the non-tech jobs as well. I mean, an important step in programming is requirements engineering, which is a task that at the moment can only be done by a human. Imagine an LLM listening to a client's description and yielding a perfect requirements report. That is at least as hard as the unquantifiable bullshit, I think.
Overall I think we are going to adopt LLMs more and more in every task we do - and end up checking & correcting their output and losing time that way :P
This. It's funny how the narrative is always about devs but never about other departments.
@@calmhorizons Yeah, that's like the opposite of true. If you're writing code that can be defined in black and white, you're making a notepad app. Highly intelligent engineers spend years debating foundational architecture for a reason.
And if you think that's just dev-side stuff that doesn't provide any value to the business, then I'm convinced you only see a codebase through a Jira board.
AI would make a good CEO replacement.
This was interesting. You described my situation to a ‘T’.
I'm about to graduate, but through my program we've been encouraged to always use ChatGPT and Copilot.
One night driving home I got hit with this feeling of existential dread. I’m going to graduate knowing all the words and phrases, but I’m not going to know how to code.
That co-pilot pause. You couldn’t have hit the nail more on the head.
I think it's ridiculous that some universities promote the use of CoPilot and allow it in exams. That seems incredibly counterproductive to me.
You can solve this fairly easily.
Set yourself some small tasks to code, with clear and defined bounds. Practice that every day. Do it without AI.
Soon you won't even have to think about the easy stuff, like taking IO from the user, reading a file, etc.
How about this: after getting your answer from ChatGPT, actually type it out instead of copy-pasting. The process of typing it out makes you think through what you are doing, and helps develop fluency and muscle memory.
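For example, one of those small, clearly bounded daily drills might look like this (just an illustrative sketch, not anyone's curriculum):

```python
# Tiny daily drill: take IO from the user, read a file, print numbered lines.
path = input("File to show: ")
try:
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f, start=1):
            print(f"{i:4}  {line.rstrip()}")
except FileNotFoundError:
    print(f"No such file: {path}")
```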
What school are you attending? I'd like to make a note of it so I can never hire anyone who went there.
Fantastic hot take. 24 years under my belt as a programmer. The exposure and repetition build the muscle memory and mental models needed to almost feel a program. You know what would work and what wouldn't. You know how contributed code will land, and whether it's going to move things forward or set them back.
It’s doing the work. It can’t be fast tracked. So fall in love with the journey and you’ll enjoy each new challenge.
I don't know if Prime reads these, but often I don't like videos because my goddamn TV just closes them as soon as they are done, and then I can't like them anymore. So either I have to like videos before I've watched them completely, or I have to find the video again to like it, which is a pain. This is why so many big channels have a song or something else that is always the same at the end of their videos.
oh. that is wild
@@ThePrimeTimeagen yeah, YouTube doesn't really consider the user experience differences between watching on a computer and watching on a TV. Having an outro makes a big difference for TV viewers.
Also in playlists (e.g. Watch Later). YouTube just skips to the next video, not giving you a chance to like it or read comments.
Just like it during the long sign off ... agen
Or disable auto play on your TV and it should just sit on the end of the video and wait for you to choose the next one
@@georgehelyar My tv does (after playing some ads) stay at the end of the video if I'm not on a playlist, but there is no way to like the video on that screen on my tv. Your tv's player might be different.
It's worth noting that Germany has incredibly strict overtime and holiday rules, and staff have the legal right to ignore work communications outside of work hours unless they're being paid to be on call. A quick google search tells me that Brazil also has laws stating that mandatory overtime is a maximum of 2 hours per day, and must be paid at time and a half, and also has 30 days of paid vacation per year. I wouldn't be surprised if the differences between the listed countries in terms of perceived impact of AI on their code quality came from some countries having programmers who are working 60-80 hour weeks (not counting whatever they get asked to do while they're at home) and maybe getting as many as two weeks of holiday time, and other countries having programmers working 40 hour weeks, not being contacted outside of work hours, and getting plenty of paid holiday time.
I know if I were working 60+ hours per week in some corporate programming gig, my code quality would be absolute dogshit, because tired programmers write dogshit code.
Also: language barrier. These tools are optimized for communication in English. Quality decreases significantly once you use a different language than English while working with these tools.
There might be laws protecting against overtime, but there are also a lot of people who willingly don't pursue that protection and work overtime without proper remuneration. It happens for different reasons, depending on the company and the position.
@@totalermist I've never met a German developer who codes in German or talks German to copilot. Source: I'm German.
@@Vindolin In that case you've worked in *very* different places than I have. From state agencies to insurance companies to banks to wholesale to medium-sized engineering companies, there are plenty of places where *everything* related to business and technology rules is specified in German, using very specific industry- and company-internal terminology that is distinctly not found in most public dictionaries. Source: 30 years of work experience in Germany and the UK in the above industries.
@@totalermist So far I have mainly worked in web agencies and thank god everyone there programmed in English. Of course, you sometimes stumble across completely untranslatable German words that mess up the whole naming of your database schema and symbols :)
The number of bugs seems like a very hard thing to measure. I believe most codebases have tons and tons of bugs in them that the companies don't even know about. So you could get the same kind of effects you see in crime statistics, where numbers sometimes go up not because there are actually more bugs, but because more of them are discovered/reported.
That was also my first thought. How the hell do you measure that at all? Counting commit messages with "fix" in them?
Also, how many times is something reported as a bug when it's actually just something that wasn't accounted for? We've all been there...
That, and there's not really an easy way to quantify "seriousness" of a bug. Like you could reference a wrong variable somewhere or you could introduce a memory leak in some very specific hard to notice edge case. Technically both of those could be "1 bug"
They probably used tools like SonarQube.
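For what it's worth, the "count the fix commits" metric joked about above fits in a dozen lines, which says a lot about how soft such numbers can be. A crude sketch, assuming a local git checkout:

```python
# Count commits whose message mentions "fix" (case-insensitive), per month.
# A blunt instrument: it counts typo fixes and misses silent bug fixes alike.
import subprocess
from collections import Counter

log = subprocess.run(
    ["git", "log", "--grep=fix", "-i", "--date=format:%Y-%m", "--pretty=%ad"],
    capture_output=True, text=True, check=True,
).stdout.split()

for month, n in sorted(Counter(log).items()):
    print(month, n)
```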
Yeah, we know that AI isn't going to take over software development
But this study is definitely misleading so 🤧
Using copilot to write code feels like explaining what code I want to write over slack to a person who has like 2 months of programming practice and trying to get them to write it for me instead. In the amount of time you spend giving paragraphs of clarifying details and correcting mistakes you could just write the code yourself many times over.
It is very useful if you are doing something that only needs those 2 months of experience, in some random bash script or magic sed/awk command. If you come across a TCL file in the depths of your ancient codebase, copy-pasting into an LLM can save you tons of time asking what the foobar an "lindex" is, or discovering that "[string totitle $baz]" is a call that returns the string in baz with the first letter capitalized. Even if you RTFM and man up, searching manually takes just as much time when you just need some flags. I think to use LLMs well, you need to honestly scrutinize what every line of generated code does. That way, you can make those edits yourself during the read-through, and you don't have to ask the LLM the second time you have to do it.
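For readers who don't speak TCL, the two idioms from that comment map roughly onto Python like this (illustrative only):

```python
# TCL: [lindex $lst 2]         -- get the element at index 2 of a list
# TCL: [string totitle $baz]   -- capitalize the first letter, lowercase the rest
lst = ["alpha", "beta", "gamma"]
baz = "hELLO"

print(lst[2])             # gamma
print(baz.capitalize())   # Hello (rough equivalent of string totitle)
```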
I put your videos in my Watch Later playlist so I can listen to them as a podcast. It's really great, thanks for that. I subbed.
In my experience it really doesn't save time, because any time savings are offset by the subtle errors it can generate, which take time to debug. Until it can generate code that is trustworthy and correct, it is not worth the headache.
LLMs like copilot don't care about structure or facts, they care about the statistical correlation of text snippets.
Copilot can't generate functional code; you'd need a different neural network that also weighs the validity of outputs, or actually understands logic, in order to do what you ask
Which is cool, but that's gonna take another half century
The most annoying part is that those Copilot ramblings take precedence over suggestions from an LSP that are based on actual type information. To be fair, I don't even expect my own code to be trustworthy and correct, though that could also be fully attributed to using TypeScript. I expect even less from an LLM that keeps imagining enum members and non-existent keys because 'it reads nice'.
You are not using it right if it's not saving you time. A simple win is auto completing predefined schemas. It's like finding Excel autocomplete to not be useful 😅
@@ThePlayerOfGames This is missing the forest for the trees. They care about the statistical correlation of text snippets *with your prompt*. The thing that statistically correlates most with prompts asking "implement X" is *an actual implementation* of X, and LLMs internally develop complex structures to model the mechanisms that get there. Now, obviously they can still be bad. But they aren't just outputting whatever is *in general* most strongly correlated, but rather what is most strongly correlated with the given prompt.
I asked ChatGPT to write me a for loop in C++ where the index variable is "o", and it correctly wrote it, even though there are millions upon millions of examples of the index variable being "i" and probably like two at best of it actually *being* `int o` .
I think it takes a lot of experience working with the tools.
Over time you get a good feel for things it will do well and things it will fuck up completely.
My experience with coding assistants:
- Wasted time reviewing generated stuff. Usually it comes with errors/hallucinations. To be fair, I think this is expected, since it is trying to "guess" what you need, but it breaks my concentration. So it's just faster to write it myself.
- Most of the boilerplate it eliminates was already baked in IDEs for a long time, so it becomes kinda redundant.
- Junior developers using code assistants is a mistake IMO. I've also seen some colleagues talking about their companies banning said tools for non-senior developers. So much time is wasted during code review because these developers don't even understand what the generated code does. And don't get me started on how many times I reviewed code that doesn't even work.
I've used AI to help me learn some stuff outside of work. But I take everything with a grain of salt and always double-check the provided information. But as coding assistants? Eh, not really worth the hassle. Maybe I'm getting old lol.
I don't understand this logic. Should we ban Google too, then? At the end of the day, people who are good engineers will be trying to understand what it is outputting…
Exactly my experience. These AIs have annoyed me more than they've helped.
@@MRM.98 I think the difference is, when googling and then coding yourself, you are applying knowledge (which deepens understanding and memory of that coding topic - more so than reading code). But with AI generated code you're just reading the code it outputted, you aren't in the same thought patterns of trying to implement something in code (be it wholly your code, or implementing something that you copy-pasted, which still requires some level of active thought).
So, in my opinion, you aren't improving your coding skill as much when using AI - you can read a textbook/(AI-generated code) as much as you want, but to get truly good at a subject, you need to practice applying it.
I'm responsible for reviewing code before it goes to production. The amount of code I have to review since AI became a thing is insane. Right now I just focus on whether the code may break something outside its own scope or has security issues; otherwise, "LGTM". Only when I see someone put real effort into the code will I provide proper feedback.
What I'm expecting is for the problems to pile up to the point where the AI can no longer solve them, and for developers to realize they need to use their actual brains. Their contributions are localized by design, with multiple guards in place, so I'm not as concerned about their code impacting the overall production environment.
@@cypher_302 That's a fair assessment. However, my original point is that bad engineers are bad engineers. Don't blame the tool. I think you can effectively learn utilizing AI by simply asking for further clarifications or explanations. It can also be good at pointing you in directions you may not have thought of before. But obviously you need to understand its limitations. So I do agree with you that it can be detrimental if someone isn't actively trying to learn when using it. This applies to really any tool.
15:20 Here's one big mentality difference between US-Americans and Germans:
The former figures things out as they go.
The latter figures things out first and then goes.
That's a pretty fundamental cultural difference which you can even notice during vacations.
But obviously this is a generalization, so you will always meet people who aren't like this.
I disagree. I can't speak to Germans, but I assure you Americans don't figure things out as they go. They make assumptions, believe they've figured it out, and go around proud of themselves for that.
it’s literally autocomplete on steroids and we should treat it as such
how does autocomplete literally take steroids?
@@brettshearerme The lexeme “literally” has both a literal meaning and a separate non-literal meaning in the modern North American English dialect.
Please don’t literally pretend that the latter doesn’t exist.
I've been using it for 3 years (so even before the ChatGPT and LLM hype) exactly that way, and I don't know why people have been saying otherwise ever since
Autocomplete on steroids is already extremely useful
I think Copilot increases developer satisfaction because when they encounter a stupid bug they can go, "OMG, this AI thing is so dumb!" instead of "OMG, I'm so dumb!" Feels better.
AI is great for a senior dev who really knows what he is doing. But for a junior? No, a junior doesn't have the comprehension/experience to know when AI produces good code and when it produces garbage
AI is great for CRUD-app developers. It won't work for anything remotely complex.
Yesterday I saw a comment on some video about AI, and it was a person saying that he was using AI to make a mobile app WITHOUT knowing programming or even the syntax 💀
If that's not bad enough, that person also said that AIs can replace programmers.
@@Master120 bruh I'm a junior dev and I successfully used Claude Sonnet 3 to create a fullstack audio visualizer website in SvelteKit that can run without internet after the user first visits it, embedded into a Rust Axum backend as one binary, plus a Rust audio-visualizer graphics and computation library compiled to WebAssembly and imported into the SvelteKit app. The result is an amazing audio visualizer with very fast, smooth, responsive animation. No lag hehe. I also made an Android APK in Kotlin: not a simple keyboard, but a full-featured keyboard with themes that can ask an AI directly from the keyboard, translate using AI from the keyboard, and has a ctrl key. It's the first time I've coded Kotlin bruh
@@Master120 yeah, but... that person probably doesn't have a job as a software developer, nor will they ever have one
Imagine, for example, a steel industry that invents a new rolling machine that introduces only 41% more defects into the product, and everyone is just OK with it.
just goes to show how little anyone knows what they're doing, especially in management
Then some bright person in sales suggests they just tell customers to expect a wider material thickness tolerance on their sheets. You may have ordered 11ga, but you'll never know which parts of your sheet are actually 14ga or 3/16".
Better yet, have a customer service person that stonewalls any hard questions and never admits to any process change on their end at all.
Don't you think it's shocking that it works 60 percent of the time? It shows that there is more potential there. No one reasonable is saying that AI is replacing programmers now, but over a large enough time frame it seems very likely
Going from an average of 10 defects per 1000 feet of rolled steel to 14 defects per 1000 isn't that big of a jump. Even less so in an industry like coding where if the average was 10 the existing variability of a given dev would be like 5 to 50 per 1000 lines of code.
@@natzos6372 you will still need people who can articulate the problem correctly, will be able to read and understand the code and deploy it.
That section about errors is spot on. I work with a group of devs who won't, or can't, read the error messages. They ask for my help, and I see the error message, and the problem is spelled out for them, but they just can't see it. And it's not just coding; it's the same when a software installation fails. Maybe I do it because I coded in C++ early in my career?
That's been driving me crazy for years. It doesn't matter how clear you make the errors or how easy it is to look up fixes. Most people just don't read them.
It's useful for shit like "generate compose yaml for this and that" and giving you a quickstart.
Nothing more... definitely not worth the hype, but it can save time.
..and for things like writing backend schemas/models, and data typings.
You should try Claude; it's miles ahead of Copilot in terms of both good practices and code trust. You can give it a lot of context as well
@@zhacks_admin yeah, mindless stuff. If you start actually letting it write business logic for you, then you'll quickly find yourself unable to be productive without an AI... and the code is trash, so you and your colleagues will pay for it later, fixing/debugging stuff...
@@nicolaska1761 I tried pretty much all the AIs a couple of months ago when the hype was off the charts; not sure how much has changed since, but what I said applies to all of them. They do not actually reason.
If you are doing work that has tons and tons of example code out there, then you might mistakenly think it can reason and give you good responses, but it really can't; it's a sophisticated autocomplete that had enough data (not that our brains aren't)... For UI and very popular codebases it may be useful, but I still wouldn't let it write actual business logic for me... that's a very quick way to degrade your skills and get yourself replaced.
Front-end wise, it has been helping me a lot with repetitive stuff.
It definitely doesn't work well if you don't even understand what Copilot is suggesting.
Or if you use something very opinionated like Tailwind.
My satisfaction comes from not having to remember stupid random JavaScript functions, and not having to search and accept cookies just to find out it was all BS.
Sounds reasonable
Nobody:
I still don’t care about cookies:
Thank you for challenging facts vs. opinion & feelings! And then there are flawed surveys, clarity around who responded, and misrepresentation of results...
I started programming with javascript and python. AI has been helpful in understanding how to do similar things in other languages, or give a brief overview of some better practices or flows in different languages and tools, or other services to leverage. When GPT went public, I said it felt like googling a question, and then having the LLM parse the results for you. Happy to report that still feels pretty accurate. A good productivity tool to a competent dev, a terrible crutch to the naive junior
Imagine a generation of devs growing up with the AI crutch; how good will they be when they become seniors?
And this is the problem. It feels like googling the question, but it is fundamentally different. Googling (at least a few years back) gave you an index of pages with potential answers to your query, which you then sifted through to find an answer written by a human. Now you get an out-of-context summation of an average that may or may not be accurate, which itself sums up and averages dozens and dozens of other AI-generated messes. And if it can be so absurdly wrong as to suggest eating rocks, do you think you will notice when it is just subtly, but confidently, wrong?
23:45 Hey Prime, I think the dev satisfaction might be the feeling that when you're writing and/or deleting a lot of code, it *feels* like you're doing a lot of work, whereas if you're stumped on a particularly hard problem and just writing and rewriting potential solutions (especially if they don't work), then you feel much worse, like you're spinning your wheels, even though you are learning and becoming better overall.
Yes! I for sure experience this. I once spent almost an entire day trying to reproduce a CI failure until I eventually found out the CI was running sh instead of bash, while I had bash in my local environment. I felt really stupid, but I learned a lesson that day xD
@@lordkekz4 At one of my first jobs, problems like that were a very common issue. Mostly different lib versions, but sometimes entire packages. If I cannot develop directly on the production server or a dev clone, I like to run as close to the same distro/package set/config as possible, locally. It can be the main OS, a multi-boot, or a VM.
"Copilot is like guitar hero for coding". And guess how many guitar players became good by just playing guitar hero?
- 0
MC Lars joined the chat.
And how much better would they be if, instead of playing a guitar game, they had played a real guitar for the same number of hours?
The analogy is not that you cannot learn and get better using Copilot and Guitar Hero; it's that you would be considerably better if you just did the "real thing".
@marcola80 I agree with you; the answer to my question is none, no one learns to play guitar by playing a game. It's like playing Surgeon Simulator and thinking you can perform surgery. Those are totally different skills, and playing a game instead of practicing the actual skill will not take you any further. 🦾
@@alexandrecolautoneto7374 Well no, the answer is MC Lars the Guitar Hero Hero.
@DemiSupremi "MC Lars, is an American record producer, rapper, cartoonist, podcaster and educator. He is one of the self-proclaimed originators of "lit-hop",[4] and is the founder and CEO of the independent record label Horris Records." Man did a lot more than just play guitar hero.
My experience with using AI in coding is that I don't use it directly. I use it as a search-engine replacement for querying documentation. I explicitly programmed my LLM client to tell ChatGPT not to put code samples in its output. I run my LLM client in a separate tmux window I have to tab into.
If I am learning new high-level concepts, I try to have a dialogue with it after reading some Wikipedia articles on the subject. I ask questions like "My understanding is [technology] does [X] through [methods], is this correct?" or "When you say [term], is that similar to [analogous concept]?"
I think it forces me to think about prompts, ask questions, and then parse the responses. It removes the sorting-through-search-engine-results part of research.
IDK, anyone have any critical feedback on this approach? I am open to being wrong.
Same. It's also a really good "fuzzy match" search engine when I know how to describe the thing I am doing, but can't recall the technical term or pattern directly. It then gets me started in the right direction, and I can go look at official docs to make sure I do it right.
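A minimal sketch of the "no code samples" rule described above, using the OpenAI Python SDK; the model name and the exact system prompt wording are assumptions, not the commenter's actual setup:

```python
# Hypothetical doc-query client: a system message asks the model to answer
# in prose only, which is the "no code samples in the output" rule.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_docs(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model, not from the original comment
        messages=[
            {"role": "system",
             "content": "Answer questions about programming documentation "
                        "in prose only. Never include code samples."},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(ask_docs("What does POSIX say about read() returning 0?"))
```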
The researchers who conduct studies, and the people who are making links that they want you to click on, have entirely different objectives.
I am using Copilot and I think it's great at writing plumbing code, but yes, you need to read the code it generates carefully (especially the part that executes the main logic), which is fine, since reading code is much faster than writing it. Junior developers, on the other hand, will benefit little from Copilot, exactly because they will tend to rely blindly on the generated code and won't know where the LLM got it wrong; and LLMs do make mistakes, often.
When it comes to AI and coding, I find that if you ever need to prompt the AI to write code for you, you're not doing it right. The time taken to understand, review, and bug-fix an AI's work should not be underestimated. AI is more useful for enhancing your own skills, not replacing them. Therefore, I limit my AI use to a locally hosted model for autocomplete on crack.
8:00 That comment on founders is spot on. I have a family member who was an architect and loved it, but realized that to make any money he'd have to become a principal and/or start his own company, and then he would end up no longer doing what he loved.
I founded a company and realized that even though "I can Excel", I can't be the business guy. Mainly because I'm an introvert, and cold-calling someone I'd met once to ask for something took half a day of amping up and the rest of the day doing something pleasant to calm down. A real business guy makes a dozen calls before I dial the first one. In fact, that's how he calms down after something unpleasant.
This is me. Multiple times it has generated code that looks fine at first glance, but then I would waste hours debugging because of that exact piece of code, so now I turn it off by default and have a keyboard shortcut to turn it back on temporarily to generate a small snippet I don't want to write. It's good for really simple stuff sometimes, but don't have it on at all times.
Every time I use Copilot or GPT, the flow is the same:
1. Provide the problem
2. Get a wrong solution
3. Fight with GPT to get a proper solution
4. Run the code and it fails
5. Say fk it, and write it myself
6. Use GPT for mail content and Jira messages
I'm learning to program in x86 assembly, and I recently ran into an issue understanding why a specific program loop functioned the way it did.
I spent about two hours chatting back and forth with Google Gemini. It did a decent job of explaining different steps to me, but in the end, it took me looking up some Intel manuals as well as a university cheat sheet, for that lightbulb moment to go off.
Now that I passed that hurdle, it's starting to become a lot easier.
Nothing beats "doing the hard work" and really wrestling with a problem until you actually understand it.
I have yet to find an AI that can explain the ancient assembler routine for converting a byte to a hex digit (add 90h, daa, adc 40h, daa). On a good day they don't mess up the arithmetic completely.
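For the curious, that four-instruction routine really does map a nibble (0-15) to its ASCII hex digit. Here is a Python walkthrough of what the decimal adjusts are doing (my annotation of the standard trick; the AF flag is ignored for simplicity):

```python
# Simulating the classic 8086 trick: add al, 90h / daa / adc al, 40h / daa
# turns a nibble (0-15) into its ASCII hex digit without any branching.
def daa(al, cf):
    """Approximate DAA (decimal adjust after addition); AF is ignored here."""
    old = al
    if (al & 0x0F) > 9:
        al += 0x06                 # adjust the low BCD digit
    if old > 0x99 or cf:
        al += 0x60                 # adjust the high BCD digit, set carry
        cf = 1
    else:
        cf = 0
    return al & 0xFF, cf

def nibble_to_hex(n):
    al = (n + 0x90) & 0xFF         # add al, 90h
    al, cf = daa(al, 0)            # daa: 0-9 stay at 0x90+n; 10-15 wrap with carry
    total = al + 0x40 + cf         # adc al, 40h
    cf = 1 if total > 0xFF else 0
    al, _ = daa(total & 0xFF, cf)  # daa: lands on '0'-'9' (0x30..) or 'A'-'F' (0x41..)
    return chr(al)

print("".join(nibble_to_hex(n) for n in range(16)))  # 0123456789ABCDEF
```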
Regarding how people use playlists instead of subscribing: there are actually a bunch of channels I really enjoy but don't subscribe to, because the YouTube algorithm already pushes them enough and I don't want the stuff I've actually subscribed to to get drowned out even more than it already is.
Subscribing doesn't mean you receive all notifications. That's only if you click the notification bell. Otherwise, I think you'd find that it doesn't really affect your recommendations much, if at all, relative to a channel you already watch frequently.
yeah I prefer using freetube these days for that reason; no algorithm no engagement, just a bunch of categorized subscriptions
My experience is that when I'm fixing a bug or making a small modification to existing code, copilot is very helpful. But when I'm starting something from scratch (so there's less context), copilot can introduce subtle bugs if I try to go too fast. Which makes sense; it's a generative AI, so it needs context to push it in the right direction.
Also, it's really good at finishing code from online examples (to the point that I have to turn it off when I'm trying to follow along in a getting started doc or a book).
Copilot has turned writing code into endless junior code review
As a senior dev it absolutely works and absolutely saves me a lot of time. But you have to be a good coder regardless and use it as a tool to increase your productivity. You can't just expect it to make the code for you and have the code be of high quality as-is.
I've been a software engineer for over 7 years and have been programming for around 15 years. I've worked across all levels: chips, embedded systems, drivers, and OS-based software (at this moment, I have multiple virtual machines of different OSes I'm publishing to).
Copilot doesn't help me much. It often slows me down, because the code it generates rarely follows SOLID principles. I mean, at first glance it always seems solid, but if you really start making it SOLID in context, it very often has details that shouldn't even matter. So I mainly use it for autocompleting comments and basic, repetitive code.
For tasks that require deep understanding, like working with chips and embedded systems, Copilot isn’t useful at all from a whole implementation perspective.
Recently, I used it to recreate a product I had already built. With all the context in place, if my original product was a 7/10, the AI’s version was only a 4/10. In my experience, fixing the AI’s output to reach a 7 takes just as much effort as building it properly from scratch using SOLID principles.
Not trying to undermine your experience or anything. I'm just sharing what value it had for me, another senior dev.
Having used it for roughly 2 years now: it does feel like it is getting worse. But it is handy for repetitive or small-scope stuff. When it tries to generate a big block, it will often be wrong.
I do wish I could limit it to only prompting me for small-scope stuff.
I'm relying on it less and less.
Copilot is like Guitar Hero: more people think they can play guitar, when in fact, when handed a real guitar, they can't.
So in the end more junior coders are attempting more complex things and thinking they've done it right, when in fact they've introduced more bugs.
I mean, this study is pretty obvious, even though 800 people is not a large enough sample to draw firm conclusions from. You would need many studies over a longer period of time to truly understand the impact of AI on programming and code quality. My hunch, which is of course anecdotal, is that it really depends on the domain, and that for the most part AI is pretty good at helping out with boilerplate and other repetitive operations but not great at helping you build something new/unique. People seem to forget that AI is limited by the data it's trained on as well as by the quality of the prompts it's fed. A junior developer without much experience is unlikely to understand the details they might need to look out for, and they likely cannot use the AI as efficiently as somebody who fully understands the problem they're trying to solve. Also, while AI might be able to help with general development because of the sheer amount of data it was trained on, it probably can't help you with something much more niche.
I agree with not using feelings as measurements in most cases, but it's worth noting that in some cases it's necessary. For example, studying how people feel when they have a specific illness (or rather, what they would usually report) is the closest you can get to diagnostic data for medical triage.
I find that it's really only useful when I know exactly what I want and it's just a more sophisticated tab-complete or if I'm so tired/migrained that I can't put together coherent code so that I can make some progress even on bad days.
"Something is better than nothing" sends shivers down my spine tbh, please not in any codebase I rely on
@@JuusoAlasuutari I didn't mean that you would commit that code... but to be able to make progress when one is unable to function normally is helpful.
@@triplea657aaa I have migraines too, so I kind of get what you mean (I think). If you can stop your short term "task memory" from completely resetting during a bad migraine day it at least makes it easier to continue the next morning.
This might be a very dense reading of what you said, but feelings are very important in matters outside of quantitative data, and even when analysing quantitative data, feelings are important for contextualising it. Sorry if this is a bit nitpicky.
Cursor does the boosted Copilot autocomplete thing 10X better! I've been subbed for 4 months now; it's very good to have Claude 3.5 with your codebase, custom docs, and websites in context, plus quick inline fixes
AI coding assistants are a fantastically useful tool for young and ambitious mid level programmers who want to climb the corporate ladder
It allows them to mock up cardboard cutouts of systems in a week, and demonstrate something resembling a system to upper management
They can get their hit of praise for minimal effort, and be seen as the truly 100x engineer that their parents always wished them to become
Since these sorts of people get the most say in many organisations, the overall message coming out of the industry as a whole is that AI is the future
I learn with AI too, but as a better Google. You don't need to find that one keyword to search for in Google; you can just ask. But at work, to get the job done fast, I use it... sometimes.
The single biggest benefit of AI for me has been helping with my RSI, especially while re-learning to type on an ergo keyboard. It's great for filling in boilerplate for function signatures and stubbing things out; then I turn it off when it comes to the actual implementation
The question isn't if coding assistants will increase the productivity of current developers. The question is how much not having them is going to decrease the productivity of developers from the post-coding assistant era.
Why would there be such an era to begin with though? Genuinely wondering why you think that'll happen.
I never used AI assistant tools. I simply prefer to ejaculate code by myself rather than having to check whatever hallucination the AI comes up with.
And it's easier and faster for me. For one, not using Copilot allows me to enter a flow state where I know exactly what I am doing, and I am able to produce code (which mostly works, ignoring the stupid mistakes everyone makes when coding lol).
Also, waiting 1 second for text to appear is just stupid lol. In 1 second, you could type 1 or 2 words at least
Aren't compilers just a "coding assistant" then?
@@nikarmotte Because juniors are now using ChatGPT left and right. And one day they will be seniors, or at least be expected to fill those roles.
@@HolyMacaroni-i8e By the post-coding-assistant era I mean developers who were trained in a period when this technology was widely available.
As a junior-to-mid dev, I do use AI, mostly when I am stuck on something for more than half an hour, Stack Overflow research included. First I ask general questions about how to do the general thing I am trying to do, and see if that helps me understand. Then I add more context and ask questions to understand better. The last thing I do is give it my code (anonymized if need be; I change names and variables to generic ones) and ask questions specific to my code. Usually all these steps help me understand better and give me the answer I was looking for. I record it in a kind of journal so it's available for the next time and to hammer it into my head.
I like reviewing code.
I hate having my code reviewed.
Working on a side-project allows you time to switch gears mentally and also has less pressure because there's probably no deadline. Working on work means you haven't switched off and have just powered through. Which is fine now and then to get something solved but, when it becomes the norm and you have no time to walk away and the deadline is looming and it's all crushing in on you...
I basically hate it in my IDE, but I'm fine with it as a chat, mostly because I usually go to it when I know what output I want and can then just quick-read to make sure it's right.
E.g., today I just wanted it to write a loop that imported certain files and stuff in bash, and I keep forgetting bash syntax, so it's something I can easily check logic-wise, and running it checks the syntax.
Would I ask it to generate more than like 10-20 lines of code? Not really; its context sucks at that point.
Or if I did, I'd probably type something like os.read and see what the autocomplete gives me as an option.
I try to like every video I watch as soon as I start watching it, because I have ADHD and I will forget to do it after watching, even if I don't use a playlist (I hate playlists and autoplay). The main reason is that YouTube seems to reset the watch-time indicator (that red "progress bar" line on the bottom of the thumbnails) after about one year. And then the Algorithm keeps trying to push videos I've already watched over and over. A few years ago I was recommended a video from one of the channels I watch everything of, and I was surprised that the lack of a progress bar indicated I hadn't watched it yet; but then everything felt weirdly familiar, and when I scrolled down I realised I had already liked the video, despite supposedly not having watched it according to the indicator on the thumbnail. I've had weird deja vu feelings watching videos before too, but back before I compulsively "liked" every video, I had no indication of having watched something unless I remembered it (and I watch thousands of videos every year, so I can't remember every one; besides, they often change the title and thumbnail).
So now I "like" every video once I start watching it; if I really hate the video I change it to dislike. It's only if I start watching on my phone (or some other device without tabs) and realise I don't have time to watch it all (which is always the case when I realise the video I started watching on the toilet is 30+ minutes), that I save it to "watch later" and "unlike" it. That way I know if there is a like on the video I've either watched it all or it's already half watched in an open tab on my computer.
I'm not really sure 41% is even as high as it's going to get. So many newbies are leaning on AI, and schools are permitting its use in education, that I worry too significant a portion of the people entering the workforce and contributing to public-facing code will bring not just a major increase in bugs but also a surge in incompetence, feeding extra trash back into the models
Oh man, you don't even know. I'd wager that at least half of my uni class in computer engineering wouldn't know how to write a fizzbuzz if they didn't have ChatGPT to hold their hand.
The worst part is having people like that in a team assignment. I guess those were the people who used to just write the reports and documentation, but now they're actively ruining the codebase the rest are working on.
@ unfortunately I'm wholly familiar with how bleak it's gotten. I'm a private programming tutor and in for my BS in CS as well. Both my classmates and my students just expect answers on tap, and it's crazy that basically nobody's doing anything about it.
LLMs are powerful tools, like past innovations. Coding errors existed long before AI, though no studies were made to compare. These models boost productivity but require skilled use: bugs persist, and the tools for addressing them have simply evolved.
Programming outside of work 100% increases your burnout even if it's a project that you like
That’s probably true for most people.
Yes, it does. PrimeBot is wrong on this one.
yeah I tried it. I wish I had that level of energy. If im gonna code outside of work, it has to be related enough to where it helps my job
I think this is probably only true if programming is a significant part of your job
(Which is presumably what you meant, I just felt like uh,
I felt like it would be funny to say this even though it probably isn’t. Still going to hit send.)
At the risk of sounding rude to the author, which I do not intend, is it just me or does that article read like an ad for Uplevel?
Also, I am honestly glad I took the time to learn C++ error messages. They're actually pretty "nice". They give you somewhat of a decent insight into all the things the compiler is trying, especially with overload resolution.
Like, I don't know what to do or feel about AI in programming. Should I be "scared" that it will get exponentially better and replace me, and keep up-skilling (somehow?)? Some say it's foolish to be scared. Others say AI isn't good and anyone fearing it isn't a good dev, with reasons like "if you can think, you are better than AI" and "you fear what you don't understand; if you understood it, you'd see it isn't going to replace you", and so on. But others say that not internalizing that AI will replace programmers is "coping", just like horse-carriages were eventually overtaken (lol) by cars.
And both sides seem equally loud. So I'm here, just trying to do my thing, using AI for non-coding things.
Did he really just say 'rewrite tests'?
23:15 Prime straight out of Taken: "I have a very particular set of skills, skills I have acquired over a very long career, skills that make me a nightmare for people like you"
16:24 I'm one of this 3%. I have been learning coding for 6 years and have 2 years of commercial experience, and I have no drive to test this. I've had the experience of reviewing a guy's code that was generated by AI, and it was distasteful: the amount of bugs, the poor design, the poor quality; there were really weird library choices, like using ast in Python for transforming dicts. Quite frankly, I'm disgusted. They lied to me many times that it wasn't AI, and I was forced to review AI output every day and act like nothing happened, because my management doesn't care. I never want to see this shit again in my life.
Stability is rooted very deep within German culture in many areas. I think that's what makes things efficient long-term once something is running well, but it also really slows them down when it comes to adapting to future issues. Things rarely change here unless there's a direct trigger (e.g. regulation changing after an accident).
True. I train AI to do coding as a side job almost every day. The code generated by AI is still full of bugs. It takes hours to fix the code generated by the model in just one conversation (of only 2-3 prompts).
Not sure about this one; it sounds more like a hit piece than anything remotely credible. I'm not a fan of AI, but this can't be right.
You are on the money here.
I don't like this "41%" language.
Remember that one contraception drug that gave one person in a million side effects, and the headlines ran like "the new and improved version has a 100% higher rate of side effects"?
This language is misleading. What is the baseline? What is the definition of a bug? etc...
I find AI quite helpful with the little things. It locates stray apostrophes for me, helps with any syntax I might have forgotten.
One view in 39 seconds? Mans fell off
The funny thing is that AI could probably replace the CEOs before the programmers. Someone should tell the shareholders that; suddenly the CEOs would feel what the "little guy" feels
First!
Less than 1 minute into the video, Flip already out here being a ninja wizard.
Oh, satisfaction, you nailed it! I love playing Escape from Tarkov (EFT), and the same thing happened there: they added a PVE mode, and basically I'm nowhere near as good as the great streamers. But since it lets you play vs. AI only, it basically allows you to feel really good, like you are on their level with their equipment; you get the feeling you are at that level. INTERESTING
Whenever I get AI-created code suggested I basically cannot trust it, and have to quadruple-check everything. It's like having a junior whom I cannot trust to actually improve or do the thing we need.
Wait, you're telling me there's a 41% chance your program will self-terminate? This is truly a progressive win!
1:37 Because they have the money. It doesn't matter how the developers feel; it matters how the CTOs feel the developers feel.
Most programmers are smart and might prefer not to disclose to their project managers that they're leveraging AI to boost their productivity. This could be to manage expectations or to avoid being assigned additional tasks
It enrages me so fucking much. When I am reviewing code and I see a snippet that is just incomprehensible for something so basic, and I realize the bullshit developer asked ChatGPT for code he doesn't understand and just copy-pasted it into the project. THIS IS HAPPENING ALL THE FUCKING TIME.
Uplevel is mentioned because they were the client for the PR hit. Paul Graham's "Submarine" taught me that.
The first few minutes of this are golden.
I really like coding too, what I don't like is mapping 200 fields of business logic specified in an Excel spreadsheet between incoming CSV, internal app model and multiple database tables. AI can do that in 5 minutes and I take 15 minutes to test it. Without AI it would take me more than a day of tedious and very error-prone work. So yes - AI when used right prevents burnout for people that like to code and design but hate the part that's a thankless busywork.
This is underappreciated. I do a lot of work in SQL, and AI has saved me hours by doing a lot of the repetitive typing you inevitably have to do when working between different layers of an ETL/ELT architecture: very similar tables, views, stored procs, etc. that you constantly have to go back to, again and again, to tweak some typo or find-and-replace with convoluted regex. LLMs are good when I teach one the pattern and then have it replicate the process, so I can just run my test scripts at the end.
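A toy version of the kind of spec-driven mapping code being delegated there might look like this (all field names are hypothetical):

```python
# Toy CSV -> internal model -> DB-row mapping driven by a spec table.
# In the real task there are ~200 of these rows, usually pasted from Excel.
FIELD_MAP = [
    # (csv_column, model_attr,    db_column)
    ("Cust No",    "customer_id", "customer_id"),
    ("Order Dt",   "order_date",  "order_date"),
    ("Net Amt",    "net_amount",  "net_amount_cents"),
]

def csv_row_to_db_row(csv_row: dict) -> dict:
    """Map one parsed CSV row to a DB insert dict via the spec table."""
    model = {attr: csv_row[col] for col, attr, _ in FIELD_MAP}
    return {db_col: model[attr] for _, attr, db_col in FIELD_MAP}

print(csv_row_to_db_row({"Cust No": "42", "Order Dt": "2024-05-01", "Net Amt": "1999"}))
```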
The simple fact is most developers are not getting paid because they are innovating amazing new solutions or inventing algorithms, they get paid because the average person has no patience for programming or technical architecture.
I find AI tools really good at generating commit messages, especially when the changes are fairly small. You can adjust the prompt to fit your commit style as well. It won't work well if your style requires saying why lines changed rather than what changed (it will often get the motivation wrong because it misses the bigger picture).
Another OK use is generating simple tests based on an existing one to cover more corner cases, or giving it a well-defined class/function and having it generate a complete test suite. You obviously need to go through the test cases and fix the errors, but I find it still saves time.
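As an illustration of the "grow corner cases from one existing test" idea: given a single happy-path test, the useful additions usually amount to new parametrize rows like these (the function and cases are made up for the example):

```python
# A hypothetical function under test, plus the kind of parametrized corner
# cases an assistant can fan out from a single existing example.
import pytest

def clamp(x: float, lo: float, hi: float) -> float:
    """Clamp x into the closed interval [lo, hi]."""
    return max(lo, min(hi, x))

@pytest.mark.parametrize("x, lo, hi, expected", [
    (5, 0, 10, 5),        # the original happy-path case
    (-1, 0, 10, 0),       # below the range
    (11, 0, 10, 10),      # above the range
    (0, 0, 10, 0),        # exactly on the lower bound
    (10, 0, 10, 10),      # exactly on the upper bound
    (3.5, 0, 10, 3.5),    # non-integer input
])
def test_clamp(x, lo, hi, expected):
    assert clamp(x, lo, hi) == expected
```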
I loved this video because it confirmed what I already was thinking.
I want to support Prime's idea about outside-work programming. Matt "creator of the Parker Square" Parker calls it "recreational maths": the concept of engaging with maths in your free time for fun and interest. I think the same can be said about programming. "Recreational programming" can work even for working developers: if the environment (incl. the language you use) is different enough, interacting with code from a different perspective can still be enjoyable. My company will not switch to functional programming in the next 10 years, but that doesn't mean I can't have fun with Elixir or Elm on my personal projects.
LLMs by definition work not by saying the right things, but by saying things that sound right. Or, in this case, by producing code that looks correct. A perfect way to generate hard-to-find bugs.
Copilot mileage really varies. I have dev buddies who hate it. I love it for the PHP Laravel stack. I don't rely on it to write code but to autocomplete what I was going to write anyway, if that makes sense. On some rare occasions where I'm doing something unfamiliar, I will ask it to do X, Y, Z, and it really does get it right most of the time. I think it's really a net benefit for me.
To be honest, I was learning to code on CS50x; after completing that I moved on to CS50G, which is no longer an active course. I use Copilot to help me understand the code we're given in the course, since the videos are too in-depth, and then try to adjust the code to get an understanding of what each section is actually doing. I found errors that needed fixing, and with Copilot's help I managed to make it worse, fix it, make it worse, fix it, before finally understanding what the code needed and deleting all the Copilot stuff I had implemented. My code finally works as intended, and the process gave me a deeper understanding of what was actually going wrong. I keep telling my partner, I don't understand why they think Copilot is good; I spend most of my days arguing simple logic that it gets wrong xD
OK, here's my relationship with AI (namely ChatGPT, nothing else).
If I want to learn new stuff, I read books, and when I get stuck on topics I don't fully understand, I ask AI to explain them to me. As I don't like relying on third-party libraries, I write everything I need myself, and AI really accelerates things by providing and summarizing documentation, since it was trained on that data too. Most of the time, books have a bunch of questions after each chapter, and I give AI the question along with my answer in the same prompt. This has really helped me learn more efficiently. However, the lack of creative and critical thinking makes AI unsuitable for producing production-level code.
The AI plugins are useful for writing boilerplate code, but if you release that without checking it and making sure it does what you want, you deserve everything you get. The secret is to use them to write small, discrete chunks instead of entire pages of code.
I've been going to school for programming, and one thing I've noticed myself doing is wanting to write everything out myself, specifically thinking about how I want each function to work, effectively dumping my mental state onto the program. Because I'm in the middle of learning to code, my mental structure of how a project should run is being built in real time, instead of being fully known beforehand like a veteran coder's, whose mental model probably holds whole segments and how each relates to the rest. So when Copilot does the thinking-in-segments bit, those junior devs or whatever don't see how the segments fit into the structure of the whole, and they have to go back and read the entire structure to finally get it. It might be a good learning tool through example and testing, but for full work like that it probably won't make things more efficient, because those people still need to learn exactly what it is they need.
7:00 There is another factor to consider. Jira sprints are planned and time-boxed. Most developers who finish their sprint early are NOT going to go looking for more work.
IF Copilot makes devs go faster but their sprint says they're done, they aren't going to go any faster. And the code they just made with Copilot sucks ten times as much.
There are ways to collect good data through questionnaires, but they need a bunch of data to estimate how much people are lying to you, which is very hard in any field that hasn't already had a ton of study. I still prefer the 'analyse landfill content' methods of figuring out how people are behaving, rather than asking them.
I'd love to see an analysis of abandoned GitHub projects, mapping the Copilot content put into them. Knowing whether it's rookie devs, or big teams getting frustrated, or whether the issues with GPT spam have tipped the scales against maintaining projects, could be really interesting.
The last 30 seconds was low key a gem.
The thing I always keep coming back to when considering AI coding assistants is a situation where you're working on something so proprietary or otherwise secret (like a government contract or something), where you are not allowed to paste any code into an LLM out of a fear of having that code stolen. How many modern programmers would simply crumble instantly? When you can't use any AI to help you, are you still able to find solutions to problems by yourself?
Also, the only code LLMs typically have access to is public, which means the majority of it is amateur grade. Most proper professional-grade code is private and inaccessible to LLMs. So whenever you ask an LLM for help, it's very likely you'll get an amateur's implementation. Most of my experience is in video game development, and some of the Unity code LLMs spit out is straight from a baby's first gamedev tutorial, where it's obvious the person who wrote it had no idea how to program a game for expandability, performance, or readability. Stuff that will immediately break if you need to add anything related to it.
For me, AI saves time by decreasing aimless link clicking, especially when I have no clue what to search for. It's good for boilerplate, introductions to patterns, solutions, and even algos; it just fills the potholes on my road.
It's already been said that it depends on the experience of the developer. I would say I'm an experienced SQL developer and an intermediate TypeScript developer. When coding in TypeScript, I ask Copilot for suggestions on what I need, then ask a more experienced TypeScript developer on our team to review them, rather than blindly trusting Copilot. For SQL, I'll change Copilot's code if I think it would be better done a different way, or fact-check it for performance, because some SQL can cause performance issues.
Our team uses Copilot, and its suggestions seem to reduce code-completion time, so overall it saves some time.
Thank god! I was having a problem with coming up with new bugs. Glad to have a new partner in crime.
We have been doing interviews at my work all week for a few positions. It's amazing how many people now have years of experience being a software developer, but they cannot even code a simple fizzbuzz when asked.
It's really depressing.
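For anyone outside the field wondering how low that bar is, the whole exercise fits in a few lines of Python:

    # FizzBuzz: print 1..100, replacing multiples of 3 with "Fizz",
    # multiples of 5 with "Buzz", and multiples of both with "FizzBuzz".
    for n in range(1, 101):
        if n % 15 == 0:
            print("FizzBuzz")
        elif n % 3 == 0:
            print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)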
Yeah. The quality of candidates had been steadily dropping even before ChatGPT. Some can do LeetCode-type questions, because that's what they think people want, so they train on those. But even then, you can tell they don't really get it, and at any deviation from that they buckle. All I want is enthusiasm for the field, some creativity, some fundamentals, and potential to grow. I'm not looking for additional baggage to carry.
I think the hard part about this is that Scrum teams won't notice, or will actually like it, because this way they are pushing PRs and pumping metrics at a higher rate. All they care about is metrics, so with faster PRs and more bugs to fix (therefore more PRs) they will be happy and chalk it up as a win.