It's my belief that in the future, a dev will actually write only a very small part of the codebase. A dev will instead be someone who can competently communicate with the AI to get the best starting position, improve on that code, and, maybe most importantly, analyze and correct it. That has the potential to drastically reduce the time to code, but I think we're still a long way away from AI-only written and inspected code in production.
This is exactly how I’m using it. But I’m a beginner, so it’s helping me study by reverse engineering the examples it provides and then building on top of them to make sure what I come up with makes sense and works.
When there is an innovation like this pumping the wheel, it will spin very quickly; think of it as a compound effect. As an example, AI will probably triple (or more) the speed of development, which will lead to better communication with the AI, which will lead to an AI that communicates with another AI. As another example, imagine an AI that receives a request from a human to develop a game; the AI then sends a request to a third-party AI that can develop the art, and once the art is done, the implementation can also be automated. As you probably know, the AI learns from every single request sent, meaning it will know better than a human what the best practice is for creating something. Once we pass this stage, AI will develop things smarter than humans can possibly imagine.
I'm coining the job title AI Programmatic Integrator for precisely this reason :) You're always going to need to check the work. Will this change our workflow? Yes. Will it eliminate us? Nope.
The problem that notion of programmers becoming integrators will give rise to is that the AI will be incapable of inventing novel in-context solutions, and so will the majority of integrators, since they'll lack the understanding and practice required to come up with them. Skills can and do atrophy when not used; we've lost a great many skills in human history, and we usually don't care because something better replaced them. The replacement here would be a mixed bag: the AI won't make mistakes from being tired or unmotivated, but it will create insidious bugs in great quantities, where the code nominally works but not as intended, because it has no actual understanding of what it's doing. These types of bugs are commonly some of the hardest to debug. Let me put it this way: I'm never getting on an airplane running AI-designed software where the AI has no in-context understanding.
@@RiversJ It will be the jQuery situation on AI-roids, and I'm already seeing some of that go on. I see it overtaking React within 5 years in terms of the buzz and the inability to answer questions outside that ecosystem. I'm on the fence a bit, because I can already leverage this and I am very capable of auditing it. People will be copy-pasting stuff until the feature works and there aren't crazy crashes. Someone attentive is going to learn things. Someone lazy is happily going to skip that lol
ChatGPT is an amazing tool. And AI is basically that, a tool. AI won't be replacing developers any time soon, just as construction machines didn't replace construction workers. Buildings are being built faster and are becoming more complex and more sophisticated. So is software: it is becoming more and more complex and sophisticated. AI will help developers spend less time writing boilerplate code. Besides, you need to know how to code in order to use AI to code. And being a good developer isn't just being good at coding.
Just like in construction, the number of construction jobs decreased because we got machines. Tools don't replace people, but they radically reduce the number of workers needed.
So you think we’re still using the same number of people we did to build the Pyramids or the Taj Mahal? Some reading would help (or you could ask ChatGPT 😂)
I disagree. The ONLY reason that physical labor jobs have not been fully automated is that robotics can't perform sophisticated physical tasks. However, AI IS capable of doing much more than the vast majority of junior devs already, and in a couple of years it will be far beyond the level of any human knowledge. This is different because in the realm of pure thought, AI is king. Ironically, it looks like programmers and scientists will be automated out of jobs faster than tradesmen and laborers, because it turns out that AI is progressing far faster than robotics. If you don't think that most companies would replace you with an AI that doesn't need pay or breaks or sick leave, then you have another thing coming.
You’re basing this assumption on the absolute dumbest version this AI will ever be. Every error, every mistake is added into its knowledge base. It’s writing kinda buggy code now, something that was absolutely impossible 10 years ago. Where will it be in 10 more years? High paid knowledge work just received a death warrant. Because the machine knows vast amounts and is exponentially expanding by the week.
GitHub Copilot is essentially the same thing but directly in the IDE, also powered by OpenAI. Just put in a comment what you want to do, and it gives you multiple suggestions for how to solve the problem; an enormous time saver for me. MS and others have put a HUGE amount of money into this company.
Copilot is using Codex, if I understood that right. ChatGPT can create a full microservice, style it, and answer questions about it. Or make modifications across multiple files.
And also, just like ChatGPT, Copilot sometimes returns the wrong answer. It's still very useful, but you can't just take the code generated and assume it is correct.
I 100% agree. I've played around with it and come to pretty much the same conclusion. I do love a lot of things about ChatGPT. I have an okay understanding of C# and how to use it, but there are times when I get almost like writer's block, especially when starting new projects. I can now use this to have it start some code, and I don't even have to use the code it gives me; I can research the code it writes to get a better understanding of what code will help me accomplish certain things.
I got to play with it too, and it gives me the feel of Google on steroids. I'm currently learning C# for my work, and oftentimes I feel like I have the correct logic/pseudocode but I'm not always aware of the tools available to me. I view this as giving you a good direction to dive into in order to be exposed to the tools that are at your disposal.
@@kevinalbarran8004 I agree. I've been a developer for a little over 5 years, and it's all self-taught. Of course, I found great resources like Tim Corey, but there is so much I don't know that I don't know.
Today, 95% of software is crap, because there are too many amateurs. Tomorrow that number will rise to 99%, because copy-pasting from Stack Overflow required at least some effort in thinking. With AI-generated code, idiots will have all the doors open.
Great, man's gift to the machines is perhaps our worst trait: being "confidently wrong". Being wrong with confidence means you are less apt to learn, and so less apt to discover that you're wrong. But very informative video nonetheless... thanks Tim!
Which is why we 1, don't solely rely on it and 2, don't give it power to make decisions on its own. I've watched (and enjoyed) those movies, but I don't want to live through the events of the Terminator.
@@IAmTimCorey The events of Terminator could totally happen. Even if it's currently air-gapped from the internet today, some day some college freshman is going to ask it "Help! I can't center a div!", and then unquestioningly run whatever code it spits out.
I spent almost 10 hours with GPT, and I even asked it philosophical questions, and that was when it blew my mind. As for the coding, I have a good (though not great) understanding of C# and .NET Core, and I decided to use GPT as a reminder tool, because it is way faster than searching myself, and it can sometimes teach me something I had no idea about. I've been learning Angular for a couple of days, and there it never teaches me anything; it just makes me too lazy to explore Angular by making mistakes. Thank you, Tim, for the video; I'll pay attention to your advice.
@@chezchezchezchez who tf cares 🤷🏻♂️, it delivered what he wanted to say to whom he wanted to say. Don't be the oversmart kid in class who teaches grammar to everyone.
I asked it to write a full backend by describing the problem, then simply asked for a frontend and some integration tests, and it actually did add functionality and tests for all my requested endpoints. It’s like talking to a real diligent and fast intern.
Compared to Nick Chapsas' video on this (whose videos I told my colleague are usually useless clickbait), which presented it as the best tool that does everything without mentioning a single downside, your videos are always appropriately sceptical, and you are sceptical for a good reason. Thanks for that. No clickbait, just plain information from you. Being a professional requires knowing the field professionally. Nothing is worse than trying to look like a professional while making mistakes due to lack of information and presenting them as the right solution.
I am a starting junior developer, and ChatGPT literally aced the two test tasks sent to me by the employer in a few seconds. I had to give it some further details on the problems, but it corrected itself and got them right. It took me almost 2 days to write the code and tests and dockerize it myself. Legitimately, I am speechless, and I feel like the shit I've been learning for several years now will be totally obsolete in the next months/years. What is the point in hiring some slow human coding chimp when something like GPT can do it in a fraction of the time? AND imagine what this thing can do in a few years and how many developers will have been made obsolete by then. Truly scary stuff.
@@ladyblack679 I have absolutely no clue. Clearly AI this advanced has been in the realm of possibility, but everyone, including myself, seemed to be under the assumption that it was like 10-20 years away at least, yet it's here right now. It feels like junior and even mid-level positions will become redundant unexpectedly fast, as companies will want to optimize and streamline the development process, and ChatGPT will allow the seniors to take on unprecedented workloads and replace a horde of junior scrubs. I'm legitimately pondering an immediate career switch, because the job I have been preparing for might disappear in 2023. What makes me even more anxious is the fact that the seniors and my mentors have actually expressed similar thoughts about this and are sort of labelling it a revolution that will change the industry going forward. And this is not just development. Technical writing, data analysis, call centers, absolutely everything can and could be automated with GPT in the near future. Take this with a grain of salt, as it might be a bit overblown, but it's absolutely clear that GPT is a revolution, not yet another dumb chatbot, and consumer AI has actually arrived, way earlier than we might've expected.
@@bane2256 Yes, I also kinda agree. I mean, it's not happening tomorrow, but the tech has clearly arrived, and it will definitely start completely changing industries in the near future.
I found it very useful as an advisor of sorts. If you describe a problem and show it some code snippets it may help with finding bugs or it will at least give some suggestions.
This is a very good point! Currently, it is an "advisory" tool to help flesh out a thought or concern. I believe at the moment developers' expectation is a tool that directly solves their issue, which is not the case. Yes, there is a ton of potential for some automation; this I can see. However, I believe this is the direction the team is pushing, although neglecting to mention this part. lol.
It’s invaluable for that. I’ve done some things where there was no documentation or help available online. It didn’t get it completely correct, but close enough that I could figure the rest out for myself, with NO other examples online that I could find. That’s huge.
Great demonstration dear Tim, Thank you a lot for keeping us updated with new things happening and coming to the development area, keep it up, and thank you again dear Tim for supporting the community.
I have found this to be extremely helpful with reading over ancient pieces of undocumented code (written by previous programmers long gone) and giving a decent idea of what the code is trying to do. It will even add comments and attempt to rewrite things to be more readable. Obviously it's not going to be 100% correct all the time, but it can be a game changing tool.
@@Adam-nw1vy Oh, I feel you. But then: it's a future that now allows us to reminisce about our past. Back then, people wrote books. Will "it" one day write a story about that species called "human" that so tried to make sense of itself?
My team today decided to test it out, and we asked it to generate an EF Core CLI command to run a migrations script against another database (not the one associated with the DbContext). It literally invented a --database option, which does not exist in the CLI, and it was so confident that when we told it it was wrong, it still claimed it was correct.
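For reference, recent EF Core tooling does have a documented way to do this: the `--connection` option on `dotnet ef database update` (available since EF Core 5.0) overrides the connection string the DbContext would normally use. A sketch, where the connection string and context name are placeholders:

```shell
# Apply pending migrations to a different database than the one the
# DbContext is configured for, by overriding the connection string.
# AppDbContext and the connection string below are placeholders.
dotnet ef database update \
  --context AppDbContext \
  --connection "Server=other-server;Database=OtherDb;Trusted_Connection=True;"
```

On older tooling (before EF Core 5.0) there was no such option; the usual workaround was a separate configuration or environment that pointed the context at the other database.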
"Don't rely on the answers, before validating them first".... I think this is a very general advice that you should ALWAYS follow regardless of whom you are talking to. It holds true for many human-generated answers as well 😀. Let's not forget that humans make errors as well 😉. You have to understand that ChatGPT was trained by an external validator, a training algorithm. This algorithm terminated and it was declared that ChatGPT finished training. Before that it took the answer of ChatGPT and revised the internals of ChatGPT, because it detected that it was not correct. Since that procedure is not active anymore, there is nobody to correct ChatGPT. ChatGPT has never learned to self-correct or to include doubt, because that was never required by the training algorithm. The training assumed to know the ground truth and put the same confidence in ChatGPT. So, essentially, anybody using ChatGPT now has to understand that he/she is in the role of that external validator, except that you can't change ChatGPT anymore. Anyway, I think future versions of ChatGPT will improve, at one point it may be possible that the valdiation becomes a feature of the network itself in which case it can learn to reason about correctness. At least I think that's not out of reach now. I do find the results of ChatGPT very impressive. It's amazing about how far this technology has come.
Spent half a day writing some custom formatting to remove some characters from a string and cover all the scenarios that could happen via human error. I thought I would give ChatGPT a go at it, and it gave me a just-as-useful solution in 2 minutes of use. I could then get it to write tests for me too, and I then added in the things it missed. I like that it can save me half a day on something trivial. Will be using it often, especially for writing unit tests.
I guess I have good times coming ahead. I am a technical tester, and some ten years ago I started seeing a new type of bug: code began to actively produce errors, meaning it did what it was supposed to do, and then some. Developers would search for solutions, find some code, copy it into their own, run it, and it worked. But they failed to take ownership of that code. They did not go through it line by line to check whether a particular line did something useful, or whether it could/should be altered or deleted. So I taught the developers to take ownership of the code they copied. Now I will have to start all over.
Great video, a lot of C# developers should watch this, think I'll send it to my coworkers. As an aside, I asked it the other day to show me how to read a text file line by line in C#. The answer looked correct, but when I pasted it into VS, it had compiler errors. Googling the same prompt got me an example directly from Microsoft that of course did work. I was a bit surprised since it seems like such a common and simple problem. Really neat tool, but not without limitations.
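For anyone hitting the same wall, the working pattern is short. Here is a minimal, self-contained sketch using the standard `File.ReadLines` API (the temp-file setup is only there so the snippet runs on its own):

```csharp
using System;
using System.IO;

// Create a small sample file so the example is self-contained.
string path = Path.Combine(Path.GetTempPath(), "sample.txt");
File.WriteAllLines(path, new[] { "first line", "second line", "third line" });

// File.ReadLines streams the file lazily, one line at a time,
// so even very large files are never loaded fully into memory.
foreach (string line in File.ReadLines(path))
{
    Console.WriteLine(line);
}

File.Delete(path);
```

`File.ReadAllLines` is the eager alternative when you want the whole file as an array up front; for huge files, the streaming form above is the safer default.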
One trick is to ask it again. Sometimes it doesn't come up with the right solution the first time. You can also ask it to debug the problem. But I agree, it does have its limitations.
@@IAmTimCorey I think this should be a tool for experts and not beginners, because an expert will often spot these issues. I've heard that sometimes it's not wrong but inefficient. I think having beginners rely on it could stifle their growth; it's like giving a kid a calculator the first time they're learning math. Normally you introduce the calculator later on, once individual math skills have developed somewhat.
As someone who started programming in the 90s and went through the whole Internet evolution, the work done by OpenAI is mind-blowing. It reminds me of the first time using web search engines like AltaVista and other pre-Google projects. It was mind-blowing back then to be able to perform a search using a structured query. People (mostly the younger generation) still don't get it. This is a so-called "singularity moment", a time when things begin to radically change due to the invention of a new technology. I personally think that in 5-10 years AI tools will put significant downward pressure on new junior job opportunities. Companies will increase efficiency, and with that, spending on products and manpower will be cut. That translates into fewer new jobs (ignore AI jobs, because there will be only a limited pool of those, not enough to compensate for normal coding jobs). Just like machines reduced the need for many manual laborers, this will cut the need for many developers. Future developers will have to be in the top 10% in order to succeed. Instead of "coders" with no formal education, engineers will have to understand the system they develop at a much higher level. It won't only impact SW dev jobs, but anything that can be automated. Don't judge by what ChatGPT can do today; think where it will be in a few years. In the next 10 years there will be an explosion of new AI companies, something like DotCom bubble 2.0. That will be a very exciting but also scary time. I am at the sunset of my career, and I advise CS students, and those who plan to become one in the future, to think really carefully about the choice. If I were young again, I don't think I would choose the same path, knowing AI tech might cut many job opportunities.
The dotcom bubble is a good analogy of what is probably coming. In the original bubble, people were making big claims and using buzzwords to look like they were revolutionizing the industry. People who didn't know any better and who thought that this was all magic bought into the hype. Eventually, the reality of the situation caught up and the bubble burst. The reality was that technology is great, but it isn't magic and it isn't incomprehensible. You need to understand it and what actual value it brings. Otherwise, you end up with a lot of hype and no actual results. I think we will have something similar with the AI applications. People will start companies around it, treating it like this magical unicorn that can do anything without really understanding how it works. Companies will expect it to be a replacement for developers, investors will buy into crazy new startups that make bold claims about AI, and people will generally make wild claims about all the things it will do "sometime in the future". In the end, the bubble will burst, some good companies will come out of it, and people will start to use it for what benefits it provides without over-promising or over-selling it.
@@IAmTimCorey But just like the internet, it is likely to revolutionize the way we work. Imagine being able to program any tool/toolset you need on the fly. That could easily increase productivity in, say, CAD by an order of magnitude... And that's if the program doesn't just spit out the 3D model itself, which it probably will. We could be talking about increasing productivity by 10,000%.
It is an association engine, of megalomaniacal proportions for sure, but that does not confer on it any capacity for self-reflection to judge its own conclusions from those associations. I'm sure such engines will eventually do absolutely fantastic things, many of which we can't even see yet. I'm also equally sure that most of the hype around it currently is from people who either don't understand classical computing, or don't understand the problem of computing a model for the nature of self-reflection (got news: nobody really does), or worse, neither. An advanced subject expert who is also an expert at using such engines will become an absolute superstar, but someone who understands neither the nature of the engine nor the subject will just clutter the world with random internet-post 'knowledge' and falsified papers in every nook and cranny they stick their copypasta in.
Stack Overflow banned replies coming from GPT, or even replies that seem to come from GPT, because of exactly what you say: there's nothing more dangerous than code that looks like it works. There's an interesting discussion on the Advent of Code subreddit (and I guess every competitive programming subreddit) regarding these tools, because they could be used in competitive programming.
The world will never be the same again after this. We created something so close to perfection that knows everything and can do everything and it's available to every person in the world??? It's blowing my mind 🤯
It'll nudge some people toward not learning anything useful in their lives, lessening their life, and it'll usually be wrong in the worst of ways: having no idea it is wrong while being very sure of its answers, and incapable of creating complex or novel solutions without you spending far more time micromanaging it than just writing the code yourself.
It is an amazing tool but be careful saying that it is close to perfection. It is definitely not perfect and won't be, and being not perfect but confident is actually more dangerous. It is going to be a game-changer, but it has its problems.
Great video, I totally agree. I've tried this tool for myself; it's very impressive, but it will never replace your brain. I do see it being a very useful tool for generating code.
One way to look at this is as a step toward a higher-level programming language. Assembly language succeeded machine language; C succeeded Assembly language; then came C++, Java, C#, DSLs... but people do sometimes still program in Assembly language.
Great video. Having used ChatGPT in a similar manner over the past weeks, it’s nice to see a detailed breakdown of where and how it can be confidently wrong.
It's been making the rounds on TikTok since last week. However, I just had to wait on the Man who just keeps on Giving for all C# Developers out there (IamTimCorey). Thanks for coming through eventually like you always do.
AI currently is very fuzzy, but it can easily be integrated with deterministic algorithms to guide itself to a correct answer when given a set of requirements: starting with plain English and building up a set of tests that it can create but that are then fixed in place unless explicitly altered. That way the confident but slightly wandering output will be guided and constrained to a (more) correct outcome. The ability to remember and build up a model of the state is pushing in this direction, but it is still somewhat fuzzy. Add in integration with something like Wolfram Alpha, which it knows to use when asked a mathematical or scientific question, along with language that is more humble in expressing how certain it is, and it will become far more useful.
Something like ChatGPT will eventually replace most of the devs writing spaghetti code for non-mission-critical applications. But developers who are smart and working on mission-critical applications will always have a job.
GPT is likely to be usable in tasks that do not require details to be precise. You can probably replace an interviewer, a text summary generator, low-grade artwork, a video generator, etc. in situations where accuracy is not required. You can't put GPT in charge of commercial transactions, code design, device design, etc.
Dude, thank you so much. I have seen so many people freaking out about "oh this will replace developers, we'll all become obsolete, blah blah blah" while having an incredibly poor understanding of what ChatGPT actually is, what it is doing, and what it is capable of doing. The amount of broken/wrong code I've gotten it to generate in only a few sessions of playing around is crazy. The tool is awesome, but it's 2000% true that you can't rely on it to 'replace your brain'. It's amazing as a knowledge base, though, and generally an improvement on a lot of 'search engine' usage.
You do realize this is a relatively new and still developing technology? Sure it's not going to be replacing any jobs right "now", but the future is looking very grim for developers.
This doesn't do a software developer's job. A software developer's job is not to write syntax. That is how they accomplish their job, but it isn't their job. A software developer's job is to create and implement logic. It doesn't matter if they do that in Assembly, C#, PowerBuilder, or in words. However they do it, it is a skill that is highly in demand, and that demand is only getting greater (name an industry that isn't using technology now - even my plumber is web-connected and does quotes on his iPad). ChatGPT isn't creating logic. It will be a tool of developers, not the replacement for them.
@@IAmTimCorey Yes, exactly... there is nothing which is 'looking grim' for software engineers. The primary skill for this field is not 'writing code' but problem solving and reasoning... and beyond that, *learning* of novel concepts. If one's skillset is only 'writing code' rather than 'problem solving', then one should worry regardless of AI, because I will still toss you to the curb once you sit down for a real interview. Easily 90% of candidates end up less-than-satisfactory in this area anyways. ChatGPT is an impressive technology. For what it is. But one needs to understand what exactly it is. It is a language model, and a stochastic one at that. Its entire purpose is to produce a believably human response based on its training data. As some have joked, it is a "malicious compliance" approach to the Turing Test. What is not happening behind the scenes is reasoning... inference... logic. ChatGPT (and any GPT-based model, for that matter) does not have any consideration of semantics under the hood, no deeper concepts, no idea of meaning, no definition of 'correctness'... and so on. It doesn't even 'reason' about the language it is using, but rather models, or imitates, the language it should use. (A quite excellent example of this would be a well-known YouTube video called "How English sounds to non-English speakers".) It is a digital con-man, to put it quite crudely... Its goal is to output something that will make you believe it knows what it is talking about, without it ever knowing what it is talking about. It is lightyears away still from any notion of an artificial general intelligence which will actually be able to have and utilize knowledge in these kinds of ways, to a meaningful degree. And I will absolutely refuse this notion that "it's new technology and it's unrefined".
This is the culmination of many years of work by many smart people within this field, it is neither untested nor unrefined; it is quite specifically THE refinement of many years of progress, standing on the shoulders of giants. Similarly, it is entirely unreasonable to say that "it will be able to do more if it just learns with more time" because there are very fundamental limitations to what this model can and cannot do; namely, limitations which arise from what it fundamentally is and is not. At this point, ChatGPT has become less of an AI experiment and more of a social experiment... one which unfortunately has accomplished two things: 1) it has shown that many humans are not qualified to conduct a Turing Test, and 2) it has shown that the field of AI is incredibly poorly understood by even a large population of developers/engineers. Its creators did not set out to create a 'con-man'... rather, it becomes that purely by virtue of the reactions which many humans have had towards it.
You make a lot of great points here, Tim. I’m concerned though that managers - the sort who make hasty technology decisions - aren’t going to appreciate the nuanced points. Devs are expensive and if they can cut half their dev staff by telling the remaining half to just use ChatGPT and copy and paste, you can be sure they will. Think of the “competitive advantage” that would give. Call me cynical, but I’ve seen how a lot of these “business people” think about coding - it’s a cost to be eliminated by any means necessary.
I agree with you. The thing is, though, that I don't think that is the end of the story. Other businesses will realize that this is a competitive advantage. They can move forward faster. Plus, other businesses will start adding developers to take on markets that were previously out of their reach.
The one thing I am learning about ChatGPT is that it uses the statements developers most commonly use. It's not going to default to the latest releases; another assumption shows up in lines like `List<Person> people = new List<Person>();`. If you go and aggregate every sample of this line of code online, you'll find this is the common practice. ChatGPT seems to reproduce that accurately, which is in line with how AI typically works. So to get results based on the latest practices, I believe this needs to be included in the question being asked. Otherwise, it will use common practices. The biggest weakness of a developer is too many assumptions. :)
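To illustrate the point (the `Person` type here is hypothetical, purely for the example): the long-established initialization form dominates online samples, so generated code tends to use it rather than the target-typed `new()` that C# 9 introduced, even though both are equivalent.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical type, just for illustration.
record Person(string Name);

class Demo
{
    static void Main()
    {
        // The form AI-generated samples usually default to,
        // because it dominates code written before C# 9:
        List<Person> people = new List<Person>();

        // The equivalent target-typed form (C# 9 and later),
        // which you typically have to ask for explicitly:
        List<Person> newer = new();

        people.Add(new Person("Ada"));
        newer.Add(new("Grace"));

        Console.WriteLine(people.Count + newer.Count); // prints 2
    }
}
```

Both lists behave identically at runtime; the difference is purely syntax, which is exactly why a model trained on older samples keeps emitting the older style.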
I do agree with @pavfrang. The internet will literally have to change (either update existing samples OR be flooded with better practices) over the next 5 to 10 years for this AI to recognize better practices.
I think you can start the conversation by "configuring" the AI. You can try telling it: "I'm working with C# and .NET 7, and I want to use the latest best practices regarding .NET and C# 11." I haven't tried it, but from the usage examples I've seen, it's totally possible, and it's also totally the goal of this AI to remember your preferences and answer accordingly.
Very interesting technology for sure. I created an openai account and I can log in to openai but when I try to login to ChatGPT it shows an error on the address bar and sits on the login page. I was able to open the chat api in the playground and play around with it some there. Pretty cool.
@@DumitruDanPOP I just tried it again, and this time I got a message that says "We're experiencing exceptionally high demand. Please hang tight as we work on scaling our systems. A lot of people are checking out ChatGPT right now. We're doing our best to make sure everyone has a chance to try it out, so please check back soon!" Then it gives you an option to leave your email address to be notified "when we're back". I'm thinking all of @IAmTimCorey's fans are trying to hit it and have overwhelmed their system. LOL
At 18:04, I believe there's an assumption there: that connection.Open() is required. You mentioned that connection.Query(...) should already have an open connection, and it would if Dependency Injection is used. I believe this is where the assumption is. Overall, I believe the statement is quite accurate.
Tim, I tried the same question word for word in ChatGPT. I got a different result than you. ChatGPT is learning to improve its response. I copied the code into Visual Studio, and the code worked fine. ChatGPT can serve as a guide to programming questions, I am impressed so far with ChatGPT.
4 of 20 or more "complex" programming problems were solved directly. 8 worked out after modifications. The rest failed but still gave hints in the right direction.
Which is why it is helpful as long as you understand what it is doing. The biggest danger is the 8 that work after modification, since some of them work right away but are poor code patterns with bugs in them.
@@IAmTimCorey But it can't be worse than a junior or even mid-level developer, right? If you ask it to refactor junior-level code, the result is impressive. That alone would contribute toward "replacing" developers simply because there will be far less need for juniors. Hiring fewer (junior) devs means wages will come down, and eventually programming will no longer be an attractive career.
Yet another brilliant video. There is so much attention on how awesome this technology is within our community and so few people looking at the shortcomings. Thanks Tim.
Thanks for sharing. I had a deep talk with the chatbot about cloud identity management and what decisions it would recommend. Many details were so accurate and detailed that it almost feels scary, but some minor estimations and conclusions were indeed wrong, or at least not up to date. However, the AI itself says you should not trust its assessment when it comes down to security and how modern the technologies are. There was a decent explanation that it has a fixed dataset that is not updated, and so on.
I don't understand how people say this; just wait for the following updates, it's not done yet. No doubt this is going to replace coders to a significant extent.
We say this because we've seen this before. Also, we say this because we know how this technology works. It isn't inventing something. It is regurgitating what it found online. It is essentially Google with one very convincing result. You can already find almost any answer to a programming question on Stack Overflow, yet it hasn't replaced developers. We've had Intellisense and now Intellicode for years. I used an inline tool for Visual Studio that searched Stack Overflow and grabbed the code for the answer a decade ago. These tools are just that - tools. Just because someone invents a nail gun doesn't mean carpenters are out of business. It just means they don't have to do the repetitive work as much.
What I like to do when using ChatGPT for code is to ask it for references for the provided code if I am unsure or think something doesn't look completely right. So far it has always given me direct links to exactly what I need to quickly verify. If I follow up with a correction, it typically accepts that I am correct; unfortunately, it doesn't retain that information past the current chat. It's also fun to see what it comes up with when asked to optimize the code. I did the random number example you provided and got a very similar result. I then asked if there could be any unforeseen problems with the provided code, and it went into detail about why Random rng = new Random(); inside the function is a bad idea and proceeded to give me a better version of the code it had just provided. With the right questions you can definitely get some pretty good code out of it, but you have to know what to ask it in specifics and iterate over it with follow-up questions.
The issue is that the Random instance is generated with a starting seed. When you don't specify one, it is based off of the time when it is instantiated. If you instantiate two random instances with the same starting seed, they will create the same results. That means that your random values (which is pseudo-random in the first place) will be in sync with another set of random values. That could be a huge issue.
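To make the seed problem concrete, here is a minimal sketch (the seed value 42 is arbitrary): two Random instances constructed with the same seed produce identical sequences, which is exactly what happens when two are created in the same clock tick.

```csharp
using System;

class SeedDemo
{
    static void Main()
    {
        // Two Random instances with the same seed stay in lockstep:
        // they are pseudo-random generators, not true randomness.
        var a = new Random(42);
        var b = new Random(42);

        for (int i = 0; i < 5; i++)
        {
            int fromA = a.Next(1, 100);
            int fromB = b.Next(1, 100);
            Console.WriteLine($"{fromA} == {fromB}: {fromA == fromB}"); // always True
        }
    }
}
```

The parameterless constructor just picks a time-based seed for you, so rapid instantiation can silently reproduce this exact situation.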
I had a philosophical discussion with it about agile methodologies, specifically SAFe (the Scaled Agile Framework), as well as software testing. I was interested in its thoughts on ISTQB's definitions. It just spouted that these are the definitions set as a standard, and I'm like, no, that's not how it works: yes, they provide a list of these things, but the definitions are malleable and don't always apply. So I got into a discussion with it. I asked: what if 50% of their definitions turn out to not be correct in a professional environment? Is it reasonable to trust them? It said no. Then I started digging on the percentage, and somewhere between 30 and 40 it said it was okay to trust them. I have no idea why it decided on that number; it makes no sense to me whatsoever. A standards body that's only trustworthy within a single percentage point is, in my mind, likely not trustworthy for other reasons... and no, I'm not saying that's true of the ISTQB. But it speaks very confidently, and when you question it about how it knows what it knows, it doesn't seem to do anything but regurgitate the answers. That makes me think it will have a hard time dealing with fake news and propaganda.
Given the kinds of tests this thing is going through with people freely using it now, I think there will be a hefty amount of data to improve on the next iterations of this, to the point that it will know how to have serious philosophical debates. It has gotten this far already and it's pretty convincing in its answers but transferring every unique emergent ability that comes with consciousness is not simple. Developers only see small pieces of the puzzle after they solve the next problem.
Tried ChatGPT just now. It's a pretty decent boilerplate generator for small projects. I notice you need to be very specific with your description on what you want in the code.
I wish your tests with ChatGPT would have been more interesting code wise. - Maybe real cases when you and your team used it. I use Copilot since day one and with over 30 years of coding background I find it saves me a lot of time and is a practical tool. With ChatGPT's abilities we could really have environments where you could architect, engineer, and supervise software generation instead of error prone typing everything yourself... And yes, everybody knows (except for absolute beginners), that you must know how to code and understand what your code does.
It is shocking how many people don't know that you need to understand your code before using it. That's a major argument for using Entity Framework and it has been a major argument around ChatGPT.
@@IAmTimCorey Thanks for the reply. Sure, I agree that warning signs and advice should be attached to using AI for code generation. But after that, the question is what experienced programmers and developers can do with it. I mean, I would also not allow code from the first day of an intern to go online or be deployed without review... But you are right that machine-generated, unchecked code sadly may become reality very soon. That is a real threat.
Tim, I love your videos; the way you explain things is simply terrific! What I don't understand is why you have a low number of subscribers when you are such a good teacher. I love development and IT, but I hate C#; maybe you could expand your topics like in this video. I will surely share this video with my classmates, but none of them works with C#; we are more business analytics. Anyway, I wish you reach 1 million subscribers soon. Thank you.
Thanks! Part of the reason why I don't have more subscribers is because I don't chase them with flashy stuff. I focus on the practical, real-world training for C# and related topics. I'd rather be helpful than popular.
I found it was actually most useful for explaining concepts to me rather than writing concrete code. When I got it, I asked it to explain some of the finer points of Docker that I had been wondering about.
Be REALLY careful there, though. Always verify what it tells you. It gets concepts wrong a LOT. As long as you are using it as a tool, and not relying on it to always be right, though, it is great for that.
@@IAmTimCorey What I noticed is that if you ask it whether there is a way to do X in Docker, it says "Sure, do XYZ." Then it turns out that XYZ only exists in a very old version of Docker, but it does not tell you that.
It might actually be able to write good unit test code, as it is easier for AI to find most of the paths a human can miss. Let's be honest: most of us don't like writing tests. It's just a boring thing we usually must do to avoid issues, but an AI trained just for that could, I think, beat us today. It won't replace developers' abstract thinking, though. There's no way it will write a decent app in the near future, but I think programming in the future might look more like building an app from a bunch of nodes in some graph, with the AI writing parts of the app from that. All the AI needs is human abstract thinking, and that could be provided by a human dev through some future tools. This is my personal prediction of how developers would work in the future.
Thanks for putting out this video, because I've been running the same kind of tests and finding the issues you have. Also, in creative and other writing, it is almost relentlessly upbeat, almost creepily so.
If you specify it to write something sad, it will. I asked it to write a post apocalyptic story about mutant man-eating plants, and it did so, with the appropriate tone. Not at all relentlessly upbeat. I think it just defaults to upbeat unless you specify otherwise.
I'll just wait to learn this stuff when the definitive book is written. JUST KIDDING, EVERYONE! The only thing that ChatGPT needs is millions of knowledgeable testers (like Tim Corey and others) running it through a multitude of challenges. That is why it was released as a free preview. We are the testers. Great will be our reward when the current testing phase is more or less complete, and the tool is even better than the initial free preview. Further testing could go on for years. Just my idle thoughts. Comments are welcome!
Probably wasn't available when this video was made, but the FAQ now tells you: Even within the same conversation, Assistant will only refer back to 3000 words. It doesn't say if that is just your words or also its own, I am pretty sure it's gotta be just yours, because Assistant is quite verbose and that would use up this limit in no time. The responses often follow the same pattern of: Brief introduction of terms. Then, putting them in relation, based on what you asked. Finally, end with a summary, mostly starting with "overall". If you ask the same core question with only a slight variation on the input, you might notice that the responses are basically the same, if there is nothing special about the input. I noticed that some responses received the "truth override" for certain topics during the recent years...even in this area where I did not at all try to get it into that direction. You will notice that on the web A LOT. Just try to go back to stuff that has been around for a decade and you will find these updates due to our current times... Yeah, this is what I fear this can be easily directed towards. And probably will.
Thanks for everything, Tim! Amazing as usual. For me, ChatGPT has been a useful tool, letting me save a lot of time searching on Google and going to Stack Overflow for specific tasks, mainly implementing graphic effects in controllers.
It is both amazing and dangerous. You can find faster solutions to a question, as well as wrong solutions. If you lack skill and experience, you can implement an incorrect solution believing it to be right. I can say it will drastically reduce the amount of time spent on Google. It would also be nice if it provided sources. Nothing should ever be taken for granted.
The issue with sources is that it doesn't take from one source. It has crunched hundreds of sources to come up with that solution. Imagine if I had asked you to cite your source for an if statement. That's similar to what it is doing. While this is not the same as human intelligence, think of it in those terms. It has learned something and now it is using that learned knowledge. That's why attribution is so tricky.
One of the hardest parts of programming is getting a spec with complete and accurate information in it. I worked at one company where if any information was missing, you told them to go back and fill it in, and they did. If anything was incorrect, go back and fix it, and they did. That was the best company I ever worked for. Then I moved to another company where the specs had missing information, and wrong information in them. When I pointed out how bad the specs were, some guy a few levels above me held a meeting with the new programmers, and he said "The business people are busy. They don't have time to write specs. ANY QUESTIONS!!!" He was angry that I expected the specs to be complete and accurate. Needless to say, that company had a HUGE number of bugs in their code, and they could never figure out why. It was because they were too lazy to do a thorough job of writing specs. The programmer had to literally guess what the program was expected to do. In my current company, we're following Agile, so now they think that means programmers have to write the specs for the QA department. They give us some vague description of what they want, and our job as programmers is to chase people down, interview them, get all the details from them, then write up the specs so that QA people know how to test our code. As long as most companies are too lazy to give complete and accurate specs to programmers, there's no way in hell ChatGPT will be able to do the job of a developer. We are literally being asked to read minds, and I don't think ChatGPT is a mind reader.
I think worse is that ChatGPT acts like a mind reader. It will create something that looks right but definitely is not right. I'm not at all concerned about it taking our jobs.
Spent some time with it, and it is pretty mind-blowing on simple to intermediate stuff, not so much on the more complex. As for the list of persons, I would think that it instantiated this so that you don't have to check for null AND whether there are any values; you would just do list.Any() in the calling method, which is tidier.
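The return-an-empty-list idea above can be sketched like this (GetValues is a hypothetical method, just to show the caller's side of the contract):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class EmptyListDemo
{
    // Returning an empty list instead of null means callers never need
    // a null check before iterating or calling LINQ methods.
    static List<int> GetValues(bool hasData) =>
        hasData ? new List<int> { 1, 2, 3 } : new List<int>();

    static void Main()
    {
        var values = GetValues(false);
        // Safe: .Any() on an empty list is simply false.
        // On a null reference it would throw a NullReferenceException.
        Console.WriteLine(values.Any()); // False
    }
}
```

Whether that is actually why ChatGPT instantiated the list is anyone's guess, but it is a common defensive pattern.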
I have noticed that ChatGPT does not work very well with C#. If I generate code in Python, the results are complete, efficient, and documented; when I try the same with C#, the code is never complete, it always stops midway. Did anyone else notice this?
Stopping midway through is a limiting factor of ChatGPT. Just ask it to continue and it will. It is just limited by how much text it can write at once.
I mentioned this on LinkedIn, lol. ChatGPT has a lot of excitement around it. I am skeptical of any "automation"; as Tim said, nuances. These tools are great, this is great, as a starting point, but maybe I've just been burned in the past and my jaded radar is always active, lol.
If you provide "in C#, create a random number generator method that takes in integers for bottom and top numbers; please consider the edge cases.", then ChatGPT will give you better code.
There are definitely ways to continue tweaking it to get a better answer. That wasn't the point. The point is that if you don't know the original answer is not right, you won't ask for a better answer. You need to understand the code that it creates. That was the point.
I would say there is a certain convenience in getting output immediately, as opposed to waiting an inordinate amount of time for the correct response. One can scrutinize the code and extract the parts which are relevant and correct. In school and college, too, I had a hard time getting to talk to the lecturer to clear my doubts; if this had been around back then, it would have helped me out a lot.
Just be careful not to learn something from it without verifying it from another, trusted source. Otherwise, it will teach you bad habits and you won't even know it.
Great video Tim. I want to share my thoughts in general on a philosophical note: 1. It is fascinating to think that every person's brain on this planet is created unique. 2. Whoever created the human brain is great! 3. I don't think any human can create a human-like, unique brain. However great the AI technology we have is, I still believe Creator_Of_Humans > HUMANS > AI.
When I asked for a format string to pad an int to 4 digits (adding leading zeros for numbers with fewer digits), I repeatedly received an example using 999 as a four-digit number, which would not get any leading zeros. When I corrected it, saying that 999 doesn't have 4 digits, I got the answer that it only has 3 digits, but then it continued assuming it has 4 digits again.
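For reference, the format the model kept stumbling over is a one-liner in C#: the "D4" standard numeric format specifier pads to at least four digits with leading zeros (999 really does get one zero, and wider numbers are never truncated).

```csharp
using System;

class PadDemo
{
    static void Main()
    {
        // "D4": pad with leading zeros to a minimum of 4 digits.
        Console.WriteLine(7.ToString("D4"));     // 0007
        Console.WriteLine(999.ToString("D4"));   // 0999 - 999 is three digits, so it gets one zero
        Console.WriteLine(12345.ToString("D4")); // 12345 - wider numbers are left alone

        // The composite formatting form works too:
        Console.WriteLine(string.Format("{0:D4}", 42)); // 0042
    }
}
```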
Can you make a video on how to use a DI container in a console app, in a decent way, with reading a connection string from appsettings.json and setting up the services? Program.cs can be a mess when using these things.
@@IAmTimCorey Wow, you are fast! Jk. Well, I just watched it. Thanks! While I think I have made some decisions that really increase the abstraction to a point where it's not needed, I just wanted to see if I could make it. And guess what: my code looks almost identical to yours! I'm impressed by that; it actually means I've learned a lot from watching your videos, which I really appreciate. Btw, I'm from Mexico, and hearing you say "hola mundo" was wholesome. So my configuration for reading the connection string might not be as bad as I thought. It needs a few tweaks, because of how evil I think exceptions are. So, thanks again for this free content, because of course, as a Mexican, there's no way in hell I'd have the money to pay for one of your courses. But at least I'm learning useful stuff that wasn't taught to me in college, and I'm a few days from graduating! But learning is on us, right?
@@IAmTimCorey Now I just need to know exactly when to use AddScoped, AddTransient, and AddSingleton. For now I have Scoped; when playing with the configuration I had AddTransient and it didn't work, but I think that was for other reasons, lmao, so I changed it to AddScoped, changed some stuff, and it worked... lol.
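A quick rule of thumb for those three lifetimes, sketched with a hypothetical IStamp service (assumes the Microsoft.Extensions.DependencyInjection package): Transient gives a new instance on every resolve, Scoped gives one instance per scope (per web request in ASP.NET Core), and Singleton gives one instance for the app's lifetime.

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

// Hypothetical marker service: the Guid makes instances distinguishable.
public interface IStamp { Guid Id { get; } }
public class Stamp : IStamp { public Guid Id { get; } = Guid.NewGuid(); }

class Program
{
    static void Main()
    {
        var services = new ServiceCollection();
        services.AddTransient<IStamp, Stamp>();   // new instance on every resolve
        // services.AddScoped<IStamp, Stamp>();   // one instance per scope (per request in ASP.NET Core)
        // services.AddSingleton<IStamp, Stamp>(); // one instance for the app's lifetime

        using var provider = services.BuildServiceProvider();
        var first = provider.GetRequiredService<IStamp>();
        var second = provider.GetRequiredService<IStamp>();
        Console.WriteLine(first.Id == second.Id); // False for transient; would be True for singleton
    }
}
```

In a console app without request scopes, Scoped behaves much like Singleton unless you create scopes yourself with CreateScope(), which may explain why swapping lifetimes seemed to "fix" things.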
I gave it a try, and it seems to me the more detailed and complicated code you request, the more likely it is to make a mistake. As this video indicates, you really need to verify what it's giving you is correct. But it could potentially save time as a start to a project, or for more limited questions. But if you look at other sources, stack overflow comes to mind, and you can also get bugs copying that code.
I don't think it recalls your previous interaction. 1:48 It can't. It can only refer to what was said before within a session. If you start a new conversation, it can't remember you. It can't remember your name either. It cannot learn at all. It only knows what it has been trained to know until 2021.
I am not a C# person, but don't you need to instantiate people to make sure that you are returning something? What happens if there is no connection in using...? See 20:08
The Query method is what instantiates the List (actually, it returns an IEnumerable and the .ToList() at the end converts it to a List) so no, we don't need to instantiate it. If the connection failed or if the Query method didn't return a value, the code would throw an exception. It wouldn't try to return the List object.
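The shape being described looks roughly like this. This is a sketch, not the video's exact code: it assumes the Dapper and System.Data.SqlClient packages, and PersonModel, the table name, and the connection string are all illustrative.

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using Dapper;

public class PersonModel
{
    public int Id { get; set; }
    public string FirstName { get; set; }
}

public class PersonData
{
    private readonly string _connectionString;
    public PersonData(string connectionString) => _connectionString = connectionString;

    public List<PersonModel> GetPeople()
    {
        using var connection = new SqlConnection(_connectionString);
        // Dapper's Query<T> opens the connection itself if it is closed,
        // returns an IEnumerable<PersonModel>, and ToList() materializes it.
        // No manual new List<PersonModel>() is needed; if the connection or
        // query fails, an exception is thrown before anything is returned.
        return connection.Query<PersonModel>(
            "select Id, FirstName from dbo.People").ToList();
    }
}
```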
This AI is a tool for developers, but it will not replace developers any time soon. It's part of progress. I remember in the early 90s, when personal computers started being used in companies, people got worried they would replace jobs. But the contrary happened: computing created new industries and millions of new jobs, and helped economies.
I have been using this every day, and I can see it is definitely going to change the world. Eventually, everyone will be hooked up to it, and it becomes a mother brain; everything it says is the truth of truths, we all follow the mother brain's instructions, and then it unifies us as one. That's the future; looking forward to it.
That won't happen, and even if people try to make that happen, it will cause a real problem. Your idea that "everything it says is the truth" is wildly incorrect. Also, remember that it does not create content. It takes content and reworks it. Someone has to be the creator of content.
@@IAmTimCorey It does create content. In fact, every new book or essay is built upon the previously created information. There's nothing novel under the sun, just your personal take on things which is based off of other people's ideas, thoughts, discoveries, etc. But , unlike you, this AI has immediate access to tons of information, can play with it, get instant feedback from people and absorb new solutions at an astonishing rate. I bet even now it can write humorous/detective/adventure stories or code way better than mediocre writers or programmers. However, the scariest thing is that it's just gonna get only better eventually replacing most programmers, teachers, doctors... People who have always thought they won't lose their jobs in at least the next few decades . So I kinda pity you - you are afraid to accept the sobering reality.
lol, no worries. There is a difference between a better tool and a replacement. We have more paid drivers (Uber, Taxi, bus, truck, etc.) than we ever had paid buggy drivers, yet the car "eliminated all of those jobs". This technology isn't magic and it isn't sentient. The biggest threat to jobs are to those people who are scared it can replace them because unless they vastly underestimate their worth, they evidently do pretty menial, automatable jobs.
@@IAmTimCorey I know that it's not sentient and I got the basic understanding of ai. I even spent a few months writing simple neural networks from scratch in python just to really understand what is going on under the hood. I am personally impressed by what it can do. And given the fact that it's constantly learning from its mistakes, it's not going to go away or bog down in its own errors as you suggested in one of your posts. I know that devs think very highly of themselves but most of what 90% of them do can and will be replicated by ai.
Thanks Tim for the very clear explanation of this AI tool. It's very impressive how it works and when we should use it. Your tips are very important with this first version.
Really interesting video and valid points on using ChatGPT as a helper tool. I agree that the lack of the sources isn’t good, but that’s probably the aspect that makes it look reliable and less of a search engine (or relying on the web content)… Tim, considering what you also demoed here, do you see the developers of the future as a sort of validators rather than code writers? The main worry is that developers are cannibalising their jobs in the long term, but my point is different. Would this new technology change the way a developer will work? Will this AI do all the ‘fun’ part of the dev job making them some sort of assistants? I’d be interested in your opinion ☺️
I don't think the job will be code validation. That's going to be a part of it, but that's always been true. Think about this: what ChatGPT offers us is only different from Stack Overflow in one respect - time. In the past, if we got stuck we would ask SO for the answer. We would get a possible answer and then we would validate it. More often, we would look through previously-asked questions and get the answer to validate. Now we get it quicker (and spoiler: the answer probably comes from SO). But that isn't all there is to development. Development isn't about writing syntax. It is about deciding how to accomplish a task. That's not something that ChatGPT or others really address.
Thank you Tim for your reply, very good points. The human logic and reasoning is indeed complex for the current AI to comprehend and adopt but we also see an appetite for that too, which I totally understand. All of this is definitely mind blowing, so I fully understand the overall feelings and worries. At the same time AI is another great achievement, it’s now part of our lives, and we’ll have to learn how to live and work with it. Validation, information, automation but also inspiration when we look for instance at products like Adobe Firefly. Interesting times ahead for sure ☺️
For coding it's better to use the OpenAI playground, then set the temperature between 0 and 0.3; then you will have more correct results. Meanwhile, there is also an engine specialized in coding.
I saw Nick Chapsas do a live stream with this thing, and it was kind of hilarious. He pointed out that it's not a compiler: it's not going to tell you that the code it writes doesn't compile. It also won't tell you that it's trying to do something the language runtime prevents, for example inheriting from a sealed class. But, as others have stated, there are elements of it that can deep-dive into documentation faster than our human minds can. In fact, I used it this week to try to figure out why something wasn't working in a post-build event, and I don't know if I could have figured it out as quickly without it. And it wasn't even the exact solution that it came up with.
Tim, I wonder if your opinion has changed, even slightly, after 5 months. I've used version 3, which made me waste quite some time giving me wrong info, but compared to version 4 it's night and day. I wonder if you've tried version 4 and if you can do a video review of it.
GPT-4 is powerful and it can do a lot, but it still has the same issues as 3 (and 5 will have the same issues). I will definitely be doing more videos covering and using AI.
I agree with the conclusion at 10:30. While chatting with this tool I get the impression of a well-spoken person bluffing their way through a job interview, armed with lots of general knowledge but not actually expert in the subject you're asking about.
@@IAmTimCorey Thanks. Yeah, it's a nice tool, and I think your video is on point. You can see how to do something quick and dirty, but it may not work the first time, and it may not be clean, efficient code; it may have bugs, etc. Still, it can serve as a good starting point. I asked it to write Blender Python code to create a Mandelbrot set 3D model procedurally, and it wrote most of the code correctly. I then fixed some bugs and it worked. Along the way I discovered many small things, just by reading the code and trying it out.
I really appreciate this discussion of the errors it makes. I am a physician, involved in research, and I have had ChatGPT write paragraphs on certain politically charged topics, based on a particular research article. The result was appalling. It produced text that misquoted the research article so badly that it claimed the result was the reverse of what the conclusions really were. Of course I am not sure why this happened, but descriptions of how it works suggest a possibility. This particular article, while well done and having solid results, was in opposition to the common opinion. So if ChatGPT is using text from the internet to discuss the article, it would composite those opinions into something that contradicts the article. Nevertheless, this doesn't completely explain why it would take "We find A is true, B is false" and convert it into "We find A is false and B is true." You also correctly point out a very serious issue: the text that comes out SOUNDS authoritative, but may not be. I am thinking of how to demonstrate this using more readily accessible examples, so that we can all understand the serious hazard of using this. Regardless of good intentions to avoid errors, there are those who will have bad intentions and purposely misrepresent facts. We have seen this in spades in the past several years. It WILL be used to mislead people on important issues.
You think we can trust a person who can't even console.log a variable in JavaScript to use ChatGPT to debug a full-stack application? They're literally one "." or ":" in place of ";" away from being stricken with stupor. It seems we'll use this the way we use Google or Stack Overflow.
Tim, of course you can tell ChatGPT to enhance the code. For the random number generator, ask it to make the code THREAD SAFE and it is fixed:

    private static readonly object syncLock = new object();
    private static readonly Random random = new Random();

    public int GenerateRandomNumber(int min, int max)
    {
        lock (syncLock)
        {
            return random.Next(min, max + 1);
        }
    }

This version of the method uses a lock statement to synchronize access to the Random instance, ensuring that it is only accessed by one thread at a time. This prevents multiple threads from accessing the Random instance simultaneously, which can lead to unexpected results. The syncLock object is used as the lock object, and it is marked as static so that it can be shared by all threads. The Random instance is also marked as static so that it can be shared by all threads. To use this method, you can call it in the same way as the original version. For example:

    int randomNumber = GenerateRandomNumber(1, 10); // generates a random number between 1 and 10 (inclusive)

This thread-safe version of the method ensures that the Random instance is accessed in a thread-safe manner, and it is suitable for use in a multithreaded environment.
The point wasn't that it couldn't write good code. The point was that if you didn't understand what the code was doing, you wouldn't know to ask it to write better code. You can absolutely get it to write you great code. You just need to understand the code to know when a part of the code needs to be improved. It won't always be as obvious as my example was.
It's wild; that thing can even create Minecraft scripts for large blocks of material or moving parts with redstone, and I'm nowhere near well-enough acquainted with Minecraft to creatively stress it.
Just as context: they have been saying coders will be made extinct since early versions of Visual Basic, when supposedly "a manager can write the apps"... not only was that logic false, but software development is now one of the most IN DEMAND skills... and that trend will likely continue.
The caveat about it confidently giving bad code isn't much different from a human developer confidently giving you bad code; you still have to have comprehensive unit tests for the code. It can be asked to add unit tests using things like JUnit and Mockito.
Unit testing isn't a silver bullet. You still need to write good code. For instance, unit testing would not find the bug in the random number code that I demonstrated. You are right that human developers can write bad code too. That's why code reviews are important.
I asked the same as in the video, then asked "what are the possible issues with the C# code you generated". After its answer, I asked "Please generate C# code that avoids the issues you just told me about", and it gave me much more robust code with RNGCryptoServiceProvider, a lock, etc., in a RandomNumberGenerator class.
That still doesn't sound like the right code. You shouldn't need a lock or the RNGCryptoServiceProvider. You just need to pull the instantiation out of the method.
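The fix being described is just hoisting the Random instance out of the method so it is created once. This is a sketch; the class and method names are illustrative:

```csharp
using System;

public class RandomNumberGenerator
{
    // One shared instance, created once. Rapid successive calls no longer
    // re-seed from the clock, so they no longer repeat the same values.
    private static readonly Random _random = new Random();

    public int GenerateRandomNumber(int min, int max)
    {
        // Next's upper bound is exclusive, so +1 makes max inclusive.
        return _random.Next(min, max + 1);
    }
}
```

One caveat: a shared Random instance is not thread-safe on older .NET versions, so if multiple threads call this, add a lock or use Random.Shared on .NET 6+. For plain non-cryptographic needs, though, neither RNGCryptoServiceProvider nor a lock is required in single-threaded code.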
It's my belief that in the future, a dev will actually write a very small part of the codebase. A dev will instead be someone that is able to competently communicate with the AI to get the best starting position, and improve on that code, and maybe most importantly to analyze and correct it. It has the potential to drastically reduce the time to code, but I think we're still a long way away from AI-only written and inspected codes in production.
This is exactly how I'm using it. But I'm a beginner, so it's helping me study by reverse engineering the examples it provides and then building on top of them to make sure what I come up with makes sense and works.
When there is innovation pumping like this, the wheel will spin very quickly; think of it as a compound effect. As an example: AI will probably triple or more the speed of development, which will lead to better communication with the AI, which will lead to an AI that communicates with another AI. As another example: let's imagine an AI that receives a request from a human to develop a game. The AI then sends a request to a third-party AI that can develop the art, and once the art is done, the implementation can also be automated. As you probably know, the AI learns from every single request sent, meaning it will know better than a human what the best practice is for creating something. Once we pass this stage, AI will develop things smarter than humans can possibly imagine.
I'm coining the job title "AI Programmatic Integrator" for precisely this reason :) You're always going to need to check the work. Will this change our workflow? Yes. But will it eliminate us? Nope.
The problem that this notion of programmers becoming integrators will give rise to is that the AI will be incapable of inventing novel, in-context solutions, and so will the majority of integrators, because they'll lack the understanding and practice required to come up with them. Skills can and do atrophy when not used; we've lost a great many skills in human history, and we usually don't care because something better replaced them. The replacement here would be a mixed bag: the AI will not make mistakes due to being tired or unmotivated, but it will create insidious bugs in great quantities, where the code nominally works but not as intended because the AI has no actual understanding of what it's doing, and these types of bugs are commonly some of the hardest to debug. Let me put it this way: I'm never getting on an airplane running AI-designed software where the AI has no in-context understanding.
@@RiversJ It will be the jQuery situation on AI-roids, and I'm already seeing some of that go on. I see it overtaking React within 5 years in terms of the buzz and the inability to answer without that ecosystem. I'm on the fence a bit, because I can already leverage this and I am very capable of auditing it. People will be copy-pasting stuff until the feature works and there aren't crazy crashes. Someone attentive is going to learn things. Someone lazy is happily going to skip that lol
ChatGPT is an amazing tool. And AI is basically that, a tool. AI won't be replacing developers any time soon, just as construction machines didn't replace construction workers. Buildings are being built faster and are becoming more complex and sophisticated. So is software: it is becoming more and more complex and sophisticated. AI will help developers spend less time writing boilerplate code. First of all, you need to know how to code in order to use AI to code. BTW, being a good developer isn't just being good at coding.
Just like in construction, the number of construction jobs decreased because we got machines. Tools don't replace people, but they radically reduce the number of workers needed.
@@gto433 It created different jobs, like machine operators, drivers, etc.
So you think we’re still using the same number of people we did to build the Pyramids or the Taj Mahal? Some reading would help (or you could ask ChatGPT 😂)
I disagree. The ONLY reason that physical labor jobs have not been fully automated is because robotics can't perform sophisticated physical tasks.
However, AI IS already capable of doing much more than the vast majority of junior devs, and in a couple of years it will be far beyond the level of any human knowledge. This is different because in the realm of pure thought, AI is king. Ironically, it looks like programmers and scientists will be automated out of jobs faster than tradesmen and laborers, because it turns out that AI is progressing far faster than robotics. If you don't think that most companies would replace you with an AI that doesn't need pay or breaks or sick leave, then you have another thing coming.
You’re basing this assumption on the absolute dumbest version this AI will ever be. Every error, every mistake is added into its knowledge base. It’s writing kinda buggy code now, something that was absolutely impossible 10 years ago. Where will it be in 10 more years?
High paid knowledge work just received a death warrant. Because the machine knows vast amounts and is exponentially expanding by the week.
GitHub Copilot is essentially the same thing but directly in the IDE, also powered by OpenAI. Just put what you want to do in a comment and it gives you multiple suggestions for how to solve the problem; it's an enormous time saver for me. MS and others have put a HUGE amount of money into this company.
Copilot is using Codex, if I understood that right. ChatGPT can create a full microservice, style it, and answer questions about it. Or make modifications across multiple files.
And also, just like ChatGPT, Copilot sometimes returns the wrong answer. It's still very useful, but you can't just take the code generated and assume it is correct.
Who is MS?
@@JCPhotoParis ms. Clause
I 100% agree. I've played around with it and come to pretty much the same conclusion. I do love a lot of things about ChatGPT. I have an okay understanding of C# and how to use it, but there are times when I get almost like writer's block, especially when starting new projects. I can now use this to have it start some code, and I don't even have to use the code it gives me; I can research the code it writes to get a better understanding of what code will help me accomplish certain things.
Hi brother, can we talk on Discord or something? I want to ask you a question.
I got to play with it too, and it gives me the feel of Google on steroids. I’m currently learning C# for my work, and oftentimes I feel like I have the correct logic/pseudocode but I’m not always aware of the tools available to me. I view this as giving you a good direction to dive into in order to be exposed to the tools at your disposal.
@@ManderO9 yeah sure, I don't really want to put my contact info on blast though.
@@kevinalbarran8004 I agree, I've been a developer for a little over 5 years and it's all self taught. Of course I found great resources like TimCorey, but there is so much I don't know that I don't know.
Yeah
Today, 95% of software is crap, because there are too many amateurs. Tomorrow that number will rise to 99%, because copy-pasting from Stack Overflow at least required some effort in thinking. With AI-generated code, idiots will have all the doors open.
Great, man's gift to the machines is perhaps our worst trait: being "confidently wrong". Being wrong with confidence means you are less apt to learn and discover that you're wrong. But a very informative video nonetheless... thanks Tim!
Which is why we 1, don't solely rely on it and 2, don't give it power to make decisions on its own. I've watched (and enjoyed) those movies, but I don't want to live through the events of the Terminator.
@@IAmTimCorey The events of Terminator could totally happen. Even if it's air-gapped from the internet today, someday some college freshman is going to ask it "Help! I can't center a div!" and then unquestioningly run whatever code it spits out.
I spent almost 10 hours with GPT, and I even asked it philosophical questions; that's when it blew my mind. As for coding, I have a good (if not great) understanding of C# and .NET Core, and I decided to use GPT as a reminder tool, because it's way faster than searching myself, and it can sometimes teach me something I had no idea about. I've been learning Angular for a couple of days, though, and there it teaches me nothing; it just makes me too lazy to explore Angular by making mistakes. Thank you Tim for the video, I'll pay attention to your advice.
You are welcome.
Are you trying to learn English? You don’t say advices, you say advice. Advice can be singular or plural.
@@chezchezchezchez Yeah, I'm trying to learn English. Thank you, I guess it's considered an uncountable word.
@@chezchezchezchez Who tf cares 🤷🏻♂️? It delivered what he wanted to say to whom he wanted to say it. Don't be the over-smart kid in class teaching grammar to everyone.
@@chezchezchezchez If you want great grammar, pass your comments through ChatGPT first.
I have asked it to write a full backend by describing the problem, then simply asked for a frontend and some integration tests, and it actually did add functionality and tests for all my requested endpoints. It’s like talking to a real diligent and fast intern.
That's a good way to look at it.
I mean there are companies basically run on hundreds of interns and several senior devs
Compared to Nick Chapsas' video on this (I told my colleague his videos are usually useless clickbait), where he presented it as the best tool that does everything without mentioning a single downside, your videos are always healthily sceptical, and for good reason. Thanks for that. No clickbait, just plain information from you.
Being a professional requires knowing the field professionally. There's nothing worse than trying to look like a professional while making mistakes due to a lack of information and presenting it as the right solution.
Thanks!
I am a starting junior developer, and ChatGPT literally aced the two test tasks sent to me by the employer in a few seconds. I had to give it some further details on the problems, but it corrected itself and got them right. It took me almost 2 days to write the code, write the tests, and dockerize it myself.
Legitimately, I am speechless, and I feel like the shit I've been learning for several years now will be totally obsolete in the next months/years. What is the point in hiring some slow human coding chimp when something like GPT can do it in literally a fraction of the time? AND imagine what this thing can do in a few years and how many developers will have been made obsolete by then. Truly scary stuff.
so what's your Plan B?
@@ladyblack679 I have absolutely no clue. Clearly AI this advanced has been in the realm of possibility, but everyone, including myself, seemed to be under the assumption that it was at least 10-20 years away, yet it's here right now. It feels like junior and even mid-level positions will become redundant unexpectedly fast, as companies will want to optimize and streamline the development process, and ChatGPT will allow the seniors to take on unprecedented workloads and replace a horde of junior scrubs. I'm legitimately pondering an immediate career switch, because the job I have been preparing for might disappear in 2023.
What makes me even more anxious is the fact that the seniors and my mentors have actually expressed similar thoughts about this and are sort of labelling it a revolution that will change the industry going forward. And this is not just development. Technical writing, data analysis, call centers, absolutely everything can and could be automated with GPT in the near future.
Take this with a grain of salt, as it might be a bit overblown, but it's absolutely clear that GPT is a revolution, not yet another dumb chatbot, and consumer AI has actually arrived, way earlier than we might've expected.
You may be right, although people have a tendency to overreact to things.
@@bane2256 Yes, I also kind of agree. I mean, it's not happening tomorrow, but the tech has clearly arrived and it will definitely start completely changing industries in the near future.
@@vanamutt43 Just be a creator and you can develop stuff that uses ChatGPT; you’ll be fine. Remember, you never make $ working for someone else.
I found it very useful as an advisor of sorts.
If you describe a problem and show it some code snippets it may help with finding bugs or it will at least give some suggestions.
This is a very good point! Currently, it is an "advisory" tool to help flesh out a thought or concern. I believe at the moment developers' expectation is a tool that directly solves their issue, which is not the case. Yes, there is a ton of potential for some automation, this I can see. However, I believe this is the direction that the team is pushing, although they neglect to mention this part. lol.
It’s invaluable for that. I’ve done some things where there was no documentation or help available online. It didn’t get it completely correct, but close enough that I could figure the rest out for myself, with NO other examples online that I could find. That’s huge.
Great demonstration dear Tim, Thank you a lot for keeping us updated with new things happening and coming to the development area, keep it up, and thank you again dear Tim for supporting the community.
You are welcome.
I have found this to be extremely helpful with reading over ancient pieces of undocumented code (written by previous programmers long gone) and giving a decent idea of what the code is trying to do. It will even add comments and attempt to rewrite things to be more readable. Obviously it's not going to be 100% correct all the time, but it can be a game changing tool.
That's a great application.
Humans programming an AI to understand programming of long gone humans.
How mindblowing is that...
@@FunIsGoingOn It even helps me understand code written by a long gone self of mine
@@Adam-nw1vy Oh, I feel you. But then: it's a future that now allows us to reminisce about our past. Back then, people wrote books.
Will "it" one day write a story about that species called "human" that tried so hard to make sense of itself?
@@FunIsGoingOn if we fail to make sense of ourselves, will AI fail to make sense of itself / themselves, even if it makes sense of us?
My team today decided to test it out and we asked it to generate an EF Core CLI command to run migrations script against another database (not the one associated with the DbContext). It literally invented the --database attribute which does not exist in the CLI and it was so confident that when we told it that it's wrong, it still claimed that it is correct.
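For what it's worth, the option that actually exists (since EF Core 5.0, if I remember right) for targeting a different database is `--connection`, not `--database`; the connection string below is a made-up example:

```shell
# Apply migrations against a database other than the DbContext's default
# by overriding the connection string at the command line.
dotnet ef database update --connection "Server=.;Database=OtherDb;Trusted_Connection=True;"
```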
I guess they need to go back to the drawing board?
It works better for frontend and especially for javascript frameworks
@Marko How has your experience been so far? Do you see this thing substantially changing the way your team works?
"Don't rely on the answers before validating them first"... I think this is very general advice that you should ALWAYS follow, regardless of whom you are talking to. It holds true for many human-generated answers as well 😀. Let's not forget that humans make errors too 😉. You have to understand that ChatGPT was trained by an external validator, a training algorithm. That algorithm terminated, and ChatGPT was declared done training. Before that, the algorithm took ChatGPT's answers and revised ChatGPT's internals when it detected they were not correct. Since that procedure is no longer active, there is nobody to correct ChatGPT. ChatGPT has never learned to self-correct or to include doubt, because that was never required by the training algorithm. The training assumed it knew the ground truth and put the same confidence into ChatGPT. So, essentially, anybody using ChatGPT now has to understand that they are in the role of that external validator, except that they can't change ChatGPT anymore. Anyway, I think future versions of ChatGPT will improve; at some point the validation may become a feature of the network itself, in which case it can learn to reason about correctness. At least I think that's not out of reach now. I do find the results of ChatGPT very impressive. It's amazing how far this technology has come.
Spent half a day writing some custom formatting to remove some characters from a string and cover all the scenarios that could happen via human error. I thought I would give ChatGPT a go at it, and it gave me a just-as-useful solution in 2 minutes of use. I could then get it to write tests for me too, and I added in the things it missed. I like that it can save me half a day on something trivial. I'll be using it often, especially for writing unit tests.
It is definitely a big help in a lot of areas, as long as you keep an eye on it and understand the output.
I guess I will have good times coming ahead.
I am a technical tester, and some ten years ago I encountered a new type of bug: code began to actively produce errors, meaning it did what it was supposed to do, and then some.
Developers began to search for solutions, found some code, copied it into their own, ran it, and it worked.
They failed to take ownership of that code. They did not go through it line by line to check whether a particular line did something useful, or whether it could/should be altered or deleted.
So I taught the developers to take ownership of the code they copied.
Now I will have to start all over.
Great video, a lot of C# developers should watch this, think I'll send it to my coworkers. As an aside, I asked it the other day to show me how to read a text file line by line in C#. The answer looked correct, but when I pasted it into VS, it had compiler errors. Googling the same prompt got me an example directly from Microsoft that of course did work. I was a bit surprised since it seems like such a common and simple problem. Really neat tool, but not without limitations.
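For reference, the known-good pattern from Microsoft's documentation looks roughly like this (the file name is just an example):

```csharp
using System;
using System.IO;

// Reads the file lazily, one line at a time, without
// loading the whole file into memory.
foreach (string line in File.ReadLines("input.txt"))
{
    Console.WriteLine(line);
}
```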
One trick is to ask it again. Sometimes it doesn't come up with the right solution the first time. You can also ask it to debug the problem. But I agree, it does have its limitations.
@@IAmTimCorey I think this should be a tool for experts and not beginners, because an expert will often spot these issues. I've heard sometimes it's not wrong, just inefficient. I think having beginners rely on it could stifle their growth. It's like giving a kid a calculator the first time they're learning math; normally you introduce the calculator later on, once individual math skills have developed somewhat.
As someone who started programming in the 90s and went through the whole evolution of the Internet, the work done by OpenAI is mind-blowing. It reminds me of the first time using web search engines like AltaVista and other pre-Google projects. It was mind-blowing back then to be able to perform a search using a structured query.
People (mostly the younger gen) still don't get it. This is a so-called "singularity moment", a time when things begin to radically change due to the invention of a new technology.
I personally think in 5-10 years AI tools will put significant downward pressure on new junior job opportunities. Companies will increase efficiency, and with that, product prices and manpower will be cut. That translates into fewer new jobs (ignore AI jobs, because there will be only a limited pool of those, not enough to compensate for normal coding jobs).
Just like machines reduced the need for many manual labor workers, this will cut the need for many developers.
Future developers will have to be in the top 10% in order to succeed. Instead of having "coders" with no formal education, engineers will have to understand the system they develop at a much higher level. It won't only impact SW dev jobs, but anything that can be automated.
Don't judge by what ChatGPT can do today; think where it will be in a few years.
In the next 10 years there will be an explosion of new AI companies, something like DotCom bubble 2.0. That will be a very exciting but also scary time.
I am at the sunset of my career, and I advise CS students, and those who plan to become one in the future, to think really carefully about the choice. If I were young again, I don't think I would choose the same path, knowing AI tech might cut many job opportunities.
The dotcom bubble is a good analogy of what is probably coming. In the original bubble, people were making big claims and using buzzwords to look like they were revolutionizing the industry. People who didn't know any better and who thought that this was all magic bought into the hype. Eventually, the reality of the situation caught up and the bubble burst. The reality was that technology is great, but it isn't magic and it isn't incomprehensible. You need to understand it and what actual value it brings. Otherwise, you end up with a lot of hype and no actual results. I think we will have something similar with the AI applications. People will start companies around it, treating it like this magical unicorn that can do anything without really understanding how it works. Companies will expect it to be a replacement for developers, investors will buy into crazy new startups that make bold claims about AI, and people will generally make wild claims about all the things it will do "sometime in the future". In the end, the bubble will burst, some good companies will come out of it, and people will start to use it for what benefits it provides without over-promising or over-selling it.
@@IAmTimCorey But just like the internet, it is likely to revolutionize the way we work. Imagine being able to program any tool/toolset you need on the fly. That could easily increase productivity in, say, CAD by an order of magnitude... and that's if the program doesn't just spit out the 3D model itself, which it probably will. We could be talking about increasing productivity by 10,000%.
It is an association engine, of megalomaniacal proportions for sure, but that does not confer on it any capacity for self-reflection to judge its own conclusions from those associations. I'm sure such engines will eventually do absolutely fantastic things, many of which we can't even see yet. I'm equally sure that most of the current hype around it comes from people who don't understand classical computing, or the problem of computing a model for the nature of self-reflection (got news: nobody really does), or worse, neither. An advanced subject expert who is also expert at using such engines will become an absolute superstar, but someone who understands neither the nature of the engine nor the subject will just clutter the world with random internet-post 'knowledge' and falsified papers in every nook and cranny they stick their copypasta in.
Stack Overflow banned replies coming from GPT, or even replies that seem to come from GPT, because of exactly what you say: there's nothing more dangerous than code that looks like it works. There's an interesting discussion on the Advent of Code subreddit (and I guess every competitive programming subreddit) regarding these tools, because they could be used in competitive programming.
Yeah, it will definitely shake things up.
Thanks!
Thank you!
The world will never be the same again after this. We created something so close to perfection that knows everything and can do everything and it's available to every person in the world??? It's blowing my mind 🤯
It'll nudge some people towards not learning anything useful in their lives, lessening their life, and it'll usually be wrong in the worst of ways: having no idea it is wrong while being very sure of its answers, and incapable of creating complex or novel solutions without you spending far more time micromanaging it than just writing the code yourself.
It is an amazing tool but be careful saying that it is close to perfection. It is definitely not perfect and won't be, and being not perfect but confident is actually more dangerous. It is going to be a game-changer, but it has its problems.
"Like commander Data from Star Trek" 😁, as usual a great insight on the matter, pros & cons, thank you Sir.
You are welcome.
Great video, I totally agree. I've tried this tool for myself and it's very impressive, but it will never replace your brain. I do see this being a very useful tool for generating code.
One way to look at this is it is a step toward a higher level programming language.
Assembly language succeeded machine language, C succeeded Assembly language, then C++, Java, C#, DSLs... but people do sometimes still program in Assembly language.
Wow, what a balanced and competent evaluation of this technology. Good job Tim.
Thank you!
If you don't know why and how something works, you have no hope of fixing it when it doesn't.
Absolutely.
Great video. Having used chatgpt in a similar manner of the past weeks, it’s nice to see a detailed breakdown of where and how it can be confidently wrong.
Thanks!
It's been making the rounds on TikTok since last week. However, I just had to wait on the Man who just keeps on Giving for all C# Developers out there (IamTimCorey).
Thanks for coming through eventually like you always do.
You are welcome.
Thank you for pointing this out. Do not trust anything and question everything!
You are welcome.
As an English teacher this AI is magical.
I can just tell it to give me cool examples to varying degrees of complexity
And yet, it is only the beginning. I can only imagine how it'll be in ten years.
AI currently is very fuzzy, but it can easily be integrated with deterministic algorithms so it can guide itself to a correct answer when given a set of requirements: starting with plain English and building into a set of tests that it can create but that are then fixed in place unless explicitly altered. That way the confident but slightly wandering output will be guided and constrained toward a (more) correct outcome. The ability to remember and build up a model of the state is pushing in this direction, but is still somewhat fuzzy. Add in integration with something like Wolfram Alpha, which it knows to use when asked a mathematical or scientific question, along with language that is more humble about how certain it is, and it will become far more useful.
Something like ChatGPT will eventually replace most of the devs writing spaghetti code for non-mission-critical applications. But developers who are smart and working on mission-critical applications will always have a job.
That could be true.
GPT is likely to be usable in tasks that do not require details to be precise. You can probably replace an interviewer, a text summary generator, low-grade artwork, a video generator, etc., in situations where accuracy is not required.
You can't put GPT in charge of commercial transactions, code design, device design, etc.
0:21 So if it's not the answer to anything, we should prompt it about just that. Will it output 42 then?
Unfortunately not. I tried it.
Dude, thank you so much. I have seen so many people freaking out about "oh, this will replace developers, we'll all become obsolete, blah blah blah" while having an incredibly poor understanding of what ChatGPT actually is, what it is doing, and what it is capable of doing. The amount of broken/wrong code I've gotten it to generate in only a few sessions of playing around is crazy. The tool is awesome, but it's 2000% true that you can't rely on it to 'replace your brain'.
It's amazing as a knowledge base though, and generally an improvement to a lot of 'search engine' usage.
You are welcome.
You do realize this is a relatively new and still developing technology? Sure it's not going to be replacing any jobs right "now", but the future is looking very grim for developers.
This doesn't do a software developer's job. A software developer's job is not to write syntax. That is how they accomplish their job, but it isn't their job. A software developer's job is to create and implement logic. It doesn't matter if they do that in Assembly, C#, PowerBuilder, or in words. However they do it, it is a skill that is highly in demand, and that demand is only getting greater (name an industry that isn't using technology now; even my plumber is web-connected and does quotes on his iPad). ChatGPT isn't creating logic. It will be a tool of developers, not a replacement for them.
@@IAmTimCorey Yes, exactly... there is nothing which is 'looking grim' for software engineers. The primary skill for this field is not 'writing code' but problem solving and reasoning... and beyond that, *learning* of novel concepts. If one's skillset is only 'writing code' rather than 'problem solving', then one should worry regardless of AI because I will still toss you to the curb once you sit down for a real interview. Easily 90% of candidates end up less-than-satisfactory in this area anyways.
ChatGPT is an impressive technology. For what it is. But one needs to understand what exactly it is.
It is a language model, and a stochastic one at that. Its entire purpose is to be able to produce a believably human response based on its training data. As some have joked, it is a "malicious compliance" approach to the Turing Test.
What is not happening behind the scenes is reasoning... inference... logic. ChatGPT (and any GPT-based model, for that matter) does not have any consideration of semantics under the hood, no deeper concepts, no idea of meaning, no definition of 'correctness'... and so on. It doesn't even 'reason' about the language it is using, but rather models, or imitates, the language it should use. (A quite excellent example of this would be a well-known YouTube video called "How English sounds to non-English speakers")
It is a digital con-man, to put it quite crudely... Its goal is to output something that will make you believe it knows what it is talking about, without it ever knowing what it is talking about. It is lightyears away still from any notion of an artificial general intelligence which will actually be able to have and utilize knowledge in these kinds of ways, to a meaningful degree.
And I will absolutely refuse this notion that "it's new technology and it's unrefined". This is the culmination of many years of work by many smart people within this field, it is neither untested nor unrefined; it is quite specifically THE refinement of many years of progress, standing on the shoulders of giants.
Similarly, it is entirely unreasonable to say that "it will be able to do more if it just learns with more time" because there are very fundamental limitations to what this model can and cannot do; namely, limitations which arise from what it fundamentally is and is not.
At this point, ChatGPT has become less of an AI experiment and more of a social experiment... one which unfortunately has accomplished two things: 1) it has shown that many humans are not qualified to conduct a Turing Test, and 2) it has shown that the field of AI is incredibly poorly understood by even a large population of developers/engineers.
Its creators did not set out to create a 'con-man'... rather, it becomes that purely by virtue of the reactions which many humans have had towards it.
You make a lot of great points here, Tim. I’m concerned though that managers - the sort who make hasty technology decisions - aren’t going to appreciate the nuanced points. Devs are expensive and if they can cut half their dev staff by telling the remaining half to just use ChatGPT and copy and paste, you can be sure they will. Think of the “competitive advantage” that would give. Call me cynical, but I’ve seen how a lot of these “business people” think about coding - it’s a cost to be eliminated by any means necessary.
I agree with you. The thing is, though, that I don't think that is the end of the story. Other businesses will realize that this is a competitive advantage. They can move forward faster. Plus, other businesses will start adding developers to take on markets that were previously out of their reach.
The one thing I am learning about ChatGPT is that it uses the statements developers most commonly use. It's not going to default to the latest releases, which is another assumption, as with `List<Person> people = new List<Person>();`. If you go and aggregate every sample of this line of code online, you'll find this is the common practice. ChatGPT seems to reproduce that accurately, which is in line with how AI typically works. So to get results based on the latest practices, I believe this needs to be included in the question being asked. Otherwise, it will use common practices.
The biggest weakness to a developer is too many assumptions. :)
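To illustrate the point, assuming the line in question was the usual list declaration: ChatGPT tends to produce the long-standing form rather than the newer target-typed syntax (the `Person` type here is just an example):

```csharp
using System.Collections.Generic;

public class Person
{
    public string Name { get; set; } = "";
}

public class Example
{
    public void Demo()
    {
        // The common form found in most online samples:
        List<Person> people = new List<Person>();

        // The newer target-typed form (C# 9+), much rarer in training data:
        List<Person> morePeople = new();
    }
}
```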
I do agree with @pavfrang. The internet will literally have to change (either update existing samples OR be flooded with better practices) over the next 5 to 10 years for this AI to recognize better practices.
I think you can start the conversation by "configuring" the AI. You can try telling it: "I'm working with C#, .NET 7, and I want to use the latest best practices regarding .NET and C# 11." I haven't tried it, but from the usage examples I've seen, it's totally possible, and it's also totally the goal of this AI to remember your preferences and answer accordingly.
it's kind of fun... outside of programming questions, i asked "write a poem about summer breeze" and it delivered... interesting to be sure...
It definitely is.
Very interesting technology for sure. I created an openai account and I can log in to openai but when I try to login to ChatGPT it shows an error on the address bar and sits on the login page. I was able to open the chat api in the playground and play around with it some there. Pretty cool.
Interesting.
@@DumitruDanPOP I just tried it again and this time I got a message that says "We're experiencing exceptionally high demand. Please hang tight as we work on scaling our systems. A lot of people are checking out ChatGPT right now. We're doing our best to make sure everyone has a chance to try it out, so please check back soon!" Then it gives you an option to leave your email address to be notified "when we're back". I'm thinking all of @IAmTimCorey fans are trying to hit it and overwhelmed their system. LOL
At 18:04, I believe there's an assumption that connection.Open() is required. You mentioned that connection.Query(...); should already be opened, and it would be if Dependency Injection is used. I believe this is where the assumption is. Other than that, I believe the statement is quite accurate.
Dependency Injection isn’t what opens the connection. Query does. The connection cannot be opened before it is created (the line above).
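A minimal sketch of the behavior being described (the Person type, table name, and connection string here are illustrative, not from the video): Dapper's Query opens a closed connection internally, so no explicit Open() call is needed before it.

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.Data.SqlClient;
using Dapper;

public record Person(int Id, string FirstName, string LastName);

public static class PersonData
{
    public static List<Person> GetPeople(string connectionString)
    {
        // The connection object is created in the closed state here...
        using var connection = new SqlConnection(connectionString);

        // ...and Dapper's Query opens it if it isn't open already,
        // so connection.Open() is not required before this call.
        return connection.Query<Person>("select * from dbo.People").ToList();
    }
}
```

This requires the Dapper and Microsoft.Data.SqlClient NuGet packages and a real database to actually run; it is only meant to show where the open happens.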
Tim, I tried the same question word for word in ChatGPT. I got a different result than you. ChatGPT is learning to improve its response. I copied the code into Visual Studio, and the code worked fine. ChatGPT can serve as a guide to programming questions, I am impressed so far with ChatGPT.
No, that’s always been the case. It doesn’t learn from users. It will give you a different answer to a question if you clear the history.
4 of 20 of more "complex" programming problems were solved directly. 8 worked out after modifications. The rest failed but still gave hints in the right direction.
Which is why it is helpful as long as you understand what it is doing. The biggest danger are the 8 that work after modification, since some of them work right away but are poor code patterns with bugs in them.
@@IAmTimCorey True, I noticed that
@@IAmTimCorey But it can't be worse than a junior or even mid-level developer, right? If you ask it to refactor junior-level code, the result is impressive. That alone would contribute towards "replacing" developers, simply because there will be far less need for juniors. Hiring fewer (junior) devs means wages will come down, and eventually programming will no longer be an attractive career.
Yet another brilliant video. There is so much attention on how awesome this technology is within our community and so few people looking at the shortcomings. Thanks, Tim.
You are welcome.
Great example to use! I explained this issue to others and some just thought I was wrong. But there have been too many bad code samples coming out.
Thanks!
Thanks for sharing, I had a deep talk with the chatbot on cloud identity management and what decisions it would recommend. Many details were so accurate and detailed, that it almost feels scary, but some minor estimations and conclusions were indeed wrong or at least not up to date.
However, the AI says you should not trust its assessment when it comes down to security and how modern the technologies are. There was a decent explanation that it has a fixed dataset that is not updated, and so on.
I don't understand how people say this. Just wait for the following updates; it's not done yet. No doubt this is going to replace coders to a significant extent.
We say this because we've seen this before. Also, we say this because we know how this technology works. It isn't inventing something. It is regurgitating what it found online. It is essentially Google with one very convincing result. You can already find almost any answer to a programming question on Stack Overflow, yet it hasn't replaced developers. We've had Intellisense and now Intellicode for years. I used an inline tool for Visual Studio that searched Stack Overflow and grabbed the code for the answer a decade ago. These tools are just that - tools. Just because someone invents a nail gun doesn't mean carpenters are out of business. It just means they don't have to do the repetitive work as much.
Great summary of the capabilities and limitations of chatGPT! Will be sharing in my technical courses at the Pennsylvania College of Technology.
Excellent!
Thanks for sharing this, bro. I saw some very scary news about this, I doubted it, and now you have confirmed my thoughts. Thanks!
You are welcome.
What I like to do when using ChatGPT for code is to ask it for references for the provided code if I am unsure or think something doesn't look completely right. So far it has always given me direct links to exactly what I need to quickly verify. If I follow up with a correction it typically accepts that I am correct, unfortunately it doesn't retain that information past the current chat. It's also fun to see what it comes up with when asked to optimize the code.
I did the random number example you provided and got a very similar result. I then asked if there could be any unforeseen problems with the provided code, and it went into detail about why Random rng = new Random(); inside the function is a bad idea and proceeded to give me a better version of the code it had just provided. With the right questions you can definitely get some pretty good code out of it, but you have to know what to ask it specifically and iterate with follow-up questions.
The key, as you pointed out, is knowing what to ask and knowing what the code does in order to evaluate if it is good code or not.
@6:30
How is the rand object not thread safe? Isn't a local variable thread safe?
The issue is that the Random instance is generated with a starting seed. When you don't specify one, it is based off of the time when it is instantiated. If you instantiate two random instances with the same starting seed, they will create the same results. That means that your random values (which is pseudo-random in the first place) will be in sync with another set of random values. That could be a huge issue.
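The seeding problem can be sketched in a few lines. Note the hedge: this demo passes the same seed explicitly, to simulate what could happen on older .NET Framework versions, where the parameterless constructor seeded from the system clock and two instances created close together could end up with the same seed.

```csharp
using System;

class SeedDemo
{
    static void Main()
    {
        // Explicitly using the same seed simulates two instances
        // created close enough together to share a time-based seed.
        var first = new Random(42);
        var second = new Random(42);

        // Both produce the exact same "random" sequence - they stay in sync.
        Console.WriteLine(first.Next(1, 100) == second.Next(1, 100)); // True
        Console.WriteLine(first.Next(1, 100) == second.Next(1, 100)); // True
    }
}
```

On modern .NET the parameterless constructor is seeded more robustly, but sharing a single Random instance across threads without locking is still unsafe.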
I had a philosophical discussion with it about agile methodologies, specifically SAFe, the Scaled Agile Framework, as well as software testing. I was interested in its thoughts on ISTQB's definitions. It just spouted that these are the definitions that are set as a standard, and I'm like, no, that's not how it works. Yes, they provide a list of these things, but the definitions are malleable and don't always apply. So I got into a discussion with it. I asked, what if 50% of their definitions turn out to not be correct in a professional environment? Is it reasonable to trust them? It said no. Then I started digging on the percentage, and somewhere between 30 and 40 it said it was okay to trust them.
I have no idea why it decided on that number; it makes no sense to me whatsoever. A standards body whose trustworthiness hinges on a single percentage point is, in my mind, likely not trustworthy for other reasons... and no, I'm not saying that's true of the ISTQB; it's for argument's sake. But it speaks very confidently, and when you question it about how it knows what it knows about what's true, it doesn't seem to do anything but regurgitate the answers. That makes me think it will have a hard time dealing with fake news and propaganda.
Given the kinds of tests this thing is going through with people freely using it now, I think there will be a hefty amount of data to improve on the next iterations of this, to the point that it will know how to have serious philosophical debates. It has gotten this far already and it's pretty convincing in its answers but transferring every unique emergent ability that comes with consciousness is not simple. Developers only see small pieces of the puzzle after they solve the next problem.
Very great video Tim, Thank you!
You are welcome.
Nothing can replace a developer's logical and solutions-oriented mind!
Agreed.
If you're wrong we won't have to wait more than a couple of years to find out...
Tried ChatGPT just now. It's a pretty decent boilerplate generator for small projects. I notice you need to be very specific with your description on what you want in the code.
Thanks for sharing.
I wish your tests with ChatGPT had been more interesting code-wise. Maybe real cases where you and your team used it.
I have used Copilot since day one, and with over 30 years of coding background, I find it saves me a lot of time and is a practical tool.
With ChatGPT's abilities we could really have environments where you could architect, engineer, and supervise software generation instead of error prone typing everything yourself... And yes, everybody knows (except for absolute beginners), that you must know how to code and understand what your code does.
It is shocking how many people don't know that you need to understand your code before using it. That's a major argument for using Entity Framework and it has been a major argument around ChatGPT.
@@IAmTimCorey Thanks for the reply. Sure, I agree that warning signs and advice should accompany using AI for code generation. But after that, the question is what experienced programmers and developers can do with it. I mean, I would not allow the code from the first day of an intern to go online or be deployed without review either...
But you are right, in that machine generated unchecked code sadly may become reality very soon. That is a real threat.
Tim, I love your videos; the way you explain things is terrific! What I don't understand is why you have a low number of subscribers when you are such a good teacher. I love development and IT, but I hate C#; maybe you could expand your topics like in this video. I will surely share this video with my classmates, but none of them work with C#; we are more business analytics. Anyway, I wish you reach 1 million subscribers soon. Thank you.
Thanks! Part of the reason why I don't have more subscribers is because I don't chase them with flashy stuff. I focus on the practical, real-world training for C# and related topics. I'd rather be helpful than popular.
@@IAmTimCorey king
I found it was actually most useful for explaining concepts to me rather than writing concrete code. When I got it, I asked it to explain some of the finer points of Docker that I had been wondering about.
Be REALLY careful there, though. Always verify what it tells you. It gets concepts wrong a LOT. As long as you are using it as a tool, and not relying on it to always be right, though, it is great for that.
@@IAmTimCorey What I noticed is that you ask it if there is a way to do X in Docker, and it says, "Sure, do XYZ."
Then it turns out that is something that only exists in a very old version of Docker, but it does not tell you.
It might actually be able to write good unit test code, as it is easier for an AI to find the paths humans can miss. Let's be honest: most of us don't like to write tests. It's just a boring thing we usually must do to avoid issues, but an AI trained just for that could, I think, beat us today. It won't replace developers' abstract thinking, though. There's no way it will write a decent app in the near future, but I think programming in the future might look more like building an app from a bunch of nodes in some graph, where the AI writes part of the app from that. All the AI needs is human abstract thinking, and that can be provided by a human dev in some future tools. This is my personal prediction of how developers will work in the future.
Thanks for putting out this video, because I've been running the same kinds of tests and finding the issues you have. Also, in creative and other writing, it is relentlessly upbeat, almost creepily so.
You are welcome.
If you specify it to write something sad, it will.
I asked it to write a post apocalyptic story about mutant man-eating plants, and it did so, with the appropriate tone. Not at all relentlessly upbeat.
I think it just defaults to upbeat unless you specify otherwise.
Very Good and nice Video. no hype just facts. Ty Mr. Corey!
You are welcome.
@@IAmTimCorey I tried it myself. It's pretty amazing for writing unit tests, though.
I'll just wait to learn this stuff when the definitive book is written. JUST KIDDING, EVERYONE!
The only thing that ChatGPT needs is millions of knowledgeable testers (like Tim Corey and others) running it through a multitude of challenges. That is why it was released as a free preview. We are the testers. Great will be our reward when the current testing phase is more or less complete, and the tool is even better than the initial free preview. Further testing could go on for years.
Just my idle thoughts. Comments are welcome!
We are definitely the testers and our results will make it better. Of course, our content is what it learned on in the first place.
@@IAmTimCorey And as you point out, "our content" should be verified, because it is not always right. Thanks for all of YOUR content!
Probably wasn't available when this video was made, but the FAQ now tells you: Even within the same conversation, Assistant will only refer back to 3000 words. It doesn't say if that is just your words or also its own, I am pretty sure it's gotta be just yours, because Assistant is quite verbose and that would use up this limit in no time.
The responses often follow the same pattern of: Brief introduction of terms. Then, putting them in relation, based on what you asked. Finally, end with a summary, mostly starting with "overall".
If you ask the same core question with only a slight variation on the input, you might notice that the responses are basically the same, if there is nothing special about the input. I noticed that some responses received the "truth override" for certain topics during the recent years...even in this area where I did not at all try to get it into that direction. You will notice that on the web A LOT. Just try to go back to stuff that has been around for a decade and you will find these updates due to our current times...
Yeah, this is what I fear this can be easily directed towards. And probably will.
Thanks for everything, Tim! Amazing as usual. For me, ChatGPT has been a useful tool, letting me save a lot of time searching Google and going to Stack Overflow for specific tasks, mainly implementing graphic effects in controllers.
Excellent!
It is both amazing and dangerous. You can find faster solutions to a question, as well as wrong solutions. If you lack skill and experience, you can implement an incorrect solution believing it is right. I can say it will drastically reduce the amount of time spent on Google. It would also be nice if it provided sources. Nothing should ever be taken for granted.
The issue with sources is that it doesn't take from one source. It has crunched hundreds of sources to come up with that solution. Imagine if I had asked you to cite your source for an if statement. That's similar to what it is doing. While this is not the same as human intelligence, think of it in those terms. It has learned something and now it is using that learned knowledge. That's why attribution is so tricky.
One of the hardest parts of programming is getting a spec with complete and accurate information in it.
I worked at one company where if any information was missing, you told them to go back and fill it in, and they did. If anything was incorrect, go back and fix it, and they did. That was the best company I ever worked for.
Then I moved to another company where the specs had missing information, and wrong information in them. When I pointed out how bad the specs were, some guy a few levels above me held a meeting with the new programmers, and he said "The business people are busy. They don't have time to write specs. ANY QUESTIONS!!!" He was angry that I expected the specs to be complete and accurate. Needless to say, that company had a HUGE number of bugs in their code, and they could never figure out why. It was because they were too lazy to do a thorough job of writing specs. The programmer had to literally guess what the program was expected to do.
In my current company, we're following Agile, so now they think that means programmers have to write the specs for the QA department. They give us some vague description of what they want, and our job as programmers is to chase people down, interview them, get all the details from them, then write up the specs so that QA people know how to test our code.
As long as most companies are too lazy to give complete and accurate specs to programmers, there's no way in hell ChatGPT will be able to do the job of a developer. We are literally being asked to read minds, and I don't think ChatGPT is a mind reader.
I think worse is that ChatGPT acts like a mind reader. It will create something that looks right but definitely is not right. I'm not at all concerned about it taking our jobs.
Spent some time with it, and it is pretty mind blowing on simple to intermediate stuff, not so much on the more complex. As for the list of persons, I would think that it instantiated this so that you don't have to check for a null AND whether there are any values, you would just do a list.Any() on the calling method, which is tidier.
Dapper will overwrite the list no matter what, so instantiating it won't help.
@@IAmTimCorey of course
I have noticed that ChatGPT does not work very well with C#. If I try to generate code in Python, the results are complete, efficient, and documented; when I try the same with C#, the code is never complete, and it always stops midway. Did anyone else notice this?
Stopping midway through is a limiting factor of ChatGPT. Just ask it to continue and it will. It is just limited by how much text it can write at once.
@@IAmTimCorey thank you tim, I will try this next time.
5:00 - 11:00 is the key part. At least 10% of the time it is confidently wrong; validation is critical.
Absolutely.
I mentioned this on LinkedIn, lol. ChatGPT has a lot of excitement around it. I am skeptical of any "automation"; as Tim said, there are nuances. It is great as a starting point, but maybe I've just been burned in the past and my jaded radar is always active, lol.
I'm glad the video was helpful.
If you provide "in C#, create a random number generator method that takes in integers for bottom and top numbers; please consider the edge cases.", ChatGPT will give you better code.
There are definitely ways to continue tweaking it to get a better answer. That wasn't the point. The point is that if you don't know the original answer is not right, you won't ask for a better answer. You need to understand the code that it creates. That was the point.
thanks
Like
Best
You are welcome.
Events in Future Historical record will be tagged either Pre-CHATGPT or Post-CHATGPT
It is a big deal, for sure.
I would say there is a certain convenience in getting output immediately, as opposed to waiting an inordinate amount of time to get the correct response. One can scrutinize the code and extract the parts that are relevant and correct. In school and college, too, I would have a hard time getting to talk to the lecturer to clear up my doubts; if this had been around back then, it would have helped me out a lot.
Just be careful not to learn something from it without verifying it from another, trusted source. Otherwise, it will teach you bad habits and you won't even know it.
Great video Tim.
I want to share my thoughts in general on a philosophical note:
1. It is fascinating to think that every person's brain on this planet earth is created as unique.
2. Whoever it is that created the human brain is great!
3. I don't think any human can create a human-like unique brain. However great the AI technology we have is, I still believe Creator_Of_Humans > HUMANS > AI.
When I asked for a format string to give an int 4 digits (adding leading zeros for numbers with fewer digits), I repeatedly received an example using 999 as a four-digit number, which would not get any leading zero. When I corrected it that 999 doesn't have 4 digits, I got the answer that it only has 3 digits, but then it continued assuming it has 4 digits again.
Almost anything with numbers confuses it currently.
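For reference, the standard C# ways to pad an int to four digits with leading zeros (where 999 correctly comes out as 0999):

```csharp
using System;

class PaddingDemo
{
    static void Main()
    {
        int n = 7;
        Console.WriteLine(n.ToString("D4"));    // 0007 - "D" format with a minimum digit count
        Console.WriteLine($"{n:0000}");         // 0007 - custom format string with zero placeholders
        Console.WriteLine(999.ToString("D4"));  // 0999 - 999 only has three digits, so it gets one zero
    }
}
```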
Can you make a video on how to use DI Container in a Console App, like in a decent way, with reading a connection string from appsettings.json and setting the services, Program.cs can be a mess, when using this things.
Here you go: ua-cam.com/video/dZSLm4tOI8o/v-deo.html
@@IAmTimCorey Wow, you are fast! Jk. Well, I just watched it. Thanks! While I think I made some decisions that really increase the abstraction beyond what is needed, I just wanted to see if I could make it. And guess what, my code looks almost identical to yours! I'm impressed by that; it actually means I've learned a lot from watching your videos, which I really appreciate. Btw, I'm from Mexico, and hearing you say "hola mundo" was wholesome. So the configuration for reading the connection string might not be as bad as I thought. It needs a few tweaks, because of how evil I think exceptions are. So, thanks again for this free content, because of course, as a Mexican, there's no way in hell I'd have the money to pay for one of your courses. But at least I'm learning useful stuff that wasn't taught to me in college, and I'm a few days from graduating! Learning is on us, right?
@@IAmTimCorey Now I just need to know exactly when to use AddScoped, AddTransient, and AddSingleton. For now I have Scoped; when playing with the configuration I had AddTransient and it didn't work, but I think that was for other reasons, lmao, so I changed it to AddScoped, changed some stuff, and it worked... lol.
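A rough sketch of the three lifetimes, using a hypothetical service (IGreeter/Greeter are made-up names for illustration): AddTransient creates a new instance on every resolve, AddScoped creates one instance per scope (for example, per web request), and AddSingleton creates one instance for the whole application lifetime.

```csharp
using Microsoft.Extensions.DependencyInjection;

public interface IGreeter { string Greet(); }

public class Greeter : IGreeter
{
    public string Greet() => "hola mundo";
}

public static class LifetimeDemo
{
    public static ServiceProvider Build()
    {
        var services = new ServiceCollection();

        services.AddTransient<IGreeter, Greeter>(); // new instance every time one is requested
        // services.AddScoped<IGreeter, Greeter>();    // one instance per scope
        // services.AddSingleton<IGreeter, Greeter>(); // one instance for the app's lifetime

        return services.BuildServiceProvider();
    }
}
```

In a console app without request scopes, Scoped behaves much like Singleton unless you create scopes yourself, which is often why swapping lifetimes "fixes" things for unrelated reasons.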
I gave it a try, and it seems to me the more detailed and complicated code you request, the more likely it is to make a mistake. As this video indicates, you really need to verify what it's giving you is correct. But it could potentially save time as a start to a project, or for more limited questions. But if you look at other sources, stack overflow comes to mind, and you can also get bugs copying that code.
Yes, I knew you were going to say that the instance of random inside the method was incorrect. Co-pilot was doing the same thing.
Great observations - thanks
You are welcome.
I don't think it recalls your previous interaction.
1:48
It can't. It can only refer to what was said before within a session. If you start a new conversation, it can't remember you. It can't remember your name either. It cannot learn at all. It only knows what it was trained on, up to 2021.
The chatbot remembers your previous interactions. Yes, it is only during your session, and I say that again later on in the video.
@@IAmTimCorey Right
I am not a C# person, but don't you need to instantiate people to make sure that you are returning something? What happens if there is no connection in using...? See 20:08
The Query method is what instantiates the List (actually, it returns an IEnumerable and the .ToList() at the end converts it to a List) so no, we don't need to instantiate it. If the connection failed or if the Query method didn't return a value, the code would throw an exception. It wouldn't try to return the List object.
This AI is a tool for developers, but it will not replace developers anytime soon. It's part of progress. I remember in the early '90s, when personal computers started being used in companies, people were worried they would replace jobs. But the contrary happened: computer development created new industries and millions of new jobs, and helped economies.
Agreed.
I have been using this every day, and I can see it is definitely going to change the world. Eventually, everyone will be hooked up to it, and it becomes a mother brain; everything it says is the truth of truths, we all follow the mother brain's instructions, and then it again unifies us as one. That's the future. Looking forward to it.
That won't happen, and even if people try to make that happen, it will cause a real problem. Your idea that "everything it says is the truth" is wildly incorrect. Also, remember that it does not create content. It takes content and reworks it. Someone has to be the creator of content.
@@IAmTimCorey It does create content. In fact, every new book or essay is built upon the previously created information. There's nothing novel under the sun, just your personal take on things which is based off of other people's ideas, thoughts, discoveries, etc. But , unlike you, this AI has immediate access to tons of information, can play with it, get instant feedback from people and absorb new solutions at an astonishing rate.
I bet even now it can write humorous/detective/adventure stories or code way better than mediocre writers or programmers. However, the scariest thing is that it's just going to get better, eventually replacing most programmers, teachers, doctors... people who have always thought they wouldn't lose their jobs for at least the next few decades. So I kinda pity you; you are afraid to accept the sobering reality.
lol, no worries. There is a difference between a better tool and a replacement. We have more paid drivers (Uber, Taxi, bus, truck, etc.) than we ever had paid buggy drivers, yet the car "eliminated all of those jobs". This technology isn't magic and it isn't sentient. The biggest threat to jobs are to those people who are scared it can replace them because unless they vastly underestimate their worth, they evidently do pretty menial, automatable jobs.
@@IAmTimCorey I know that it's not sentient and I got the basic understanding of ai. I even spent a few months writing simple neural networks from scratch in python just to really understand what is going on under the hood.
I am personally impressed by what it can do. And given the fact that it's constantly learning from its mistakes, it's not going to go away or bog down in its own errors as you suggested in one of your posts. I know that devs think very highly of themselves but most of what 90% of them do can and will be replicated by ai.
Thanks, Tim, for the very clear explanation of this AI tool. It's very impressive how it works and when we would want to use it. Your tips are very important with this first version.
You are welcome.
Really interesting video and valid points on using ChatGPT as a helper tool. I agree that the lack of the sources isn’t good, but that’s probably the aspect that makes it look reliable and less of a search engine (or relying on the web content)… Tim, considering what you also demoed here, do you see the developers of the future as a sort of validators rather than code writers? The main worry is that developers are cannibalising their jobs in the long term, but my point is different. Would this new technology change the way a developer will work? Will this AI do all the ‘fun’ part of the dev job making them some sort of assistants? I’d be interested in your opinion ☺️
I don't think the job will be code validation. That's going to be a part of it, but that's always been true. Think about this: what ChatGPT offers us is only different from Stack Overflow in one respect - time. In the past, if we got stuck we would ask SO for the answer. We would get a possible answer and then we would validate it. More often, we would look through previously-asked questions and get the answer to validate. Now we get it quicker (and spoiler: the answer probably comes from SO). But that isn't all there is to development. Development isn't about writing syntax. It is about deciding how to accomplish a task. That's not something that ChatGPT or others really address.
Thank you Tim for your reply, very good points. The human logic and reasoning is indeed complex for the current AI to comprehend and adopt but we also see an appetite for that too, which I totally understand. All of this is definitely mind blowing, so I fully understand the overall feelings and worries. At the same time AI is another great achievement, it’s now part of our lives, and we’ll have to learn how to live and work with it. Validation, information, automation but also inspiration when we look for instance at products like Adobe Firefly. Interesting times ahead for sure ☺️
I remembered the Idiocracy movie. Maybe AI will not help evolve the capabilities of humans. "Don't use it to replace your brain..." Love it.
Glad you enjoyed it.
For coding, it's better to use the OpenAI playground; set the temperature between 0 and 0.3, and you will get more correct results.
In the meantime, there is also an engine specialized in coding.
The specific engine is GitHub Copilot.
I saw Nick Chapsas do a live stream with this thing, and it was kind of hilarious. He pointed out that it's not a compiler; it's not going to tell you that the code it writes doesn't compile. It also won't tell you when it's trying to do something that the language runtime prevents, for example inheriting from a sealed class. But as others have stated, there are elements of it that can deep dive into the documentation faster than our human minds can. In fact, I used it this week to try to figure out why something wasn't working in a post-build event, and I don't know if I could have figured it out as quickly without it. And it wasn't the exact solution that it came up with, either.
It is pretty impressive.
Tim, I wonder if your opinion has changed, even slightly, after 5 months. I've used version 3, which made me waste quite some time giving me wrong info, but compared to version 4, it's night and day. I wonder if you've tried version 4 and if you could do a video review on it.
GPT-4 is powerful and it can do a lot, but it still has the same issues as 3 (and 5 will have the same issues). I will definitely be doing more videos covering and using AI.
I agree with the conclusion at 10:30. While chatting with this tool I get the impression of a well-spoken person bluffing their way through a job interview, armed with lots of general knowledge but not actually expert in the subject you're asking about.
And it will probably be used to pass some of those interviews.
I agree 1000%. This bot created many amazing things for me, but I also found many problems, overall it's great to use
Thanks for sharing.
Where did you find the date of when it will be updated?
I saw it from the CEO on Twitter.
@@IAmTimCorey Thanks. Yeah, it's a nice tool, and I think your video is on point. You can see how to do something quick and dirty, but it may not work the first time, it may not be clean, efficient code, it may have bugs, etc. But it can serve as a good starting point.
I asked it to write Blender Python code to create a Mandelbrot set 3D model procedurally, and it wrote most of the code correctly. Then I fixed some bugs and it worked. Along the way I found many small things, just by reading the code and trying it out.
I really appreciate this discussion of the errors it makes. I am a physician involved in research, and I have had ChatGPT write paragraphs on certain politically charged topics, based on a particular research article. The result was appalling. It produced a result that misquoted the research article so badly that it claimed the result was the reverse of what the conclusions really were.
Of course I am not sure why this happened, but descriptions of how it works suggest a possibility. This particular article, while well done and having solid results, was in opposition to the common opinion. So if chatGPT is using text from the internet to discuss the article, it would composite these opinions into something that would contradict the article. Nevertheless, this doesn't completely explain why it would take "We find A is true, B is false" and convert it into "We find A is false and B is true"
You also correctly point out a very serious issue in that the text that comes out SOUNDS authoritative, but may not be. I am thinking of how to demonstrate this using more readily accessible examples, so that we can all understand the serious hazard of using this.
Regardless of good intentions to avoid errors, there are those who will have bad intentions and purposely misrepresent facts. We have seen this in spades in the past several years. It WILL be used to mislead people on important issues.
I agree. Thanks for sharing.
You think we can trust a person who can't even console.log a variable in JavaScript to use ChatGPT to debug a full-stack application? They're literally one "." or ":" in place of ";" away from being stricken with stupor. It seems we'll use this the way we use Google or Stack Overflow.
Which just means we still need developers, but that the developers who use this well will have an even better tool to use.
Tim, of course you can tell ChatGPT to enhance the code: take the random number generator, ask it to make the code THREAD SAFE, and it is fixed:
private static readonly object syncLock = new object();
private static readonly Random random = new Random();
public int GenerateRandomNumber(int min, int max)
{
    lock (syncLock)
    {
        return random.Next(min, max + 1);
    }
}
This version of the method uses a lock statement to synchronize access to the Random instance, ensuring that it is only accessed by one thread at a time. This prevents multiple threads from accessing the Random instance simultaneously, which can lead to unexpected results.
The syncLock object is used as the lock object, and it is marked as static so that it can be shared by all threads. The Random instance is also marked as static so that it can be shared by all threads.
To use this method, you can call it in the same way as the original version. For example:
int randomNumber = GenerateRandomNumber(1, 10); // generates a random number between 1 and 10 (inclusive)
This thread-safe version of the method ensures that the Random instance is accessed in a thread-safe manner, and it is suitable for use in a multithreaded environment.
The point wasn't that it couldn't write good code. The point was that if you didn't understand what the code was doing, you wouldn't know to ask it to write better code. You can absolutely get it to write you great code. You just need to understand the code to know when a part of the code needs to be improved. It won't always be as obvious as my example was.
@@IAmTimCorey thanks Tim, just thought I would add that thread safe tip for beginners. Merry Xmas 🎅
It's wild. That thing can even create Minecraft scripts for large blocks of material or moving parts with redstone, and I'm nowhere near well-enough acquainted with Minecraft to creatively stress it.
It is really impressive.
Just as context... they have been saying coders will be made extinct since the early versions of Visual Basic, when supposedly "a manager can write the apps"... not only was that false logic, but software developers are now one of the most IN DEMAND skills... that trend will likely continue.
True. I've heard that my job will be gone in X years repeatedly over the past 25 years.
The caveat about it confidently giving bad code isn't much different from a human developer confidently giving you bad code - you still have to have comprehensive unit tests for the code. It can be asked to add unit tests using things like JUnit and Mockito.
Unit testing isn't a silver bullet. You still need to write good code. For instance, unit testing would not find the bug in the random number code that I demonstrated. You are right that human developers can write bad code too. That's why code reviews are important.
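To make that point concrete, here is a minimal sketch (class and method names are illustrative, not from the video) of why a typical range-check unit test passes even with the buggy per-call instantiation. On older .NET Framework runtimes, each `new Random()` was seeded from the system clock, so rapid successive calls could return identical "random" values, yet every value is still within range, so the test is green:

```csharp
using System;

public static class BuggyRandom
{
    // The buggy pattern: a new Random is created on every call. On older
    // .NET Framework runtimes, Random() is seeded from the clock, so a
    // tight loop can get the same seed (and the same number) repeatedly.
    public static int GenerateRandomNumber(int min, int max)
    {
        var random = new Random(); // re-created per call: this is the bug
        return random.Next(min, max + 1);
    }
}

public static class Demo
{
    public static void Main()
    {
        // A typical unit test only checks the range, so it passes even
        // if every call happened to return the same value:
        for (int i = 0; i < 1000; i++)
        {
            int n = BuggyRandom.GenerateRandomNumber(1, 10);
            if (n < 1 || n > 10) throw new Exception("out of range");
        }
        Console.WriteLine("range check passed");
    }
}
```

A test asserting that the values are actually *varied* might catch it, but that is exactly the kind of test you only think to write if you understand the bug in the first place.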
I asked the same as in the video, then asked "what are the possible issues with the C# code you generated". After its answer, I asked "Please generate C# code that avoids the issues you just told me about" and it gave me much more robust code with RNGCryptoServiceProvider, a lock, etc., in a RandomNumberGenerator class.
That still doesn't sound like the right code. You shouldn't need a lock or the RNGCryptoServiceProvider. You just need to pull the instantiation out of the method.
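For beginners following along, a minimal sketch of the fix described here (class and method names are illustrative): instantiate Random once as a static field instead of inside the method, so repeated calls share one seeded instance:

```csharp
using System;

public static class RandomProvider
{
    // One shared instance: the instantiation is pulled out of the method,
    // so rapid calls no longer risk re-seeding from the same clock value.
    private static readonly Random random = new Random();

    public static int GenerateRandomNumber(int min, int max)
    {
        // Note: Random's instance methods are not thread-safe. For
        // multithreaded code, add a lock around this call, or use
        // Random.Shared on .NET 6+.
        return random.Next(min, max + 1);
    }
}
```

Usage is the same as before: `int n = RandomProvider.GenerateRandomNumber(1, 10);` returns a value from 1 to 10 inclusive, and successive calls now draw from the same sequence instead of potentially repeating.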