Dude, thank you for not using a click-baity title like "THIS IS GOING TO OBLITERATE PROGRAMMING JOBS! YOU JUST BECAME IRRELEVANT". I've been seeing that crap all over my feed and it's just annoying.
ChatGPT is helping me learn a new language. Rather than have it create an entire solution, I ask about specific aspects, such as how to convert decimal to hexadecimal, and it will give me the standard function to use. This way, it fits into my way of thinking. Once done, I can ask for recommended improvements and incorporate them into my next idea. It's like having my own mentor that doesn't make me feel stupid for not knowing something.
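For example, the decimal-to-hexadecimal question has a few standard answers in Python (these are the kinds of stdlib one-liners it tends to suggest):

```python
# Built-in ways to convert a decimal integer to hexadecimal
n = 255
print(hex(n))          # '0xff'  (with the 0x prefix)
print(format(n, 'x'))  # 'ff'    (bare hex digits)
print(f"{n:#06x}")     # '0x00ff' (zero-padded to width 6 incl. the 0x prefix)
```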
@Onya Malloy You are spot on! I had issues with mentors who weren't patient with me while learning JavaScript, but with ChatGPT it's like I have a personal trainer who is always available for my every whim and fancy! It's everything a newb programmer could wish for and more, because it helps so much with fundamental programming concepts.
I gave it a try at work this past week in three different scenarios:

1. I took a multithreaded code snippet that an experienced colleague wrote and asked it to analyze it. It suggested a different API that we weren't aware of, which resulted in a better, simplified version. We tweaked it a bit, but it's safe to say the AI was quite useful here.

2. I had to do some simple data processing and figured a bash script would be the fastest way to go about it, but despite using bash from time to time for over a decade, I simply can't remember the syntax. I asked the AI to generate it, and despite it getting some things very wrong (it would have deleted some important data; certainly not something you'd want a total novice to just "use"), I could easily fix those parts, and it's safe to say it saved me googling time. Dangerous, but useful in experienced hands.

3. We were seeing an unwanted behavior change after updating a UI framework, and we couldn't find anything in the official API that would help us fix it. We asked the AI how to get our desired behavior, and it stubbornly suggested an API that simply didn't exist. Despite being asked "are you sure XYZ exists?", it would proudly say it does and that we just need the correct library version (lies!). It wasn't until we called it out and stated "no, XYZ doesn't exist!" that it admitted it was wrong and offered a different solution (which also turned out not to be useful, but at least it used an API that exists). We ended up fixing it ourselves in a totally different way, so in this instance the AI was unhelpful and quite misleading.

All in all, I'm convinced ChatGPT is already useful as a non-deterministic analysis tool and simple code generator. I would gladly use it now when doing code reviews; I'd love to see the results of "Analyze this code" as an assist to me.
I'd even be happy if our juniors used it with a "Can you improve this code?" prompt before sending me PRs to review, since I'm confident the AI could give useful suggestions there and reduce dev iteration cycles.
I had a similar experience with the JsonSchema library. The lib is horrible if you want to return an array of field->error results for validation problems, so most people basically just accept sub-optimal output. None of the Stack Overflow examples even came close to doing what I wanted, and most weren't capable of handling anything but a payload without sub-objects (which is almost no JSON payload?). So I tried sending it over to ChatGPT and it spit out a working example that, while not completely correct for what I needed, set me on the right path. Such an incredible tool for assisting with research, because it's a bit like talking to a colleague who knows wtf they are doing and using them for rubber-duck-style debugging.
Same frustrating experience you describe in 3. I know almost nothing about DevOps, and I asked the AI to generate a GitHub Actions .yaml to deploy a cloud function on push, but I discovered, after multiple failures from trusting its output, that most of the statements it wrote simply didn't exist.
I tried replacing Google with ChatGPT at work today. Holy moly, I am blown away. I asked for things like one-liners with my specific values and flags, and it completely outdid itself. I honestly just copied and pasted the commands, instead of googling, then taking the examples and adding my own values and such.
@@ArjanCodes I played some more with it and it is frustratingly good. I've wanted to start learning some image processing in Python, so I just asked the AI to write a Python script that detects changes in two similar images. It generated a third image that highlighted the changes perfectly. Then I got more curious. I asked it to take a video feed and highlight the motion... and again it outdid itself. One minor bug that any monkey could have solved though, so I'm still very impressed. Lastly, I just told it to lower the sensitivity and create a highlight box over the object in motion instead of highlighting the pixels that changed... it understood that from just our chat context?! From a very bad MacBook camera it can see through small holes in my balcony and detect the ocean moving behind it. I am STUNNED, to say the least. I am also frustrated at myself, because I wanted to be able to code this up without help from an AI. But I do so much coding at work that I honestly don't want to burn out doing home projects as well. But maybe this can catapult my learning.
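The core of the "highlight box over what changed" idea is simple enough to sketch without a full CV library. This is my own minimal NumPy version, not what ChatGPT generated (a real script would likely use OpenCV for the video feed and drawing); the threshold value is an arbitrary choice:

```python
import numpy as np

def change_bounding_box(img_a, img_b, threshold=30):
    """Return (top, left, bottom, right) of the region where two same-sized
    grayscale images differ by more than `threshold`, or None if nothing changed."""
    diff = np.abs(img_a.astype(int) - img_b.astype(int))
    ys, xs = np.nonzero(diff > threshold)
    if ys.size == 0:
        return None
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())

a = np.zeros((100, 100), dtype=np.uint8)
b = a.copy()
b[10:20, 30:40] = 255               # simulate motion in one region
print(change_bounding_box(a, b))    # (10, 30, 19, 39)
```

Lowering the sensitivity corresponds to raising `threshold`; drawing the box is then just a rectangle at those coordinates on each frame.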
These days I've been playing around with ChatGPT a bit, and my conclusion is the same: it's very helpful for support, best in fields where you have an understanding yourself. Sometimes it can express things better than me, more concisely and clearly. And when it's wrong, I can spot the mistakes and correct them, or point them out to ChatGPT and have it correct them itself. But in fields where I lack knowledge, it's hard or impossible to tell when ChatGPT returns a bullshit answer. In that case it's dangerous. So never trust it blindly, but it is a productivity booster, a kind of AI-assisted pair programming. It can save lots of time, but it doesn't replace the training and knowledge needed to do things (and this doesn't only apply to code). For me it looks like it replaces Google for many things, because I can ask very specific questions and get a useful response, or if it's not useful, drill down deeper in the chat. Finding results on Google for specific questions like that can be very hard.
Can confirm. It was impossible to get it to do some ASCII animation in assembly, because I don't know shit about asm, so I couldn't fix the issues. And the bot is quite limited at fixing its own code; you end up in a flip-flop state, or it gives you a "solution" that's already present.
As a third-year SE student at university, this is my experience with ChatGPT. Blown away by its ability; it seems able to do roughly everything I've learned in my first three to four years of programming. Here's what I tried with it:

1. The first thing I asked was to model something in a Python class. Looking around my room I chose my phone, and very quickly it output a Phone class that stores the phone type (Apple, Samsung), price, and a sample phone number, all with getters and setters. It also included functionality to send and receive messages to/from other Phone objects without even being asked. Asking ChatGPT to make the class more complex, it added a batteryCharge variable along with appropriate getters and setters, as well as a contact list, with a function to find a certain contact and one to get all contacts. Just that had me really impressed, but it keeps going.

2. As I'm finishing up my DBMS class, I asked it to imagine we wanted to store these Phone objects in a SQL table: how could we do this? Since we were already working with Python above, it instantly gave me step-by-step instructions for connecting to MySQL in Python (necessary pip installs, correct imports). It also provided sample code for creating a couple of phones (all with different attributes) and inserting them into the DB. For kicks I asked it to visualize what the SQL table would look like after those inserts, and sure enough a pretty table is output.

3. Alright ChatGPT, give me a web API using FastAPI in Python with the ability to make queries to the database we just created. Once again a pretty much perfect code sample is output: controllers for update, create, delete, and retrieve, with correct SQL queries within the functions. Also instructions for installing FastAPI, and a description that seemed to be spot on; it mentioned that it is lightweight and not meant for large-scale projects.
I've also just finished a course where we used Java and Spring Boot, so I wondered if it could convert this FastAPI code to a PhoneController class in Java for me. Again, it does it with no issue.

4. One more very specific example. The other day I was working on my computer architecture course; in that class we use a textbook that has its own language (basically a subset of ARMv8 and x86 instructions, to my knowledge) called LEGv8. When I asked ChatGPT simple questions about LEGv8, it was giving answers that were pretty accurate. Just crazy to me.

One issue I have run into a few times is that ChatGPT will just stop right in the middle of a code example. I'm not sure what causes this, but sometimes rewording your question helps. In one scenario I asked why it stopped in the middle of the file. It said, well, if you want the entire thing, please say that (lol). So I rephrased the question asking for the entire example, and the next output was complete.

tl;dr: ChatGPT has no issues doing these things: generating a class in Python and making it more complex; creating a SQL table to insert these objects into, visualizing the table, and creating a web API to access the table; switching from one API framework to another, or maybe better said, one language to another.
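For a sense of scale, the class-plus-SQL part of the exercise above fits in a few lines. This is my own sketch, not ChatGPT's actual output: the attribute names are guesses, and I'm using the stdlib sqlite3 module instead of MySQL so the example is self-contained:

```python
import sqlite3
from dataclasses import dataclass, field

@dataclass
class Phone:
    brand: str            # e.g. "Apple", "Samsung"
    price: float
    number: str
    battery_charge: int = 100
    contacts: dict = field(default_factory=dict)

    def add_contact(self, name, number):
        self.contacts[name] = number

    def find_contact(self, name):
        return self.contacts.get(name)

# Store phones in a SQL table (in-memory database for the example)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE phones (brand TEXT, price REAL, number TEXT)")
phones = [Phone("Apple", 999.0, "555-0100"), Phone("Samsung", 799.0, "555-0101")]
conn.executemany(
    "INSERT INTO phones VALUES (?, ?, ?)",
    [(p.brand, p.price, p.number) for p in phones],
)
rows = conn.execute("SELECT brand, price FROM phones ORDER BY price").fetchall()
print(rows)  # [('Samsung', 799.0), ('Apple', 999.0)]
```

The FastAPI layer on top would just wrap queries like the SELECT above in route handlers.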
I'm not a coder; I have never been able to grasp the concepts. ChatGPT can keep dumbing down concepts for me and doesn't get annoyed at how challenged I am. It gives me relief knowing that I can pursue learning to code now.
I foresee having to debug AI-generated "solutions" in the near future. Also, AI is trained on existing code written by developers. As the human-authored training set dwindles, I wonder how these models are going to handle overfitting.
Dwindling training data isn't necessarily bad. It will reduce redundant questions while still allowing for new or rare ones, because the model won't handle those as well; that will surface more dialogue about rare situations and cut down on "how do I make a hello world app" and other asked-a-million-times questions.
My first idea having seen this thumbnail was to show GPT some code and ask it to improve cohesion. I knew you’d try it! One of the appeals of ChatGPT is that when you notice something you don’t like, like using the older string format style, you can correct it. I tried having it write a program for Fanuc robots. It started with what looked like CNC G code, but I showed it an example program and corrected it a couple times and it got extremely close despite clearly never having been trained on that language. I wonder if one of the next big applications for AI programming is documentation. Like it seems like it could generate class diagrams and function descriptions without relying on specific formatting.
I find the part where the AI would generate the machine code the most amazing. You could have a very expressive programming language where you explain what you want, go back and forth with the system to figure out how you want to deal with the specific edge cases, and then "render" this model to create the program. Maybe it could even become more visual or spatial, where you draw out your ideas or visualize together with the AI.
I think the long-term trajectory of this is to find a new optimum in balancing writing-output and rigor/control. Right now people don't want to simply generate code from telling an AI a natural-language sentence, because they fear that the code is actually not what they want and it's no faster to modify AI code than to make it from scratch. But if it gets good enough, it would be just like asking a human to write code for you, which you might then have to review. At some point you overcome your fear of overlooking some bug and just accept the fact that code will never be 100% perfect, but good enough product is still useful.
I am glad I found your channel a few months ago. You cover all the topics I am interested in, and this is no different. I have been using ChatGPT for the past week and I am truly amazed.
I wonder how well this AI can translate one programming language to another; that seems like a perfect task for a bot that understands code. It's, in theory, a simple task if you understand both languages, but a very cumbersome one.
Product manager be like:

*1st week*
PM: I need a solution that solves the problem A.
ChatGPT: That is the solution A to the problem A.

*2nd week*
PM: Ok, so the solution A for the problem A also needs to handle the problem B.
ChatGPT: This is the solution AB, that solves the B problem within the solution A.

*A few weeks later*

*20th week*
PM: The solution ABCDEFGHIJKLMNOPRST is great, but the client expects the IJKLMNOPRST to work a little bit differently, more like KLNOPSUWXYZ. I don't think it's a big change.
ChatGPT: An error occurred.
PM: *Retry*
ChatGPT: An error occurred.
PM: *Retry*
ChatGPT: Did I stutter?
This is perhaps a much improved Stack Overflow. It is writing from templates. "I have seen this example, and this other one, and this other one, so here is something like those." It can't maintain the software that it writes. It can't iterate a better solution. Anything that is sufficiently complex will still, for the time being, require someone who actually knows what they are doing. Giving this to a non-coder and telling them to use it to write business-critical software is not going to work well. If you know what you are doing in the first place, this could be a very interesting tool.
I’m excited about adding a new tool to the toolbox. The refactoring and testing is particularly cool. I’ve been using GitHub copilot for about 6 months. Recently I’ve been using comments in the code to give prompts to copilot similar to what you’ve done with ChatGPT. For more complicated requests, I’ll write a function name, signature, and return type, then use a doc string to give examples of the behavior that I’m looking for. The additional explanations in the comments often make the difference between copilot suggesting garbage or some really helpful suggestions.
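The signature-plus-docstring prompting pattern described above might look something like this; the function is a made-up illustration of the style, with the body filled in the way an assistant might complete it from the examples:

```python
def snake_to_camel(name: str) -> str:
    """Convert a snake_case identifier to camelCase.

    Examples given as hints for the code assistant:
        snake_to_camel("user_id") -> "userId"
        snake_to_camel("http_response_code") -> "httpResponseCode"
    """
    # Split on underscores, keep the first word lowercase,
    # capitalize the rest, and join them back together.
    head, *rest = name.split("_")
    return head + "".join(word.capitalize() for word in rest)
```

The docstring examples constrain the behavior (first word stays lowercase, no separators in the output), which is exactly the extra context that turns a garbage suggestion into a useful one.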
It will definitely be nice to automate most unit testing. But IMO the biggest challenge in development is getting the specifications right. It's a rare day when a user hands you a whole leetcode-style prompt. If you describe the specifications with enough precision to be complete... you're coding. That's essentially what modern high-level languages do: fill in some of the minute details so we don't have to. Models like these will continue that process and fill in less-minute details. But there is a limit to how much design decision-making can be automated away. The prompts and results need to be validated. And I think the ideal-scenario payoff is still going to be marginal; something along the lines of having a built-in max() function instead of rolling your own, but able to generate simple algorithms on the fly that would have been rote work.
You should be very worried. It's not there yet, but think about what they can achieve if they apply it to specific languages/frameworks. I imagine a startup will pop up and take on refining and optimizing the current model for a single language.
Great video! Love ChatGPT and glad to learn some nice ways of using it. On a slightly separate note, what is your keyboard? I really love the sound of it! :)
I've tried to get it to write several simple Python scripts (maybe 20-40 lines of code) and watched it run into a variety of problems. It would try to perform operations on undefined variables, or even completely ignore important parts of what I was asking it to do. There's a lot of hype around this thing, but it feels very much like when Siri first came out: it blew everyone's minds at first, and people acted like it was "true AI" on par with a human. Then, once the hype faded, we realized it wasn't anywhere near that. There are inherent limitations in any deep learning model, and this thing absolutely has an upper bound on what it can do. It's not replacing engineers or data scientists anytime soon.
I tried it for Python and it seemed to do pretty well for me. It seems to be a GREAT tool for quick learning, like having a tutor that sometimes gets stuff wrong, which is FAR better than using Google trying to find an answer or slogging through documentation. For example, I had it show me code for using the pythonocc package to display STL models with PyQt5, and had it show me various options for changing the display window, such as background color, model color, and camera and navigation inputs for rotating and moving the models.
Stack Overflow is way more valuable, responses are given by experts with context and real world experience, and those responses are validated by the community and updated over time.
For sure, but I guess most of the queries on Stack Overflow are trivial, and you can find a decent answer using ChatGPT. For the more complex ones, Stack Overflow will remain the main source.
You didn't need to resend the code when asking it to write the unit tests; you can just say "the above code", or my original "Luhn checksum", or "the improved example".
I asked it to invent a new high-level language that is easy to learn. It called it "Easycode". Then it wrote tic-tac-toe in that new language for me, and finally I asked it to write a compiler for that language in C. I don't care if it's wrong. There will be a day, pretty soon, when it will be totally able to do this and more. I feel like a kid with a new toy. :)
I think the problem with all this AI is that understanding of the code will decrease. New developers will not have the experience of coding themselves and will start to rely heavily on the AI to write the code, without the experience to understand it, find problems, or make it better. They will have no clue what is really going on.
Finally someone with more than 10 IQ points. All the people on internet are very funny with their comments like “haha, you will lose your job”, “learn to farm” and stuff like that
@@RACAPE I agree. This AI is actually more like a search engine that sometimes confidently gives bad results, or stuff that doesn't even exist. I think this happens because it tries to compile responses from different sources. It also runs into the problem that it can't give updated results: right now the training data ends in 2021. What if some library gets updated and does things in a slightly different way? Of course it will give a bad result. Other problems I have noticed: sometimes it gives a network error in the middle of spitting out the response, deleting the response altogether, which is unacceptable, but I understand it's a beta. Also, the results don't come instantly; they come row by row, and I want them instantly, as it's a waste of time to watch it think in slow motion. Now how can this beta replace programming? Even once it's out of beta, I think it will just be another search engine, but for that they actually have to fix those major problems: removing false responses, keeping it up to date with crawlers just like classical search engines, and making it instant. I doubt it will get to that point.
It can predict text, so it is mostly correct (including code), but it doesn't actually understand anything, and people should be aware of that; otherwise the tool can cause some pretty big trouble.
If you tell it once to show you f-string examples instead of format() for its next suggestions, it will. Think of it as fine-tuning: you can additionally tell it what you prefer.
I asked it a simple task: write a bash script that outputs how much memory in gibibytes is used by my system, rounded to 2 decimals.

First try:
- read from /proc/meminfo and grep MemTotal and MemUsed
- use awk
- use bc to divide and convert from bytes to gibibytes
- echo Total Memory
- echo Used Memory

First error: MemUsed doesn't exist. Second possible error: /proc/meminfo gives values in kB (on RHEL/Ubuntu/Arch at least), not bytes, so the converted result is not in gibibytes.

Second try (same request, but I specified it's an Ubuntu system, since that's certainly what it knows best): this time it picked kB instead of bytes, but it just echoed the total memory and completely missed the main request.

As you can see, this is a really simple task; even without knowing bash you could write this program yourself in a few minutes with Stack Overflow, yet it failed. It's not replacing anything anytime soon for writing programs. As a helper, why not, but only if you can spot when it's wrong.
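For reference, the correct calculation, sketched in Python rather than bash: used memory is conventionally derived as MemTotal minus MemAvailable, since /proc/meminfo has no MemUsed field and reports values in kB. The sample text below is made up to keep the example self-contained; on a real Linux box you would pass in the contents of /proc/meminfo:

```python
def used_gib(meminfo_text):
    """Compute used memory in GiB from /proc/meminfo-style text (values in kB)."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        fields[key] = int(rest.split()[0])  # first token after ':' is the kB value
    used_kb = fields["MemTotal"] - fields["MemAvailable"]
    return round(used_kb / (1024 * 1024), 2)  # kB -> GiB

sample = "MemTotal:       16384000 kB\nMemAvailable:    8192000 kB"
print(used_gib(sample))  # 7.81
```

On a real system: `used_gib(open("/proc/meminfo").read())`.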
Today I was trying to figure out how to parse datetime strings that may have different formats (e.g. 2-digit and 4-digit years). ChatGPT suggested having two datetime formats with a pipe separating them:

from datetime import datetime

datetime_string = "02/13/23 12:34:56"  # Example date and time string
# Date and time format string with 2- or 4-digit year
datetime_format = "%m/%d/%y %H:%M:%S|%m/%d/%Y %H:%M:%S"
# Convert date and time string to datetime object
datetime_obj = datetime.strptime(datetime_string, datetime_format)
print(datetime_obj)

I could not make this work and asked for clarification, but it insisted this was correct. I am not a Python expert, so I tried a different solution.
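The pipe trick ChatGPT suggested doesn't exist: strptime takes exactly one format string and treats a `|` as a literal character, so that code raises ValueError. The usual fix is to try each format in turn:

```python
from datetime import datetime

def parse_datetime(text):
    """Try each known format until one matches; raise if none does."""
    formats = ["%m/%d/%y %H:%M:%S", "%m/%d/%Y %H:%M:%S"]
    for fmt in formats:
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            continue
    raise ValueError(f"no known format matches {text!r}")

print(parse_datetime("02/13/23 12:34:56"))    # 2023-02-13 12:34:56 (2-digit year)
print(parse_datetime("02/13/2023 12:34:56"))  # 2023-02-13 12:34:56 (4-digit year)
```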
I asked it to write an example Picat (not Python) program to optimize a schedule, and it was syntactically wrong on some lines (not all), but it got the gist correct.
I think this means that low complexity coding will be automated and the real focus for people will become more on the complex aspects of system design. until AI can do that too. ;)
@@heroe1486 "I think you're overestimating it, even non complex applications are still way too complex for any available AI" I agree. I also know that we are in the infancy of AI systems still and as progress is made, more money, skills, people, etc. will be poured into it, accelerating the growth and improvements in AI. In many respects the path will most likely be similar to self-driving cars, with the easy stuff knocked out in short order while each higher level of difficulty will take a geometrically increased amount of time and effort until we get to the point where the AIs take over the process. Even then, they will represent an order of magnitude increase in the focus and processing we are applying to advancing AI. This is how almost all engineering problems go, and AI is an engineering problem at the moment. I also think that most AI systems today are just trained expert systems that are given massive amounts of data to learn rules on how to respond to "inputs". The methodology of using a simulated neural net via whatever means is different than a straight up rules system, but at the end of it all, we are just grooming the systems to take one thing in and give another thing out. That we do not understand how this is happening in so many cases does not change what it is.
Let's say I had a policy, at work, where I couldn't put any code that I write out onto the internet. Is there a way to download a local copy of this tool?
To preempt those who will be butt-hurt because I don't think this is the Second Coming: I have *no* problem at all with the concept of an AI writing code; creating a non-human system that is actually sentient has been a personal aspiration since I was a kid in the 70s. At the same time, I'm not a fanboy who's going to drool over the latest parlor trick. With that said...

Frankly, ChatGPT may do what it does more efficiently and over a broader domain than what has come before, but what it is doing is not new. Just reviewing the comments here, not to mention the 10 billion other similar videos, the hit-and-miss nature of the results should make it clear that this is not a system that *understands* the question/directive and is *intentionally* composing a solution. It establishes a context from the features of the query/directive (e.g. word selection, sentence structure, themes that can be associated with various portions of the text and across different scopes, etc.). This can be effective for identifying features associated with the actual *meaning* of the input. At the same time, it can be sensitive to the data set and the training process, and given the same input, you'll always get the same output.

With a human mind, yes, we have our "training set" and "training process", yet when we apply a given set of inputs, we can examine the output in our mind, perturb the evaluation, and modify the result until we have a preferred result or set of results. That is the gap here: there is no internal feedback loop for it to tweak results on its own, nor a standard of measure of the effectiveness of the solution, so that it can iterate its evaluation process until it meets some threshold of success. You can, as was demonstrated, have a human in the loop providing that feedback, and that may be sufficient for your purposes. Further, if what you're after is just to have something "kickstart" you into finding your own solution, whatever it comes up with may do that job.
Until a system can create code, apply it to the target task, evaluate whether the code accomplished the task, and modify the code until it solves the problem, this kind of system is going to be rather limited. It could also be counter-productive, because if it takes as its training material random content off the internet (scraping, interpreting, and adding it to a "body of knowledge" without vetting whether the content is actually correct), the system can learn incorrectly. Moreover, if solutions that such systems produce become part of the training set, there is the potential for the informational equivalent of inbreeding.

Even the closed-loop, self-modifying system described previously does not guarantee understanding and intentional creation of a solution; what it means to "understand" code (or anything else) is still an open question. That said, if used as a tool to potentially help you gain a starting point or a different perspective on a coding problem, as long as one keeps the limitations in mind, then use the tools where appropriate. I'm suggesting it is pointless and click-baity to claim that ChatGPT understands code the way Donald Knuth understands code.

There will be a day, I am certain, when sentient machine intelligence will displace software engineers and many other professionals, and I think it will be sooner rather than later. However, ChatGPT is not it, and there will need to be a hell of a lot more real engineering done by people who are actually innovating and not cutting-and-pasting from a *chatbot on steroids* before that day arrives.
I had to use libwebsockets on an embedded device without a built-in socket library, and I couldn't find anything about the custom IO we would need. A few weeks later I found out about this AI, asked it about a custom IO solution in libwebsockets, and it provided me with an entire code snippet to realize it. I'm both out of words and scared that this will someday be a paid service any competent developer needs to stay on top of their competition.
It's not going to replace humans completely for a while, but 'developers' need to understand something: it doesn't have to get much better to obliterate most code monkey jobs. What used to be a lead dev and a team of juniors is going to become a lead and a team of AIs, then eventually a business analyst and an AI, and eventually an AI that understands the business. So if you are pinning your hopes on a future as a developer... get started understanding business processes. Spending a sprint writing a form in React and going "phew... that was definitely an XL problem" ain't gonna cut it. There's a job apocalypse a few months down the line right now. Yes, months. Maybe a couple of years. But make no mistake, there's a LOT of money to be saved axing code monkeys and ramping up automation of everything.
"Developers" used to write assembly. Then they wrote C. Then they just became good at using Google. I think this is the next step. "developers" job description will become "being able to provide efficient prompts to AI".
@@glowerworm That's called a prompt engineer and is a non-job that will disappear completely once the models get better. You're still not getting the seismic shift coming.
I spent years and years as a developer implementing/working with crappy designs from business analysts, so the thought of an AI being able to improve their output would have been most welcomed...
Production code is sent to lots of companies: cloud providers, logging services, git repository hosts, it goes through CDNs, and so on. How do you decide which company is ‘random’? And what makes OpenAI different or worse than Microsoft, Amazon or Google?
@@ArjanCodes That is fair. I feel you just have to be aware that that is the case. Especially with free services there could be a risk, since you do not want any leaks to happen. At the company I work at, we host our code on our own GitLab instance and we host things ourselves, so for us it would be a big deal to start sharing our source code with anyone, since we're not doing so currently.
That's not killing anything; that's just an enhancement. I think he's wrong when he says that "people with no development background could write applications or prototypes" with this. It's not likely to be the case unless it's a Hello World: if you don't understand the output, you don't see the errors and can't fix anything, and fixing stuff is sometimes harder than doing it from scratch.
I'd say this AI copilot isn't particularly helpful when you cannot validate whether the generated code is correct. Why not have a real pair programming session instead? ChatGPT is a solution for a problem we don't have 😂 Also, from an environmental point of view, the energy consumption of such large AI models is pretty crazy. But no one is even talking about those issues. AI is always shiny and cool, no matter what 🤔 And when you mention "AIs could generate machine code": yeah, nice idea, but how about improving our understanding of how a PC works so we can program it more efficiently? And maybe don't use a language that's 80x slower than C and wonder why your energy bills are so expensive?! 😅 I'm pretty sure if people take ChatGPT too far, it could be the final step towards Jonathan Blow's tech-decline prophecy. People will start to forget how their cruft is built, and the knowledge of how our tech society works will be lost, leading to mega disaster 🤔
@@marcotroster8247 I mean, we can all see the applications of natural general intelligence (like human intelligence). Welp, an artificial version would at least have those to begin with lol. Of course human brains compute veeery efficiently when it comes to energy consumption, but they're NOT scalable. So yeah, I don't know, sounds like a big deal to me lol
@@tacitozetticci9308 We cannot even build a computer resembling the brain of a fly although we perfectly know its physical structure. What makes you so confident that we should mess with AGI if we cannot even do that? And even if we could, it's not useful in any way to construct such an intelligent computer. What do you expect it to tell you? About the purpose in life or what? It'll treat humans like we treat flies which is a world I don't wanna live in. We have existential problems like climate change. Just fix those problems and we're good. Why always chase for the stars when we're living in paradise earth?
@@marcotroster8247 paradise Earth? Are we from the same planet? lol Earth would be a paradise if only it weren't real. Who doesn't love watching the intricacies of the circle of life on a screen, like a work of art? Sadly though, those interesting creatures working and suffering their way towards their doom aren't just beautiful cogs in a beautiful machine, they're real - humans included. The sane conclusion is that this place is hell and we need great solutions. Fixing the climate is relatively nothing. Life itself is the problem. But now we might have some hope, we can fix it. The alternative is to keep living as we always did, and keep making children "carrying wood to the burning house".
I would love to experiment with ChatGPT, but I'm weirded out that they want my phone number. I'm curious if it would have recognized Luhn if you omitted luhn_ from the procedure name.
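(For anyone who hasn't watched the video: the Luhn checksum being referenced is a short, well-known credit-card check-digit algorithm. A minimal hand-written sketch in Python, my own wording rather than the video's code:)

```python
def luhn_is_valid(number: str) -> bool:
    """Check a digit string with the Luhn algorithm (used for card numbers)."""
    digits = [int(d) for d in number][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9  # same as summing the two digits of the doubled value
        total += d
    return total % 10 == 0

print(luhn_is_valid("79927398713"))  # classic Luhn test number → True
```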
RIKER: Computer, I'd like some place to play some music. A little atmosphere. COMPUTER: Specify. RIKER: Jazz. COMPUTER: Era? RIKER: Circa 1958. COMPUTER: Location. RIKER: Kansas City. No, wait. New Orleans. Bourbon Street Bar, New Orleans. Around two a.m. COMPUTER: Program complete. Enter when ready.
👷 Join the FREE Code Diagnosis Workshop to help you review code more effectively using my 3-Factor Diagnosis Framework: www.arjancodes.com/diagnosis
Dude thank you for not using a click baity title like "THIS IS GOING TO OBLITERATE PROGRAMING JOBS! YOU JUST BECAME IRRELEVANT"
I've been seeing that crap all over my feed and it's just annoying.
ChatGPT is helping me learn a new language. Rather than have it create an entire solution, I ask about specific aspects such as how do I convert decimal to hexadecimal, and it will give me the standard function to use. This way, it fits into my way of thinking. Once done, I can ask for recommended improvements and incorporate them into my next idea. It's like having my own mentor that doesn't make me feel stupid for not knowing something.
Definitely gonna try this. Thanks for the idea haha
I think this is the real use case that a lot of people are missing
@Onya Malloy you are spot on!
I had issues with mentors who weren't patient with me while learning JavaScript, but with ChatGPT it's like I have a personal trainer who is always available for my every whim and fancy!
It's everything that a newb programmer could wish for and more, because it helps so much with programming fundamental concepts.
Exactly what I use it for as well, this helps break down the stuff I am confused with after lecture. Just like a personal tutor just for me.
This is primarily what I've been using it for, so... cool. I'm disappointed it isn't called Jarvis.
I gave it a try at work this past week in three different scenarios:
1. I took a multithreaded code snippet that an experienced colleague wrote and asked it to analyze. It suggested using a different API that we weren't aware of, which resulted in a better, simplified version. We tweaked it a bit, but it's safe to say that AI was quite useful here.
2. I had to do some simple data processing and figured writing a bash script would be the fastest way to go about it, but despite using it from time to time for over a decade, I simply can't remember the syntax of the language. I asked the AI to generate it, and despite it getting some things very wrong (it would have deleted some important data, certainly not something you'd want a total novice to just "use"), I could easily fix those parts, and it's safe to say it saved me googling time. So: dangerous, but useful in experienced hands.
3. We were experiencing an unwanted behavior change after updating a UI framework, and we couldn't find anything in the official API that would help us fix it. We asked the AI how to get our desired behavior, and it stubbornly suggested we use an API that simply didn't exist. Despite asking it "are you sure XYZ exists?", it would proudly say it does and that we just need to use the correct library version (lies!). It was not until we simply called it out and stated "no, XYZ doesn't exist!" that it admitted it was wrong and offered a different solution (which also turned out not to be useful, but at least it used an API that exists). We ended up fixing it ourselves in a totally different way, so I would say in this instance the AI was unhelpful and quite misleading.
All in all, I'm convinced ChatGPT is already useful as a non-deterministic analysis tool and simple code generator. I would gladly use it now when doing code reviews: I'd love to see the results of "Analyze this code" as assistance to me. I'd even be happy if our juniors ran their code through a "Can you improve this code?" prompt before sending me PRs to review, since I'm confident the AI could give useful suggestions there and reduce dev iteration cycles.
I had a similar experience with the JsonSchema library. The lib is horrible if you want to return an array of field->error results for validation problems, so most people basically just accept sub-optimal output. None of the Stack Overflow examples even came close to doing what I wanted, and most weren't capable of handling anything but a payload without sub-objects (which is almost no JSON payload?). So I tried sending it over to ChatGPT, and it spat out a working example that, while not completely correct for what I needed, set me on the right path.
Such an incredible tool for assisting with research, because it's a bit like talking to a colleague who knows wtf they are doing, and you can use it for rubber-duck-style debugging.
Same frustrating experience you describe in 3. I have almost no DevOps skills, and I asked the AI to generate a GitHub Actions .yaml to deploy a cloud function on push, but after multiple failures from trusting its output, I discovered that most of the statements it wrote simply didn't exist.
I tried replacing Google with ChatGPT at work today. Holy moly, I am blown away.
I asked for things like one-liners with my specific values and flags, and it completely outdid itself. I honestly just had to copy and paste the commands, instead of googling and then adapting other people's examples with my own values.
Nice!
@@ArjanCodes I played some more with it and it is frustratingly good.
I've wanted to start learning some image processing in Python, and I just asked the AI to write a Python script that detects changes between two similar images.
It generated a 3rd image that highlighted the changes perfectly.
Then I got more curious. I asked it to take a video feed and highlight the motion... AND again it outdid itself. One minor bug that any monkey could have solved though, so I'm still very impressed.
Lastly, I just told it to lower the sensitivity and create a highlight box over the object in motion instead of highlighting the pixels that changed... and it understood that from just our chat context?!
From a very bad MacBook camera it can see through small holes in my balcony and detect the ocean moving behind it.
I am STUNNED to say the least.
I am also frustrated at myself, because I wanted to be able to code this up without help from AI. But I do so much coding at work that I honestly don't want to burn out doing home projects as well.
But maybe this can catapult my learning.
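(The core idea behind a change-detection script like the one described above is quite small. A toy sketch, using plain nested lists as stand-in grayscale images; a real script would use OpenCV or Pillow, which I'm deliberately not assuming here:)

```python
# Toy change detection: grayscale "images" as nested lists of 0-255 ints.
def diff_mask(img_a, img_b, threshold=30):
    """Mark pixels whose brightness changed by more than `threshold`."""
    return [
        [1 if abs(a - b) > threshold else 0 for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(img_a, img_b)
    ]

before = [[10, 10, 10],
          [10, 10, 10]]
after  = [[10, 200, 10],
          [10, 10, 90]]

print(diff_mask(before, after))  # [[0, 1, 0], [0, 0, 1]]
```

Raising `threshold` is the "lower the sensitivity" knob; drawing a box is then just finding the bounding rectangle of the 1s.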
These days I've been playing around with ChatGPT a bit, and my conclusion is the same: it's very helpful for support, best in fields where you have some understanding yourself. Sometimes it can express things better than me, more concisely and clearly. But when it's wrong, I can spot the mistakes and correct them, or point them out to ChatGPT and have it correct them itself.
But for fields where I lack knowledge, it's hard or impossible to tell when ChatGPT returns a bullshit answer. In that case it's dangerous.
So never trust it blindly, but it's a productivity booster, a kind of AI-assisted pair programming. It can help save lots of time, but it doesn't replace the training and knowledge needed to do things (and this doesn't only apply to code).
For me it looks like it replaces Google for many things, because I can ask about very specific questions and get a useful response, or if not useful, can drill down deeper in the chat. Finding results on Google for specific questions like that can be very hard.
Can confirm. It was impossible to get it to do some ASCII animation in assembly, because I don't know shit about asm, so I couldn't fix the issues. And the bot is quite limited on fixes for its own code; you end up in a flip-flop state, or it gives you a solution that's already present.
As a third-year SE student at university, this is my experience with ChatGPT.
Blown away by its ability; it seems to be able to do roughly everything I've learned in my first three to four years of programming. Here's what I tried with it:
1. The first thing I asked it was to model something in a Python class. Looking around my room I chose my phone, and very quickly it output a Phone class that stores the phone type (Apple, Samsung), price, and a sample phone number, all with getters and setters. It also included functionality to send and receive messages to/from other Phone objects without even being asked. Asking ChatGPT to make the class more complex, it added a batteryCharge variable along with appropriate getters and setters, as well as a contact list with a function to find a certain contact and one to get all contacts.
Just that had me really impressed but it keeps going
2. As I'm finishing up my DBMS class, I asked it to imagine we wanted to store these Phone objects in a SQL table: how could we do this? Since we were already working with Python above, it instantly gave me step-by-step instructions for connecting to MySQL in Python (necessary pip installs, correct imports). It also provided sample code for creating a couple of phones (all with different attributes) and inserting them into the DB. For kicks I asked it to visualize what the SQL table would look like after those inserts, and sure enough a pretty table came out.
Ok API me
3. Alright ChatGPT, give me a web API using FastAPI in Python with the ability to make queries to the database we just created. Once again, a pretty much perfect code sample came out: controllers for update, create, delete, and retrieve, with correct SQL queries within the functions. Also instructions for installing FastAPI and a description that seemed to be spot on; it mentioned that it is lightweight and not meant for large-scale projects.
I've also just finished up a course where we used Java and Spring Boot, so I wondered if it could convert this FastAPI code to a PhoneController class in Java for me. Again, it did it with no issue.
4. One more very specific example. The other day I was working on my computer architecture course; in that class we use a textbook that has its own language (basically a subset of ARMv8 & x86 instructions, to my knowledge) called LEGv8. When I asked ChatGPT simple questions about LEGv8, its answers were pretty accurate. Just crazy to me.
One issue I have run into a few times is that ChatGPT will just stop right in the middle of a code example. I'm not sure what causes this, but sometimes rewording your question helps. In one scenario I asked why it stopped in the middle of the file. It told me that if I want the entire thing, I should say that (lol). So I rephrased the question asking for the entire example, and the next output was complete.
tldr: ChatGPT has no issues doing these things: generating a class in Python and making it more complex; creating a SQL table to insert these objects into and visualizing the table; creating a web API to access the table; switching from one API framework to another, or maybe better said, from one language to another.
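(For readers curious what such a generated class might roughly look like, here is a hand-written sketch. The names and methods are my guesses at the shape described above, not ChatGPT's actual output:)

```python
class Phone:
    """Toy model of a phone: type, price, number, battery, contacts, messages."""

    def __init__(self, phone_type, price, number):
        self.phone_type = phone_type
        self.price = price
        self.number = number
        self.battery_charge = 100
        self.contacts = {}   # name -> number
        self.inbox = []      # (sender_number, text) pairs

    def add_contact(self, name, number):
        self.contacts[name] = number

    def find_contact(self, name):
        return self.contacts.get(name)

    def send_message(self, other, text):
        # "Sending" just appends to the other phone's inbox.
        other.inbox.append((self.number, text))

a = Phone("Apple", 999, "555-0100")
b = Phone("Samsung", 799, "555-0199")
a.add_contact("Bob", b.number)
a.send_message(b, "hi!")
print(b.inbox)  # [('555-0100', 'hi!')]
```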
You can just say "continue" and the bot will continue the code or the text from where it left off.
@@sebaperalta2001 I’ll have to check that out thanks
I'm not a coder; I have never been able to grasp the concepts. ChatGPT can keep dumbing down concepts for me and doesn't get annoyed at how challenged I am. It gives me relief knowing that I can pursue learning to code now.
Same here!
I foresee having to debug AI-generated "solutions" in the near future. Also, AI is trained on existing code written by developers. As the human-authored training set dwindles, I wonder how these models are going to handle overfitting.
Dwindling training data isn't bad. It will reduce redundant questions, and still allow for new or rare questions because it won't know how to handle those as well, and thus it will allow more dialogue on rare situations to become more apparent and reduce "how do I make a hello world" app and other asked-a-million-times questions
My first idea having seen this thumbnail was to show GPT some code and ask it to improve cohesion. I knew you’d try it!
One of the appeals of ChatGPT is that when you notice something you don’t like, like using the older string format style, you can correct it.
I tried having it write a program for Fanuc robots. It started with what looked like CNC G code, but I showed it an example program and corrected it a couple times and it got extremely close despite clearly never having been trained on that language.
I wonder if one of the next big applications for AI programming is documentation. Like it seems like it could generate class diagrams and function descriptions without relying on specific formatting.
I find the part where the AI would generate the machine code the most amazing. You could have a very expressive programming language where you explain what you want, go back and forth with the system to figure out how you want to deal with the specific edge cases and then "render" this model to create the program. Maybe it could even become more visual or spacial, where you draw out your ideas or visualise together with the AI.
I agree with you, Arjan. Soon we will only need the architects, but not necessarily the coders...
I think the long-term trajectory of this is to find a new optimum in balancing writing-output and rigor/control. Right now people don't want to simply generate code from telling an AI a natural-language sentence, because they fear that the code is actually not what they want and it's no faster to modify AI code than to make it from scratch.
But if it gets good enough, it would be just like asking a human to write code for you, which you might then have to review. At some point you overcome your fear of overlooking some bug and just accept the fact that code will never be 100% perfect, but good enough product is still useful.
I am glad I found your channel in the last few months. You cover all the topics I am interested in, and this is no different. I have been using ChatGPT for the past week and I am truly amazed.
My favorite coding channel.
Amazing, it solved my unanswered Stack Overflow question.
Thanks so much Eric, glad the content is helpful!
AI and job automation is always a good thing; don't let the people whose purse hinges on technology remaining primitive convince you otherwise.
I am so happy I found this channel about a year ago! Totally in synch with my interests!
Thanks so much Ivan, glad the content is helpful!
I wonder how well this AI can translate one programming language to another; that seems like a perfect task for a bot that understands code. It is, in theory, a simple task if you understand both languages, but a very cumbersome one.
It needs to learn how to deal with product managers who constantly change their minds.
Haha, that would be very useful 😊.
Product manager be like:
*1st week*
PM: I need a solution that solves the problem A.
ChatGPT: That is the solution A to the problem A.
*2nd week*
PM: Ok, so the solution A for the problem A needs also handle with the problem B.
ChatGPT: This is the solution AB, that solves the B problem within the solution A.
*A few weeks later*
*20th week*
PM: The solution ABCDEFGHIJKLMNOPRST is great but the client expects the IJKLMNOPRST to work a little bit differently, more like KLNOPSUWXYZ. I don't think it's a big change.
ChatGPT: An error occurred.
PM: *Retry*
ChatGPT: An error occurred.
PM: *Retry*
ChatGPT: Did I stutter?
This is perhaps a much improved Stack Overflow. It is writing from templates. "I have seen this example, and this other one, and this other one, so here is something like those." It can't maintain the software that it writes. It can't iterate a better solution. Anything that is sufficiently complex will still, for the time being, require someone who actually knows what they are doing. Giving this to a non-coder and telling them to use it to write business-critical software is not going to work well. If you know what you are doing in the first place, this could be a very interesting tool.
I’m excited about adding a new tool to the toolbox. The refactoring and testing is particularly cool.
I’ve been using GitHub copilot for about 6 months. Recently I’ve been using comments in the code to give prompts to copilot similar to what you’ve done with ChatGPT.
For more complicated requests, I’ll write a function name, signature, and return type, then use a doc string to give examples of the behavior that I’m looking for. The additional explanations in the comments often make the difference between copilot suggesting garbage or some really helpful suggestions.
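(The pattern described above might look something like this: a signature plus a docstring with examples serves as the prompt, and the body is what the tool would fill in. The completion below is written by hand as an illustration, not actual Copilot output:)

```python
def snake_to_camel(name: str) -> str:
    """Convert a snake_case identifier to camelCase.

    Examples (these doubled as the prompt for the completion tool):
        snake_to_camel("user_id") -> "userId"
        snake_to_camel("http_response_code") -> "httpResponseCode"
    """
    head, *rest = name.split("_")
    return head + "".join(word.capitalize() for word in rest)

print(snake_to_camel("http_response_code"))  # httpResponseCode
```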
Great recommendation, thanks!
Great suggestion Phil, thank you!
It will definitely be nice to automate most unit testing. But IMO the biggest challenge in development is getting the specifications right. It's a rare day when a user hands you a whole leetcode prompt. If you describe the specifications with enough precision to be complete... you're coding. That's essentially what modern high-level languages are doing: filling in some of the minute details so we don't have to. Models like these will continue that process and fill in less-minute details. But there is a limit to how much design decision-making can be automated away. The prompts and results need to be validated. And I think the ideal-scenario payoff is still going to be marginal. Something along the lines of having a built-in max() function instead of rolling your own, for example, but able to generate simple algorithms on the fly that would have been rote work.
Do you think we should be worried about the safety of these crazy language models? I'm pretty amazed at how powerful this is.
You should be very worried. It's not there yet, but think about what they can achieve if they apply it to specific languages/frameworks. I imagine a startup will pop up and take on refining and optimizing the current model for a single language.
Great video! Love ChatGPT and glad to learn some nice ways of using it.
On a slightly separate note, what is your keyboard? I really love the sound of it! :)
It wrote Fortran code to calculate the median value.
I've tried to get it to write several simple Python scripts (maybe 20-40 lines worth of code) and watched it run into a variety of problems. It would try to perform operations on undefined variables or even completely ignore important parts of what I was asking it to do. There's a lot of hype around this thing, but it feels very much like when Siri first came out: it blew everyone's minds at first and people acted like it was "true AI" on par with a human. Then, once the hype faded, we realized it wasn't anywhere near that. There are inherent limitations in any deep learning model, and this thing absolutely has an upper bound to what it can do. It's not replacing engineers or data scientists anytime soon.
I mean, it may replace those clueless enough to think it could.
I tried it for Python and it seemed to do pretty well for me. It seems to be a GREAT tool for quickly learning, like having a tutor that sometimes gets stuff wrong, which is FAR better than using Google to find an answer or slogging through documentation. For example, I had it show me code for using the pythonocc package to display STL models with PyQt5, and show me various options for changing the display window, such as background color, model color, and camera and navigation inputs for rotating and moving the models.
Can't wait for Microsoft Clippy to turn up in my IDE!
I don't think it will be a threat to programmers (at least not for now), but it could be a serious issue for Stack Overflow.
Stack Overflow is way more valuable: responses are given by experts with context and real-world experience, and those responses are validated by the community and updated over time.
For sure, but I guess most of the queries on Stack Overflow are trivial and can get a decent answer from ChatGPT. For the more complex ones, Stack Overflow will remain the main source.
You didn't need to send the code when asking it to write the unit tests; you can just say "the above code", or my original "luhn checksum", or "the improved example".
I asked it to invent a new high-level language that is easy to learn. It called it "Easycode". Then it wrote tic-tac-toe in that new language for me, and finally I asked it to write a compiler for that language in C.
I don't care if it's wrong. There will be a day, pretty soon, that it will be totally able to do this and more. I feel like a kid with a new toy. :)
I think the problem with all this AI is that understanding of the code will decrease. New developers will not have the experience of coding themselves and will start to rely heavily on the AI to write the code, without having the experience to understand it, find problems, or make it better. They will have no clue what is really going on.
Finally, someone with more than 10 IQ points. All the people on the internet are very funny with their comments like "haha, you will lose your job", "learn to farm" and stuff like that.
@@RACAPE I agree. This AI is actually more like a search engine that sometimes confidently gives bad results, or stuff that doesn't even exist. I think this happens because it tries to compile responses from different sources. It also runs into the problem that it cannot give updated results: right now the training is done with data up to 2021. What if some library gets updated and does stuff in a slightly different way? Of course it will give a bad result. Another problem I have noticed: sometimes it gives a network error in the middle of spitting out the response, deleting the response altogether. This is unacceptable, but I understand it's a beta. Also, the results don't come instantly; they come row by row, and it's a waste of time to watch it think in slow motion. Now, how can this beta replace programming? Even once it's out of beta, I think it will just be another search engine, but for that they actually have to fix those major problems: removing false responses, keeping it up to date with crawlers just like classical search engines, and making it instant. I doubt it will get to that point.
It can predict text, so it is mostly correct (including code), but it doesn't actually understand anything, and people should be aware of that; otherwise this tool can cause some pretty big trouble.
Thanks for your excellent article. Amazing stuff...
Thanks so much Bill, glad the content is helpful!
If you once tell it to show you f-string examples instead of format for the next suggestions, it will. Think of it as fine-tuning; you can additionally tell it what you prefer.
Hm, I should try that - thanks for the suggestion!
Amazing tools those learning to code have today!!
Thanks so much, glad the content is helpful!
I asked it a simple task: write a bash script that outputs how much memory in gibibytes is used by my system, rounded to 2 decimals.
First try :
- read from /proc/meminfo and grep MemTotal and MemUsed
- use awk
- use bc to divide and convert from bytes to gibibytes
- echo Total Memory
- echo Used Memory
First error: MemUsed doesn't exist.
Second possible error: /proc/meminfo gives values in kB (on RHEL/Ubuntu/Arch at least) and not bytes, so the final converted result is not actually in gibibytes.
Second try (same request, but I specified it's on an Ubuntu system, since that's certainly what it knows best):
This time it picks kB instead of bytes, but it just echoes the total memory and completely misses the main request.
---
As you can see, this is a really simple task, and even without knowing bash you could write this program yourself in a few minutes with Stack Overflow, yet it failed.
It's not replacing anything anytime soon at writing programs; as a helper, why not, but only if you can spot when it's wrong.
Now if you ask it for some unit testing code to validate what it did in the first place, maybe...
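(For reference, the task itself really is small. A sketch of a correct version, written in Python rather than bash for readability: /proc/meminfo reports values in kB and has no MemUsed field, so "used" has to be derived, here from MemTotal minus MemAvailable, parsed from sample text so the snippet is self-contained:)

```python
SAMPLE = """MemTotal:       16384000 kB
MemFree:         4096000 kB
MemAvailable:    8000000 kB"""

def used_gib(meminfo_text):
    """Used memory in GiB, to 2 decimals, from /proc/meminfo-style text."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, rest = line.split(":")
        fields[key] = int(rest.split()[0])  # values are in kB
    used_kb = fields["MemTotal"] - fields["MemAvailable"]
    return round(used_kb / (1024 * 1024), 2)  # kB → GiB

print(used_gib(SAMPLE))  # 8.0
```

On a real system you would read `open("/proc/meminfo")` instead of the sample string.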
Try adding "Imagine you're an XYZ professor and expert. How do you ..."
Today I was trying to figure out how to parse datetime strings that may have different formats (e.g. 2-digit and 4-digit years). ChatGPT suggested having two datetime formats with a pipe separating them:
from datetime import datetime
datetime_string = "02/13/23 12:34:56" # Example date and time string
# Date and time format string with 2 or 4-digit year
datetime_format = "%m/%d/%y %H:%M:%S|%m/%d/%Y %H:%M:%S"
# Convert date and time string to datetime object
datetime_obj = datetime.strptime(datetime_string, datetime_format)
# Print the datetime object
print(datetime_obj)
I could not make this work and asked for clarification, but it insisted this was correct. I am not a Python expert, so I tried a different solution.
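(For what it's worth: strptime has no pipe/alternation syntax, which is why the suggestion above can't work. The usual stdlib approach is to try each candidate format in turn:)

```python
from datetime import datetime

FORMATS = ["%m/%d/%y %H:%M:%S", "%m/%d/%Y %H:%M:%S"]

def parse_datetime(s):
    """Try each known format in turn; strptime has no built-in alternation."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(s, fmt)
        except ValueError:
            continue
    raise ValueError(f"no known format matches: {s!r}")

print(parse_datetime("02/13/23 12:34:56"))    # 2023-02-13 12:34:56
print(parse_datetime("02/13/2023 12:34:56"))  # 2023-02-13 12:34:56
```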
Thanks!
Thank you too! :)
Thank you too!
I asked it to write an example Picat (not Python) program to optimize a schedule, and it was syntactically wrong on some lines (not all), but it got the gist correct.
Yesterday.. a dream.
Today.. a novelty.
Tomorrow.. a basic necessity.
What are your thoughts on the OpenAI ERC that's on Uniswap?
I think this means that low-complexity coding will be automated, and the real focus for people will shift to the more complex aspects of system design. Until AI can do that too. ;)
Eventually, AI will create compilers uniquely designed for your application. Eventually, all languages will converge to one "AI universal language".
@@glowerworm The question at that point is:
Base 10 or Base 10...
@@glowerworm And its called English
I think you're overestimating it; even non-complex applications are still way too complex for any available AI.
@@heroe1486 "I think you're overestimating it, even non complex applications are still way too complex for any available AI"
I agree. I also know that we are in the infancy of AI systems still and as progress is made, more money, skills, people, etc. will be poured into it, accelerating the growth and improvements in AI.
In many respects the path will most likely be similar to self-driving cars, with the easy stuff knocked out in short order while each higher level of difficulty will take a geometrically increased amount of time and effort until we get to the point where the AIs take over the process. Even then, they will represent an order of magnitude increase in the focus and processing we are applying to advancing AI.
This is how almost all engineering problems go, and AI is an engineering problem at the moment.
I also think that most AI systems today are just trained expert systems that are given massive amounts of data to learn rules on how to respond to "inputs". The methodology of using a simulated neural net via whatever means is different than a straight up rules system, but at the end of it all, we are just grooming the systems to take one thing in and give another thing out. That we do not understand how this is happening in so many cases does not change what it is.
You can ask it to write documentation!!!
That's absolutely what I was thinking :D
print("I am not worried, my job is safe"*100)
🎉
Let's say I had a policy, at work, where I couldn't put any code that I write out onto the internet. Is there a way to download a local copy of this tool?
Unless you have an IBM Z 9/16 Mainframe or supercomputer. The compute costs are eye-watering
This thing is amazing …
Thanks, happy you’re enjoying the content!
Nice vid
Quick tip: this stores data, so be careful pasting proprietary code.
To preempt those who will be butt-hurt because I don't think this is the Second Coming, I have *no* problem at all with the concept of an AI writing code; creating a non-human system that is actually sentient has been a personal aspiration since I was a kid in the 70s. At the same time, I'm not a fanboy who's going to drool over the latest parlor trick. With that said...
Frankly, ChatGPT may do what it does more efficiently and over a broader domain than what has come before, but what it is doing is not new. Just reviewing the comments here, not to mention the 10 billion other similar videos, the hit-and-miss nature of the results should make it clear that this is not a system that *understands* the question/directive and is *intentionally* composing a solution. It establishes a context from the features of the query/directive (e.g. word selection, sentence structure, themes that can be associated with various portions of the text and across different scopes, etc.). This can be effective for identifying features associated with the actual *meaning* of the input. At the same time, it can be sensitive to the data set and the training process, and given the same input, you'll always get the same output. With a human mind, yes, we have our "training set" and "training process", yet when we apply a given set of inputs, we can examine the output in our mind, and perturb the evaluation, modifying the result until we have a preferred result or set of results.
That is the gap here; there is no internal feedback loop for it to tweak results on its own, or a standard of measure of the effectiveness of the solution, so that it can iterate its evaluation process until it meets some threshold of success. You can, as was demonstrated, have a human in the loop providing that feedback, and that may be sufficient for your purposes. Further, if what you're after is just to have something "kickstart" you into finding your own solution, whatever it comes up with may do that job.
Until a system can create code, apply it to the target task, evaluate if the code accomplished the task, and be able to modify the code until it can solve the problem, this kind of system is going to be rather limited. It could also be counter-productive, because if it is taking as its training material random content off the internet, scraping, interpreting, and adding it to a "body of knowledge" without vetting if the content is actually correct, the system can learn incorrectly. Moreover, if solutions that such systems produce become part of the training set, there is the potential for the informational equivalent of inbreeding.
Even the closed-loop, self-modifying system described previously does not guarantee understanding and intentional creation of a solution; what it means to "understand" code (or anything else) is still an open question.
That said, if used as a tool to potentially help you gain a starting point or a different perspective on a coding problem, as long as one keeps in mind the limitations, then use the tool where appropriate. I'm suggesting it is pointless and click-baity to claim that ChatGPT understands code the way Donald Knuth understands code. There will be a day, I am certain, when sentient machine intelligence will displace software engineers and many other professionals, and I think it will be sooner rather than later. However, ChatGPT is not it, and there will need to be a hell of a lot more real engineering done by people who are actually innovating, and not cutting-and-pasting from a *chatbot on steroids*, before that day arrives.
I had to use libwebsockets on an embedded device without a built-in socket library, and I couldn't find anything about the custom IO we would need. A few weeks later I found out about this AI, asked it about a custom IO solution in libwebsockets, and it provided me with an entire code snippet to realize it. I'm both out of words and scared that this will someday be a paid service any competent developer needs to stay on top of their competition.
Everybody gangster until GPT learns to code itself
It's not going to replace humans completely for a while, but 'developers' need to understand something. It doesn't have to get much better to obliterate most code monkey jobs. What used to be a lead dev and a team of juniors is going to become a lead and a team of AIs - and then eventually a Business Analyst and an AI, and eventually an AI that understands the business. So if you are pinning your hopes on a future as a developer... start understanding business processes. Spending a sprint writing a form using React and going "phew... that was definitely an XL problem" ain't gonna cut it. There's a job apocalypse a few months down the line right now. Yes, months. Maybe a couple of years. But make no mistake, there's a LOT of money to be saved axing code monkeys and ramping up automation of everything.
"Developers" used to write assembly. Then they wrote C. Then they just became good at using Google.
I think this is the next step: the "developer" job description will become "being able to provide efficient prompts to an AI".
@@glowerworm That's called a prompt engineer and is a non-job that will disappear completely once the models get better. You're still not getting the seismic shift coming.
Killer combo will be automating even excel tasks. Will eliminate even MBA analysts and product managers.
Everybody will be out of a job
I spent years and years as a developer implementing/working with crappy designs from business analysts, so the thought of an AI being able to improve their output would have been most welcomed...
You're sending production code to a random company here. Not a fan.
Production code is sent to lots of companies: cloud providers, logging services, git repository hosts, it goes through CDNs, and so on. How do you decide which company is ‘random’? And what makes OpenAI different or worse than Microsoft, Amazon or Google?
@@ArjanCodes that is fair. I feel you just have to be aware that that is the case. Especially with free services there could be a risk, since you do not want any leaks to happen.
At the company I work at, we host our code on our own Gitlab instance and we host things ourselves, so for us it would be a big deal to start sharing our source code with anyone, since we're not doing so currently.
So no singularity quite yet? Oh well.
Try and find a good Dutch joke for next week. 🙂
But the singularity is nearer though 🙂. Regarding the Dutch jokes, will do!
Programming may not be the shortest-lived occupation to ever exist, but developers sure are the first occupation to kill their own employment themselves.
That's not killing anything, that's just an enhancement. I think he's wrong when saying that "people with no development background could write applications or prototypes" with this; that's not likely to be the case unless it's a Hello World. If you don't understand the output, you don't see errors and can't fix anything, and fixing stuff is sometimes harder than doing it from scratch.
Such tools are really about chasing AGI. So if we head in that direction, all jobs are at stake.
1st!
I'd say this AI copilot isn't particularly helpful when you cannot validate whether the generated code is correct. Why not have a real pair programming session instead? ChatGPT is a solution for a problem we don't have 😂
Also from an environmental point of view, the energy consumption of such large AI models is pretty crazy. But no one is even talking about those issues. AI is always shiny and cool no matter what 🤔
And when you're mentioning "AIs could generate machine code". Yeah nice idea, but how about improving the understanding of how a PC works to program it more efficiently? And maybe don't use a language that's 80x slower than C and wonder why your energy bills are so expensive?! 😅
I'm pretty sure if people take this ChatGPT too far it could be the final step towards Jonathan Blow's tech decline prophecy. People will start to forget how their cruft is being built, and the knowledge of how our tech society works will be lost, leading to mega disaster 🤔
Why don't you appreciate it as a step towards general AI?
@@tacitozetticci9308 Sure it could be seen like that. But I don't see any useful application of AGI and ChatGPT 🤔
@@marcotroster8247 I mean, we can all see the applications of natural general intelligence (like human intelligence). Welp, an artificial version would at least have those to begin with lol.
Of course human brains compute veeery efficiently when it comes to energy consumption, but they're NOT scalable.
So yeah, I don't know, sounds like a big deal to me lol
@@tacitozetticci9308 We cannot even build a computer resembling the brain of a fly, even though we know its physical structure perfectly. What makes you so confident that we should mess with AGI if we cannot even do that?
And even if we could, it's not useful in any way to construct such an intelligent computer. What do you expect it to tell you? About the purpose in life or what? It'll treat humans like we treat flies which is a world I don't wanna live in.
We have existential problems like climate change. Just fix those problems and we're good. Why always chase for the stars when we're living in paradise earth?
@@marcotroster8247 paradise Earth? Are we from the same planet? lol
Earth would be a paradise if only it weren't real.
Who doesn't love watching the intricacies of the circle of life on a screen, like a work of art? Sadly though, those interesting creatures working and suffering their way towards their doom aren't just beautiful cogs in a beautiful machine, they're real - humans included.
The sane conclusion is that this place is hell and we need great solutions. Fixing the climate is relatively nothing.
Life itself is the problem. But now we might have some hope, we can fix it.
The alternative is to keep living as we always did, and keep making children "carrying wood to the burning house".
1st
I would love to experiment with ChatGPT, but I'm weirded out that they want my phone number.
I'm curious if it would have recognized Luhn if you omitted luhn_ from the procedure name.
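For reference, the check being discussed is the classic Luhn mod-10 algorithm; here is a Python sketch with a deliberately generic name (`checksum_valid` is invented for this illustration), i.e. the name-free version the commenter wonders whether the model would still recognize:

```python
def checksum_valid(number: str) -> bool:
    """Mod-10 digit check: double every second digit from the right,
    subtract 9 from any double above 9, and require the total to be
    divisible by 10. (This is the Luhn algorithm, minus the name.)"""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:        # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(checksum_valid("79927398713"))  # prints True (well-known valid Luhn number)
```

Nothing in the body mentions Luhn, so recognizing it would require matching the doubling-and-mod-10 structure rather than the identifier.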
RIKER: Computer, I'd like some place to play some music. A little atmosphere.
COMPUTER: Specify.
RIKER: Jazz.
COMPUTER: Era?
RIKER: Circa 1958.
COMPUTER: Location.
RIKER: Kansas City. No, wait. New Orleans. Bourbon Street Bar, New Orleans. Around two a.m.
COMPUTER: Program complete. Enter when ready.