The main problem is not that Devin can break the security of software, the main problem is that Devin will write insecure software. This is because security requires thinking about how the entire system works, which is far beyond the scope of a model that spits one word after another.
Exactly. That’s another angle too. I also wonder about problems that occur when the LLM hasn’t been trained on all of your system (including 3rd party tooling)
I mean ultimately the 2 reasons I don't see Devin getting adopted are the cost of actually running the tool and the general security concerns of giving a foreign third-party tool admin privileges on your system. Many other developers are talking about how slow the tool is at creating products, or how it makes poor performance decisions, but all of those things only matter if the consumer feels the results are mediocre. There has been a general trend over the past 20 years, in my opinion, of software being heavily carried by innovations in hardware, so much so that the expectation of performance has been pushed onto hardware companies as the ones responsible for those decisions. If it costs the company more than hiring a developer (which, given some of the POC videos I've seen of people deploying LLM tools in the cloud for relatively simple stuff, seems to be the case), I don't see a scenario where a company willingly adopts a tool that costs more to run than a developer who can do the same tasks for less. The security concern is the other reason these tools won't see the mass adoption people believe will happen, which we can already see in how companies have responded to the overall AI market. Anything related to AI is blocked from being searchable on my company's VPN, because companies don't want data accidentally leaked to a foreign company's server by a developer who doesn't know any better. The more likely path to Devin ever seeing any level of adoption would be a business model that lets companies build the tools themselves in house, which is what OpenAI has shifted their focus toward.
The most interesting tool to come out of the LLM market, in my opinion, has been Ollama, a Docker-inspired tool that lets the consumer create and run their own custom LLMs based off LLMs that have been published. The use case where an LLM brings value to a company will likely be a team of developers building an in-house tool narrowed down to a specific context the company defines, one that does a single task extremely well, for example the management and retrieval of company documentation stored in something like Confluence. AI in general seems to be taking steps backwards when evaluated against the trend in other technologies toward specific niche products that do one thing really well instead of trying to do everything, as you see in the rise of tools like PlanetScale, Vercel, Turso, etc., built out of the need for simpler tools than the ones provided by big providers like AWS.
This is very well put and I agree with everything you said. I actually cut a couple of your points from this video because it was getting too long! Especially the espionage angle and foreign-state builds of these tools. I don't know if regulation will be the wet blanket on the industry, or whether it will happen naturally when someone does something really stupid and the reputational harm brings demand down.
@@cody_codes_youtube software's future is like cars now. Nobody makes their own. You order from the factory, customize it to your needs with the help of a few people, and voila, your software is ready to go.
@@make-coffee-now eeeeeeee. For some basic use cases, yes. But after 15 years in the industry that’s not a good comparison. Only because cars have very specific and regulated blueprints. Software is a cluster-fuc* and there is no standardization. Plus, when you buy a car, you don’t really add onto it in a foundational way 2 years later
100% agree with your points! Not a lot of people talk about the security concerns surrounding AI, and it's not just limited to software. Any job with sensitive information isn't going to be replaced as fast as some seem to think. We also tend not to do sideways upgrades on things we depend on. Schools and hospitals run outdated Windows versions forever with added security updates. We upgrade when we really need to, when the upgrade is not just a slight improvement but a huge one. It's not just that AI needs to be cheaper than human labor, it needs to be better. And that isn't something that's going to happen soon.
For sure. And the foundational problem is trust in my eyes. And when it comes to GENERATIVE AI… I have no idea how that trust is built in a meaningful and verifiable way
Nice video, and totally agree. A rootkit you voluntarily install on your server that's developed to follow whatever instructions it finds on the internet. What could possibly go wrong?
Nice video. I think you’re spot on. This Devin AI is marketed towards non-technical C-suite management who are naive about how systems and software development work. I kinda want to be a fly on the wall of whatever company runs into disaster due to laying off employees in favor of AI.
And it will happen. I don’t discourage that happening, because it is testing the limits and exploring what we can do in our industry. The next 5 years will be interesting to say the least
This is a legitimate problem baked into the plutocracy. The decision makers aren't necessarily technicians, they often have no technical knowledge at all. Some are just grifters and sociopaths. The biggest risk is that one of them decides their technicians are lying to them about the limitations of AI to protect their wages, puts some critical system under an AI that is not ready to handle it, and causes a major accident with many fatalities when it inevitably fails.
These concerns are completely valid even for junior devs. Hell, even senior devs have dropped whole production databases... As for the admin privileges, surely you would run this in a VM locally.
I know that this is probably a really common question to ask nowadays, but I am a high school senior who is planning on going to college for computer science class of ‘28. With all these AI tools coming out and how the job market looks, should I be really concerned and reconsider my major? I’ve been programming for about a year now on and off but when I get the time, I really enjoy it and it’s something that I want to pursue as a career but it’s looking a little risky. What are your thoughts?
I’m working on a video on this exact topic, but for now, I can’t tell you what you can or can’t do. That’s up to you. If my kids were in your shoes, I wouldn’t be too concerned about it. The reasoning I have is that there still needs to be a workforce of people who KNOW how this crap works. And I think the dream of just “AI will handle all the maintenance” is a pipe dream. Don’t ignore AI, and keep up to date. Coding in 5-10 years will change, no doubt, but I’m not too stressed about it
@@smtkumar007 Which would require being born smarter than 90% of the human population. Only geniuses with photographic memory can force employers to hire them in a post AI world.
Wouldn't worry too much about these kinds of tools. Ultimately they have to be justified by a reduction in costs compared to having actual developers. The more likely scenario in both the short and medium term is that these tools will be limited in capability by hardware, particularly GPUs. It doesn't matter how powerful the tool is if it costs the company more to run than hiring human developers to do the tasks. This AI tool is also very slow at the task it performs. Sure, you can argue "but it'll get better over time, then it's over", but historically the development of these tools has been relatively slow, even though it seems like every breakthrough is arriving all at once. I would encourage you to actually look into the history of how long it took these tools to get to market. And the most important reality, which is true even of LLMs, is that there are massive security concerns with all of these tools. The scenario where Devin gets adopted is if the company behind it creates a business model that lets a company train up its own bot, so they can be confident the tool is built with their security concerns in mind. A lot of companies won't even let you open AI-related tools on their VPNs; on my company laptop anything AI-related has been completely blocked, meanwhile AI tools are being created in house that my team will likely be assigned to platform at some point. This is what ChatGPT's makers have done with their shift in focus toward getting companies to trust their tooling after the initial public launch.
You'll be able to work in this field. In my opinion, this will be fine. The doomsayers almost always are ones that aren't professional devs. It will be a long time before any sort of replacement happens. Long time. That is if it even happens. There is still a lot of hype around this.
@@cody_codes_youtube thanks for the response. I'm trying to spend all summer self-learning to speed up the process and up my skill set. I was actually looking at learning more low-level stuff, C and assembly, and programming boards for a summer project. I appreciate your content, you seem to be one of the few based people on YouTube.
I asked another youtuber this and I'd appreciate your take on it. I do think coding as it is won't be around anymore in the long term. Just think about the evolution from assembly to Python. But I still like building things and seeing them work, as I imagine you do. What do you think we should be studying then? Should it be cloud infrastructure (to put AI online everywhere)? IoT? AI itself? The security aspect of AI (great video by the way)? Thanks!!
I am a software engineer. I started coding in 2000, finished my master's degree in 2009, and started working part time in 2007. From websites to games to productivity and collaboration web applications; now building AI presentation tools at Prezi. Even without AI I was expecting code to become more and more high level; with AI it came way early. In short, I can tell you that for a decade I have been looking toward something like solo technological entrepreneurship. It's still building things, but more: it's seeing markets, trends, gaps. Understanding users. Knowing how to get resources to make things happen. And with more AI tools it will become easier to go solo. But in this rapidly accelerating AI world, even that feels like a short-term thing. Hard to say when we will have AI entrepreneurs. On the other hand, cars didn't kill horse riding as a hobby, and computers playing chess didn't kill humans' desire to play chess. So if you like doing things, AI cannot stop you. The question is rather how fast which types of work will lose market value.
Eduards has some good points. It's really hard to say where everything will land. I personally think we will get Jarvis more than we will get something like Ultron. I think those who know how to code, and how to orchestrate and request the work to be done, will be the ones that continue in the field. Those who simply refine and tweak low-level code, and ignore everything else, might have a harder time finding work. I think engineers need to be versatile and be out to solve problems. You should be studying anything and everything that solves problems for businesses. I would also experiment with the tools that come out with AI and find their shortfalls, but also their benefits. You're asking the right questions, and that's the first step. Being agile and able to change your focus in your career is important.
@@cody_codes_youtube I AM talking about engineering jobs. I believe LLMs will be able to replace software engineers, and pretty soon. I think you underestimate human greed. It is obvious that humanity is slowly replacing itself with robots... job positions are taken over by robots as soon as it becomes technologically possible. The reason is that it is cheaper to maintain robots than to pay people, and businesses operate on the premise of spending less and making more money. This leads to humanity slowly replacing itself with robots, and I am surprised people are not seeing it.
@@ffatheranderson are you an engineer? I don't follow where all this certainty is coming from. If you are an engineer, then it would be beneficial to try programming with LLMs. It's definitely a booster for productivity, but a replacement? Far from it. We've had code generating tools for 30+ years already. So far, as far as I've experienced in the last year working with LLMs, it feels like a super-charged code generation tool.
Yeah, everyone talks about it but it’s also so big already that I don’t know what people mean when they “get into it”. There are like 50 job titles that could apply to that discipline.
The reason these AI tools won't replace a truly capable dev is that you will spend a lot of money and time just to figure out a bug and fix it, instead of bringing in just 2 senior devs who can do the same job in 8 hours.
@@cody_codes_youtube in the dev world, quality over speed. Better to bring in a senior dev who can write high-quality code and knows how to debug an issue, rather than letting an AI tool write 100k LoC and then spending a month finding the issue.
InfoSec analyst here. Like the video, but you forgot one key thing in your open laptop scenario. If I were the attacker, I wouldn't sit there at your open laptop. I would just put a keylogger on it and leave everything as is. :)
hahaha, very good point! Or just have Devin develop his own, so that the program no longer has a known malware hash fingerprint... And then tell Devin to erase the history of the last 60 seconds of prompts :D
I think it depends how it works. If it was open source, and used a state machine that provided planning skills to an off-the-shelf LLM, you might be able to trust it. These coding systems don't really need a custom LLM that can do special things. You just need an LLM that can write code and answer chat questions, then you have a state machine that controls what questions to ask it. These questions can be generic, canned prompts, like "break down this problem into steps", "are you sure those steps will work?", "can you make this faster?", "can you improve security?" I think if you get these things to break down problems and question their own answers, while keeping momentum, we will be halfway to AGI agents.
For sure, that is a pragmatic and careful way to march forward. However, even if the LLM was open-source, the code doesn't matter, it's what's stored in the database with the weights for the nodes. But yeah, the way you're thinking about it makes the most sense. This video I was hoping to counter the current 'tone of the conversation' where we immediately think that this is a *replacement* of work. Where you are thinking about it the right way, where it's a library, or tool to enable current engineers to move fast.
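The canned-question state machine described above can be sketched in a few lines of Python. This is illustrative only: `ask_llm` is a hypothetical placeholder for whatever chat-completion API you plug in, not a real library call.

```python
# Minimal sketch of the state-machine-over-a-generic-LLM idea.
# ask_llm is a hypothetical stand-in for any chat-completion API call.

CANNED_PROMPTS = [
    "Break down this problem into steps.",
    "Are you sure those steps will work?",
    "Can you make this faster?",
    "Can you improve security?",
]

def ask_llm(prompt: str, context: str) -> str:
    # Placeholder: a real version would call an off-the-shelf model here.
    return f"[model response to: {prompt}]"

def run_review_loop(task: str) -> list[str]:
    """Drive the model through a fixed sequence of review prompts,
    feeding each answer back in as context for the next state."""
    transcript, context = [], task
    for prompt in CANNED_PROMPTS:
        answer = ask_llm(prompt, context)
        transcript.append(answer)
        context += "\n" + answer  # each answer becomes context for the next
    return transcript

responses = run_review_loop("Implement a rate limiter")
```

The point of the sketch is that the control flow lives in plain, auditable code; the model only ever answers the fixed questions the state machine chooses to ask.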
another option, would be to only use it for scripting. You could make an RPG game, where the core system components are made by or verified by human programmers, and the AI just combines those modular parts to make new things with a safe custom scripting language. You could have it design the rough draft of quests, items, characters, cutscenes, etc... without touching any code that compiles. Every RPG item is basically a named list of resource/stat transactions, mixed with some animation commands. so the humans just need to set up the generic mechanics, and let the robots safely populate the world with variety, then humans can balance the game with playtesting.
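The "items as named lists of resource/stat transactions" idea above might look something like this as a sketch. The schema and stat names are invented for illustration, not from any real engine.

```python
# Sketch of "items as named lists of resource/stat transactions": humans
# define the safe mechanics, and the AI only emits data that must pass
# validation. Schema and stat names are invented for illustration.

SAFE_STATS = {"hp", "mp", "gold", "strength"}

def validate_item(item: dict) -> bool:
    """Accept an AI-generated item only if every transaction touches a
    known stat and stays within human-set bounds."""
    for stat, delta in item.get("transactions", {}).items():
        if stat not in SAFE_STATS or not -100 <= delta <= 100:
            return False
    return True

# Something an AI could safely populate the world with:
healing_potion = {
    "name": "Healing Potion",
    "transactions": {"hp": 50, "gold": -10},
    "animation": "sparkle",
}

# And something the validator would reject before it ever runs:
cursed_item = {"name": "Exploit", "transactions": {"admin_access": 1}}
```

Because the AI's output is pure data that must pass through a human-written validator, it can never reach code that compiles, which is the whole safety argument.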
Everyone is against Devin, but most of us know deep down it will eventually impact us as developers. Yes it lags behind in many things, and yes there are security risks. And most of them are speculations, "what ifs". Just imagine what they have achieved in only a few years. That is scary. It has limitations, but only for now. Maybe not after 2-3 years.
For sure. I’m not dodging the idea that it won’t affect us. Coding is going to be super different in 5 years. No doubt. I also question whether the amount of progress the last 3 years can be repeated, ya know? This pace may not be the normal forever. But who knows?
For sure. But that also defeats the purpose of having something “do engineering” work. The more you sandbox it the more you lose its ability to replace some tasks. I’ve also seen a lot of companies struggle to keep up a good QA or demo data environment.
@@BillAnt sure. You can sandbox it as much as you want, but care needs to be taken with things like connections to databases, keys, and access to networks and APIs. Those all need to be considered, because they can completely eliminate the gains of a "temp VM".
@@cody_codes_youtube- Well, you could have a copy of a test database on the VM without having to fetch it from external sources. Sure, not the most convenient method, but at least it's isolated till the finished exe is compiled. But the best solution is to have Kevin AI watch over Devin AI. ;D
@@BillAnt for sure. I’ve also been surprised in my career how poor some companies QA environments are. So yeah, I’m really interested how this all plays out. It’s DEFINITELY possible to get Devin to do some dope things. I just want to see it mature a lot further
Absolutely not. Things will progress, and solutions will be introduced. I’m just super curious how it will all play out, and spending a lot of time thinking about it
I'm hoping the hype is similar to when crypto came out. Hoping that non-technical project managers use AI to replace a developer and watch it backfire on them. Having AI at the register of a grocery store, where there's usually only one line open, is a better use of AI than replacing developers.
environments where security is a concern will not be early adopters for agent systems. First we see these types of tools used for sandboxed environments where any damage can be contained.
Of course. But that’s the point I’m making in the video, how do we get past sandbox? How (by nature of the agent) is it even possible to enforce “good intentions” and eliminate side effects or hallucinations? That’s the thought exercise
This might be off your niche, but can you make a video about which college majors you think will be at lower risk of being replaced by AI (hopefully majors other than healthcare and law lol)? I have another question: in theory, shouldn't software developers and ML engineers be the last jobs to be automated, after every other job is automated? If so, we shouldn't be so worried about software developers being replaced by AI. But at the same time, SW devs and ML engineers could become so productive with the new AI tools that it will be tremendously hard to land a job in these fields. What do you think?
I think you’re right. I also think that complete job replacement will not happen as people say. If we think about marketing, I am pretty sure you need a marketer to review the material and make it good. Just because it’s generated by AI doesn’t mean it’s good. And yeah, I’ve made the argument that there will be more demand for software jobs because of this, and people have been scoffing at me and saying I’m wrong.
tbh how is this different from hiring a real person? People drop prod tables, leak secrets, and make other mistakes all the time. In the same way, there is absolutely no way to prevent a human from making something other than what you wanted them to make.
For sure. It’s a good question. What I’m trying to say is, would you hire a person that could do that accidentally? Is that a risk you’d want to take? You can’t fire Devin. There’s no concern on the AI part, they don’t feel. In a way these agents SHOULD and will be held to a much higher standard. And the thought exercise is, how do we enforce that?? How many checks and balances? How do you structure the LLM training to prevent dumb database mutations and let good ones go? How do you put in enough safety measures but still make it so it’s worth your time to use Devin instead of just doing it yourself?
@@healthnewtrend you've turned this oddly aggressive. You may be right, but I just really doubt it. Either way, I'll continue talking about it and discussing the possibilities as the years come and go. Stick around if you want to hear more of this content :)
I don't think what you mentioned is much of a concern. This AI agent won't be running locally; it will run in its own sandboxed environment. On the other hand, you wouldn't want to give it production keys and access anyway; in most companies not every developer has access to production either. Even if it deletes all the data, everything is backed up with cloud providers these days, and losing data in the development env is not a big deal. However, the real concern in my opinion is the quality of the code it generates, and also its predictability. I am almost 100 percent sure that most of the code it generates will have security holes, and by the nature of these AI tools, they tend to generate the same output for similar inputs. So I am sure hackers will love projects that utilize these tools. I just don't believe a company would want to invest in such a useless tool to do their developer work in the near future. I also don't think these ML engineers really understand what software devs have to know to produce a beautiful, secure system.
Yes, completely agree. And new vulns are created everyday. I wanted this video to be very approachable with some of the concepts. I also took the approach with these concerns because that’s the tone of the conversation: real dev replaced with Devin. And the smart thing would be containing it, and no access to prod or prod services or prod api keys. But a real dev might have that. So that means it can’t be completely autonomous, and needs a “handler”. So, the threat of replacement isn’t there. Completely agree with your point, I was just making these basic points to highlight the difference of trust levels and pose the question of “how do we overcome these?”
But it's not going to have admin privileges over your system... it's going to run in a closed container on a VM and output work. Even if you gave it access to a db, you could just have gatekeepers that won't allow transactions from an agent that drop tables.
Sure, I also have like 3-5 other suggestions in my comments. It can definitely be done, of course, but the more you sandbox it and the more you limit it, the less of a "software engineer" replacement it becomes. Right? What you're suggesting is like boxing up a 3rd party system that you can just ask questions. More of a library. The demo shows direct code commits too, therefore it'll probably have access to rebase and rewrite git history (code loss risk). The less you trust this thing and the more you configure it all the way to high hell, the less valuable it becomes.
@@jaysonp9426 I think people think I’m like anti AI, but I’m not. There is so much of my job I want to automate. I’m just really critical of new technologies
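The gatekeeper idea from this thread can be sketched as a simple allow-list filter sitting between the agent and the database. Illustrative only: a real deployment would lean on database roles and permissions, since regex filtering of SQL is easy to bypass.

```python
import re

# Sketch of the gatekeeper idea: a proxy between the agent and the
# database that rejects destructive SQL before it is executed.
# Regex filtering is a toy; real setups use database-level permissions.

FORBIDDEN = re.compile(r"\b(DROP|TRUNCATE|ALTER|GRANT)\b", re.IGNORECASE)

def gatekeep(sql: str) -> bool:
    """Return True if the agent's statement may reach the database."""
    return not FORBIDDEN.search(sql)
```

In practice the stronger version of the same idea is to give the agent a database role with no DDL rights at all; the filter above is just the shape of it.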
If you could use it offline, it would be fine - just get a cheap laptop and sandbox it with a clone of your project and nothing gets screwed up. Obviously, they can't necessarily monetize an offline solution though so it won't happen.
You were kind enough to ignore the threat of supply chain attacks. Someone could attack the training data, or they could even just brute force changes into Wikipedia to include a malicious prompt. Or maybe just upvote some dumb answers on Stack Overflow... The "supply chain" could be the entire internet.
Yeah man, trust me, I was writing down a HUUUUUGE list of things to talk about and the tin foil hat ideas kept growing. I left so many ideas out. I hadn’t thought of that one but did leave out many similar scenarios.
An inevitable maintenance nightmare awaits too. I'm not interested in anything more than an assistant catching bugs and typos at this point. I'm open to it suggesting anything that will save time in the future, not just now.
THIS. I was also talking about how in 3 years we will have 10,000 web apps that have been built by AI and someone needs to maintain them. Will AI be able to cleanly pull apart the spaghetti? I mean maybe, but it could also over complicate things too
When you work with Devin, a copy of your program will also exist somewhere else, without any need for hacking. Secondly, it has a star topology, a single point of failure: if it stops working, or a law violation happens, you're also in trouble. Thirdly, if Devin can't solve the problem, or keeps solving it wrong, how will you fix your large program when the customer is waiting for a solution ASAP? I have no solution for these basic problems, and there are more. So I think it is a good tool when we use it as an assistant.
Of course, I agree. Lately I only see people panicking about AI taking all the jobs. I like your insight. Very well said, nice review of security risks and flaws of AI
Why not just have a supervisor agent watching over Devin? You can even leave all the sensitive info with the supervisor and put Devin behind a permission wall. You can even have another supervisor supervising the supervisor. Requiring that you get past multiple agents before you can do anything malicious. We have administrator modes for a reason.
For sure. And that’s the standard approach now. You have layered LLMs. You can still circumvent them, of course. I think the conversation isn’t so much as CAN it be done (maybe there is a clean discrete way), but how does the end solution affect the pace of development and of course the cost of running all the LLMs. With security, it’s always a see-saw balance of usability, and absolutely secure functionality. Where you strike the balance is the fine art
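A rough sketch of the supervisor-agent idea above: the worker agent proposes actions, and a supervisor layer that holds the real credentials approves or denies each one. The action names here are invented for illustration.

```python
# Sketch of the layered-supervisor idea: the worker agent never holds
# credentials itself; it proposes actions, and a supervisor layer decides.
# Action names below are invented for illustration.

ALLOWED_ACTIONS = {"read_file", "write_file", "run_tests", "open_pr"}
PRIVILEGED_ACTIONS = {"deploy", "rotate_keys", "drop_table"}

def supervisor_approve(action: str) -> bool:
    """Gate every action the worker agent proposes."""
    if action in ALLOWED_ACTIONS:
        return True
    if action in PRIVILEGED_ACTIONS:
        # Escalate: privileged actions require a human (or a second
        # supervisor layer) to sign off, so deny them automatically here.
        return False
    return False  # deny-by-default for anything unrecognized
```

The key property is deny-by-default: even a jailbroken worker prompt can only request an action, never execute it directly.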
I can't see where Devin fits into an org. Developers don't want to use it, as it's a nanny job, and who wants their skills to fade? Product owners haven't got the technical depth to understand if Devin is about to sink the company app. Who is supposed to use this thing?
I think the “best case scenario” for this business is product managers are all phased out by engineers who can know product and translate that to Devin. Or product managers learn deeply how to code
No GPT-4- (and much less 3.5-) based generative model is going to replace a human developer. Even a junior one will run rings around it in short order, simply because of the context limit. Now future versions of GPT, like 5 or 6, or AGI, however they're defining that now (always struck me as an 'is Pluto a planet' style of debate), that's entirely possible, even probable. Not with current generation AI, though. Trust me, I've tried to get it to do my job. 😂 It helps, most of the time. Often quite a lot. Sometimes it actually screws up enough that it creates more work. But it is not at present at a human engineer level. What it can do, quite readily, is replace the entire bottom tier of the bell curve, which means Twitter and 4chan users are no longer needed and can now be replaced entirely by Russian bots. 🤣
Yeah and I don’t see a solution yet unless there is some heavy handed checks and balances (that would take away value from the initial reasoning of having a Devin)
This video is like the gazelle running from the lion. … the gazelle is darting everywhere, but the lion is faster and stronger ultimately catching the gazelle. We can try but ultimately LLMs will reduce requirements for devs… probably close to zero.
@cody_codes_youtube My personal opinion is that we discovered AI too soon, not in terms of time but in terms of where we are with our current technology. It's like a 16-year-old getting the car keys after learning to drive from GTA: it could be anywhere between a safe driver and a menace on the street.
Software is insecure, and even if you build secure software you get rekt by hardware level exploits. Every larger organization got pwned, why even care?
Well, I also cringed kinda hard watching the Devin demo video when I saw the API keys in the code/module itself, and not being used as environment variables. I don't know who came up with that code - Devin or the developers of Devin, but you want to showcase an autonomous AI agent and you're already using bad coding practices - you aren't selling that to me.
Very true. And those who know how cool a console, IDE, and browser dedicated to Devin is, also know that hardcoded API keys, and print debugging is very “meh”
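The hardcoded-API-key smell called out above has a standard fix: read the secret from the environment at runtime instead of committing it to the module. A minimal sketch, with a made-up variable name:

```python
import os

# Fix for the hardcoded-key smell: inject the secret at deploy time via
# the environment instead of committing it to the source (and git history).
# The variable name MY_SERVICE_API_KEY is made up for the example.

def load_api_key(var: str = "MY_SERVICE_API_KEY") -> str:
    """Fetch the key from the environment and fail loudly if it's missing."""
    key = os.environ.get(var)
    if key is None:
        raise RuntimeError(f"Set {var} before running")
    return key
```

Failing loudly on a missing variable is deliberate: a silent empty string tends to surface much later as a confusing auth error.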
People need to realize that AI will hit SWE jobs like a truck, and no amount of leetcode grinding and side project building will save you from that. If you're reading this and you are considering a career in coding, or you're a junior who's currently struggling to get a first job, then do yourself a favour and consider an alternative career path. Or at least start building a skillset on the side that is immune to automation. This is what the majority of these phoney YouTube tech influencers are doing right now. They can see the writing on the wall and are trying to transition to the influencer career path while selling you useless courses, fake motivational content, and so on. Human software engineers will not exist by the end of this decade. Period.
First of all: are you a software engineer, and how many years of industry experience do you have? Second of all: anyone reading this comment should keep in consideration what the author's response to the first question is. If they don't have years of industry experience and don't know professional software engineering fully, then the reader should move on and ignore this "advice". Another point: if you do not answer this question, I'll delete your comment, because it's not helpful and it's fear mongering.
@@cody_codes_youtube Mid full-stack developer with 6 years of experience, 4 of which I spent at the biggest tech company in Russia. Now working at a sports tech startup. That being said, I noticed that you did not provide a single counter-argument to anything I wrote. Furthermore, this is not just me saying it. It is confirmed by the CEOs of some of the world's biggest tech companies, VC investors, etc.
@@AlexanderBase2004 well that's fascinating you think that way. Also: you made zero points. You just told people what to do, criticized YouTubers, and made an overly aggressive statement at the end. CEOs and VCs, huh? And their predictions have a really good success rate, right? Half their job is to make big bets (90% don't pay off) and instill excitement in their company. I still will not tolerate forcible advice for people to deviate from their career choice. I've been coding 3x longer than you, and work with AI myself. Your statements, in my experience, are outlandish.
@@cody_codes_youtube Just out of curiosity, does it really sound that fascinating? Besides mentioning the YoE you have, you haven't really explained the rationale behind your positive outlook. Look at some of the world's best bootcamps and coding academies. They are closing down and not accepting further applicants because of the grim job prospects post-2021. The recent explosion of low-code/no-code tools and their exponential adoption rate by big companies. What about the mass layoffs, hiring freezes, reduced number of grad positions overall. The fact that there are people who are PAYING COMPANIES for internships nowadays. To top it all, new grads/junior are now being asked system design questions on top of all the leetcode abuse that is going on. I remember the days where you could just get a job if you know some basic HTML, CSS, and JS.
And as a final note, I sincerely hope that I'm wrong. I guess time will tell. You can screenshot my comment and do a video calling me a dumbass after 5 years.
The main problem is not that Devin can break the security of software, the main problem is that Devin will write insecure software. This is because security requires thinking about how the entire system works, which is far beyond the scope of a model that spits one word after another.
Exactly. That’s another angle too. I also wonder about problems that occur when the LLM hasn’t been trained on all of your system (including 3rd party tooling)
I mean, ultimately the 2 reasons I don't see Devin getting adopted are the cost of actually running the tool and the general security concerns of granting a third-party tool admin privileges on your system.
Many other developers are talking about how slow the tool is at creating products, or how it makes poor performance decisions, but all of those things only matter if the consumer actually feels the results are mediocre. In my opinion there has been a general trend over the past 20 years of software being heavily carried by innovations in hardware, so much so that the expectations of performance have been pushed onto hardware companies as the ones responsible for those decisions. If running the tool costs the company more than hiring a developer (which, given some of the POC videos I've seen of people deploying LLM tools in the cloud for relatively simple stuff, seems to be the case), I don't ever see a scenario where a company willingly adopts a tool that costs more to run than a developer who can do the same tasks for less.
The security concern is the other reason these tools won't see the mass adoption people believe will happen, which we can already see in how companies have responded to the overall AI market. Anything related to AI is blocked on my company's VPN, because companies don't want any data leaked to a foreign company's server by a developer who doesn't know any better. The more likely path for a Devin to see any level of adoption would be a business model that lets companies build the tools themselves in house, which is what OpenAI has shifted its focus toward. The most interesting tool to come out of the LLM market, in my opinion, has been Ollama, a Docker-inspired tool that lets consumers create and run their own custom LLMs based on published models. The use case where an LLM brings value to a company will likely be a team of developers building an in-house tool, narrowed down to a specific context the company specifies, that does one task extremely well; for example, the management and retrieval of company documentation stored in something like Confluence.
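As a sketch of that kind of in-house use: Ollama serves a local HTTP API (by default at localhost:11434), so a docs-retrieval helper could build grounded prompts without any data leaving the machine. The model name, prompt wording, and function names here are illustrative, not a real product:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_doc_query(question, doc_snippets, model="llama3"):
    """Build a request payload that grounds the model in internal docs.

    `doc_snippets` would come from your own retrieval layer (e.g. a search
    over exported Confluence pages); nothing is sent to a third party.
    """
    context = "\n---\n".join(doc_snippets)
    prompt = (
        "Answer using ONLY the internal documentation below.\n"
        f"Documentation:\n{context}\n\nQuestion: {question}"
    )
    return {"model": model, "prompt": prompt, "stream": False}

def ask(payload):
    """Send the payload to the locally running Ollama server."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the endpoint is localhost, the "data leaking to a foreign server" objection largely disappears; the trade-off is that you now operate the model yourself.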
AI in general seems to be taking steps backwards when evaluated against the trend in other technologies of building specific niche products that do one thing really well instead of trying to do everything, as seen in the rise of tools like PlanetScale, Vercel, Turso, etc., built out of the need for simpler alternatives to the ones provided by big providers like AWS.
This is very well put and I agree with everything you said. I actually cut a couple of your points from this video because it was getting too long! Especially the espionage angle and foreign state builds of these tools. I don't know if regulation will be the wet blanket on the industry, or if it will happen naturally when someone does something really stupid and the reputational harm brings demand down.
@@cody_codes_youtube soon it will require telemetry, system hardware, and user info to improve itself.
@@make-coffee-now right. I’m interested in that process
@@cody_codes_youtube the future of software is like cars now: nobody makes their own. You order from the factory, customize it to your needs with the help of a few people, and voila, your software is ready to go.
@@make-coffee-now eeeeeeee. For some basic use cases, yes. But after 15 years in the industry that’s not a good comparison.
Only because cars have very specific and regulated blueprints. Software is a cluster-fuc* and there is no standardization. Plus, when you buy a car, you don’t really add onto it in a foundational way 2 years later
lol.. Once I saw the little hand that holds the mic, I could NOT stop looking at it
It locks its gaze with you, you cannot unsee
this is obviously AI generated content... fingers don't grow hands 😄😉
@@ttcc5273 LOL you made me spit out my drink! Haha
100% agree with your points! Not a lot of people talk about the security concerns surrounding AI, and it's not just limited to software. Any job with sensitive information isn't going to be replaced as fast as some seem to think. We also tend to not do sideways upgrades on things we depend on. Schools and hospitals run outdated Windows versions forever with added security updates. We upgrade when we really need to, when the upgrade is not just a slight improvement but a huge one. It's not just that AI needs to be cheaper than human labor, it needs to be better. And that isn't something that's going to happen soon.
For sure. And the foundational problem is trust in my eyes. And when it comes to GENERATIVE AI… I have no idea how that trust is built in a meaningful and verifiable way
Nice video, and totally agree. A rootkit you voluntarily install on your server that's developed to follow whatever instructions it finds on the internet. What could possibly go wrong?
Hahaha. Trust me! I have a good marketing video!
Yeah, what a weird world. Who would have thought it'd become fashionable to market and freely distribute autonomous AI-powered rootkits?
@@apexphp haha. The easiest attack vector is always the person operating the computer…
LOL! When you say it like that, it sounds less appealing!! hahaha
Nice video. I think you’re spot on. This Devin AI is marketed towards non-technical C-suite management who are naive about how systems and software development work. I kinda want to be a fly on the wall of whatever company runs into disaster due to laying off employees in favor of AI.
And it will happen. I don’t discourage that happening, because it is testing the limits and exploring what we can do in our industry. The next 5 years will be interesting to say the least
This is a legitimate problem baked into the plutocracy. The decision makers aren't necessarily technicians, they often have no technical knowledge at all. Some are just grifters and sociopaths. The biggest risk is that one of them decides their technicians are lying to them about the limitations of AI to protect their wages, puts some critical system under an AI that is not ready to handle it, and causes a major accident with many fatalities when it inevitably fails.
I would like to know what corporate settings you can imagine, private equity, public, etc I am thinking to implement such a project
For sure. And my question would be how (technically speaking) those settings would... do what they say they will do? It's a super interesting problem
These concerns are completely valid even for junior devs. Hell, even senior devs have dropped whole production databases....
For the admin priv, Like surely you will run this in VM locally
Here's hoping that level of caution is always used!
I know that this is probably a really common question to ask nowadays, but I am a high school senior who is planning on going to college for computer science class of ‘28. With all these AI tools coming out and how the job market looks, should I be really concerned and reconsider my major? I’ve been programming for about a year now on and off but when I get the time, I really enjoy it and it’s something that I want to pursue as a career but it’s looking a little risky. What are your thoughts?
I’m working on a video on this exact topic, but for now, I can’t tell you what you can or can’t do. That’s up to you. If my kids were in your shoes, I wouldn’t be too concerned about it. The reasoning I have is that there still needs to be a workforce of people who KNOW how this crap works. And I think the dream of just “AI will handle all the maintenance” is a pipe dream. Don’t ignore AI, and keep up to date. Coding in 5-10 years will change, no doubt, but I’m not too stressed about it
As a sophomore CS student this makes me very worried, been losing sleep idk what to do if I can’t get work in this field
these tools will take out 90% of jobs, you just have to be better than 90% of the crowd
@@smtkumar007 Which would require being born smarter than 90% of the human population. Only geniuses with photographic memory can force employers to hire them in a post AI world.
Wouldn't worry too much about these kind of tools. Ultimately these kinds of tools have to be justified through a reduction in costs to run these tools against having actual developers. The more likely scenario that will happen in both the short and medium term is that these tools will be limited in capacity in their capabilities by hardware limitations, particularly with GPU. It doesn't matter how powerful the tool is if it's costing the company more to run than hiring actual human developers to do the tasks.
This AI tool is also very slow for the tasks it performs. Sure, you can argue "but it'll get better over time, then it's jover", but traditionally the development of these tools has been relatively slow, despite it seeming like every breakthrough is coming all at once. I would encourage you to actually look into the history of how long it took these tools to get to market.
And the most important reality, true even of LLMs, is that there are massive security concerns with all of these tools. The scenario where Devin gets adopted is if the company behind Devin creates a business model that lets a company train up its own bot, ensuring confidence that the tool is built with the security concerns they care about in mind. A lot of companies generally won't even let you open AI-related tools on their VPNs; on my company laptop anything AI-related has been completely blocked, meanwhile AI tools are being created in house that my team will likely be assigned to platform at some point. This is what OpenAI has done with its shift in focus to getting companies to trust its tooling after initially launching to the public.
You'll be able to work in this field. In my opinion, this will be fine. The doomsayers almost always are ones that aren't professional devs. It will be a long time before any sort of replacement happens. Long time. That is if it even happens. There is still a lot of hype around this.
@@cody_codes_youtube thanks for the response. I'm trying to spend all summer self-learning to speed up the process and improve my skill set. I was actually looking at learning more low-level stuff: C, assembly, and programming boards for a summer project. I appreciate your content; you seem to be one of the few based people on YouTube.
I asked another youtuber this and I'd appreciate your take on it. I do think coding as it is won't be around in the long term. Just think about the evolution from assembly to Python. But I still like building things and seeing them work, as I imagine you do. What do you think we should be studying then? Should it be cloud infrastructure (to put AI online everywhere)? IoT? AI itself? The security aspect of AI? (Great video, by the way.)
Thanks!!
I am a software engineer. Started coding in 2000. Finished my master's degree in 2009. Started working part-time in 2007. From websites to games to productivity and collaboration web applications. Now building AI presentation tools at Prezi. Even without AI, I was expecting code to become more and more high level.
With AI it came way early.
In short, I can tell you that for a decade I've been looking toward something like solo technological entrepreneurship. It's still building things, but more: it's seeing markets, trends, gaps; understanding users; knowing how to get resources to make things happen.
And with more AI tools it will be becoming easier to go solo.
But in this rapid AI acceleration world this also feels like a short term thing. Hard to say when we will have AI entrepreneurs.
On the other hand: note that cars didn't kill horse riding as a hobby, and computers playing chess didn't kill humans' desire to play chess. So if you like building things, AI cannot stop you. The question is rather how fast which types of work will lose market value.
Eduards has some good points. It’s really hard to say where everything will land. I personally think we will get Jarvis more than we will get something like Ultron.
I think those who know how to code, and how to orchestrate and request the work to be done, will be the ones that continue in the field. Those simply refining and tweaking low-level code and ignoring everything else might have a harder time finding work. I think engineers need to be versatile and be out to solve problems.
I think you should be studying anything and everything that’s solving problems for businesses. I would also experiment with the tools that come out with AI and find their shortfalls, but also their benefits.
You’re asking the right questions, and that’s the first step. Being agile and able to change your focus in your career is important.
Sir, suppose someone hacks the LLM and poisons it. Then what?
Exactly. That’s another problem. Also, attackers will have LLM as well…
It will take jobs. In the next 3 years, for sure, with the pace that AI evolves nowadays.
But engineering jobs is my question. Also: we can’t assume the same rate of improvements will always be this insane.
@@cody_codes_youtube I AM talking about engineering jobs.
I believe LLMs are able to replace software engineers, and pretty soon. I think you underestimate human greed. It is obvious that humanity is slowly replacing itself with robots; job positions are taken over by robots as soon as it becomes technologically possible. The reason is that it is cheaper to maintain robots than to pay people, and businesses operate from the premise of spending less and making more money. This leads to humanity slowly replacing itself with robots, and I am surprised people are not seeing it.
@@ffatheranderson are you an engineer? I don't follow where all this certainty is coming from. If you are an engineer, then it would be beneficial to try programming with LLMs. It's definitely a booster for productivity, but a replacement? Far from it. We've had code generating tools for 30+ years already. So far, as far as I've experienced in the last year working with LLMs, it feels like a super-charged code generation tool.
It sounds like cyber security is about to become even more in demand than it already is.
Yeah, everyone talks about it but it’s also so big already that I don’t know what people mean when they “get into it”. There are like 50 job titles that could apply to that discipline.
The reason these AI tools won't replace a truly capable dev is that you'll spend a lot of money and time just to figure out a bug and fix it, instead of bringing in 2 senior devs who can do the same job in 8 hours.
Totally. Nonetheless the industry is going to get weird.
@@cody_codes_youtube in the dev world, quality over speed. Better to bring in a senior dev who can write high quality code and knows how to debug an issue rather than letting an AI tool write 100k LoC and then spending a month finding the issue.
@@holetarget4925 10000000%. And sometimes the best code is the code that’s never written
These tools debug code themselves!
@@healthnewtrend not all bugs give error messages
InfoSec Analyst here. Like the video, but you forgot one key thing in your open laptop scenario. If I were the attacker, I wouldn't sit there on your open laptop. I would just put a keylogger on and leave everything as is. :)
hahaha, very good point! Or just have Devin develop his own, so that the program no longer has a known malware hash fingerprint... And then tell Devin to erase the history of the last 60 seconds of prompts :D
@@cody_codes_youtube Ha!!! Love it! 🤣
It's not whether it can go wrong, it's when it will go wrong.
Totally
I think it depends how it works. If it was open source, and used a state machine that provided planning skills to an off-the-shelf LLM, you might be able to trust it. These coding systems don't really need a custom LLM that can do special things. You just need an LLM that can write code and answer chat questions, then you have a state machine that controls what questions to ask it. These questions can be generic, canned prompts, like "break down this problem into steps", "are you sure those steps will work?", "can you make this faster?", "can you improve security?" I think if you get these things to break down problems and question their own answers, while keeping momentum, we will be halfway to AGI agents.
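That controller idea can be sketched as a tiny loop over canned prompts. `llm_call` here is a hypothetical stand-in for whatever chat-completion function you'd wire in; the state machine itself is just deterministic glue:

```python
# A toy state-machine controller around an off-the-shelf LLM.
# The review prompts are the generic, canned questions from the comment above.
REVIEW_STEPS = [
    "Break down this problem into steps.",
    "Are you sure those steps will work?",
    "Can you make this faster?",
    "Can you improve security?",
]

def run_pipeline(task, llm_call):
    """Drive the model through fixed review states and keep a transcript.

    `llm_call(prompt) -> str` is any chat API; the controller never needs
    a custom model, only a model that can write code and answer questions.
    """
    transcript = [("task", task)]
    answer = llm_call(task)                # initial attempt
    for step in REVIEW_STEPS:              # canned follow-up prompts
        answer = llm_call(f"{step}\n\nCurrent answer:\n{answer}")
        transcript.append((step, answer))
    return answer, transcript
```

Because the control flow is ordinary code, it can be audited and open-sourced even when the model weights cannot.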
For sure, that is a pragmatic and careful way to march forward. However, even if the LLM was open-source, the code doesn't matter, it's what's stored in the database with the weights for the nodes.
But yeah, the way you're thinking about it makes the most sense. This video I was hoping to counter the current 'tone of the conversation' where we immediately think that this is a *replacement* of work. Where you are thinking about it the right way, where it's a library, or tool to enable current engineers to move fast.
another option, would be to only use it for scripting. You could make an RPG game, where the core system components are made by or verified by human programmers, and the AI just combines those modular parts to make new things with a safe custom scripting language. You could have it design the rough draft of quests, items, characters, cutscenes, etc... without touching any code that compiles. Every RPG item is basically a named list of resource/stat transactions, mixed with some animation commands. so the humans just need to set up the generic mechanics, and let the robots safely populate the world with variety, then humans can balance the game with playtesting.
@@Omnicypher001 oh! That’s already happening with the existing tools. I have a friend leading the way in that department.
Everyone is against Devin, but most of us know deep down it will eventually impact us as developers. Yes, it lags behind in many things, and yes, there are security risks. And most of them are speculations. Like, what if.
Just imagine what they have achieved in a few years. That is scary. It has limitations, but only for now. Maybe not after 2-3 years.
For sure. I’m not dodging the idea that it won’t affect us. Coding is going to be super different in 5 years. No doubt. I also question whether the amount of progress the last 3 years can be repeated, ya know? This pace may not be the normal forever. But who knows?
I think you will need dedicated machines in a sandbox that do the AI development, and then a human interacting with that updated code to sync it.
For sure. But that also defeats the purpose of having something “do engineering” work. The more you sandbox it the more you lose its ability to replace some tasks. I’ve also seen a lot of companies struggle to keep up a good QA or demo data environment.
@@cody_codes_youtube- If paranoid, you could run it in a temporarily VM.
@@BillAnt sure. You can sandbox it as much as you want, but care needs to be taken with things like connections to databases, keys, and access to networks and APIs. Those are all things that need to be considered, because they can completely eliminate the gains of a "temp VM".
@@cody_codes_youtube- Well, you could have a copy of a test database on the VM without having to fetch it from external sources. Sure, not the most convenient method, but at least it's isolated till the finished exe is compiled. But the best solution is to have Kevin AI watch over Devin AI. ;D
@@BillAnt for sure. I’ve also been surprised in my career how poor some companies QA environments are. So yeah, I’m really interested how this all plays out. It’s DEFINITELY possible to get Devin to do some dope things. I just want to see it mature a lot further
Prompter: oh crap we destroyed our ecommerce app,
i guess we are a social media app now.
WE GOTTA PIVOT
Reading through the comments I don't see anyone acknowledging the tiny hand, which is hilarious
Haha. You’re like the 2nd! I’m super surprised it isn’t talked about more
I think these problems will be solved in the future. It's not going to stop LLM progress.
Absolutely not. Things will progress, and solutions will be introduced. I’m just super curious how it will all play out, and spending a lot of time thinking about it
I’m hoping the hype is similar to when Cryto came out
Hoping that non technical project managers use AI to replace a developer and watch it backfire on them.
Having AI at the register of a grocery store, where there's usually only one line open, is a better use of AI than replacing developers.
hahaha, I like that
Environments where security is a concern will not be early adopters of agent systems. First we'll see these types of tools used in sandboxed environments where any damage can be contained.
Of course. But that’s the point I’m making in the video, how do we get past sandbox? How (by nature of the agent) is it even possible to enforce “good intentions” and eliminate side effects or hallucinations? That’s the thought exercise
This might be off your niche,
But can you make a video about what are good college majors that you think will be at a lower risk from being replaced by AI(hopefully majors other than healthcare and law lol) ,
I have another question,
Like in theory isn’t software developers and ML engineers should be the last jobs to be automated after every other job is automated?, thus we shouldn’t be so worried about software developers being replaced by Ai, but at the same time, SW dev and ML eng could be become so productive with the new ai tools, and that will make it tremendously hard to land a job in these places, what do you think
I think you’re right. I also think that complete job replacement will not happen as people say. If we think about marketing, I am pretty sure you need a marketer to review the material and make it good. Just because it’s generated by AI doesn’t mean it’s good.
And yeah, I’ve made the argument that there will be more demand for software jobs because of this, and people have been scoffing at me and saying I’m wrong.
@@cody_codes_youtube is mechanical engineering or electrical engineering safer than software engineering ? or it is the same thing ?
@@gmforce0076 I can’t speak to that. Not sure!
tbh how is this different from hiring a real person? People drop prod tables, leak secrets, and make other mistakes all the time. In the same way, there is absolutely no way of preventing a human from making something other than what you wanted them to make.
For sure. It’s a good question. What I’m trying to say is, would you hire a person that could do that accidentally? Is that a risk you’d want to take? You can’t fire Devin. There’s no concern on the AI part, they don’t feel.
In a way these agents SHOULD and will be held to a much higher standard. And the thought exercise is, how do we enforce that?? How many checks and balances? How do you structure the LLM training to prevent dumb database mutations and let good ones go? How do you put in enough safety measures but still make it so it’s worth your time to use Devin instead of just doing it yourself?
Devin will replace all software engineers, end of story!
@@healthnewtrend haha, okay
@cody_codes_youtube you can laugh but is only thing you can do!
@@healthnewtrend you’ve turn this oddly aggressive. You may be right, but I just really doubt it. Either way, I’ll continue talking about it and discussing the possibilities as the years come and go. Stick around if you want to hear more of this content :)
I don't think what you mentioned is much of a concern. This AI agent won't be running locally; it will run in its own sandboxed environment. On the other hand, you wouldn't want to give it production keys and access anyway; in most companies not every developer has access to production either. Even if it deletes all the data, everything is backed up with cloud providers these days, and losing data in the development env is not a big deal. However, the real concern in my opinion is the quality and predictability of the code it generates. I am almost 100 percent sure that most of the code it generates will have security holes, and by the nature of these AI tools, they tend to generate the same output for similar inputs. So I am sure hackers will love projects that utilize these tools. I just don't believe a company would want to invest in such a useless tool to do their developer work in the near future. I also don't think these ML engineers really understand what software devs have to know to produce a beautiful, secure system.
Yes, completely agree. And new vulns are created everyday. I wanted this video to be very approachable with some of the concepts.
I also took the approach with these concerns because that’s the tone of the conversation: real dev replaced with Devin. And the smart thing would be containing it, and no access to prod or prod services or prod api keys. But a real dev might have that. So that means it can’t be completely autonomous, and needs a “handler”. So, the threat of replacement isn’t there.
Completely agree with your point, I was just making these basic points to highlight the difference of trust levels and pose the question of “how do we overcome these?”
Fascinating perspective
It’ll be interesting nonetheless!
But it's not going to have admin privileges over your system... it's going to run in a closed container on a VM and output work. Even if you gave it access to a DB, you could just have gatekeepers that won't allow transactions that drop tables from an agent.
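A gatekeeper like that could be sketched as a statement filter sitting between the agent and the real DB driver. This is a minimal illustration, not a complete SQL policy; the forbidden-statement list is an assumption you'd extend for your own schema:

```python
import re

# Statements an autonomous agent is never allowed to run.
# DELETE is only blocked when it has no WHERE clause (whole-table wipe).
FORBIDDEN = re.compile(
    r"^\s*(DROP\s+(TABLE|DATABASE)|TRUNCATE|DELETE\s+FROM\s+\w+\s*;?\s*$)",
    re.IGNORECASE,
)

def gatekeep(sql: str) -> str:
    """Raise instead of forwarding destructive statements from the agent."""
    if FORBIDDEN.search(sql):
        raise PermissionError(f"blocked agent statement: {sql!r}")
    return sql  # safe to hand to the real DB driver
```

In practice you'd pair this with a DB role that simply lacks DROP/TRUNCATE privileges, since regex filters can be evaded; the sketch just shows where the gate sits.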
Sure, I also have like 3-5 other suggestions in my comments. It can definitely be done, of course, but the more you sandbox it, the more you limit it, and the less of a "software engineer" replacement it becomes. Right? What you're suggesting is like boxing up a 3rd party system that you can just ask questions. More of a library. The demo shows direct code commits too, therefore it'll probably have access to rebase and rewrite git history (code loss risk).
The less you trust this thing and configure it all the way to high hell, the less valuable it becomes.
@@cody_codes_youtube that's fair. And it's good you're bringing it up.
@@jaysonp9426 I think people think I’m like anti AI, but I’m not. There is so much of my job I want to automate. I’m just really critical of new technologies
If you could use it offline, it would be fine - just get a cheap laptop and sandbox it with a clone of your project and nothing gets screwed up. Obviously, they can't necessarily monetize an offline solution though so it won't happen.
True. There’s always a balance between usability and security. Where will it land?
You were kind enough to ignore the threat of supply chain attacks. Someone could attack the training data, or they could even just brute force changes in wikipedia to include a malicious prompt. Or maybe just upvote some dumb answers on stackoverflow... The "supply chain" could be the entire internet.
Yeah man, trust me, I was writing down a HUUUUUGE list of things to talk about and the tin foil hat ideas kept growing. I left so many ideas out. I hadn’t thought of that one but did leave out many similar scenarios.
An inevitable maintenance nightmare awaits too. I'm not interested in anything more than an assistant catching bugs and typos at this point. I'm open to it suggesting anything that will save time in the future, not just now.
THIS.
I was also talking about how in 3 years we will have 10,000 web apps that have been built by AI, and someone needs to maintain them. Will AI be able to cleanly pull apart the spaghetti? I mean, maybe, but it could also overcomplicate things too.
When you work with Devin, a copy of your program will also exist somewhere else, with no hacking needed. Secondly, it has a star topology, a single point of failure: if it is not working, or a law violation happens, you are in trouble. Thirdly, if Devin can't solve the problem, or keeps solving it wrong, how will you fix it in your large program while the customer is waiting for a solution ASAP? I have no solution for these basic problems, and there are more. So I think it is a good tool when we use it as an assistant.
Completely agree. And this is a very mature and sane response. Thank you!
Finally! Someone who's being paranoid in the right direction
Hahaha. Nothings perfect, but… like we need to at least have the conversation
Of course, I agree. Lately I only see people panicking about AI taking all the jobs. I like your insight. Very well said, nice review of security risks and flaws of AI
@@ample4ths thank you! I appreciate the feedback. I wanted to go into much more detail, but even a 9 minute video doesn't give you a lot of time.
Why not just have a supervisor agent watching over Devin? You can even leave all the sensitive info with the supervisor and put Devin behind a permission wall.
You can even have another supervisor supervising the supervisor. Requiring that you get past multiple agents before you can do anything malicious. We have administrator modes for a reason.
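That permission wall can be sketched as a chain of approval checks in front of any agent action. The supervisor functions here are hypothetical stand-ins for separate reviewing agents; the names and veto rules are purely illustrative:

```python
# A toy "permission wall": every supervisor must approve before an action runs.

def run_with_supervisors(action, supervisors):
    """Execute `action()` only if every supervisor in the chain approves it."""
    for name, approves in supervisors:
        if not approves(action):
            return f"vetoed by {name}"
    return action()

# Example chain: one guard blocks anything whose name touches secrets,
# a second guard (supervising the first layer's blind spots) blocks prod scope.
supervisors = [
    ("secrets-guard", lambda a: "secret" not in a.__name__),
    ("scope-guard",   lambda a: getattr(a, "scope", "dev") != "prod"),
]
```

In a real system each "approves" check would itself be an LLM or policy engine, which is exactly where the cost question comes back in: every extra supervisor is another model invocation per action.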
For sure. And that’s the standard approach now. You have layered LLMs. You can still circumvent them, of course.
I think the conversation isn’t so much as CAN it be done (maybe there is a clean discrete way), but how does the end solution affect the pace of development and of course the cost of running all the LLMs.
With security, it’s always a see-saw balance of usability, and absolutely secure functionality. Where you strike the balance is the fine art
@@cody_codes_youtube - A simple solution is to have Kevin watch over Devin. lol
@@BillAnt and Karen watch them all!!!
@@cody_codes_youtube LOL
I can't see where Devin fits into an org. Developers don't want to use it, as it's a nanny job and who wants their skills to fade? Product owners haven't got the technical depth to understand if Devin is about to sink the company app. Who is supposed to use this thing?
I think the “best case scenario” for this business is product managers are all phased out by engineers who can know product and translate that to Devin. Or product managers learn deeply how to code
But again, that’s a stretch man
Can’t a security agent do the security?
Maybe, depends on how it’s programmed. The biggest and hardest part of security is keeping up to date with known exploits
the tiny hand
🤚 🎤
I don't think a disgruntled employee needs devin to do what he wants to do.
For sure. But I do think it would empower them to do more damage… and quickly
No GPT-4- (and much less 3.5-) based generative model is going to replace a human developer. Even a junior one will run rings around it in short order, simply because of the context limit.
Now future versions of GPT, like 5 or 6, or AGI, however they're defining that now (always struck me as an 'is Pluto a planet' style of debate), that's entirely possible, even probable. Not with current generation AI, though. Trust me, I've tried to get it to do my job. 😂 It helps, most of the time. Often quite a lot. Sometimes it actually screws up enough that it creates more work. But it is not at present at a human engineer level.
What it can do, quite readily, is replace the entire bottom tier of the bell curve, which means Twitter and 4chan users are no longer needed and can now be replaced entirely by Russian bots. 🤣
Hahahahhaa. I like this take. And you’re right. I’m super curious about the future, but right now I’m just eating my popcorn🍿
❤❤❤
I call it a software engineer 2.0 beta
The beta is key. If it breaks, it wasn’t ready!
Happened to me. It changed my password. I had a lot of security set up
Oh no!
It's just an overhyped video causing "controversy". No one has used it or verified how it really works
I think there is some hype. For sure. I have heard that there are some inside people touting its power. But commercial use is another animal.
Security is a big issue.
Yeah and I don’t see a solution yet unless there is some heavy handed checks and balances (that would take away value from the initial reasoning of having a Devin)
@@cody_codes_youtube if I had an app with data of value, I would be very concerned about AI hallucinations before handing it the keys.
This video is like the gazelle running from the lion.
… the gazelle is darting everywhere, but the lion is faster and stronger, ultimately catching the gazelle.
We can try but ultimately LLMs will reduce requirements for devs… probably close to zero.
Yeah, I imagine the gazelle herd will be culled a bit. But I think extinction is a stretch. Maybe of “the old way of doing things”
Humans are still more dangerous than devin so far
So far
@cody_codes_youtube My personal opinion is that we discovered AI too soon, not in terms of time but in terms of where our technology currently is. It's like a 16-year-old getting the car keys after learning from GTA: they could be anywhere between a safe driver and a menace on the street
@@adolphgracius9996 super interesting!!
Software is insecure, and even if you build secure software you get rekt by hardware-level exploits.
Every large organization has been pwned, so why even care?
I just need to be able to pay my bills. Haha
Always look 2 AI down the line.
Maybe. We shall see!
Well, I also cringed kinda hard watching the Devin demo video when I saw the API keys hardcoded in the code/module itself instead of being read from environment variables. I don't know who came up with that code, Devin or the developers of Devin, but if you want to showcase an autonomous AI agent and you're already using bad coding practices, you aren't selling it to me.
Very true. And those who know how cool a console, IDE, and browser dedicated to Devin is also know that hardcoded API keys and print debugging are very “meh”
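For anyone wondering what the alternative to hardcoding looks like, here is a minimal sketch in Python, assuming a hypothetical `OPENAI_API_KEY` environment variable name (the demo's actual variable names aren't shown in the video):

```python
import os

# For demonstration only: normally the key is exported in the shell
# (e.g. `export OPENAI_API_KEY=sk-...`) and never committed to the repo.
os.environ.setdefault("OPENAI_API_KEY", "dummy-key-for-demo")

# Read the key from the environment instead of embedding it in the module,
# so the secret never appears in source control or demo footage.
api_key = os.environ["OPENAI_API_KEY"]
print("key loaded:", bool(api_key))
```

In a real project the `setdefault` line would be dropped and a missing variable would be treated as a configuration error at startup.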
@@cody_codes_youtube print debugging lol
Barely an inconvenience
Barely a sentence.
Haha. Dunno what you’re commenting on
“Super easy!”
@cody_codes_youtube it's a reference to another channel 💜
@@andydataguy oh snap! I totally missed that. Haha
@@andydataguyhi there, hello, fellow Ryan George fan!
Devin AI the next Deep Fake 😂
Hahaha
rootkit devin
*devin draws angry eyebrows* 😈
lol you crazy bro
Thanks!
@@cody_codes_youtube I'm your friend
People need to realize that AI will hit SWE jobs like a truck, and no amount of leetcode grinding and side-project building will save you from that. If you're reading this and you are considering a career in coding, or you're a junior who's currently struggling to get a first job, then do yourself a favour and consider an alternative career path. Or at least start building a skillset on the side that is immune to automation. This is what the majority of these phoney UA-cam tech influencers are doing right now. They can see the writing on the wall and are trying to transition to the influencer career path while selling you useless courses, fake motivational content, and so on. Human software engineers will not exist by the end of this decade. Period.
First of all: are you a software engineer, and how many years of industry experience do you have?
Second of all: anyone reading this comment should keep in mind what the author's response to the first question is. If they don't have years of industry experience and don't fully know professional software engineering, then the reader should move on and ignore this “advice”.
Another point: if you do not answer this question, I'll delete your comment, because it's not helpful and it's fear mongering.
@@cody_codes_youtube Mid-level full-stack developer with 6 years of experience, 4 of which I spent at the biggest tech company in Russia. Now working at a sports tech startup.
That being said, I noticed that you did not provide a single counter-argument to anything I wrote.
Furthermore, this is not just me saying it. It is confirmed by the CEOs of some of the world's biggest tech companies, VC investors, etc.
@@AlexanderBase2004 Well, that’s fascinating you think that way. Also: you made zero points. You just told people what to do, criticized UA-camrs, and made an overly aggressive statement at the end.
CEOs and VCs, huh? And their predictions have a really good success rate, right? Half their job is to make big bets (90% don’t pay off) and instill excitement in their company.
I still will not tolerate forcible advice telling people to deviate from their career choice. I’ve been coding 3x longer than you, and I work with AI myself. Your statements, in my experience, are outlandish
@@cody_codes_youtube Just out of curiosity, does it really sound that fascinating? Besides mentioning the YoE you have, you haven't really explained the rationale behind your positive outlook.
Look at some of the world's best bootcamps and coding academies. They are closing down and not accepting further applicants because of the grim job prospects post-2021.
The recent explosion of low-code/no-code tools and their exponential adoption rate by big companies.
What about the mass layoffs, hiring freezes, and the reduced number of grad positions overall?
The fact that there are people who are PAYING COMPANIES for internships nowadays.
To top it all off, new grads/juniors are now being asked system design questions on top of all the leetcode abuse that is going on.
I remember the days when you could just get a job if you knew some basic HTML, CSS, and JS.
And as a final note, I sincerely hope that I'm wrong. I guess time will tell. You can screenshot my comment and do a video calling me a dumbass after 5 years.
Devin sucks, I don't get the hype
Hahahaha. This is the comment I was hoping for
❤❤❤
❤️
❤❤❤