Suggestions for the next video? Let me know!
Now is the time for individuals to think independently in the AI world and unleash their own creativity, rather than relying solely on others’ tools.
The mission of an SWE is to build software, not to write code. We build software, we don't give a crap about the tools we have to use to do so.
It might be the mission of the industry, but it's not true that "we don't give a crap" if you're referring to the people in that industry. Many, and dare I say even most, developers code primarily for love of the process.
@@jason_v12345 Then tell me how many programmers you know who still write their code in assembly language. How many game programmers do you know who build their own game engine instead of using Unreal Engine, Godot, Unity, or even GameMaker? What I meant by "not giving a crap" is that, as SWEs, writing code for its own sake shouldn't be our main focus. Of course code matters to us: it's our primary tool for actually building software today, it's what we learned to get a job in the IT industry, and we have some bond to it, but we do it primarily to build software. The mission of the industry should be our mission too, because that's the reason to write code in the first place. Do you build a wooden house just because you like using a hammer? I don't think we'll ever reach the point of writing zero code either, but coding tools in the future will be so polished that we'll just use a little code to give instructions about how we want the software built, and 95% of it will be generative. Code is awesome, but let's not forget it's a tool. It's a cool tool, but it's just a tool, and what you build with it is what matters. Just keep that in mind.
Before I worked with LLMs at a deeper level, I was extremely optimistic about them. Now, I am still optimistic, but I have come to realize that they are not as magic as the media claims they are.
I am working on creating a RAG system with one of my company's clients in the insurance industry. The system has involved creating a chatbot with deep vertical knowledge, knowledge about particular cases, and automated conversion of unstructured data to structured data. Trying to pull this off has been very difficult, and has made me see more clearly what the fundamental flaws of this tech are.
Please, elaborate on the flaws
@@itech40 Regarding RAG, the saying "Garbage in, garbage out" applies. My client was essentially trying to build a smart insurance AI chatbot out of garbage documents. You HAVE to have good, structured data to get the best results. Simpler chunking strategies (like fixed chunking) will also result in suboptimal responses, at least for my use case. If the knowledge base documents do not have the information you query for, you're going to get negative responses or even hallucinations.
I guess these things are not new. RAG frameworks have all the flaws that LLMs have, plus the problems that come with chunking and retrieval. So basically, you want to make sure you're choosing the best chunking and retrieval strategies for your specific project.
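In case it helps anyone, here's a minimal sketch of the kind of fixed-size chunking I'm talking about (the function name, sizes, and sample text are just illustrative, not from any particular framework):

```python
def fixed_chunks(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Naive fixed-size chunking: split into character windows with a small overlap.

    This is the strategy that gave me suboptimal retrieval - it cuts mid-sentence
    and ignores the document's structure entirely.
    """
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + size]
        if chunk.strip():
            chunks.append(chunk)
    return chunks

# Illustrative usage: chunk a (made-up) policy document before embedding/indexing it.
document = "Flood damage is excluded unless the flood endorsement is attached. " * 40
for i, chunk in enumerate(fixed_chunks(document)):
    print(i, len(chunk))
```

Anything structure-aware (splitting on sections or sentences, keeping headings as metadata) is likely to retrieve better than this, which is the point about choosing strategies per project.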
AI is not going to kill jobs suddenly. It will be 5%, then 10% ... then more. This is not just layoffs, but future job openings that will not happen. It is going to be the proverbial "boiling the frog" - slow enough that society gets used to it, and governments do not feel compelled enough to take action (universal income, minimum wage, etc.)
(The worst hit would be college graduates, especially those with student loans; my heart cries for them)
But make no mistake - over a period of time the devastation will be significant. And it is not just "white collar" jobs that will be affected ... construction robots, self-driving vehicles are coming too.
People are burying their heads in the sand on this issue. They don't want to acknowledge where this path leads. And it's happening crazy fast, too fast for people to comprehend. This does feel like exponential progress with coding
Even if governments are on the ball, providing some version of universal income, I'm not convinced we're looking at a pleasant outcome. I personally cannot imagine anything more depressing than a world in which humans have been reduced to only consuming, leaving all production to the machines. Wall-E seems like the best case scenario of AI, and that's not a world I find all that appealing.
Always trust politicians to ban things or regulate them if it involves losing votes. This is one such invention that will lead down that path. For example, in India, the government has said outright that it won't allow AI self-driving cars into the Indian market.
The only areas where governments around the world won't be able to stop AI, if it becomes even remotely valuable without human involvement (which it isn't as of now), are places like defence or geopolitical competition.
Let's see where it all ends.
@@SwolePatrol_1969 I noticed that the people who bury their heads in the sand are senior devs, which is understandable because coding is what makes them feel valuable and important, but AI is going to significantly reduce their importance. Instead of acknowledging their fear, they pretend that AI will never be as good as them.
@@bruhmoment3731 Well, firstly, many senior devs are terrible at their jobs and have very little of an actual refined skillset, so maybe they could get replaced. But the job of a senior dev is ideally supposed to be a *general* intelligence job, synthesizing knowledge across many domains, excelling a lot in ambiguity, while also having the skills to dive deep and get extremely low-level when necessary. So until we have Artificial General Intelligence, I don't think skilled senior devs should be worried about their general intelligence job getting replaced. Augmented, certainly, but there's a far cry between a senior dev augmented by AI tools and a non-technical position directing an AI and praying what comes out is optimal (or even functional).
As a Java developer, I've rarely gotten to work with new technologies… most of the work I do even now in 2024 is still running on Java 8, which is 10+ years old. All that to say that although these new AI tools and code assistants provide us a lot of help, they're not rendering developers useless anytime soon. It's just a risk that the companies can't take… just like the new languages, libraries and frameworks that come out every day… they're nice and all, but at the end of the day they're not widely adopted.
Hi Jorge. I'm new in the tech world, I'm a beginner in programming, and I've already studied the basics and created and deployed some websites. I'm really worried about AI replacing me in the future. I haven't even gotten my first job... Is it over for beginners? I don't want to put pressure on you, but I'm kinda (VERY) desperate, and I have no one to ask for help or some light in this darkness of fear.
@@anonimoanonimo-wb5gk Even I am on the same page
@anonimoanonimo-wb5gk @Pdfnaega you might want to check out some of my other videos, including one directly for students on if they should major in CS, when are tech jobs coming back, etc.
My two cents: If you are a junior dev right now the market is very tough. Nobody really knows how all of this stuff is going to pan out over the next few years. It's all very frustrating as nobody really knows which specific things are going to be good long term paths. I will just say that the more skills you acquire, the better your odds, but nothing is certain right now. The tl;dr for the vids is that networking is key - find a mentor, find a community.
@@anonimoanonimo-wb5gk hey brother I know this is 2 months later, but I have one piece of advice. Get in now while the technology is still relatively new. Do projects and get skills in areas that you enjoy and that you think will grow in the future.
But my number 1 advice is: NETWORK. This is more important than any other thing you do. I am a junior dev, only been working for 2 years. The thing that got me my job was not my resume, it was talking to the right person (my boss) at the right time. I was a server at a restaurant while I was finishing my degree. I got to meet a lot of different people that way. I just so happened to serve my now boss. We started talking about software, and later on he invited me to do an interview.
Keep yourself sharp, work on your social skills, learn how to talk to people. You want a job? This is what will get you one. And then figure out a way to meet and get in front of a lot of people frequently.
Started using AI to simply aggregate information from the web into my company's datasets last Wednesday. I can guarantee that this tech will cost jobs because of the amount of work one person can now do easily.
A year ago there was no such thing as text to image. Now there is text to video.
(Not a dev) I think folks were trying to code with AI ~18 months ago, now it appears to write code by itself, albeit as noted, simple code. At the pace the models are learning, I’m hard pressed to see how this tech is not disruptive in a very short period of time.
I’m not wishing ill fortune on anyone, however this is the fastest tech change I’ve seen in a long life (by that I mean change AND adoption) so caution and planning for alternate possibilities would seem to be wise.
Why are machine learning and pattern recognition being called "AI"?
There is the curve of diminishing returns. It's an economic principle that hasn't been beaten; everything has a limit.
@@fluxonite Well, good luck pushing the tide back out.
@@andresarias5303 AI adoption in the US is pretty pathetic. And the models, up till now, were not that user friendly. I'm confident we're a long way from the limits on returns, or from even approaching diminishing returns.
@@fluxonite Exactly, this is not even AI 😭 AI propaganda and false marketing to make sure investors stay
Layoffs are going down, basically it's the calm before the storm.
Going down? Or increasing? Check the reports, bruh. And it's not because of AI but over-hiring and recession, mostly in America and Europe
I tried using AI autocompletion stuff and it blows dude, like it's cool that they can deliver boilerplate for web components I guess, but anything else it's comparable to consulting stuff on stack overflow, and if you're a senior dev, you weren't looking up stuff that much. it's in fact much quicker to do the stuff you want to do yourself. at this point what needs a reality check are LLMs and their obvious limits. "managing AI" sounds like an incredibly roundabout way to do stuff!!!
Which ones did you try?
@@ChangeNode Claude, Qwen and Copilot
I agree. The hype is largely unfounded and the AI bubble will burst soon enough. Those at a junior and lower level seem to believe it's "magic" and overstate its use-cases. This is being driven by capitalists, and the end result with corporate influence will not be a net positive for humans.
The difference between using AI for development and managing human developers is that human developers actually learn from your feedback. Well, it may sometimes take multiple iterations, but then the new knowledge actually persists and is taken into account. With LLMs it is a problem: there is no easy way to add something to the parametric knowledge of a model, fine-tuning doesn't inject the knowledge deeply enough for the model to take it into account in different contexts, and you need to inject it into the prompt, which is limited, so you need to use some sophisticated RAG technique, which is a significant design challenge
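To make the "inject it into the prompt" part concrete, here's a minimal sketch of what that looks like (the retrieval step and all names here are placeholders, not any specific library's API):

```python
def build_prompt(question: str, retrieved_facts: list[str], max_chars: int = 4000) -> str:
    """Prepend retrieved knowledge to the prompt, since the model's weights won't retain your feedback.

    The character budget stands in for the context-window limit: anything that
    doesn't fit is invisible to the model, which is exactly the design challenge.
    """
    context = ""
    for fact in retrieved_facts:
        if len(context) + len(fact) + 3 > max_chars:
            break  # prompt budget exhausted; remaining knowledge is silently dropped
        context += f"- {fact}\n"
    return (
        "Answer using ONLY the facts below.\n"
        f"Facts:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )

# Hypothetical usage; 'facts' would come from your vector search / RAG retriever.
facts = ["The deploy script was renamed to release.sh last sprint.",
         "Staging uses Postgres 16, production still runs 14."]
print(build_prompt("Which Postgres version does production run?", facts))
```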
I think you're being too optimistic here. Here's a thought: Instead of a Dev Manager, what you are is actually a Dev Janitor - cleaning up the code & fixing up the bugs which the LLMs generated.
You're not managing anything, here. You're just cleaning after them.
lol... that's exactly how a lot of dev managers feel today. Janitors running from one mess to the other.
The idea behind ASI is self-play and self-learning. It's been proven repeatedly that by letting an AI system follow a series of rules, it can come up with better and better strategies to accomplish goals within that system of rules and far surpass the human mind's ability to grasp the employed strategies. Look at AlphaFold, for example: they found 250 million new proteins in a single day.
Alpha Chip is another example of an AI virtuous cycle. The AI designs its own hardware, and that hardware provides new efficiencies for new AI systems, wash and repeat.
We're in a hyper exponential improvement curve right now.
It's difficult for me not to think that Google's legal challenges right now are an attempt to slow it down, because it's disrupting science by catapulting progress way more than anything we've witnessed in the past.
@annieorben yeah, the curve is very interesting. They eventually wind up flattening out, but the timeline is up in the air. Data? Energy? Some other raw resource? Something fundamental to the nature of intelligence? Or some kind of crazy self replicating robotics that winds up just limited by physics?
Crazy times.
So how do older devs survive through all this? How, in my mid-thirties, do I keep a job until retirement? Pivot? Go into building houses because the young guns don't want to know how to use their hands? Go back to school?
The short, unsatisfactory but honest answer is nobody really knows.
Either the BigCos investing in AI are right, in which case it's probably going to hit jobs, or they are wrong, in which case the market will take a huge, huge hit, which probably hits jobs too.
My guess is that eventually this will sort out to a new level (a bit like how the industrial revolution did lead to better living standards) but with massive political and social changes (eg analogous to most of the 20th century technological change).
I mean, it's kind of like trying to figure out how to explain to someone born in 1900 what their life is going to be like. Uhh... it's a lot.
EXCELLENT take. Going to save this one.
Thanks!
IBM still sells OS/z mainframes in 2024 and it is a big part of their business. I don't think it will be as fast as people are saying.
OS/z mainframe in 2024 .....haha
Oh legacy is totally a big thing. We did a project involving a CICS backend just a few years ago. I did a vid on legacy stuff a bit ago that nobody watched lol ua-cam.com/video/fpU1yg-BMMQ/v-deo.htmlsi=x7ByO17kBoqHFQ9H
FWIW I have seen LLMs do very, very well with legacy stuff that has a ton of material. I got very good results using it to help write PL/SQL stuff for PostgREST services a year ago. Vs say very poor results w/Svelte & SvelteKit just because it's not very popular and there just isn't very much training data.
Gotta get my VC fund song going, need to hit the charts!
Search results from LLMs are better while generic search results seem to be purposely bad... Amazing!
But it's just not true that v0 is "magic": it is quite a simple GPT wrapper. It saves some copy-pasting work, which is nice, but it's far from being something revolutionary. Still, in my experience AI tools are very helpful for finding information or coming up with an example of API use in a particular language/framework, but whenever I try to actually DO something with it, even a simple thing, I don't feel like it's more productive than doing it myself: it eventually gets stuck and I need to actually think through the code to get it unstuck, then after a few messages it repeats errors that were already fixed, etc.
This is exactly the type of issue that I have come across using LLMs for coding!! Even for a simple application the model starts hallucinating and starts forgetting or repeating stuff we already discussed a few prompts back, introducing errors and bugs. If you try to develop something blindly, you will end up with a big mess
There were leaders in the world who rejected the knitting machine because they were afraid that people would lose jobs. They did, but eventually everybody stopped caring.
Accept reality. But personally I think Computer Science (etc.) graduates are smart enough to get into different career paths.
The real issue is that as AI gets better, there won't be new career paths it can't do better than humans for humans to switch to.
@@tracy419 I disagree. Manual labor will certainly be needed. Simply helping robots do their job or things like that.
@@kingofmontechristo sure it will.
But what do you think happens when the workforce has tens of millions more people looking for tens of millions fewer jobs?
What do you think that does to wages and benefits?
It's not like most manual labor jobs can't be learned by most people if the incentive for doing so is there.
@@tracy419 Obviously people would be broke unless the government creates crazy regulation and something like a universal income, but again, I think it is pointless to think about these things because humans will always find a way to change unnatural situations. I am sure that in that world there would be tons of criminals who simply want to fight back, because the idea of humans not having to work would actually create a lot of fundamental issues
@@kingofmontechristo personally, I think the potential negatives you point to are precisely the reason to think about these things now.
If we can find ways for people to find purpose without work, we can help eliminate a lot of the negatives.
The world will look vastly different when my kids are my age in the next few decades.
Kinda exciting and scary at the same time 😄
I can't wait to point and laugh at everyone who thought that A.I. couldn't take their job. I'm really looking forward to it!
Yeah, there's a quote floating around along the lines of "what jobs would you bet me AI can't do in five or ten years"
FWIW you might find my vids on AGI ua-cam.com/video/lorNRMBo_PA/v-deo.htmlsi=bldyCV8RbkHnZZeP and UBI ua-cam.com/video/IIVDLCDeZT8/v-deo.htmlsi=t6dYRKCW_k3P9tYb interesting
And you are giving away all the credit for your work to the machines as well.
So funny, I'm a professional VFX person, so I know what AI can actually help me with in VFX; it feels dangerous and encourages bad habits. BUT I'm using Cursor to write Python scripts and do database work to a level I could never accomplish before. Our roles are switched here, where you were needing help in DaVinci. Maybe the AI just gives pros from one area the ability to RAPIDLY increase their abilities in edge-case work related to their main skill. So the person has high skill in one area, and the combo of that skill along with an AI can make them really useful in other areas.
Yeah, I keep coming to video stuff in this really backwards way. Started more from a game engine perspective and now I'm backing in to the video stuff so I keep being confused by the weird legacy physical/film jargon and translating it to PBR stuff in my head.
The LLM stuff mainly seems to be helping me sort stuff out more quickly. Very little of it is doing stuff I just flat out couldn't do, it's just saving me a lot of time eg on the dev side beating my head on docs/forums/etc. But w/o the understanding logic and "need" on my side it doesn't mean anything. It all gets very meta and weird.
A year ago people were using these LLMs as a helpful tool to write code ( a replacement for StackOverflow at best). Fast forward a year, I have a friend who isn't a coder writing a full React app with Cursor integrating Shadcn.
You'll still need good developers for the really hard stuff, but that raises the question of how many Devs do you need on the team if you only need them for the really hard stuff. I see a future where more Product managers and UX/UI designers start learning to use tools like Cursor reducing the amount of tasks that would be assigned to developers.
Code will be natural language in 5 years. We need to accept this. Coding will not be a special skill, almost certainly in 5-10 years time. Things like CRM will be the first to go as they're essentially variations on the same system.
@@WillyJunior Taking the time to sustain an interest in the field is the skill. GPT didn't 10x the number of authors. Authors already select themselves as those willing to see something through to finishing and editing. I love the democratization of this all, but the individual has to care why a deployment failed somewhere; if they just don't want to think at a given technical level, GPT will make no difference in one's life.
Unless the future is just landing pages made with React. I'm speaking to the other 99% of business needs
The nature of the work is going to be very interesting. Lots of folks doing stuff like landing pages, SEO/SEM, etc etc vs say deep CS or physics or whatever.
FWIW I find the nature of work on Star Trek eg the science folks working with the computers to be much more interesting of late, lol.
Too much bass in your audio. Look at it
Hmm. An interesting retrospective. In my personal experience, Copilot and ChatGPT are the only models so far that have been useful at all. Other models either take forever and return nothing, or output gibberish that just repeats my previous code.
My biggest takeaway is that Microsoft is doing some very special proprietary magic to handle context. Gathering and correctly using context seem to be the quiet revolution that has become as important (if not more) than the transformer itself.
I'll have to give cursor a try. I currently am having all kinds of issues with integration to IDEs. It's honestly mind-blowing what a shit show integration still is.
Currently, Sonnet 3.5 is better at least in some ways.
In some ways, o1 is better. It feels like they are all different authors who are better at some narrow things.
Like o1 writes good plans but can't really edit code. Well-prompted Sonnet 3.5 seems to be surprisingly good at doing good-looking web UIs.
However, how those models are integrated also makes a lot of difference. In Cursor they are provided with good indexing of the code base, which allows the model to pick up things not documented anywhere. Some people prefer how Aider works. I liked Claude-Dev in VSCode, but the problem with that, when I tried it, is that it eats tokens super fast.
All in all, it's a wild west for now. No clear winners and new things coming up weekly.
I have just written my thesis on compressing an LLM (Mistral-7B). I obviously had to evaluate the model (I used the MMLU dataset) and it seems that everybody evaluates them with few-shot prompts (pass correct examples of the task to be solved to the model before making the request; e.g. 5-shot means pass 5 examples of the solved task before putting the query itself). I can say that the most challenging part was getting the prompt right when reimplementing the evaluation from scratch, and that greatly affected the result (10% less accuracy).
So from what I have seen, I completely agree with your assessment that correctly using context is extremely important.
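For anyone who hasn't built one of these, here's roughly what a k-shot multiple-choice prompt looks like (the layout is illustrative, not the official MMLU harness - the exact formatting is precisely the fiddly part I mean):

```python
def build_kshot_prompt(examples, query, k=5):
    """Build a k-shot prompt: k solved multiple-choice examples, then the real question.

    Small formatting choices (spacing, where 'Answer:' goes, letter labels) are
    the kind of detail that shifted my reimplemented accuracy by ~10%.
    """
    def render(question, choices, answer=None):
        lettered = "\n".join(f"{letter}. {c}" for letter, c in zip("ABCD", choices))
        tail = f"Answer: {answer}" if answer is not None else "Answer:"
        return f"Question: {question}\n{lettered}\n{tail}"

    parts = [render(q, c, a) for q, c, a in examples[:k]]
    parts.append(render(*query))
    return "\n\n".join(parts)

# Illustrative usage with made-up items:
shots = [("2 + 2 = ?", ["3", "4", "5", "6"], "B")]
print(build_kshot_prompt(shots, ("3 + 5 = ?", ["7", "8", "9", "10"]), k=1))
```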
Cursor is doing a bunch of stuff behind the scenes to get the LLM to work w/the IDE fast. It's pretty interesting, much like some of the brand new stuff eg Replit and Bolt. There are some podcasts with that team I found very interesting getting into the hacking of the LLM integration w/things like partial prompt caching etc
I consistently find the older/more established the better the results. Vanilla JS/CS, React, Postgres and PL/SQL stuff is all very, very good. I really like SvelteKit but most of them are really terrible at it because it's just too new. Which makes it very interesting prioritizing what stack to pick...
@@ChangeNode Yeah, like IDE support was crucial before: syntax highlighting and autocomplete. Now LLM support becomes a thing.
But Cursor, GitHub, Sourcegraph kinda try to solve it.
I love how many different experiences I'm seeing here. For what it's worth, I was previously working on traffic-control for embedded controllers storing signal data. So, it was all in C++. Now, in my job I'm working on webservices in Java and C#, and on the side I'm learning about game-engines in C++.
I thought Java was dominant in webservices, so I was surprised to see that O1, at least, was giving me better results with C#.
I got the feeling that many of the Java frameworks have changed too much recently. There's also a huge split in how to approach the web - reactive or traditional - in Java, and this seems to confuse the AIs. They kept changing their solutions between different web clients. Maybe it says something about the state of the community and the quality of code out there???
14:38 Don't forget the development of AI is exponential. It might not have changed our entire world in a big way yet, but it absolutely will in the near future.
@bruhmoment3731 - as I put in another comment - yeah, the curve is very interesting. They eventually wind up flattening out, but the timeline is up in the air. Data? Energy? Some other raw resource? Something fundamental to the nature of intelligence? Or some kind of crazy self replicating robotics that winds up just limited by physics?
I'm not sure about the exponential development curve. I think we've reached the limits of current approaches and it will take some decades for truly intelligent systems. I think "techbros" are hyping it up to secure insane amounts of investment (which the investors probably will never see again). But this technology is already deployed to censor/auto-delete "unwanted" comments etc.
Where are you getting the sense that it's exponential? If anything, it already shows diminishing returns, and every model is converging into the same thing, because no matter what you do, or what fancy model architecture you're using, you can't make a meta-algorithm whose sole purpose is approximating your dataset somehow transcend what's in your dataset. And we have no more data to feed these things.
Let me introduce you to the sigmoid curve; it looks exponential for quite a long time
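A quick numeric toy example of that point (nothing to do with any specific AI metric, just the shape of the curve):

```python
import math

# On its left side, the logistic curve 1/(1+e^-x) is nearly identical to the
# exponential e^x; only later does it bend over and flatten toward 1.
for x in range(-8, 9, 2):
    sigmoid = 1 / (1 + math.exp(-x))
    print(f"x={x:+d}  sigmoid={sigmoid:.5f}  e^x={math.exp(x):.5f}")
```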
@@szebike I finished this book amzn.to/4eH7LF6 recently and .... yeah. 😬
The time is coming where anyone will be able to write an application with a prompt. You talk about changing roles, a developer becomes the conductor of the orchestra instead of one of the guys playing a violin. Sure that's on the roadmap. But it won't stop there. AI Agents will work together like people in a company. And at that point, why do people need software applications anyway - you get what you need to get done using the AI, not using a specific application for that task. AI *is* the application. All applications.
Hey....what a concept.
That's the acid test I'm waiting for, and stuff like Replit or Cursor isn't there quite yet.
Stuff like v0 does make me wonder if maybe the first/intermediate step will be dynamically generated UIs that talk to more structured backends. Hmm.
This is an oversimplified way to look at software development. Saying AI would replace developers is laughable. Let's look at it this way. The type and complexity of software we build has moved from simple websites and applications to complex applications. And this cuts across different domains of software development: game, web, mobile, etc. Looking at how things are, we keep developing more complex systems and the like. LLMs basically need to be trained, and then, based on the training data, you use them in the domain they've been trained on. So, LLMs would have to catch up with how we develop stuff. And this is even assuming that LLMs become perfect in what they are trained on, which they aren't.
Yes, he kinda hinted at your statements when he said "people will become more of AI Managers", but his talk wasn't about AI hype and people losing their jobs, so it was good not to dwell on it, as we have mostly heard enough of that topic.
Yeah, I unpack this a bit more in my AGI video ua-cam.com/video/lorNRMBo_PA/v-deo.htmlsi=gEkhcKMr-rsFg5h0 - it's about tasks, not jobs specifically.
I mean, when they added spell checking in Word it was a) awesome b) a huge time saver but it's hard to say how many editors or copy proofers lost their jobs because of it. Same thing when they added refactoring tooling to IDEs - super cool but I don't think anyone could point to a single job loss because of it.
Give it 2 years and you will be able to 1 shot an app. All these limits that we see today are going away.
Just like quantum computers... They're just around the corner... Give it two years and encryption is going to be a myth
I have a slightly modified version. I think that the LLM being able to interactively build the app and not break anything with each revision is likely, which is one version of a one shot.
One of the benchmarks I'm looking at is when we kind of don't have apps anymore, we just have an LLM that dynamically generates interfaces for us.
Part of what I'm finding interesting is trying to fill in the gaps for what we will see between today and scifi...
FWIW I believe that there are security algorithms that are intended to not be easily cracked by QC - stuff like integer factoring is trivial for a QC but other stuff isn't. My understanding is that several years ago the CS security folks just started with the assumption that the big state actors already had QC breaking in place but it was secret (eg Ultra) and so they needed to get on it.
Nonsense. How many of you are actual experienced software engineers and how many are "tech bros"? 3 years ago these hyped-up tech bros were citing how "the blockchain" (an append-only Merkle tree) and NFTs were going to make fiat currency a thing of the past in 5 years. Tired of the capitalistic hype angle.
People try to compare AI to computer programs. That's not a fair comparison though. AI will never be 100% accurate, it's like a human. We aren't 100% accurate either. AI is getting better, we just barely (sort-of) reached level 2 on the 5 tier scale to AGI. I don't think people are going to be super impressed with AI until we get at a minimum level 3 (agentic, meaning the AI can create its own objectives), and quite possibly level 4 (the AI can do any cognitive task a human can do.)
As you said, it's still impressive, and there's a lot it can do. You just have to acknowledge its limitations for the time being. You can't trust it, for coding or for accuracy. It's still immensely useful though as long as you fact check it. Treat it like a politician and you'll be great. :)
AI doesn't have to be 100% accurate to be able to replace humans. It simply has to be as accurate as the human that would be doing that job.
AI is way more accurate than most people....
@@mohammedalrawaf9622 in some ways, sure. Not for the majority of tasks though. I use it daily, and I am vastly more accurate than the AI most of the time in the ways that I use it.
0:55 - ffs... we don't care about your music preference, just get to the point.
The less people understand about software engineering, the more they buy into the "AI" hype bubble. 🤣🤡
coping...nice video though, subscribed
So if AI automates everything and we're all mostly out of jobs or no longer needed, and thus have not much of a paycheck to purchase things, then who will be buying the products from the companies? Hope there is a plan for AI to buy the products too, that's if they decide to take a break from the money printing, of course, for the hell of it
That’s more or less the topic of this video UBI and AI Simplified
ua-cam.com/video/IIVDLCDeZT8/v-deo.html
With developers engineering there own demise I can see humanity doing the same thing
There != their