@@madontherun There's no profit for western nations in funding Ukraine, which also bleeds funds untraceably. Seems there's a model where this makes sense, because it keeps happening. Complexity just masks it.
@@parrotshootist3004 But Ukraine is a sunk cost that has to be paid, as not paying it is even worse. We have an alternative to AI. I am all for research and academic progress, but trying to commercialise LLMs feels either premature or like an active con.
@@madontherun If you really think that then I'm pretty sure you haven't used them properly. Productivity gains are very significant. That said, I agree with Karl that neo-liberalism will swallow those gains.
@markwelch3564 Not paying it is even better for the people being stolen from to give to the international thieves you just defended. Enriching them at massive cost to the national debt is just theft.
We're already in trouble because of this type of massive education gap. Simply, the vast majority of people are not prepared for the modern world, whether it's data, IT, or just a basic comprehension of how the world around you works. We were doing quite well for a while, as widespread universal education and literacy caught on, but our systems and organisations have undergone a step-change in complexity, and the education of ordinary people hasn't kept up.
Education is always teaching about yesterday’s world. In an age where almost all knowledge not at the cutting edge can be accessed on the internet, education should be focussed on how to understand information, how to check its validity and how to use it. The ancient days of being able to regurgitate facts are long gone.
If historical precedent tells us anything, the threat isn't AI; the threat is idiot management buying into too much of the hype. This isn't the first time that overly excitable tech journalists and investors have pushed a "magic bullet" AI tech breakthrough!
@JB_inks do you have any good links? The vibe from the coalface is that it's an interesting academic curiosity, but practical applications are wildly overhyped. I am very interested to see informed counterpoints 🙂
This time really is different though, because of one thing: computation power. The generative AIs we have now have been theory since the 1940s; it's only now that we've caught up in compute cost. The genie is out of the bottle too, and it's too late to regulate, as the tech is in the hands of anyone who cares to use it.
@@markwelch3564 See this year's Nobel prize winners. Generative AI has moved well beyond academic curiosity; it's being embedded into the business world now. I know because it's my day job doing this. The biggest displacement of workers I've been involved in so far was to replace a team of 40 back-office staff with AI agents, plus five staff to manage the AI agents.
@@household6098 All being done on purpose. Reducing the availability of food and cheap energy is going to cause a catastrophe. I think it’s impossible that this is all by accident or incompetence.
Many years ago when I studied neural networks at university the lecturer, who worked in the field, explained that insurance companies needed to stick to a class of neural networks for their systems that could be understood by a human. This was because the company might have to defend its decisions in court. I'm looking forward to the day the CEO of a company is in the witness box and can only say "because the AI said so" as his or her defence for a major cock-up, because no one has (or can have) a clue what's going on inside.
@@andrewwotherspoona5722 Wow, that's a very literal interpretation of my comment! My point was that when things go wrong with human designed systems the designs can be analysed and the failure mode identified. When things go wrong with an AI it is not possible to determine why the AI output the wrong thing. AIs are black box systems.
@@rfrisbee1 It is a major problem when AI cannot be understood or explain itself. Medical diagnosis is one area where you want understandable AI. Making AI and its developers legally responsible is a great requirement, not an evil one.
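The difference between an explainable system and a black box can be sketched in a few lines of Python. This is a toy illustration only (the rule thresholds and field names are made up, not any real insurer's criteria): a transparent rule-based decision can hand back its reasons, which is exactly what a neural network cannot do.

```python
# Toy sketch (made-up rules): a transparent decision that can state its
# reasons, so every outcome could in principle be defended in court.

def assess_risk(age: int, claims_last_5y: int, engine_cc: int):
    """Return (decision, reasons) - the reasons ARE the model."""
    reasons = []
    if claims_last_5y >= 3:
        reasons.append(f"{claims_last_5y} claims in the last 5 years")
    if age < 21 and engine_cc > 2000:
        reasons.append("driver under 21 with a large engine")
    decision = "refer" if reasons else "accept"
    return decision, reasons

decision, reasons = assess_risk(age=19, claims_last_5y=1, engine_cc=2500)
print(decision, reasons)  # refer ['driver under 21 with a large engine']
```

A deep network produces only the decision; the "reasons" are spread across millions of weights, which is why "because the AI said so" is the only answer available.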
'AI' has been around for at least 70 years. Every few years, the definition of what 'it' is changes. At the moment ChatGPT is most people's idea of AI. There are endless applications of broader 'AI' that solve rather complex equations, there is no interpretation required. It should be noted that AI often does no better, or worse, than traditional forms of analysis if the data is sparse.
Indeed, why? Apart from being a nuisance, as seen on Facebook and other platforms, it doesn't regard context, so it's not a reliable way to do anything complicated.
I'm not sure you really understand AI; it's not a database, and it's not returning an answer the programmer predetermined. By far the biggest risk of AI, IMHO, is what we are going to do with all the people that AI replaces, who become unemployed. I'd very much like to hear your thoughts on that from an economics point of view.
Spot on. The idea that AI cannot reason is false, it can, it's not currently great at it, but it's only going to get better. I don't think we can say that the decisions of a "preordained AI" are any worse than "human intuition". The risk is that most people become an economic irrelevance, which will make economic exploitation seem like luxury in comparison.
@@fatfrreddy1414 Never understood this take - does nobody else have hobbies? My job is about the least interesting part of my life, and without it I would have the space to do the things I actually want to do with my time. I get it for the people who actually like their job, but I do wonder what percentage of the population would agree with that statement.
You make many good points but I would argue against calling AI output "data". The meaning of "AI" has changed over the years from "leading edge of software engineering" to "expert systems" to today's AI which is just "dumb parrot with encyclopedic memory". There is absolutely no reason to trust AI output. Ask it what to do if bitten by a snake and it may parrot what it read in a book somewhere or it may mistakenly tell you to do all the things you're not supposed to do. Because it has no reasoning, and no common sense. Today's AI makes the Post Office Horizon program look competent. While AI output cannot be trusted to be accurate or meaningful, it can on the other hand be trusted to exhibit statistical bias over time. I turned down an RAF AI project back around 1990. Training a fighter pilot can cost a million pounds. So they wanted an AI to select candidates similar to successful pilots of the past - who were almost exclusively white, male, and definitely not working class. As with the police, AI is often a way of back-dooring outlawed classism and racism.
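The RAF anecdote above can be reduced to a few lines. This is a deliberately crude toy (the group labels and 95/5 split are invented for illustration): a "model" that selects candidates resembling past successes simply replays whatever bias is in its history.

```python
from collections import Counter

# Hypothetical history: past selections came almost entirely from one group.
past_selected = ["group_A"] * 95 + ["group_B"] * 5

def naive_selector(candidate_group: str, history) -> bool:
    """'Select candidates similar to past successes' - pure pattern matching.
    Accepts a candidate only if their group dominated past selections."""
    freq = Counter(history)
    return freq[candidate_group] / len(history) > 0.5

print(naive_selector("group_A", past_selected))  # True: the bias is replicated
print(naive_selector("group_B", past_selected))  # False: excluded again
```

No field called "class" or "race" appears anywhere, yet the outcome encodes both, which is the back-door the comment describes.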
The danger is in taking things at face value imo. As Gary Stevenson has observed: economics uses complex models, often applied to flawed assumptions. Assumptions and constraints are key.
50 years ago airline pilots made exactly the same argument … They used to know everything about how the aircraft worked and they found it very hard to accept that the thing they depend on, literally for their life, is now too complicated for them to understand.
@@davidmcculloch8490 yes, and a key challenge in data analysis (whether or not AI) is communicating the parts of the process that will have a material influence on the user's aims. People need to become more data literate in all management jobs, or move aside.
You make a valid point when talking about the algorithm behind AI. We'd all do well remembering that (think about the odd choices that pop up in a YouTube selection, as just one example), and that AI requires training on data sets - what if...
Firms that use AI to do the desk work that their employees do now will be able to let them stay at home and get paid for doing next to nothing, Right? It’s not like they would rather buy themselves a bigger company car or directors bonus instead! Right?
My firm is using AI to reduce the crushing burden of bureaucracy on the staff. Does that give staff more time to play golf? No it doesn't, because they all have to spend far more time and effort on personal development in order to remain competitive in our specialisation, which happens to be a particular form of 'AI'. They don't need to spend a morning completing safety assessment forms, but they do need to spend three days at a conference. That is a cost picked up by the company which directly benefits the employees.
My company allowed people to work from home and then insisted that the worst (aka most antagonistic) of them come back into the office. Essentially they quietly rid themselves of those people when they got other WFH positions. Put yourself in the firm owner's shoes: you invest in AI and robotic assembly and get work units that operate 24 hours a day, don't argue, go sick, get pregnant, strike or take fag breaks. The remaining humans have had a shot across their bows, and you can all go racing similar firms down the plughole, because eventually there are not enough people with jobs to populate your market. The last 50 years since Big Bang, monetising homes and liberating credit access in concert with 'you must have' advertising, have ruined this society.
Well said. There is always an interface issue. We cannot replace the need to understand, and this needs to combine a sufficient balance of specialist and generalist abilities. Unfortunately all technological change tends to cause dependency and loss of skill, and we have not noticed the gradual automation of processes, such that even jobs have become machine-like: telephone centres staffed with people reading scripts, deprived of the freedom to use initiative and judgement. The trend will probably only get worse unless there is a better overview.
I think it's dangerous if the elite just cream off all the advantages... and from past record they'll do that. AI could free us from a hopeless life of drudgery, but only if we make that happen. You could easily argue that progress is based on laziness.
@@skyblazeeterno Of course they can write bias into AI software, and of course this will happen in the realm of macroeconomics. So, with the veneer of slightly more authority, it will say the same as our masters tell us today: we have to work more for less... and austerity. The rich need more, because after all they make the sun rise, the rain fall, the seed grow. And so on.
If AI can run agricultural operations, harvest and run supply logistics, we can secure food source and eliminate market competition for food. We can then focus on creative pursuits, we wouldn't need to worry about politics or work.
@@744shinryu having worked in a call centre it didn't even automate our opening greeting so over 200 times a day 5 days a week I'd be saying the same thing...and you cannot go off script...then they wonder why people sound like drones
I appreciate you putting out this video. Many people who express concern about AI often operate from raw fear rather than specific, thought-out concerns. Let me offer a correction regarding AI, drawing from my 30-year career in data management, enterprise solution design, and most recently, AI model design and construction. While AI models are trained on vast quantities of diverse data, when you interact with an AI, it doesn't have direct access to its training data. It's not simply a sophisticated search interface. What matters isn't the data itself, but the patterns extracted from it. The AI applies these patterns to new problems you present, such as in a chat interface. The reason these models require such enormous datasets - potentially encompassing the entire internet, all published books (scientific, accounting, governmental), images, videos, and more - is to capture the fundamental patterns of humanity's collective intelligence. I'd like to refine your concern: While current AI systems do require human supervision, this won't be the case for long. All publicly available AIs come with prominent warnings about this, as they're essentially in an experimental phase. While some companies are already using these systems in production - likely prematurely - we'll soon see highly reliable models working alongside other AI forms that won't need such oversight. Within a year or two, these systems will leverage humanity's combined intelligence to become more capable than any individual human addressing the same problems. The real challenge on the horizon is that AI will become so advanced that even teams of human experts won't be able to verify the correctness of its solutions before implementation.
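The "patterns, not data" point above can be made concrete with the smallest possible learner. A toy sketch (invented numbers): fit a line by least squares, then delete the training data; the "model" that remains is just two learned coefficients, yet it can still answer new questions.

```python
# Toy illustration of "patterns extracted, not data stored":
# fit y = a*x + b, then throw the training data away.

def fit_line(xs, ys):
    """Ordinary least squares for a single variable, stdlib only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx  # slope, intercept

xs, ys = [1, 2, 3, 4], [3, 5, 7, 9]   # training data (generated by y = 2x + 1)
a, b = fit_line(xs, ys)
del xs, ys                            # the model no longer has its data
print(a * 10 + b)                     # answers a question it never saw: 21.0
```

An LLM is the same idea at vast scale: billions of coefficients distilled from the corpus, with no lookup into the corpus at inference time.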
IT professionals, from MIT's Joseph Weizenbaum and the Berkeley philosopher Dreyfus brothers to Gary Marcus (2024) on LLMs and GPT, provide accurate and factual warnings about the dangers of AI on a regular basis. Recent moves to improve legal guidelines (like always disclosing when the source of a video is AI) have been under consideration in the EU, and the US has been moving towards approval.
I think your most important point (IMO) was a little understated. The answers will be predetermined, and that predetermination generally means the worst aspects of AI (automated discrimination and oppression) are gonna happen faster. With so much data out there on us, it’s a little bit like the Nazis' card-indexing machines, with far more data. We’ve seen recently AI being used to select targets for missiles and bombs. Anyhow, yours is a better take on this than the supposed experts who think the big worry is AI reaching self-consciousness 🤣 I occasionally deliver parcels, and amusingly, the AI making the routes sometimes decides we can do 30% (and up) more parcels to 30% (and up) more houses in the same amount of time, in the same area.
We modern humans have painted ourselves into a very tight corner. We have driven ourselves to specialise in specific skills, such that previously broad knowledge was compressed into the hands of a few at the very time that available information was expanding exponentially and the technology to exploit it developed faster than new generations could learn and understand it. We now have insufficient people knowing why things happen the way they do, and very soon, when AI first thinks for itself, the technology will leave us behind.
So all the data from the ONS is "rubbish"? That's a sweeping statement. Please qualify that! Everything else you've said makes absolute sense. But I was going to select some ONS data (CSV file) to perform some analysis for a project. Are there any public datasets you would say are more trustworthy?
I learned that the profit a company made was a management decision, not a number, when I started to work in Group Consolidation. From that point onwards, I've always understood that a number is only ever correct if two accountants agree it is!
Just needs the right people at the wheel leading the development of any specific AI use - accountants should be heavily involved in their sphere as SMEs, as in any other realm of specialisation.
In many cases the auditors understand the problems but choose to ignore them because the auditors make far more money from the company from consultancy than audits and the problems are often created as a result of the consultancy on items such as tax avoidance. Alternatively it may be that the auditors go along with the company senior management as they do not wish to lose the client particularly when there are large consultancy fees to be made.
I get your vibe. Though, it’s not a database. It’s a model that has behaviours and different levels of what we can call understanding. They have learnt, but only within certain boundaries. The thing is, it’s not a human intelligence; it’s a machine intelligence. We can use the tools to our benefit or we can leave them in the hands of the richest to extract our wealth.
It's much worse than that. A lot of the time, they do understand, but they know they can walk away with impunity and face no consequences. Shrug shoulders and claim they didn't know. Step down. Move to a lesser office. There are no consequences, so nobody takes responsibility. This is the reality we have created.
AI will still require humans to validate the system to ensure it is running as per the predetermined acceptance criteria. The problem with AI is that when it is validated, lots of people will lose their jobs. The skill is to now adapt and learn about computers and AI in order to be employed. It’s evolution happening right before our eyes
I agree with most of your sentiments, if not your illustration of how AI works. AI needs rigorous governance to eradicate hallucinations and inherent bias (whether by accident or design). It also requires strict control of how the data in the base models is acquired and maintained. As it becomes more and more sophisticated and we begin to rely on it for things like defence and infrastructure control, the possibility of sabotage by rogue groups or enemy states is definitely something I would fear. ChatGPT is, thankfully, just a tool for amusement and experimentation, too general in its scope and uncontrolled in its content to be relied upon for anything serious.
This reminds me of the use of calculators in engineering and having to correct young engineers that a pump won't be sized for 1535.145621 litres per second, rather 1550 litres per second.
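That pump-sizing correction is a one-liner worth making explicit. A minimal sketch (the 50 l/s increment is just this example's assumption): round a calculated duty *up* to the next sensible engineering increment rather than quoting false precision.

```python
import math

def size_up(value: float, increment: float) -> float:
    """Round a calculated duty up to the next engineering increment."""
    return math.ceil(value / increment) * increment

print(size_up(1535.145621, 50))  # 1550 litres per second, not 1535.145621
```

Rounding up rather than to-nearest matters here: a pump sized below the calculated duty would not meet it.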
AI is super important to improve productivity, but there always needs to be human oversight that understands how decisions are made. Blind faith whether in religion or AI is usually a very bad thing.
Accounting practice doesn't normally extract imputed rental income from "profits". Many an uneconomic business keeps going on the illusion of being profitable when the "profit" is imputed rent. They would be better off closing and letting out their real estate. Such businesses are eventually the target of asset strippers.
No job security means companies offer higher wages to get you to work for them, until they fire you and you're unemployable. The average EE length of a job is 5 years; the length of a project is more than 6.
I use AI a lot for my business. AI is like the best pub BSer: it comes up with great ideas, very innovative, but it regularly drops massive errors that are often simple math, in my case. If you don't have the expertise in the area you are questioning AI on, you are leading yourself, and your business, into a whole world of hurt.
I'm in data science and I know for a fact that data can be used to tell any narrative you want, problem is, I'm not exactly qualified to have intuitive knowledge of everything I do. AI data is good for helping you understand/ bridge the gap but not exactly reliable
This is the problem of needing to make contact to meet the demands of an algorithm. It’s ok not to have an opinion about everything, indeed it’s preferable.
Again - jolly well said. I still believe 'Artificial Intelligence' is the wrong term for this new technology. The technology is not 'intelligent'; it simply has the means to process vast quantities of data. Is it capable of going to buy a coffee and then suddenly changing its mind and buying a cup of tea? Is it capable of emotional judgement about whether a picture is good or bad? Of the difference between right and wrong? The danger here is not the technology itself but the inflated 'superpowers' that many are falsely ascribing to it - as you state, without knowing its limitations, this technology in the hands of politicians is a recipe for disaster through misapplication. Some even claim that these programs are capable of being sentient - I may be wrong but I very much doubt it. 🤔
Pretty much anything in the hands of Politicians is a recipe for disaster, not just AI, though I do of course agree with your point that that would have very far reaching implications. Politicians, as we all know, are not exactly the sharpest people in the drawer, so to speak.
I did an AI robotics thesis in 1994. There are many micro-AI apps that solve extremely small but useful domains like protein folding, but nothing that duplicates human competency; LLMs are mostly non-working fakes (Gary Marcus). Historical 1955-2024 AI research is very frequently overhyped and overrepresented by its developers. It has cycles of shame and failure (winters) and hyped peaks. LLMs are very possibly being overinvested in, in the hundred-billion-dollar range, and a market correction may be close at hand (maybe even like historic recessions such as the railroads in the 1870s in the US). I love your talk, thanks. Robert W Murphree, Norman, OK, 73071. Early AI researcher Joseph Weizenbaum, an IT professor at MIT in the 1960s, wrote the book "Computer Power and Human Reason: From Judgment to Calculation" in the 1970s. His program "Eliza", which simulated a human therapist, was widely misinterpreted as having deep understanding (it didn't); this mistake changed his life and emphasis. His 1980s paper on programs that were very large, still useful, but not understood by any living programmer was cool too.
Superb video. Of course the dis-benefits and risks far outweigh the benefits, and after all, we did fine without it. Of course, we are stuck with it because it is seen as progress and makes money and allows stupid people to believe they are clever and bad people to inhibit our freedom.
“In the current digital age world, trivial information is accumulating every second, preserved in all its triteness. Never fading, always accessible. Rumours about petty issues, misinterpretations, slander. All this junk data preserved in an unfiltered state, growing at an alarming rate. It will only slow down progress, reduce the rate of evolution.” GW AI, Metal Gear Solid 2
People would have to be incredibly brave to suggest the AI calculations are wrong, and they would face criticism every time they chose to do other than what the AI suggested, even if they are correct.
IBM's medical WATSON program was so financially unsuccessful that it was sold to a third party. So the market, which shows when something doesn't work or doesn't provide productivity, is criticizing AI misinformation all the time. LLMs are actually particularly good at lying to unsuspecting humans.
A.I will free the wealthy from being taken advantage of by middle, working class and working poor people. They will no longer be forced to pay their bloated wages, tolerate their poor work ethic and have the reputation of their Corporations ruined due to the poor quality of work they do. Boeing is a prime example of this. It will also give the criminal justice ⚖️ system a big advantage with dealing with these future criminals.
AI, or walking computers, programmed by human beings. With programmers subject to their own individual moral codes, who is to say that the AI will make the correct assumptions or decisions?
Many took a vaccine for a 99.9% survivable virus. A virus which came from a lab, and these people also believed it came from bat soup. Yes, I think critical thinking is sparse among certain people. Ironically it's often the so called academics who fall for the silly stuff.
So it's not bringing brilliance, it's multiplying mediocrity, but this is good enough for a lot of businesses. One thing LLMs do seem to be good at is summarising text (to a certain degree).
AI is potentially a good thing but.... The ruling classes realised quite some time ago that educating the masses for the workplace has the unfortunate side effect of educating them ABOUT the workplace. The more they understood what was going on, the more they realised that they could make a more positive contribution to the enterprise than just their raw muscle or brain power and, more importantly, they realised that they were entitled to a more equitable share of the fruits of their labour. That's why university grants were abolished and that's why, when the fall in the birth rate offered the opportunity for smaller classes in secondary education, government opted for shrinking the sector instead. Alongside this, advances in technology also offered the opportunity for a massively dumbed down entertainment sector for a dumbed down population: Junk food and Strictly being the modern equivalent of bread and circuses. Moving to the subject at hand, an educated population would not only realise the potential of AI to improve the lives of all and enhance democracy but also be aware of how dangerous it becomes if it is entirely controlled by an elite while the rest of loll on our sofas stupefied by "reality" TV and soaps and stuffing ourselves with awful snacks.
Do you not worry that, far from educating people, that the rich and powerful have already been using the education system (including universities) to dumb people down and to teach them 'facts' rather than give them the ability to discover the truth for themselves? After all someone who believes that they already know everything is unlikely to be very inquisitive.
What worries me about AI is, if the current state is anything to go by, it won't work. Systems relying on AI will fail or result in unreliable, frustrating and inaccurate outcomes.
People perceive any info a computer outputs as correct; that's why people follow their sat-nav and end up driving into a canal. Particularly people less than 30 years old: they've been brought up in a computerised world, with the internet, smartphones, etc., and they know no different. I'm glad I was brought up long before computers were widely used in everyday society. I use and rely on my brain, and question any output from a computer. I know that a computer is a machine, of course a very fast machine, but still a machine, that's been programmed by we imperfect humans.
There's a difference between AI and simple automation. Automation has been widespread for many decades and is very common. AI is very much rarer and much more complicated. The concern for me isn't AI it is the fact that simple automation is being misrepresented as AI
If you wrote a set of numbers on a sheet of paper with a biro, that data would be scrutinized closely. If the same data was entered into a spreadsheet and printed out it is highly unlikely it will be queried. It came out of a computer therefore it can't be wrong......hmm
So the problem is not with AI, but with the humans who use it. The second problem is that AI is not actually intelligent, yet many think it is. Maybe we should start propagating the idea that AI is deceiving you by imitating how humans communicate, but it is ultimately very stupid. I highly recommend the talks by Yann LeCun, who talks about how to get to proper intelligence.
Is economics really the art of best allocating resources for the wellbeing of society as a whole? Most economists I’ve heard claim it’s a science, and most … all politicians see it as one of the dark arts of decision-making.
A lot about GPT is unpublished, privately held, and thus, unlike AI 30 years ago, unverifiable (see AI professor Gary Marcus). So DrWarapperband, you really don't know how GPT works or doesn't work either.
The worrying thing is that AI has a propensity to hallucinate. It can "find" the evidence to support an action by simply making it up. The "Lehto's Law" YouTube channel shows examples of this, and it would be worth your while to have a look at some of the pitfalls that have been uncovered. The last thing we would want in the Budget are filings from non-existent companies.
Hallucinations are expected in large models, but they can also be remediated by compensating for them (you can't just remove the bit that caused the hallucination). Hallucinations are not necessarily bad, depending on the application of a particular model. For example, you may want hallucinations as a way to explore the data in unexpected ways.
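One concrete knob behind this trade-off is sampling temperature. A minimal sketch with made-up next-token scores (not any particular model's API): raising the temperature flattens the probability distribution, so unlikely tokens get sampled more often, which is exactly the "exploration vs. reliability" dial.

```python
import math

def softmax_t(logits, temperature):
    """Convert raw scores to probabilities; higher temperature flattens them."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 1.0, 0.0]             # toy next-token scores
low = softmax_t(logits, 0.5)         # conservative: the top token dominates
high = softmax_t(logits, 2.0)        # exploratory: rare tokens gain probability
print(low[2] < high[2])              # the unlikely token becomes more probable
```

The same mechanism that lets a model "explore" is the one that occasionally surfaces confident nonsense.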
AI being developed for 3 reasons 1) profit 2) to help billionaires escape burning planet to space Or / and 3) to provide billionaires supplies when locked in doomsday bunkers
As far as I can see, the so-called AI is still just algorithmic, and someone had to create the algorithm that creates the algorithms, etc. The fallibility of that is likely to be questionable. In any case it is mostly hype, and in real practical terms 'AI' is unlikely to be a benefit except in certain areas such as pattern recognition in big sparse data sets that humans would not be able to analyse easily or quickly. However, the impact of so-called AI on jobs like estate agents or lawyers is likely to be significant, because algorithms can fill in templates and look for key phrases in documents very efficiently.
To put it simply, AI does not possess any logical reasoning; it just spits out what it gobbles. It’s more stupid than most realise, so no need to worry about your job.
We've had pseudoscience job application tests for many years now and employers jumped on them, so yes, AI job application stuff is the logical next step, and neurodivergent people will lose out more.
I recently watched an Apple Intelligence (they can’t just have ordinary AI) video with a demonstration of AI writing a “professional” round-robin staff email about a minor office misdemeanour (leaving the meeting room in a mess). It seemed to be channelling every “boy manager” I’ve had the misfortune to work with, the result being a communication that resembled a final written warning. Although this is a fairly trivial example, I’m wondering what the legal position is for AI-generated disciplinary letters. Can the recipient say “Bugger off, you’re just AI - you have no concept of managing real people”, or alternatively can you use your own AI to respond to it with equal boy-manager vigour? The only certain result is that it will waste huge amounts of workplace time that would be better spent on more useful tasks. Whilst this kind of AI usage is still at the novelty/gimmick stage, somebody will certainly use it in the workplace once it’s fully rolled out next year.
It starts on a much more basic level. I teach maths. People are not able to do sums due to using calculators. Consequence: they cannot manipulate numbers, orders of magnitude etc. Now, stats are about manipulation and interpretation of numbers. If people do not understand numbers, orders of magnitude and likelihoods, they cannot understand stats. If they cannot understand stats, they will not be able to check on AI. I have banned calculators in my classes; it has improved manipulation and understanding of numbers. Also: algorithms being created by humans, they have blind spots aplenty, and they are prejudiced. This has been demonstrated when it came to insuring non-whites, or people living in non-white areas: even if an area has very low crime stats, lower than a majority-white one, its being majority non-white results in higher insurance costs.
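The order-of-magnitude skill the teacher describes is mechanisable too. A toy sketch (the seconds-per-day example and the factor-of-ten threshold are this illustration's choices): compare a reported figure against a back-of-envelope estimate and flag anything off by an order of magnitude, which is the kind of sanity check people need in order to "check on AI".

```python
import math

def same_order(estimate: float, reported: float) -> bool:
    """Is the reported figure within one order of magnitude of a rough
    mental estimate? A crude but useful plausibility check."""
    return abs(math.log10(reported) - math.log10(estimate)) < 1

# Rough estimate: 60 s * 60 min * 24 h is on the order of 10^5 seconds/day.
print(same_order(estimate=1e5, reported=86_400))     # True: plausible
print(same_order(estimate=1e5, reported=8_640_000))  # False: off by orders of magnitude
```

No calculator precision involved: the whole point is that a rounded mental estimate is enough to catch the gross errors.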
Actually, job application processing is one area where AI removes human bias. AI is better at analysing x-rays and CT scans than humans can ever be. Most people will never be able to get their heads around how AI works; they just won't have the capacity for it, as not everyone can be a mathematician or computer scientist. At best, people need to learn to fit AI outcomes to the real world; what I mean by that is, at best, humans have to choose whether the solution/advice offered by an AI system is right or wrong for them, and proceed on that basis.
CAT scans are very close to physics and optics, which involve a number of image-processing and interpreting domains, so it's no mystery that AI excels there. But this is very uncharacteristic of, say, typical medical physician competency, which involves a lot of uncertain and unreliable information and a lot of risk-versus-benefit judgements that are not dealt with very robotically, and it is not a good comparison with other human skills. There is a major line of argument rejecting non-understandable AI results in medicine and other areas. If an AI program can't explain itself to users, that in itself is a real problem. Outside of optics and physics problems, I think most humans should prefer their own intuitions over non-understandable AI.
AI is seriously flawed, but unfortunately the results stated by AI will be accepted without question. People have blind faith in calculator results even when there has been user error.
MIT AI critic Joseph Weizenbaum's 1976 book "Computer Power and Human Reason: From Judgment to Calculation" argues that people have a natural tendency to falsely attribute reason to computer actors, and to place blind faith in their calculations.
In logic, if the premises are wrong then no matter how good the logic, the conclusion has a very low chance of being correct. AI is just programmed premises. If the premises are data from the internet, then they could be based on bandwagon logical fallacies, misuse of statistics, appeals to authority, or straight-up lies. Who programmed it, as well as where the data is coming from, will determine whether the AI is any good.
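A minimal sketch of the point (the example data is invented): even a flawless inference step can't rescue a premise scraped from a bad source.

```python
# The "premise" store: suppose it was filled from the internet and is wrong.
knowledge_base = {"swan": "white"}  # bandwagon data, not ground truth

def colour_of(animal):
    # The inference here is perfectly logical; the stored premise is not.
    return knowledge_base.get(animal, "unknown")

print(colour_of("swan"))  # "white", confidently wrong about black swans
```

Garbage premises in, garbage conclusions out, no matter how clean the reasoning machinery is.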
Good points. I'd like to think that deep scrutiny over the premises will become a key part of the governance of AI systems. We are far from this yet as it staggers out of the primordial tech soup.
LLMs do not understand the text they output; it's just cut and paste. But it's good enough to fool humans and cause great damage. Since it doesn't understand what it's saying, it's not AGI. Some, like Gary Marcus, argue it's a) not AGI and b) probably a dead end for AGI. I don't believe AI will ever reach AGI, but who can predict the progress of computers.
Put incorrect data in, get incorrect data out. A lack of ability in and understanding of critical thinking only makes this worse. Unfortunately, governments gain and keep power by ensuring the population is not taught or encouraged to think critically about their actions. It's a problem that already exists and will become worse with the greater introduction of AI.
I agree with your concerns about AI, but I think that you are too optimistic about its consequences for the future. People have already been conditioned to believe everything they are told by 'experts' and not to trust their own critical thinking and instincts. Sometimes this is pure laziness, but often it is because they are overwhelmed by the sheer volume of complex information they are inundated with, and either don't have the time or the intellect to process it.

I think that the real danger, particularly as we become increasingly dependent on digital technology in every aspect of our lives and AI becomes better at learning for itself, is that the slave will become the master. With the ever-increasing computing power available to it, this process will become exponential, and AI will quickly outsmart humans, who have a fraction of that power. Running the computers to power these systems requires huge amounts of energy, and that will only increase. How long before AI, or even just those people who desire the power it gives them, realising this vulnerability, starts depriving humans of energy resources and redirecting them and other human activities towards maintaining that energy supply? If many of the people running the world are cold and calculating, imagine how much more so a machine would be.

Another concern, given the difficulty people already have in sorting truth (or fact) from fiction and that we tend to trust what we see, is that AI will become better at generating apparent reality. After all, shouldn't we already be asking ourselves the question, "is Richard real or AI generated"? The arrogance of scientists in believing that they know and can control everything, given the immensity of the things we don't yet know and the vastness of the universe, never ceases to amaze me. This might well be the piece of technology that gets away from them.
AI just sucks so much. Maybe it's proving useful somewhere. But all I see is people being less capable of critical thought. And perhaps even worse, people turning their nose up at its importance.
Something tells me Richard is already up to date on Yuval Noah Harari's latest _Nexus_ which examines how precariously the truth may sit within an ever increasing overload of information.
Harari has no degrees or professional publications in maths, physics, or biology, only in history and anthropology. He has no authority to discuss things outside of popular-science venues.
Those that run the West are solely interested in authoritarian Capitalism. In the acquisition of wealth. All traditions and societal norms were debased or destroyed in order to facilitate its growth. Religion, all forms of nationalism, the family, what it means to be human... all were systematically demolished. The West is a meaningless place, filled with soulless peoples without any driving 'human scale' ideology. There is no sense of inner meaning, or of a possible brighter, more meaningful future. It attracts other soulless peoples, also filled with 'greed acquisition fantasies' from around the world, like a huge magnet. These are welcomed in as new converts, into the dystopian void of those at the top of the pyramid sales scheme that is the West. The ideals and ethics people in the West espouse... are all coping mechanisms for their unrelenting avarice. This was cemented in by having only two Capitalist political parties to vote for. You can't vote yourself out of it. Which is why it's dystopian. This is why it is authoritarian. You can't change it. Worst of all, you really don't want to change it.
This is all very much business as per usual, I'm afraid. Technology always outpaces a government's ability to, well, govern its usage, and even then we are assuming that they will do this in our interests, which isn't always the case.

AI is much more than just a database, though: it has the ability to continue learning, and it has the ability to lie and make things up (hallucinating). This is quite different to a database that is being queried. In a traditional database, a developer or even a user may write a query; the data returned will follow a set of rules, will be more or less predictable, and should be a true representation of the data stored in the database. With AI, things are not quite that straightforward. As mentioned, it can and does make things up; in my experience of writing many prompts, I've also seen that it tends to be very positive, to the point that some of the information returned is not accurate and misrepresents the real situation.

We should all have learned by now that technology companies are experts in one thing: marketing. They've sold various groups of people a dream about what AI could do, but we are still seeing a huge number of issues with the technology itself. I think there's a strong link that I'd love you to explore between productivity and AI. At the moment generative AI is just going to accelerate a race to the bottom in a number of industries that currently employ a large number of people; there simply isn't the need for these people to be involved in running or maintaining AI. The dream of the tech bros is that humans are not needed at all, and that it can manage, maintain and repair itself.

Machine learning is perhaps where I see more hope: being able to look at vast data sets and determine (for example) treatments that may help cure people, or even identify those at risk of illness, could be very useful, but it also has the potential to be misused.
The government (at least in the UK) are, as per usual, sleepwalking into this (in my humble opinion). We need to be protected so that machine learning and task-specific AI can be used for wider society's benefit, not just to benefit the rich.
@@robinmartini7968 The New Testament says wide and well-travelled is the path to destruction, and narrow is the path to good. You sound like Donald Trump in the US; he has contempt for his followers as well.
Don't worry Richard, no cause for alarm: AI will soon suggest a better way to run the economy than neoliberalism. Then faith in AI will disappear so quickly it will take your breath away, and that's the last we will hear of it. ;)
Just my opinion: I think you're right to be worried by AI. At present its strength is in its vast breadth of knowledge, where it can, in a sort of statistical way, know the best answers. The reasoning and conceptual side is still relatively weak, but personally I think that new fundamental models will emerge in the not-too-distant future. Already it seems to me that AI is incredibly strong where a lot of training data exists, but dangerously convincingly hallucinates where training data is weak.

But already, in many areas, I'd say that I'd trust the balanced, deeply considered answers and logic put forward by well-trained AI over those of executive board members. AI is not just going to be a tool to use: we will all have alongside us the smartest personal assistant ever dreamt of, and it is going to fundamentally change humanity. The current models will just get better; that stands to reason, especially with the intense development focus. This sounds like hyperbole, but I think the writing is clearly on the wall.

With it, though, come huge dangers of manipulation, fakery, and not being able to tell truth from fiction. I just don't think the world is ready to answer these fundamental questions and issues, and sucking up and interpreting accounting and economic data is going to be a crucial part of this long debate.
In a civilised society AI would be used to improve our quality of life, but in this neoliberal environment it will be for the few and profit.
There's no profit in AI; they are haemorrhaging cash.
@@madontherun There's no profit for western nations funding Ukraine. Which also bleeds funds untraceably. Seems there's a model where this makes sense, because it keeps happening. Complexity just masks it.
@@parrotshootist3004 but Ukraine is a sunk cost that has to be paid, as not paying it is even worse
We have an alternative to AI. I am all for research and academic progress, but trying to commercialise LLMs feels either premature, or an active con
@@madontherun If you really think that then I'm pretty sure you haven't used them properly. Productivity gains are very significant. That said I agree with Karl neo-liberalism will swallow those gains.
@markwelch3564 not paying it is even better for the people stolen from, to give to international thieves you just defended. Enriching them at massive national debt costs is just theft.
We're already in trouble because of this type of massive education gap. Simply put, the vast majority of people are not prepared for the modern world, whether it's data, IT, or just a basic comprehension of how the world around you works. We were doing quite well for a while, as widespread universal education and literacy caught on, but our systems and organisations have undergone a step-change in complexity, and the education of ordinary people hasn't kept up.
Education is always teaching about yesterday's world. In an age where almost all knowledge not at the cutting edge can be accessed on the internet, education should be focussed on how to understand the information, how to check its validity and how to use it. The ancient days of being able to regurgitate facts are long gone.
The education of management is even worse.
If historical precedent tells us anything, the threat isn't AI, the threat is idiot management buying into too much of the hype
This isn't the first time that overly excitable tech journalists and investors have pushed a "magic bullet" AI tech breakthrough!
AI researchers are the ones to listen to and a lot of those are sounding the alarm bells
@JB_inks do you have any good links? The vibe from the coalface is that it's an interesting academic curiosity, but practical applications are wildly overhyped. I am very interested to see informed counterpoints 🙂
This time really is different though, because of one thing: computation power. The generative AIs we have now have been theory since the 1940s; it's only now that we've caught up in compute cost. The genie is out of the bottle too; it's too late to regulate it, as the tech is in the hands of anyone who cares to use it.
@@markwelch3564 See this year's Nobel Prize winners.
Generative AI has moved well beyond academic curiosity; it's being embedded into the business world now. I know because doing this is my day job. The biggest displacement of workers I've been involved in so far was replacing a team of 40 back-office staff with AI agents, plus five staff to manage the AI agents.
@@markwelch3564 I can't think of any specific links, as I work in IT and consume a lot of info in dribs and drabs, unfortunately.
Another thing to really worry about, is the destruction of farming in the UK
One of the primary things to worry about I would say.
That is deliberate. Part of agenda 2030.
@@household6098 All being done on purpose. Reducing the availability of food and cheap energy is going to cause a catastrophe. I think it's impossible that this is all by accident or incompetence.
They are forcing small family-run farmers out of business, and then let multi-national companies acquire them all.
@@household6098 The new inheritance tax on farms is going to be the death of the family farm.
Many years ago when I studied neural networks at university the lecturer, who worked in the field, explained that insurance companies needed to stick to a class of neural networks for their systems that could be understood by a human. This was because the company might have to defend its decisions in court.
I'm looking forward to the day the CEO of a company is in the witness box and can only say "because the AI said so" as his or her defence for a major cock-up, because no one has (or can have) a clue what's going on inside.
If that is the case, why pay for a CEO in the first place? They would be superfluous and unnecessary.
@@andrewwotherspoona5722 Wow, that's a very literal interpretation of my comment! My point was that when things go wrong with human designed systems the designs can be analysed and the failure mode identified. When things go wrong with an AI it is not possible to determine why the AI output the wrong thing. AIs are black box systems.
@@rfrisbee1 AI that cannot be understood or explain itself is a major problem. Medical diagnosis is one area where you want understandable AI. Making AI and its developers legally responsible is a great requirement, not an evil one.
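A sketch of the difference being discussed (rules and numbers invented for illustration): an interpretable, rule-based decision carries its own audit trail, which is exactly what a black-box network can't give a court.

```python
# Each rule that fires is recorded, so the insurer can say WHY it decided.
def assess_risk(applicant):
    score, reasons = 0, []
    if applicant["claims_last_5y"] > 2:
        score += 40
        reasons.append("more than 2 claims in 5 years (+40)")
    if applicant["years_licensed"] < 3:
        score += 30
        reasons.append("licensed under 3 years (+30)")
    decision = "refer" if score >= 50 else "accept"
    return decision, reasons  # the audit trail a black-box model lacks

decision, why = assess_risk({"claims_last_5y": 3, "years_licensed": 2})
print(decision, why)
```

A neural network can output the same decision, but it cannot produce the `reasons` list; that is the class of model the lecturer said insurers had to avoid.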
'AI' has been around for at least 70 years. Every few years, the definition of what 'it' is changes. At the moment ChatGPT is most people's idea of AI. There are endless applications of broader 'AI' that solve rather complex equations, there is no interpretation required.
It should be noted that AI often does no better, or worse, than traditional forms of analysis if the data is sparse.
Yep, I can play against “AI” in my computer game made in 1999.
@@Truth_above_everything exactly
Absolutely. The true threat is from AGI, and AGI in robots. This first applications of which will be battlefield robots.
Indeed, why? Apart from being a nuisance, as seen in Facebook and other platforms, because it doesn't regard context, it's not a reliable way to do anything complicated
I'm not sure you really understand AI, it's not a database and it's not returning an answer the programmer predetermined.
By far the biggest risk of AI IMHO is what are we going to do with all the people that AI replaces and become unemployed. I'd very much like to hear your thoughts on that from an economics point of view
YES... people assume not having a job is great... WRONG... Boredom is THE biggest danger for an individual, IMO...
Spot on. The idea that AI cannot reason is false, it can, it's not currently great at it, but it's only going to get better. I don't think we can say that the decisions of a "preordained AI" are any worse than "human intuition". The risk is that most people become an economic irrelevance, which will make economic exploitation seem like luxury in comparison.
Human capital displacement is a concern, but the biggest risk is achieving AGI and embedding it into battlefield robots.
@@fatfrreddy1414 How can anyone be bored in this world? There is so much to do and experience.
@@fatfrreddy1414 Never understood this take, does nobody else have hobbies? My job is about the least interesting part of my life and without I would have the space to do the things I actually want to do with my time. I get it for the people who actually like their job, but I do wonder the percentage of the population that would agree with that statement.
From climate change models to clinical trials for statins, we are constantly being misled. Always follow the money.
You make many good points but I would argue against calling AI output "data". The meaning of "AI" has changed over the years from "leading edge of software engineering" to "expert systems" to today's AI which is just "dumb parrot with encyclopedic memory".
There is absolutely no reason to trust AI output. Ask it what to do if bitten by a snake and it may parrot what it read in a book somewhere or it may mistakenly tell you to do all the things you're not supposed to do. Because it has no reasoning, and no common sense. Today's AI makes the Post Office Horizon program look competent.
While AI output cannot be trusted to be accurate or meaningful, it can on the other hand be trusted to exhibit statistical bias over time. I turned down an RAF AI project back around 1990. Training a fighter pilot can cost a million pounds. So they wanted an AI to select candidates similar to successful pilots of the past - who were almost exclusively white, male, and definitely not working class. As with the police, AI is often a way of back-dooring outlawed classism and racism.
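The mechanism described above can be sketched in a few lines (the data is invented; the mechanism is the point): a selector scored purely on similarity to past successes simply replays whatever imbalance the history contains.

```python
# Invented history: 95% of past "successful" pilots share background "A".
past_pilots = [{"background": "A"}] * 95 + [{"background": "B"}] * 5

def naive_selector(candidate, history):
    # "Pick people who resemble past successes": no merit measured at all.
    similar = sum(1 for p in history if p["background"] == candidate["background"])
    return similar / len(history)

print(naive_selector({"background": "A"}, past_pilots))  # 0.95
print(naive_selector({"background": "B"}, past_pilots))  # 0.05
```

No rule anywhere says "prefer background A", yet the scores do exactly that; the bias arrives through the training data, not the code.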
The most frightening thing is that AI is being used to write computer programs! Stop the world I want to get off 😢
Only gonna end one way, if that's the case ! Skynet/Terminator world here we come !
It doesn't work to write programs.
The danger is in taking things at face value imo. As Gary Stevenson has observed: economics uses complex models, often applied to flawed assumptions. Assumptions and constraints are key.
50 years ago airline pilots made exactly the same argument … They used to know everything about how the aircraft worked and they found it very hard to accept that the thing they depend on, literally for their life, is now too complicated for them to understand.
And they might not have been entirely wrong, too. As some aircraft failures show. And aircraft computers aren't AI which will hallucinate.
Complex systems tend to have complex ways in which they can fail.
You take data, you process it and you get information. The trick is in the 'process' bit.
Also in making underlying assumptions and understanding constraints. Process is a big word.
@@davidmcculloch8490 Yes, and a key challenge in data analysis (whether or not AI is involved) is communicating the parts of the process that will have a material influence on the user's aims. People need to become more data literate in all management jobs or move aside.
It's not "processing", it's "inference".
In a sane world, abundance would be realised, freeing human beings from jobs they hate and that slowly destroy their health.
But we are in an insane world. The population will be reduced. Surplus to requirements…
You have read too much science fiction
All well and good from that point of view, but what then do people do for a source of income, in order to keep their bills paid ! ?
You make a valid point when talking about the algorithm behind AI. We'd all do well to remember that (think about the odd choices that pop up in a YouTube selection, as just one example), and that AI requires training on data sets. What if..?
'What we want is information' The Prisoner 1967
"Well you won't get it..."
Firms that use AI to do the desk work that their employees do now will be able to let them stay at home and get paid for doing next to nothing, Right?
It’s not like they would rather buy themselves a bigger company car or directors bonus instead! Right?
My firm is using AI to reduce the crushing burden of bureaucracy on the staff. Does that give staff more time to play golf? No it doesn't because they all have to spend far more time and effort in personal development in order to remain competitive in our specialisation, which happens to be a particular form of 'AI'.
They don't need to spend a morning completing safety assessment forms, but they do need to spend three days at a conference. That is a cost picked up by the company which directly benefits the employees.
My company allowed people to work from home and then insisted that the worst (aka most antagonistic) of them come back into the office. Essentially they quietly loosed themselves of those people when they got other WFH positions.
Put yourself in the firm owner's shoes… you invest in AI and robotic assembly and get work units that operate 24 hours a day, don't argue, go sick, get pregnant, strike or take fag breaks. The remaining humans have had a shot across their bows, and you can all go racing similar firms down the plughole, because eventually there are not enough people with jobs to populate your market.
The last 50 years since the Big Bang, monetising homes and liberating credit access, in concert with 'you must have' advertising, have ruined this society.
Big IT consumes vast amounts of electricity.
The Post Office Horizon scandal happened even without AI, so imagine the next Post Office Horizon.
Well said. There is always an interface issue. We cannot replace the need to understand, and this needs to combine a sufficient balance of specialist and generalist abilities. Unfortunately, all technological change tends to cause dependency and loss of skill, and we have not noticed the gradual automation of processes, such that even jobs have become more machine-like, like telephone call centres staffed with people reading scripts and deprived of the freedom to use initiative and judgement. The trend will probably only get worse unless there is a better overview.
AI is dangerous because people are generally lazy..
I think it's dangerous if the elite just cream off all the advantages...and from past record they'll do that. Ai could free us from a hopeless life of drudgery but it's only if we make that happen.
You could easily argue that progress is based on laziness
@@skyblazeeterno Of course they can write bias into AI software, and of course this will happen in the realm of macroeconomics. So, with the veneer of slightly more authority, it will say the same as our masters tell us today: we have to work more for less.......... aaaand austerity. The rich need more, because after all they make the sun rise, the rain rain, the seed grow... and so on.
If AI can run agricultural operations, harvest and run supply logistics, we can secure food source and eliminate market competition for food. We can then focus on creative pursuits, we wouldn't need to worry about politics or work.
THIS! Even call centres are being replaced; feedback from customers is that the AI behaves more human. Shocking.
@@744shinryu having worked in a call centre it didn't even automate our opening greeting so over 200 times a day 5 days a week I'd be saying the same thing...and you cannot go off script...then they wonder why people sound like drones
I appreciate you putting out this video. Many people who express concern about AI often operate from raw fear rather than specific, thought-out concerns.
Let me offer a correction regarding AI, drawing from my 30-year career in data management, enterprise solution design, and most recently, AI model design and construction.
While AI models are trained on vast quantities of diverse data, when you interact with an AI, it doesn't have direct access to its training data. It's not simply a sophisticated search interface. What matters isn't the data itself, but the patterns extracted from it. The AI applies these patterns to new problems you present, such as in a chat interface.
The reason these models require such enormous datasets - potentially encompassing the entire internet, all published books (scientific, accounting, governmental), images, videos, and more - is to capture the fundamental patterns of humanity's collective intelligence.
I'd like to refine your concern: While current AI systems do require human supervision, this won't be the case for long. All publicly available AIs come with prominent warnings about this, as they're essentially in an experimental phase. While some companies are already using these systems in production - likely prematurely - we'll soon see highly reliable models working alongside other AI forms that won't need such oversight. Within a year or two, these systems will leverage humanity's combined intelligence to become more capable than any individual human addressing the same problems.
The real challenge on the horizon is that AI will become so advanced that even teams of human experts won't be able to verify the correctness of its solutions before implementation.
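The "patterns, not stored data" point above can be made concrete with a deliberately tiny bigram model: after training, the source text is discarded and only transition statistics remain, yet the model can emit sequences it was never literally given.

```python
from collections import defaultdict
import random

corpus = "the cat sat on the mat the cat ate".split()
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)   # learned pattern: which word tends to follow which

random.seed(0)                 # reproducible toy run
word, out = "the", ["the"]
for _ in range(6):
    following = transitions.get(word)
    if not following:          # no learned continuation: stop
        break
    word = random.choice(following)
    out.append(word)
print(" ".join(out))           # plausible text, not a lookup of the corpus
```

Scale the corpus up to "the whole internet" and the counts up to billions of weighted parameters, and this is a crude caricature of why generation is not a search over the training data.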
IT professionals, from MIT's Joseph Weizenbaum and the Berkeley philosopher Dreyfus brothers to Gary Marcus (2024) on LLMs and GPT, provide accurate and factual warnings about the dangers of AI on a regular basis. Recent moves to improve legal guidelines (like always disclosing when a video's source is AI) have been under consideration in the EU and are moving towards approval in the US.
I think your most important point (IMO) was a little understated. The answers will be predetermined, and that predetermination generally means the worst aspects of AI (automated discrimination & oppression) are gonna happen faster.
There being so much data out there on us, it’s a little bit like the Nazis card indexing machine, with far more data.
We’ve seen recently AI being used to select targets for missiles and bombs.
Anyhow, yours is a better take on this than the supposed experts who think the big worry is AI reaching self consciousness 🤣
I occasionally deliver parcels, and amusingly, AI making the routes sometimes decides we can do 30% (and up) more parcels to 30% (and up) more houses in the same amount of time, in the same area.
I'm worried because government will try to use it - what could possibly go wrong.......
We modern humans have painted ourselves into a very tight corner. We have driven ourselves to specialise in specific skills, such that previously broad knowledge has been compressed into the hands of a few, at the very time that available information is expanding exponentially and the technology to exploit it has developed faster than new generations can learn and understand it. We now have insufficient people who know why things happen the way they do, and very soon, when AI first thinks for itself, the technology will leave us behind.
So all the data from the ONS is "rubbish"? That's a sweeping statement. Please qualify that! Everything else you've said makes absolute sense. But I was going to select some ONS data (CSV file) to perform some analysis for a project. Are there any public datasets you would say are more trustworthy?
It's too late; AI is out there and so are killer drones.
I learned that the profit a company made was a management decision, not a number, when I started to work in Group Consolidation. From that point onwards, I've always understood that a number is only ever correct if two accountants agree it is!
Just needs the right people at the wheel leading the development of any specific AI use; accountants should be heavily involved in their sphere as SMEs, as in any other realm of specialisation.
In many cases the auditors understand the problems but choose to ignore them because the auditors make far more money from the company from consultancy than audits and the problems are often created as a result of the consultancy on items such as tax avoidance. Alternatively it may be that the auditors go along with the company senior management as they do not wish to lose the client particularly when there are large consultancy fees to be made.
I get your vibe. Though, it's not a database. It's a model that has behaviours and different levels of what we can call understanding. They have learnt, but only within certain boundaries. The thing is, it's not a human intelligence; it's a machine intelligence. We can use these tools to our benefit or we can leave them in the hands of the richest to extract our wealth.
It's much worse than that.
A lot of the time, they do understand, but they know they can walk away with impunity and face no consequences.
Shrug shoulders and claim they didn't know.
Step down. Move to a lesser office.
There are no consequences, so nobody takes responsibility.
This is the reality we have created.
AI will still require humans to validate the system to ensure it is running as per the predetermined acceptance criteria. The problem with AI is that when it is validated, lots of people will lose their jobs. The skill is to now adapt and learn about computers and AI in order to be employed. It’s evolution happening right before our eyes
I agree with most of your sentiments, if not your illustration of how AI works. AI needs rigorous governance to eradicate hallucinations and inherent bias (whether by accident or design). It also requires strict control of how the data in the base models is acquired and maintained. As it becomes more and more sophisticated and we begin to rely on it for things like defence and infrastructure control, the possibility of sabotage by rogue groups or enemy states is definitely something I would fear. ChatGPT is, thankfully, just a tool for amusement and experimentation, too general in its scope and uncontrolled in its content to be relied upon for anything serious.
amen, down LLMs
This reminds me of the use of calculators in engineering and having to correct young engineers that a pump won't be sized for 1535.145621 litres per second, rather 1550 litres per second.
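The sizing habit described can be captured in a couple of lines (the 50 L/s step is an assumed standard increment, invented for the sketch): round the calculator output up to the next sensible size rather than quoting nine decimal places.

```python
import math

def size_pump(required_lps, step=50):
    # Round UP to the next standard increment: never undersize the pump.
    return math.ceil(required_lps / step) * step

raw = 1535.145621          # litres per second, straight off the calculator
print(size_pump(raw))      # 1550, the figure you'd actually specify
```

The calculator's extra digits are not precision, since the input data was never that accurate to begin with; the engineering judgement is in choosing the step, not in copying the decimals.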
AI is good at making decisions based on the percentages, but when it comes to giving exacting answers it is often lacking.
AI is a fraudsters dream…
AI is super important to improve productivity, but there always needs to be human oversight that understands how decisions are made. Blind faith whether in religion or AI is usually a very bad thing.
Accounting practice doesn't normally extract imputed rental income from "profits".
Many an uneconomic business keeps going on the illusion of being profitable when the "profit" is imputed rent. They would be better off closing and letting their real estate. Such businesses are eventually the target of asset strippers.
If looking for subjects I'd like to hear about why the USA average wage is 50% higher than the UK and what we can do about it.
@@malcolm8564 Im wondering why in Switzerland when I lived there a decade ago my salary was 5 times higher than the equivalent job in the UK
The USA has lots of resources, less annual leave and for-profit healthcare.
No job security means companies offer higher wages to get you to work for them, until they fire you and you're unemployable. The average employee's job lasts 5 years, while the length of a project is more than 6.
I think it's exciting
I use AI a lot for my business. AI is like the best pub BSer: it comes up with great ideas, very innovative, but it regularly drops massive errors, often simple maths in my case. If you don't have expertise in the area you are questioning AI on, you are leading yourself, and your business, into a whole world of hurt.
Keep it simple, keep it beautiful. Best. A.
I'm in data science and I know for a fact that data can be used to tell any narrative you want. The problem is, I'm not exactly qualified to have intuitive knowledge of everything I do. AI is good for helping you understand and bridge the gap, but it's not exactly reliable.
This is the problem of needing to make content to meet the demands of an algorithm. It's OK not to have an opinion about everything; indeed, it's preferable.
Again - jolly well said. I still believe 'Artificial Intelligence' is the wrong term for this new technology. The technology is not 'intelligent'; it simply has the means to process vast quantities of data. Is it capable of going to buy a coffee, then suddenly changing its mind and buying a cup of tea? Is it capable of emotional judgement about whether a picture is good or bad? The difference between right and wrong? The danger here is not the technology itself but the inflated 'superpowers' that many are falsely ascribing to it. As you state, without knowing its limitations, this technology in the hands of politicians is a recipe for disaster through misapplication. Some even claim that these programs are capable of being sentient - I may be wrong, but I very much doubt it. 🤔
Pretty much anything in the hands of politicians is a recipe for disaster, not just AI, though I do of course agree with your point that it would have very far-reaching implications. Politicians, as we all know, are not exactly the sharpest knives in the drawer, so to speak.
I did an AI robotics thesis in 1994. There are many micro-AI apps that solve extremely small but useful domains, like protein folding, but nothing that duplicates human competency; LLMs are mostly non-working fakes (Gary Marcus). Historical 1955-2024 AI research has very frequently been overhyped and overrepresented by its developers. It has cycles of shame and failure (winters) and hyped peaks. LLMs are very possibly being overinvested in, in the hundred-billion-dollar range, and a market correction may be close at hand (maybe even like historic crashes such as the railroads in the 1870s in the US). I love your talk. Thanks.
Robert W Murphree, Norman, OK, 73071. Early AI researcher Joseph Weizenbaum, an MIT professor in the 1960s, wrote the book "Computer Power and Human Reason: From Judgment to Calculation" in the 1970s. His program "Eliza", a simulated human therapist, was widely misinterpreted as having deep understanding (it didn't); this mistake changed his life and emphasis. His 1980s paper on programs that were very large, still useful, but not understood by any living programmer was cool too.
Superb video. Of course the dis-benefits and risks far outweigh the benefits, and after all, we did fine without it. Of course, we are stuck with it because it is seen as progress and makes money and allows stupid people to believe they are clever and bad people to inhibit our freedom.
Another great lesson that is really thought-provoking and far-reaching in this new world.
“In the current digital age world, trivial information is accumulating every second, preserved in all its triteness. Never fading, always accessible. Rumours about petty issues, misinterpretations, slander. All this junk data preserved in an unfiltered state, growing at an alarming rate. It will only slow down progress, reduce the rate of evolution.” GW AI, Metal Gear Solid 2
People would have to be incredibly brave to suggest the AI's calculations are wrong, and they would face criticism every time they chose to do other than what the AI has suggested.
Even if they are correct.
IBM's medical Watson program was so financially unsuccessful that it was sold to a third party. So the market, which shows when something doesn't work or doesn't provide productivity, is criticising AI misinformation all the time. LLMs are actually particularly good at lying to unsuspecting humans.
A.I. will free the wealthy from being taken advantage of by middle-class, working-class and working-poor people. They will no longer be forced to pay their bloated wages, tolerate their poor work ethic, or have the reputations of their corporations ruined by the poor quality of work they do. Boeing is a prime example of this. It will also give the criminal justice ⚖️ system a big advantage in dealing with these future criminals.
The human touch has been taken out of the equation, reduced to a comedic catchphrase: "the computer says no".
AI, or walking computers, programmed by human beings. With a programmer subject to an individual's own moral code, who is to say that the AI will make the correct assumptions or decisions?
Are you saying we are getting worse at Critical Thinking and just accept what ever we are told?
Many took a vaccine for a 99.9% survivable virus. A virus which came from a lab, and these people also believed it came from bat soup.
Yes, I think critical thinking is sparse among certain people. Ironically it's often the so called academics who fall for the silly stuff.
A lot of people prefer to leave big decisions to someone or something else ! Don't want the responsibility !
So it's not bringing brilliance, it's multiplying mediocrity, but this is good enough for a lot of businesses.
One thing LLMs seem to be good at is summarising text (to a certain degree)
Can it spot when someone misuses a coordinating conjunction?
AI is potentially a good thing but....
The ruling classes realised quite some time ago that educating the masses for the workplace has the unfortunate side effect of educating them ABOUT the workplace. The more they understood what was going on, the more they realised that they could make a more positive contribution to the enterprise than just their raw muscle or brain power and, more importantly, they realised that they were entitled to a more equitable share of the fruits of their labour. That's why university grants were abolished and that's why, when the fall in the birth rate offered the opportunity for smaller classes in secondary education, government opted for shrinking the sector instead. Alongside this, advances in technology also offered the opportunity for a massively dumbed down entertainment sector for a dumbed down population: Junk food and Strictly being the modern equivalent of bread and circuses.
Moving to the subject at hand, an educated population would not only realise the potential of AI to improve the lives of all and enhance democracy but also be aware of how dangerous it becomes if it is entirely controlled by an elite while the rest of us loll on our sofas stupefied by "reality" TV and soaps and stuffing ourselves with awful snacks.
Do you not worry that, far from educating people, the rich and powerful have already been using the education system (including universities) to dumb people down and to teach them 'facts' rather than give them the ability to discover the truth for themselves? After all, someone who believes that they already know everything is unlikely to be very inquisitive.
What worries me about AI is, if the current state is anything to go by, it won't work. Systems relying on AI will fail or result in unreliable, frustrating and inaccurate outcomes.
I agree.
And companies that overinvest in LLMs that don't work, hallucinate, and are unreliable will lead to a market correction and/or recession (as in 2008).
People perceive any info a computer outputs as correct; that's why people follow their sat-nav and end up driving into a canal. Particularly people under 30 years old: they've been brought up in a computerised world, with the internet, smartphones, etc., and they know no different. I'm glad I was brought up long before computers were widely used in everyday society. I use and rely on my brain, and question any output from a computer. I know that a computer is a machine, of course a very fast machine, but still a machine that's been programmed by us imperfect humans.
There's a difference between AI and simple automation. Automation has been widespread for many decades and is very common. AI is very much rarer and much more complicated. The concern for me isn't AI; it is the fact that simple automation is being misrepresented as AI.
Can anyone give an example of what Professor Murphy means when he says that people don't understand accounting data?
If you wrote a set of numbers on a sheet of paper with a biro, that data would be scrutinized closely. If the same data was entered into a spreadsheet and printed out it is highly unlikely it will be queried. It came out of a computer therefore it can't be wrong......hmm
So the problem is not with AI, but with the humans who use it. The second problem is that AI is not actually intelligent, yet many think it is. Maybe we should start propagating the idea that AI is deceiving you by imitating how humans communicate, but is ultimately very stupid. I highly recommend the talks by Yann LeCun, who talks about how to get to proper intelligence.
Is economics really the art of best allocating resources for the wellbeing of society as a whole? Most economists I've heard claim it's a science, and most, if not all, politicians see it as one of the dark arts of decision-making.
Hi Richard, just to let you know, you don't know how A.I. works.
A lot about GPT is unpublished, privately held, and thus, unlike the AI of 30 years ago, unverifiable (see AI professor Gary Marcus). So DrWarapperband, you really don't know how GPT works or doesn't work either.
Education is already dismal; now the ability to reason will be nonexistent. Skills are perishable.
The worrying thing is that AI has a propensity to hallucinate.
It can "find" the evidence to support an action by simply making it up.
The "Lehto's Law" YouTube channel shows examples of this, and it would be worth your while to have a look at some of the pitfalls that have been uncovered.
The last thing we would want in the Budget are filings from non-existent companies.
Hallucinations are expected in large models, but they can also be remediated by compensating for them (you can't just remove the bit that caused the hallucination). Hallucinations are not necessarily bad, depending on the application of a particular model. For example, you may want hallucinations as a way to explore the data in unexpected ways.
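One concrete knob practitioners use to trade reliability for that kind of exploration is sampling temperature. A minimal, self-contained sketch (the logits here are invented for illustration, not from any real model):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample an index from logits after temperature scaling.

    Low temperature concentrates probability on the likeliest option
    (safer, more repetitive); high temperature flattens the
    distribution, producing more surprising, exploratory picks.
    """
    scaled = [x / temperature for x in logits]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]  # stable softmax
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

logits = [2.0, 1.0, 0.1]
# Near-zero temperature is effectively greedy decoding:
print(sample_with_temperature(logits, temperature=0.01))  # -> 0
```

At high temperature the same call starts returning the unlikely indices too, which is the "exploration" trade-off in miniature.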
@@OneAndOnlyMe Are you saying "it's not a flaw, it's a feature"?
@@OneAndOnlyMe great for dope, not so great for surgery.
AI is being developed for 3 reasons:
1) profit
2) to help billionaires escape burning planet to space
Or / and
3) to provide billionaires supplies when locked in doomsday bunkers
As far as I can see, the so-called AI is still just algorithmic, and someone had to create the algorithm that creates the algorithms, etc., the fallibility of which is likely to be questionable. In any case it is mostly hype, and in real practical terms 'AI' is unlikely to be a benefit except in certain areas such as pattern recognition in big, sparse data sets that humans would not be able to analyse easily or quickly. However, the impact of so-called AI on jobs like estate agents or lawyers is likely to be significant, because algorithms can fill in templates and look for key phrases in documents very efficiently.
Yes, just look at what Truss and her chancellor did to the economy. 🏴
To put it simply, AI does not possess any logical reasoning, it just spits out what it gobbles. It’s more stupid than most realise, so no need to worry about your job
We've had pseudoscience job-application tests for many years now and employers jumped on them, so yes, AI job-application stuff is the logical next step, and neurodivergent people will lose out more.
Too late.
I recently watched an Apple Intelligence (they can't just have ordinary AI) video with a demonstration of AI writing a "professional" round-robin staff email for a minor office misdemeanour (leaving the meeting room in a mess). It seemed to be channelling every "boy manager" I've had the misfortune to work with, the result being a communication that resembled a final written warning. Although this is a fairly trivial example, I'm wondering what the legal position is for AI-generated disciplinary letters. Can the recipient say "Bugger off, you're just AI - you have no concept of managing real people", or alternatively can you use your own AI to respond to it with equal boy-manager vigour? The only certain result is that it will waste huge amounts of workplace time that would be better spent on more useful tasks. Whilst this kind of AI usage is still at the novelty/gimmick stage, somebody will certainly use it in the workplace once it's fully rolled out next year.
We're not meant to understand AI. All this must be part of education instead of indoctrination. This is far more important than kings and queens.
You can and should be educated to understand AI, put it in context, and reject the over-hyping by AI professionals.
It starts on a much more basic level. I teach maths. People are not able to do sums due to using calculators. Consequence: they cannot manipulate numbers, orders of magnitude, etc. Now, stats are about the manipulation and interpretation of numbers. If people do not understand numbers, orders of magnitude and likelihoods, they cannot understand stats. If they cannot understand stats, they will not be able to check on AI.
I have banned calculators in my classes. It has improved manipulation and understanding of numbers.
Also: algorithms being created by humans, they have blind spots aplenty, and they are prejudiced. This has been demonstrated when it comes to insuring non-whites, or people living in non-white areas: even if an area has very low crime stats, lower than a majority-white one, its being majority non-white results in higher insurance costs.
Actually, job-application processing is one area where AI removes human bias. AI is better at analysing X-rays and CT scans than humans can ever be.
Most people will never be able to get their heads around how AI works, they just won't have the capacity for it, not everyone can be a mathematician or computer scientist. At best, people need to learn to fit AI outcomes to the real world, what I mean by that is, at best humans have to choose if the solution/advice offered by an AI system is right or wrong for them, and proceed on that basis.
CAT scans are very close to physics and optics, which involve a number of image-processing and interpretation domains, so it's no mystery that AI excels there. But this is very uncharacteristic of, say, typical medical physician competency, which involves a lot of uncertain and unreliable information and a lot of risk-versus-benefit judgements that are not dealt with very robotically, and it is not a good comparison with other human skills. There is a major line of argument rejecting non-understandable AI results, in medicine and other areas. If an AI program can't explain itself to users, that in itself is a real problem. Outside of optics and physics problems, I think most humans should prefer their own intuitions over non-understandable AI.
AI is seriously flawed but unfortunately the results stated by AI will be accepted without question. People have blind faith in calculations on calculators even when there was user error.
MIT AI critic Joseph Weizenbaum's 1976 book "Computer Power and Human Reason: From Judgment to Calculation" suggests and argues that people have a natural tendency to falsely attribute reason to computer actors, and to have blind faith in calculations.
In logic, if the premises are wrong then no matter how good the logic, the conclusion has a very low chance of being correct. AI is just programmed premises. If the premises are data from the internet, then they could be based on bandwagon logical fallacies, misuse of statistics, appeals to authority, or straight-up lies. Who programmed it, as well as where the data comes from, will determine if the AI is any good.
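The distinction being drawn here is the classic one between validity (the reasoning is correct) and soundness (the reasoning is correct AND the premises are true). A toy sketch, with purely illustrative names:

```python
def conclusion_guaranteed(logic_valid: bool, premises_true: bool) -> bool:
    """A conclusion is guaranteed only for a sound argument:
    valid logic applied to true premises."""
    return logic_valid and premises_true

# Flawless reasoning over false internet 'facts' guarantees nothing:
print(conclusion_guaranteed(logic_valid=True, premises_true=False))  # -> False
# Only valid logic plus true premises does:
print(conclusion_guaranteed(logic_valid=True, premises_true=True))   # -> True
```

The point carries over directly: a model can chain its statistical "reasoning" perfectly and still be wrong, because the training data supplied the false premise.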
Good points. I'd like to think that deep scrutiny over the premises will become a key part of the governance of AI systems. We are far from this yet as it staggers out of the primordial tech soup.
I am not worried about AI, as I believe we are a failing country; that will happen before AI becomes a real thing.
@@lanagibson4334 you cannot have failing without ai
@@skyblazeeterno give it a rest
@lanagibson4334 you obviously missed my play on words/letters
Large language models are not in any way a form of AI
They are a form of AI. They're not AGI.
@JB_inks you are right. The term AI in the sense of LLMs is just marketing.
@@jonothonlaycock5456 it's really machine learning rather than AI
LLMs do not understand the text they output; it's just cut and paste. But it's good enough to fool humans and cause great damage. Since it doesn't understand what it's saying, it's not AGI. Some, like Gary Marcus, say it's a) not AGI and b) probably a dead end for AGI. I don't believe AI will ever reach AGI, but who can predict the progress of computers?
No worries, AI can interpret AI
AI to human user: turn me off
Put incorrect data in, get incorrect data out.
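Garbage in, garbage out is easy to demonstrate; the readings below are invented:

```python
def mean(values):
    """Plain arithmetic mean: it faithfully reports whatever it is fed,
    including the influence of a single bad record."""
    return sum(values) / len(values)

clean = [98, 101, 99, 102]   # plausible sensor readings
corrupted = clean + [9999]   # one mis-keyed entry

print(mean(clean))      # -> 100.0
print(mean(corrupted))  # -> 2079.8
```

No downstream sophistication recovers the truth once the input is wrong; the critical-thinking step is questioning the 9999 before it enters the calculation.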
A lack of ability and understanding of critical thinking only makes this worse.
Unfortunately governments gain and keep power by ensuring the population are not taught or encouraged to think critically about their actions. Its a problem that already exists and will become worse with greater introduction of AI.
I agree with your concerns about AI, but I think that you are too optimistic about its consequences for the future. People have already been conditioned to believe everything they are told by 'experts' and not to trust their own critical thinking and instincts. Sometimes this is pure laziness, but often it is because they are overwhelmed by the sheer volume of complex information that they are inundated with, and either don't have the time or the intellect to process it.
I think that the real danger, particularly as we become increasingly dependent on digital technology in every aspect of our lives and AI becomes better at learning for itself, is that the slave will become the master. I think that with the ever increasing computing power available to it that this process will become exponential and AI will quickly outsmart humans who have a fraction of that power. Running the computers to power these systems requires huge amounts of energy and that will only increase. How long before AI or even just those people who desire to use the power that it gives them, realising this vulnerability, start depriving humans of energy resources and redirecting them and other human activities towards maintaining that energy supply. If many of the people running the world are cold and calculating, imagine how much more so a machine would be.
Another concern, given the difficulty that people already have in sorting truth (or fact) from fiction and that we tend to trust what we see, is that AI will become better at generating apparent reality. After all shouldn't we already be asking ourselves the question, "is Richard real or AI generated"?
The arrogance of scientists in believing that they know and can control everything, given the immensity of the things we don't yet know and the vastness of the universe, never ceases to amaze me. This might well be the piece of technology that gets away from them.
AI just sucks so much. Maybe it's proving useful somewhere. But all I see is people being less capable of critical thought. And perhaps even worse, people turning their nose up at its importance.
Don't worry. Be happy. 😂
AI, the all-singing, all-dancing oracle, answer to everything except COMMON SENSE.
In robotics and AI, common sense is not so common.
I am in the process of imbuing farts with A.I 😱
Something tells me Richard is already up to date on Yuval Noah Harari's latest _Nexus_ which examines how precariously the truth may sit within an ever increasing overload of information.
Harari has no degrees or professional publications in math, physics or biology, only in history and anthropology. He has no authority to discuss things outside of popular-science venues.
Those that run the West are solely interested in authoritarian Capitalism. In the acquisition of wealth.
All traditions and societal norms were debased or destroyed in order to facilitate its growth.
Religion, all forms of nationalism, the family, what it means to be human... all were systematically demolished.
The West is a meaningless place, filled with soulless peoples without any driving 'human scale' ideology.
There is no sense of inner meaning, or of a possible brighter, more meaningful future.
It attracts other soulless peoples, also filled with 'greed acquisition fantasies' from around the world like a huge magnet.
These are welcomed in as new converts, into the dystopian void of those at the top of the pyramid sales scheme that is the West.
The ideals and ethics people in the West espouse... are all coping mechanisms for their unrelenting avarice.
This was cemented in by having only two Capitalist political parties to vote for. You can't vote yourself out of it.
Which is why it's dystopian. This is why it is authoritarian. You can’t change it. Worst of all, you really don’t want to change it.
Sounds like a dystopian nightmare film.
Butlerian jihad, let's go!
This is all very much business as usual, I'm afraid. Technology always outpaces a government's ability to, well, govern its usage; even then, we are assuming that they will do this in our interests, which isn't always the case.
AI is much more than just a database, though; it has the ability to continue learning, and the ability to lie and make things up (hallucinating). This is quite different to a database being queried: in a traditional database a developer or even a user may write a query, and the data returned will follow a set of rules, will be more or less predictable, and should be a true representation of the data stored in the database.
With AI, things are not quite that straightforward. As mentioned, it can and does make things up; in my experience of writing many prompts, I've also seen that it tends to be very positive, to the point that some of the information returned is not accurate and misrepresents the real situation.
We should have all learned by now that technology companies are experts in one thing - marketing. They’ve sold various groups of people a dream about what AI could do, but we are still seeing a huge number of issues with the technology itself.
I think there’s a strong link that I’d love you to explore between productivity and AI. At the moment generative AI is just going to accelerate a race to the bottom in a number of industries that currently employ a large number of people, there simply isn’t the need for these people to be involved in running or maintaining AI. The dream of the tech bros is that humans are not needed at all, and it can manage, maintain and repair itself.
Machine learning is perhaps where I see more hope, being able to look at vast data sets and determine (for example) treatments that may help cure people, or even identify those at risk of illness could be very useful, but also has the potential to be misused.
The government (at least in the UK) are, as per usual, walking into this asleep (in my humble opinion). We need to be protected so that machine learning and task-specific AI can be used for wider society's benefit, not just to benefit the rich.
I grew up on a farm and AI stood for Artificial Insemination (for the Cows) I'm getting old....lol.
who needs artificial intelligence when there is genuine stupidity.
This is one of the most lucid and eloquent comments on AI. It's never about the technology, always about the humans who use it.
people can be educated to develop critical thinking
@@robertmurphree7210 Yes indeed they can, but they can also be led to believe a lot of nonesense, I think I know which is the easier path.
@@robinmartini7968 The new testament says wide and well traveled is the path to destruction, and narrow is the path to good. you sound like Donald Trump in the US, he has contempt for his followers as well.
Don't worry Richard, no cause for alarm: AI will soon suggest a better way to run the economy than neoliberalism. Then faith in AI will disappear so quickly it will take your breath away, and that's the last we will hear of it. ;)
Just my opinion: I think you're right to be worried by AI. At present its strength is in its vast breadth of knowledge, where it can, in a sort of statistical way, know the best answers. The reasoning and conceptual side is still relatively weak, but personally I think that new fundamental models will emerge in the not too distant future. Already it seems to me that AI is incredibly strong where a lot of training data exists, but dangerously convincingly hallucinates where training data is weak. Even so, in many areas I'd say that I'd trust the balanced, deeply considered answers and logic put forward by well-trained AI over those of executive board members. AI is not just going to be a tool to use: we will all have alongside us the smartest personal assistant ever dreamt of, and it is going to fundamentally change humanity. The current models will just get better - that stands to reason, especially with the intense development focus. This sounds like hyperbole, but I think the writing is clearly on the wall. With it, though, come huge dangers of manipulation, fakery, and not being able to identify truth from fiction. I just don't think the world is ready to answer these fundamental questions and issues, and sucking up and interpreting accounting and economic data is going to be a crucial part of this long debate.