The Dangerous Math Used To Predict Criminals

  • Published 24 Jul 2022
  • The criminal justice system is overburdened and expensive. What if we could harness advances in social science and math to predict which criminals are most likely to re-offend? What if we had a better way to sentence criminals efficiently and appropriately, for both criminals and society as a whole?
    That’s the idea behind risk assessment algorithms like COMPAS. And while the theory is excellent, we’ve hit a few stumbling blocks with accuracy and fairness. The data collection includes questions about an offender’s education, work history, family, friends, and attitudes toward society. We know that these elements correlate with anti-social behavior, so why can’t a complex algorithm using 137 different data points give us an accurate picture of who’s most dangerous?
    The problem might be that it’s actually too complex -- which is why random groups of internet volunteers yield almost identical predictive results when given only a few simple pieces of information. Researchers have also concluded that a handful of basic questions are as predictive as the black box algorithm that made the Supreme Court shrug.
    Is there a way to fine-tune these algorithms to be better than collective human judgment? Can math help to safeguard fairness in the sentencing process and improve outcomes in criminal justice? And if we did develop an accurate math-based model to predict recidivism, how ethical is it to blame current criminals for potential future crimes?
    Can human behavior become an equation?
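
One concrete way to see the finding in the Dressel paper cited below: a model built on "a handful of basic questions" needs only a couple of lines of standard tooling. This is a minimal sketch on synthetic data with invented coefficients, not COMPAS or the study's actual model; on the real dataset, both the simple baseline and COMPAS land near 65% accuracy.

```python
# Synthetic sketch: a two-feature logistic "risk score" (age, prior count).
# All data and coefficients below are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
age = rng.integers(18, 70, n)
priors = rng.poisson(2, n)
# Invented ground truth: younger age and more priors raise reoffense odds.
logit = -1.5 - 0.04 * (age - 18) + 0.45 * priors
reoffended = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, priors]).astype(float)
X_tr, X_te, y_tr, y_te = train_test_split(X, reoffended, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print(f"two-feature accuracy: {model.score(X_te, y_te):.2f}")
```
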
    ** ADDITIONAL READING **
    Sample COMPAS Risk Assessment: www.documentcloud.org/documen...
    COMPAS-R Updated Risk Assessment: www.equivant.com/compas-r-cor...
    “The accuracy, fairness, and limits of predicting recidivism.” Julia Dressel. www.science.org/doi/10.1126/s...
    “Understanding risk assessment instruments in criminal justice,” Brookings Institution: www.brookings.edu/research/un...
    “Machine Bias,” Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, ProPublica: www.propublica.org/article/ma...
    “The limits of human predictions of recidivism,” Lin, Jung, Goel and Skeem: www.science.org/doi/full/10.1...
    “Even Imperfect Algorithms Can Improve the Criminal Justice System,” New York Times: www.nytimes.com/2017/12/20/up...
    Equivant’s response to criticism: www.equivant.com/official-res...
    “A Popular Algorithm Is No Better at Predicting Crimes Than Random People,” Ed Yong: www.theatlantic.com/technolog...
    “The Age of Secrecy and Unfairness in Recidivism Prediction,” Rudin, Wang, and Coker: hdsr.mitpress.mit.edu/pub/7z1...
    “Practitioner’s Guide to COMPAS Core,” s3.documentcloud.org/document...
    State v. Loomis summary: harvardlawreview.org/wp-conte...
    ** LINKS **
    Vsauce2:
    TikTok: / vsaucetwo
    Twitter: / vsaucetwo
    Facebook: / vsaucetwo
    Talk Vsauce2 in The Create Unknown Discord: / discord
    Vsauce2 on Reddit: / vsauce2
    Hosted and Produced by Kevin Lieber
    Instagram: / kevlieber
    Twitter: / kevinlieber
    Podcast: / thecreateunknown
    Research and Writing by Matthew Tabor
    / tabortcu
    Editing by John Swan
    / @johnswanyt
    Police Sketches by Art Melt
    Twitter: / eeljammin
    IG: / jamstamp0
    Huge Thanks To Paula Lieber
    www.etsy.com/shop/Craftality
    Vsauce's Curiosity Box: www.curiositybox.com/
    #education #vsauce #crime
  • Science & Technology

COMMENTS • 1K

  • @DemonixTB
    @DemonixTB 1 year ago +2610

    IBM internal presentation slide, circa 1979: "A COMPUTER CAN NEVER BE HELD ACCOUNTABLE, THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION" is the perfect response to any of this. No algorithm should ever decide the fate of who lives and who dies, whose life gets cut by 30 years and whose by 3.

    • @feedbackzaloop
      @feedbackzaloop 1 year ago +101

      Even more so, justice must not be based on probability, computer-calculated or man-accounted.

    • @Mikee512
      @Mikee512 1 year ago +116

      Juries falsely convict a certain % of time.
      Algorithms falsely convict a certain % of time.
      Shouldn't you choose the method that falsely convicts less frequently?
      Or is there something fundamentally important about having people make the decision, even though they falsely convict more often? I don't know the answer, but it's not a cut-and-dry issue, IMO.
      **Whatever the case, I think any algorithms in use by the justice system (government) should be open-source and subject to public scrutiny. This seems like it should be a non-negotiable minimum.**

    • @feedbackzaloop
      @feedbackzaloop 1 year ago +45

      @@Mikee512 An open-source judging algorithm is a disaster, not a non-negotiable minimum! We kind of already have one in the form of written criminal and civil codes, and look at all the loopholes people come up with to escape justice, absolutely legally. Now imagine how simple it would be to reverse engineer the algorithm, predict your own sentence, and based on that commit the crime with maximum profit.

    • @sillyproofs
      @sillyproofs 1 year ago +20

      If we little people can see how nonsensical all this is, why can't the higher-ups?
      I thought they were the more educated...

    • @fedcab4360
      @fedcab4360 1 year ago +6

      @@sillyproofs LMAO

  • @DogKacique
    @DogKacique 1 year ago +511

    That company made a BuzzFeed quiz and is selling it like it's an advanced Minority Report AI.

    • @bow_and_arrow
      @bow_and_arrow 1 year ago +10

      FRRRRR

    • @joshyoung1440
      @joshyoung1440 10 months ago

      ​@@bow_and_arrow for real real real real real

    • @joshyoung1440
      @joshyoung1440 10 months ago

      ​@@bow_and_arrow oh sorry FOR REAL REAL REAL REAL REAL

    • @avakining
      @avakining 3 months ago

      Plus like… the whole point of Minority Report was that those algorithms don’t work anyway

  • @CorvieClearmoon
    @CorvieClearmoon 1 year ago +1097

    FYI - Noom was found to be practicing very shady business behind the scenes. They have been overcharging customers and refusing to allow them to cancel their services. I believe they are currently under investigation. From what I've come to learn, they are actually bragging about their mishandling of services and suggesting other companies do the same. I'd do some digging to see what you can find before accepting their promotions again.

    • @moizkhokhar815
      @moizkhokhar815 1 year ago +46

      Yes, more people should read this comment.

    • @ashlinberman4534
      @ashlinberman4534 1 year ago +49

      I think they made canceling subscriptions easier after complaints, but I couldn't find anything about the overcharging being resolved. They did get a class action lawsuit over it, and all reports seem to be from 2+ years ago, so it might be resolved as well. Not authoritative on either front, btw; this is just from basic research, so you might be able to find better evidence against what I said.

    • @Games_and_Music
      @Games_and_Music 1 year ago +22

      I thought that part of the video really displayed the criminal maths.

    • @thelistener1268
      @thelistener1268 1 year ago +2

      Thanks for the tip!

    • @that_rhobot
      @that_rhobot 1 year ago +39

      I've seen accounts from people who tried Noom's mental health app saying that it pretty much always just recommends dieting, regardless of what you are dealing with. Like, there were people battling anorexia who were being told they were eating too much.

  • @cee8mee
    @cee8mee 1 year ago +482

    I think using an algorithm to look for possible suspects, or the location of evidence, or areas that might require higher security due to a history of criminal behavior is valid. But as soon as you start asking the subject philosophical questions, you've introduced a wild card that makes the algorithm meaningless. I think we can find areas in the justice system for algorithmic programs, but definitely not proprietary and hidden ones. Open source is a must for transparency.

    • @gewurzgurke4964
      @gewurzgurke4964 1 year ago +39

      Any algorithm made for "justice" will reinforce the prejudices of those who make it.
      What law is, what crime is, and what crime prevention should look like are already deeply philosophical questions.

    • @quintessenceSL
      @quintessenceSL 1 year ago +19

      It's a bit more than that, as these same types of tests were/are used in "character profiles" for hiring (I actually had a manager stand behind me and give me answers after I failed the thing for the 5th time. ALL of my references stated I was a great employee. Who ya gonna believe?).
      It is akin to social credit scores and the like: essentially magic smoke to remove accountability from decision-making (and quite possibly to subtly game an algorithm toward a result not mentioned in the stated intent).
      And while claiming the mantle of "science", like many forensic tools it hasn't been tested for falsifiability or even degree of improvement over existing methods.
      It's modern-day snake oil, with salesmen now using computer science as their pitch.
      Run the test on the management of said companies. Let's see how accurate they really are.

    • @Cajek2
      @Cajek2 1 year ago +18

      It’s trying to measure how likely it is that you’ll commit a crime in capitalism. In capitalism it’s a crime to be poor or hungry. And in that sense the algorithm is doing pretty good.

    • @jeffl.8307
      @jeffl.8307 1 year ago +2

      All that stuff you said an algorithm could do, we already have cops doing. But think about it like this: what if the algorithm only comes up with one suspect, or the algorithm leads them to evidence that incarcerates the wrong person?
      An algorithm should never be used for anything other than showing commonalities. For example, say you feed in the name, date, time, and location of arrest for people who were caught with meth, and it turns out they were all arrested within the same mile of each other, even though they were all arrested at different times on different days; well, there's probably a drug den or meth lab nearby.
      Algorithms should only be used to look for patterns.

    • @andrasfogarasi5014
      @andrasfogarasi5014 1 year ago

      @@Cajek2 What the hell are you talking about? Even if we accept for a fact that the entirety of society is structured to enrich a ruling class, being poor wouldn't be a crime. The poor don't cause the rich to become less rich by virtue of existing. Instead, a poor person under such a system would be considered someone whose labour can be easily bought and is thus quite useful. Preventing the poor from working by imprisoning them would be akin to the rich shooting themselves in the foot. And no, prison labour is not profitable. The number of prisoners in the USA is 2.1 million. The value of prison labour per year is $11 billion. This comes out to each inmate producing $5,238 worth of goods and services per year. There is no prison in the developed world which can house a prisoner while spending only $5,238 per year on them. It's clear then that unless someone causes like net $10,000 worth of social damages per year, it does not make purely financial sense to imprison them. And if they do cause a net $10,000 worth of social damage per year, then I do dare say in my humble opinion that they probably *should* go to prison.

  • @Cyberlisk
    @Cyberlisk 1 year ago +203

    We need a law that any algorithm that affects sentences or political decisions must be open source. For me as a computer scientist, that's just common sense and not having that law contradicts every juridical principle in a democracy. Having a black box algorithm influence decisions is literally the equivalent of using investigative results or testimonies without presenting them in court.

    • @mqb3gofjzkko7nzx38
      @mqb3gofjzkko7nzx38 1 year ago +20

      @Lawrence Rogers We might as well have secret laws and secret tax codes too so that those can't be easily gamed either.

    • @zafar0132
      @zafar0132 1 year ago +20

      If they are using a bog standard convolutional neural network, they might not be able to explain the decisions it makes. The US military used them in deciding what drone targets to attack in Pakistan and ended up bombing and killing ordinary people just going about their business. Using these technologies in certain areas with no oversight is just criminally negligent in my opinion.

    • @joshyoung1440
      @joshyoung1440 10 months ago

      This is great but I'm pretty sure the word is judicial

    • @user-yx5ry9rj3z
      @user-yx5ry9rj3z 6 months ago

      Some would argue that's exactly why we don't live in a democracy.

    • @johnmcleodvii
      @johnmcleodvii 26 days ago

      Any AI model needs to be traceable.

  • @PhilmannDark
    @PhilmannDark 1 year ago +64

    I first read about this in the book "Weapons of Math Destruction". A major problem with all of these algorithms is that they can't measure the variables they actually want to observe (like what people think, how stable they are emotionally, what their views, experiences and skills are). So companies use second-hand variables which are often only weakly linked to the problem at hand. Laymen just see "a computer came up with the number after doing some very complex math", which they think means "must be correct since neither math nor computers can be wrong", and they forget the old wisdom "garbage in, garbage out". (A toy illustration of this proxy problem follows this thread.)

    • @garronfish8227
      @garronfish8227 1 month ago

      I'm sure more frequent criminals will work out how to answer the questions in the best way. The system seems flawed.
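
A toy illustration of the proxy problem described in this thread, with everything synthetic: true behavior is identical across two groups, but the recorded label depends on how heavily a "neighborhood" is policed, so the model's scores split by group even though it never sees the group at all.

```python
# Synthetic sketch of a second-hand (proxy) variable carrying bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, n)                    # protected attribute (withheld)
neighborhood = np.where(rng.random(n) < 0.8, group, 1 - group)  # 80%-aligned proxy
offends = rng.random(n) < 0.3                    # identical real behavior
# Biased measurement: offenses in neighborhood 1 are caught far more often.
rearrested = offends & (rng.random(n) < np.where(neighborhood == 1, 0.7, 0.3))

X = neighborhood.reshape(-1, 1).astype(float)    # the model only sees the proxy
scores = LogisticRegression().fit(X, rearrested).predict_proba(X)[:, 1]
for g in (0, 1):
    print(f"group {g}: mean predicted risk = {scores[group == g].mean():.3f}")
```

Both groups offend at exactly 30% here, yet the printed risk scores differ: a biased label in, a biased score out.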

  • @stevenboelke6661
    @stevenboelke6661 1 year ago +478

    There's no way that this machine wasn't trained with data about actual convictions and suspect info. Therefore, the algorithm could at best only accurately replicate justice as it has been done, not as it should be.

    • @quarepercutisproximum9582
      @quarepercutisproximum9582 1 year ago +28

      Dang, that's... a *really* good point. I hadn't thought of that. But, who could say what it should be? How would the creator of the algorithm decide what qualities to select for? I'm not sure such a thing is possible, while still working under the supposition that people lie for their own benefit

    • @andershusmo5235
      @andershusmo5235 1 year ago +25

      I was thinking the same thing. Algorithms aren't necessarily the objective oracles we commonly think of them as. An algorithm making predictions based on historical data is bound to replicate that data. An algorithm not based on historical data relies on speculation in some form or to some degree, and will reveal (or worse, hide) biases and assumptions on the part of whoever designed the algorithm.
      Like Steven stated so well, an algorithm trained on the data we have will merely replicate justice as it has been done so far, not change it. An algorithm thus only serves to obfuscate issues in the justice system behind a veil of infallibility and inaccountability.

    • @pXnTilde
      @pXnTilde 1 year ago +10

      Well, it probably wasn't trained at all. It's not a neural network. It's possible the coefficients were tuned to match historical decisions, and your point is very valid. However, if it's true that it's simply reflecting what has happened, then getting rid of it would return us to ... the exact thing it was doing.

    • @tweter2
      @tweter2 1 year ago +5

      No, the machine algorithms are used at the research level. Studies are done on past convictions to look for common denominators. Researchers use machine learning to look for these correlations. Once a stronger correlation is established, the item can be considered for a risk assessment. Risk assessments are ultimately a set of items that show stronger correlations. (A sketch of this item-selection step follows this thread.)

    • @NotQuiteGuru
      @NotQuiteGuru 1 year ago +3

      You're correct in your initial assessment, but I think you're incorrect in your last. The algorithm does not predict or force "justice". It does NOT dictate a judge's sentence, or whether the person is guilty of a crime or not. It merely reports its best guess for the likelihood of recidivism. By your reasoning (if I'm correctly understanding your meaning, that is), it could _"at best only accurately"_ determine the chance for recidivism _"as it has been done."_ There is no recidivism _"as it should be."_ It is guessing possible futures based on historical data, plain and simple. It is STILL the responsibility of the judge to set a sentence... mind you, for someone who has already been convicted of the crime.
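
A rough sketch of the item-selection step described in this thread, with made-up items and a synthetic outcome; nothing here comes from an actual instrument. The procedure is simply: correlate each candidate questionnaire item with the observed label and rank.

```python
# Made-up illustration of correlation-based item selection.
import numpy as np

rng = np.random.default_rng(2)
n = 2_000
items = {
    "prior_arrests":    rng.poisson(2, n),
    "age_first_arrest": rng.integers(12, 40, n),
    "feels_bored":      rng.integers(0, 5, n),   # Likert-style answer
    "job_stability":    rng.integers(0, 5, n),
}
# Synthetic outcome: only prior_arrests actually moves the odds here.
y = (rng.random(n) < 0.10 + 0.05 * np.minimum(items["prior_arrests"], 8)).astype(float)

ranked = sorted(items, key=lambda k: abs(np.corrcoef(items[k], y)[0, 1]), reverse=True)
for k in ranked:
    print(f"{k:17s} r = {np.corrcoef(items[k], y)[0, 1]:+.3f}")
```

With real study data, the top-ranked items would become the instrument; with noisy or biased labels, the ranking inherits that noise and bias.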

  • @SupaKoopaTroopa64
    @SupaKoopaTroopa64 1 year ago +67

    Using AI to predict future crimes is an extremely dangerous idea. If you give an AI access to currently available crime data and optimize it to predict future crimes, what you are actually doing is asking it to predict who the criminal justice system (with all of its biases) will find guilty of a future crime. It gets even worse when you feed the AI data from crimes that it predicted. The AI can now learn from its past actions and further 'fine tune' its predictions by looking at what traits are more likely to lead to a guilty conviction, and focus its predictions on people with those traits. This leads to a feedback loop where the AI discovers a bias in the justice system and exploits that bias to improve its "accuracy," leading to the generation of more crime data which further reinforces its biases. (A toy simulation of this loop follows this thread.)
    Don't even get me started on what could happen if we use an AI powerful enough to realize that it can 'influence' its own training data.

    • @diceblock
      @diceblock 1 year ago +4

      That's alarming.

    • @buchelaruzit
      @buchelaruzit 1 year ago +3

      Exactly, and it very quickly starts sounding like eugenics.
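
A toy simulation of the feedback loop described in this thread, with all numbers invented: two groups offend at exactly the same rate, but the group the model flags more gets scrutinized more, so its offenses are recorded more often, and each "retraining" on those records widens the gap.

```python
# Invented numbers throughout; this sketches the mechanism, not any real system.
import numpy as np

rng = np.random.default_rng(3)
N = 100_000
TRUE_RATE = 0.30                    # identical real behavior in both groups
p_flag = {"A": 0.30, "B": 0.35}     # model starts with a small bias against B

for gen in range(5):
    for g in ("A", "B"):
        offends = rng.random(N) < TRUE_RATE
        flagged = rng.random(N) < p_flag[g]
        # Offenses only enter the data when scrutiny catches them.
        caught = offends & (rng.random(N) < np.where(flagged, 0.95, 0.05))
        # "Retrain": the next flag rate tracks the biased recorded rate.
        p_flag[g] = min(0.95, 4.0 * caught.mean())
    print(f"gen {gen}: flag rate A={p_flag['A']:.2f}  B={p_flag['B']:.2f}")
```

The printed gap between A and B grows every generation even though the underlying behavior never differs.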

  • @TheVaryox
    @TheVaryox 1 year ago +294

    Company: "yea you should sentence him harder, and I won't tell you why I think that"
    Judges: "eh, good enough"
    Man, if trade secrets get prioritised over a citizen's right to a fair trial, seriously, wtf. This is trial by crystal ball.

    • @tweter2
      @tweter2 1 year ago +5

      Research shows sentences are longer in the afternoon or if it's nice weather outside.

    • @jeffreykirkley6475
      @jeffreykirkley6475 1 year ago +8

      Honestly, why do we have trade secrets as a protected thing? If no one can know the truth about it, then why should we even agree to its use/consumption?

    • @alperakyuz9702
      @alperakyuz9702 1 year ago +1

      @@jeffreykirkley6475 Well, if you spent millions of dollars on developing an algorithm to gain an edge over the competition, would you publish the information freely so that your competition can imitate it for free?

    • @ipadair7345
      @ipadair7345 1 year ago +7

      @@alperakyuz9702 No, but the government (courts especially) shouldn't use an algorithm whose workings nobody except the company knows.

    • @legendgames128
      @legendgames128 1 year ago

      @@ipadair7345 One which the company could use to suppress those who don't like them, perhaps. Or if they are working with the government and the media, we essentially get political opponents being sentenced. In this one, it merely predicted the rate of recidivism. In the one used to actually punish criminals, it could be used to punish political opponents while still being guarded as a trade secret.

  • @imaperson1060
    @imaperson1060 1 year ago +38

    This is assuming that nobody lies and gives answers they know will lower their score.

    • @fetchstixRHD
      @fetchstixRHD 1 year ago +8

      Quite possibly, that may be why the girl got a higher score than the guy. The guy probably knew better and thought ahead about how the questions might be taken, whereas the girl probably wasn't calculating at all.

    • @jmodified
      @jmodified 1 year ago +2

      Hmmm, if I have no financial concerns, is it because I'm independently wealthy or because I know I can always rob a convenience store if I need cash? Probably best to answer "sometimes" on that one.

  • @Oxytail
    @Oxytail 1 year ago +226

    The fact that many of these questions seem like what you'd ask a person whilst trying to diagnose them with certain mental illnesses or neurodivergencies is disgusting, let alone the part where these questions are answered with no context or nuanced conversation on the subject.
    "Do you often feel sad?"
    The answer: "Yes"
    The algorithm's thoughts: "this person has nothing to live for and might commit a crime because they don't fear losing their life, their crime and answers indicate they'd be more likely to break the law again"
    The reality/nuance: "Yes, my mom died of cancer 4 months ago and I've felt down ever since. She helped me keep my life in check, and without her I completely forgot to get my car's documents renewed, since she always reminded me to do it; I still lived with her and the mail was received by her"
    It's SO easy for any answer to mean the complete opposite if you don't allow someone to explain the reason for their emotion. Algorithms and AIs and machines in general should never be in charge of judging people, because they do not, and cannot, guess the nuance behind actions and feelings. It's ludicrous to me that this is even a thing.

    • @DanGRV
      @DanGRV 1 year ago +36

      Using that same question:
      "Do you often feel sad?"
      "No"
      "The subject displays shallow affect; more likely to have antisocial tendencies."

    • @HoSza1
      @HoSza1 1 year ago +2

      First off, algorithms don't think anything; they are just not able to, AI included. It's the people who create the algorithms who are ultimately making the decisions. Second off, there may be a correlation between mental state and the chance of committing a crime, so why not test for it? What would *you* ask if your job was to decide whether a given suspect was about to commit crimes repeatedly or not?

    • @unliving_ball_of_gas
      @unliving_ball_of_gas 1 year ago +3

      @@HoSza1 What would I do? Do a nuanced, detailed, personal psychological assessment and then decide. But even then, you can never understand 100% of someone's thoughts, even if you were given years to do it. So the question becomes: SHOULD we even try to determine recidivism, or should we just treat everyone equally regardless of their past, because everyone can change?

    • @HoSza1
      @HoSza1 1 year ago

      @@unliving_ball_of_gas I agree that in an ideal world where resources are unlimited we could do that. Your other question is indeed more difficult to answer, but I think that investing energy in order to reduce the chance of recurring criminal tendencies would pay off in the long run.

    • @noahwilliams8996
      @noahwilliams8996 1 year ago

      Computers can be programmed to understand emotions. That was one of the things Turing proved about them.

  • @Codexionyx101
    @Codexionyx101 1 year ago +67

    You'd think that if we were going to recreate Minority Report, we'd at least try to do a good job at it.

    • @orlandomoreno6168
      @orlandomoreno6168 1 year ago +6

      This is more like Psycho Pass

    • @tweter2
      @tweter2 1 year ago +1

      There is a lot of "minority report" in the sex offender world. For example, in Minnesota every such felon is given a risk assessment at the end of their jail sentence to determine if they need to be civilly committed to treatment. Sex offender assessments basically estimate the probability of reoffending in the next five years. If you are labeled as higher risk, you are often given extra treatment / civil commitment time.

  • @joaquinBolu
    @joaquinBolu 1 year ago +94

    This brings back memories of the Psycho-Pass anime, where an AI computer decided who was a threat to society even before they committed a crime. The whole society was ruled by this tech without questioning it, even cops and law enforcers.

    • @feffy380
      @feffy380 1 year ago +11

      It wasn't even AI. It was the brains of other psychopaths in jars.

    • @aicy5170
      @aicy5170 1 year ago

      course?

    • @tweter2
      @tweter2 1 year ago +1

      Oh, by no means is this all "tech." I've done paper and pencil risk assessments that then get shared with courts / probation.

  • @felipegabriel9220
    @felipegabriel9220 1 year ago +36

    Those algorithms sound literally like the Sibyl System in the Psycho-Pass anime, lol. Next step we get a social credit score :D

    • @sirswagabadha4896
      @sirswagabadha4896 1 year ago +3

      In a capitalist world, your credit score is pretty much already your social credit score. But of course, some countries go even further than that already...

    • @estebanrodriguez5409
      @estebanrodriguez5409 4 months ago

      @@sirswagabadha4896 I was about to answer the same thing

  • @notoriouswhitemoth
    @notoriouswhitemoth 1 year ago +91

    "determined by the strength of the item's relationship to person's offense recidivism"
    I was gonna say there was no way those coefficients weren't racist, and the results bear that out. It's almost like predictive algorithms are really good at perpetuating self-fulfilling prophecies.

    • @desfortune
      @desfortune 1 year ago +1

      AI and the like just act on the data you provide. If you provide data that contains racist biases, the program will use them. AI is not intelligent; it does what you teach it to do. So as long as faulty humans insert faulty data, most of the time without realizing it, you are not gonna solve anything lol

  • @awesomecoyote5534
    @awesomecoyote5534 1 year ago +512

    The worst kinds of judgements are judgements made by someone who can't be held accountable if they are wrong.
    Judgements that determine how many years someone spends in prison should not be decided by an unaccountable AI.

    • @Klayhamn
      @Klayhamn 1 year ago +23

      Humans that determine it aren't accountable either.
      In fact, the people who design or manage the systems of law and order rarely, if ever (and most likely never), are held accountable for the decisions they make.
      So, at least on this count, it makes no difference whether we use AI or not.
      Instead, what does matter is how good it is at predicting what it claims to predict.

    • @prajwal9544
      @prajwal9544 1 year ago +10

      But algorithms can be changed easily and made better. A biased judge is worse

    • @soulsmanipulatedinc.1682
      @soulsmanipulatedinc.1682 1 year ago +6

      Should we desire to hold someone accountable?
      Sorry. It's just that, if we need to hold someone accountable for wrong judgment, I feel that we would have already failed.
      I mean, the option to hold someone accountable isn't a means to correct someone's judgment, but instead to control a person's judgment. An algorithm always has perfectly controlled judgment, so, like...I don't see the problem here?
      I mean, yeah, this could be implemented horribly. However, the base idea would theoretically work.

    • @schmarcel4238
      @schmarcel4238 1 year ago +5

      If it is a machine learning algorithm, it can be punished for mistakes, thus be held accountable. And it will then try not to make the same mistakes again.

    • @soulsmanipulatedinc.1682
      @soulsmanipulatedinc.1682 1 year ago +3

      @@schmarcel4238 I thought about that as well, however, that may cause the program to develop harmful biases that we didn't intend.

  • @chestercs111
    @chestercs111 1 year ago +17

    This reminds me of the study James Fallon did on psychopaths. He analyzed brain scans of known psychopaths and found that all their brains showed similar results. Then, during brain scan testing he did on himself and his family, he found that one of the brains matched that of a psychopath. He thought someone at work was playing a joke on him, but it turned out to be his own brain. This shows that it's more than just how your brain is built that makes you a psychopath; however, those whose scans match may be more susceptible to becoming one if certain conditions are met.

  • @andrasfogarasi5014
    @andrasfogarasi5014 1 year ago +11

    If you want to develop an effective method for measuring recidivism, here's the plan:
    Step 1: Make a law requiring all people to buy liability crime insurance. Under the terms of this type of insurance, whenever the client commits a crime, the insurance agency pays for the damages caused and the client is charged nothing.
    Step 2: Wait 2 months.
    Step 3: Base prison sentences on people's insurance rates.
    Insurance companies under this system have a financial incentive to create an effective system for predicting future criminal behaviour and base their liability crime insurance rates on that. As such, the insurance rates become accurate predictors of future criminality. Of course you could argue that this system will cause repeat offenders to have such incredibly high insurance rates that they have no reasonable way of ever paying them, thus making them unable to buy liability crime insurance. Fret not, for I have a solution. Execution. This will drop their rates to precisely $0.
    Thank you for listening to my very own dystopia concept presentation.

    • @michaellautermilch9185
      @michaellautermilch9185 1 year ago

      You're just shifting who builds the models and asking insurance companies to be the ones building the black boxes. Yes, insurance companies do have people who build black box algorithms too, but they will basically do the same thing.
      Actually your plan has a massive flaw: insurance premiums don't only include measures of risk, but also multiple other business considerations. They want to sell more policies after all! So now you would have the justice system being partially influenced by some massive insurance company's 5 year growth plan. Not a great idea.

  • @ElNerdoLoco
    @ElNerdoLoco 1 year ago +94

    I'd scrawl, "I plead the 5th" over every question. I mean, you have the right to not be a character witness against yourself too, and how can you tell if you're incriminating yourself with some of these questions? Hell, just participating while black seemed incriminating in one example.

    • @o0Donuts0o
      @o0Donuts0o 1 year ago +3

      Not that I agree with software being used to predict potential future criminal activity, but isn't this software used after judgment is rendered, and only to determine the sentencing term?

    • @pXnTilde
      @pXnTilde 1 year ago +7

      Seriously, this test was used during sentencing, which means there was absolutely no obligation whatsoever for him to complete that test. Remember, too... _he is guilty of his crime_ The judge could have easily decided on the same exact sentence regardless of the algorithm. In fact, often judges have already decided the sentence before hearing the arguments at sentencing.

  • @KenMathis1
    @KenMathis1 1 year ago +88

    The fundamental problem with this approach is that generalities can't be applied to an individual, and these automated approaches to crime prediction only rely on generalities. They are a codification into law of biases and stereotypes.

    • @mvmlego1212
      @mvmlego1212 1 year ago +9

      Well-said. Even if the predictions are statistically valid, they're not individually valid.

    • @luisheinle7071
      @luisheinle7071 1 year ago +1

      @@mvmlego1212 yes, it doesn't matter if they are statistically correct because it says nothing about the individual

  • @grapetoad6595
    @grapetoad6595 1 year ago +16

    The problem is the focus on punishment. I.e., we think you might commit a crime again, so you should be punished more for your potential future crime.
    If instead it was built on attempts to rehabilitate, and decided who was most in need of support to avoid recidivism, this would be so much better.
    The algorithms are a problem, but what's worse is why they are able to cause a problem in the first place.

    • @fetchstixRHD
      @fetchstixRHD 1 year ago +2

      Agreed. There's a whole separate discussion on whether punishment should be appropriate, but regardless getting punished for something you haven't done (or attempted to do) is pretty unfair.

    • @michaellautermilch9185
      @michaellautermilch9185 1 year ago

      No this is backwards. Punishment needs to be proportional to the crime, not to the likelihood of rehabilitation. With your mindset, someone could be rehabilitated for virtually anything, regardless of their actions, if they posed a future risk.

    • @jeremyfarley3872
      @jeremyfarley3872 4 months ago

      Then there's the difference between punishment and rehabilitation. They aren't the same thing. Are we sending someone to prison for ten years because we want to hurt them or because we want to teach them to be a productive member of society?

  • @airiquelmeleroy
    @airiquelmeleroy 1 year ago +16

    Mathematically, the problem is preeeetty obvious. The number of people that have only committed 0 to 1, or maybe 2 crimes, is astoundingly massive. Those that have committed 4 or more have committed MANY more than 4, usually around the hundreds if we take into account the number of times they got away with it before being caught.
    This means that while one group (the people that have committed many, many crimes) have fairly similar profiles or data points between each other, the other group is literally *everyone* else.
    So picture this: if the algorithm determines that 90% of criminals wear blue pants, which account for like 10% of the population, then the algorithm will happily mark any blue-pants-wearing citizen a "potential criminal", despite there being thousands more blue-pants-wearing innocent people than total criminals overall. (A short Bayes calculation after this thread makes these numbers concrete.)
    While also completely making invisible any criminal that wears white pants, or worse, chooses to wear white pants to avoid long sentences.
    The second problem: petty crimes tend to be done by normal people, so almost any person that commits a crime is "likely" to commit another, since the algorithm will find the pattern "all these criminals are normal people, therefore any normal person could be a criminal!" Way to go, black box...

    • @TheEnmineer
      @TheEnmineer 1 year ago +2

      For real, it's a clear misunderstanding of the field of statistics. Though the interesting question is: how do we know which criminals who have committed fewer than 4 crimes will go on to commit more than 4? After all, this is supposed to be an algorithm to predict (not just detect) recidivism; pointing at something that's clearly already recidivism isn't what it's supposed to do.

    • @truthboom
      @truthboom 1 year ago

      it needs neural network training

    • @ichigo_nyanko
      @ichigo_nyanko 1 year ago +4

      @@truthboom that will just reinforce biases already present in the justice system, like racism and sexism.
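
The blue-pants example in this thread is the base-rate fallacy, and a few lines of Bayes' rule make it concrete. The two conditional numbers come from the comment; the 1% offender base rate is an assumed figure for illustration.

```python
p_criminal = 0.01             # assumed prior: 1% of people are offenders
p_blue_given_criminal = 0.90  # "90% of criminals wear blue pants"
p_blue = 0.10                 # blue pants worn by ~10% of the population

# Bayes' rule: P(criminal | blue) = P(blue | criminal) * P(criminal) / P(blue)
p_criminal_given_blue = p_blue_given_criminal * p_criminal / p_blue
print(f"P(criminal | blue pants) = {p_criminal_given_blue:.0%}")   # 9%
```

So flagging everyone in blue pants would be wrong about 91% of the time, exactly as the comment argues.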

  • @DeJay7
    @DeJay7 1 year ago +21

    "Thanks for watching"
    No, thank you for making all of these videos, Kevin. I love every single one of your videos, everything you do is great.

  • @themacocko6311
    @themacocko6311 1 year ago +98

    I don't care if it works 100%. There is 0 right to punish anyone for acts that have not been committed.

    • @taodivinity1556
      @taodivinity1556 1 year ago +6

      Yet if a time ever comes when it really works 100% of the time, the fact stands that if you ignore the future crime, somebody will suffer; so perhaps rather than a punishment, a pre-emptive rehabilitation might be the compromise.

    • @quarepercutisproximum9582
      @quarepercutisproximum9582 1 year ago +5

      Exactly my problem with it. Present punishment should not be allocated based on one potential future (whether "punishment" deserves a place of its own right- outside of rehab- is its own discussion). There will always be variables that may prevent someone from acting on an intention they have to do one thing or the other; to push any forceful action upon a party before they have done anything is a path to thoughtcrime, which is less than a step away from a total lack of real freedom

    • @truthboom
      @truthboom 1 year ago

      @@taodivinity1556 Future crimes happen because of past injustices like bullying or racism. If there were no injustice, there would be no crime in the future.

    • @taodivinity1556
      @taodivinity1556 1 year ago

      @@truthboom So are you saying crime is born out of crime? Then how did the crimes of bullying and racism happen? Was there another crime before them? I think you're honestly oversimplifying the process; humans are way more complex than that. There is always a beginning, one that happens for a reason, which may not come from exterior malice at all.

    • @taodivinity1556
      @taodivinity1556 1 year ago +1

      @NatSoc Kaiser Then change it, I don't know what else to tell you, haha. It isn't working to keep society safe.

  • @epiren
    @epiren 1 year ago +36

    I'm sad that you didn't cover retrophrenology, where you create bumps on people's heads until they acquire the personality traits you want. ;-)

    • @josephsimmons1232
      @josephsimmons1232 1 year ago +2

      GNU Terry Pratchett

    • @epiren
      @epiren 1 year ago +2

      @@josephsimmons1232 I read it in a novel by Simon R. Green called "Tales From The Nightside"

    • @josephsimmons1232
      @josephsimmons1232 1 year ago +1

      @@epiren Oh cool. Pratchett did the same gag in 1993 with "Men At Arms."

  • @moizkhokhar815
    @moizkhokhar815 1 year ago +14

    Noom has been involved in some controversy recently with a lot of complaints of their free trials being misleading and subscriptions being very hard to cancel. And some of their diets were also triggering eating disorders apparently

  • @zncvmxbv4027
    @zncvmxbv4027 1 year ago +3

    It's basically a Myers-Briggs test. But the only way to correctly do one of these is to have multiple people who know you do one about you and compare their results to yours. After correlating the data you get a much more accurate picture.

  • @bonbondurjdr6553
    @bonbondurjdr6553 1 year ago +19

    I love those videos man, very thought-provoking! Keep up the great work!

  • @jampersand0
    @jampersand0 1 year ago +32

    Never expected there to be what sounds like the Myers-Briggs equivalent of a recidivism assessment.
    Also, glad to contribute my art to the video ☆ Stoked you reached out to me.

    • @_BangDroid_
      @_BangDroid_ 1 year ago +6

      And Myers-Briggs is just glorified palm reading

  • @chankfreng
    @chankfreng 1 year ago +13

    If an algorithm told us that lighter sentencing leads to lower recidivism, would the courts treat those results the same way?

    • @buchelaruzit
      @buchelaruzit 1 year ago +2

      lol we all know the answer to that question

    • @Epic-so3ek
      @Epic-so3ek 9 months ago

      Not in the great US of A

  • @williamn1055
    @williamn1055 1 year ago +8

    Oh my god they made me take this test without saying what it was. I'm so glad I assumed it was a test against me and answered whatever sounded best

    • @studentofsmith
      @studentofsmith 1 year ago +1

      You mean people might try to game the system by lying? I'm shocked, I tell you, shocked!

    • @buchelaruzit
      @buchelaruzit 1 year ago

      yeah just looking at these questions tells you that it can and will be used against you whenever convenient

  • @GrimMeowning
    @GrimMeowning 1 year ago +4

    Or they could go the Scandinavian way, where prisoners are not punished (except for very serious crimes) but instead reintegrated into society: they learn new skills, work with psychologists, and rethink their actions and position in life. That decreased recidivism to extremely low levels. Though as long as there are private prisons in the USA, I doubt it will be possible.

    • @Epic-so3ek
      @Epic-so3ek 9 months ago

      That system won't work for people with ASPD, and honestly a number of other people. Many people need to be kept incarcerated until they're not dangerous, or, with ASPD, perhaps forever. A focus on rehabilitation, or at least not intentionally torturing prisoners, would be a good start though.

  • @adamplace1414
    @adamplace1414 1 year ago +47

    "Hey let's take the smartest known computer in the universe - the human brain - out of the equation in favor of some vague questions posed by people the defendants will never meet."
    "Sounds great!"
    I get we all have biases and there should be checks in place to offset them. But rules and algorithms are just poor substitutes for common sense in a lot of ways.
    I wonder if the ongoing labor shortage isn't in part due to so many employers relying on similar questionnaire based algorithms to disqualify worthwhile candidates.

    • @desfortune
      @desfortune 1 year ago +6

      The program does what you teach it to do. It's still the human developers at fault, because if you train it using biased data, you end up with a biased program. Also no, the labor shortage is not because of employee questionnaires; it's because we are in a recession.

    • @adamplace1414
      @adamplace1414 1 year ago

      @@desfortune "...in *part* ..."

  • @Nylak-Otter
    @Nylak-Otter 1 year ago +4

    My problem with this evaluation in my own case is that I test high for recidivism, and they're absolutely correct. But in practice I wouldn't show that feedback since I'd be less likely to be caught more than once. I have the same criminal habits that I've had for 20 years, and no one has caught me or bothered to call me out for it yet. If I was caught, I'd continue but be even more careful. The evaluation would be marked down as inaccurate.

  • @EnzoDraws
    @EnzoDraws 1 year ago +5

    Should've titled this video "The Immoral COMPAS"

  • @RialVestro
    @RialVestro 1 year ago +8

    I once got detention for being racist against myself... cause I was speaking in an Irish accent on St. Patrick's Day and I'm actually part Irish...
    I also got a detention for being late to class when our Teacher was having a parent teacher meeting and locked us out of the classroom during that time but she apparently still took attendance and marked the entire class absent. Apparently that teacher is known for doing stuff like this because when I showed up for detention the lady who runs the detention room took one look at who issued the detention slip and said I could leave.
    And another time I got a detention because I had left school early to go to work and I had already cleared the absence with the school ahead of time but still ended up getting a detention anyway. Though after I explained that to the principal he threw the detention slip in the trash and told me to just ignore it if it happens again.

    • @o0Donuts0o
      @o0Donuts0o 1 year ago +2

      3 detentions. I predict 20 to life for you!

    • @truthboom
      @truthboom 1 year ago

      If the times you went to detention are recorded in some dataset, then you have to sue; otherwise it's meaningless.

  • @SgtSupaman
    @SgtSupaman 1 year ago +3

    Statistics and algorithms can absolutely help predict what people will do but cannot predict what a *person* will do. No one should be trying to predict a single person's actions for anything more than theoretical interest, especially not in any capacity that will affect that person's life.

  • @orsettomorbido
    @orsettomorbido 1 year ago +17

    The problem is: We (as world) shouldn't use punitive "justice", but rehabilitative and restorative justice.

    • @ichigo_nyanko
      @ichigo_nyanko 1 year ago +1

      Absolutely, why should you punish someone for something they might do? It's innocent until proven guilty, and if you haven't even committed the crime yet it is literally impossible to prove you guilty.

    • @orsettomorbido
      @orsettomorbido 1 year ago +2

      @@ichigo_nyanko I'm not talking about deciding whether someone might commit a crime again.
      I'm talking about not punishing people, but helping them change the motivations that made them commit the crime. And helping the victims too, of course! Whether the person has already committed a crime or not, or whether they might commit another or not.

    • @michaellautermilch9185
      @michaellautermilch9185 1 year ago +1

      No, you're asking the justice system to do more than administer justice. This will lead to a totalitarian dystopia where the justice system gets to act like everybody's personal overseer.
      Punishment should be punitive (deserved) because rehabilitative punishment is allowed to go far beyond what the person deserves, if there's a chance it might "help them".

  • @Lazarosaliths
    @Lazarosaliths 1 year ago +1

    Amazing video Kevin!!!!
    That's so dystopian. One more step towards the future.

  • @The_Privateer
    @The_Privateer 1 year ago +13

    YAY!! "Pre-crime."
    I'm sure that will work out well. No risk of dystopian tyranny here... move along.

  • @j.matthewwalker1651
    @j.matthewwalker1651 1 year ago +5

    As odd as it sounds, polling Twitter and taking the average is a pretty good way to validate results. The "wisdom of the masses" concept has repeatedly demonstrated extremely accurate results, much more accurate than a small group of experts. (A quick simulation of this, and its main caveat, follows this thread.)

    • @SkigBiggler
      @SkigBiggler 1 year ago +1

      Twitter is not a good representation of people as a whole. Wisdom of the masses is also (as far as I am aware) typically only meaningfully applicable to situations where personal beliefs are unlikely to play a role in decision making. No one is likely to hold a strong opinion on the nature of a jar of jelly beans; they are likely to do so with regards to a criminal.

    • @j.matthewwalker1651
      @j.matthewwalker1651 1 year ago

      @@SkigBiggler fair points, and obviously Twitter should not become the source for sentences, but as long as the data is presented in a way that reduces the likelihood of sensationalism it's still a good way to corroborate something like the algorithm. Specifically, anything that could link the subject to a trial in the media, and things like race and sexual orientation should be omitted.

    • @buchelaruzit
      @buchelaruzit 1 year ago

      You cannot ignore the bias element to it. Here it makes sense that the general opinion is the same as the AI's; where do you think the AI learned? The "wisdom of the masses" also tended to rank black people higher.
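
The statistical core of the "wisdom of the masses" claim, and the caveat raised in the replies, fits in a short simulation: averaging cancels independent individual noise, but a bias everyone shares passes straight through. All numbers are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
truth = 70.0                               # the quantity being estimated
noise = rng.normal(0, 15, 1_000)           # independent individual error
unbiased_crowd = truth + noise
biased_crowd = truth + 10 + noise          # everyone shares the same +10 prejudice

print(f"mean individual error (unbiased): {np.abs(noise).mean():.1f}")
print(f"crowd-average error (unbiased):   {abs(unbiased_crowd.mean() - truth):.1f}")
print(f"crowd-average error (biased):     {abs(biased_crowd.mean() - truth):.1f}")
```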

  • @weslanstr
    @weslanstr 1 year ago +22

    My first problem of many with that software is that its mechanics are secret.

  • @keanugump
    @keanugump 1 year ago +5

    Most of those questions sounded to me like "are you rich?", "are you a stereotypical white person?" or "are you in a vulnerable position in life?"

    • @andrasfogarasi5014
      @andrasfogarasi5014 1 year ago +3

      Yeah. Most of the questions on that survey could've been condensed into a single question:
      "What percentage of your income do you save?"
      A great predictor of recidivism. Financial strain causes criminality due to obvious reasons. And the simplest way to quantify financial strain is your savings rate. If someone makes $15,000 but saves 30% of it, that person is distinctly good at managing their finances. They may be poor, but they are certainly not the type to have to commit crimes over that. Now imagine someone who makes $100,000 a year and saves none of it. What exactly do you spend $100,000 on per year? Drugs? Alcohol? Gambling? Status symbols? An unemployed spouse and 3 children? Whatever it may be, this person is likely to have a stressful life and/or a terrible personality. I dare say they're probably more likely to commit a crime than our impoverished financial wizard. And while that crime is most likely going to be insurance fraud, it is still crime.

  • @Youssii
    @Youssii 1 year ago +3

    If an accurate algorithm said it was almost certain someone would commit a crime, would it even be fair to punish them for it? After all, it would seem predestined to happen…

    • @michaellautermilch9185
      @michaellautermilch9185 1 year ago

      Under a fair judicial system, no. Under a rehabilitative system, yes, you can punish anyone for just about any reason if it will "help them" in the long run.

  • @notme222
    @notme222 1 year ago +5

    Your question at the beginning isn't about who's more likely to commit a violent crime, or who's more likely to get a conviction in the next 8 years. It's "who's more likely to commit another crime?" And logic backs up the algorithm on that. The person with more years in front of them, who may believe they got away with their last crime, has a higher chance of doing something at some point. No context from that question was about setting parole.
    An algorithm that makes accurate predictions would still be wrong if the questions being answered aren't what the asker meant to ask.

  • @Eeeeehhh
    @Eeeeehhh 1 year ago +4

    This test feels scarily similar to an ADHD assessment; I always wonder how algorithms will discriminate against mentally/chronically ill people.

  • @SuperYoonHo
    @SuperYoonHo 1 year ago +1

    Vsauce! Glad to have you back!!! Love your videos Kevin! You are so cool as always.

  • @trickdeck
    @trickdeck 1 year ago +6

    I can't wait for the Sibyl System to be implemented.

  • @yinq5384
    @yinq5384 1 year ago +3

    The black box reminds me of Minority Report.

  • @prnzssLuna
    @prnzssLuna 1 year ago +5

    Not gonna lie, this is genuinely terrifying. The other videos you've made so far mostly showed one-off mistakes that got rectified afterwards, but it doesn't look like anyone is willing to stop the use of unreliable software like this? Terrifying.

  • @danielhernandezmota225
    @danielhernandezmota225 1 year ago +1

    One must be careful to include relevant and pertinent data when generating a model. In this case, the model must not have biased features, directly or indirectly; that can be tested alongside a team of experts who carefully evaluate the results. An additional procedure must also be done in order to "open" the black box with model explainability: one can use SHAP values or Anchors, even LIME, to try to uncover what's inside. Finally, monitoring of the model is a must; measuring performance through detailed audits is imperative to determine if the model is still functional or if it is getting worse over time. Since population dynamics change over time, it is safe to assume that the model will eventually stop working correctly. (A minimal SHAP sketch follows below.)
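
A minimal sketch of "opening" a black box with SHAP, one of the tools the comment names. It assumes the shap package and uses a tree model on made-up features; a real audit would still need the expert review, monitoring, and drift checks the comment lists.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
X = rng.random((500, 3))                             # hypothetical questionnaire answers
y = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.random(500)  # made-up "risk score" target

model = RandomForestRegressor(random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)  # (500, 3) contributions
# Global view: mean |SHAP| per feature shows what actually drives the scores.
print(np.abs(shap_values).mean(axis=0))
```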

  • @louistennent
    @louistennent 1 year ago +1

    This is literally the plot of Captain America: The Winter Soldier. Except, of course, with massive aircraft with guns aimed at the high-risk people.

  • @PlaNkie1993
    @PlaNkie1993 1 year ago +4

    Didn't know the black box was actually real, that's pretty wild and concerning

  • @sydney9225
    @sydney9225 1 year ago +3

    Great video! love the way you summarize and explain topics. But that voice crack tho

  • @kylejramstad
    @kylejramstad 1 year ago +1

    I love the "code" stock footage that shows the help output of the command-line command append.

  • @daaawnzoom
    @daaawnzoom 1 year ago +1

    6:30 Remember everyone, if you saw someone stealing food, no you didn't.

  • @youkofoxy
    @youkofoxy 1 year ago +4

    They should have watched Minority Report or Psycho-Pass.
    One just needs to watch one of those to realise how easily such a system can ruin people's lives.

  • @raxcentalruthenta1456
    @raxcentalruthenta1456 1 year ago +4

    This is dystopian. Plain and simple.

  • @venkat2277
    @venkat2277 1 year ago +2

    0:40 Yes, I predicted that too; it makes a lot of sense.
    Think about it: the 40-year-old guy who committed armed robbery knows the consequences, probably regrets it, and will be very scared to repeat it, while the girl walked away as if nothing happened and faced no consequences, hence she is much more likely to repeat it.

    • @michaellautermilch9185
      @michaellautermilch9185 1 year ago

      The girl should be appropriately punished by her parents, as all children occasionally need. If parents would parent, then the government wouldn't need to become Big Brother and act like everybody's parent.

  • @vgamesx1
    @vgamesx1 1 year ago +1

    6:00 Right here is where I really noticed the biggest problem with these questions on my own. I do agree with this statement; however, that does NOT mean I think you should always put yourself first. For someone whose main goal is to climb the corporate ladder or whatever, that would be a perfectly valid response too.

  • @maxwhite4732
    @maxwhite4732 1 year ago +4

    This is the equivalent of asking a fortune teller to predict the future and using it as evidence in court.

  • @nourgaser6838
    @nourgaser6838 1 year ago +31

    This video to me relates directly to the MBTI and proves that we cannot predict or understand human behavior and personality. Psychology is not a natural science with concrete facts that can be derived mathematically. (Not that the MBTI or that COMPAS software relies on psychology or anything scientific anyway.)

    • @feedbackzaloop
      @feedbackzaloop 1 year ago

      For a 'not a natural science' psychologists learn way too much statistics. Like, near as much as physicists

  • @LeetJose
    @LeetJose 1 year ago +1

    This reminds me of an older book my class read in middle school (2002?) about a computer that could predict crime. I think I remember the book describing a person being led to the room with the device so it could be destroyed. I actually don't remember it too well; I haven't been able to find it.

  • @distortedjams
    @distortedjams 1 year ago +2

    I only chose the bike stealer because they weren't caught, and the other one was in prison so couldn't commit more crimes.

  • @themightyquinn1343
    @themightyquinn1343 1 year ago +17

    There is something extremely concerning to me about an algorithm or artificial intelligence that tells me whether or not I will commit a crime.

  • @bbrandonh
    @bbrandonh 1 year ago +6

    Minority Report moment

  • @MrTJPAS
    @MrTJPAS 1 year ago +2

    The Watch Dogs games sure seem to be more and more prophetic as time has passed, with the use of big data and algorithms moving from businesses improving their marketing into more personal and immediately important parts of people's lives; in this case, a calculation of one's likelihood to commit a crime or be the victim of one, reduced to a simple equation.

  • @aloe.0v0
    @aloe.0v0 1 year ago +1

    These "risk assessments" have HUGE bias towards the neurodivergent. As someone with ADHD, I've faced similar lines of questioning in clinical assessments. ("Do you feel bored?", "Do you feel discouraged?", "Is it difficult to keep your mind on one thing for a long time?")...
    ...Not to mention I live in an expensive city and live with friends to afford rent. Apparently I'm high risk for repeat criminality 😅

  • @FreeDomSy-nk9ue
    @FreeDomSy-nk9ue 1 year ago +3

    I love your videos; that was awesome, I really enjoyed it.
    I can't believe COMPAS isn't talked about as much as it should be.

  • @prim16
    @prim16 1 year ago +16

    This convinces me that COMPAS doesn't just need to be revised or "fixed", it needs to be discontinued. AI may have a future in the world of law. But this has completely tarnished its reliability, and ruined the lives of people. Its untested and inaccurate technology is being used too soon. If you were using machine learning to teach a bot to play chess, you wouldn't throw it up against Magnus Carlsen on its first dozen trials.

    • @tweter2
      @tweter2 1 year ago +1

      What would replace it? Gut hunches?

    • @jinolin9062
      @jinolin9062 1 year ago

      @@tweter2 Something that doesn't ask philosophical questions to decide whether someone should get 13 or 30 years in prison?

    • @tweter2
      @tweter2 1 year ago

      @@jinolin9062 That's the county prosecutor and judge. I know of one crime where a judge gave someone 15 years of probation and treatment (for a first conviction) while the prosecutor appealed to get the guy 15 years in prison. (Yes, the prosecutor can appeal your sentence to get a harsher one.)

    • @tweter2
      @tweter2 1 year ago

      @@jinolin9062 I think another horrid thing is that judges can decide if they want sentences for multiple convictions to be served concurrently or consecutively. In other words, if you get convicted of a 3-year crime, a 5-year crime, and a 10-year crime, will you serve 10 years for all three or 18 for all three? The judge gets to pick!

    • @ichigo_nyanko
      @ichigo_nyanko 1 year ago

      @@tweter2 Nothing, standardised sentencing for the same crime; perhaps increased sentencing for repeat offenders. Why should you punish someone for something they might do? It's innocent until proven guilty, and if you haven't even committed the crime yet it is literally impossible to prove you guilty.

  • @EmperorShang
    @EmperorShang 1 year ago

    Thanks for being part of the problem

  • @light-master
    @light-master 1 year ago +2

    Our societal laws are a collection of what society deems we are and aren't allowed to do. By definition they are a human judgement of human actions, and they are constantly changing based on how each new generation values and judges the actions of others. You cannot morally allow a computer to judge human actions any more than you can judge the actions of those who lived hundreds of years ago, who were governed by an entirely different set of laws.

  • @meisstupid1831
    @meisstupid1831 1 year ago +23

    Okay, Kevin. This is the problem:
    Crimes shouldn't have algorithms. Human judgement is basically the closest anyone can get to weighing a crime.
    Things might be correlated, but that's never always true; people are too hard to predict, in criminology or basically everything.
    Math doesn't conclude crimes, it catches clues, as Kevin has already proven in the last video.
    Such a dumb misconception; it's like using a broken compass to find your way back.
    The real problem is that human nature is too complex, and the best way to reduce crime rates is to find the root cause.
    It feels odd to judge people using math; it's a tool, but not for something as complex as us human beings.

    • @HHHjb_
      @HHHjb_ 1 year ago

      Ye

    • @feedbackzaloop
      @feedbackzaloop 1 year ago

      Funny you brought up that analogy, when one of said algorithms is called COMPAS

    • @truthboom
      @truthboom 1 year ago

      Human nature isn't that complicated lol.
      People steal food if they have no food.
      Bosses lower wages because they're greedy and can get away with it.

  • @Rayzan1000
    @Rayzan1000 1 year ago +8

    I think you misinterpret the "How often do you worry about financial survival?" question. If you are often worried about your financial survival, then you "probably" have either a rather low wage or a fluctuating one, making you more likely to commit a crime in order to pay your bills.

    • @sirswagabadha4896
      @sirswagabadha4896 1 year ago +6

      In that case, any psych undergrad could tell you how much the ambiguity of the question, without any context, invalidates its results. There's a whole history of keeping people in prison for being poor; they could have chosen something much better.

    • @SeidCivic
      @SeidCivic 1 year ago +2

      Thus making the test/algorithm even more unreliable.

    • @Rayzan1000
      @Rayzan1000 1 year ago

      @@sirswagabadha4896 Well, most (if not all) questions can invalidate the result if taken out of context.

  • @Gerard1971
    @Gerard1971 1 year ago +1

    The duration of a sentence should be based on evidence about the crime that happened, not on what might happen in the future according to some black box algorithm that is based on group statistics rather than on the individual, and that nobody can independently verify. At most, such a tool should be used to determine whether certain treatment needs to be given during rehabilitation to decrease recidivism. It is sometimes used to reduce sentences, when the risk of recidivism is deemed low, to free up space in prisons, but that is equivalent to giving someone a longer sentence because they have a higher risk of recidivism.

    • @quarepercutisproximum9582
      @quarepercutisproximum9582 1 year ago

      Exactly! Our system is based not on its self-proclaimed rehabilitation, but on revenge/punishment. Therefore, we cannot morally "take revenge" or "punish" that which has yet to actually be done.

  • @NaudVanDalen
    @NaudVanDalen 1 year ago

    Kevin: writes inappropriate poem.
    Algorithm: "He's too dangerous to be left alive."

  • @Lolstarwar
    @Lolstarwar 1 year ago +5

    i wanna read the poem

  • @jamesmiller4487
    @jamesmiller4487 1 year ago +4

    Excellent and thought-provoking video; clearly algorithms are not, and maybe never will be, ready to judge humans. The problem is that human judgement is just as flawed, varying from person to person, day to day, and situation to situation. You could have created a video on the fallibility of human judges and their inept, biased sentencing, and it would have been equally right and thought-provoking.

  • @kevinlago1619
    @kevinlago1619 4 months ago

    Awesome video as always Kevin! :D Cool name btw

  • @csolisr
    @csolisr 1 year ago +2

    One of the parameters in that COMPAS algorithm is basically the skin tone chart from that Family Guy skit, you know the one

  • @j.21
    @j.21 1 year ago +9

    .

  • @charlierogers5403
    @charlierogers5403 1 year ago +18

    And this is why algorithms are not good for everything! We shouldn't rely on them 100%.

    • @timojissink4715
      @timojissink4715 1 year ago +2

      Algorithms can be amazing, but they need the right unbiased human input.

    • @luc_666jr5
      @luc_666jr5 1 year ago +2

      Tell YouTube that please

  • @bishoukun
    @bishoukun 1 year ago +2

    The algorithm: "Mental illness and learning differences are criminal indicators!"

  • @Mysteroo
    @Mysteroo 1 year ago +2

    Those darn scooter thieves

  • @SAUL_GOOFYMAN
    @SAUL_GOOFYMAN 1 year ago +8

    we don't need detectives anymore?

    • @issamasghar5203
      @issamasghar5203 1 year ago

      This isn't about finding criminals but about predicting whether someone will become one. It's a more proactive way of stopping crime: rather than waiting for it to happen, it tries to prevent monetary loss and even loss of life.

  • @tom05011996
    @tom05011996 1 year ago +4

    The COMPAS risk assessment would give a high score to anyone with ADHD!

    • @evil_bratwurst
      @evil_bratwurst 1 year ago +1

      I guess I'm gonna be a major criminal, then!

  • @user-nn1mf8lr2w
    @user-nn1mf8lr2w 1 year ago +2

    Feels like we're getting dangerously close to Psycho-Pass

    • @yanivray
      @yanivray 1 year ago +2

      I looked to see if there was a comment about that lol

  • @spthibault
    @spthibault 1 year ago

    "...If we could, should we?" That is a gold-level philosophical question. An additional question that blurs the hard line of separation between subjects, imo, is this: should we be fielding this technology and subjecting the public (whose lives are real) to it before it is perfected? Should we be making actual society unwilling and unknowing participants in that apparatus's development and operation, especially when their actual livelihoods are on the line?

  • @vertigo747
    @vertigo747 1 year ago +5

    Haven't watched it yet, but I know it's going to be good

    • @Nillowo
      @Nillowo 1 year ago +4

      That’s easy to say for all of Kevin’s videos ;)

  • @martinzg0078
    @martinzg0078 1 year ago +3

    Vsauce

  • @theomni1012
    @theomni1012 4 months ago

    It's always been interesting how history can predict the future, but it still varies wildly.
    For example, take a kid raised by abusive parents. You could say that they'll be an abusive parent when they grow up because that's how they were raised. You could also say that they'd grow up to be a very good parent because they never want to treat their child the way they were treated.

  • @danbance5799
    @danbance5799 1 year ago

    I've spent a lot of time developing statistical methods for identifying spam in email. And I assure you, the same fundamentals apply to any sort of predictive methodology. Specifically:
    1. Dumb beats smart. Every single time. Whatever clever algorithm you come up with, nothing has ever outperformed brute force statistics on raw data.
    2. You're only as good as your source data. If your source data is garbage, then every prediction you make will be as well. If your source data is biased, your results will also be biased.
    3. Data changes over time. Spam evolves. It's gotten a lot harder to detect over the last several years. Society also evolves. Predicting outcomes based on data from the 1980s will be unreliable (see #2 above).
    4. Every prediction has a margin of error. The best spam filters make mistakes. When applying the same methodology to something as unpredictable as behavior, the margin of error will be higher.
    None of this, however, gets to the biggest problem here: we have a criminal justice system that's predicated on punishment, not rehabilitation. Beyond that, transparency is essential. If I were a judge or a juror, I would never rely on a black box output, ever. Courts should never accept any piece of software or data that cannot be audited. It doesn't matter if it's COMPAS, a breathalyzer, or anything else.
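
    To make point 1 concrete, here is a minimal Python sketch of what "brute force statistics on raw data" can mean for spam: tally how often each word appears in spam versus non-spam, then average those per-word rates over a new message. The tiny training set is hypothetical and real filters use far more data, but the point is that nothing here is clever:

    # Minimal sketch of "dumb beats smart": per-word spam rates from labeled
    # raw data, with no feature engineering. Training messages are hypothetical.
    from collections import Counter

    spam_msgs = ["win money now", "free money win big", "claim your free prize now"]
    ham_msgs = ["meeting moved to noon", "can you review my draft", "lunch at noon"]

    spam_counts = Counter(w for m in spam_msgs for w in m.lower().split())
    ham_counts = Counter(w for m in ham_msgs for w in m.lower().split())

    def spam_score(message: str) -> float:
        # Average the per-word spam rate s / (s + h) over the words seen in training.
        score, seen = 0.0, 0
        for w in message.lower().split():
            s, h = spam_counts[w], ham_counts[w]
            if s + h == 0:
                continue  # unseen word: contributes no evidence either way
            score += s / (s + h)
            seen += 1
        return score / seen if seen else 0.5  # 0.5 means "no signal at all"

    print(spam_score("free money"))       # high: these words appeared only in spam
    print(spam_score("review my draft"))  # low: these words appeared only in ham

    Even this toy version inherits points 2 through 4: biased or stale training messages skew every score, and unseen words contribute no evidence, so some margin of error is unavoidable.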

  • @RedIceberg
    @RedIceberg 1 year ago +4

    I feel like the problem is that a young person, once taken to court, is much less likely to commit another crime. COMPAS probably doesn't take this into account, and therefore gave the teenager an inflated score.

  • @lawlerzwtf
    @lawlerzwtf 1 year ago +5

    Psycho Pass.
    Or Minority Report, depending on your demographic.

  • @zeropoint703
    @zeropoint703 1 year ago +1

    that outro with the box tho 🔥🔥🔥

  • @tmrogers87
    @tmrogers87 1 year ago

    Liking and commenting to increase engagement and visibility. This is fascinating, and more people should know how criminal justice, AND MOST OTHER ASPECTS OF MODERN SOCIETY, are shaped by the assumptions of an algorithm or other model.

  • @spudd86
    @spudd86 1 year ago +8

    Seems like you could get the one about having difficulty keeping your mind on one thing tossed as discriminatory... since that is literally the main symptom of ADHD.

  • @Mysterios1989
    @Mysterios1989 1 year ago +3

    I am really glad that these kinds of tools are about to be banned in the EU (well, as soon as the AI directive passes, but there is a strong push for it). AI systems are great where they are meant to work, but they have too many flaws when used in fields like law.

  • @ProductBasement
    @ProductBasement 1 year ago +1

    Please note that the SCOTUS declined to hear Loomis v Wisconsin on June 26, 2017, after Gorsuch had taken the bench but before Kavanaugh, Barrett, or Jackson.

  • @JM-us3fr
    @JM-us3fr 1 year ago +1

    People find all sorts of excuses to hurt people and punish people they don't like