RubyConf 2017: Finding Responsibility by Caleb Thompson

  • Published 14 Dec 2024

COMMENTS • 2

  • @cjayssons · 7 years ago · +1

    There's a full transcript of this talk over at calebthompson.io/talks/dont-get-distracted, and I plan to add some more related content to the blog.

  • @HotSkorpion · 6 years ago · +1

    This is a very pertinent point, but it's not a new issue. Software may be the tool of the future, but it is still a tool like many others; in that respect, software is no different from a hammer. As for imagining the worst possible use of a tool: usually it's inversely proportional to the best use. But rather than limiting what the tool can do, think about WHO you are developing it for and WHO is going to use it. The whole good/evil potential is a tricky duality, but it's the nature of the beast. Almost every tool has equal potential for good and evil, and I personally don't agree with refusing to develop something that has the potential for massive good just because it also has the potential for massive evil.

    On the example of the Wi-Fi tool for the DoD: that would have been my first and major alert. All the other aspects, function and reason, would not matter. Even if they gave me all the right answers and assured me it would not be used for killing people, it's still the DoD. So the main and only questions developers should ask are "Do I trust this person/entity?" and "Do I agree with what they do, and do I want to be part of it?"

    For example, imagine that instead of the DoD, the tool was requested by NASA or ESA. Same exact functionality, same exact requests, same exact questions of "does it find phones?". Would that change how you interpreted and viewed the possible uses, or not? The tool would retain the exact same potential for both good and evil; the only difference would be WHO is asking for it.