A really good talk. I don't necessarily agree with everything, but it's a good perspective and raises some really pressing questions, ones we should ask more often.
Terry Davis addressed some of these ideas in TempleOS. His shell runs C code interactively. He also argued that Linux is way too complex for single-user machines, saying that a Unix-like system is an 18-wheeler when all he needs is a dirt bike. As for request-response IPC, Minix implements this. Modifying Minix might be the best way to go about building this vision of an OS.
@Patrick Keenan DTRH wasn't the first video made about Terry's life, and beyond that, a lot of people in the programming community followed Terry up to his death.
It's not true that computers are not shared anymore. At my university there is a computer pool for all the students to use, running Linux. Each user has their own desktop and home directory. Everybody can use any computer, and I don't need to bring my laptop from home anymore.
I still work on systems with over a thousand users which need to be protected from each other. Brian thinks these systems were all replaced back in the 1970s.
There's no reason why those systems couldn't run a different OS, or OS feature set. "But I still use X" is not a reason why everyone must be forced to use X. If we are talking about an ideal alternative for a simplified OS, it wouldn't replace all systems. In general it sucks to not be an admin, and if you're the admin, other users don't matter.
On Plan 9, whoever logs in at the console is effectively root, and also the only user, on that machine. It's very simplifying, and no loss at all in terms of security, because in a proper Plan 9 system the terminals have no local storage.
I've complained about gratuitous complexity since at least 1993. The universal response is that I'm clearly too stupid to grasp what's necessary and should probably revert to being a tester-or-something. Yet, somehow, I delivered over 20 shrink-wrapped projects (when that was a thing) on five different architectures before delving into web programming in 2005. JavaScript in 2020 is insanely complex and well-nigh impossible without numerous "package managers." Aaaaand, forcing JavaScript onto the server side - ostensibly so engineers "wouldn't have to learn another language" - was crazy dumb, and the complexity of dealing with a dozen different ways to fake multi-threading makes any "wins" of using a single language on both servers and clients completely moot.
Complexity has been something I've fought against all my career. Computing is the only engineering discipline I know where "more visibly complex" is seen as "better".
JavaScript is running on the server side because people want the same CODE running on both client and server. For example, if a web page is generated by JS and the browser has JS disabled, the server can render that page instead. What dozen different ways of faking multi-threading? There is just one: the async runtime (with a few legacy APIs for it, but those have all been replaced with async/await by now).
@@rlidwka ...all been replaced - uh, no. Do you have anything in production? You don't just yank working production code and change things to async/await. You leave it there until it either is found to have a bug, is replaced by other code, or is refactored. And async/await is NOT true multi-threading; if you've done actual multi-threading you'll know that. Also, async/await is NOT what you want in many cases, particularly await, as you end up with synchronous code that will impact user experience. I'll stick with PHP on the backend. It's GOOD to help separate tasks.
@@Jollyprez The APIs a developer frequently interacts with are being migrated to async/await; you can check popular open-source libraries for that. Perfectly working production code doesn't change, but who cares? If it's working, let it be working. And if it breaks, time can be allocated to rewrite it to newer standards. Async/await allows the underlying library to implement multithreading, which web workers do a good job at.
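For what it's worth, the distinction being argued about here isn't JavaScript-specific. A minimal sketch of my own (in Python, purely for illustration) of the difference between cooperative async/await concurrency and real OS threads:

    import asyncio
    import threading
    import time

    async def io_task(name):
        # Cooperative concurrency: tasks interleave on ONE thread,
        # yielding to the event loop at each await (like JS async/await).
        await asyncio.sleep(1)
        print(f"async {name} done")

    def blocking_task(name):
        # A real OS thread, scheduled by the kernel alongside other threads.
        time.sleep(1)
        print(f"thread {name} done")

    async def main():
        # Both tasks finish in about 1 second total, on a single thread.
        await asyncio.gather(io_task("a"), io_task("b"))

    threads = [threading.Thread(target=blocking_task, args=(n,)) for n in "ab"]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    asyncio.run(main())

Neither model is 'fake'; they solve different problems - async/await hides waiting, while threads (or web workers on the JS side) give you actual parallel execution.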
As opposed to a GUI over the phone, where it's 'What do you see?' 'No, the other window.' 'Yeah, it looks like a wrench.' 'No, below that.' 'Now click "yes".' 'There's no "yes" button?' 'Try the "options" tab.' 'No, not "settings", "options".'
Tim Wasson I don't see why experts have to cater to the Paris Hiltons of the world. Wanna use a computer? Learn how it works first. It's not my job to educate users on what the Windows key is.
Anyone championing a reduction of all the accidental, corrupting, pointless and often insane complexity in modern IT, gets my vote. Unfortunately, this complexity sells consultancy hours, days and months... it keeps most people in their cosy jobs. It's also a power trip for most: those who understand the nonsense complexity have (lots of) power over those who do not. "Trust me, I'm a doctor" (and my invoice is in the post)
(This is just a gathering of loose arguments, facts and emotions, and is not the basis for a thoughtful - and respectful - discussion. I want to point out the grotesqueness of the arguments given in the preceding comment(s), so take it more as satire. Yeah.) So let's see, where do you want to start? How many storage systems are there? Read-only? Spinning hard disks? Solid-state storage? Tapes for archiving? Which do you want to get rid of? Windows? *NIX? Why Windows Server? Why Windows 10 Pro or Enterprise edition? Which of those should we remove from the market, no, from the universe? Hardware? RISC or CISC? I mean, it seems that RISC is beginning to be more popular even in the server and desktop area. Smartphones, that's gonna be great! They are IT systems too, you know? Software, hardware - which should we drop? MS did us a favour, no more Windows Phones? But the apple and the robot, which should it be? Two totally different operating concepts, I guess, and I am not only talking about the pesky user interface. Just get rid of all those pointless, awkward Signal, Threema, Pulse, Telegram, Skype. Just go WhatsApp. And Facebook. Btw, thanks to Google for making it -1! All those efforts wasted on DIVERSITY just so that some people can get jobs because they have no(t the same) clue as you playing rock music, miscalculating the next world financial crisis, lying to get voted in so they can lie to many more people. No, you want to get rid of those IT guys because they should have learned different stuff in life. Excuse me, all those pros creating overly complex IT systems for their own benefit (money and a "Trust me, I'm a doctor" attitude). Where did these people start to make tech? The Google founders? The Facebook founder? The Apple founders? These and others dedicated their lifetime, some even from childhood, to creating complex, efficient, unique, diverse software and hardware - also to please you. Are these people the snobs you are calling them? What is it that you do for work? (Rhetorical questions, please don't answer.) Can it be done just by watching someone else? Doing the same "moves"? Did you have to learn stuff? Would I have to learn stuff to do your work? Would it take me hours, days, weeks, months, or even years? I know you would have to learn "IT", because you sure seem to have no clue what IT tech is, knowing nothing about hardware and nothing at all about operating systems. People spend their lives! on knowing IT and tech, as doctors might spend their lives just on doing their job, healing or research. (Gonna delete this post in a few weeks, but until then I feel a bit relieved.)
It's a lot like biological evolution, where certain organs may be redundant, certain veins and systems take unnecessarily complex paths, etc. But it's all there because of history. It's the same thing here: someone starts building some very rudimentary computing device. Then, instead of reinventing the wheel, someone else uses that thing to build something more complex, etc, etc. After more than 10 iterations like that you naturally get a mess. It was no one's intent, but there we are, patchwork on patchwork. But now redesigning the entire thing would take a lot of work, and we would have to re-school everyone. So we just reluctantly carry on piling even more shit onto that mess, increasing the cost of redesigning the whole thing EVEN MORE, and hence we are stuck in this vicious cycle of ever decreasing efficiency where the mess just keeps getting bigger. Whether it is biological evolution, city design or software, the same mechanism explains the inefficiency. And the longer the history, the greater these inefficiencies get. It's also called the 'law of the handicap of a head start'. Those entering a market fresh can immediately develop everything way more efficiently than established companies with a long history: en.m.wikipedia.org/wiki/Law_of_the_handicap_of_a_head_start
I think that the dichotomy of "admin software" vs "not-admin software" being your only method of permissions is a bad idea for security reasons, for the same reason that setuid root programs in Unix are encouraged to drop root privileges as soon as possible; you want software to have the least amount of permissions for the least amount of time, so that the impact of vulnerabilities in the software is minimized.
It's possible he meant them to be able to change over time. Either way, though, I think the two-way system, while it does simplify things, is actually MIMICKING one of the weaknesses of the Unix permissions system. I can agree that the focus on protecting users from other users of the same computer is somewhat archaic nowadays (even if it still applies often enough), but what's also archaic is the lack of focus on protecting users from their own software. That's why I think it might be better to start with the Android app permission system and try to improve that (e.g., generalize it, make it more fundamental in the OS if it's not already, focus it on more precise user control, maybe add OPTIONAL support for multiple users). The Android app permission system (which keeps changing - improving, I think) gives each app permissions, controlling exactly what types of things each app is able to do (rather than what files each USER is able to do what to, as in Unix). Also, since the Android permission system always tells you exactly which permission is blocking a process and lets you fix it immediately in a pop-up (and Android 11+ lets you grant an exception only once, rather than permanently changing the permissions), instead of stopping and saying "Permission denied" like Linux, common Android permission problems don't just break things and require troubleshooting on the command line like Linux ones do. Actually, I imagine a lot of my personal problems in this vein with Linux permissions could be solved within the Unix permissions framework if I just massively switched around my settings; however, here are some problems I have with, and improvements that could be made to, my Linux permissions. It would be nice if I could properly install software without needing full superuser privileges (I use sudo apt/apt-get install when possible). It would be nice if I could control exactly which files, directories, or drives a particular process is allowed to affect, so nothing else can be touched without my express permission. (That way, when I download a file with a script that's supposed to install a new operating system on some external drive, I don't have to worry that this script I downloaded from the internet and don't understand will mess with any other drives, like my primary hard drive. Similarly, sandboxing untrusted executables could be trivial, and downloaded software could be used in a permanently sandboxed state without needing a virtual machine; browser scripts could also be sandboxed at the operating system level rather than the browser level.) It would be nice if I could use software that can READ my partition table without having to give it permission to CHANGE my partition table. (The only way I know to check the layout of my partitions properly, or to make images of partitions, is to use software that needs superuser privileges whenever it runs, like parted, gparted, fdisk, and partclone (actually, I think parted might be able to run without sudo, but there was something inadequate about what it did).) Although I'm not as paranoid about my Linux PCs (/laptops/etc) spying on me as about my phone (mainly because they normally have no microphones, cameras, GPS, cellular service, wi-fi, or bluetooth attached), this concern is definitely not unique to cell phones (and, logically, it makes more sense to worry about malware when you rely heavily on free software). Thus, being able to control the use of such devices might also be useful.
I feel like I'm showcasing a lot of my ignorance and noobiness in that last comment, even if my core points made sense. For the record, I forgot about the df command (which lets you see partitions without needing admin privileges), but I DID already know that Android was based on Linux, though I didn't know the specifics. I was trying out Puppy Linux (BionicPup64) in a virtual machine, and I thought it was interesting how in Fatdog and Puppy the default login is root, but you can set network applications like browsers to be run by a user called "spot", who has no admin privileges, and there's also work on a third user called "finn" or "fido" who has no admin privileges but can sudo to get them, like on Ubuntu. Someone who I guess (because I don't remember for sure) was the/a designer of Fatdog and Puppy compared the "spot" system to Android, which he claimed ran each process under a different randomized user. Still, maybe you can see how it might be reasonable to call this a contortion of the Unix permission system - designed for computers shared by many humans - to fit a new context, where the focus is on sandboxing many different applications from each other and from hardware, personal data, and core system programs and data, and on separately limiting the permissions of each application, rather than of each human user, to only what the one human user wants and expects that application to do. I was also interested that Puppy comes with a very simple GUI "partviewer" for viewing partitions without admin privileges, though, as I mentioned, I more recently rediscovered that df does a similar thing, though without the bars to VISUALLY compare used space to allotted space, with more complications, and I don't know how to get it to show all partitions (including swap) or to show start and end points on the disk.
There's really nothing stopping you from making a "shell" exactly like you describe for current UNIX systems. There's also nothing stopping you from making a distro with just two privilege levels. Separate filespaces are also very much possible on Linux (containers). A request/response IPC that starts up programs is what systemd can do. Configuration in the form of a registry is also implemented on various systems. Of course, what you then end up with is a platform within a platform, and we already have many of those. That is part of why systems are such a mess. The GTK people have their ways of doing things, we have a variety of IPC mechanisms running alongside each other, we have various authorization protocols in place, etc. Most of what you describe as solutions are really just conventions you want to see. You can't expect everyone to follow your choices and limit everyone from making different choices. This reminds me of the old "we have both standard A and standard B in place, so we're making standard C to create a unified standard."
I find it interesting that most cloud infrastructure really attempts to implement most of the proposal presented here (on top of current tech), and, for security or organization, we build all the stuff the proposal attempts to remove right back on top of the platform that doesn't have it. To me this suggests most of the pain may just be what security requires. Concretely, UNIX and the other referenced early platforms were typically developed in the absence of security, and security was attached somewhat ad hoc through the years. For a proposal like this to truly work, security must be worked in from the beginning. Removing all the hairy parts does not bring in security. You're not just protecting against local users (who really should be protected from one another, otherwise a non-administrative user can corrupt an administrative user, intentionally or unintentionally, and you get disaster). Minimizing the scope affected by a breach is a good starting point, but communication is key. We need communication for effectiveness, but we must restrict it for security and stability. Finding that balance is tough no matter what the underlying platform looks like.
What are your thoughts on NixOS? It seems like Nix really tried to do this exact thing, with the added feature of using immutable data structures on top.
Regarding protecting users from each other, you say it almost never happens that strangers share a computer, but what about public libraries or computer labs at universities where students have unix accounts accessible from any machine on the lab?
+hasen195 Not only that, we can ssh into a shared server that typically is used by quite a few students simultaneously. In addition to your home folder, you'd typically store things in /tmp that you don't necessarily want other people to be able to access. We also have group projects where it's important to belong to a group that together has access to a folder that other people do not have access to. The notion that strangers don't share the same computer is simply not true in university or company systems.
+Oskar Södergren I would say yes and no regarding shared computers. Yes, people need places to store personal files. But these days those files are mostly stored elsewhere, not on the specific computer itself. Even on personal devices, we now put more stuff in Dropbox/iCloud/whatever. In fact, at my university, in the computer labs for my section of the school, everyone shares a single login to get onto the machine. It was simply too much maintenance overhead with no real benefit to support individual logins. Everyone brings in personal storage devices for personal data, and just uses the machines to run applications. Secure? Not really. But more manageable, yes. I'm not saying it's a great solution, but it is one born out of the necessity to simplify.
I hope they have some of those hardware/software solutions where the system is reset back to the base system image after login. But the solution for the future is not Dropbox or any other opaque cloud, but your own device. Like that convertible idea from MS: you have your virtual machine on your phone or USB stick, and the host has some kind of VMware-style player that just runs your machine. Simple and easy, and easily restricted. Ok, there is this USB controller problem that totally kills security at the moment.
Well... in the beginning, when Unix was invented, this was the typical environment, and that's why all this shared stuff was implemented. But MOST of the time nowadays you use your own devices. You don't share your home PC, your smartphone or your console with many other people (mostly only your kids or wife, but often they have their own devices, so not even that). You can see it especially in the fact that most computers nowadays have the main user profile running as Administrator (and Windows and Mac went so far as to cut away privileges even from this Administrator and give them to a super-administrator that you only see in repair mode) - a role that in the old days was restricted to one type of user, the server operators, while all the others were just clients, guests on the system. Yes, in offices you often still have PCs in an environment of networked computers where they are only clients, but remember: when Unix was invented, a "computer" didn't mean the terminal you were sitting at but the whole environment, and nowadays that computer is inside your box and you only network with each other. @David Peterson: At my university they have written on the screens: "Don't save to the desktop or the internal storage device, as it will be wiped clean after you log off", since the computers run in a sandbox mode. We are urged to store any data we want to keep on USB devices ;)
We just shouldn't use multi-user systems if there's no need for them (nobody uses my Linux boxes other than me). Yes, in a library or computer lab you might need a multi-user system. Debate over.
Had some decent ideas at the start, but this is dangerously inadequate from a security perspective. Any replacement should have more security controls, not fewer. Actually, I would go so far as to say that a new system should be built with security as the core concern, with multiple measures at each level, to bring it beyond any other system. User and file permissions are a weak and outdated mechanism to place any trust in and cause all kinds of issues, but the service they provide should not be discarded. A lot more should be added to a system beyond this. A registry is also particularly bad from a security standpoint, since it is by nature a shared space. To adequately isolate programs from each other you wind up putting up walls that defeat the purpose of the registry. Or you just let any program have free rein to do whatever it likes to everything in your system.
I absolutely agree with you. File security may not be an issue on systems with fewer than 5 users who all know each other, but everything beyond that needs strong security, if only for the case where one account gets breached. In a system where everybody can access all user files, that could cause devastating damage to all user data, and something like that is plainly not acceptable.
The Microsoft Windows WinSxS system already implements this kernel package management using UUIDs and version hashes. It guarantees that all versions of shared apps and libraries are always available to all consumers on the system. WinSxS has been shipping for ten years now.
And WinSxS is why a fresh install of Windows 7 Professional needs 40+ GB of disk space, filled during two full days of update downloads, before Office or other programs are installed. I wouldn't point to that as a success.
+alderin1 - Dude, it's 2017, and you're still fretting about 40 GB? In exchange for 100% backward compatibility across the entire operating system? You Linux people have weird priorities.
+angrydachshund - Actually, this "priority" was handed to me by my employer getting 60 GB SSDs for workstations to speed things up: the OS taking 40 of those 60 doesn't leave enough room for user data. However, the point was that WinSxS isn't a success; it's a band-aid on a foundationally flawed system, and just because storage is getting cheaper doesn't mean my operating system should try to take up the same percentage of space. Finally, sadly, I work in an area that often hits the edge cases; your "100%" is incorrect, and in my experience it's closer to 90%, unless you are only talking about other Microsoft products.
We did some of these suggestions before - LISP machines, Forth machines, etc. And Android Intents, SOAP, JSON-RPC, GraphQL queries... if we don't have a way to describe piping data from one small bit of code to another, we invent it. Again, and again, and again. "Those who cannot remember the past are condemned to repeat it." - George Santayana Whether you buy into his philosophy or not, we have gone from massive shared machines, to personal machines, to largely shared machines, and back again multiple times. I have customers today who have every desktop running a Terminal Services session on their server.
I can't say I agree with everything here, especially the idea of dispensing with user-level protections from other users. It's important to remember that users represent separations of concern, not just human users, e.g. a mysql user or a www user. I'm also not convinced that a registry store is better than /etc or keeping config files in an individual user's namespace as plaintext files. I think the points about package management are the strongest here.
Some comments:
1. Shell languages are certainly a mess. I am amazed that people still write complex programs in bash, which was written by someone who clearly skipped language syntax classes. However, shell languages spring naturally from the idea that scripting is a simple increment over command-line execution. This certainly means that shell languages need to be somewhat compromised (but not to the level of bash!), and that interpreted languages such as Python will go their own way from shell languages. If you don't like ls and the command shell, you can always use a graphical file explorer, but they don't script and their output cannot be piped and manipulated. I.e., let's not throw out the baby with the bath water. There is a reason programmers prefer command-line environments.
2. Dependency management in Linux became a serious problem due to a combination of "everyone should install from source" thinking and the tragic decision to adopt DLLs or .so modules, aka "dynamic link loading", from Windows. This was a technique from the bad old batch days of the 1960s, brought forward into the days of demand-paged virtual memory, where it makes no sense whatever. Does it save memory space? Yes, but page management makes that irrelevant, and DLL techniques actually work counter to virtual page management, which works best if you have unmolested binaries on disk. The real issue is that cross-package referencing (whether DLL/SO, library, or program) creates N different combinations of program run environments, and that is an inherently wrong idea. You don't want your users to be testing configurations that you haven't tested. The gain from this idea is that the user *might*, repeat *might*, see better functionality via a new package upgrade unrelated to the package at hand. I would assert that it is far more likely that you will see a new bug than better functionality. The alternative to all of this is a return to static program linking. The configuration is fixed at package creation by its developer, and that configuration is tested and unchanging. Libraries get compiled and shipped with the product. A program that is needed and is open source gets shipped in a subdirectory, eliminating the need to go find it, worry whether it is still compatible, etc. The binaries get bigger, but who cares? Virtual memory management takes care of loading only the actively used sections of a program.
3. Registries moved configuration data from individual files to a central system. Even if the Windows registry were organized, one of the biggest issues in Windows is the central registry becoming corrupted, which brings the entire system down. What point did the central registry serve? What point does Linux having all of its information in the /etc tree serve? It does not make the configuration information look alike. The info is divided by filename and, increasingly, by different directories as programs and devices need complex support files. My gold-standard example here is Eclipse. This program needs no install program. It lives in one tree. You can copy a working tree from one machine to another and it just WORKS. All its configuration info is in that tree. Delete it and it is gone; there are no registry or information files outside it. Want to be sure you got rid of the old program before putting a new Eclipse on? rm the tree and copy. Want multiple versions on your system at the same time for compatibility reasons? Make another tree. They won't fight over registries or info files elsewhere.
4. I would not be so quick to declare multiuser systems dead. We use *lots* of shared machines, some with multiple simultaneous users, some not. Even if the system does not have multiple attached users, each user has an account, and the user paradigm is valuable.
Indeed, DLLs made a lot more sense back when hard discs were smaller, but these days, when terabytes are routine, not having a statically linked system can't be justified. And it's a lot more difficult to do multiuser if one doesn't have a multiuser-capable os. Having Unix at hand means one gets tempted to use it as a multiuser system when one can, because it's what big systems do, and big systems are inherently cooler. Thus, everywhere Unix lives, multiuser lives too.
Actually, if you look a little deeper, "linking" as an idea is pretty questionable, static or dynamic. All you really need are executable files - binaries (not modern 1 MB ones, but real ones of tens or hundreds of bytes) that can call other binaries. You keep those binaries in RAM so they are available to be called, keep a fuller cache on disk for when a needed function isn't in the (smaller) RAM cache, and keep a remote "cache" - a repo - so that if a function is found neither in RAM nor on disk it is simply downloaded, without disturbing you or throwing an error. That gives you all the package management and linking you need, with more efficiency and less size.
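To make the idea above concrete, here is a toy sketch (my own reading of it - the cache directory and repo URL are made-up placeholders) of the lookup order described: RAM cache first, then a larger disk cache, then a silent download from a remote repo:

    import os
    import urllib.request

    RAM_CACHE = {}                              # name -> bytes of a tiny binary
    DISK_CACHE = "/var/cache/funcs"             # hypothetical local cache directory
    REMOTE_REPO = "https://example.org/funcs"   # placeholder repo URL

    def fetch_function(name):
        # 1. Fastest: the function is already loaded in RAM.
        if name in RAM_CACHE:
            return RAM_CACHE[name]
        # 2. Next: the larger on-disk cache.
        path = os.path.join(DISK_CACHE, name)
        if os.path.exists(path):
            with open(path, "rb") as f:
                blob = f.read()
        else:
            # 3. Last resort: fetch from the remote repo without bothering the user.
            with urllib.request.urlopen(f"{REMOTE_REPO}/{name}") as resp:
                blob = resp.read()
            os.makedirs(DISK_CACHE, exist_ok=True)
            with open(path, "wb") as f:
                f.write(blob)
        RAM_CACHE[name] = blob
        return blob

Whether that ends up simpler than a linker plus a package manager is debatable, but that is the mechanism being proposed.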
DLLs are needed because, if one library gets a security update, you'd have to recompile every piece of software that uses it if it were statically linked... And that costs years of CPU time for libc on a standard system. It does not work.
Unix was created around a pipe-and-filter architecture, and the shell was the means of using it. The Unix philosophy is about small programs that do one thing very well, piped together to create an application. It was a brilliant idea that gave an amazing amount of productivity before windowed environments. And Unix is meant to be a multi-user computing platform; that's why it was designed with file and memory security. In Unix, everything is a file, including ports, terminals, printers, etc., so the file security model can be used for everything on the system.
One big problem lies in identifying what is part of configuration. Even the most conscientious developer will often fail to identify all the elements which comprise their application's configuration. As a result, third-party packages which need to interact with that application may find it necessary to rummage around in the application's private namespace to find the configuration information they need. The alternative - petitioning the application developer to update their config info and holding up your release until they fix it - is rarely practical. In my experience this sort of "configuration shortfall" is responsible for 20-30% of the complexity I face in getting packages to work on my system and in creating packages of my own. All existing systems suffer from this problem. Your suggested scheme will as well, although its greater emphasis on configuration might mean that package developers are more rigorous in identifying the elements of their package's configuration.
In my experience the "solution" is to use containers and virtual environments. As you say, no matter what you do, configurations can change when you install some new thing or version, sometimes without you knowing. Of course this absolutely adds complexity, but I don't subscribe to this channel's assertion that more complexity is inherently bad. Complexity just has to be met with good abstraction. ls -al > foo.txt is a good abstraction.
This talk reminds me of Terry Davis' TempleOS. I think the terminal exclusively parses HolyC, as opposed to some scripting language, and it can improve the readability of output in small ways, e.g. printing images and 3D models to the terminal... lol. Edit: Should have guessed, people already said this a long time ago.
You've done videos on problems with the status quo of software development like OOP and the mess of modern operating systems. You've opened my mind to ideas of what could be done to create better software. Do you think you'll do a video on the state of the World Wide Web and the overextending of technologies? I'm very eager to hear a response.
I appreciate the sentiment and the guts to share your thoughts! Many ideas you are describing here remind me of Android (a lot), the Smalltalk VM, the .NET framework and Microsoft OLE. I am sure these concepts are present in other systems as well. I agree that existing OSes are hard to learn and understand, promoting buggy programs, but I suspect that complexity is not the enemy here; it's just bad design. A system can be complex and yet self-healing and easy to reason about. A well designed system is not simple, but one which layers complexity in a sane way and keeps those layers separate. If you look at the systems which seem "simple", or try to actually build your own, you will see that they all have complexity in them; it's just that with "simple to use" systems we don't need to know about it most of the time. In this regard, mobile OSes are a big step forward compared to Windows, Linux or macOS (which in many ways does a good job too) in how they make user interfaces "simple" and predictable, compared with desktop OSes where any problem requires "popping the hood" and diving into the system details.
I refer to "well designed complexity" as sophistication. Software can be sophisticated yet simple. With this definition, complexity is almost always bad.
What you've described is basically the Macintosh platform prior to the switch to a BSD core. Nowadays they actually solve a lot of the traditional unix-ish problems from a normal-user perspective through a strict set of conventions about where programs and user data live.
Sadly, they devastated the spatial UI that was the secret sauce that made the Macintosh's UI revolutionary and unsurpassed in terms of usability and productivity. I really miss the System Folder, where you knew which files did what and which application they belonged to just by looking at the icons. Nowadays, you have hundreds of directories littered with hundreds of files with cryptic names and no clue as to what they are. It makes it very hard to troubleshoot an issue, or to remove or install additional functionality. They also adopted Unix's directory structure, which is organized for the system rather than the user. Most of the elegant and beautiful abstractions which hid the ugly and distracting details of the OS are gone forever.
To understand 'ls -la > foo' you don't really need to know anything about EXEC or file descriptors or anything like that. Really, you just run "ls -la" and see that it spits out the result on your screen. Then, you run "ls -la > foo" and observe that it gets written to a file called 'foo'. That's all you have to know to understand that. The rest of the details are just... details (you can learn them as needed).
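For anyone curious, the machinery the video alludes to is genuinely small. A rough sketch (in Python on a POSIX system, using the same calls a shell would make: fork, point file descriptor 1 at the file, exec) of roughly what happens for `ls -la > foo`:

    import os

    pid = os.fork()                      # the shell clones itself
    if pid == 0:                         # child: will become the 'ls' process
        fd = os.open("foo", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
        os.dup2(fd, 1)                   # make fd 1 (stdout) refer to 'foo'
        os.close(fd)
        os.execvp("ls", ["ls", "-la"])   # replace the child image with ls
    else:
        os.waitpid(pid, 0)               # parent: wait for ls to finish

You can use the shell for years without ever needing to know this, which is rather the point.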
+MasterGhostKnight Sounds like you seriously need a new work environment if your boss is raping (or rapping??). If what I said is wrong, please say how. The point is that you don't need to know all the details to get 98% of the value of shell redirection. Are there edge cases? Sure as hell. But for those you can develop guidelines and things like static analysis/best practices, and so on. Or refactor (when your shell scripts morph into gnarly hacks - refactor them into high-level-language programs).
David Rutten Yes. When something goes wrong, the shell gives you an error report like "foo: permission denied" and so on. Try this in Python and you'll get a 30-line backtrace or something else instead.
Still waiting to hear why you need to know this just to use the command 'ls -l > foo'. That's the point. Going on about details is like saying you have to be a mechanic just to be able to use an automobile. Utter nonsense. If your Unix command line breaks, take it to the local guru. Not that hard.
Yes, there are times when it helps to know the complex machinery of processes being forked off by the shell, but it is very, very rare in my experience. I've been working with Unix/Linux since the late 70s (Unix) and I've only needed to think about this a few times, and even then it was at a surface level. I'm with you about the difficulty of installing software, but I'm not sure that the OS is the cause...?
The Windows registry was created to stop piracy. In the DOS world, all the information about a program was contained in its own directory. You could copy that directory to another DOS machine and it would work. But you can't do that in Windows with a registry, because the program checks for entries it put there when it was installed. If you just copy the program's directory to another Windows machine, it will not have the registry settings put there by the installer. The registry works to prevent copying for most people, but hackers can get around it. Nevertheless, it is a terrible idea and it should be retired.
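As a small illustration of the pattern described (the key path here is made up, and this is Windows-only Python just for the sketch): the installer writes a registry key, and the program refuses to run if a bare copied directory lands on a machine without it:

    import sys
    import winreg  # Windows-only standard-library module

    def installed_properly():
        # The installer would have created this (hypothetical) key;
        # a directory copied to another machine won't have it.
        try:
            key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Software\ExampleApp")
            winreg.CloseKey(key)
            return True
        except OSError:
            return False

    if not installed_properly():
        sys.exit("Please run the installer first.")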
Some things that seem to be missing from the description, and that seem pretty important: side-by-side versioning (because programs change and stop supporting certain kinds of messages over time); how to avoid GUIDs when figuring out which file I want to reference (e.g. module includes); meta-facility lookup (which C++ compiler do I run, which text editor, etc.); any standardized way to capture program runtime information for debugging (e.g. log files written to stdout - or does logging to a file work basically the same way, you just don't tend to chain things via text?); and how you implement something like a system-wide database program (which has shared state between users but really shouldn't be running as admin).
I know I'm 8 years late here, but I have been watching your videos about low-level Unix stuff, and man, I'm just so appreciative. You present SO MUCH content, so concisely and well explained, it is just incredible. You and Jan Schaumann (from the 'Advanced Programming in the UNIX Environment' series here on YT) are the ONLY people I have seen go so far into Unix with such clarity. It's just awesome. Also, I find it strangely comforting that even someone who is so clearly knowledgeable about Unix sometimes gets the file descriptors for STDIN, STDOUT, and STDERR mixed up too. I do that all the time; nice to know I'm not alone.
I just took a look at Jan Schaumann's Advanced Programming in The Unix Environment series - they appear to be very promising. Thank you for taking the time to point that out. I intend to look into it more closely when I find more time. :)
@@kugurerdem No problem! They are indeed FANTASTIC videos, I can't claim I've seen the whole series yet, but from what I have watched, they have been super informative. I'm happy to spread the word :)
Yes and Android is really a mess, with apps having to ship with all their dependencies, leading to a lot of duplication which proper package management avoids.
@@patham9 I disagree. Statically linked libraries really are the way forward for Linux too, in my opinion, because otherwise packages start holding each other back. Oh, can't install this version unless I install glibc version such-and-such, but then these other packages break because they require a different glibc, and so on. Storage is cheap; duplication is fine.
@@kazaamjt1901 Yes and no. There are libraries every program uses - core ones. They should not be duplicated; they are basically part of the OS. The others should be.
What many may not recognize is that, 7 years later, the commentary this video expresses has been proven more true today than ever. Consider systemd, which takes various processes and 'manages' them: one singular process to manage all other processes, in an effort to not only 'simplify' operation but also 'provide an extra layer of security', and which, in pursuit of those goals, has managed to add yet another layer of complexity. And to invoke or modify systemd, the interface is multi-line cruft which is passed along to still more XML config cruft. And if that weren't enough, there are now layers upon layers of virtualization software like 'containers' in order to 'simplify the complicated', implementing and 'inventing' more 'tools' upon 'tools', making the whole thing even more complicated than... and I am loath to admit this, but more complicated than Windows. The only thing that keeps Linux (not GNU) from being squashed right now is the fanatics that support it. I've been using GNU/Linux for many years and have always found the Linux kernel unnecessarily complicated, and naturally, for voicing my opinion, observations and concerns, I have been shunned - as one might imagine happens to anyone who refuses to succumb to the fanaticism.
This is fine if you have good control over your dependencies. But if you have good control over your dependencies, it's then also unnecessary to do static linking. In other words, statically compiling everything simply proves you don't have good management of your dependencies. Perhaps you don't care about good management of dependencies. In that case, you can save a lot of time by just developing directly on production.
@@JrIcify Should I also imply that you don't care about security either? Then I'm pretty sure I'm not going to run your software on my systems, thank you very much. :-)
Compiling statically just sounds like giving up. You waste more memory too, and it's not elegant to have multiple versions of the same library on the system that each have to be updated by recompiling all the software, whereas with shared objects you just need to update that one file.
The public port problem might be solved by having a service registry which maps dynamically allocated ports onto universally recognized service identifiers. That way, incoming connections only need to know the standard ID of a given service, not the IDs of all the different packages which might be used to provide that service. In some ways the existing port numbers already get used as a sort of service identifier; however, the port-number space is too small and the methods used to ensure uniqueness are too ad hoc to avoid significant conflicts. Switching to UUIDs would almost completely eliminate this problem.
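A rough sketch of that idea (the service UUID and names are invented for illustration): services bind to whatever free port the OS hands out and register it under a well-known UUID, and clients look the port up by UUID rather than by a hard-coded number:

    import socket
    import uuid

    # Universally recognized service identifier (made up for this example).
    PRINT_SERVICE = uuid.UUID("0d2cea2c-3a1b-4c0a-9e5f-1f2e3d4c5b6a")

    registry = {}  # service UUID -> dynamically allocated port

    def register(service_id):
        sock = socket.socket()
        sock.bind(("", 0))               # port 0: let the OS pick any free port
        sock.listen()
        registry[service_id] = sock.getsockname()[1]
        return sock

    def lookup(service_id):
        # Clients ask for the service by UUID, never by raw port number.
        return registry[service_id]

    server_sock = register(PRINT_SERVICE)
    print("print service is listening on port", lookup(PRINT_SERVICE))

In a real system the registry would be a daemon or kernel service rather than an in-process dict, but the mapping is the whole trick.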
Read "Unix haters handbook". All this mantra of "worse is better" and the underlying crud of the system design is just crazy. So in place of fixing it, they sell it as a feature and this was back in the 80s.
Hi Brian, thank you for publishing so many informative videos. While I do not always agree with your arguments, I always feel like I have learned something useful from your videos. As an avid Linux fan, it seems to me that the variants of Linux are the result of an army of volunteer programmers. What you want is an entirely new OS. Wouldn't it be more productive to have the discussion from a more neutral stance? Linux is not really a "plug-and-play" operating system meant for the casual user (despite efforts to package components a la Red Hat, etc.). My love of Linux is precisely the freedom to modify (through shells) my environment and to use shells as "glue" for engineering tasks involving many programs. I love the fact that no one (read: Microsoft) is going to swoop in and remove features which I use. If you are talking about a "new" system, all of your comments are valid. I just don't get the point of picking on Linux; it is not the 900 lb elephant in the room. Thanks again for all your efforts. Bob
This is fallacious; the fact that Linux is an OS made by an army of developers doesn't justify the clusterfuckness of Unix shells. This clusterfuckness doesn't come from Linux being an open source project but from trying to maintain everything about the Unix tradition as-is, which was the point of this video, and I am quite amazed you missed it. "Linux is not really a plug and play operating system meant for the casual user" - yes it is, or at least it should be. Linus Torvalds himself said that one of the biggest problems with Linux is a lack of standardization that makes it harder to set up for common users. An operating system should be both easy to use and versatile and malleable in more experienced hands; those two things are not mutually exclusive at all, and thinking they are is a direct attack on Linux's progress as an operating system. "My love of Linux is precisely the freedom to modify" - again, how does this have anything to do with the fact that the Unix shell conventions are outdated and overly complicated? It is completely unrelated; you can have an operating system that allows for that without the Unix nonsense. "I just don't get the point of picking on Linux, it's not the 900 lb elephant in the room" - exactly: because it is not the 900 lb elephant in the room, it is the one we should care most about improving. Just because you are a mindless fanboy who thinks Linux is utterly perfect and there is no way of improving it doesn't mean the rest of us Linux users are that dumb.
@@marcossidoruk8033 The shell is never nice. Seriously. It really depends on what your preferences are. Bash is not the only shell, either - just the most popular.
I will also lean towards saying that the security view in this proposal is outdated. It seems to stem from a sense that applications are installed because the user wants them, and that they do what they are supposed to do except in case of errors - which was probably accurate in the 80s, or even in a modern fully controlled Linux environment, but certainly doesn't reflect what happens in a real end-user environment. In the days of smartphones, and even PCs starting to feel like smartphones, we have hundreds of applications installed without user consent or knowledge, doing untold things with accessible data, with the primary objective of invading user privacy, using the tiniest clues to rebuild a profile and sell it to the highest bidder. Security and user control should be way, way more central, with defaults giving far more power to users as opposed to apps. The Android permission model is a good step in this direction, and I would consider it a minimum nowadays. But only a minimum: there is still too much data that can be siphoned off by rogue applications on Android.
Brian, you are wrong: we cannot abandon the entire concept of a shell and replace it with a general-purpose language. You need a way to quickly interact with a running system, and some form of shell is an ideal answer. What we actually need is a modern, widely available OS built on a microkernel with clean low-level IPC mechanisms... so basically Fuchsia. Then we can iterate on userland concepts without trashing performance or ignoring driver and platform complexity.
To your last: Yes, I disagree with you, and yes, but I've already been doing so for a long time before ever watching this. You seriously praised systemd unsarcastically? Really? And then suggested that we force something that is very much userland into kernel, not just backtracking all the work done recently to take cruft OUT of the kernel that belonged in userland, but going even further than Windows ever did into monolithicness? No. We should NOT put package management into the kernel. At all. Ever. Doing so means a reboot with EVERY software change. Microsoft is bad enough on that score, this would be worse. I won't disagree with you that there's a lot that needs to change. I also won't disagree with you that most package managers are bullshit. Have you ever tried Gentoo? I think it manages to get right most of what you're complaining about without introducing quite as many of the drawbacks you don't seem to have recognized to your prescribed approach.
There are OSs with the ability to edit the kernel without a reboot. Brian is right. You are stuck with the Unix state of mind and you can’t see outside of it. Learn more about other operating systems to see how the world was before and after Unix came to the world.
@@saymehname - no, you're missing the point. What's being suggested here is a complete re-imagining of a computing interface and OS. There was no need to actually mention Unix or Windows here at all, except that I guess some starting place was needed to push off from. The hubris on display is staggering, but that's par for the course, and there's little respect shown for the pioneers who were working within a very different environment. You also missed the point about systemd - a Unix state of mind is perfect for a Unix system, which is why systemd is the wrong approach. This new suggested thing might be an interesting place to be, but I wouldn't take a Unix mindset to it; it's not Unix. It's so different that you'd be forced to approach it with an open mind, or fail. Will is completely disingenuous, if not outright deceptive, about 'ls -al > foo' though. The point of that syntax, like any command interpreter, is to hide the complexity, to make it readable and repeatable for ordinary users. There will be huge complexity in getting any OS to do something like that. Even in this new call-response model with windows for responses and links and all that - how much complexity is under the hood? What does the kernel look like, and how is it going to be any less complex than the way Unix does/did things? Hmmm?
> *ls -la > foo* While others see bloated piles of complexity, I treat it like fine art or poetry. With just 12 simple characters, we can instruct our silicon machines to do multiple operations. Truly amazing!
@Barry Manilowa Most shell users are naive users only for a brief period of time, and after that the power of the shell becomes more valuable to them. For example, let's imagine the command shell were replaced by Python. How exactly is that easier to understand for naive users who don't know Python programming at all? Maybe there would be an "ls" function, and write-to-file functions, but I think the problems would get more complex, not less complex, for a naive user who doesn't know anything about programming, functions, function calls, variables, function return values, execution order, etc. So the problem of execution order, for example, still remains; you just move it to a different place! I agree that Python is a nicer language than shell languages for scripting and doing anything more complex. But the current way of working allows for that: it lets you use the tool that best suits your use case. I think the concept of pipes and processes is rather simple and efficient compared to how you would do it in a programming language, where you have to think about variables and their types, take a certain number of characters at a time, and make special cases for this and that, just to handle a character stream the way a simple shell pipe does.
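To make that concrete: one reasonable Python equivalent of the twelve characters of `ls -la > foo` (not the only way to write it) already drags in imports, lists, keyword arguments and context managers:

    import subprocess

    # Run ls and send its output to the file 'foo', like `ls -la > foo`.
    with open("foo", "w") as out:
        subprocess.run(["ls", "-la"], stdout=out, check=True)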
This is an interesting topic. I've wondered why a modern, single user, hardware independent operating system hasn't been developed. Like something you can use on your phone, then dock into a workstation. You'd need a standard interface like a kernel running on each device and just switch the context between the two when docking/un-docking. Web applications try to solve this issue of a unified UX but it's turned all of our devices into web portals. It just seems like a kludge to me.
The only difference between a phone and a computer is the screen size and user input, so I don't get why everything is so different, even for Apple. Websites are just executables on the internet, so I don't get why they're also so completely different. In fact, they're so different that a website isn't even an executable - it's markup that can execute a script. It makes no sense whatsoever.
I'm glad I came across your video. I've been thinking along the similar lines for some time now. I used to work on operating systems in the days before Unix, when they were a lot simpler. It seems to me Linux/Unix became complex to incorporate features in such a way as to minimise memory and processor requirements for a concurrent multi-user environment - constraints which just don't apply any longer (I'm the only user of my PC and it has 64 GiB of memory - as opposed to 16 KiB and one user or 2 MiB and 20 users). Now that I'm retired I might get round to looking at it.
Constraints may not apply to you anymore, but that doesn't mean the same is true for everyone. Our home server is a shared device; I wouldn't want my kids to be able to go through my files on there. And let's not forget the developing world: plenty of small businesses there use a multi-head setup in their offices so multiple people can work on one computer simultaneously.
I agree with you that naming could use some updates, but generally there are two major criticisms I have of this reasoning: First is that the historical 'cruftiness' is not really a symptom of a disease; it's an effect of evolution. Designing a system to replace it means re-learning all of the various reasons why the system is the way it is, which violates the generally good assumption that you're standing on the shoulders of giants. The second, which to me is worse, is that you're mixing abstractions. Your first example - "ls -la > foo" - goes on a long twisty path talking about implementation details. Those implementation details are generally not relevant when debugging a shell script, unless you're working on severe edge cases. But - big but - edge cases like that crop up in every programmable environment whatsoever, it's just the nature of the beast that computational reality is a bit messy, and nobody has yet devised a system where that isn't true. A final consideration is that there is usually a pretty good reason for the oldest design choices in Unix environments. One of them stems from having a *very* simple process model, which is: Every process (except init) has a parent process; everything that isn't a process is a file; and text formats should be standard interchange where possible. Put those together in an environment where you don't have modern luxuries, and you wind up with terse, file and text-oriented commands a la Unix. So far, nobody that I am aware of has managed to produce a working shell model that doesn't follow this pattern and also isn't a huge mess of extra typing, or so particular to the system that it becomes a blob anyway.
Consider what you really need to know for that shell example: You need to know ls is a command, you need to know "-la" is an option, and you need to know > is the redirect to file operator. If you know those, you know everything you need to get started -- and those are really just vocabulary items. If you have some edge case you ran into, you'll get an error back from your shell describing what the error was, or, just like in any programming language, you'll go down into the rabbit hole as far as you need to go to debug, and then you won't make the same mistake again.
TL;DR: a better thing hasn't been done; "giants" made the bad thing, so there must be a good reason. Both are weak arguments. First, they are unprovable. Second, they don't lead to any actionable conclusion except to avoid solving the problems.
Idk how I feel about this, ngl. I'm just thinking about a multi-user environment, like a school, for example, and how this setup wouldn't be the best for that, because the users can see each other's data.
Your reasoning for why 'ls -la > foo' represents a huge amount of complexity is the equivalent of saying that your car shifting gears is a hugely complex process due to the inner workings of the transmission and the nature of how it interacts with the rest of the vehicle. Yes, it's an accurate statement, but does that complexity really affect your experience as a driver, or inhibit your ability to make use of such a mechanism?
I should point out that I do agree with you that the shell seems like a 'giant kludge'. It is not a clean interface to use in many cases, but it's still an incredibly powerful tool that is a lot more messy on the surface than it is outdated.
You're a bit wrong about Plan 9. The idea was to make a consistent, cruft-free system by applying the "everything is a file" idea for real, so every API is a set of special files controlled by a server. This, together with mounting remote folders locally, gives network transparency. Now, the developers did of course experiment with this, but only as research on top of something more fundamental.
+lonjil Yeah, I shouldn't have said it was the main objective. I think plan 9 failed for being both too ambitious and too unambitious. The more consistent use of special files was probably an improvement, but not different enough to overcome the incumbency of existing Unixes. Meanwhile, the network transparency stuff was half-baked and of nebulous benefit.
+Brian Will Hey Brian, I was doing Ruby on Rails last year and it was fun. I started first year at university doing a software development degree and the language is Java. It's so fucking retarded and the enjoyment is not stimulating me as much. Any advice?
The idea of using an existing kernel and writing a new userspace for it has been tried. There's Android (Linux kernel), OS X (BSD Unix kernel), Chromebooks (Linux kernel), and probably others - maybe some game consoles or other devices that have some amount of userspace apps. Every time there's a new userspace, that's one more platform that software has to be ported to, or else it's not available there, or it can only be run in emulation. Whenever one of these new-userspace devices appears, the first thing people do with it, if they can, is jailbreak it and get a GNU userspace and command line set up and running, so that they can install some actually good software of their own choice instead of waiting for ports. So the proposal is just running around in a historical circle, a well-beaten path. What advantage would the suggested system provide that would make it worth the hassle of porting to another platform - one with completely different package management, files, permissions, process management, and so on, so that the software has to be mostly rewritten? I don't see one, honestly (other than the promise of using hashes in package management, to ensure that files are what they say they are, which is something good languages on good systems can already do). The area of security, where this proposal falls short badly, contrasts with the opposite extreme, GNU Hurd, which gives every program a sandboxed, limited view of resources (with users and programs not being allowed to grant permission beyond their own limited view).
You're confusing kernel functionality with userspace. GNU Hurd is a kernel, and it should be concerned with security; the userspace shouldn't be - it should already be sandboxed by the kernel.
Huh, I fail to see how making a new userspace the preferred environment for new software would eliminate the ability to run legacy software. If we can do better, it's at least worthwhile to explore what it would take. If it makes it easier to write better software, that's a huge benefit that should not be ignored.
Text is the universal interface. This is a key part of the Unix philosophy. If your program prints a line of text, then your program can be pieced together with other programs in a Unix pipeline. Expecting programs to output a special format, like HTML, goes against the Unix philosophy. You lose universality, which loses interoperability, which loses composability.

Your complaints against shell are irrelevant. If you do not like bash, then use fish or write your own shell. Python works too, as just about any language can pipe and redirect. We use shell because it is designed for interactive use; it focuses on writability instead of readability. Your proposed solution for packages not only makes development harder, it destroys the abstraction for regular users, as they think in terms of folders and files. I also disagree with the single-user mode, as others have pointed out. Computers are very often shared at libraries, schools, and workplaces.
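And to be fair to both sides of this argument: that composability isn't tied to bash. Here is the same kind of pipeline driven from Python, as a sketch only - ls and sort are just example programs, and any tools that read and write lines compose the same way:

import subprocess

# Compose two ordinary programs over a text stream, the same way a shell pipeline does.
ls = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)
sort = subprocess.Popen(["sort", "-k5", "-n"],       # sort by the size column, numerically
                        stdin=ls.stdout, stdout=subprocess.PIPE, text=True)
ls.stdout.close()                                    # let ls see a broken pipe if sort exits early
output, _ = sort.communicate()
print(output)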
I love the fact that you are at least thinking about these things and looking forward to what an OS can be like. Unix, Linux, Windows, mac... these are not the pinnacle of computing. We can do better.
The notion of the shell using an HTML-like hypertext reminds me of TempleOS, which implements something like that with its DolDoc system. TempleOS also uses a C-like language for shell commands (and practically everything else), much as you described. If you aren't familiar with it, it's a toy OS built by a lunatic, but if you can look past that, it's actually pretty brilliant in a lot of ways.
> it's a toy OS built by a lunatic TempleOS is a temple of God! How dare you insult his Holiness Terry A Davis. You must be one of those glow in the dark dudes
Emacs has full-featured text editing of code with commands to execute expressions, blocks or all of the code in a buffer, with output sent to a different part of the UI, not interleaved with the code on the screen.
It seems like you have this neat intersection of "the simpler the system, the more it conforms to the vision" and "the simpler the system, the easier it should be to produce". If it's such a great idea, why is there not a GitHub project? To make it simpler, start by targeting one piece of uniform hardware. Make it for the Raspberry Pi 3 to start with, since it's probably the most accessible and widely owned piece of uniform hardware on the planet. Further, the main project could just maintain it on the most widely adopted single-board computer and rely on fork projects for any other hardware. That way you could focus more on the software specifically.
@@jacobschmidt Lol, no. Ofc not. It's far easier to tear existing ideas down, and blabber on about potential replacements, than it is to actually build something to compete with existing ideas that have withstood a good few decades of people hammering on them.
What you propose is not entirely clear, because the complexity would require many books to explain the fine details, which are fundamental to such an endeavor (1% idea, 99% execution). But I can see what you mean. People may agree on some problems, but that doesn't mean anybody would agree on the solutions you propose. And there is a very clear vision and philosophical standpoint behind many choices in Unix. If you can't agree with 90% of what it is, just use another OS. You can't change such profound things as the very *defining* foundations of something (e.g. the filesystem theory of Unix). Doing so would mean that you have a completely different thing, and not the "same but better"! You may as well start from scratch and at that point not look at Unix at all. That would make much more sense.
The shell is NOT a fundamental element of Unix. It is trivial to create a Linux distribution where the default user shell is /usr/bin/python, just as easy as switching between bash, ash, csh, sh, etc. You can also set it as a personal preference. But I think that using the shell is more beginner-friendly than using the Python interpreter to perform the same tasks. That a lot of system software in Linux has been written in the shell language just shows its strength.
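For comparison, this is roughly what the same everyday tasks look like inside a Python interpreter used as a shell - a sketch only, and the commented-out filenames are made up:

import os, shutil, subprocess

os.listdir(".")                                      # roughly: ls
[e.name for e in os.scandir(".") if e.is_dir()]      # ls, but directories only
subprocess.run(["date"])                             # run any external program
# shutil.copy("notes.txt", "notes.bak")              # cp notes.txt notes.bak (hypothetical filenames)

Whether that is friendlier or clumsier than "ls" and "cp" is exactly the debate in this thread.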
Some good ideas, but also some that at first sound terrifying. Like "kernel-level package management": the kernel is about the hardware-software interface; it is a hardware abstraction, not a userspace abstraction. Shell responses in HTML? I want to manage my system, not read magazines. JSON with URL syntax highlighting and linking is good enough for humans, storage, processing and pipelining. It would be good to have an OS-wide standard for package management.
Even with sync services such as Dropbox, OneDrive, and the rest -- because we want to be able to work offline and open/save files quickly -- pretty much anything you're working on is going to be cached on the workstation, where it needs to be protected from other users. The idea that people aren't sharing computers with personal data that needs to be protected from other non-admin users is preposterous. Additionally, the idea that out of any two users on a shared system, one of them is definitely an administrator is "not even wrong."
Great presentation, just two flaws: a) who is going to do it? b) what are the incremental steps from, let's say, a Linux to this system? (You can't expect everything/all applications to be done from scratch.)
What you propose is still quite complex, tbh. I also had a lot of these ideas, but the more I learn about programming language theory, the more I think we just need to decouple most of the pieces that make up our software. This way we could iteratively get rid of any unnecessary complexity and replace it with better abstractions bit by bit. That is, we're not gonna one day come up with the perfect replacement for everything that exists today and design a replacement in one go.

Here's my idea of an 'ideal OS': First of all, it's supposed to be a so-called safe-language operating system, i.e. one that doesn't need CPU security features to ensure privilege separation. This is because there shouldn't really be a hard distinction between the kernel and user space - kinda like what microkernels have tried.

Then, how should privileges be managed? Well, the whole OS should consist of dead simple abstraction layers with clear interfaces fully described in code. There'd be no concept of users or permissions (neither Unix-like nor Android-like) at this point. The lowest layers know about the hardware and can touch it directly. Everything above works with the safe abstractions the lower layers provide.

First, you need a (correct-enough) model of the hardware, that is, data structures that behave like or describe the device you want the OS to run on. After that, you can write thin abstraction layers that take these hardware behavior descriptions and present them as more generic models. E.g. if you know exactly how a bunch of different PCI-e network cards work, you can have a piece of code that provides the Ethernet protocol device out of this. Or if you have a hard drive, you can model that as a huge array of bits or whatever, together with its runtime behavior (how much time reads or writes take, what happens when you lose power at some point, etc.). Somewhere alongside that you could have a layer that can turn the abstract CPU device into threads. Or turn RAM into allocatable memory.

On top of that, you can put a layer that implements a textual shell. Or alternatively a layer that runs a GUI - consumes the keyboard, mouse, display, speakers, … and uses that to drive a desktop environment. Probably it should then also provide the concept of processes - otherwise any applications would need to be built-in (like on old feature phones). Still, all of that should sit in the compile-time APIs and not necessarily in any runtime ABIs. The compiler should be allowed to optimize any of that away, and the OS developers should easily be able to swap out any part of it.

However, at some point you will want to run 'the userspace'. Doing so would essentially boil down to making that specific layer provide a very concrete interface to whatever will run on top of it (an ABI, I guess) and validating whatever is to be run (as long as that's our security model). This isn't really that different from all the layers before, except it now requires a few more technicalities to be modeled.

Looking at what I wrote, it doesn't sound convincing at all. I feel like this comment severely misrepresents the idea. But still, maybe someone will find it interesting.
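For what it's worth, here is a tiny Python sketch of the layering idea in that comment - one hardware model, one thin abstraction over it, and one layer built purely on the abstraction below. Python just stands in for whatever safe language such an OS would actually use, and every name here is invented for illustration:

from abc import ABC, abstractmethod

class BlockDevice(ABC):
    """Lowest layer: a model of 'a thing that stores fixed-size blocks'."""
    block_size = 512

    @abstractmethod
    def read(self, index: int) -> bytes: ...
    @abstractmethod
    def write(self, index: int, data: bytes) -> None: ...

class RamDisk(BlockDevice):
    """One concrete hardware model (memory standing in for a real driver)."""
    def __init__(self, blocks: int):
        self._blocks = [bytes(self.block_size) for _ in range(blocks)]
    def read(self, index):
        return self._blocks[index]
    def write(self, index, data):
        self._blocks[index] = data.ljust(self.block_size, b"\0")[: self.block_size]

class FlatStore:
    """Next layer up: a 'huge array of slots' view built only from the abstraction below it."""
    def __init__(self, dev: BlockDevice):
        self.dev = dev
    def put(self, slot: int, payload: bytes):
        self.dev.write(slot, payload)
    def get(self, slot: int) -> bytes:
        return self.dev.read(slot).rstrip(b"\0")

store = FlatStore(RamDisk(blocks=16))
store.put(0, b"hello, layered world")
print(store.get(0))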
On any given computer, there is really only one thing of any value, and that's the users' data. The rest (programs, OS) can just be reinstalled, but the data must be protected. This is the complete opposite from UNIX, which protects the operating system first and foremost, and leaves your data at the mercy of a single mistyped rm -rf / or malicious program.
@@shallex5744 Nothing! By default, applications only need access to their own files. Access to files belonging to other applications should explicitly be allowed by the user, using a tool that has only that purpose. Sandbox everything. Even the shell doesn't need unrestricted access across the filesystem. The shell runs as a user; why not run it as an application instead? And if you think that's odd: you don't run it as super user all the time either, do you?
+Tim Hayward Nice trolling :) Seriously - where would Forth come in? As the new shell language? The last Forth I used could only write fixed-size 255-byte blocks to disk and did not even have a notion of a file. The last I heard of Forth rising from the realm of the dead was in the context of boot loaders.
It is alive, though not well. It is in other places; it is in PostScript. You can get a new 144-core super processor programmed only in Forth, but that isn't what my comment was about. In Forth: the shell is the IDE is the compiler is the loader. It is painfully simple. It is infinitely extensible and supports overloading, so my overabstraction will never interfere with your overabstraction. It is closely tied to the hardware and sometimes vice versa. What if we just made a most Turing-complete architecture? What does the perfect implementation look like? Computers don't really do that much. What do they need to be told?
@@TimHayward Sounds a bit like BASIC on home computers, which was equally complete and self-contained (albeit far too often very slow). They also coexisted in that context (say, the Jupiter Ace versus the Sinclair ZX-81). But frankly, I never understood why Forth had to be so weird and untidy compared to most Algol-based languages (except the ugly C family). One of the few popular languages I never learned in the 80s/90s.
I agree with many of the problems that you described:
* Shell syntax is horrible, especially for more complex stuff than just starting programs. And yeah, having to learn how to properly escape all the arguments etc. is a pain in the ass.
* Shell output could be something better than just plain text with some colors. One big improvement that could be implemented right now would be to use Markdown for the terminal output. This would be backwards compatible with current terminal emulators but would also provide nice formatting for terminal emulators that support it.
* More and more programming languages have their own package managers. I think this isn't a fundamental flaw of how modern systems work, though, but only of current package managers.
* And some more that I don't remember.
Now some questions about the architecture of your model:
* If all processes can see all user data, how does the security model work? Having every process only see its own data, I could understand; then you could use IPC mechanisms to pass user data to the processes. But letting a process have access to all the user's data seems unreasonable.
* How are those globally unique IDs assigned and managed? This sounds like it will become either unmanageable or fail in another way, like some entity having control over every ID, or some failing model like the current Certificate Authority system.
* How in the world can you create files with a global UUID? And if it isn't global, what happens if you attach a USB flash drive with a file on it that has the same UUID?
* Having permission groups might not be necessary in most home computing scenarios, but when you have hundreds of people working on a project you can't just give everyone access to everything. But this might then be a server-side problem that lives outside of the client-side computers, so it might actually work because it doesn't need to be part of the low-level design of the system.
* How do you manage hardware access without a permission system? You can't just let every program access every piece of hardware; this would be disastrous from a security perspective.
Btw: Why even use a global registry for configuration if you can just let every application have its own configuration and expose it over the IPC mechanism if the configuration options are required from outside the application?
Also I think that most of the problems you describe can be properly fixed on top of the current system, or rather by gradually changing it without having to throw out everything and start from scratch (just like refactoring in programming).
As a side note: your model has, in parts, interesting similarities to what is already done in modern web browsers:
* The "shell" uses a modern dynamic language (JavaScript).
* There are no files; an application can store its data in local storage (which also serves as a registry for configuration). The UUIDs would be the reference that programs hold to the data. Although JSON has a similar tree structure to file paths.
* There is no process hierarchy.
* I could probably find more.
And it even works across multiple platforms.
+shevegen My whole point was that most of the problems described in this video can probably be properly solved on top of current unix systems! And if some problems keep existing, the systems can be adapted without completely throwing them away. EDIT: Well, not my whole point. But I still think this is possible and it would be much more reasonable to actually implement (in terms of effort required).
+Max Bruckner (FSMaxB) Thanks for the feedback! I think I can address most of your points:

> If all processes can see all user data, how does the security model work? Having every process only see its own data, I could understand; then you could use IPC mechanisms to pass user data to the processes. But letting a process have access to all the user's data seems unreasonable.

I'm not sure the system should protect user data from installed programs. This isn't something done in Linux anyway, right? My home directory is visible to every program; e.g. any text editor can open any text file in home. Android attempts to treat certain kinds of data, like contacts, as requiring explicit privileges, but I'm skeptical that users should be protected from their own installed apps. Sure, we want to mitigate the damage a malicious user program might do, so we don't give every program superuser privileges, but specially classifying the user's data adds complexity to the system and burdens the user. If we really did want to go that route, though, the way to do it would be for each program to store user data in its own filespace. A program might make this data available through IPC requests, and it could whitelist other programs upon approval of the user. Maybe then we want some kind of system-wide way of managing these privileges... but that might just end up more complicated. This solution also complicates backing up and transferring user data. It also arguably traps user data in silos.

> How are those globally unique IDs assigned and managed? This sounds like it will become either unmanageable or fail in another way, like some entity having control over every ID, or some failing model like the current Certificate Authority system.
> How in the world can you create files with a global UUID? And if it isn't global, what happens if you attach a USB flash drive with a file on it that has the same UUID?

Obviously UUIDs are not reliable because they can be trivially spoofed. The idea is that, anytime veracity matters, you rely on the version id (the hash), not the UUID. UUIDs generally include a high-resolution timestamp. As long as the generating code is not broken or malicious, collisions between any two UUIDs are highly unlikely. Still, we would want public catalogs of packages to resolve such conflicts and--more importantly--to verify hashes. Unlike with DNS, we don't need a single centralized catalog. I could use a catalog that I trust, and you could use a totally different one that you trust. This is basically what we do already with every Linux distribution package repo. (Using numeric ids instead of names also means we can sidestep politics over who gets what desirable name.)

As for file UUID collisions, either from error or malice, the system should cope by just letting them live side by side in the same filespace. UUIDs resolving to multiple files is something human users can cope with. Programs, on the other hand: 1) have complete control over their own filespace, and looking up files in your own filespace is a different syscall, such that external collisions won't interfere; 2) whenever possible, programs should specify files by the version id (the hash) instead of just the UUID. Annoyingly, files with the same UUID in separate filespaces may have different metadata attached. Ideally, every copy of a file across time and space would have the same label everywhere, but this is already a problem we deal with. The only new wrinkle here is that two unrelated files might erroneously share the same UUID. I think this would be an annoyance but not a real security/config problem. (BTW, there's a whole angle I glossed over about how files would, by default, be treated as if they are immutable, e.g. opening a file to write produces a new file rather than overwriting the existing one.)

> Having permission groups might not be necessary in most home computing scenarios, but when you have hundreds of people working on a project you can't just give everyone access to everything. But this might then be a server-side problem that lives outside of the client-side computers, so it might actually work because it doesn't need to be part of the low-level design of the system.

Yes, as I think I mentioned, any sort of many-user concerns belong at the application level. We have servers running webapps or services in the back room, and users--outside increasingly rare cases--all have their own machines (multiple per person, in fact). I think administering many users at the OS level is just outdated.

> How do you manage hardware access without a permission system? You can't just let every program access every piece of hardware; this would be disastrous from a security perspective.

Admin/non-admin might be too simple. Perhaps the system API is split into separate dependencies, such that a package effectively states explicitly which hardware it will use. A package requiring certain system APIs would require special approval upon installation, e.g. this program may use the webcam. (On the other hand, users of Android seem to have been trained to just blindly click through these permission screens. Perhaps only admin users should be able to approve packages requesting certain kinds of special access.)

> Btw: Why even use a global registry for configuration if you can just let every application have its own configuration and expose it over the IPC mechanism if the configuration options are required from outside the application?

I've considered something like that. But then there's the question of where to store each user's general settings and system-wide settings. We could store them in files of user space, but as previously mentioned there are zero protections on those files. Again, I'm not really clear on this area.

> Your model has, in parts, interesting similarities to what is already done in modern web browsers.

Sure, there are parallels with browsers, but of course there's a lot we just can't do in browsers, e.g. run a server or natively compiled games.
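To make the UUID-versus-version-id distinction above concrete, here is a minimal Python sketch. uuid1 and SHA-256 are stand-ins I chose for illustration, not anything the proposal actually specifies:

import hashlib
import uuid

def new_file_record(data: bytes):
    """Two identifiers, as discussed above:
    - a UUID as the file's stable identity (uuid1 embeds a timestamp and node id,
      so accidental collisions are very unlikely, but it can be spoofed);
    - a content hash as the version id, which is what you check when veracity matters."""
    file_id = uuid.uuid1()
    version_id = hashlib.sha256(data).hexdigest()
    return {"uuid": str(file_id), "version": version_id}

record = new_file_record(b"hello world\n")
print(record["uuid"], record["version"])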
+Brian Will Now I have a much clearer picture of your proposed model. Just a few additional comments:

> I'm not sure the system should protect user data from installed programs. This isn't something done in Linux anyway, right?

Yes, Linux doesn't do that, and I think it is wrong. I don't want to be forced to trust every single binary that I ever run not to do bad stuff with my data (on purpose or by accident; see the Steam client deleting home directories because of a bug in a shell script, for example). Also, if programs are separated from user data, companies that develop them don't even get tempted to snoop around in it.

> If we really did want to go that route, though, the way to do it would be for each program to store user data in its own filespace. A program might make this data available through IPC requests, and it could whitelist other programs upon approval of the user.

Exactly, that's what is used (or at least planned to be used) by xdg-app. The user grants access to a certain file by selecting it via the file explorer. This might be expanded by passing files via the command line. Shared data can be whitelisted by an application's dependency manifest.

> (BTW, there's a whole angle I glossed over about how files would, by default, be treated as if they are immutable, e.g. opening a file to write produces a new file rather than overwriting the existing one.)

This sounds like just a waste of disk space and confusion for users, because they would have to differentiate between different versions of "the same" file. But it could be handled like regular copy-on-write with versioning, showing the user only the newest version and providing a backlog. Versions that are older than a certain amount could then automatically be marked obsolete so they can be overwritten when space is needed.

> Admin/non-admin might be too simple. Perhaps the system API is split into separate dependencies, such that a package effectively states explicitly which hardware it will use. A package requiring certain system APIs would require special approval upon installation, e.g. this program may use the webcam. (On the other hand, users of Android seem to have been trained to just blindly click through these permission screens. Perhaps only admin users should be able to approve packages requesting certain kinds of special access.)

Yeah, this could be done just like Android, with the slight modification that access policies could be changed separately from the actual applications by the repository maintainers. This model gives more advanced users the possibility to sanitize the kind of special access an application gets, even if the developer wants all of it.

>> Btw: Why even use a global registry for configuration if you can just let every application have its own configuration and expose it over the IPC mechanism if the configuration options are required from outside the application?
> I've considered something like that. But then there's the question of where to store each user's general settings and system-wide settings. We could store them in files of user space, but as previously mentioned there are zero protections on those files. Again, I'm not really clear on this area.

Just store the global configuration inside the filespace of a configuration application that allows moderated access via the IPC mechanisms. Every non-global configuration just lives in the filespace of its application. This is also how the registry you described could be implemented without having to incorporate it in the base system.
+Max Bruckner (FSMaxB) All versions of a file in a filespace share the same metadata. Generally in file listings, you only see the latest version with a column indicating the number of old versions. Users can expand a file in the list to browse and select its particular versions. Overhead from keeping a bunch of old file versions around could be mitigated by applications simply deleting the previous version as their normal 'save' operation. Better yet, applications should make the choice very clear, e.g. 'save new version and keep old' vs. 'save new version and delete old' (not sure if there's a pithier way of expressing this distinction). Applications producing large files should maybe warn users about the overhead of keeping old versions. The pseudo-immutability thing is mainly to accommodate the version hash thing: as soon as you modify a file, its version hash becomes invalid, and until the file is closed, it doesn't make sense to recompute a new hash. So it seems logical to make copy-on-write the norm and think of modifying a file as actually producing a new separate version. There do seem to be cases, though, where normal mutability might be preferable, such as with log files. Perhaps just leave it up to each program on a case-by-case basis. (Of course, while a file is being modified, it can't have a hash id, but I think it works out okay if the open file is known just by its file descriptor until it is closed. This works because programs share files through IPC by descriptors, never by names.) I like the whitelisting idea, and that could apply to files: when a program attempts access of a user file for the first time, the user gets a UAC-style prompt to authorize access. Whether the registry should be a special kernel mechanism or a standard program is something I've gone back and forth on, but I suppose as currently described there's no reason for it not to be just a program. I have a hazy notion that other system features could be exposed as service programs in the same manner, but I'm not sure how far the idea could/should be taken. Kernel modules presented as service programs that hand out device file handles? Could ioctl be replaced? Anyway, thanks again for the feedback!
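A toy sketch of the copy-on-write "save as a new version" idea described above. The class and method names are invented, and a real system would obviously keep versions on disk rather than in a list:

import hashlib

class VersionedFile:
    """Toy copy-on-write file: 'saving' appends a new immutable version keyed by its hash."""
    def __init__(self, file_uuid):
        self.uuid = file_uuid
        self.versions = []            # list of (version_hash, bytes), oldest first

    def save(self, data: bytes, keep_old: bool = True):
        version_hash = hashlib.sha256(data).hexdigest()
        if not keep_old and self.versions:
            self.versions.pop()       # 'save new version and delete old'
        self.versions.append((version_hash, data))
        return version_hash

    def latest(self):
        return self.versions[-1] if self.versions else None

f = VersionedFile("0b1d6a8e-hypothetical-uuid")
v1 = f.save(b"draft one")
v2 = f.save(b"draft two", keep_old=True)
print(len(f.versions), f.latest()[0] == v2)   # 2 True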
It sounds like a library OS or unikernel, which is becoming more popular lately, would be a great first step. You get the application virtualization you need plus an API. There may already exist research similar to your proposal.
Sure there are people using the same machine. For example you might run a shell for your employees to log in to or a university might have a student shell.
In many respects I agree with a lot of what you have to say here. I like the Unix environment but its current state does represent decades of cruft and incremental design. A lot of the underlying assumptions made sense 30-40 years ago but are a poor fit for the present. Command brevity is one of these: Command brevity was not just valuable for saving keystrokes, it was also worthwhile to present information in a more compact form back when the standard was an 80x24 video terminal, or even a teletype. And storage and RAM space was limited so it was worthwhile to make scripts inherently more compact. None of those considerations are really important any more. If there's any value to that compact style at all, I'd say that it could possibly convey the information in the program code more effectively, since there's less visual information to process. But that depends on the programmers involved becoming really fluent in this compact style - and that is one of the reasons why most programmers in my experience (myself included, really) feel that a more verbose style is better. Other aspects, I pretty strongly disagree: For instance, you talk about the inherent complexity behind running a simple command with redirection: That complexity is always there. If you're dragging a file to the waste bin, you don't need to think about the fact that the file is part of an index of files in the directory, or that the icon that represents it is some image resource somewhere either attached to the file or stored in a listing correlated by some notion of the file's type, or that when you start dragging it the OS produces a data structure that's used for object exchange between processes and handing that data structure over to the object or process represented by the icon or window you're dragging the file to... And you don't need to know the arcane details of how the system associates these metaphorical user actions with actual pieces of program code. As a user you don't care. You don't need to care, you've got a metaphor that models the whole thing, and that is the useful part. All this about how the details are always lurking, and eventually you have to deal with those details because it's necessary to understand what's really going on... That's all still true whatever the interaction metaphor is. There's room for improvement for sure - the "metaphors" in the Unix shell are pretty firmly tied to some of the underlying details of the system (especially TTYs, file descriptors, and the process hierarchy) but the basic "problem" you describe there is just the inherent nature of computing: Complexity exists. You may have to deal with the complexity at some point. What matters to the user is whether that complexity is modeled through a useful and meaningful interface.
Huh?! ... Sure... there's a lot of legacy stuff in all parts of the "IT stack". Rarely are things redone from scratch, but projects like RISC-V do occasionally happen. However, that's not really the "UNIX tradition": doing one thing and doing it well is never a bad thing. And sure, the syntax of bash is a bit arcane, but aside from that I have a hard time seeing what an objectively "less complex" alternative would be. ... And I have to continue the anti-rant here: you can set any program you want as a shell. I lived for a time with emacs as my shell. Works great. But if you think just running Python programs is a better alternative than any other, then I have to vehemently disagree. Python is a sucky language too, and I don't want to write more verbose commands if it doesn't make what I do less sucky. And it doesn't. And wrt. package managers: you can do snaps or flatpaks to have a more "app-like" environment.
The main reason a shiny new operating system won't work is that it too will have new functionality and requirements kludged on top of it. Then we will be back to where we are now. Why invest all that time and effort just to go round in a big circle? I think a bigger problem is the way operating systems evolve with little or no real concern for backwards compatibility. It's crazy that my biggest concern at the moment is that Ubuntu will stop supporting 12.04 LTS in 2017.
That's more a problem with your choice of Linux distribution than with Linux distributions as a group... Gentoo, for example, has nigh-perfect backwards compatibility. It's like 99.9% compatible with everything ever on a POSIX base. And for that remaining 0.1%, it'd just take some careful planning and a chroot to get there.
The HTML-command output is kinda silly IMO....half the point of the terminal is so you don't have to click buttons or follow prompts to do basic tasks.
+2chws I don't really think it's sensible to build a system around the fact that programs COULD do bad UI design. Nothing about the proposed form presentation would make you unable to just use the keyboard: you'd enter a value, tab, enter a value, tab, inspect the form, and then tab-enter on the submit button. I'm much more concerned with the fact that to make this idea realistic you'd essentially have to make a POSIX wrapper for this message system, because there's no way people would abandon all their tools. That's a far bigger user-convenience threat than people starting to write UIs that force mouse use. Hopefully the system would encapsulate the old programs, effectively making the request-response system the only interface a modern user has to deal with.
MrSnowman yeah but so many programs are designed to be completely non-interactive and have so many options that trying to make them interactive will be futile and not make any sense. I just don't see the point. The only thing I see this as useful for is maybe a replacement for ncurses, that's it.
2chws I agree, I don't quite see it being such a major feature that it'd be worth mentioning alongside the other stuff here. Perhaps he has some plans for closing the gap between power users and normal users somehow. I have no doubt that if you could easily wrap CLI programs in an HTML form, you could get normal users to use those programs more - if not for convenience's sake, then just to make them look less scary. It's pretty clear to me that normal users could make good use of a lot of CLI applications if they wanted to explore them.
That's a really good point. For one thing, you want to be able to write a batch file that can run without user interaction. I'd assume you could simply specify "run silently" at the beginning of a file or block, but still.
So, we will not have the environment (a set of name-value pairs), but a configuration (a set of name-value pairs) ? We won't have a shell, but a ... shell ? The problem of protecting one user from other users is solved by ... declaring it not being an issue? Etc.
I think the benefit of centralized configuration is that the OS can enforce that programs not step on each other. So, for instance, if my text editor stores its config in ~/config/.editor, but my paint program also attempts to do so, that's a problem. With centralized configuration, this is all handled by the system, so the configuration is associated with each program in a more structured sense. I definitely don't agree with his single-user fantasy, though. We should have more security, not less.
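For what it's worth, a tiny sketch of what "the system keeps programs from stepping on each other's settings" could look like. The JSON file, the program ids, and the API here are all made up for illustration:

import json

class ConfigRegistry:
    """Toy central registry: every program reads and writes only under its own key,
    so a text editor and a paint program can't clobber each other's settings."""
    def __init__(self, path):
        self.path = path
        try:
            with open(path) as f:
                self.data = json.load(f)
        except FileNotFoundError:
            self.data = {}

    def get(self, program_id, key, default=None):
        return self.data.get(program_id, {}).get(key, default)

    def set(self, program_id, key, value):
        self.data.setdefault(program_id, {})[key] = value
        with open(self.path, "w") as f:
            json.dump(self.data, f, indent=2)

reg = ConfigRegistry("registry.json")               # hypothetical location
reg.set("org.example.editor", "tab_width", 4)
reg.set("org.example.paint", "tab_width", 2)        # no collision: separate namespaces
print(reg.get("org.example.editor", "tab_width"))   # 4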
No user isolation? Well, on a personal computer maybe that is fine, but when it comes to servers or supercomputers it does have value. I mean, for one university class I got access to a supercomputer doing important stuff. It would have been a shame if I had messed up all that important work because I don't know how to code.
The first seven minutes are a silly hyperbole. You don't have to and NEVER DO tell the whole story about all the underbelly of the software to a newbie. The equivalent of his argument would be to force somebody to understand how the Python interpreter builds an AST before teaching them to "Hello, World." It's almost like Will thinks that Domain-Specific Languages with implementation details should never exist.
Yep, it would be as silly to teach Windows system internals to newbies. They only care about how to work the system, not how the system works. And Windows does a lot to hide the command line, which is possibly where this attitude comes from. I've heard from people who only have experience with Windows systems that command lines are going out of style. And then you remember Unix, where the terminal has very much an active role and will keep it for the foreseeable future. The terminal is so much more capable under Unix too, copied as it is from the VT-100, VT-220 and other 80s models. There's even a Tektronix 4014 emulation in xterm which I never figured out how to use, as no program I had used it. :( (Lol, looking into it, I see it was used for CAD and is likely obsolete. These days you'd use plain X.)
1. 'Users aren't protected from each other.' There is no security here. If there were no multi-user systems, this might work. But there are no networks in this model.
2. Your view of what a shell should be almost sounds like a description of PowerShell.
3. 'No shell language, only proper programming.' Everyone would have to be a programmer. What about casual users?
4. Directories: userd is built in a way that works to address some of your concerns there. Each user has a personally encrypted directory in their home folder. Each program can store user-specific config in the user's directory.
5. Filenames and paths come from the hardware implementation. How do we organize files without paths? One flat location with UUID and hash-id files?
Unfortunately this presentation sounds more like a rant. It would be much more useful as a whitepaper. I've got decades of *nix experience and I find those complexities part of the power of *nix. As with anything, simplicity creates restrictions and complexity creates flexibility. It's like the age-old debate between Android and IOS. One is more flexible, the other more stable and friendly. There's clearly a divided fandom in those cases. As for *nix or even Windows, it's always a trade-off and that's why all those OS's provide both interfaces. In fact, Microsoft was pressured to include MORE command line capabilities and that's what led to PowerShell. Command line and scriptable interfaces are necessary. Whether it's in direct mode or written script mode is a convenient flexibility. I simply don't see any value in eliminating those interfaces. The value comes from offering multiple interfaces so each need can be addressed in the easiest way for the person doing the task. Ideally you should be able to do the same things from different approaches. Command line people, programmers, or point-click people can all get the job done.
+Jerry Hobby If you watch the whole thing, it's clear I'm saying that total programmatic control of the system is a good thing. What I proposed is replacing existing terminals and shells with more modern alternatives.
+trsk Yes, the way you work on a shell like zsh or in Python, like in ipython, is very different. And while I never had a real problem teaching people to type simple commands on a shell, I had a hell of a lot more problems teaching them programming. It's clearly another layer you are talking about. The one layer is the simple "start this binary as a program" command-line level, and the next layer is something like ipython, which is at the moment implemented as an additional layer. I mean, you can even skip zsh or bash if you directly jump into ipython as a shell; just tell your system on your user account to switch the shell from /bin/bash to /usr/bin/ipython or something. But I doubt you will work faster with your system after that. It's not a simpler interface, indeed it is not. Python is very complex. But I doubt that this will make you really happy.

I mean, there are things in Unix/POSIX/Linux that could be better. But the simple thing of telling the operating system to load something into memory and execute it, this has to happen somewhere. You can try to hide it, yes. But then you do not get rid of complexity, you add complexity to your interface. More or less every GUI tries to hide that fact. They do not look like something simpler to me. Don't get rid of essential layers in an operating system just because you, yourself and your sister don't like that layer. There are reasons for them. Indeed, I tried myself to ignore Perl for a long time, as I tried to ignore awk at my first contacts with Unix, somewhere in the early 90s. But both are mighty tools that I won't miss for anything.

See, an operating system is more like a workbench, no, more like a whole hall full of tools and machinery and generators and steam engines, and a lot of folks go and work there, play, build, destroy, move. It's a bit ignorant to try to get rid of the lathe just because you don't use it. Or of the way the things are glued together in an environment everybody has known for years and years. Yes, complexity has some disadvantages, but especially the package management has become better over the last years. Maybe you just use the wrong type of distribution or environment. There is an ongoing streamlining process. But it's never so radical as to destroy the complexity of the Unix environment for something "cleaner". Apple did that with the poor BSD kernel they got and you can see where it left them. Their system is just crap. "Cleaner" is not always better. No, mostly it's worse. Like radicals that try to change something in society by burning or destroying things that work: usually they are left with a broken society that needs generations to fix the damage they have done. And after fixing, it's usually even more complex than before.

Grown systems can be streamlined. But not by radical movement. It's more a step-by-step thing that makes things better. If you have ever seen how the ancient VAX systems were designed, today it's a hell of a lot better. I mean, there are things in Linux that do get on my nerves a bit, that's true. But the shell is not at the center of it. More like /proc and /sys. Can't they make up their mind where to put what?! Yeah, /proc was there first and then they introduced /sys, but /proc never went away or was reduced to what it should be, just holding the processes. So we have a mess today. That was an effort to make things easier. Didn't work out too well. :D This clusterfuck really gets on my nerves. Please let the shell be. It's okay. It's doing the job perfectly. Nothing wrong with that. Hands off.
I mean, there are things I am really still ignorant about. Like wtf do people want with something like FORTRAN?! This thing from out of the cellar should be really dead, dead, dead. But some folks love that zombie. Even if I explain that there are mathematical libraries, programmed in assembler, that work faster and better than FORTRAN, even if those said libraries can call the GPU - I mean, wtf, you've got your own vector computer today! Still, Fortran programs keep coming. Don't know why. If you try to kill something, yeah, I'll lend you my shotgun and we'll enter that cellar and get rid of that monster together. I'm with you on that. But I know I'm totally ignorant of Fortran. I know. But that thing MUST DIE. Well, it won't be easy, son, to kill Fortran. That aged mummy is hellishly fast; I don't know why it's so fast, that's unnatural, and I guess there is black coding magic involved. Just hold fast your Kernighan-Ritchie and we'll see how far the light will bring us. Just repeat the banishing formula: "-ffast-math, CUDA, OpenCL and the gods of GMP are with us! In the name of Kernighan! In the name of Ritchie! Die! Die! Die, you unholy abomination!"
What I don't understand about the rant is "why even bother"? If +Brian Will doesn't like the shell, why not create a new one? If he did, and it was more elegant and efficient than what's already available, I'd use it. I love the power and flexibility of my bash shell. I really agreed with his criticisms of OOP, but IMHO, the Unix Philosophy is the best way I've seen. I mean, Capitalism and Democracy both have their share of problems, but I don't see anyone proposing anything better.
Our company forbids any shell program longer than 10 lines, other than functions that just work as program starters, because shell scripting is terrible. Error handling is almost impossible. If you want to do something, you have to use Ruby, even if that just uses "system" to run command-line tools.
The only thing that's really caused me any grief is the environment variables thing. I wish I could just update them and have the new values everywhere, not just in new shell instances. Proper languages like python aren't fit to be used interactively in the way that shells are used. If you encounter something you can't reasonably do in a shell, you can always hop into a python repl, though. I don't want to give up the terminal. It's not that I've learned to cope with the terminal. Rather, the terminal is too powerful a way to work, computing would suck without effortlessly piping streams of text data from one program to the next. I don't want to use a shell from a browser. If there's no terminal, how do I use vim?
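That limitation is simply how environment inheritance works: a process can change its own environment and pass the change on to children it starts afterwards, but the change never propagates back to the shell that launched it or to any other already-running process. A small Python demonstration, with an arbitrary variable name:

import os
import subprocess

os.environ["MY_SETTING"] = "new-value"   # hypothetical variable name

child = subprocess.run(
    ["python3", "-c", "import os; print(os.environ.get('MY_SETTING'))"],
    capture_output=True, text=True,
)
print(child.stdout.strip())   # new-value  (the child inherited it)
# The terminal you started this script from still sees the old value, or none at all.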
I don't mean any disrespect to you. Forgive me. I was responding to the video. You just gave me a copy-paste response. I assure you, I know the difference. I chose my words carefully and used the correct terms. Did you watch the video?
@@smorrow Forgive me for being unclear. The word "terminal" there is used in a certain context. The context is the terminal as "a way to work". Used in this way, terminal no longer refers strictly to the one component, but to the way the component is used, ie: in concert with a shell and with commands and everything.
You've got some good ideas here, but I think you are a bit light on the history of why specific engineering decisions were made in the past, which turned them into today's silly legacy things. Unix was built initially on a PDP-11, a 16-bit minicomputer, as a multi-user timesharing system. I will grant that many of those decisions would have had different choices, and even different possibilities, if made today. I think your security model is naive. I am not saying we need the ancient world-group-user octal protection model that was inherited from long before Unix. But I think we have real needs for security and privacy that are critical to our finances and our civic freedom. We can expand it.
+Pat Farrell I don't see why this would necessarily have to be taken care of by the operating system. Why not take care of it in hardware, or in applications? Why is it necessary for the OS to do this job?
+Benjamin McLean They can't be trusted to the applications, but if the hardware took care of it, all would be fine. Protection, security, privacy, etc. need to be invisible to an application program. You don't want a bad programmer to decide to circumvent the protections.
+Pat Farrell Bad programmers write operating systems too. Seems to me that any private data should be kept in files encrypted by applications which the OS team would not know anything about. One service the OS would need to provide, however, would be a way to track changes to files you want kept secure, and a way of rolling back changes.
+Benjamin McLean Not necessarily. If the application keeps it in something like a git repo and has every transaction be a commit, then any change could be detected instantly and rolled back. The OS need not interfere.
Kehnin, assuming you are a "bad guy", surely it would be trivial to write a script that moves any changes you want back in the commit history, as far back as you want, and rewrites the entire history to accommodate them, absent some protection beyond what scripts can normally access.
You need several months to properly understand ls -la >foo, yes. Then for decades you use it every day. That's what we call a profession; you even get paid for it. You learn a language, then you use it. You don't have to learn it if you don't want to communicate. If you want to communicate, you need to learn the language. And the names are not piled up over decades; these are names that were the same in the first UNIXes, and they have remained the same. I have been using them for 30 years now; thank god nobody is stupid enough to change them.
It reminds me of the mouthbreathers who want to simplify and standardise spelling in English, without understanding that this would require the entire English-speaking world to speak with the same (American) accent.
You really need two permission levels beyond users: one for drivers and one for the kernel. A program should not get direct access to the hardware or to the part of memory where the kernel lives.
One point: the shell language has been designed so that you do not need to use parentheses when invoking a command or a method. Instead, arguments to functions or command programs are separated by whitespace. If the arguments themselves include whitespace, you quote them. Very simple, very intuitive, and readable. It would be awful and stupid to have to type in parentheses, commas and all the other syntactic sugar just to invoke a specific command with specific arguments, as you would in languages like Python. You are right about the fragmented naming conventions in command names and command arguments. That was never fixed, and now it cannot be fixed. But that doesn't mean that you cannot create command-line utilities that have intuitive names and calling conventions.
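To make the contrast concrete, here is the same command written both ways - a sketch only, with invented file and directory names:

import subprocess

# Shell:   cp "my file.txt" backup/      <- quoting is how you mark "this space is data"
# Python:  the argument list makes each boundary explicit, so no quoting rules are needed.
subprocess.run(["cp", "my file.txt", "backup/"])   # hypothetical file and directory; cp just
                                                   # reports an error if they don't exist

# The trade-off both comments are circling: the shell form is shorter to type interactively,
# while the explicit call stays predictable once arguments contain spaces, quotes, or globs.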
>very intuitive Not really. You have one set of rules for one type of arbitrary data, and then another set for the other type. It's meant to save on typing, not be intuitive.
You have valid points here. The reason shells are so kludgy is that they try to work around the lack of a type system. This makes the commands bloated. Not to mention, you need to scrape text to make it usable between pipes. And last but not least, code reuse is a joke because there is no polymorphism. It's not like people haven't thought about this; that's why they have started to use Perl and Python. The shell should just be an execution environment for text-based apps. Once you learn Python or PowerShell, you can use them even for one-liners. I only use shells to launch programs like ping. Scripting in them is downright painful because the syntax is so bad.
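A small illustration of the "scraping text between pipes" point, assuming GNU- or BSD-style ls output; the typed version needs no parsing step at all:

import os
import subprocess

# Text scraping: pull the size column out of "ls -l" by splitting on whitespace.
# Fragile, because it depends on ls's exact output format and locale.
out = subprocess.run(["ls", "-l"], capture_output=True, text=True).stdout
sizes_scraped = [
    int(line.split()[4])
    for line in out.splitlines()
    if line.startswith("-")        # regular files only
]

# The same information as typed objects.
sizes_typed = [e.stat().st_size for e in os.scandir(".") if e.is_file()]

print(sum(sizes_scraped), sum(sizes_typed))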
What about just making a shell layer that translates itself to bash or another terminal language, so we can type something like "list with size to foo.txt", which would translate to "ls -la > foo.txt"? Think about it :)
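A toy sketch of such a translator - the phrase table is invented and tiny, and covering real language is exactly the hard part such a layer would have to solve:

import shlex

# Hypothetical phrasebook mapping a few fixed English phrases onto shell commands.
PHRASES = {
    "list with size": "ls -la",
    "show disk usage": "du -sh .",
}

def translate(sentence: str) -> str:
    """Turn e.g. 'list with size to foo.txt' into 'ls -la > foo.txt'."""
    target = None
    if " to " in sentence:
        sentence, target = sentence.rsplit(" to ", 1)
    command = PHRASES.get(sentence.strip())
    if command is None:
        raise ValueError(f"don't know how to say: {sentence!r}")
    if target:
        command += " > " + shlex.quote(target.strip())
    return command

print(translate("list with size to foo.txt"))   # ls -la > foo.txt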
So, you want a continuously present services exposing a request-response user interaction model with responses in HTML and a real dynamic language such as Javascript? I have a solution for you!
Though I fully agree with you in your stance on OOP, when it comes to *nix-based systems, I will disagree. As a programmer, the beauty of *nix systems is that YOU CAN BUILD YOUR OWN for fuck's sake. If you are fine with the Linux kernel or Unix, then by all means build your own system around them. Have your own packages. Make your own dependencies. Port over what you want, ignore what you don't. Does it seem like a headache? At first, yes. After it is done, you have an operating system that you know will work for whatever software you build on it. You know that the only thing necessary to keep maintained, as far as repos or updates, is for the kernel. This is stupid easy for a seasoned programmer that knows the linux kernel inside and out.
Interesting stuff. My main issue with security would be how this would work in a server setting: any process could go rogue and spill all the user-data beans. I mean, you could segment your machines so that a public network-facing machine only handles that. But with large datacenters handling multiple users' data, things get complicated. I understand that this is a "toy model" and such, but one of Unix's strongest areas is its handling of user segmentation and security (not that it is super strong, but that it is better than the rest).
Replace UNIX? Heresy! Actually, the simplified system you're describing sounds a lot like my old Commodore 64. ;-) In seriousness, while I don't entirely agree with this video, your videos as a whole are absolutely fantastic! I just discovered your channel, and I'm learning a lot from it. Thank you very much.
There were hundreds of BASIC-based computers similar to the VIC-20/64, both before and after these computers. So that simple but effective user interface was very widespread.
Essentially this video says "if a feature is complicated, remove it, we don't need it". That's not the definition of a "solution". Just code a UI for DOS... but wait! That already exists! And it was awful.
Oh Brian, you got it all wrong, again. It is supposed to give you that rough experience. It's not so much a technical problem; it's to keep the big players in the business from taking control over the central part of the system that would be necessary to integrate every tool the way you want it to be. So they use, and should go on using, streams of strings and minimal help from the kernel to communicate with each other. That is something I dislike about systemd: it's a central component in any OS that uses it, and whoever controls it controls a big part of the OS as well. Most of what you want, you could get. But this level of integration goes along with a single big company providing the integration. They won't do it for free; they will do it to profit directly from it, or to gain control over a stack of technology that sees widespread use, in the end to make even more money off it. And we already have such companies, and they both have an OS you can use. Maybe take a look at PowerShell, I really do think you will like it. I don't.
You just described Emacs: (1) all written in one language, only used for work inside the system (2) the system itself an interactive shell (3) constant output of data sequestered on easily accessible screens (4) one package manager.
I think some of your predictions for how to make the user experience easier have come true with Android and iOS, which present users with a very flat, non-hierarchical view of their installed apps and make it easy for app developers to work in their own sandboxed file space but hard for them to interact with anything else. And some of your predictions for how to make the developer experience easier have come true with containerized platforms like Docker, which create sandboxed file and configuration spaces per container and typically require containers to communicate via networking - a form of request/response.
A really good talk. I don't necessarily agree with everything but its a good perspective and raises some really pressing questions, ones we should ask more.
Terry Davis addressed some of these ideas in TempleOS. His shell runs C code interactively. He also argued that Linux is way too complex for single-user machines, saying that Unix-like is an 18 wheeler and all he needs is a dirt bike.
As far as response-request for IPC, this is implemented by Minix. Modifying Minix might be the best way to go about building this vision of an OS.
A fellow watcher of Down the Rabbit Hole, I see
@Patrick Keenan
DTRH wasn't the first video that was made on Terry's life, and beyond that, there are a lot of people who followed Terry up to his death in the programming community.
RIP saint Terry. He was a tragic genius.
He will forever be missed
Terry and the CIA will watch this thread forever
F
It's not true that computers are not shared anymore. At my university there is a computer pool for all the students to use running linux. Each user has it's own desktop and home directory. Everybody can use any computer, and I do not need to bring my laptop from home anymore.
I still work on systems with over a thousand users which need to be protected from each other. Brian thinks these systems were all replaced back in the 1970s.
There’s no reason why those systems couldn’t run a different os, or os feature set. “But I still use x” is not a reason why everyone must be forced to use x. If we are talking about an ideal alternative for a simplified OS, it wouldn’t replace all systems. In general it sucks to not be an admin, and if you’re admin users don’t matter.
On Plan 9, whoever logs in at the console is effectively root, and also the only user, on that machine. It's very simplifying, and no loss at all in terms of security, because in a proper Plan 9 system the terminals have no local storage.
And horses and buggies are still used by the Amish.
@@eaustin2006 yup and we still have libraries. Poor argument by the first two commenters
I've complained about gratuitous complexity since at least 1993. The universal response is that I'm too stupid to grasp what's necessary, clearly, and should probably revert to being a tester-or-something. Yet, somehow, I delivered over 20 shrink-wrapped projects ( when that was a thing ) on five different architectures before delving into web programming in 2005.
Javascript in 2020 is insanely complex and well-nigh impossible without numerous "package managers." Aaaaand, forcing javascript to be on the server-side - ostensibly so engineers "wouldn't have to learn another language" - was crazy dumb, and the complexity of dealing with a dozen different ways to fake multi-threading makes any "wins" of using a single language on both servers and clients completely moot.
Complexity has been something I've fought against all my career. Computing is the only engineering discipline I know where "more visibly complex" is seen as "better".
We can agree that the Node.js world makes everything way more complex than it should be. Not sure you can use that as an argument to generalize.
@@Jollyprez APIs that a developer frequently interacts with are being migrated to async/await - you can check popular open-source libraries to see that.
Perfectly working production code doesn't change, but who cares? If it's working, let it keep working. And if it breaks, time can be allocated to rewrite it to newer standards.
Async/await also leaves room for the underlying library to implement real multithreading where it helps - web workers do a good job of that.
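To make the async-vs-threads distinction in this thread concrete, here is a minimal sketch in Python (chosen because the thread later discusses Python as a shell replacement; the same idea applies to JavaScript): async/await interleaves tasks cooperatively on one thread, which is not the same thing as running them in parallel.

    # Cooperative concurrency: both tasks overlap in time on a single thread.
    import asyncio
    import threading
    import time

    async def fetch(name: str, delay: float) -> str:
        await asyncio.sleep(delay)          # yields to the event loop; no extra thread
        return f"{name} finished on thread {threading.current_thread().name}"

    async def main() -> None:
        results = await asyncio.gather(fetch("a", 1.0), fetch("b", 1.0))
        print(results)                      # both report the same (main) thread

    start = time.time()
    asyncio.run(main())                     # ~1 second total, not 2
    print(f"elapsed: {time.time() - start:.1f}s")

CPU-bound work would still block this loop; real parallelism needs threads, processes, or (in the browser) Web Workers.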
When you're working with someone over the phone and you ask them to type:
ls -la
and they type:
Ellis Daschellie
🤣🤣🤣
What‘s on the screen? A cup of coffee!
Close all windows! A moment please ...
You can't mess it up in Polish - the worst they'll do is type "ls myślnik la" ("myślnik" being the Polish word for "dash").
As opposed to a gui over the phone where it's 'What do you see?' 'No, the other window' 'Yeah, it looks like a wrench' 'No below that' 'Now click "yes"' 'There's no "yes" button?' 'Try the "options"' tab.' 'No, not "settings", "options"'
@Tim Wasson I don't see why experts have to cater to the Paris Hiltons of the world. Wanna use a computer? Learn how it works first. It's not my job to educate users about what the Windows key is.
Anyone championing a reduction of all the accidental, corrupting, pointless and often insane complexity in modern IT, gets my vote. Unfortunately, this complexity sells consultancy hours, days and months... it keeps most people in their cosy jobs. It's also a power trip for most: those who understand the nonsense complexity have (lots of) power over those who do not. "Trust me, I'm a doctor" (and my invoice is in the post)
(This is just a gathering of loose arguments, facts, and emotions and is not the basis for a thoughtful - and respectful - discussion. I want to point out how grotesque the arguments in the preceding comment(s) are. So take it more as satire. Yeah.)
So let's see, where do you want to start?
How many storage systems are there? Read-only? Spinning hard disks? Solid-state storage? Tapes for archiving? Which do you want to get rid of?
Windows? *NIX? Why Windows Server? Why Windows 10 Pro or Enterprise edition? Which of those should we remove from the market - no, from the universe?
Hardware? RISC or CISC? I mean, it seems that RISC is becoming more popular even in the server and desktop space.
Smartphones - that's gonna be great! They are IT systems too, you know? Software, hardware. Which should we drop? MS did us a favour - no more Windows Phones. But the apple and the robot, which should it be? Two totally different operating concepts, I guess, and I am not only talking about the pesky user interface.
Just get rid of all those pointless, awkward Signal, Threema, Pulse, Telegram, Skype. Just go WhatsApp. And Facebook. Btw, thanks to Google for making it -1!
All that effort wasted on DIVERSITY, just so some people can get jobs because they have no clue (or not the same clue) as you - playing rock music, miscalculating the next world financial crisis, lying to get votes so they can lie to many more people. No, you want to get rid of those IT guys because they should have learned different stuff in life.
Excuse me - all those pros creating overly complex IT systems for their own benefit (money and a "Trust me, I'm a doctor" attitude). Where did these people start to make tech? The Google founders? The Facebook founder? The Apple founders? These and others dedicated their lifetimes, some even from childhood, to creating complex, efficient, unique, diverse software and hardware - also to please you. Are these people the snobs you are calling them?
What is it that you do for work? (Rhetorical questions, please don't answer.) Can it be done just by watching someone else? Doing the same "moves"? Did you have to learn stuff? Would I have to learn stuff to do your work? Would it take me hours, days, weeks, months, or even years? I know you would have to learn "IT", because you sure seem to have no clue what IT tech is - from nothing about hardware to nothing at all about operating systems. People spend their lives getting to know IT and tech, just as doctors might spend their lives just doing their job, healing or research.
(Gonna delete this post in a few weeks, but until then I feel a bit relieved.)
You forgot to delete
@@sless621 What?
It's a lot like biological evolution, where certain organs may be redundant, certain veins and systems take unnecessarily complex paths, etc. But it's all there because of history.
It's the same thing here: someone starts building a very rudimentary computing device. Then, instead of reinventing the wheel, someone else uses that thing to build something more complex, etc, etc. After more than 10 iterations like that you get a mess, naturally. It was no one's intent, but there we are: patchwork on patchwork.
But now redesigning the entire thing would take a lot of work, and we would have to reschool everyone. So we just reluctantly carry on piling even more shit onto that mess, increasing the cost of redesigning the whole thing EVEN MORE, and hence we are stuck in a vicious cycle of ever-decreasing efficiency where the mess just keeps getting bigger.
Whether it is biological evolution, city design, or software, the same mechanism explains the inefficiency. And the longer the history, the greater these inefficiencies get.
It's also called the 'law of the handicap of a head start'. Those entering a market fresh can immediately develop things way more efficiently than established companies with a long history:
en.m.wikipedia.org/wiki/Law_of_the_handicap_of_a_head_start
"(and my invoice is in the post)" lol that got me laughing good :P
I think that the dichotomy of "admin software" vs "not-admin software" being your only method of permissions is a bad idea for security reasons, for the same reason that setuid root programs in Unix are encouraged to drop root privileges as soon as possible; you want software to have the least amount of permissions for the least amount of time, so that the impact of vulnerabilities in the software is minimized.
It's possible he meant them to be able to change over time. Either way, though, I think the two-level system, while it does simplify things, is actually MIMICKING one of the weaknesses of the Unix permissions system. I can agree that the focus on protecting users from other users of the same computer is somewhat archaic nowadays (even if it still applies often enough), but what's also archaic is the lack of focus on protecting users from their own software. That's why I think it might be better to start with the Android app permission system and try to improve that (e.g., generalize it, make it more fundamental in the OS if it's not already, focus it on more precise user control, maybe add OPTIONAL support for multiple users).
The Android app permission system (which keeps changing - improving, I think) gives each app its own permissions, controlling exactly what types of things each app is able to do (rather than which files each USER is able to do what to, as in Unix). Also, since Android always tells you exactly which permission is blocking a process and lets you fix it immediately in a pop-up (and Android 11+ lets you grant an exception only once, rather than permanently changing the permissions), instead of stopping and saying "Permission denied" like Linux, common Android permission problems don't just break things and require troubleshooting on the command line the way Linux ones do.
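As an illustrative aside (a toy model, not Android's actual API), the prompt-on-use idea described above can be sketched in a few lines of Python; the app names, permission names, and the ask_user helper are all made up for the example.

    # Per-app grant table plus a pop-up-style prompt on first use.
    PERMS: dict[str, set[str]] = {"camera_app": {"CAMERA"}}

    def ask_user(app: str, perm: str) -> str:
        # stand-in for the OS dialog; returns "always", "once", or "deny"
        return input(f"Allow {app} to use {perm}? [always/once/deny] ")

    def check(app: str, perm: str) -> bool:
        if perm in PERMS.get(app, set()):
            return True                              # already granted
        answer = ask_user(app, perm)                 # fix it on the spot, not in a log file
        if answer == "always":
            PERMS.setdefault(app, set()).add(perm)   # permanent grant
        return answer in ("always", "once")          # "once" allows without recording

    if check("notes_app", "MICROPHONE"):
        print("recording...")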
Actually, I imagine a lot of my personal problems in this vein with Linux permissions could be solved within the Unix permissions framework if I just massively switched around my settings; however, here are some problems I have with, and improvements that could be made to, my Linux permissions:
It would be nice if I could properly install software without needing full superuser privileges (I use sudo apt/apt-get install when possible).
It would be nice if you could control exactly which files, directories, or drives a particular process has permission to affect without my express consent. (That way, when I download a file with a script that's supposed to install a new operating system on some external drive, I don't have to worry that this script I downloaded from the internet and don't understand will mess with any other drives, like my primary hard drive. Similarly, sandboxing untrusted executables could be trivial, and downloaded software could be used in a permanently sandboxed state without needing a virtual machine; also, browser scripts could be sandboxed at the operating-system level rather than the browser level.)
It would be nice if I could use software that can READ my partition table without having to give it permission to CHANGE my partition table. (The only way I know to check the layout of my partitions properly, or to make images of partitions, is to use software that needs superuser privileges whenever it runs, like parted, gparted, fdisk, and partclone (actually, I think parted might be able to run without sudo, but there was something inadequate about what it did). See the read-only sketch after this list.)
Although I'm not as paranoid about my Linux PCs (/laptops/etc.) spying on me as I am about my phone (mainly because they normally have no microphones, cameras, GPS, cellular service, wi-fi, or bluetooth attached), this concern is definitely not unique to cell phones (and, logically, it makes more sense to worry about malware when you rely heavily on free software). Thus, being able to control the use of such devices might also be useful.
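Here is the read-only sketch referred to above: on Linux, basic partition layout is world-readable from /proc/partitions, so a listing like this needs no root at all (a quick editorial illustration, not a replacement for the tools mentioned).

    # List block devices and partitions without superuser privileges.
    def list_partitions(path: str = "/proc/partitions") -> None:
        with open(path) as f:                      # world-readable on Linux
            entries = f.read().splitlines()[2:]    # skip the header and blank line
        for entry in entries:
            major, minor, blocks, name = entry.split()
            size_gib = int(blocks) / (1024 ** 2)   # the column is in 1 KiB blocks
            print(f"{name:<10} {size_gib:8.2f} GiB")

    if __name__ == "__main__":
        list_partitions()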
I feel like I'm showcasing a lot of my ignorance and noobiness in that last comment, even if my core points made sense. For the record, I forgot about the df command (which allows you to see partitions without needing admin privileges), but I DID already know that Android was based on Linux, though I didn't know the specifics. I was trying out puppy linux (bionicpup64) in a virtual machine, and I thought it was interesting how in fatdog and puppy, the default login is root, but you can set network applications like browsers to be run by a user called "spot", who has no admin privileges, and there's also work on a third user called "finn" or "fido" who has no admin privileges but can sudo to get them, like on Ubuntu. Someone who I guess (because I don't remember for sure) was the/a designer of fatdog and puppy compared the "spot" system to Android, which he claimed ran each process under a different randomized user. Still, maybe you can see how it might be reasonable to call this a contortion of the Unix permission system, designed for computers shared by many humans, to fit a new context, where the focus is on sandboxing many different applications from each other and from hardware, personal data, and core system programs and data, and on separately limiting the permissions of each application, rather than of each human user, to only what the one human user wants and expects that application to do.
I was also interested that puppy comes with a very simple GUI "partviewer" for viewing partitions without admin privileges, though, as I mentioned, I more recently rediscovered that df does a similar thing, though without the bars to VISUALLY compare used to allotted, and with more complications, and I don't know how to get it to show all partitions (including swap) or to show start and end points on the disk.
There's really nothing stopping you from making a "shell" exactly like you describe for current UNIX systems. There's also nothing stopping you from making a distro with just two privilege levels. Separate filespaces are also very much possible on Linux (containers). A request/response IPC that starts up programs is what systemd can do. Configuration in the form of a registry is also implemented on various systems. Of course, what you then end up with is a platform within a platform, and we already have many of those. That is part of why systems are such a mess. The GTK people have their ways of doing things, we have a variety of IPC mechanisms running alongside each other, we have various authorization protocols in place, etc. Most of what you describe as solutions are really just conventions you want to see. You can't expect everyone to follow your choices and limit everyone from making different choices. This reminds me of the old "we have both standard A and standard B in place, so we're making standard C to create a unified standard."
I find it interesting that most cloud infrastructure really attempts to implement most of the proposal presented here (on top of current tech), and then, for security or organization, we build all the stuff the proposal attempts to remove right back on top of the platform that doesn't have it. To me this suggests most of the pain may just be what security requires. Concretely, UNIX and the other initial platforms referenced were typically developed in the absence of security, and security was bolted on somewhat ad hoc through the years. For a proposal like this to truly work, security must be worked in from the beginning. Trying to remove all those hairy parts does not bring in security. You're not just protecting against local users (who really should be protected from one another; otherwise a non-administrative user can corrupt an administrative user, intentionally or unintentionally, and you get disaster). Minimizing the affected scope of a breach is a good starting point, but communication is key: we need communication for effectiveness, yet we must restrict it for security and stability. Finding that balance is tough no matter what the underlying platform looks like.
what are your thoughts on NixOS? Seems like Nix really tried to do this exact thing, with the added feature of utilizing immutable data structures on top
There's also GNU Guix. It goes one step further by replacing the omnipresent shell scripts and all the bespoke config files with just Scheme code.
Regarding protecting users from each other, you say it almost never happens that strangers share a computer, but what about public libraries or computer labs at universities where students have unix accounts accessible from any machine on the lab?
+hasen195 Not only that, we can ssh into a shared server that is typically used by quite a few students simultaneously. In addition to your home folder, you'd typically store things in /tmp that you don't necessarily want other people to be able to access. We also have group projects where it's important to belong to a group that together has access to a folder that other people do not have access to.
The concept of strangers not sharing the same computer is simply not true in university or company systems.
+Oskar Södergren I would say yes and no regarding shared computers. Yes, people need places to store personal files. Mostly these days those files are stored elsewhere, not on the specific computer itself. Even in personal devices, we now put more stuff in Dropbox/icloud/whatever.
In fact, at my university, in the computer labs for my section of the school, everyone shares a single login to get onto the machines. It was simply too much maintenance overhead with no real benefit to support individual logins. Everyone brings in personal storage devices for personal data, and just uses the machines to run applications.
Secure? Not really. But more manageable, yes. I'm not saying it's a great solution, but it is one borne out of the necessity to simplify.
I hope they have one of those hardware/software solutions where the system is reset back to a base system image after logout. But the solution for the future is not Dropbox or any other opaque cloud, but your own device - the convertible idea from MS. You have your virtual machine on your phone/USB stick, and the host has some kind of VMware-style player that just runs your machine. Simple and easy, and easily restricted.
Ok, there is this USB controller problem that totally kills security at the moment.
Well... in the beginning, when UNIX was invented, this was the typical environment, and that's why all this shared-user stuff was implemented. But MOST of the time nowadays you use your own devices. You don't share your home PC, your smartphone, or your console with many other people (mostly only your kids or spouse, but often they have their own devices, so not even that). You can see it especially in the fact that on most computers nowadays the main user profile runs as Administrator (and Windows and Mac went so far as to take privileges away from this Administrator and give them to a super-administrator that you only see in repair mode) - a role that in the old days was restricted to one type of user, the server operators, while all the others were just clients, guests on the system. Yes, in offices you often still have networked PCs that are only clients, but remember: when Unix was invented, a "computer" didn't mean the terminal you were on but the whole environment, and nowadays that computer is inside your box and you only network with each other.
@David Peterson: At my university they have written on the screens: "Don't save to the desktop or the internal storage device, as it will be wiped clean after you log off", since the computers run in a sandbox mode. We are urged to store any data that we want to keep on USB devices ;)
We just shouldn't use multi-user systems if there's no need for them (nobody uses my Linux boxes other than me). Yes, in a library or computer lab you might need a multi-user system. Debate over.
Had some decent ideas at the start, but this is dangerously inadequate from a security perspective. Any replacement should have more security controls, not fewer. Actually, I would go so far as to say that a new system should be built with security as the core concern, with multiple measures at each level, to bring it beyond any other system. User and file permissions are a weak and outdated mechanism to place any trust in and cause all kinds of issues, but the service they provide should not be discarded; a lot more should be added to a system beyond this. A registry is also particularly bad from a security standpoint since it is by nature a shared space. To adequately isolate programs from each other you wind up putting up walls that defeat the purpose of the registry - or you just let any program have free rein to do whatever it likes to everything in your system.
I absolutely agree with you. File security may not be an issue on systems with fewer than 5 users who know each other, but everything beyond that needs strong security - if only for the case where one account gets breached. In a system where everybody can access all user files, that could cause devastating damage to all user data, and something like that is plainly not acceptable.
The Microsoft Windows WinSxS system already implements this kernel package management using UUIDs and version hashes. It guarantees that all versions of shared apps and libraries are always available to all consumers on the system. WinSxS has been shipping for ten years now.
And WinSxS is why a fresh install of Windows 7 Professional needs 40+ GB of disk space, filled during two full days of update downloads, before Office or other programs are even installed. I wouldn't point to that as a success.
+alderin1 - Dude, it's 2017, and you're still fretting about 40GB? In exchange for 100% backward compatibility across the entire operating system? You linux people have weird priorities.
+angrydachshund - Actually, this "priority" was handed to me by my employer getting 60gb SSD drives for workstations to speed things up: The OS taking 40 of those 60 doesn't leave enough room for user data. However, the point was WinSXS isn't a success, it is a bandaid on a foundationally flawed system, and just because storage space is getting cheaper doesn't mean my operating system should try to take up the same percentage of space. Finally, sadly, I work in an area that often has the edge cases, your "100%" is incorrect, and in my experience is closer to 90%, unless you are talking about other Microsoft products.
Package management on the kernel?
I'd love to see you pitch it to Linus
I'd love to read Linus' resulting tirade. I think I was tame compared to what Linus would do.
He would get a Nvidia-like response.🤣
@@wiskasIO what does that mean?
@@abigailpatridge2948 Linus tends to not go on tirades over these things. Fucking up userland abi, now that'll set him off.
This is kind of what containers can be used for.
We did some of these suggestions before - LISP machines, Forth machines, etc. And Android Intents, SOAP, JSON-RPC, GraphQL queries... if we don't have a way to describe piping data from one small bit of code to another, we invent it. Again, and again, and again. "Those who cannot remember the past are condemned to repeat it." - George Santayana
Whether you buy into his philosophy or not, we have gone from massive shared machines, to personal machines, to largely shared machines, and back again multiple times. I have customers today who have every desktop running a Terminal Services session on their server.
Don't forget BASIC running on home computers!
14:07 package management at the kernel level seems like an ugly idea. That's too complex (and the kernel shouldn't handle such high-level things)...
Indeed, separation of concerns is crucial.
Not when we talk about Microkernels!
I can't say I agree with everything here, especially the idea of dispensing with user-level protections from other users. It's important to remember that users represent separations of concern, not just human users - e.g., a mysql user or a www user. I'm also not convinced that a registry store is better than /etc or than keeping config files in an individual user's namespace as plaintext files. I think the points about package management are the strongest here.
Oh good, I'm not just some crazy weirdo in thinking that users should have some "space" to themselves on a system!
Some comments:
1. Shell languages are certainly a mess. I am amazed that people still write complex programs in bash which was written by someone who clearly skipped language syntax classes. However, shell languages spring naturally from the idea that it is a simple increment from command line execution to scripting. This certainly means that shell languages need to be compromised somewhat (but not to the level of bash!), and that interpreted languages such as python will go their own way from shell languages.
If you don't like ls and the command shell, you can always use a graphical file explorer, but they don't script and their output cannot be piped and manipulated. Ie., lets not throw out the baby with the bath water. There is a reason programmers prefer command line environments.
2. Dependency management in Linux became a serious problem due to a combination of "everyone should install from source" thinking and the tragic decision to adopt DLLs or .so modules, aka "dynamic link loading", from Windows. This was a technique from the bad old batch days of the 1960s, brought forward into the era of demand-paged virtual memory, where it makes no sense whatsoever. Does it save memory? Yes, but page management makes that irrelevant, and DLL techniques actually work against virtual page management, which works best if you have unmolested binaries on disk.
The real issue is that cross-package referencing (whether DLL/SO, library, or program) creates N different combinations of program run environments, and that is an inherently wrong idea. You don't want your users testing configurations that you haven't tested. The gain from this idea is that the user *might*, repeat *might*, see better functionality via a new package upgrade unrelated to the package at hand. I would assert that it is far more likely you will see a new bug than better functionality. The alternative to all of this is a return to static program linking. The configuration is fixed at package creation by its developer, and that configuration is tested and unchanging. Libraries get compiled and shipped with the product. A program that is needed and is open source gets shipped in a subdirectory, eliminating the need to go find it, worry whether it is still compatible, etc.
The binaries get bigger, but who cares? Virtual memory management takes care of loading only the actively used sections of a program.
3. Registries moved configuration data from individual files into a central system. Even if the Windows registry were well organized, one of the biggest issues in Windows is the central registry becoming corrupted, which brings the entire system down. What purpose did the central registry serve? What purpose does Linux serve by putting all of its information in the /etc tree? It does not make the configuration information look alike; the info is divided by filename and, increasingly, by different directories as programs and devices need complex support files.
My gold example here is Eclipse. This program needs no install program. It lives in one tree. You can copy a working tree from anyone to another machine and it just WORKS. All its configuration info is in that tree. Delete it and it is gone, there are no registry or information files outside. Want to be sure that you got rid of the old program before putting a new eclipse on? rm the tree and copy. Want multiple versions on your system at the same time for compatibility reasons? Make another tree. They won't fight over registries or info files elsewhere.
4. I would not be so quick to declare multiuser systems as dead. We use *lots* of shared machines, some with multiple users at the same time, some not. Even if the system does not have multiple attached users, each user has an account, and the user paradigm is valuable.
Indeed, DLLs made a lot more sense back when hard discs were smaller, but these days, when terabytes are routine, not having a statically linked system can't be justified.
And it's a lot more difficult to do multiuser if one doesn't have a multiuser-capable os. Having Unix at hand means one gets tempted to use it as a multiuser system when one can, because it's what big systems do, and big systems are inherently cooler. Thus, everywhere Unix lives, multiuser lives too.
Actually, if you look a little deeper, "linking" as an idea is pretty stupid, static or dynamic. All you really need is executable files - binaries (not modern 1 MB binaries, but tiny ones of tens or hundreds of bytes) - that can call other binaries.
You could keep those binaries in RAM, ready to be called; keep a fuller cache on disk for when the needed function isn't in the (smaller) RAM cache; and keep a remote "cache" - a repo - so that if a function is found in neither RAM nor on disk, it just gets downloaded from your remote repo without bothering you or throwing an error. That gives you all the package management and linking you need, with more efficiency and less size.
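For what it's worth, the lookup order being described (RAM cache, then disk cache, then remote repo) can be sketched in a few lines of Python; DISK_CACHE_DIR and fetch_from_repo are hypothetical placeholders, not part of any real system.

    import os

    RAM_CACHE: dict[str, bytes] = {}
    DISK_CACHE_DIR = "/var/cache/tinybins"          # assumed location

    def fetch_from_repo(name: str) -> bytes:
        # placeholder: a real system would download from a remote repository
        raise NotImplementedError(f"would fetch {name!r} from the remote repo")

    def resolve(name: str) -> bytes:
        if name in RAM_CACHE:                       # 1. already loaded in memory
            return RAM_CACHE[name]
        path = os.path.join(DISK_CACHE_DIR, name)
        if os.path.exists(path):                    # 2. fall back to the disk cache
            with open(path, "rb") as f:
                RAM_CACHE[name] = f.read()
            return RAM_CACHE[name]
        RAM_CACHE[name] = fetch_from_repo(name)     # 3. last resort: the remote repo
        return RAM_CACHE[name]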
DLLs are needed because if one library gets a security update, you'd have to recompile every piece of software that uses it if it were statically linked... and for libc that costs years of CPU time across a standard distro.
It does not work.
@@GegoXaren years? yeah, probably for glibc.
glibc isn't the only kid on the block though.
@@jess-sch
It is the same for uClibc too, you tosser.
Unix was created to be a pipe and filter architecture and the shell was the means of using it. The Unix philosophy is about small programs that do one thing very well and piping them together to create an application. It is a brilliant idea for an amazing amount of productivity before a windowed environment. And Unix is meant to be a multi-user computing platform, that's why it was designed with a file and memory security. In Unix, everything is a file, this includes ports, terminals, printers, etc. So, the file security model can be used for everything on the system.
Those are principles, they are good. The problem is the implementation.
@@monad_tcp In 1969, it was a monumental leap in that it enabled sharing "general purpose computing" on various systems.
@@substance1 that's history. We learn from it, not worship it as perfection.
@@monad_tcp It works great for Android.
@@substance1 because Android completely removed the user model. Such irony.
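To make the pipe-and-filter idea above concrete: here is roughly what the shell pipeline ls -la | grep -F ".py" > foo looks like when wired up by hand with Python's subprocess module, one program's stdout feeding the next program's stdin (a sketch of the mechanism, not a recommendation to write it this way).

    import subprocess

    with open("foo", "w") as out:
        ls = subprocess.Popen(["ls", "-la"], stdout=subprocess.PIPE)
        grep = subprocess.Popen(["grep", "-F", ".py"], stdin=ls.stdout, stdout=out)
        ls.stdout.close()   # so ls receives SIGPIPE if grep exits early
        grep.wait()
        ls.wait()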
One big problem lies in identifying what is part of configuration. Even the most conscientious developer will often fail to identify all the elements which comprise their application's configuration. As a result, third party packages which need to interact with that application may find it necessary to rummage around in the application's private namespace to find the configuration information they need. The alternative - petition the application developer to update their config info and hold up your release until they fix it - is rarely practical. In my experience this sort of "configuration shortfall" is responsible for 20-30% of the complexity I face in getting packages to work on my system and in creating packages of my own.
All existing systems suffer from this problem. Your suggested scheme will as well, although its greater emphasis on configuration might mean that package developers are more rigorous in identifying the elements of their package's configuration.
In my experience the "solution" is to use containers and virtual environments. As you say, no matter what you do, configurations can change when you install some new thing or version, sometimes without you knowing. Of course this absolutely adds complexity, but I don't subscribe to this channel's assertion that more complexity is inherently bad. Complexity just has to be met with good abstraction. ls -al > foo.txt is a good abstraction.
This talk reminds me of Terry Davis' TempleOS. I think the terminal exclusively parses Holy C as opposed to some scripting language, and has the ability to improve the readability of output in small ways, e.g. printing images and 3d models to the terminal... lol.
Edit: Should have guessed, people have already said this a long time ago
You've done videos on problems with the status quo of software development like OOP and the mess of modern operating systems. You've opened my mind to ideas of what could be done to create better software. Do you think you'll do a video on the state of the World Wide Web and the overextending of technologies? I'm very eager to hear a response.
All that is solved by having browsers adopt a well done language.
@@jbmw16 How? You don't fix overextension by continuing the overextending.
I wish, instead of WHINING, this turd would DEVELOP something, like a SOLUTION.
@@Spiderboydk i think he means scrapping javascript for something not shit
replacing, not more overextension
@@ttheno1 Probably, but that won't solve it. No programming language should ever have been jammed into HTML in the first place.
I appreciate the sentiment and the guts to share your thoughts! Many ideas you are describing here remind me of Android (a lot), the Smalltalk VM, the .NET framework, and Microsoft OLE. I am sure these concepts are present in other systems as well. I agree that existing OSes are hard to learn and understand, promoting buggy programs, but I suspect that complexity is not the enemy here - it's just bad design. A system can be complex and yet self-healing and easy to reason about. A well-designed system is not simple, but one which layers complexity in a sane way and keeps those layers separate. If you look at the systems which seem "simple", or try to actually build your own, you will see that they all have complexity in them; it's just that with "simple to use" systems we don't need to know about it most of the time. In this regard mobile OSes are a big step forward compared to Windows, Linux, or MacOS (which in many ways does a good job too) in the way they make user interfaces "simple" and predictable, compared with desktop OSes where any problem requires "popping the hood" and diving into the system details.
I refer to "well designed complexity" as sophistication. Software can be sophisticated yet simple.
With this definition, complexity is almost always bad.
What you've described is basically the Macintosh platform prior to the switch to a BSD core. Nowadays they actually solve a lot of the traditional unix-ish problems from a normal-user perspective through a strict set of conventions about where programs and user data live.
Sadly, they devastated the spatial UI that was the secret sauce that made the Macintosh's UI revolutionary and unsurpassed in terms of usability and productivity. I really miss the System Folder, where you knew which files did what and which application they belonged to just by looking at the icons. Nowadays, you have hundreds of directories littered with hundreds of files with cryptic names and no clue as to what they are. It makes it very hard to troubleshoot an issue, or to remove or install additional functionality. They also adopted Unix's directory structure, which is organized for the system rather than the user. Most of the elegant and beautiful abstractions which hid the ugly and distracting details of the OS are gone forever.
@@bobweiram6321 But the stupid thing stopped being buggy and finally has multitasking, so that's a plus.
To understand 'ls -la > foo' you don't really need to know anything about EXEC or file descriptors or anything like that. Really, you just run "ls -la" and see that it spits out the result on your screen. Then, you run "ls -la > foo" and observe that it gets written to a file called 'foo'. That's all you have to know to understand that. The rest of the details are just... details (you can learn them as needed).
+MasterGhostKnight Sounds like you seriously need a new work environment if your boss is raping (or rapping??). If what I said is wrong please say how. The point is that you don't need to know all the details to get 98% of the value of shell redirection. Are there edge cases? Sure as hell. But then for those you can develop guidelines and things like static analysis/best practices, and so on. Or refactor (when your shell scripts morph into gnarly hacks -- refactor them into high-level language programs).
+thought2007 , unless something goes wrong. Where do you start then without any background knowledge?
David Rutten
Yes. When something goes wrong the shell gives you an error report like "foo: permission denied" and so on. Try this in Python and you'll get a 30 line backtrace or something else instead.
+David Rutten That's the OP's point, that thought2007 is totally missing. And everything breaks at some point, especially on Linux.
Still waiting to hear why you need to know this just to use the command 'ls -l > foo'. That's the point. Going on about details is like saying you have to be a mechanic just to be able to use an automobile. Utter nonsense. If your Unix command line breaks, take it to the local guru. Not that hard.
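For anyone curious what the "EXEC and file descriptors" details debated above actually look like, here is a bare-bones sketch (Unix-only, error handling omitted) of roughly what a shell does for ls -la > foo: fork a child, point file descriptor 1 at the file, then exec the program.

    import os

    pid = os.fork()
    if pid == 0:                                            # child: will become ls
        fd = os.open("foo", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
        os.dup2(fd, 1)                                      # stdout now points at foo
        os.close(fd)
        os.execvp("ls", ["ls", "-la"])                      # replace the child with ls
    else:                                                   # parent: the "shell"
        os.waitpid(pid, 0)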
Absolutely love the Linux tradition, but I agree that the package management can be a hassle.
Yes, there are times when it helps to know the complex operation of processes being forked off by the shell, but in my experience it is very, very rare. I've been working with Unix/Linux since the late 70s (Unix), and I've only needed to think about this a few times, and even then at a surface level. I'm with you about the difficulty of installing software, but I'm not sure that the OS is the cause...?
All shells should be replaced with the LISP REPL.
Actually yes, and use spaced lisp so there's no issues with unbalanced parentheses
Haskell LET'S GOO
Exactly what I thought: he seems to like LISP machines.
This, or Haskell
Spotted the LISP Machine user! :P
The Windows registry was created to stop piracy. In the DOS world, all the information about a program was contained in its own directory. You could copy that directory to another DOS machine and it would work. But you can't do that in Windows with a registry, because the program checks for entries it put there when it was installed. If you just copy the directory of the program to another Windows machine, it will not have the registry settings put there by the installer.
The registry works to prevent copying for most people, but hackers can get around it. Nevertheless, it is a terrible idea and it should be retired.
The registry is also a place to store system configuration. It has a real purpose and it seems to work well enough.
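Since the thread turns to the registry-as-configuration point: reading a value is straightforward with Python's standard winreg module (Windows-only; the key shown is just a well-known example, not anything specific to this discussion).

    import winreg

    key = winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SOFTWARE\Microsoft\Windows NT\CurrentVersion",
    )
    product, _ = winreg.QueryValueEx(key, "ProductName")    # e.g. "Windows 10 Pro"
    winreg.CloseKey(key)
    print(product)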
Warning: Statements in this video make a dangerous amount of sense.
Some things that seem to be missing from the description, and that seem pretty important: Side-by-side versioning (because programs change and stop supporting certain kinds of messages over time). How to avoid GUIDs when figuring out which file I want to reference (e.g. module includes). Meta-facility lookup (which C++ compiler do I run, which text editor, etc). Any standardized way to capture program runtime information for debugging (e.g. log files written by stdout, etc. Or does logging to a file just work basically the same way - you just don't tend to chain things via text?). How you implement something like a system-wide database program (that has a shared state between users, but really shouldn't be running as admin).
"The notion of strangers sharing the same machine is outdated, that doesn't actually happen"... yeah right, what about fucking servers?!?
@@avinamerkur1484 I meant servers where you log in via SSH, for example.
Ah yes, the world where endpoints don’t exist!
I know I'm 8 years late here, but I have been watching your videos about Low-level Unix stuff, and man, I'm just so appreciative.
You present SO MUCH content, so concisely and well explained, it is just incredible. You and Jan Schaumann (from the 'Advanced programming in the Unix environment' series here on YT) are the ONLY people I have seen go so far into Unix with such clarity. It's just awesome.
Also, I find it strangely comforting that even someone who is so clearly knowledgeable about Unix sometimes gets the file descriptors for STDIN, STDOUT, and STDERR mixed up too. I do that all the time; nice to know I'm not alone.
I just took a look at Jan Schaumann's Advanced Programming in The Unix Environment series - they appear to be very promising. Thank you for taking the time to point that out. I intend to look into it more closely when I find more time. :)
@@kugurerdem No problem! They are indeed FANTASTIC videos, I can't claim I've seen the whole series yet, but from what I have watched, they have been super informative. I'm happy to spread the word :)
Sounds a lot like Android, with the intents and all. Still some parts missing from Android though, like a real shell
Termux gives you a real shell on android. It has a package manager and runs without any virtualization.
@@anytimetraveler wow!
Yes and Android is really a mess, with apps having to ship with all their dependencies, leading to a lot of duplication which proper package management avoids.
@@patham9 I disagree. Statically linked libraries really are the way forward for Linux too, in my opinion, because otherwise packages start holding each other back: oh, you can't install this version unless you install glibc version such-and-such, but then these packages break because they require a different glibc, and so on. Storage is cheap; duplication is fine.
@@kazaamjt1901 Yes and no. There are libraries every program uses - core ones. They should not be duplicated; they are basically part of the OS. Others should be.
What many may not recognize is that, 7 years later, the commentary this video expresses has proven more true than ever.
Consider systemd which takes various processes and 'manages them'.
So: one singular process to manage all other processes, in an effort not only to 'simplify' operation but also to 'provide an extra layer of security', and in pursuit of those goals it has managed to add yet another layer of complexity. And to invoke or modify systemd, the interface is multi-line cruft which is passed along to still more configuration cruft.
And if that weren't enough, there are now layers upon layers of virtualization software like 'containers' in order to 'simplify the complicated', implementing and 'inventing' more 'tools' upon 'tools' - making the whole thing even more complicated than... and I loathe to admit this... more complicated than Windows.
The only thing that keeps linux (not gnu) from being squashed right now are the fanatics that support it.
I've been using GNU/Linux for many years and have always found the Linux kernel unnecessarily complicated, and naturally, for voicing my opinions, observations, and concerns, I have been shunned - as one might imagine happens to anyone who refuses to succumb to the fanaticism.
The straightforward solution to the majority of package management issues is to statically compile our programs
This is fine if you have good control over your dependencies. But if you have good control over your dependencies, it's then also unnecessary to do static linking. In other words, statically compiling everything simply proves you don't have good management of your dependencies. Perhaps you don't care about good management of dependencies. In that case, you can save a lot of time by just developing directly on production.
@@PeteRyland People can think whatever they want about my management of dependencies.
@@JrIcify Should I also imply that you don't care about security either? Then I'm pretty sure I'm not going to run your software on my systems, thank you very much. :-)
Compiling statically just sounds like giving up. You waste more memory too, and it's not elegant to have multiple copies of the same library on the system that each have to be updated by recompiling all the software, whereas with shared objects you just need to update that one file.
good idea, until you realise how fast your disk space is being consumed
The public-port problem might be solved by having a service registry which maps dynamically allocated ports onto universally recognized service identifiers. That way incoming connections would only need to know the standard ID of a given service, not the IDs of all the different packages which might be used to provide that service.
In some ways the existing port numbers already get used as a sort of service identifier, however the port number space is too small and the methods used to ensure uniqueness too ad-hoc to avoid significant conflicts. Switching to UUIDs would almost completely eliminate this problem.
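A toy sketch of that registry idea in Python (purely illustrative: the UUID constant is made up, and the in-process dict stands in for what would really be a separate daemon or a DNS-SD-style protocol).

    import uuid

    PRINT_SERVICE = uuid.UUID("6ba7b810-9dad-11d1-80b4-00c04fd430c8")  # hypothetical service ID

    _registry: dict[uuid.UUID, int] = {}

    def register(service_id: uuid.UUID, port: int) -> None:
        _registry[service_id] = port          # the provider's dynamically allocated port

    def lookup(service_id: uuid.UUID) -> int:
        return _registry[service_id]          # KeyError if nothing provides the service

    register(PRINT_SERVICE, 49152)            # whatever port the OS handed out
    print(lookup(PRINT_SERVICE))              # clients never hard-code a port number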
Read the "UNIX-Haters Handbook". All this mantra of "worse is better" and the underlying crud of the system design is just crazy. So instead of fixing it, they sold it as a feature - and this was back in the 80s.
Hi Brian, thank you for publishing so many informative videos. While I do not always agree with your arguments, I always feel like I have learned something useful from your videos. As an avid Linux fan, it seems to me that the variants of Linux are the result of an army of volunteer programmers. What you want is an entirely new OS. Wouldn't it be more productive to have a discussion from a more neutral stance. Linux is not really a "plug-and-play" operating system meant for the casual user (despite efforts to package components ala Redhat, etc..). My love of Linux is precisely the freedom to modify (through shells) my environment and to use shells as "glue" for engineering tasks involving many programs. I love the fact that no one (read Microsoft) is going to swoop in and remove features which I use. If you are talking about a "new" system, all of your comments are valid. I just don't get the point of picking on Linux, it is not the 900 lb elephant in the room. Thanks again for all your efforts. Bob
+MrBebopbob Well... at the beginning of the video he said all OSes we have are too complex, but then he talked only about Linux.
This is fallacious: the fact that Linux is an OS made by an army of developers doesn't justify the clusterfuckness of Unix shells. That clusterfuckness doesn't come from Linux being an open-source project but from trying to maintain everything about the Unix tradition as-is, which was the point of this video, and I am quite amazed you missed it.
"Linux is not really a plug and play operating system meant for the casual user" - yes it is, or at least it should be. Linus Torvalds himself said that one of the biggest problems with Linux is the lack of standardization that makes it harder to set up for common users. An operating system should be both easy to use and versatile and malleable in more experienced hands; those two things are not mutually exclusive at all, and thinking they are is a direct attack on Linux's progress as an operating system.
"My love of Linux is precisely the freedom to modify" - again, what does this have to do with the fact that the Unix shell conventions are outdated and overly complicated? It is completely unrelated; you can have an operating system that allows for that without the Unix nonsense.
"I just don't get the point of picking on Linux, it's not the 900 lb elephant in the room" - exactly: precisely because it is not the 900 lb elephant in the room, it is the one we should care most about improving. Just because you are a mindless fanboy who thinks Linux is utterly perfect and cannot be improved doesn't mean the rest of us Linux users are that dumb.
@@marcossidoruk8033 The shell is never nice. Seriously. It really depends on what your preferences are. Bash is not the only shell, either - just the most popular.
I would love it if the shell for Unix or any OS used a C-like syntax: why "ls -la > foo" and not "writeFile("foo", listDir())"?
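A rough sketch of what that might look like if the "shell" were simply a Python prompt; write_file and list_dir are hypothetical helpers invented for the example, not part of any real shell.

    import os

    def list_dir(path: str = ".") -> str:
        # roughly `ls -la`: one line per entry, size then name
        entries = sorted(os.scandir(path), key=lambda e: e.name)
        return "\n".join(f"{e.stat().st_size:>10} {e.name}" for e in entries)

    def write_file(name: str, text: str) -> None:
        with open(name, "w") as f:            # the `> foo` part
            f.write(text)

    write_file("foo", list_dir())             # the whole `ls -la > foo`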
I'll also lean toward saying that this proposal's view of security is outdated.
It seems to stem from a sense that applications are installed because the user wants them, and that they do what they are supposed to do except in case of errors - which was probably accurate in the 80s, or even in a modern, fully controlled Linux environment, but it certainly doesn't reflect what happens in a real end-user environment.
In the days of smartphones - and even PCs starting to feel like smartphones - we have hundreds of applications installed without user consent or knowledge, doing untold things with accessible data, with the primary objective of invading user privacy, using the tiniest clues to rebuild a profile and sell it to the highest bidder.
Security and user control should be way, way more central, with default giving way more power to users as opposed to apps.
The Android permission model is a good step in this direction, and I would consider it a minimum nowadays. But only a minimum: there is still too much data that can be siphoned off by rogue applications on Android.
Brian, you are wrong: we cannot abandon the entire concept of a shell and replace it with a general-purpose language. You need a way to quickly interact with a running system, and some form of shell is an ideal answer. What we actually need is a modern, widely available OS built on a microkernel with clean low-level IPC mechanisms... so basically Fuchsia. Then we can iterate on userland concepts without trashing performance or ignoring driver and platform complexity.
You could use python as a shell, just try it to understand why it is not a good idea.
To your last: Yes, I disagree with you, and yes, but I've already been doing so for a long time before ever watching this. You seriously praised systemd unsarcastically? Really? And then suggested that we force something that is very much userland into kernel, not just backtracking all the work done recently to take cruft OUT of the kernel that belonged in userland, but going even further than Windows ever did into monolithicness? No. We should NOT put package management into the kernel. At all. Ever. Doing so means a reboot with EVERY software change. Microsoft is bad enough on that score, this would be worse.
I won't disagree with you that there's a lot that needs to change. I also won't disagree with you that most package managers are bullshit. Have you ever tried Gentoo? I think it manages to get right most of what you're complaining about without introducing quite as many of the drawbacks you don't seem to have recognized to your prescribed approach.
There are OSs with the ability to edit the kernel without a reboot. Brian is right. You are stuck with the Unix state of mind and you can’t see outside of it. Learn more about other operating systems to see how the world was before and after Unix came to the world.
@@saymehname - no, you're missing the point. What's being suggested here is a complete re-imagining of a computing interface and OS. There was no need to actually mention Unix or Windows here at all, except that I guess some starting place was needed to push off from. The hubris on display is staggering, but that's par for the course, and there's little respect shown for the pioneers who were working within a very different environment.
You also missed the point about systemd - a Unix state of mind is perfect for a Unix system. Which is why systemd is the wrong approach.
This new suggested thing might be an interesting place to be but I wouldn't take a Unix mindset to it, it's not Unix. It's so different that you'd be forced to approach it with an open mind, or fail.
Will is completely disingenuous, if not outright deceptive, about 'ls -al > foo' though. The point of that syntax, like any command interpreter, is to hide the complexity and make it readable and repeatable for ordinary users. There will be huge complexity in getting any OS to do something like that. Even in this new call-response model with windows for responses and links and all that - how much complexity is under the hood? What does the kernel look like, and how is it going to be any less complex than the way Unix does/did things? Hmmm?
> *ls -la > foo*
While others see bloated piles of complexity, I treat it like fine art or poetry. With just 12 simple characters, we can instruct our silicon machines to do multiple operations. Truly amazing!
same!
yes!
@Barry Manilowa Is that really a hard question? If it existed before the ls operation, yes. Else, no.
@Barry Manilowa I mean, fine, but it doesn't take more than 5 minutes of experimentation to learn how it works. Or the damn manual.
@Barry Manilowa Most shell users are naive users only for a brief period of time, and after that the power of the shell becomes more valuable to them. For example, let's imagine the command shell were replaced by Python. How exactly would that make things easier to understand for naive users who don't know Python programming at all?
Maybe there would be an "ls" function and a write-to-file function, but I think the problems would get more complex, not less complex, for a naive user who doesn't know anything about programming, functions, function calls, variables, function return values, execution order, etc. The problem of execution order, for example, still remains; you just move it to a different place!
I agree that Python is a nicer language than shell languages for scripting and doing anything more complex. But the current way of working allows you to do that: it lets you use the tool that best suits your use case. I think the concept of pipes and processes is rather simple and efficient compared to how you would do it in a programming language, where you have to think about variables and their types, take a certain number of characters at a time, and make special cases for this and that just to handle the character stream the way a simple shell pipe does.
This is an interesting topic. I've wondered why a modern, single user, hardware independent operating system hasn't been developed. Like something you can use on your phone, then dock into a workstation. You'd need a standard interface like a kernel running on each device and just switch the context between the two when docking/un-docking. Web applications try to solve this issue of a unified UX but it's turned all of our devices into web portals. It just seems like a kludge to me.
The only difference between a phone and a computer is the screen size and the user input, so I don't get why everything is so different, even for Apple. Websites are just executables on the internet, so I don't get why they're also so completely different. In fact they're so different that a website isn't even an executable - it's markup that can execute a script. It makes no sense whatsoever.
@@antifa_communist Not only screen and user input. It's also the processor and the hardware.
ANDROID AND JAVA. Man.
I'm glad I came across your video. I've been thinking along the similar lines for some time now. I used to work on operating systems in the days before Unix, when they were a lot simpler. It seems to me Linux/Unix became complex to incorporate features in such a way as to minimise memory and processor requirements for a concurrent multi-user environment - constraints which just don't apply any longer (I'm the only user of my PC and it has 64 GiB of memory - as opposed to 16 KiB and one user or 2 MiB and 20 users). Now that I'm retired I might get round to looking at it.
Those constraints may not apply to you anymore, but that doesn't mean the same is true for everyone. Our home server is a shared device; I wouldn't want my kids to be able to go through my files on there. And let's not forget the developing world: plenty of small businesses there use a multi-head setup in their offices so multiple people can work on one computer simultaneously.
I agree with you that naming could use some updates, but generally there are two major criticisms I have of this reasoning: First is that the historical 'cruftiness' is not really a symptom of a disease; it's an effect of evolution. Designing a system to replace it means re-learning all of the various reasons why the system is the way it is, which violates the generally good assumption that you're standing on the shoulders of giants.
The second, which to me is worse, is that you're mixing abstractions. Your first example - "ls -la > foo" - goes on a long twisty path talking about implementation details. Those implementation details are generally not relevant when debugging a shell script, unless you're working on severe edge cases. But - big but - edge cases like that crop up in every programmable environment whatsoever, it's just the nature of the beast that computational reality is a bit messy, and nobody has yet devised a system where that isn't true.
A final consideration is that there is usually a pretty good reason for the oldest design choices in Unix environments. One of them stems from having a *very* simple process model, which is: Every process (except init) has a parent process; everything that isn't a process is a file; and text formats should be standard interchange where possible. Put those together in an environment where you don't have modern luxuries, and you wind up with terse, file and text-oriented commands a la Unix. So far, nobody that I am aware of has managed to produce a working shell model that doesn't follow this pattern and also isn't a huge mess of extra typing, or so particular to the system that it becomes a blob anyway.
Consider what you really need to know for that shell example: You need to know ls is a command, you need to know "-la" is an option, and you need to know > is the redirect to file operator. If you know those, you know everything you need to get started -- and those are really just vocabulary items. If you have some edge case you ran into, you'll get an error back from your shell describing what the error was, or, just like in any programming language, you'll go down into the rabbit hole as far as you need to go to debug, and then you won't make the same mistake again.
TL;DR: a better thing hasn't been done; "giants" made the bad thing, so there must be a good reason.
Both are weak arguments. First, they are unprovable. Second, they don't lead to any actionable conclusion other than to avoid solving the problems.
Idk how I feel about this ngl. I'm just thinking about a multiuser environment, like a school, for example, and how this setup wouldn't be the best for that, because the users can see each other's data.
Your reasoning for why 'ls -la > foo' represents a huge amount of complexity is the equivalent of saying that your car shifting gears is a hugely complex process due to the inner workings of the transmission and the nature of how it interacts with the rest of the vehicle. Yes, it's an accurate statement, but does that complexity really affect your experience as a driver, or inhibit your ability to make use of such a mechanism?
I should point out that I do agree with you that the shell seems like a 'giant kludge'. It is not a clean interface to use in many cases, but it's still an incredibly powerful tool that is a lot more messy on the surface than it is outdated.
You're a bit wrong about Plan 9. The idea was to make a consistent, cruft-free system by applying the "everything is a file" idea for real, so every API is a set of special files controlled by a server. This together with mounting remote folders locally gives network transparency. Now, the developers did of course experiment with this, but only as research on top of something more fundamental.
+lonjil Yeah, I shouldn't have said it was the main objective. I think plan 9 failed for being both too ambitious and too unambitious. The more consistent use of special files was probably an improvement, but not different enough to overcome the incumbency of existing Unixes. Meanwhile, the network transparency stuff was half-baked and of nebulous benefit.
+Brian Will Hey Brian, I was doing Ruby on Rails last year and it was fun. I started first year at university doing a software development degree and the language is Java. It's so fucking retarded and the enjoyment is not stimulating me as much. Any advice??
The idea of using an existing kernel and writing a new userspace for it has been tried. There's Android (Linux kernel) and OSX (BSD Unix kernel) and Chromebook (Linux kernel) and probably others, maybe some game consoles or other devices that have some amount of userspace apps. Every time there's a new userspace, that's one more platform that software has to be ported to, or else it's not available there or it can only be run in emulation there.
Whenever there's one of these new userspace devices, the first thing that people do with it if they can is jailbreak it and get a GNU userspace and command line set up and running, so that they can install some actually good software of their own choice, instead of waiting for ports. So the proposal is just running around in a historical circle, a well beaten path.
What advantage would the suggested system provide that would make it worth the hassle of porting to another platform, one that has completely different package management, files, permissions, process management, and so on, so that the software has to be mostly rewritten? I don't see one, honestly (other than the promise of using hashes in package management to ensure that files are what they say they are, which is something good languages on good systems can already do).
The area of security, where this proposal falls short badly, contrasts with the opposite extreme security system, GNU Hurd, which gives every program a sandboxed limited view of resources (with users and programs not being allowed to grant permission to more than their own limited view.)
You're confusing kernel functionality with userspace. GNU Hurd is a kernel - and it should be concerned with security, the userspace shouldn't - it should already be sandboxed by the kernel.
Huh, I fail to see how making a new userspace the preferred environment for new software would eliminate the ability to run legacy software. If we can do better, it's at least worthwhile to explore what it would take. If it makes it easier to write better software, that's a huge benefit that should not be ignored.
text is the universal interface. this is a key part of the unix philosophy. if your program prints a line of text, then your program can be pieced together with other programs in a unix pipeline. expecting programs to output a special format, like html, goes against the unix philosophy. you lose universality, which loses interoperability, which loses composability.
your complaints against shell are irrelevant. if you do not like bash, then use fish or write your own shell. python works too, as just about any language can pipe and redirect. we use shell because it is designed for interactive use. it focuses on writability instead of readability.
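for example, piping and redirecting from python instead of a shell looks roughly like this (just an illustrative sketch, nothing standard):

    import subprocess

    # rough equivalent of `ls -la | grep "\.txt" > foo`, done from python
    ls = subprocess.Popen(["ls", "-la"], stdout=subprocess.PIPE)
    with open("foo", "w") as out:
        subprocess.run(["grep", r"\.txt"], stdin=ls.stdout, stdout=out)
    ls.stdout.close()
    ls.wait()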
your proposed solution for packages not only makes development harder, it destroys the abstraction for regular users as they think in terms of folders and files.
i also disagree with the single user mode as others have pointed out. computers are very often shared at libraries, schools, and work places.
I love the fact that you are at least thinking about these things and looking forward to what an OS can be like. Unix, Linux, Windows, mac... these are not the pinnacle of computing. We can do better.
Develop something or stop WHINING.
The notion of the shell using an HTML-like hypertext reminds me of TempleOS, which implements something like that with its DolDoc system. Temple also uses a C-like language for shell commands (and practically everything else), much as you described.
If you aren't familiar with it, it's a toy OS built by a lunatic, but if you can look past that it's actually pretty brilliant in a lot of ways.
> it's a toy OS built by a lunatic
TempleOS is a temple of God! How dare you insult his Holiness Terry A Davis. You must be one of those glow in the dark dudes
A lunatic??? Terry Davis was beyond all of us, god rest his soul
"every piece of text is clickable and does something" is hardly Terry's idea. Oberon was like that
God is not a disease
Emacs has full-featured text editing of code with commands to execute expressions, blocks or all of the code in a buffer, with output sent to a different part of the UI, not interleaved with the code on the screen.
It seems like you have this neat intersection of "the simpler the system, the more it conforms to the vision" as well as "the simpler the system, the easier it should be to produce". If it's such a great idea, why is there not a GitHub project? To make it simpler, start by targeting one piece of uniform hardware. Make it for the Raspberry Pi 3 to start with, since it's probably the most accessible and widely owned piece of uniform hardware on the planet. Further, the main project could just maintain it on the currently most widely adopted single-board computer and rely on fork projects for any other hardware. That way you could focus more on the software specifically.
That sounds like a great plan.
did anything ever come from this?
@@jacobschmidt Lol, no. Ofc not. It's far easier to tear existing ideas down, and blabber on about potential replacements, than it is to actually build something to compete with existing ideas that have withstood a good few decades of people hammering on them.
>systemd is good
oh boi
What you propose is not entirely clear, because the complexity would require many books to explain the fine details, which are fundamental to such an endeavor (1% idea, 99% execution). But I can see what you mean. People may agree on some problems, but that doesn't mean anybody would agree on the solutions you propose. And there is a very clear vision and philosophical standpoint behind many choices in Unix. If you can't agree with 90% of what it is, just use another OS. You can't change such profound things as the very *defining* foundations of something (e.g. the filesystem theory of Unix). Doing so would mean that you have a completely different thing, and not the "same but better"!
You may as well start from scratch and at that point not look at Unix at all. That would make much more sense.
The shell is NOT a fundamental element of Unix. It is trivial to create a Linux distribution where the default user shell is /usr/bin/python, just as easy as switching between bash, ash, csh, sh etc. You can also set it as a personal preference. But I think that using the shell is more beginner-friendly than using the python interpreter to perform the same tasks.
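To make that concrete, here's a minimal toy "shell" in Python (all names made up) - ultimately a shell is just a program that reads a line and starts other programs, and on Linux chsh points your account at whichever one you want:

    # toy interactive "shell": read a line, split it, run it (illustration only)
    import shlex
    import subprocess

    def toy_shell():
        while True:
            try:
                line = input("pysh$ ")
            except EOFError:
                break
            args = shlex.split(line)
            if not args:
                continue
            if args[0] == "exit":
                break
            try:
                subprocess.run(args)
            except FileNotFoundError:
                print("command not found:", args[0])

    if __name__ == "__main__":
        toy_shell()

Typing plain commands into something like this is easy; doing the same work with raw interpreter syntax is where the beginner-friendliness argument kicks in.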
That a lot of system software in Linux has been written in the shell language just shows its strength.
Nah.
Some good ideas. But also some that at first sound terrifying, like "kernel-level package management": the kernel is about the hardware/software interface - it's a hardware abstraction, not a userspace abstraction. Shell responses in HTML? I want to manage my system, not read magazines. JSON with URL syntax highlighting and functionality is good enough for humans, storage, processing and pipelining. It would be good to have an OS-wide standard for package management, though.
I really don't get what problem is being solved here. There is always a requirement for some prerequisites, something to build software on.
Even with sync services such as Dropbox, OneDrive, and all of them -- because we want to be able to work offline and open/save files quickly, pretty much anything you're working on is going to be cached on the workstation where it needs to be protected from other users. The idea that people aren't sharing computers with personal data that needs to be protected from other non-admin users is preposterous. Additionally, the idea that out of any two users on a shared system, one of them is definitely an administrator is "not even wrong."
great presentation, just two flaws. a) who is going to do it? b) what are the incremental steps from, let's say, a linux to this system (you can't expect everything/all applications to be done from scratch)
What you propose is still quite complex tbh. I also had a lot of these ideas, but the more I learn about programming language theory, the more I think we just need to decouple most of the pieces that make up our software. This way we could iteratively get rid of any unnecessary complexity and replace it with better abstractions bit by bit. That is, we're not going to one day come up with the perfect replacement for everything that exists today and design it in one go.
Here's my idea of an 'ideal OS':
First of all, it's supposed to be a so-called safe-language operating system, i.e. one that doesn't need CPU security features to ensure privilege separation. This is because there shouldn't really be a hard distinction between the kernel and user space - kinda like what microkernels have tried.
Then, how privileges should be managed?
Well, the whole OS should consist of dead simple abstraction layers with clear interfaces fully described in code.
There'd be no concept of users, permissions (neither Unix-like nor Android-like) at this point. The lowest layers know about the hardware and can touch it directly. Everything above works with the safe abstractions the lower layers provide.
First, you need a (correct-enough) model of the hardware. That is, data structures that behave like or describe the device you want the OS to run on.
After that, you can write thin abstraction layers that take these hardware behavior descriptions and present them as more generic models. E.g. if you know exactly how a bunch of different PCI-e network cards work, you can have a piece of code that provides an Ethernet protocol device out of this. Or if you have a hard drive, you can model that as a huge array of bits or whatever, together with its runtime behavior (how much time reads or writes take, what happens when you lose power at some point, etc.)
Somewhere alongside that you could have a layer that can turn the abstract CPU device into threads. Or turn RAM into allocatable memory.
On top of that, you can put a layer that implements a textual shell. Or alternatively a layer that runs a GUI - consumes the keyboard, mouse, display, speakers, … and uses that to drive a desktop environment. Probably it should then also provide the concept of processes - otherwise any applications would need to be built-in (like on old feature phones).
Still, all of that should sit in the compile-time APIs and not necessarily in any runtime ABIs. The compiler should be allowed to optimize any of that away and the OS developers should easily be able to swap out any part of it.
However, at some point you will want to run 'the userspace'. Doing so would essentially boil down to making that specific layer provide a very concrete interface to whatever will run on top of it (an ABI I guess) and validating whatever is to be run (as long as that's our security model). This isn't really that different from all the layers before, except it now requires a few more technicalities to be modeled.
Looking at what I wrote, it doesn't sound convincing at all. I feel like this comment severely misrepresents this idea. But still, maybe someone will find it interesting.
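If it helps, here is a very rough sketch of what those layer interfaces might look like; Python just stands in for whatever safe language such an OS would actually use, and every name here (BlockDevice, EthernetDevice, RamDisk) is hypothetical:

    from typing import Protocol

    class BlockDevice(Protocol):
        block_size: int
        def read_block(self, index: int) -> bytes: ...
        def write_block(self, index: int, data: bytes) -> None: ...

    class EthernetDevice(Protocol):
        mac: bytes
        def send_frame(self, frame: bytes) -> None: ...
        def receive_frame(self) -> bytes: ...

    # A driver layer turns one concrete piece of hardware into one of these models;
    # a filesystem layer consumes any BlockDevice, a TCP/IP layer any EthernetDevice,
    # and (in this vision) the compiler is free to inline the whole stack away.
    class RamDisk:
        """Toy BlockDevice backed by memory, for testing the layers above it."""
        def __init__(self, blocks: int, block_size: int = 512) -> None:
            self.block_size = block_size
            self._data = [bytes(block_size) for _ in range(blocks)]
        def read_block(self, index: int) -> bytes:
            return self._data[index]
        def write_block(self, index: int, data: bytes) -> None:
            self._data[index] = data[:self.block_size]

    disk = RamDisk(blocks=16)
    disk.write_block(0, b"boot sector goes here")
    print(disk.read_block(0)[:21])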
Kinda funny listening about complexities of the shell and then hearing that systemd "fixed" things. Oh, boy.
On any given computer, there is really only one thing of any value, and that's the users' data. The rest (programs, OS) can just be reinstalled, but the data must be protected. This is the complete opposite from UNIX, which protects the operating system first and foremost, and leaves your data at the mercy of a single mistyped rm -rf / or malicious program.
so what should your data be at the mercy of instead?
@@shallex5744 Nothing! By default, applications only need access to their own files. Access to files belonging to other applications should explicitly be allowed by the user, using a tool that has only that purpose. Sandbox everything.
Even the shell doesn't need unrestricted access across the filesystem. The shell runs as a user; why not run it as an application instead? And if you think that's odd: you don't run it as super user all the time either, do you?
I watched several of your excellent videos and wondered if you ever looked at Forth as a concept that addresses many of these concerns.
+Tim Hayward Nice trolling :) Seriously - where would Forth come in? As the new shell language? The last Forth I used could only write fixed-size 255-byte blocks to disk and did not even have a notion of a file. The last I heard of Forth rising from the realm of the dead was in the context of boot loaders.
It is alive, though not well. It is in other places. It is in Postscript. You can get a new 144 core super processor with only forth, but that isn't what my comment was about.
In Forth:
The shell is the ide is the compiler is the loader
It is painfully simple
It is infinitely extensible, supports overloading, so my overabstraction will never interfere with your overabstraction.
It is closely tied to the hardware and sometimes, vice-versa.
What if we just made a minimal Turing-complete architecture? What does the perfect implementation look like? Computers don't really do that much. What do they need to be told?
@@TimHayward Sounds a bit like BASIC on home computers, which was equally complete and self-contained (albeit far too often very slow). They also coexisted in that context (say, Jupiter Ace versus Sinclair ZX-81). But frankly, I never understood why Forth had to be so weird and untidy compared to most Algol-based languages (except the ugly C family). One of the few popular languages I never learned in the 80s/90s.
I agree with many of the problems that you described:
* Shell syntax is horrible, especially for more complex stuff than just starting programs. And yeah, having to learn how to properly escape all the arguments etc. is a pain in the ass.
* Shell output could be something better than just plain text with some colors. One big improvement that could be implemented right now would be to use Markdown for the terminal output. This would be backwards compatible with current terminal emulators but would also provide nice formatting for terminal emulators that support it (see the sketch after this list).
* More and more programming languages have their own package managers. I think this isn't a fundamental flaw of how modern systems work, though, but only of current package managers.
* And some more that I don't remember.
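Regarding the Markdown point above: the program just prints Markdown, which stays readable in any terminal, and a Markdown-aware terminal emulator (or, assuming it's installed, the third-party "rich" package) could render it nicely. A rough sketch:

    report = """# Disk usage

    * / - 42% used
    * /home - 73% used
    """

    try:
        from rich.console import Console
        from rich.markdown import Markdown
        Console().print(Markdown(report))   # rendered headings and bullets
    except ImportError:
        print(report)                       # plain terminals still get something readable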
Now some questions about the architecture of your model:
* If all processes can see all user data, how does the security model work? Having every process only see its own data, I could understand that; then you could use IPC mechanisms to pass user data to the processes. But letting a process have access to all the user's data seems unreasonable.
* How are those globally unique IDs assigned and managed? This sounds like it will become either unmanageable or fail in another way, like some entity having control over every ID, or some failing model like the current Certificate Authority system.
* How in the world can you create files with a global UUID? And if it isn't global, what happens if you attach a USB flash drive with a file on it that has the same UUID?
* Having permission groups might not be necessary in most home computing scenarios, but when you have hundreds of people working on a project you can't just give everyone access to everything. But this might then be a server-side problem that lives outside of the client-side computers, so this might actually work because it doesn't need to be part of the low-level design of the system.
* How do you manage hardware access without a permission system? You can't just let every program access every piece of hardware; this would be disastrous from a security perspective.
Btw: Why even use a global registry for configuration if you can just let every application have its own configuration and expose it over the IPC mechanism if the configuration options are required from outside the application?
Also I think that most of the problems you describe can be properly fixed on top of the current system, or rather by gradually changing it without having to throw out everything and start from scratch (just like refactoring in programming).
As a sidenote:
Your model has, in parts, interesting similarities to what is already done in modern web browsers:
* The "shell" uses a modern dynamic language (javascript).
* There are no files; an application can store its data in the local storage (which also serves as a registry for configuration). The UUIDs would be the reference that programs hold to the data. Although JSON has a similar tree structure to file paths.
* There is no process hierarchy.
* I could probably find more.
And it even works across multiple platforms.
+shevegen My whole point was that most of the problems described in this video can probably be properly solved on top of current unix systems!
And if some problems keep existing, the systems can be adapted without completely throwing them away.
EDIT: Well, not my whole point. But I still think this is possible and it would be much more reasonable to actually implement (in terms of effort required).
+Max Bruckner (FSMaxB)
Thanks for the feedback! I think I can address most of your points:
> If all processes can see all user data, how does the security model work? Having every process only see its own data, I could understand that; then you could use IPC mechanisms to pass user data to the processes. But letting a process have access to all the user's data seems unreasonable.
I'm not sure the system should protect user data from installed programs. This isn't something done in Linux anyway, right? My home directory is visible to every program, e.g. any text editor can open any text files in home. Android attempts to treat certain kinds of data, like contacts, as requiring explicit privileges, but I'm skeptical that users should be protected from their own installed apps. Sure, we want to mitigate the damage a malicious user program might do, so we don't give every program superuser privileges, but specially classifying the user's data adds complexity to the system and burdens the user.
If we really did want to go that route, though, the way to do it would be for each program to store user data in its own filespace. A program might make this data available through IPC requests, and it could whitelist other programs upon approval of the user.
Maybe then we want some kind of system-wide way of managing these privileges...but that might just end up more complicated. This solution also complicates backing up and transferring user data. It also arguably traps user data in silos.
> How are those globally unique IDs assigned and managed? This sounds like it will become either unmanageable or fail in another way, like some entity having control over every ID, or some failing model like the current Certificate Authority system.
> How in the world can you create files with a global UUID? And if it isn't global, what happens if you attach a USB flash drive with a file on it that has the same UUID?
Obviously UUID's are not reliable because they can be trivially spoofed. The idea is that, anytime veracity matters, you rely on the version id (the hash), not the UUID.
Time-based UUID's (version 1) include a high-resolution timestamp, and random ones (version 4) carry 122 bits of randomness; either way, as long as the generating code is not broken or malicious, collisions between any two UUID's are highly unlikely. Still, we would want public catalogs of packages to resolve such conflicts and--more importantly--to verify hashes. Unlike with DNS, we don't need a single centralized catalog. I could use a catalog that I trust, and you could use a totally different one that you trust. This is basically what we do already with every Linux distribution package repo. (Using numeric ids instead of names also means we can sidestep politics over who gets what desirable name.)
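Concretely, the split between identity and veracity could look something like this (hypothetical scheme, Python just for illustration):

    import hashlib, uuid

    def new_file_id() -> uuid.UUID:
        # uuid1 embeds a 100-ns timestamp plus node info; uuid4 would be 122 random bits.
        # Either way, accidental collisions are vanishingly unlikely.
        return uuid.uuid1()

    def version_id(content: bytes) -> str:
        # The version id is just a content hash, so anyone can verify it,
        # and it can't be spoofed the way a bare UUID can.
        return hashlib.sha256(content).hexdigest()

    file_uuid = new_file_id()
    v1 = version_id(b"hello world\n")
    print(file_uuid, v1)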
As for file UUID collisions, either from error or malice, the system should cope by just letting them live side-by-side in the same filespace. UUID's resolving to multiple files is something human users can cope with. Programs, on the other hand:
1) have complete control over their own filespace, and looking up files in your own filespace is a different syscall, such that external collisions won't interfere
2) whenever possible, programs should specify files by the version id (the hash) instead of just the UUID
Annoyingly, files with the same UUID in separate filespaces may have different metadata attached. Ideally, every copy of a file across time and space would have the same label everywhere, but this is already a problem we deal with. The only new wrinkle here is that two unrelated files might erroneously share the same UUID. I think this would be an annoyance but not a real security/config problem.
(BTW, there's a whole angle I glossed over about how files would, by default, be treated as if they are immutable, e.g. opening a file to write produces a new file rather than overwriting the existing one.)
> Having permission groups might not be necessary in most home computing scenarios, but when you have hundreds of people working on a project you can't just give everyone access to everything. But this might then be a server-side problem that lives outside of the client-side computers, so this might actually work because it doesn't need to be part of the low-level design of the system.
Yes, I think as I mentioned, any sort of many-user concerns belong at the application level. We have servers running webapps or services in the backroom, and users--outside increasingly rare cases--all have their own machines (multiple per person, in fact). I think administering many users at the OS level is just outdated.
> How do you manage hardware access without a permission system. You can't just let every program access every piece of hardware, this would be disastrous from a security perspective.
Admin/non-admin might be too simple. Perhaps the system API is split into separate dependencies, such that a package effectively states explicitly which hardware it will use. A package requiring certain system API's would require special approval upon installation, e.g. this program may use the webcam. (On the other hand, users of Android seem to have been trained to just blindly click through these permission screens. Perhaps only admin users should be able to approve packages requesting certain kinds of special access.)
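As a sketch of that "package declares which hardware it will use" idea (every name and capability string below is made up):

    # Hypothetical package manifest: the package states which system APIs (i.e. which
    # hardware) it intends to use; anything beyond a baseline set needs explicit,
    # perhaps admin-only, approval at install time.
    MANIFEST = {
        "package": "video-chat",
        "requires": ["network", "webcam", "microphone"],
    }

    BASELINE = {"network", "storage:own-filespace"}
    NEEDS_ADMIN = {"webcam", "microphone", "raw-disk"}

    def approvals_needed(manifest: dict) -> set:
        """Which declared capabilities fall outside the no-questions-asked baseline."""
        return set(manifest["requires"]) - BASELINE

    print(approvals_needed(MANIFEST) & NEEDS_ADMIN)   # {'webcam', 'microphone'} (order may vary)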
> Btw: Why even use a global registry for configuration if you can just let every application have it's own configuration and expose it over the IPC mechanism if the configuration options are required from outside the application?
I've considered something like that. But then there's the question of where to store each user's general settings and system wide settings. We could store them in files of user space, but as previously mentioned there's zero protections on those files. Again, I'm not really clear on this area.
> Your model has, in parts, interesting similarities to what is already done in modern webbrowsers:
Sure, there are parallels with browsers, but of course there's a lot we just can't do in browsers, e.g. run a server or natively compiled games.
+Brian Will Now I have a much clearer picture of your proposed model.
Just a few additional comments:
> I'm not sure the system should protect user data from installed programs. This isn't something done in Linux anyway, right?
Yes, Linux doesn't do that and I think it is wrong. I don't want to be forced to trust every single binary that I ever run not to do bad stuff with my data (on purpose or by accident; see the Steam client deleting home directories because of a bug in a shell script, for example). Also, if programs are separated from user data, companies that develop them don't even get tempted to snoop around in it.
>If we really did want to go that route, though, the way to do it would be for each program to store user data in its own filespace. A program might make this data available through IPC requests, and it could whitelist other programs upon approval of the user.
Exactly, that's what is used (or at least planned to be used) by xdg-app. The user grants access to a certain file by selecting it via the file explorer. This might be expanded by passing files via the command line. Shared data can be whitelisted by an application's dependency manifest.
> (BTW, there's a whole angle I glossed over about how files would, by default, be treated as if they are immutable, e.g. opening a file to write produces a new file rather than overwriting the existing one.)
This sounds like a waste of disk space and a source of confusion for users, because they would have to differentiate between different versions of "the same" file. But this could be handled like regular copy-on-write with versioning, showing the user only the newest version and providing a backlog. Versions that are older than a certain amount could then automatically be marked obsolete so they can be overwritten when space is needed.
>Admin/non-admin might be too simple. Perhaps the system API is split into separate dependencies, such that a package effectively states explicitly which hardware it will use. A package requiring certain system API's would require special approval upon installation, e.g. this program may use the webcam. (On the other hand, users of Android seem to have been trained to just blindly click through these permission screens. Perhaps only admin users should be able to approve packages requesting certain kinds of special access.)
Yeah, this could be done just like android with the slight modification that access policies could be changed separately from the actual applications by the repository maintainers. This model provides the possibility for more advanced users to sanitize the kind of special access an application gets, even if the developer wants all of it.
>> Btw: Why even use a global registry for configuration if you can just let every application have it's own configuration and expose it over the IPC mechanism if the configuration options are required from outside the application?
> I've considered something like that. But then there's the question of where to store each user's general settings and system wide settings. We could store them in files of user space, but as previously mentioned there's zero protections on those files. Again, I'm not really clear on this area.
Just store the global configuration inside the filespace of a configuration application that allows moderated access via the IPC mechanisms. Every non-global configuration just lives in the filespace of its application. This is also how the registry you described could be implemented without having to incorporate it into the base system.
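Something along these lines, say (the transport is faked with direct calls; in the real system it would be the OS's request/response mechanism, and all names are made up):

    class ConfigService:
        def __init__(self):
            self._store = {}            # lives entirely in this app's own filespace
        def handle(self, request: dict) -> dict:
            key = (request["app"], request["key"])   # settings are namespaced per app
            if request["op"] == "get":
                return {"ok": True, "value": self._store.get(key)}
            if request["op"] == "set":
                self._store[key] = request["value"]
                return {"ok": True}
            return {"ok": False, "error": "unknown op"}

    cfg = ConfigService()
    cfg.handle({"op": "set", "app": "editor", "key": "theme", "value": "dark"})
    print(cfg.handle({"op": "get", "app": "editor", "key": "theme"}))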
+Max Bruckner (FSMaxB)
All versions of a file in a filespace share the same metadata. Generally in file listings, you only see the latest version with a column indicating the number of old versions. Users can expand a file in the list to browse and select its particular versions.
Overhead from keeping a bunch of old file versions around could be mitigated by applications simply deleting the previous version as their normal 'save' operation. Better yet, applications should make the choice very clear, e.g. 'save new version and keep old' vs. 'save new version and delete old' (not sure if there's a pithier way of expressing this distinction). Applications producing large files should maybe warn users about the overhead of keeping old versions.
The pseudo-immutability thing is mainly to accommodate the version hash thing: as soon as you modify a file, its version hash becomes invalid, and until the file is closed, it doesn't make sense to recompute a new hash. So it seems logical to make copy-on-write the norm and think of modifying a file as actually producing a new separate version. There do seem to be cases, though, where normal mutability might be preferable, such as with log files. Perhaps just leave it up to each program on a case-by-case basis. (Of course, while a file is being modified, it can't have a hash id, but I think it works out okay if the open file is known just by its file descriptor until it is closed. This works because programs share files through IPC by descriptors, never by names.)
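A toy, in-memory sketch of that "saving produces a new version, hashed on close" behavior (purely illustrative):

    import hashlib

    class VersionedFile:
        def __init__(self, uuid: str):
            self.uuid = uuid
            self.versions = []                 # list of (hash, content), oldest first

        def save(self, content: bytes, keep_old: bool = True) -> str:
            vid = hashlib.sha256(content).hexdigest()   # version id computed at "close"
            if not keep_old and self.versions:
                self.versions.pop()            # "save new version and delete old"
            self.versions.append((vid, content))
            return vid

        def latest(self) -> bytes:
            return self.versions[-1][1]

    f = VersionedFile("some-uuid")
    f.save(b"draft 1")
    f.save(b"draft 2")                          # old version kept by default
    print(len(f.versions), f.latest())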
I like the whitelisting idea, and that could apply to files: when a program attempts access of a user file for the first time, the user gets a UAC-style prompt to authorize access.
Whether the registry should be a special kernel mechanism or a standard program is something I've gone back and forth on, but I suppose as currently described there's no reason for it not to be just a program. I have a hazy notion that other system features could be exposed as service programs in the same manner, but I'm not sure how far the idea could/should be taken. Kernel modules presented as service programs that hand out device file handles? Could ioctl be replaced?
Anyway, thanks again for the feedback!
It sounds like a library OS or unikernel, which is becoming more popular lately, would be a great first step. You get the application virtualization you need plus an API. There may already exist research similar to your proposal.
Sure there are people using the same machine. For example you might run a shell for your employees to log in to or a university might have a student shell.
In many respects I agree with a lot of what you have to say here. I like the Unix environment but its current state does represent decades of cruft and incremental design. A lot of the underlying assumptions made sense 30-40 years ago but are a poor fit for the present.
Command brevity is one of these: Command brevity was not just valuable for saving keystrokes, it was also worthwhile to present information in a more compact form back when the standard was an 80x24 video terminal, or even a teletype. And storage and RAM space was limited so it was worthwhile to make scripts inherently more compact. None of those considerations are really important any more. If there's any value to that compact style at all, I'd say that it could possibly convey the information in the program code more effectively, since there's less visual information to process. But that depends on the programmers involved becoming really fluent in this compact style - and that is one of the reasons why most programmers in my experience (myself included, really) feel that a more verbose style is better.
Other aspects, I pretty strongly disagree: For instance, you talk about the inherent complexity behind running a simple command with redirection: That complexity is always there. If you're dragging a file to the waste bin, you don't need to think about the fact that the file is part of an index of files in the directory, or that the icon that represents it is some image resource somewhere either attached to the file or stored in a listing correlated by some notion of the file's type, or that when you start dragging it the OS produces a data structure that's used for object exchange between processes and handing that data structure over to the object or process represented by the icon or window you're dragging the file to... And you don't need to know the arcane details of how the system associates these metaphorical user actions with actual pieces of program code. As a user you don't care. You don't need to care, you've got a metaphor that models the whole thing, and that is the useful part. All this about how the details are always lurking, and eventually you have to deal with those details because it's necessary to understand what's really going on... That's all still true whatever the interaction metaphor is. There's room for improvement for sure - the "metaphors" in the Unix shell are pretty firmly tied to some of the underlying details of the system (especially TTYs, file descriptors, and the process hierarchy) but the basic "problem" you describe there is just the inherent nature of computing: Complexity exists. You may have to deal with the complexity at some point. What matters to the user is whether that complexity is modeled through a useful and meaningful interface.
What is your opinion of the Urbit project, in this context?
I look forward to playing around with the Userland you describe. Let me know when it is ready.
Shells are here to stay. If you program microcontrollers, a shell is the easiest interface to implement.
Lisp clone is easier
Huh?! ...
Sure... there's a lot of legacy stuff in all parts of the "IT stack". Rarely are things redone from scratch, but projects like RISC-V do occasionally happen.
However... that's not really the "UNIX tradition": doing one thing and doing it well is never a bad thing.
And sure... the syntax of bash is a bit arcane, but aside from that I have a hard time seeing what an objectively "less complex" alternative would be.
... and I have to continue the anti-rant here...
You can set any program you want as a shell. I lived for a time with "emacs" as my shell. Works great.
But if you think just running python programs is a better alternative than any other, then I have to vehemently disagree. Python is a sucky language too and I don't want to write more verbose commands if it doesn't make what I do less sucky. And it doesn't
And wrt. package managers: you can do Snaps or Flatpaks to have a more "app-like" environment.
The main reason a shiny new operating system won't work is that it too will have new functionality and requirements kludged on top of it. Then we will be back to where we are now. Why invest all that time and effort just to go round in a big circle? I think a bigger problem is the way operating systems evolve with little or no real concern for backwards compatibility. It's crazy that my biggest concern at the moment is that Ubuntu will stop supporting 12.04 LTS in 2017.
That's mostly a problem with your Linux distribution choice than Linux distributions as a group... Gentoo for example has nigh-perfect BC. It's like 99.9% completely compatible with everything ever on a POSIX base. And for that remaining .1%, it'd just take some careful planning and a chroot to get.
The HTML-command output is kinda silly IMO....half the point of the terminal is so you don't have to click buttons or follow prompts to do basic tasks.
+2chws exactly. I'm just imagining having a console that can display JFrames (from java) inline.
+2chws I don't really think it's sensible to build a system around the fact that programs COULD do bad UI design. Nothing about the proposed form presentation would make you unable to just use the keyboard. For instance, you'd enter a value, tab, enter a value, tab, inspect the form, and then tab-enter on the submit button.
I'm much more concerned with the fact that to make this idea realistic you'd have to make a POSIX wrapper for this message system essentially. Because there's no way people would abandon all their tools. That's a far bigger user-convenience threat than people starting to write UI that forces mouse use. Hopefully the system would encapsulate the old programs effectively making the request-response system the only interface a modern user has to deal with.
MrSnowman yeah but so many programs are designed to be completely non-interactive and have so many options that trying to make them interactive will be futile and not make any sense. I just don't see the point. The only thing I see this as useful for is maybe a replacement for ncurses, that's it.
2chws I agree I don't quite see it being such a major feature that it'd be worth mentioning alongside other stuff here.
Perhaps he has some plans for closing the gap between power-user and normal users somehow. I have no doubt that if you could just wrap CLI programs in a HTML form easily you could get normal users to use those programs more. If not for conveniences sake then just to make it look less scary. It's pretty clear to me that normal users can make good use of a lot of CLI applications if they wanted to explore it.
That's a really good point. For one thing, you want to be able to write a batch file that can run without user interaction. I'd assume you could simply specify "run silently" at the beginning of a file or block, but still.
So, we will not have the environment (a set of name-value pairs), but a configuration (a set of name-value pairs) ?
We won't have a shell, but a ... shell ?
The problem of protecting one user from other users is solved by ... declaring it not being an issue?
Etc.
I think the benefit of centralized configuration is that the OS can enforce that programs not step on each other. So, for instance, if my text editor stores config in ~/config/.editor, but my paint program also attempts to do so, this is an issue. With centralized configuration, this is all handled by the system, so the configuration is associated with each program in a more structured sense.
I definitely don't agree with his single-user fantasy though. We should have more security, not less.
@@skepticmoderate5790 Tell me, do you consider Windows registry a part of the problem set or the solution set?
No user isolation? Well, on a personal computer maybe that's fine, but when it comes to servers or supercomputers, user accounts do have value. I mean, for one university class I got access to a supercomputer doing important stuff. It would have been a shame if I had messed up all that important work because I don't know how to code.
The first seven minutes are a silly hyperbole. You don't have to and NEVER DO tell the whole story about all the underbelly of the software to a newbie. The equivalent of his argument would be to force somebody to understand how the Python interpreter builds an AST before teaching them to "Hello, World." It's almost like Will thinks that Domain-Specific Languages with implementation details should never exist.
Yep, it would be as silly to teach Windows system internals to newbies. They only care about how to work the system, not how the system works. And Windows does a lot to hide the command line, which is possibly where this attitude comes from. I've heard people who only have experience with Windows systems say that command lines are going out of style. And then you remember Unix, where the terminal has very much an active role, and will keep it for the foreseeable future. And the terminal is so much more capable under Unix too, copied as it is from the VT-100, VT-220 and other 80s models. There's even a Tektronix 4014 emulation in xterm which I never figured out how to use, as no program I had uses it. :(
(Lol, looking into it, I see it was used for CAD, and likely obsolete. These days you'd use plain X)
1. 'Users aren't protected from each other.' There is no security here. If there were no multi-user systems, this might work. But there are no networks in this model.
2. Your view of what a shell should be almost sounds like a description of PowerShell.
3. 'No shell language, only proper programming.' Everyone would have to be a programmer. What about casual users?
4. Directory - userd is built in a way that works to address some of your concerns there. Each user has a personally encrypted directory in their home folder. Each program can store user specific config in the user's directory.
5. Filenames and paths come from the hardware implementation. How do we organize files without paths? One flat location with UUID and hashid files?
Unfortunately this presentation sounds more like a rant. It would be much more useful as a whitepaper. I've got decades of *nix experience and I find those complexities part of the power of *nix. As with anything, simplicity creates restrictions and complexity creates flexibility. It's like the age-old debate between Android and IOS. One is more flexible, the other more stable and friendly. There's clearly a divided fandom in those cases. As for *nix or even Windows, it's always a trade-off and that's why all those OS's provide both interfaces. In fact, Microsoft was pressured to include MORE command line capabilities and that's what led to PowerShell. Command line and scriptable interfaces are necessary. Whether it's in direct mode or written script mode is a convenient flexibility. I simply don't see any value in eliminating those interfaces. The value comes from offering multiple interfaces so each need can be addressed in the easiest way for the person doing the task. Ideally you should be able to do the same things from different approaches. Command line people, programmers, or point-click people can all get the job done.
+Jerry Hobby If you watch the whole thing, it's clear I'm saying that total programmatic control of the system is a good thing. What I proposed is replacing existing terminals and shells with more modern alternatives.
+trsk Yes, the way you work in a shell like zsh is very different from the way you work in Python, e.g. in IPython. And while I never had a real problem teaching people to type simple commands into a shell, I had a hell of a lot more problems teaching them programming.
What you are talking about is clearly another layer. One layer is the simple "start this binary as a program" command-line level, and the next layer is something like IPython, which is at the moment implemented as an additional layer. I mean, you can even skip zsh or bash if you jump directly into IPython as a shell; just tell your system to switch your account's shell from /bin/bash to /usr/bin/ipython or something. But I doubt you will work faster with your system after that. It's not a simpler interface, indeed it is not. Python is very complex.
But I doubt that this will make you really happy.
I mean there are things in Unix/Posix/Linux that could be better. But the simple thing to tell the operating system to load something into memory and execute it, this has to happen somewhere. You can try to hide it, yes. But then you do not get rid of complexity, you add complexity to your interface. More or less every GUI tries to hide the fact. They do not look like something simpler to me.
Don’t get rid of essential layers in an operating system just because you, yourself and your sister don’t like that layer. There are reasons for them. Indeed I tried myself to ignore perl for a long time. As I tried to ignore awk at my first contacts with Unix, somewhere in the early 90s. But both are mighty tools that I won’t miss for anything.
See, an operating system is more like a workbench - no, more like a whole hall full of tools and machinery and generators and steam engines, and a lot of folks go and work there, play, build, destroy, move. It's a bit ignorant to try to get rid of the lathe just because you don't use it. Or of the way the things are glued together in an environment everybody has known for years and years.
Yes, complexity has some disadvantages, but package management in particular has become better over the last years. Maybe you just use the wrong type of distribution or environment. There is an ongoing streamlining process. But it's never so radical as to destroy the complexity of the Unix environment for something "cleaner".
Apple did that with the poor BSD kernel they got, and you can see where it led them. Their system is just crap. "Cleaner" is not always better. No, mostly it's worse. Like radicals who try to change something in society by burning or destroying things that work: usually they are left with a broken society that needs generations to fix the damage they have done. And after fixing, it's usually even more complex than before.
Grown systems can be streamlined. But not by radical movement. It’s more a step by step thing that makes things better. If you ever have seen how the ancient VAX systems were designed, today it’s a hell lot better.
I mean, there are things in Linux that get on my nerves a bit, that's true. But the shell is not at the center of it. More like /proc and /sys. Can't they make up their mind where to put what?! Yeah, /proc was there first and then they introduced /sys, but /proc never went away or was reduced to what it should be - just holding the processes. So we have a mess today.
That was an effort to make things easier. Didn’t work out too well. :D
This clusterfuck really goes on my nerves. Please let the shell be. It’s okay. It’s doing the job perfectly. Nothing wrong with that. Hands off.
I mean, there are things I am really still ignorant about. Like, wtf do people want with something like FORTRAN?! This thing from out of the cellar should be really dead dead dead. But some folks love that zombie. Even if I explain that there are mathematical libraries, programmed in assembler, that work faster and better than FORTRAN, even if those libraries can call the GPU - I mean, wtf, you've got your own vector computer today!
Still, Fortran programs keep coming. Don't know why. If you try to kill something, yeah, I'll lend you my shotgun and we enter that cellar and get rid of that monster together. I'm with you on that. But I know I'm totally ignorant about Fortran. I know. But that thing MUST DIE.
Well, it won't be easy, son, to kill Fortran. That aged mummy is hellishly fast, I don't know why it's so fast, that's unnatural and I guess there is black coding magic involved. Just hold fast to your Kernighan & Ritchie and we'll see how far the light will take us. Just repeat the banishing formula: "-ffast-math CUDA OpenCL and gods of GMP are with us! In the name of Kernighan! In the name of Ritchie! Die! Die! Die, you unholy abomination!"
+trsk Shell architectures remain stable over 50 years because no one wants to touch that crap ;)
What I don't understand about the rant is "why even bother"? If +Brian Will doesn't like the shell, why not create a new one? If he did, and it was more elegant and efficient than what's already available, I'd use it. I love the power and flexibility of my bash shell. I really agreed with his criticisms of OOP, but IMHO, the Unix Philosophy is the best way I've seen. I mean, Capitalism and Democracy both have their share of problems, but I don't see anyone proposing anything better.
Our company forbids any shell program longer than a 10-line function that works as a program starter, because shell scripting is terrible. Error handling is almost impossible. If you want to do anything more, you have to use Ruby, even if that just uses "system" to run command-line tools.
The only thing that's really caused me any grief is the environment variables thing. I wish I could just update them and have the new values everywhere, not just in new shell instances. Proper languages like python aren't fit to be used interactively in the way that shells are used. If you encounter something you can't reasonably do in a shell, you can always hop into a python repl, though. I don't want to give up the terminal. It's not that I've learned to cope with the terminal. Rather, the terminal is too powerful a way to work, computing would suck without effortlessly piping streams of text data from one program to the next. I don't want to use a shell from a browser. If there's no terminal, how do I use vim?
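For anyone wondering why it behaves that way: each process gets a copy of its parent's environment at exec time, and there's no mechanism for pushing later changes back up or sideways. A quick Python illustration:

    import os, subprocess

    os.environ["GREETING"] = "hello"                            # changes *this* process...
    subprocess.run(["sh", "-c", "echo child sees: $GREETING"])  # ...and its children
    # ...but the shell (or any other already-running process) that started this script
    # still has whatever environment it was started with.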
Except for the very last sentence, it sounds like you don't understand the difference between a terminal and a shell.
@@smorrow What gives you that impression?
I don't mean any disrespect to you. Forgive me. I was responding to the video. You just gave me a copy-paste response. I assure you, I know the difference. I chose my words carefully and used the correct terms. Did you watch the video?
@@anastaziuskaejatidarjan4711 You seem to think the ability to use pipes has something to do with terminals.
@@smorrow Forgive me for being unclear. The word "terminal" there is used in a certain context. The context is the terminal as "a way to work". Used in this way, terminal no longer refers strictly to the one component, but to the way the component is used, ie: in concert with a shell and with commands and everything.
You've got some good ideas here, but I think you are a bit light on the history of why specific engineering decisions were made in the past, which turned them into today's silly legacy things. Unix was built initially on a PDP-11, a 16-bit minicomputer, as a multi-user timesharing system. I will grant that many of those decisions would have had different choices and even different possibilities if made today.
I think your security model is naive. I am not saying we need the ancient World-group-user octal protection model that was inherited from long before Unix. But I think we have real needs for security for privacy that are critical to our finances and our civic freedom.
We can expand it.
+Pat Farrell I don't see why this would necessarily have to be taken care of by the operating system. Why not take care of it in hardware, or in applications? Why is it necessary for the OS to do this job?
+Benjamin McLean Those can't be entrusted to the applications, but if the hardware took care of it, all would be fine. Protection, security, privacy, etc. need to be invisible to an application program. You don't want a bad programmer to decide to circumvent the protections.
+Pat Farrell Bad programmers write operating systems too.
Seems to me that any private data should be kept in files encrypted by applications which the OS team would not know anything about. One service the OS would need to provide, however, would be a way to track changes to files you want kept secure, and a way of rolling back changes.
+Benjamin McLean not necessarily. If the application keeps it in something like a git repo and has every transaction be a commit, then any change could be detected instantly and rolled back. The OS need not interfere.
Kehnin, assuming you are a "bad guy", surely it would be trivial to write a script that moves any changes you want backwards in the commit history, as far back as you want, and rewrites the entire history to accommodate them - unless there's some protection beyond what scripts can normally access.
You need several months to properly understand ls -la >foo, yes. Then for decades you use it every day. That's what we call a profession; you even get paid for it. You learn a language, then you use it. You don't have to learn it if you don't want to communicate. If you want to communicate, you need to learn the language.
And the names are not piled up over decades; these are names that were the same in the first UNIXes, and they remained the same. I have been using them for 30 years now; thank god nobody is stupid enough to change them.
It reminds me of the mouthbreathers who want to simplify and standardise spelling in English, without understanding that this would require the entire English-speaking world to speak with the same (American) accent.
You don't need months, only if you are trying to understand how it works under the hood. This is the most nonsensical part of the video.
You really need two more permission levels beyond users: one for drivers and one for the kernel. A program should not get direct access to hardware or to the part of memory where the kernel lives.
One point: the shell language has been designed so that you do not need to use parentheses when invoking a command or a method. Instead, arguments to functions or command programs are separated by whitespace. If said arguments include whitespace, you quote them. Very simple, very intuitive, and readable. It would be awful and stupid having to type in parentheses, commas and all the other syntactic sugar just to invoke a specific command with specific arguments, like you would in languages like Python.
You are right about the fragmented naming conventions in command names and command arguments. That was never fixed, and now it cannot be fixed. But that doesn't mean that you cannot create correct command line utilities that have intuitive name and calling conventions.
>very intuitive
Not really. You have one set of rules for one type of arbitrary data, and then another set for the other type. It's meant to save on typing, not be intuitive.
Look at Haskell: a programming language with no parentheses but spaces around function arguments.
you have valid points here. the reason shells are so kludgy is that they try to work around the lack of a type system. this makes the commands bloated. not to mention, you need to scrape text to make it usable between pipes. and last but not least, code reuse is a joke because there is no polymorphism.
It's not like people haven't thought about this. That's why they have started to use Perl and Python.
The shell should just be an execution environment of text based apps.
Once you learn Python or PowerShell, you can use them even for one-liners. I only use shells to launch programs like ping. Scripting in them is downright painful because the syntax is so bad.
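To make the text-scraping point above concrete, here's the difference between parsing ls -l output and asking the OS for structured data (just a sketch):

    import os, subprocess

    # Fragile: depends on ls's column layout, locale, filenames without spaces, ...
    out = subprocess.run(["ls", "-l"], capture_output=True, text=True).stdout
    sizes_scraped = {line.split()[-1]: int(line.split()[4])
                     for line in out.splitlines()[1:] if line.split()}

    # Structured: no parsing, works regardless of how ls chooses to format things.
    sizes_typed = {e.name: e.stat().st_size for e in os.scandir(".") if e.is_file()}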
What about just making a shell layer that will translate itself to bash or another terminal language, so we can type something like "list with size to foo.txt" which translates to "ls -la > foo.txt"? Think about it :)
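A toy version of that translator might look like this (the phrase table is obviously made up):

    import shlex, subprocess

    PHRASES = {
        "list with size": "ls -la",
        "show disk usage": "df -h",
    }

    def friendly(command: str) -> None:
        phrase, _, target = command.partition(" to ")
        base = PHRASES.get(phrase.strip())
        if base is None:
            print(f"don't know how to {command!r}")
            return
        if target:                       # "... to foo.txt" becomes a redirect
            with open(target.strip(), "w") as out:
                subprocess.run(shlex.split(base), stdout=out)
        else:
            subprocess.run(shlex.split(base))

    friendly("list with size to foo.txt")    # runs `ls -la > foo.txt`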
Indeed, for those needing readable commands, it would just about be perfect.
But adds more complexity which I thought we needed less of. /endsarcasm
Oh, this was 5 years ago. Why was I here 5 years ago? I found nushell, which somehow simplifies the shell a lot; check it out.
If UNIX/Linux is overcomplex, where does that leave Windows? The interconnectivity diagram for Linux is less complex than for Windows.
I'd very much like to show Brian the OpenQM environment. It addresses most of the issues that he discusses in his videos.
So, you want a continuously present services exposing a request-response user interaction model with responses in HTML and a real dynamic language such as Javascript?
I have a solution for you!
Though I fully agree with you in your stance on OOP, when it comes to *nix-based systems, I will disagree. As a programmer, the beauty of *nix systems is that YOU CAN BUILD YOUR OWN for fuck's sake. If you are fine with the Linux kernel or Unix, then by all means build your own system around them. Have your own packages. Make your own dependencies. Port over what you want, ignore what you don't. Does it seem like a headache? At first, yes. After it is done, you have an operating system that you know will work for whatever software you build on it. You know that the only thing necessary to keep maintained, as far as repos or updates, is for the kernel. This is stupid easy for a seasoned programmer that knows the linux kernel inside and out.
Interesting stuff. My main issue with security would be how this would work in a server setting: any process could go rogue and spill all the user-data beans. I mean, you could segment your machines so a public network-facing machine only handles that. But with large datacenters handling multiple users' data, things get complicated. I understand that this is a "toy model" and such, but one of Unix's strongest areas is its handling of user segmenting and security (not that it is super strong, but it is better than the rest).
Replace UNIX? Heresy!
Actually, the simplified system you're describing sounds a lot like my old Commodore 64. ;-)
In seriousness, while I don't entirely agree with this video, your videos as a whole are absolutely fantastic! I just discovered your channel, and I'm learning a lot from it. Thank you very much.
There were hundreds of BASIC-based computers similar to the VIC-20/64, both before and after these computers. So that simple but effective user interface was very widespread.
essentially this video says "if a feature is complicated, remove it, we don't need it". that's not the definition of a "solution".
Just code a UI for DOS... but wait! that already exists! and it was awful.
Oh Brian, you got it all wrong, again. It is supposed to give you that rough experience. It's not so much a technical problem; it's to keep the big players in the business from taking control over the central part of the system that would be necessary to integrate every tool the way you want it to be. So they use, and should go on using, streams of strings and minimal help from the kernel to communicate with each other. That is something I dislike about systemd: it's a central component in an OS that uses it, and whoever controls it controls a big part of the OS as well. Most of what you want, you could get. But this level of integration goes along with a single big company that provides the integration. They won't do it for free; they will do it to directly profit from it or to gain control over a stack of technology that sees widespread use, in the end to make even more money off it. And we already have such companies, and they both have an OS you can use. Maybe take a look at PowerShell; I do really think you will like it. I don't.
You just described Emacs: (1) all written in one language, only used for work inside the system (2) the system itself an interactive shell (3) constant output of data sequestered on easily accessible screens (4) one package manager.
Unfortunately, external dependencies present a problem. Emacs + GUIX/Nix is probably pretty close to the mark.
I think some of your predictions for how to make the user experience easier have come true with Android and iOS, which present users with a very flat, non-hierarchical view of their installed apps and make it easy for app developers to work in their own sandboxed file space but hard for them to interact with anything else. And some of your predictions for how to make the developer experience easier have come true with containerized platforms like Docker, which create sandboxed file and configuration spaces per container and typically require containers to communicate via networking - a form of request/response.
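For what it's worth, the request/response style those containers use is easy to sketch with nothing but the standard library (the port and names here are made up):

    import json, threading, urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Echo(BaseHTTPRequestHandler):
        def do_GET(self):
            # one sandboxed "service" answering requests over the network
            body = json.dumps({"path": self.path, "service": "echo"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        def log_message(self, *args):   # keep the sketch quiet
            pass

    server = HTTPServer(("127.0.0.1", 8765), Echo)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # another "container" talking to it via request/response instead of shared files
    with urllib.request.urlopen("http://127.0.0.1:8765/hello") as resp:
        print(json.loads(resp.read()))
    server.shutdown()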
Mobile all sucks. Professionals have workflows, and workflows require sharing files. iOS is the biggest shit.
I noticed that with this video as well. Stuff like Android bothers me because of how little control I have.
What do you think about Redox-OS? A project developed in Rust language.