Great video, thank you Linux Leech. Clear, concise, and super knowledgeable.
I'm so glad I found your channel, man! I learn something new every day. If you take suggestions, I'd love to see you cover dbus.
Cheers !
I am still figuring out how to implement named pipes in a C# program I'm writing. This video gives some hands-on info on pipes with the examples. Thanks!
Thanks
Thanks for the Super Thanks alexvass. Glad you found it useful.
Thank you so much, I didn't know about this command. I really enjoy your videos.
Thanks.
Hey, that's funny! I recently tried to teach myself pipes from what I found in the documentation, and did it in a very similar way: multiple terminals next to each other, for loops, a counter, and the sleep command.
What I fail to understand is the syntax of `cat <> pipe`.
Reading the documentation and experimenting, I realized that it isn't replaceable by `cat >< pipe`, nor by `cat < > pipe`. So it is not a combination of the two operators `<` and `>`, at least not an algebraic combination, but an operator in itself. I would call it the diamond operator.
In contrast to `cat pipe`, `cat <> pipe` just sits there after the input has finished, waiting for more, while `cat pipe` simply finishes when it is done.
The manpage says:
Opening File Descriptors for Reading and Writing
The redirection operator
[n]<>word
causes the file whose name is the expansion of word to be opened for both reading and writing on file descriptor n, or on file descriptor 0 if n is not specified. If the file does not exist, it is created.
That's confusing to me. In which way is word (i.e. pipe, for our purposes) opened for writing by cat?
BTW: Using backticks (2:00 ff.) is discouraged in the shell. `for i in $(seq 1 20)` is more readable, more portable, and trivial to nest.
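The difference is easy to demonstrate on a FIFO: a reader that opens it read-only sees EOF as soon as the last writer closes, while a reader that opens it with `<>` also holds a write end itself, so EOF can never arrive. A minimal sketch (all file names here are made up):

```shell
#!/bin/sh
# Demo of `cat pipe` vs `cat <> pipe` on a FIFO.
mkfifo demo.pipe

# Read-only open: cat sees EOF as soon as the last writer closes, so it exits.
cat demo.pipe > readonly.out &
echo "hello" > demo.pipe
wait $!                            # cat has already finished here

# Read-write open (the <> operator): cat itself keeps a write end open,
# so EOF never arrives and cat just sits there waiting for more input.
cat <> demo.pipe > readwrite.out &
RW_PID=$!
echo "world" > demo.pipe
sleep 1
kill "$RW_PID"                     # still running a second later; stop it manually
```

So the pipe is "opened for writing" by cat only in the sense that cat's own file descriptor counts as an open write end, which is exactly what keeps the FIFO from ever delivering EOF.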
Great video, great song!
great demonstration and explanation! thx!
wonderful. got what i wanted. thank you.
Great video. But I'm confused about that `<>` in `cat <> metal.pipe`. What does it do?
What’s the simplest IPC paradigm to use if I want one writer, but multiple/each reader to see the same data? Kind of like one writer, but multiple tail -f’s. Do I just write to a plain file and flush it aperiodically? I guess I need to investigate how to code tail -f ...
I don't know whether it is the simplest way, but using the tee command worked for me:
```
mkfifo pipe1
mkfifo pipe2
for i in {1..20}; do echo $i ; sleep 1; done | tee pipe1 > pipe2
```
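One caveat with the tee approach: tee blocks until each FIFO has a reader, so every pipe needs its own consumer running. A self-contained sketch (the output file names are made up):

```shell
#!/bin/sh
# One writer, tee into two FIFOs, one reader per FIFO;
# both readers see the same data, like two tail -f's.
mkfifo pipe1 pipe2
cat pipe1 > reader1.out &        # consumer of the tee'd copy
cat pipe2 > reader2.out &        # consumer of the stdout copy
for i in 1 2 3; do echo "$i"; done | tee pipe1 > pipe2
wait                             # both readers exit once tee closes the FIFOs
```

This scales to more readers by adding more FIFOs as extra arguments to tee, e.g. `tee pipe1 pipe2 > pipe3`.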
Nice one.
Could you do a video on some useful applications of the named pipe?
Thanks :)
Resorting to begging in the UA-cam channels for professional advice-you are pathetic and incompetent. It's all documented in plain text but you can't or won't read.
Using pipes, can the parent read data from one file and the child write data into another file?
Please help!
I have very little time.
Well, you've wasted three years waiting for an answer that was provided in this very video. How important was your time again?
Good stuff.
Great video! ;)
While this is very informative, I'm having trouble trying to find a situation in which this would be useful.
You could use it as a simple message queue and have multiple readers consume the data from the pipe to process. For example, if you had a file of URLs that you wanted to download with wget, you could cat said list to a named pipe, then start up multiple instances of wget to grab URLs from the named pipe. You could also use it to compress anything piped to the named pipe, like this:
mkfifo metal.pipe
gzip -9 -c < metal.pipe > output.gz &
cat file1 file2 file3 > metal.pipe    # the files whose contents you want to combine
Now you will have a compressed file that contains the content of all of the files that you piped to your named pipe.
It is common to do this kind of thing in reverse when populating tables in MySQL. For example:
mkfifo -m 0666 /tmp/metal.pipe
gzip -d < customer_data.gz > /tmp/metal.pipe
Then load the decompressed data into a mysql table.
LOAD DATA INFILE '/tmp/metal.pipe' INTO TABLE customerData;
Doing it this way means that you don't have to decompress the data to a file on disk, which takes up space (and disks are slower than RAM). The only file you will be left with is the named pipe itself, which lives in /tmp, so you don't even need to rm it; it will disappear the next time you reboot.
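The compression flow described above, end to end, as a runnable sketch (all file names are made up for the demo):

```shell
#!/bin/sh
# Compress the concatenation of several files through a FIFO,
# without ever writing the uncompressed combination to disk.
printf 'line one\nline two\n' > part1.txt
printf 'line three\n' > part2.txt

mkfifo metal.pipe
gzip -9 -c < metal.pipe > output.gz &   # compressor reads from the FIFO
cat part1.txt part2.txt > metal.pipe    # writer feeds the files in
wait                                     # gzip exits when it sees EOF

gzip -dc output.gz                       # round-trip check: all three lines come back
```

The two opens rendezvous on the FIFO: gzip blocks until cat opens the write end, then the data streams straight from cat into gzip.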
One example that brought me here: I recently stumbled on an error in a Python program that I wrote. It communicates via a socket connection with another system, and a socket seems to behave much like a pipe. Unfortunately I got a broken pipe error. My conclusion after watching the video is that the other system probably closes the read end of the socket.
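That broken-pipe condition is easy to reproduce with a FIFO in the shell: the reader closes its end while the writer still has data, and the writer's next write fails. A sketch (file names are made up; SIGPIPE is ignored so we see the error instead of the writer being silently killed):

```shell
#!/bin/sh
# Reproduce "broken pipe": the reader exits early, the writer's next write fails.
mkfifo bp.pipe
head -n 1 bp.pipe > first.out &    # reader consumes one line, then closes
(
  trap '' PIPE                     # report the failed write instead of dying on SIGPIPE
  echo "one"                       # delivered to head
  sleep 1                          # give head time to exit and close the read end
  echo "two" || echo "write failed: broken pipe" >&2
) > bp.pipe 2> status.out
wait
```

The same thing happens with a socket: once the peer closes its read end, the next send fails with EPIPE/"broken pipe" on the writing side.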
Interesting, thank you!