I was struggling to set up nginx logs in a Docker cluster using ELK for the last couple of days. After seeing this tutorial, it was done in 15 minutes. Thanks a lot.
Hi Shishir, thanks for watching.
Thanks for the helpful guide.
You are welcome. Thanks for watching.
Thanks a ton. This is exactly what I spent days searching for. 🙂
Many thanks for watching. Glad it helped. Cheers.
Really helped me. Very well explained!
Thank you
Hi Victor, Thanks for watching.
You are a good master.
Hi, many thanks for your interest in my videos.
Extremely good ..... Thx
Many thanks.
Thanks sir, very interesting, and you make such a complex topic easy to understand.
Good video, this will help a lot... 🙂🙂👍👍
Hi, thanks for watching.
Thank you so much for all these videos, they are very helpful! This is definitely one of my favorite UA-cam Channels to Follow!
Do you have any videos on setting up Logstash in Docker? I don't recall seeing any. I have Elasticsearch and Kibana in Docker, but Logstash is running on a CentOS VM. It would be very helpful if you could make a video on running Logstash in Docker.
Alternatively, I have seen your video on Fluentd in Docker; how do you convert a Logstash config to a Fluentd config? Maybe that could be a video you could do.
Would you please consider making a video on Logstash in Docker and/or converting Logstash Config to Fluentd?
Thank you so much!
Hi Daryl, thanks for your interest in my videos. I haven't tried containerized logstash yet but could give it a try. Cheers.
Thanks for the video.
I have configured it, but in Kibana the Elasticsearch health is yellow and it shows no log data found.
Hi sir, please do one on gathering logs from Kubernetes pods.
Can I deploy Filebeat as a sidecar to collect pod logs in Kubernetes? Any response would be greatly appreciated.
Yes, you can use Filebeat. In fact, I did a video on Grafana Loki and released it today. You can use the Loki stack with Filebeat for collecting logs.
ua-cam.com/video/UM8NiQLZ4K0/v-deo.html
But in that video, I didn't use Filebeat but Promtail.
@@justmeandopensource Thank you so much for the quick turnaround... I will try.
@@d4devops30 no worries
Great video. One thing, if anyone can answer: this stack lacks Logstash, right? Filebeat sends logs directly to Elasticsearch and Logstash is not present. Am I right?
You are right. No Logstash component in between. Thanks for watching. Cheers.
@@justmeandopensource You are my inspiration. I want to be an expert in the field like you.
@@ahsanraza4762 Well, I am not an expert. If you just know how to read the docs, you can do it :P
Thanks a lot, sir. I have a basic question: after setting up Filebeat or Metricbeat in a Docker container, I find it only gets data from the Docker containers, but I also want to get data from my LXC containers, since I use both LXC and Docker. Is that possible? Thanks in advance.
Do you have a tutorial on ELK running in Docker with TLS enabled for everything (Beats to Elasticsearch, Kibana to Elasticsearch, and Kibana to nginx)?
Not exactly as you requested. Thanks for watching.
Hi, which Zsh theme are you using?
Thx
Hi, you can find my terminal/shell customizations in this video.
ua-cam.com/video/PUWnCbr9cN8/v-deo.html
@@justmeandopensource 🙏
I NEEEED to know how you made that terminal.
Hi, thanks for watching. It's just a combination of Zsh, oh-my-zsh, zsh-autosuggestions, zsh-syntax-highlighting, and powerlevel10k.
Thanks I'll be sure to try that out!
@@anmolmajithia You are welcome.
Can you create a video on Graylog with Filebeat in a Docker container?
Great video, really helped me out! 👍
Thanks for watching.
Is it possible to run Filebeat as a Docker Container and use it to monitor a remote machine?
I would think so, because they have Filebeat modules for Netflow, Cisco, Juniper, etc., and I don't think you can run Filebeat directly on Cisco or Juniper devices.
As you said, it should be possible but I haven't tried that yet.
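For context, those modules work by listening for data the remote device exports, rather than pulling files from it. A hedged sketch of the NetFlow input from the Filebeat docs (the listen address and port are illustrative):

# filebeat.yml: listen for NetFlow records exported by the network device
cat > filebeat.yml <<'EOF'
filebeat.inputs:
  - type: netflow
    host: "0.0.0.0:2055"          # point the device's flow exporter here
    protocols: [ v5, v9, ipfix ]
EOF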
Hi Venket, Thanks for social engineering.
I am trying to set up the Metricbeat Kubernetes module on a self-hosted Linux platform. I found many solutions where everything is hosted on Kubernetes, but not much on having Kubernetes and the ELK stack separate.
Kindly post a video on enabling the Kubernetes module in Metricbeat for an independently hosted ELK stack.
Is this video only for checking logs in Elasticsearch? I want to check other logs too.
Hi Venket. Thanks for doing this video.
I have 5 containers in my VM. Using this Filebeat setup, can I get all the containers' logs, or do I need to pass any extra info?
Hi Nagesh, thanks for watching. It will collect logs from all the containers.
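A sketch of one common way this is configured (hints-based Docker autodiscover; not necessarily the exact config from the video, and the output host is illustrative):

# filebeat.yml: pick up every running container automatically
cat > filebeat.yml <<'EOF'
filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true   # discovers all containers, no per-container config

output.elasticsearch:
  hosts: ["localhost:9200"]
EOF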
Hello. Thank you for the video. Do I need to use Logstash? I want to watch logs from 32 servers, and when I try to check the logs of another server, I couldn't find the other Filebeats in Discover.
What is the reason for getting filebeat-* in index patterns? Have we specified this somewhere while configuring?
Hi Swaraj, thanks for watching. Filebeat's default configuration uses that index name format. However, you can configure it with any name you like. Cheers.
@@justmeandopensource Can you share some references for creating an index through Filebeat?
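For reference, a sketch of what overriding the index name looks like, based on the Filebeat docs (the "my-custom-logs" name is just an example): when you change the index you also have to point the index template at the new name, and on recent versions disable ILM so it doesn't take precedence.

# filebeat.yml: custom index name (illustrative names)
cat > filebeat.yml <<'EOF'
output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "my-custom-logs-%{[agent.version]}-%{+yyyy.MM.dd}"

# Changing the index requires matching template settings:
setup.template.name: "my-custom-logs"
setup.template.pattern: "my-custom-logs-*"
setup.ilm.enabled: false   # otherwise ILM overrides the index setting
EOF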
How can we send our custom logs to Filebeat? The video shows that it only sends running containers' logs. But what if I have a log file at /home/mypc/hello.log with the content "Hello sending logs to elastic search"? How can I send it? I have been following all your videos from start to finish but am not having any luck.
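One possible approach, sketched under assumptions (the paths and image tag are illustrative, and this isn't from the video): mount the host log file into the Filebeat container and read it with a filestream input.

# filebeat.yml: read a plain log file from the host
cat > filebeat.yml <<'EOF'
filebeat.inputs:
  - type: filestream
    id: custom-hello-log
    paths:
      - /usr/share/filebeat/logs/hello.log   # path inside the container

output.elasticsearch:
  hosts: ["localhost:9200"]
EOF

# Mount the config and the host log file read-only into the container:
docker run -d --network host \
  -v "$(pwd)/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro" \
  -v "/home/mypc/hello.log:/usr/share/filebeat/logs/hello.log:ro" \
  docker.elastic.co/beats/filebeat:8.1.3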
Hi,
how do you deal with the Filebeat container rebooting?
If the Filebeat container crashes, it will restart and then send all of nginx's logs (assuming nginx wasn't restarted) to ES again, therefore creating duplicates.
Duplicates are created because ES defines the key (_id) of the document. Does this also mean there is no way to find and delete duplicates?
Would love your insight on this issue ;)
Otherwise, a nice and clean video as always!
Hi Maxime, thanks for watching. I never thought about that scenario. Surely there must be a way, as container restarts are a common thing.
@@justmeandopensource
As I'm running Filebeat stateless, I think the best solution would be to use the fingerprint processor (www.elastic.co/guide/en/beats/filebeat/master/filebeat-deduplication.html ) and use fields like date, container_id and offset to create the fingerprint. Then a restart would overwrite values in ES (still a lot of processing, though). I can't understand why this issue isn't described in the Filebeat/Docker autodiscovery documentation. Maybe I'm missing something :/
I think this functionality was really designed for K8s, because giving full read-only access to all containers' data (including mounted Docker secrets) just to get the logs out... Well, I can only say that Docker/Swarm has its limitations and you need to make a lot of trade-offs in order to get "simplicity" :D
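For anyone reading along, a rough sketch of the fingerprint approach described above (the exact field names are assumptions; check your own event schema). Hashing stable fields into @metadata._id means Elasticsearch overwrites a re-sent event instead of duplicating it.

# filebeat.yml: deterministic document IDs via the fingerprint processor
cat >> filebeat.yml <<'EOF'
processors:
  - fingerprint:
      fields: ["container.id", "log.offset", "log.file.path"]
      target_field: "@metadata._id"   # Elasticsearch uses this as the document _id
EOF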
@@Mr3maxmax3 page shows 404
The link has an extra ")" at the end.
Try this
www.elastic.co/guide/en/beats/filebeat/master/filebeat-deduplication.html
@@justmeandopensource You can use Logstash too, because Logstash creates a sincedb that stores a pointer, and sometimes a timestamp, to resolve duplication issues.
What is vm.max_map_count for? How does it prevent the setup from failing?
Hi Swaraj, thanks for watching.
This is a requirement as per Elastic docs. If you want to know what this kernel parameter is, here is the explanation from kernel.org.
www.kernel.org/doc/Documentation/sysctl/vm.txt
max_map_count:
This file contains the maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of calling malloc, directly by mmap, mprotect, and madvise, and also when loading shared libraries.
While most applications need less than a thousand maps, certain programs, particularly malloc debuggers, may consume lots of them, e.g., up to one or two maps per allocation.
The default value is 65536.
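For anyone wondering how to change it, these are the usual commands (the 262144 value comes from the Elastic docs; the sysctl.d file name is arbitrary):

# Raise the limit immediately:
sudo sysctl -w vm.max_map_count=262144
# Persist it across reboots:
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf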
filebeat setup starts @06:52
Hi Shaheer, thanks for watching.
Hello sir, if Filebeat stops and resumes, will it send the whole log file to Elasticsearch again as duplicate data? And if it does, how will Elasticsearch handle it? Will it store it or ignore the previously seen data?
Good one, brother. I'm planning to run Filebeat on a separate server from the production server, as we aren't supposed to disturb the prod environment. But I need a way to make Filebeat listen to the prod server's logs and ship them to the Elasticsearch instance running alongside Filebeat on that separate server. Is there any way? What I thought of was to have the Docker root volume shared between these two servers, so Filebeat can pull those logs and ship them to Elasticsearch whenever events happen in the Docker containers. Please provide your suggestions.
Hi Rathin, thanks for watching. I don't think there is a way to do that without installing something on the production server. Filebeat, as far as I know, can only pull data from the instance it is installed on. You can have Fluentd installed on a separate server, but again, you need to install a forwarding agent on the production server.
@@justmeandopensource Thanks for the reply. But all Filebeat needs as its input is the log folder, which we provide in the filebeat-docker.yml file, right? For example, with the default json-file driver, Docker logs get stored in /var/logs/docker/containers, inside which we have the container logs. Is my understanding right?
Yes. If you want to collect logs from the containers on your production server, Filebeat on this separate machine needs access to /var/lib/docker/containers and /var/run/docker.sock from that production server.
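To make that concrete, a sketch of the mounts involved when Filebeat runs on the machine hosting the containers (the flags and image tag are illustrative, not taken from the video):

# Give the Filebeat container read-only access to the container logs
# and the Docker socket used for autodiscovery:
docker run -d --user root \
  -v /var/lib/docker/containers:/var/lib/docker/containers:ro \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v "$(pwd)/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro" \
  docker.elastic.co/beats/filebeat:8.1.3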
Hi,
Could you post a video about how to set up Auditbeat with Logstash on a client machine? The Auditbeat logs don't ship via Logstash from the client, and I can't see any details in the Auditbeat dashboard either.
Hi Vijay, how are you intending to use Logstash?
@@justmeandopensource
I have ELK (Elasticsearch + Kibana + Logstash) on the same server, and another one is the application server. I need to monitor user login details, any modifications to the "passwd" file, and uptime for the application server. My Elasticsearch server is only accessible via localhost and can't be reached directly from clients, so my logs are sent to Logstash only.
I see. That's a typical setup. Clients (Beats) can send data directly to Elasticsearch, in which case you need to make Elasticsearch available on a public interface. The point of Logstash is to filter and transform incoming logs before storing them in the Elasticsearch engine.
Are your client machines Windows?
@@justmeandopensource
Thanks for your quick reply. My clients are Linux machines. I am trying to set up ELK for a prod environment. I can't expose Elasticsearch on a public interface; as you know, that creates security issues. That's the reason I would like to forward logs to Logstash.
@@justmeandopensource
Filebeat with Logstash is working fine on the same client machine. It creates indices automatically in the Kibana dashboard and I can create an index pattern as well... but the Auditbeat indices are not created in Kibana.
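For readers following this thread, a minimal sketch of the setup being discussed: Logstash listens for Beats traffic while Elasticsearch stays bound to localhost (the port and file name are illustrative):

# Logstash pipeline: Beats in, Elasticsearch out
cat > beats-pipeline.conf <<'EOF'
input {
  beats {
    port => 5044              # Filebeat/Auditbeat ship here, not to ES
  }
}
filter {
  # filter/transform events here before indexing
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]   # ES stays bound to localhost
  }
}
EOF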
Hi sir, can you help me with reading a JSON file using Filebeat?
Hey champion, can you help me? I want to send my local logs to AWS S3 with the help of Filebeat in Docker. How can I do that? I can't install Filebeat on my system.
And I am using Python to do that.
I checked the video. But the problem here is that Filebeat only listens to the logs coming out of the Docker container (which would only be the access logs). What about the error log file or other files?
In Docker you've got two log streams, and Filebeat will listen to both of them (STDERR and STDOUT).
I am not able to see any data on the Kibana Discover page even after creating a filebeat index. I am running nginx and accessed it through the browser. Any idea what could be missing? I did run the setup before running Filebeat.
I get the same as you... I'm sure I created a filebeat index, but I can't see the logs in the UI.
Yeah, but I didn't see you configure Filebeat to pull nginx logs.
Hi, is anyone using the ELK stack in production on K8s? I would like to review the design architecture for a production setup.
I have this error: Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at localhost:9200: Get localhost:9200: dial tcp 192.168.250.157:9200: connect: connection refused]
hi there, did you find a solution for this issue? I am facing the same error
docker run \
docker.elastic.co/beats/filebeat:8.1.3 \
setup -E setup.kibana.host=localhost:5601 \
-E output.elasticsearch.hosts=["localhost:9200"]
It's not working:
Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at localhost:9200: Get "localhost:9200": dial tcp [::1]:9200: connect: cannot assign requested address]
I'm running both on my local machine.
hi there, did you find a solution for this issue? I am facing the same error
@@ahmedfayez nope
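A likely cause, for anyone hitting this: inside a container, localhost refers to the container itself, not the machine running Elasticsearch. On Linux, one common workaround is host networking (on Docker Desktop, host.docker.internal is the usual alternative); a sketch:

# Run the setup with the container sharing the host's network namespace,
# so localhost:9200 reaches the Elasticsearch on the host:
docker run --network host \
  docker.elastic.co/beats/filebeat:8.1.3 \
  setup -E setup.kibana.host=localhost:5601 \
  -E 'output.elasticsearch.hosts=["localhost:9200"]'   # quoted so the shell doesn't mangle the brackets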