This channel should have so many more subs. You guys make great vids. I need to try your products; I haven't yet.
Thanks. We will be posting many more videos. Please share and tell your friends. We have a free trial on the website if you want to use it.
@SandflySecurity For sure! And yeah, I was looking last night. I may have to.
Nice ... but you still need an exploit that lets you run root commands or escalate to root so you can replace the account's shell in the passwd file (chsh) and set the unset password to something usable (passwd). Do you have alerts for possible RCEs on vulnerable systems? Do you do continuous Nessus-like, nmap/NSE, or other types of vulnerability scanning?
Anyway ... quite an interesting product for an enterprise with a Linux environment.
Our philosophy is to assume that anyone who gets onto a Linux box is going to get root. There are many ways that can happen: bugs, misconfigurations, etc. We scan systems on a random basis for signs of attack, but we are not a vulnerability scanner; we focus specifically on compromise detection and agentless threat hunting. Many systems remain unpatched or open to attack, and admins need an automated way to search out and identify hosts that have been compromised. Hope that helps.
With lynis or emba I can detect misconfigurations perfectly.
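For readers curious what detecting the chsh/passwd style account backdoor discussed in this thread might look like, here is a minimal Go sketch. It is not Sandfly's code; it simply flags system accounts that have been given an interactive shell or a usable password hash, and it assumes it runs as root so /etc/shadow is readable.

```go
// backdoor_check.go: illustrative sketch only -- flags system accounts with an
// interactive shell or a set password, the pattern left behind by chsh/passwd backdoors.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// fields reads a colon-delimited file (passwd/shadow format) and returns its rows.
func fields(path string) [][]string {
	f, err := os.Open(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return nil
	}
	defer f.Close()
	var rows [][]string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue
		}
		rows = append(rows, strings.Split(line, ":"))
	}
	return rows
}

func main() {
	// Password hash per user from /etc/shadow (requires root to read).
	hashes := map[string]string{}
	for _, row := range fields("/etc/shadow") {
		if len(row) >= 2 {
			hashes[row[0]] = row[1]
		}
	}

	for _, row := range fields("/etc/passwd") {
		if len(row) < 7 {
			continue
		}
		user, shell := row[0], row[6]
		uid, _ := strconv.Atoi(row[2])

		// System accounts (uid 1-999 on most distros) normally have no login
		// shell and no usable password. Either one appearing is worth a look.
		system := uid > 0 && uid < 1000
		interactive := !strings.HasSuffix(shell, "nologin") && !strings.HasSuffix(shell, "false")
		hash := hashes[user]
		hasPassword := hash != "" && hash != "*" && !strings.HasPrefix(hash, "!")

		if system && (interactive || hasPassword) {
			fmt.Printf("SUSPECT: %s uid=%d shell=%s password_set=%v\n", user, uid, shell, hasPassword)
		}
	}
}
```

Any SUSPECT line is only a starting point for investigation; a few distros ship system accounts with unusual shells, so expect some noise from a check this simple.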
A question regarding how Sandfly works. Are all the individual modules (the sandflies) that are run on the target system individual binaries? Because if so, they have to be transferred and executed on the target system. Are they just placed in the tmp dir, executed, and the results sent back over SSH as JSON? I am curious. Otherwise, it seems like a very interesting product.
We use a purpose-built binary, and instructions are sent to it once it is on the host telling it what to analyze and collect. The binary is built specifically to investigate Linux, with capabilities to de-cloak rootkits, parse data, etc. Execution happens in a secured user home directory, not out of /tmp. Results on the server are JSON and can be exported to any compatible destination such as Splunk, Elastic, Postgres, Syslog, and so on. If it takes JSON, we can also send to it with our REST API. Hope that helps, and thanks for watching.
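As an illustration of the agentless pattern described in the reply above (run a scanner over SSH, get JSON back), here is a rough Go sketch. The host name, binary path, flags, and JSON fields are invented for the example and are not Sandfly's actual protocol.

```go
// ssh_collect.go: hypothetical sketch of "run remote check over SSH, parse JSON results".
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Result is a made-up shape for what a remote check might report.
type Result struct {
	Host    string `json:"host"`
	Check   string `json:"check"`
	Alert   bool   `json:"alert"`
	Details string `json:"details"`
}

func main() {
	// Hypothetical: a scanner binary was already copied into the remote user's
	// home directory (not /tmp) and is told over SSH what to check. Everything
	// here -- host, path, flags, JSON fields -- is illustrative.
	cmd := exec.Command("ssh", "analyst@server01.example.com",
		"~/.scanner/agentless --check process_hidden --output json")
	out, err := cmd.Output()
	if err != nil {
		log.Fatalf("remote run failed: %v", err)
	}

	var results []Result
	if err := json.Unmarshal(out, &results); err != nil {
		log.Fatalf("bad JSON from host: %v", err)
	}
	for _, r := range results {
		if r.Alert {
			fmt.Printf("[ALERT] %s %s: %s\n", r.Host, r.Check, r.Details)
		}
	}
}
```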
@@SandflySecurity Do you utilize eBPF to query the information? Because relying on userspace applications seems prone to error, as they could all be tampered with, no?
@@dominikheinz2297 We do not tie into the kernel using kernel hooks or eBPF, for safety reasons. These telemetry sources can cause kernel panics and performance issues. The reality is also that any telemetry source can be tampered with, including eBPF. Our approach is to analyze systems from multiple different angles, which gives very high detection coverage that is difficult to evade.
@@SandflySecurity Appreciate the detailed answers! Very interesting. From my understanding, eBPF code is very unlikely to panic the kernel because it essentially runs in a "VM" inside the kernel and is verified when it is loaded; I might be wrong on that, though. So, if I understand you correctly, you essentially transfer your custom binary, which has various functions to verify the same state. Let's take hiding of processes as an example. You might query using the ps command, another approach would be walking /proc, and maybe some other syscall could retrieve running processes. Then this information is aggregated and checked to see whether any of the results differ? That's how I understand it. So the binary performs the operations of querying for data/states, and the sandflies just instruct the binary what to query?
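The cross-check idea in this comment can be sketched in a few lines of Go. This toy example enumerates PIDs two ways, by listing /proc and by probing each possible PID with kill(pid, 0), and flags any PID visible to one view but not the other. Real rootkit de-cloaking involves far more than this; the sketch only shows the "compare multiple angles" principle.

```go
// hidden_proc.go: toy cross-check -- a PID that answers a signal probe but is
// absent from the /proc listing is a classic sign of a hidden process.
package main

import (
	"fmt"
	"log"
	"os"
	"strconv"
	"syscall"
)

func main() {
	// View 1: PIDs visible by listing /proc (what ps-style tools ultimately rely on).
	listed := map[int]bool{}
	entries, err := os.ReadDir("/proc")
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		if pid, err := strconv.Atoi(e.Name()); err == nil {
			listed[pid] = true
		}
	}

	// View 2: probe every possible PID directly with kill(pid, 0).
	// Signal 0 sends nothing but reports whether the PID exists:
	// ESRCH = no such process, EPERM = it exists but we may not signal it.
	const maxPID = 4194304 // common /proc/sys/kernel/pid_max upper bound
	for pid := 1; pid <= maxPID; pid++ {
		err := syscall.Kill(pid, 0)
		exists := err == nil || err == syscall.EPERM
		if exists && !listed[pid] {
			fmt.Printf("PID %d answers kill(0) but is missing from /proc -- possibly hidden\n", pid)
		}
	}
}
```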
@@dominikheinz2297 eBPF is much less likely to cause kernel panics than kernel hooks, but it has happened. The other issue is that once you get the telemetry data out of eBPF, how does that impact the performance and stability of the host? The more you collect, the more processing power is needed to analyze the data. Each way of collecting telemetry has pluses and minuses. By avoiding these other telemetry sources we increase reliability, safety, and speed. We also have much wider compatibility, as we can operate on systems over a decade old, embedded systems, custom kernels, etc.
With other methods you need to be extremely careful about kernel versions, and updates can break the agent or the agent can break the kernel. We simply avoid all these issues by not having an agent, which means we have far wider visibility across all Linux systems than other methods. Also, we can watch everything rather than only select systems, without fear of compatibility, stability, or performance impacts.
Our system uses various mechanisms to collect the data depending on the source we need. The mechanisms are built-in native functions; we don't call out to ps and the like because we assume the system is compromised and don't trust the results. We go and look ourselves. Results can be processed for known attacks or, in the case of drift detection, for changes we see versus what we expected: new processes started, new users, new systemd services, new modules loaded, and so on.
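A bare-bones version of the drift detection described above, restricted to local users, might look like the following Go sketch. The baseline path is hypothetical; the same diff-against-baseline idea extends to services, kernel modules, and processes.

```go
// drift_check.go: illustrative drift detection -- report users present now that
// were not present in a known-good baseline snapshot.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// usersFrom returns the usernames (first field) from a passwd-format file.
func usersFrom(path string) (map[string]bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	users := map[string]bool{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if name := strings.SplitN(sc.Text(), ":", 2)[0]; name != "" {
			users[name] = true
		}
	}
	return users, sc.Err()
}

func main() {
	// Hypothetical baseline: a copy of /etc/passwd snapshotted when the host
	// was known-good. Any user present now but not then is "drift".
	baseline, err := usersFrom("/var/lib/drift/passwd.baseline")
	if err != nil {
		fmt.Fprintln(os.Stderr, "no baseline yet:", err)
		os.Exit(1)
	}
	current, err := usersFrom("/etc/passwd")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for user := range current {
		if !baseline[user] {
			fmt.Printf("DRIFT: new user %q not present in baseline\n", user)
		}
	}
}
```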