Enabling KAPE at Scale

  • Published 26 Jul 2024
  • KAPE (Kroll Artifact Parser and Extractor) is a DFIR triage tool developed by Eric Zimmerman. KAPE can both collect digital evidence based upon a highly configurable set of target definitions and process that data with an ever-growing list of processing modules. The DFIR community is contributing new targets and modules at a frequent, steady pace. KAPE is a true game-changer; no other tool is even close.
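
    As a rough illustration of that collect-then-process workflow (a sketch, not a command taken from the webcast), the lines below collect artifacts with the KapeTriage compound target and then parse the output with the !EZParser compound module. Both names ship with the community KapeFiles sets, but the paths here are placeholders and the options should be checked against your own installation with kape.exe --help.

      # Collect a triage set from the live C: drive, then run the EZ Tools
      # parsers over whatever was collected. All paths are placeholders.
      & 'C:\Tools\KAPE\kape.exe' --tsource C: --tdest C:\Temp\kape\tout `
          --target KapeTriage `
          --module '!EZParser' --msource C:\Temp\kape\tout --mdest C:\Temp\kape\mout
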
    There has been little written about implementing KAPE at scale, and today's webcast will focus on those possibilities. KAPE has several features that allow for remote access to data and the ability to store collected data and processing output remotely. Examples of how to use the KAPE remote options will be shown and demoed. These capabilities include SFTP, Amazon S3, Microsoft Azure, UNC paths, and PowerShell.
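
    For example (a sketch only, not the exact demo commands), pointing --tdest at a UNC path sends the collection straight to a share, and KAPE's SFTP options can push a zipped collection to an SFTP server. The SFTP switch names used below (--scs, --scp, --scu, --scpw) are recalled from KAPE's documentation and should be confirmed with kape.exe --help; the server, account, and share names are placeholders.

      # Send the collection to a UNC share, one archive per machine:
      & 'C:\Tools\KAPE\kape.exe' --tsource C: --tdest "\\collector\triage\$env:COMPUTERNAME" `
          --target KapeTriage --zip $env:COMPUTERNAME

      # Or zip locally and push the archive to an SFTP server (switch names assumed):
      & 'C:\Tools\KAPE\kape.exe' --tsource C: --tdest C:\Temp\kape\tout `
          --target KapeTriage --zip $env:COMPUTERNAME `
          --scs sftp.example.com --scp 22 --scu kapeuser --scpw 'S3cret!'
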
    PowerShell Remoting has the ability to asynchronously execute commands and scripts on remote systems. Using this capability, KAPE can be executed on a large number of systems simultaneously. There were a few hurdles to clear in getting a set of PowerShell scripts that could be used in a DFIR triage situation involving many remote systems. The approach and solution will be demoed and the code shared.
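
    A minimal sketch of that approach is below. It assumes PowerShell Remoting (WinRM) is already enabled on the targets and that kape.exe has been staged on each host at C:\Tools\KAPE; the actual scripts from the webcast handle staging, credentials, and result collection more carefully.

      # Fan a KAPE collection out to many hosts with one asynchronous job.
      $computers = Get-Content -Path .\hosts.txt        # one hostname per line

      $job = Invoke-Command -ComputerName $computers -AsJob -ScriptBlock {
          # Each host collects locally and zips the result. Writing straight to a
          # UNC share from inside a remote session can hit the credential
          # "second hop" problem, so credential delegation or a later
          # copy-back step may be needed.
          & 'C:\Tools\KAPE\kape.exe' --tsource C: --tdest C:\Temp\kape\tout `
              --target KapeTriage --zip $env:COMPUTERNAME
      }

      Wait-Job -Job $job | Receive-Job     # gather per-host output and errors
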
    Through this webcast, we hope not only to provide information on KAPE functionality and examples that you can use immediately, but also to stimulate thought on using KAPE at scale and, hopefully, participation in making these scripts better.
    Speaker Bio
    Mark Hallman
    Mark has been performing computer-related investigations for over 12 years. He has led and assisted in investigations involving the identification, preservation, research, analysis, and presentation of ESI for Fortune 100 and NLJ firms across the United States, as well as governmental agencies such as the Department of Justice, the Department of Labor, and the Securities and Exchange Commission. Mark's certifications include GCFE, GCFA, GCIH, EnCE, and CCE.
    Mark was primarily responsible for building the digital forensics and e-discovery practice of a regional firm in Dallas, Texas. Responsibilities included forensic tool research and evaluation, development of ESI collection protocols, development of investigation "playbooks", and training the analyst team to apply those tools and techniques on client projects. Mark actively led and participated in hundreds of digital forensics and e-discovery projects. In addition to investigation and team training/development responsibilities, Mark has provided expert testimony in both state and federal courts.
    Mark currently works for the SANS Institute's Research Operations Center (SROC), researching, designing, developing, and testing virtual lab environments for the SANS DFIR curriculum. He also teaches the FOR500: Windows Forensic Analysis course (sans.org/FOR500) at SANS.
  • Science & Technology

COMMENTS • 2

  • @haythamcheikhali5375 • 4 years ago

    This is awesome! Thank you, Eric, for your continuous contribution to the community!

  • @wkyrouz • 3 years ago

    This is wonderful, but is there any chance we can convince Mr. Zimmerman to change the nomenclature from "Target" to "Artifact" or... pretty much anything else that doesn't get confused with endpoint/host/etc.?