(Pre)training and applying LLMs to Blockchain Transactions, Arthur Gervais (UCL/UC Berkeley)

  • Published Oct 14, 2024
  • Talk by Arthur Gervais (UCL)
    Abstract:
    Why pay tens of thousands of USD, and wait weeks, for a smart contract security audit? In this paper, we explore the potential of using large language models (LLMs) to perform smart contract security audits. We explore prompt engineering for effective security analysis, comparing the performance and accuracy of LLMs against a ground-truth dataset of 52 DeFi smart contracts that were attacked in the wild. On vulnerable contracts, our system SmartGPT achieves a hit rate of 40% on the correct vulnerability type, yet exhibits a high false-positive rate that still requires manual auditor attention. We find that SmartGPT achieves a 20% better F1-score than a random model. Extending SmartGPT is as easy as providing a new vulnerability type name along with its technical description. While there are many possible improvements, this study paves the way for faster, more cost-effective, and systematic smart contract security audits using LLMs, revolutionizing the field of smart contract security.
