- The cybersecurity industry is already seeing evidence of ChatGPT's use by criminals.
- ChatGPT can quickly generate targeted phishing emails or malicious code for malware attacks.
- AI companies could be held liable for chatbots counseling criminals since Section 230 may not apply.
Whether it is writing essays or analyzing data, ChatGPT can be used to lighten a person's workload. That goes for cybercriminals too.
Sergey Shykevich, a lead ChatGPT researcher at the cybersecurity company Check Point, has already seen cybercriminals harness the AI's power to create code that can be used in a ransomware attack.