It's no secret that AI development carries serious security risks. While governing bodies work to put regulations in place, for now it's largely up to the companies themselves to take precautions. The latest show of self-regulation comes from Anthropic, Google, Microsoft and OpenAI, which have jointly created the Frontier Model Forum, an industry-led body focused on safe, careful AI development. The forum defines frontier models as any "large-scale machine-learning models" that exceed current capabilities and can perform a wide variety of tasks.