Meta says it may stop development of AI systems it deems too risky
1. Meta plans to limit internal access to high-risk AI systems and may halt development of those it classifies as critical-risk.
2. The Frontier AI Framework sorts risky systems into two tiers: "high-risk" and "critical-risk."
3. Both tiers cover systems that could aid in cybersecurity, chemical, or biological attacks; critical-risk systems are those whose harms could not be mitigated.
4. Meta contrasts its open release approach with competitors that keep their models more tightly restricted.
5. Risk classifications rest on the judgment of internal and external reviewers rather than on a single quantitative test.