AI Safety

24 items tagged with this topic

Official Sources from Anthropic Newsroom

Claude for Creative Work

Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.

Official Sources from OpenAI News

Our commitment to community safety

Learn how OpenAI protects community safety in ChatGPT through model safeguards, misuse detection, policy enforcement, and collaboration with safety experts.

Official Sources from Anthropic Newsroom

Anthropic Sydney office

Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.

Podcasts & Newsletters from ChinaTalk

Quantum 101

What exactly is quantum computing?

Official Sources from Anthropic Newsroom

Introducing Claude Opus 4.7

Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.

Official Sources from OpenAI News

Responsible and safe use of AI

Learn how to use AI responsibly with best practices for safety, accuracy, and transparency when using tools like ChatGPT.

Official Sources from OpenAI News

Introducing the Child Safety Blueprint

Discover OpenAI’s Child Safety Blueprint—a roadmap for building AI responsibly with safeguards, age-appropriate design, and collaboration to protect and empower young people online.

Official Sources from OpenAI News

Industrial policy for the Intelligence Age

Explore our ambitious, people-first industrial policy ideas for the AI era—focused on expanding opportunity, sharing prosperity, and building resilient institutions as advanced intelligence evolves.

Official Sources from Google DeepMind Blog

Protecting people from harmful manipulation

Google DeepMind researches AI's harmful manipulation risks across areas like finance and health, leading to new safety measures.