<p dir="ltr">In-person attacks on religious institutions in recent months have underscored the need to reflect on the role of artificial intelligence in both spreading and counteracting hate speech online. On the one hand, algorithms can amplify the spread of extreme social-media posts to increase engagement, and large language models sometimes produce biased outputs, reflecting prejudices in their training data. On the other hand, AI can detect online hate speech quickly and on a large scale.</p><p dir="ltr">But AI cannot prevent real-world violence on its own. Political and legal mechanisms to govern these tools effectively are urgently needed. Oversight, diverse input in designing data sets and regular audits to ensure that AI supports online safety rather than amplifying prejudice are all essential.</p><p dir="ltr">Germany’s 2017 Network Enforcement Act, which requires platforms to remove ‘obviously illegal’ content within 24 hours, has inspired private initiatives in the country that could lead the way in this work. Their AI-powered monitors check social media for hate speech, which can then be removed or filtered out (go.nature.com/4h5fv7w). The next step is to develop an active response to hate speech: factual corrections, explanations of harm and prompts to reconsider language. These interventions might reduce hostile messaging and encourage reflection, in the hope that adequately designed technology can help to steer online conversations towards understanding instead of outrage.</p>
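<p dir="ltr">A minimal sketch of what such a detect-and-respond step could look like is shown below, using an off-the-shelf text classifier. The model name, label scheme, threshold and templated counter-message are illustrative assumptions, not the tooling used by the initiatives described above.</p>
<pre><code>
# Illustrative sketch only: flag a post with a publicly available hate-speech
# classifier, then draft an "active response" (a prompt to reconsider the
# language) rather than silently removing the post.
# Assumptions: the Hugging Face model name below is a placeholder for any
# suitable classifier; its labels and the 0.8 threshold are illustrative.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="facebook/roberta-hate-speech-dynabench-r4-target",  # assumed public model
)

RESPONSE_TEMPLATE = (
    "This post may contain hateful language. Consider rephrasing it: {suggestion} "
    "Hostile wording can harm the people it targets and rarely persuades readers."
)

def moderate(post: str, threshold: float = 0.8) -> dict:
    """Classify a post and, if it is flagged, draft a counter-message."""
    result = classifier(post, truncation=True)[0]  # e.g. {'label': 'hate', 'score': 0.97}
    flagged = result["label"].lower() == "hate" and result["score"] >= threshold
    return {
        "flagged": flagged,
        "score": result["score"],
        "response": RESPONSE_TEMPLATE.format(
            suggestion="state your disagreement without attacking a group of people."
        )
        if flagged
        else None,
    }

if __name__ == "__main__":
    print(moderate("Example post text goes here."))
</code></pre>
<p dir="ltr">In practice, a template-based reply as sketched here is only a starting point; generating factual corrections and explanations of harm would require further design and careful human oversight.</p>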