If you happen to be someone who understands how dangerous weapons are designed or handled, the artificial intelligence industry may be looking for you. In a surprising twist, leading technology companies are now recruiting experts in chemical weapons, explosives and radiological threats. The goal is not to build such weapons but to prevent AI tools from helping others do so. According to a BBC report, US AI firm Anthropic has advertised a role requiring expertise in chemical weapons defence and dirty bombs, while ChatGPT developer OpenAI is offering salaries of up to $455,000 for researchers focused on biological and chemical risks.
Why Anthropic and OpenAI are hiring experts in dirty bombs
As AI systems become increasingly capable of answering complex technical questions, companies are facing a new challenge. What if someone attempts to use these systems to obtain information about building weapons?

Anthropic's job listing seeks candidates with experience in chemical weapons or explosives defence, along with knowledge of radiological dispersal devices, commonly known as dirty bombs. The company says the role is intended to ensure that its AI models cannot be manipulated into generating harmful instructions. According to the BBC, the expert would help strengthen safety policies and technical guardrails designed to prevent users from extracting dangerous information.

Anthropic is not the only company adopting this approach. OpenAI, the developer behind ChatGPT, has also advertised a position for a researcher specialising in biological and chemical risks. The role focuses on studying how advanced AI models could potentially be misused and developing systems to prevent such behaviour. The company is offering salaries of up to $455,000 for experts who can help address these risks.

The hiring reflects growing recognition within the AI industry that powerful language models could inadvertently generate highly sensitive technical knowledge if proper safeguards are not in place.
Experts warn of regulatory gaps
While companies say these roles are meant to strengthen safeguards and prevent misuse, some researchers argue that the broader implications of exposing AI systems to sensitive weapons-related knowledge deserve closer examination. As AI models become increasingly capable of synthesising complex technical information, experts question whether the risk of misuse can ever be fully eliminated once such knowledge becomes part of safety testing or evaluation.

Dr Stephanie Hare, a technology researcher and co-presenter of the BBC's AI Decoded programme, has questioned whether it is entirely safe for AI systems to interact with information related to explosives or radiological weapons, even when the intention is to build protective guardrails. She also notes that there is currently no dedicated international treaty or regulatory framework governing how artificial intelligence systems should handle such sensitive knowledge.
Guardrails becoming a priority for AI developers
AI developers have increasingly warned that their technology could pose serious risks if misused. As a result, many companies are investing heavily in safety research.

Anthropic has previously stated that its AI systems should not be used in autonomous weapons or mass surveillance. Its co-founder Dario Amodei has argued that the technology is not yet reliable enough for such applications.

By hiring specialists who understand chemical weapons and explosive threats, companies hope to design safeguards that prevent AI from generating harmful instructions while allowing the technology to remain useful for research, education and legitimate problem-solving.

The unusual job listings reflect a growing reality of the AI era. As the technology becomes more powerful, the challenge is not just building smarter systems but ensuring they cannot be turned into dangerous tools.