New Delhi: Sam Altman-run OpenAI now allows its AI technologies to be used for "military and warfare" applications. The company has reportedly deleted language expressly prohibiting the use of its technology for military purposes from its usage policy, reports The Intercept.



“We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs,” an OpenAI spokesperson was quoted as saying.


“A principle like 'Don't harm others' is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples,” the spokesperson added.


The real-world consequences of the policy change are, however, unclear. According to the report, there are several "killing-adjacent tasks" that a large language model (LLM) like ChatGPT could augment, such as writing code or processing procurement orders.


OpenAI's platforms could be of great use to army engineers looking to summarise decades of documentation of a region's water infrastructure, reports TechCrunch. While OpenAI has softened its stance on military use, it still bans the use of its AI for weapons development.