Microsoft taking steps to ban hacking groups using its AI products

In its report, Microsoft said it had found evidence of Russian, Iranian, Chinese, and North Korean hacking groups using its AI products.

Image via OpenAI

As artificial intelligence technology has boomed in popularity over the last year or so, so too have attempts by bad actors to use it maliciously, as Microsoft pointed out in a report this week. The company announced a ban on state-backed hacking groups using its AI tools after discovering that such groups had been using products like those from OpenAI for hacking purposes.

Microsoft released its report and accompanying ban earlier this week, as reported by Reuters. According to the report, Microsoft found that state-backed hacking groups working on behalf of Russia, Iran, China, and North Korea had made use of OpenAI's tools and other AI products, honing their hacking skills and using large language models to deceive targets. Alongside the report, Microsoft issued a ban on such use of its products, hoping to curb the use of its AI by military and government-associated groups such as Russian military intelligence and Iran's Revolutionary Guard.

Sam Altman of OpenAI and Satya Nadella of Microsoft on stage
OpenAI and Microsoft have worked closely since before the AI boom, with Microsoft making several key investments in the AI company.
Source: Justin Sullivan/Getty Images

Microsoft’s move to secure its AI products against malicious use is just another wrinkle in the dangers posed by unregulated AI, which leading tech experts have regularly weighed in on. Notably, Elon Musk has shared his thoughts on the dangers of unregulated AI several times, and even joined Steve Wozniak and other industry leaders in calling for a pause on AI system training until safeguards could be developed. President Joe Biden followed in October 2023 with an executive order directing the establishment of AI safety standards.

AI technology is still, in many ways, a wild west, and one wonders whether Microsoft has the resources to put a stop to bad actors utilizing its products, especially at a government level. Nonetheless, it will be interesting to see whether this ban has an effect on such activity. Stay tuned as we continue to follow this story for the latest updates.

Senior News Editor

TJ Denzer is a player and writer with a passion for games that has dominated a lifetime. He found his way to the Shacknews roster in late 2019 and has worked his way up to Senior News Editor since. Between news coverage, he also notably aids in livestream projects like the indie game-focused Indie-licious, the Shacknews Stimulus Games, and the Shacknews Dump. You can reach him at tj.denzer@shacknews.com and also find him on Twitter @JohnnyChugs.

From The Chatty
    February 14, 2024 1:35 PM

    TJ Denzer posted a new article, Microsoft taking steps to ban hacking groups using its AI products

      February 14, 2024 2:32 PM

      Are they banning hacking groups FROM using its AI products or are they banning hacking groups FROM other things using their AI products? Shame I never learned how to read or else maybe the article would answer that question for me.

        February 14, 2024 2:35 PM

        tldr; state sponsored threats have been using AI for social engineering. MS trying to ban them.
