News & Analysis

Big Techs Share AI Security Initiatives

Microsoft, Google and OpenAI have shared the progress they’ve made on security pledges

Ever since OpenAI launched its AI chatbot ChatGPT in the winter of 2022, there has been growing concern over the security implications of the technology, with questions raised about how bad actors could use it to grind existing systems to a halt. However, the Big Tech companies have argued that AI is secure by design and have committed to reinforcing that belief. 

Last year, the major players in the AI landscape, including Microsoft, Google and OpenAI, pledged to enhance safety and security initiatives around AI and to improve transparency in its development. This followed reports of AI being banned at several companies as well as growing concerns over the potential risks around GenAI. 

A year after making these commitments to secure AI, while also emphasizing its importance in improving service quality across the technology space, these tech majors have once again revealed some of their efforts in this direction, especially around funding infrastructure and developing tools and training for defenders. Here’s a brief look at their efforts: 

What’s Microsoft saying now?

The tech giant introduced the Secure Future Initiative in November, in which it committed to building the secure foundations required for the development of AI and beyond. Microsoft highlighted its intention of delivering software that is “secure by design, by default, in deployment and in operation,” besides helping customers adopt better security defaults. 

Now, Microsoft has revealed the principles that guide its vendor policy and the actions it takes to mitigate risks around its AI tools and APIs. Here’s what the principles, built on the company’s stated Responsible AI practices and the Azure OpenAI Code of Conduct, state: 

  • Upon detecting the use of any Microsoft AI services or systems by malicious threat actors, Microsoft will take appropriate action to disrupt their activities.
  • Microsoft will notify other AI service providers and share relevant data when the company detects a threat actor’s use of another service provider’s AI services or systems.
  • Microsoft will collaborate with other stakeholders to regularly exchange information.
  • It will inform the public and stakeholders about actions taken under these threat actor principles.

Google’s new cyber defense initiatives

For its part, Google came up with its own Secure AI Framework last June, whose core elements include: 

  • Expanding strong security foundations to the AI ecosystem,
  • Extending detection and response to bring AI into an organization’s threat universe,
  • Automating defenses to keep pace with existing and new threats,
  • Harmonizing platform-level controls to ensure consistent security across the organization,
  • Adapting controls to adjust mitigations and create faster feedback loops for AI deployment, and
  • Contextualizing AI system risks in surrounding business processes.

Now, Google has followed that up by announcing a new AI Cyber Defense Initiative, which includes fresh commitments to invest in AI-ready infrastructure, release new tools for defenders, and launch new research and AI security training. The foundation of this initiative is the concept of secure by design and secure by default. 

The company also underscored its plans to continue investing in its AI-ready network of global data centers, expand its “AI for Cybersecurity” cohort for startups, put $15 million into the Cybersecurity Seminars program, and advance research aimed at generating breakthroughs in AI-powered security. 

Amidst all this, OpenAI waxes eloquent again

As for OpenAI, which counts Microsoft as a major investor, the narrative continues to be about going all in, despite the limited capabilities of current models to generate malicious cybersecurity activity. The AI vendor says its multi-pronged approach to combating the threat includes the following: 

  • Invest in technology and teams to identify, monitor and disrupt sophisticated threat actors’ activities.
  • Collaborate with industry partners and other stakeholders to regularly exchange information about malicious use of AI.
  • Take lessons learned from the real-world use, abuse and misuse of AI by threat actors and share its insights on the potential misuse of AI with the industry.

The companies also shared the methods they are using to protect themselves from AI-related cyber threats. In a blog post, Vasu Jakkal, corporate VP of security, compliance, identity and management at Microsoft, noted that these methods include AI-powered threat detection to spot changes in how resources or traffic are being used on the network; behavioral analytics to detect risky logins and anomalous behavior; machine learning (ML) models to identify risky logins and malware; the zero trust principle, under which every access request must be fully authenticated, authorized, and encrypted; and device health checks, which must pass before a device can connect to the corporate network.
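To illustrate the behavioral-analytics idea Jakkal describes, here is a minimal, hypothetical sketch of flagging risky logins by scoring how far a new sign-in deviates from a user’s historical pattern. The signals, weights and threshold are illustrative assumptions, not Microsoft’s actual detection logic.

```python
# Hypothetical sketch: score a login attempt against a user's history.
# The features and weights below are illustrative only.
from dataclasses import dataclass

@dataclass
class Login:
    user: str
    country: str
    hour: int          # local hour of day, 0-23
    new_device: bool

def risk_score(history: list[Login], attempt: Login) -> float:
    """Return 0.0 (normal) to 1.0 (highly anomalous) for a sign-in attempt."""
    if not history:
        return 1.0  # no baseline yet: treat as risky
    seen_countries = {login.country for login in history}
    usual_hours = {login.hour for login in history}
    score = 0.0
    if attempt.country not in seen_countries:
        score += 0.5   # sign-in from a never-seen country
    if attempt.hour not in usual_hours:
        score += 0.2   # unusual time of day
    if attempt.new_device:
        score += 0.3   # unrecognized device
    return min(score, 1.0)

history = [Login("alice", "US", 9, False), Login("alice", "US", 10, False)]
attempt = Login("alice", "RO", 3, True)
if risk_score(history, attempt) >= 0.7:
    # In a zero trust model, a high score would trigger step-up authentication.
    print("Risky login: require additional verification")
```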

As for Google, it provided details of how it integrates AI into its products to enhance security. Gmail uses RETVec, a multilingual neural text-processing model, to improve spam detection and reduce false positives, while the malware analysis tool VirusTotal leverages AI to review potentially malicious files.
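Google did not publish its production pipeline in this announcement, but the general vectorize-then-classify shape of such a spam filter can be sketched as below. The toy training data and the TF-IDF plus logistic regression stand-in are assumptions for illustration, not the actual RETVec or Gmail stack.

```python
# Hypothetical illustration of a vectorize-then-classify spam filter.
# A simple TF-IDF + logistic regression model stands in for Gmail's
# neural RETVec-based pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data (illustrative only).
texts = [
    "Win a free prize, click this link now",
    "Your invoice for last month is attached",
    "Urgent: verify your account password immediately",
    "Meeting moved to 3pm, see updated agenda",
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = legitimate

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Claim your free prize today"]))  # expected: [1]
```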

In addition, Google’s open-source security team uses Gemini to improve the code coverage of open-source projects, its detection and response team applies generative AI to produce incident summaries, and its Mandiant team is using generative AI to help identify threats faster, eliminate toil, and better scale talent and expertise.
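As a rough idea of what LLM-generated incident summaries look like in practice, the sketch below asks a Gemini model to turn raw alerts into a short report. It assumes the publicly available google-generativeai Python SDK; the model name, prompt and alert fields are illustrative and not Google’s internal tooling.

```python
# Hypothetical sketch: draft an incident summary from raw alerts with an LLM.
# Assumes the google-generativeai SDK and an API key in GEMINI_API_KEY.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

alerts = [
    "02:14 UTC - EDR: credential dumping detected on host fin-db-02",
    "02:16 UTC - IAM: service account 'backup-svc' granted admin role",
    "02:21 UTC - Netflow: 4.2 GB egress from fin-db-02 to unknown IP",
]

prompt = (
    "Summarize the following security alerts as a short incident report "
    "for an on-call responder. Include a likely attack narrative and "
    "recommended next steps:\n" + "\n".join(alerts)
)

response = model.generate_content(prompt)
print(response.text)
```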