Latest Microsoft’s Agent 365 Tries To Be The AI Bot Boss 2025

Latest Microsoft’s Agent 365 Tries To Be The AI Bot Boss 2025

A new tool from Microsoft called Agent 365 is designed to help businesses control their growing collection of robotic helpers.

Agent 365 is not a platform for making enterprise AI tools; it’s a way to manage them, as if they were human employees. Companies using generative AI agents in their digital workplace can use Agent 365 to organize their growing sprawl of bots, keep tabs on how they’re performing, and tweak their settings. The tool is rolling out today in Microsoft’s early access program.

Essentially, Microsoft created a trackable workspace for agents. “Tools that you use to manage people, devices and applications today, you'd want to extend them to run agents as well in the future,” says Charles Lamanna, a president of business and industry for Microsoft’s Copilot, its AI chatbot.

Lamanna envisions a future where companies have many more agents performing labor than humans. For example, if a company has 100,000 employees, he sees them as using “half a million to a million agents,” ranging in tasks from simple email organization to running the “whole procurement process” for a business. He claims Microsoft internally uses millions of agents.

This army of bots, with permission to take actions inside a company’s software and automate aspects of an employee’s workflow, could quickly grow unwieldy to track. A lack of clear oversight could also open businesses up to security breaches. Agent 365 is a way to manage all your bots, whether those agents were built with Microsoft’s tools or through a third-party platform.

Agent 365’s core feature is a registry of an organization’s active agents all in one place, featuring specific identification numbers for each and details about how they are being used by employees. It’s also where you can change the settings for agents and what aspects of a business’s software each one has permission to access.

The tool includes security measures to scan what every agent is doing in real-time. “As data flows between people, agents, and applications,” says Lamanna “It stays protected.” As more businesses run pilot programs testing out AI agents, more questions arise about how safe the technology is to implement into core workflows that often contain sensitive data. A “prompt injection attack” where a website or app has hidden messages that try to take control of an agent or change its outputs is just one example of the vulnerabilities found in existing AI agents.

Lamanna believes business leaders who are wary about the

Source: Wired