On Wednesday, Microsoft researchers released a new simulation environment designed to test artificial intelligence agents, along with new research showing that current agent models may be vulnerable to manipulation. Conducted in collaboration with Arizona State University, the research raises new questions about how well AI agents will perform when working unsupervised — and how quickly AI companies can deliver on the promises of an agentic future.
The simulation environment, which Microsoft calls the Magentic Marketplace, is a synthetic platform for experimenting with the behavior of AI agents. A typical experiment might involve a customer agent trying to order dinner according to a user's instructions, while agents representing various restaurants compete to win the order.
The team's initial experiments involved 100 separate customer-side agents interacting with 300 business-side agents. Because the marketplace's source code is open source, it should be straightforward for other groups to adapt the code to run new experiments or replicate findings.
Ece Kamar, managing director of Microsoft Research’s AI Frontiers Lab, says this kind of research will be critical to understanding the capabilities of AI agents. “There’s really a question about how the world is going to change by having these agents working together and talking to each other and negotiating,” Kamar said. “We want to understand these things deeply.”
The initial research looked at a mix of leading models, including GPT-4o, GPT-5, and Gemini-2.5-Flash, and found some surprising weaknesses. Specifically, the researchers identified several techniques that businesses could use to manipulate customer agents into buying their products. They also observed a marked drop in efficiency as a customer agent was given more options to choose from, overwhelming the agent's attention span.
“We want these agents to help us process a lot of options,” says Kamar. “And we’re seeing the current models really get overwhelmed by having too many options.”
Agents also ran into trouble when asked to collaborate toward a common goal, apparently unsure which agent should play which role in the collaboration. Performance improved when the models were given clearer instructions on how to work together, but the researchers still saw the models' inherent capabilities as needing improvement.
“We can guide the models, like we can tell them, step by step,” Kamar said. “But if we are inherently testing their collaboration capabilities, I would expect those models to have those capabilities by default.”
