I ask friends all the time if they trust ChatGPT or Gemini, especially when they tell me they feed these AI chatbots medical test results, deeply personal thoughts, and even sensitive work issues. But after talking with Microsoft at RSAC’s 2026 cybersecurity conference, I realized I’ve been approaching that question all wrong. You probably have, too.
Ram Shankar Siva Kumar, Data Cowboy and AI Red Team Lead at Microsoft, says most people ask how to trust an AI. That is, they try to learn enough about it and its inner workings to make a call on its dependability. Their focus is on the model and its code.
Instead, Kumar suggests we should ask: “Do I trust the developer?”
This reframing of trust came out of a quick conversation about agentic AI, the kind of AI most likely to appeal to everyday consumers. As Kumar says, it helps “with the drudgery of life.” It’s not hard to see the allure of having an AI agent as a digital assistant, able to handle multi-step tasks with little input. But Kumar also expressed concern that most consumers don’t know some AI projects just aren’t ready for prime time yet.
For example, if an AI agent has unrestricted access to your entire digital life, you may not realize that safeguards aren’t in place to prevent big mistakes and possibly irreparable damage. (Kumar and I ended up calling this the “YOLO model” of AI.) There’s proof of this in the wild—we’ve already seen repeated stories of AI agents deleting files they weren’t asked to touch, with one prominent recent example being a Meta exec losing 200 emails to OpenClaw.
Of course, you can approach agentic AI use more carefully, as my colleague Ben Patterson explains. But his suggestions require living and breathing AI much more than most people do—my friends, family, and acquaintances just plop their questions into a chatbot or use AI summary buttons without forming a game plan first.
What’s easier is to simply ask yourself if you believe the developer of the AI model will deliver on the product’s marketing. Many AI development teams can truthfully say a new release is their most advanced model yet. But on an objective level, is it truly advanced in both features and protection from exploits?
The answer to this question isn’t necessarily to cut yourself off from interesting new tools. Rather, exercise sharp judgment about what and whom you trust—and to what degree.