Here’s the problem with AI agents: accountability.

AI agent companies are popping up like mushrooms. They seem to be the next step in AI development.

At the same time, there is an ongoing debate: what is the difference between an AI agent and regular software? I think the difference is agency. An AI agent can decide things on its own, rather than just following simple if-else logic.
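To make that distinction concrete, here is a minimal sketch in Python. The `call_llm` helper is hypothetical, a stand-in for whatever LLM API you would actually use; the point is only *who* makes the decision.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    return "support"  # placeholder response so the sketch runs

# Regular software: the developer enumerates every branch up front.
def route_ticket_rules(ticket: str) -> str:
    if "refund" in ticket.lower():
        return "billing"
    elif "password" in ticket.lower():
        return "support"
    return "general"

# Agent: the model itself chooses the action, including ones
# the developer never explicitly anticipated.
def route_ticket_agent(ticket: str) -> str:
    return call_llm(
        "You can route a ticket to: billing, support, or general.\n"
        f"Ticket: {ticket}\n"
        "Reply with exactly one queue name."
    ).strip()

print(route_ticket_rules("I forgot my password"))  # always "support"
print(route_ticket_agent("I forgot my password"))  # whatever the model decides
```

In the first function you can point at the exact line that made the decision; in the second you cannot, and that is exactly where the accountability question starts.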

And yet, I still haven’t seen any good examples of AI agents that are genuinely useful and actually in use.

And I’m not a skeptic per se; I use AI with great pleasure. For example, I recently used ChatGPT’s deep research feature again, and it worked very well. But I am still the one who writes the prompt, and I decide what happens to its output.

I think the problem is that those AI agents have agency but no accountability. If something goes wrong, who can you hold accountable?

Ultimately, the person who deployed the agent will always be responsible for the results of that agent.

That’s why I think it will take a while before we see agents really taking over work. We first need to be more certain about how well they work. Maybe if you’re a solopreneur you can use these agents; you’ll just blame yourself when something goes awry. But in companies, no one wants to be accountable for something they can’t fully control, especially when, more than 900 days after ChatGPT first launched, there is still no solution to hallucination.

(Fun fact: the one person I follow on LinkedIn who claims to have 20 AI agents ‘working for him’ is the founder of an AI agent company.)

Making my own agents

Despite these concerns, I’m actually quite interested in building and using AI agents myself. I’m planning to experiment with n8n to create a useful agent soon, though I haven’t decided yet which specific problem it will solve.

I believe that by starting small and building something practical, I can better understand both the potential and limitations of AI agents. And hopefully I can find a way to make them truly useful while keeping accountability in mind.
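For what that could look like in practice, here is a minimal sketch of the accountability layer I have in mind. This is not n8n code, just Python, and the file name and helpers are placeholders: every action the agent proposes goes through an explicit human approval gate, and every decision, approved or not, lands in an append-only audit log, so the person who deployed the agent can always trace what happened.

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = "agent_audit.jsonl"  # placeholder path for an append-only log

def log_decision(action: str, details: dict, approved: bool) -> None:
    """Record every proposed action, so there is always a trail to audit."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "details": details,
        "approved": approved,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

def require_approval(action: str, details: dict) -> bool:
    """Human-in-the-loop gate: the deployer signs off before anything runs."""
    print(f"Agent proposes: {action} with {details}")
    return input("Approve? [y/N] ").strip().lower() == "y"

def run_action(action: str, details: dict) -> None:
    approved = require_approval(action, details)
    log_decision(action, details, approved)
    if approved:
        pass  # execute the action here (send the email, call the API, ...)

run_action("send_email", {"to": "customer@example.com", "subject": "Refund"})
```

An approval gate like this obviously doesn’t scale to 20 agents working unattended, but that is sort of the point: until hallucination is solved, I’d rather have an agent that asks than one that acts.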