Overview
Hermes is an open-source personal AI agent that you run on your own server. It connects to messaging platforms, builds persistent memory about you over time, and executes tasks by delegating to specialized subagents. Six weeks after launch, it has 57,200 GitHub stars and has shipped six major releases in 21 days. The project is growing faster than almost any AI tool in recent memory.
The star count matters less than what it signals. GitHub stars are a proxy for developer attention, and 57,000 in six weeks means Hermes captured the imagination of a large slice of the technical community. These are people who have the skills to run their own infrastructure and were waiting for a project worth the effort. Hermes is that project for a substantial number of them.

What Makes It Different
The key differentiators are memory and adaptability. Hermes builds a persistent model of the user from interactions over time: communication style, recurring tasks, preferences, context that would otherwise need to be re-explained in every session. This memory is stored locally on the user's server, not in a cloud service. It belongs to the user.
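A locally stored, persistent memory model can be sketched in a few lines. This is a hypothetical illustration, not Hermes' actual schema or API: the `MemoryStore` class and its `facts` table are invented for the example, and the only point it demonstrates is that the data lives in a plain local database file the user controls.

```python
import sqlite3

# Hypothetical sketch of a local, persistent memory store.
# The class name, table, and methods are illustrative, not Hermes' API.
class MemoryStore:
    def __init__(self, path=":memory:"):
        # On a real server, path would be a file on local disk,
        # e.g. "/var/lib/hermes/memory.db" (hypothetical location).
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS facts ("
            " topic TEXT, fact TEXT,"
            " learned_at TEXT DEFAULT CURRENT_TIMESTAMP)"
        )

    def remember(self, topic, fact):
        self.db.execute(
            "INSERT INTO facts (topic, fact) VALUES (?, ?)", (topic, fact)
        )
        self.db.commit()

    def recall(self, topic):
        rows = self.db.execute(
            "SELECT fact FROM facts WHERE topic = ?", (topic,)
        )
        return [r[0] for r in rows]

store = MemoryStore()
store.remember("style", "prefers short answers")
print(store.recall("style"))  # ['prefers short answers']
```

Because the store is an ordinary database file on the user's own server, it can be inspected, backed up, or deleted without involving any cloud service.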
Auto-generated skills are the other standout feature. When Hermes encounters a task it cannot handle natively, it generates a skill (a small code package for that capability) and adds it to its repertoire. The next time the task comes up, the skill is there. The agent learns from use without requiring the user to configure anything. This is closer to how a competent human assistant operates than anything in the closed-source AI agent space.
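The generate-on-miss loop described above can be sketched as follows. Everything here is illustrative: the `Agent` class, the `skills` dictionary, and the stubbed `generate_skill` stand in for what would, in the real system, involve a model writing and sandboxing an actual code package.

```python
# Hypothetical sketch of the "generate a skill on capability miss" loop.
# Names like Agent and generate_skill are illustrative, not Hermes' API.
class Agent:
    def __init__(self):
        self.skills = {}  # task name -> callable

    def generate_skill(self, task):
        # In the real system this step would have a model author a small
        # code package; here it is stubbed with a trivial function.
        return lambda payload: f"handled {task}: {payload}"

    def handle(self, task, payload):
        if task not in self.skills:
            # Capability miss: generate the skill once...
            self.skills[task] = self.generate_skill(task)
        # ...and reuse it on every subsequent request.
        return self.skills[task](payload)

agent = Agent()
agent.handle("resize-image", "photo.png")  # first call generates the skill
assert "resize-image" in agent.skills      # skill persists for next time
```

The essential property is that the second call for the same task skips generation entirely, which is why the brief pause users notice happens only once per new capability.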
The platform coverage is broad: 16 messaging platforms including Slack, Discord, Telegram, and WhatsApp. Support for over 400 AI models from different providers. Delegation to subagents for tasks that benefit from specialization. The architecture assumes that no single model is best for every task, and it routes accordingly. That flexibility is part of why the project resonates with developers who are skeptical of vendor lock-in.
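The routing idea, that no single model is best for every task, reduces to a dispatch table at its simplest. The table below is entirely made up for illustration; the article does not describe Hermes' actual routing logic or model names.

```python
# Hypothetical task-type -> model routing table. The provider/model
# names are placeholders, not real Hermes configuration.
ROUTES = {
    "code": "provider-a/large-code-model",
    "chat": "provider-b/fast-chat-model",
    "vision": "provider-c/multimodal-model",
}
DEFAULT_MODEL = "provider-b/fast-chat-model"

def route(task_type: str) -> str:
    # Unknown task types fall back to a general-purpose default.
    return ROUTES.get(task_type, DEFAULT_MODEL)

assert route("code") == "provider-a/large-code-model"
assert route("poetry") == DEFAULT_MODEL
```

With 400+ models available, the practical value of this indirection is that swapping providers means editing one table rather than rewriting agent logic, which is exactly the lock-in resistance the paragraph describes.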
Security and Infrastructure
Security is where Hermes has invested noticeably more than comparable open-source projects. Five sandbox backends with container hardening isolate code execution from the host system. Active blocks prevent secret exfiltration, a real attack vector for any agent with file system access. MCP (Model Context Protocol) OAuth 2.1 with PKCE handles authentication to external services securely. Malware scanning runs on every skill downloaded from ClawHub, the community skills marketplace.
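PKCE, the piece of OAuth 2.1 mentioned above, is worth seeing concretely: the client generates a random verifier, sends only its SHA-256 hash (the challenge) with the authorization request, and reveals the verifier when exchanging the code, so an intercepted authorization code is useless on its own. The snippet below follows RFC 7636's S256 method using only the standard library; it shows the mechanism, not Hermes' implementation.

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code_verifier and S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> 43-character base64url verifier (padding stripped).
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # Challenge is the base64url-encoded SHA-256 of the verifier,
    # sent with code_challenge_method=S256.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
print(len(verifier))  # 43, within RFC 7636's 43-128 character range
```

The verifier never leaves the client until the token exchange, which is why PKCE protects even public clients that cannot keep a secret.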
The Camofox browser, used for web tasks, runs in a hardened container with network isolation. None of this is security theater. Someone thought carefully about the attack surface of a persistent agent with messaging access and file system permissions, and built defenses accordingly.
Hermes is built by Nous Research, which raised $65 million earlier this year. That funding provides runway to invest in the security and infrastructure work that most open-source projects skip. The MIT license keeps the codebase open while the company builds commercial services on top.
The Open Source vs Closed AI Agent Battle
Claude Code, OpenAI's Codex agent, and Google's Gemini Code Assist are all closed-source products hosted by their respective companies. Using them means trusting the vendor with your data, accepting their pricing, and operating within their capability constraints. For individual developers and small teams, that tradeoff is often fine. For enterprises with data governance requirements or organizations that want to run in air-gapped environments, it is frequently not acceptable.
The MIT license on Hermes directly addresses the enterprise adoption barrier. MIT lets companies fork, modify, and deploy Hermes without licensing fees or vendor relationships. Legal teams can audit the code. Security teams can run penetration tests. Procurement teams do not need to negotiate with a startup. These are not small considerations for organizations that move slowly on software adoption.
The commercial tension is real but manageable. Nous Research needs revenue to sustain development. The model is likely to follow the pattern of successful open-source infrastructure companies: core agent open and free, managed cloud hosting, enterprise support contracts, and premium features behind a paid tier. Red Hat built a billion-dollar business on this model with Linux. HashiCorp did it with Terraform. It works when the open-source product is genuinely better than the closed alternatives.
How You Actually Deploy It
The minimum deployment is a server running Docker, a domain name pointing at it, and a reverse proxy to handle HTTPS. The Hermes documentation provides a Docker Compose file that brings up the agent, the memory database, and the skill execution sandbox with a single command. On a fresh Ubuntu 24.04 server, the setup takes about 20 minutes if you have the prerequisites in place.
Connecting your first messaging platform requires generating an API token in the respective service and entering it in the Hermes admin interface. Slack takes the most configuration because Slack's API requires creating an app with specific permission scopes. Telegram is the simplest: generate a bot token from BotFather, paste it in, done. WhatsApp requires a Meta Business account, which adds friction for individual users but is standard for organizations.
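For the Telegram path, a token from BotFather can be sanity-checked before pasting it into the admin interface by calling the Telegram Bot API's `getMe` endpoint, which returns the bot's identity for a valid token. This is a standalone check against Telegram's public API, not part of Hermes setup; the token shown is a placeholder.

```python
import json
import urllib.request

def getme_url(token: str) -> str:
    # Telegram Bot API: GET /bot<token>/getMe returns the bot's identity.
    return f"https://api.telegram.org/bot{token}/getMe"

def check_token(token: str) -> dict:
    # Returns a payload like {"ok": true, "result": {...}} for a valid
    # token; an invalid token yields an HTTP 401 error.
    with urllib.request.urlopen(getme_url(token), timeout=10) as resp:
        return json.loads(resp.read())

# "123456:ABC-placeholder" is a dummy token; substitute your own from BotFather.
print(getme_url("123456:ABC-placeholder"))
```

If `check_token` returns `"ok": true`, the same token can be pasted into Hermes and the bot is live.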
The first-use experience after setup is deliberately simple. Send the agent a message introducing yourself and describing what you typically need help with. Hermes will ask follow-up questions to seed its memory model. Within a few interactions, it starts making inferences and anticipating needs. The skill generation feature activates automatically when you ask for something outside its current capabilities. You will notice it happening: the agent will pause briefly, generate the skill, and then complete the task. Over days and weeks, the agent's capability surface expands to match your specific workflow.


