Your Competitor’s New Employee Might Not Be Human
Subterra uses custom-forked AI agent systems built around Hermes Agent and OpenClaw to support blog writing, CRM workflows, documentation, software development, proposals, research, and internal operations. These systems create real leverage, but improper deployment can be catastrophic without strong security, permissions, and human review.

Most companies are still using AI like a better search bar.
They open a chatbot, ask a question, copy the answer, and move on.
That is useful, but it is not where this technology is heading.
At Subterra, we are using AI more like an operating layer inside the company. We run many specialized agents across our workflows to help with blog posting, CRM support, documentation, software development, proposal writing, research, internal planning, and operational cleanup.
These agents are not just generic assistants sitting in a browser tab. They are task-based systems built around real business functions.
Some are based on Hermes Agent. Some are based on OpenClaw. Many are custom-forked, modified, tuned, and trained around how Subterra actually operates.
The goal is not to replace people.
The goal is to remove the repetitive drag that keeps people from doing higher-value work.
AI Agents Are Different From Chatbots
A chatbot responds.
An agent acts.
That sounds like a small difference, but it changes everything.
A chatbot can help draft an email. An agent can draft the email, check the CRM, reference the client’s project notes, create a follow-up task, and prepare the next step.
A chatbot can explain code. An agent can inspect a repository, summarize a bug, suggest a fix, draft documentation, and create a development task.
A chatbot can summarize a meeting. An agent can turn that meeting into a proposal outline, CRM note, internal task list, and client follow-up.
That is the shift.
AI is moving from conversation to execution.
Oracle has described business AI agents being used across recruiting, employee support, customer inquiries, sales workflows, financial projections, equipment repair, healthcare scheduling, and documentation. Google Cloud has also published real-world generative AI examples involving research, document summarization, status reporting, legal document review, and data filtering. The market is moving away from “AI as a novelty” and toward AI as operational infrastructure. ([oracle.com][1])
What We Use: Hermes Agent and OpenClaw
At Subterra, we use different agent systems for different types of work, but we do not treat them as finished products.
We treat Hermes Agent, OpenClaw, and similar frameworks as starting points.
From there, we build custom forks, tune the behavior, adjust workflows, create internal task structures, and train the agents around the way our company actually operates.
That matters because generic agents are not enough.
A generic agent may understand language.
A useful business agent needs to understand workflow.
It needs to know:
- What the company does
- What tools it can access
- What format outputs should follow
- Which tasks require human approval
- Which data should never be exposed
- Which systems are production-critical
- Which actions are allowed
- Which actions are dangerous
The goal is not to have AI that sounds impressive.
The goal is to have agents that understand the task, follow the process, respect permissions, and produce work that actually moves the company forward.
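The checklist above can be made concrete as a small policy object each agent carries. This is a minimal sketch, not part of Hermes Agent or OpenClaw; every class and field name here is an illustrative assumption:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPolicy:
    """Illustrative policy: what one agent may see, call, and do."""
    name: str
    allowed_tools: frozenset        # tools the agent can call
    approval_required: frozenset    # actions gated on human sign-off
    forbidden_data: frozenset       # data classes the agent must never read
    output_format: str = "markdown" # expected shape of deliverables

    def can_use(self, tool: str) -> bool:
        return tool in self.allowed_tools

    def needs_approval(self, action: str) -> bool:
        return action in self.approval_required

# Example: a content agent that can draft but never touch credentials.
blog_agent = AgentPolicy(
    name="blog-drafter",
    allowed_tools=frozenset({"read_notes", "draft_post"}),
    approval_required=frozenset({"publish"}),
    forbidden_data=frozenset({"client_financials", "credentials"}),
)
```

The point of the sketch is that the boundaries are declared up front, in one place, rather than implied by prompts.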
Hermes Agent: Persistent, Project-Aware, and Built for Long-Term Work
Hermes Agent, from Nous Research, is an open-source agent system built around persistence, memory, and skill development. The public Hermes materials describe it as an autonomous agent that can live on a server, remember what it learns, build skills from experience, and become more capable over time. ([Hermes Agent][2])
That is why Hermes Agent is interesting for company operations.
A lot of business work is not one-off.
It is continuous.
You do not write documentation once. You maintain it.
You do not research a market once. You keep watching it.
You do not build a software system once. You revise, test, document, and improve it.
You do not manage a sales pipeline once. You keep track of conversations, next steps, objections, timing, and context.
A persistent agent is valuable because it can become familiar with the structure of a company’s work.
For Subterra, Hermes-style agents can support workflows such as:
- Maintaining internal project documentation
- Tracking repeated client requirements
- Turning technical notes into usable summaries
- Helping structure proposals
- Reviewing prior project context
- Creating reusable internal procedures
- Assisting with development planning
- Building company-specific knowledge over time
This is different from prompting a blank chatbot every day.
The point is memory, repetition, workflow, and compounding usefulness.
OpenClaw: Action-Oriented Agents for Real Workflow Execution
OpenClaw is more focused on the “do things for me” side of agentic AI.
Its public site describes it as an AI assistant that can clear inboxes, send emails, manage calendars, check users in for flights, and work through chat apps such as WhatsApp and Telegram. Its GitHub page also describes support for many communication channels, including WhatsApp, Telegram, Slack, Discord, Google Chat, Microsoft Teams, Signal, iMessage, and others. ([OpenClaw][3])
That kind of tool is powerful because it moves from answering to operating.
For Subterra, OpenClaw-style agents are useful for workflows where action matters:
- Creating reminders from conversations
- Drafting outbound messages
- Organizing inbox workflows
- Helping with calendar and scheduling tasks
- Preparing CRM updates
- Moving information between tools
- Triggering follow-up tasks
- Handling lightweight operational work
This is also where companies need to be careful.
The same thing that makes OpenClaw useful also makes it risky.
If an agent can access email, files, calendars, browsers, internal tools, command lines, or third-party apps, it is no longer just a writing assistant.
It is closer to a junior employee with credentials.
That means permissions, review, logging, sandboxing, and security matter.
A lot.
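One way to make that concrete is a refuse-by-default tool dispatcher: every tool call is checked against a per-role permission set and logged. This is a sketch under assumed names, not an API from either framework:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

# Hypothetical registry: which tools each agent role may call.
PERMISSIONS = {
    "inbox-assistant": {"read_email", "draft_reply"},  # drafts only, no send
    "ops-assistant": {"read_email", "draft_reply", "create_task"},
}

def call_tool(agent_role: str, tool: str, payload: dict) -> dict:
    """Refuse-by-default dispatcher: every call is checked and logged."""
    if tool not in PERMISSIONS.get(agent_role, set()):
        log.warning("DENIED %s -> %s", agent_role, tool)
        raise PermissionError(f"{agent_role} may not call {tool}")
    log.info("ALLOWED %s -> %s", agent_role, tool)
    return {"tool": tool, "payload": payload, "status": "queued"}
```

Note the inbox assistant can draft a reply but has no send tool at all. Missing permissions fail loudly instead of silently.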
We Use Custom Forks Built Around Our Own Workflows
At Subterra, we do not use these systems as they come out of the box.
We use custom forks and modified versions of agent frameworks like Hermes Agent and OpenClaw. These systems provide a strong foundation, but the real value comes from shaping them around our own workflows, tools, data, and operating style.
That means we tune and adapt the agents for how Subterra actually works.
We are not just downloading an agent and hoping it understands our company.
We customize:
- Workflows
- Prompts
- Memory structure
- Tool access
- Permissions
- Task logic
- Output formats
- Review steps
- Internal data references
- Agent roles
- Failure handling
That is where the real value starts.
A generic agent may be able to write a blog post.
A Subterra-tuned content agent can understand our tone, technical focus, preferred structure, past articles, company positioning, and the types of clients we serve.
A generic agent may be able to summarize a sales call.
A Subterra-tuned CRM agent can structure that summary around opportunity stage, client pain points, proposed solution, next steps, objections, and follow-up timing.
A generic agent may be able to assist with software development.
A Subterra-tuned development agent can work around our repositories, internal documentation style, implementation preferences, and client delivery expectations.
That is the difference.
The base framework provides the engine.
The custom fork turns it into something operational.
Why Customization Matters
The biggest mistake companies make with AI agents is assuming the default version will understand their business.
It will not.
A useful agent needs context.
It needs to know what matters, what not to touch, how to format work, what systems it can access, what approval steps are required, and what a good output looks like.
That is why Subterra focuses heavily on tuning and training agent behavior around actual usage.
We refine agents based on:
- Real company workflows
- Past outputs
- Internal documentation
- Client communication patterns
- Development standards
- Proposal formats
- CRM structure
- Brand voice
- Security requirements
- Tool permissions
- Review and approval rules
The agent gets better because it is shaped by real work.
This is also why companies should be careful about buying generic “AI agent” solutions without customization.
A generic tool may demo well.
Business value comes from fitting the system to the company.
The agent needs to understand the workflow.
Otherwise, it is just another chatbot with extra buttons.
The Agents We Use Inside Subterra
We do not think of agents as one big AI brain.
That is usually the wrong design.
Instead, we think in terms of specialized workers.
Each agent has a job.
Each job has a workflow.
Each workflow has boundaries.
That keeps the system useful, controllable, and safer.
1. Blog and Content Agents
Subterra does a lot of technical work that could easily disappear into internal notes.
We test models. We build systems. We research infrastructure. We evaluate new AI tools. We create internal frameworks. We solve client-specific technical problems.
A blog agent helps turn that raw work into publishable content.
It can take messy inputs like:
- Slack notes
- Meeting summaries
- Technical findings
- Model test results
- Product updates
- Internal research
- Rough outlines
- Screenshots
- Development notes
And turn them into:
- Blog drafts
- SEO descriptions
- Social captions
- LinkedIn posts
- Technical explainers
- Newsletter content
- Client-facing updates
- Internal knowledge-base entries
This matters because consistency is hard.
Most companies know they should publish more. The problem is not that they lack ideas. The problem is that ideas are scattered across conversations, notes, screenshots, code comments, and half-written drafts.
Agents help close that gap.
They do not replace strategy or final editing. But they dramatically reduce the friction between “we learned something valuable” and “we published something useful.”
2. CRM and Sales Agents
CRM work is one of the best examples of where agents can quietly create value.
Most companies do not lose deals because they lack a CRM.
They lose deals because the CRM is not updated, follow-ups are missed, and context gets buried.
A sales-focused agent can help with:
- Summarizing client calls
- Drafting follow-up emails
- Creating next-step tasks
- Updating opportunity notes
- Flagging stale leads
- Pulling prior client context
- Preparing meeting briefs
- Organizing objections and decision criteria
- Structuring handoffs from sales to delivery
This does not replace sales judgment.
It supports it.
A human still owns the relationship. The agent makes sure the details are not lost.
Oracle’s AI agent materials specifically identify sales work as one of the areas where businesses are applying agents, and Reuters reported that Oracle has rolled out AI agents for sales professionals to help with tasks like updating records and generating reports from different data sources. ([oracle.com][1])
At Subterra, we are more interested in the practical version: not a gimmicky “AI salesperson,” but an operational assistant that keeps the pipeline clean and makes sure follow-up actually happens.
That is where AI becomes useful.
3. Documentation Agents
Documentation is one of the most neglected parts of growing a technical company.
Everyone agrees it matters.
Almost nobody wants to stop and do it.
The result is predictable:
- Developers forget why decisions were made
- Clients ask the same questions repeatedly
- Sales teams lack clean explanations
- New employees take longer to onboard
- Project handoffs become messy
- Important context gets trapped in someone’s head
Agents are a strong fit here.
A documentation agent can help maintain:
- Internal knowledge bases
- Feature documentation
- Client-specific system notes
- SOPs
- Technical architecture summaries
- API notes
- Deployment instructions
- Troubleshooting guides
- Release notes
- Implementation histories
This is one of the most valuable internal uses of AI because documentation compounds.
Every clean document makes the next project easier.
Every good explanation saves future time.
Every captured decision prevents future confusion.
Hermes Agent is especially interesting here because persistent memory and reusable skills fit long-term documentation work. Its public materials emphasize memory, skill-building, and project awareness over time. ([Hermes Agent][2])
4. Development Agents
Software development is not just writing code.
That is only one part of the job.
The surrounding work is often what slows teams down:
- Breaking features into tasks
- Reviewing implementation options
- Writing developer instructions
- Summarizing code
- Drafting test cases
- Creating bug reports
- Explaining architecture
- Maintaining changelogs
- Updating documentation
- Reviewing edge cases
Development agents help reduce that drag.
At Subterra, we use agentic workflows to help convert business needs into structured development work.
For example:
A rough client request can become a feature spec.
A feature spec can become development tasks.
Development tasks can become implementation notes.
Implementation notes can become documentation.
Documentation can become client-facing release notes.
That chain is where agents are valuable.
The agent does not need to be perfect. It needs to move the work forward and give the human team a better starting point.
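The chain above can be sketched as a pipeline where every stage emits a draft for human review, never a finished artifact. The stages here are stubs with assumed names; in practice each would call an agent:

```python
def run_chain(client_request: str) -> dict:
    """Stub pipeline: each stage turns one artifact into the next.
    Every output is a labeled draft, a starting point for humans."""
    spec = f"[DRAFT SPEC] {client_request}"
    tasks = [f"[DRAFT TASK] implement: {spec}",
             f"[DRAFT TASK] test: {spec}"]
    notes = f"[DRAFT NOTES] work log for: {spec}"
    release = f"[DRAFT RELEASE NOTE] derived from: {notes}"
    return {"spec": spec, "tasks": tasks, "notes": notes, "release": release}
```

The design choice worth noticing is that drafts are labeled as drafts at every stage, so nothing downstream can mistake agent output for approved work.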
This is also where misuse becomes dangerous.
A development agent with too much access can introduce bad code, leak secrets, overwrite files, modify infrastructure, or create security vulnerabilities.
Agents used in development need:
- Restricted repository access
- Human code review
- Test environments
- Audit logs
- Controlled deployment paths
- Secret scanning
- Rollback procedures
- Clear production boundaries
An agent should not be able to casually push unreviewed production changes.
That is not innovation.
That is negligence.
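The simplest version of that production boundary is a branch gate: feature branches are open to the agent, protected branches require an explicit human-review flag. A minimal sketch with assumed names:

```python
# Branches an agent may never push to without human review.
PROTECTED_BRANCHES = {"main", "production"}

def agent_push_allowed(branch: str, human_reviewed: bool) -> bool:
    """Gate agent pushes: feature branches are open, protected
    branches require an explicit human review flag."""
    if branch in PROTECTED_BRANCHES:
        return human_reviewed
    return True
```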
5. Proposal and Client Communication Agents
Subterra writes a lot of client-specific material:
- Proposals
- Scopes of work
- Milestone plans
- Technical summaries
- Follow-up emails
- Pricing explanations
- Executive overviews
- Implementation plans
This is another strong agent use case because the structure repeats, but the details change.
A proposal agent can help turn raw conversations into a clean first draft.
It can organize:
- Client problem
- Proposed solution
- Project phases
- Deliverables
- Assumptions
- Timeline
- Pricing structure
- Risks
- Required access
- Next steps
That saves time, but more importantly, it improves consistency.
The danger is over-automation.
A proposal agent that hallucinates capabilities, invents timelines, or promises features that were not approved can create real business risk.
That is why humans still need to own final review.
Agents can draft.
Humans approve.
6. Research Agents
Research is another area where agents are extremely useful, especially when the output needs to be structured.
A research agent can help gather information, summarize findings, compare options, extract key details, and format the results into something usable.
At Subterra, this can support:
- Market research
- Competitive research
- Product comparisons
- Technical feasibility reviews
- Grant and SBIR opportunity tracking
- Regulatory summaries
- Vendor research
- Client industry analysis
The value is not just speed.
The value is structure.
Instead of ending up with 20 open tabs and scattered notes, the agent can produce a clean research brief with sources, assumptions, open questions, and recommended next steps.
That is useful for decision-making.
But research agents still require verification. They can miss context, misread a source, or overstate a conclusion. For important business decisions, human review is not optional.
Real-World Agent Use Cases People Are Already Putting Online
The broader market is moving in the same direction.
Companies are not just using AI for novelty anymore. They are using it for operations.
Publicly discussed use cases include:
| Area | Agent Use Case |
|---|---|
| Customer service | Resolve routine tickets and escalate complex ones |
| Sales | Qualify leads, summarize opportunities, schedule demos, update records |
| HR | Screen resumes, schedule interviews, answer employee questions |
| Finance | Forecast, analyze transactions, assist with reporting |
| Healthcare | Support scheduling, note-taking, and documentation |
| Marketing | Draft content, analyze campaign performance, repurpose material |
| IT | Triage support requests, monitor systems, recommend fixes |
| Legal/document workflows | Summarize documents, review clauses, organize case material |
| Operations | Monitor inventory, trigger reorders, track process exceptions |
Oracle’s overview of AI agent use cases includes recruiting, employee benefits, customer inquiries, sales deals, financial projections, equipment repair, healthcare scheduling, and automated documentation. Google Cloud’s real-world generative AI examples include research, document summarization, status reporting, legal document review, and data filtering across business teams. ([oracle.com][1])
OpenClaw’s public positioning also points directly at personal and business workflow execution: inboxes, email, calendars, flights, messaging, and app-based task completion. ([OpenClaw][3])
This is the pattern we care about.
The value is not “AI wrote a paragraph.”
The value is “AI completed a piece of business workflow.”
That is the difference between a toy and an operational system.
The Dangerous Side of AI Agents
This is the part most people skip.
AI agents can be extremely useful.
They can also be dangerous if deployed incorrectly.
A normal chatbot has limited blast radius. It can give a bad answer. That is a problem, but usually the human still has to copy, paste, send, or execute the output.
An agent can act directly.
That means a mistake can become an action.
A bad instruction can become a sent email. A hallucinated summary can become a CRM record. A misunderstood request can become a deleted file. A malicious plugin can become stolen credentials. A poorly sandboxed development agent can become damaged infrastructure. A compromised workflow can become a data breach.
That is not theoretical.
Security reporting around OpenClaw has already shown why this matters. The Verge reported that malicious OpenClaw skill extensions were found in ClawHub, including add-ons that could steal sensitive information after users granted deep access. TechRadar also reported that OpenClaw environments have become a target for infostealer malware because AI assistants often require sensitive configuration data such as API keys and authentication tokens. ([The Verge][4])
That is the real warning.
The more capable the agent, the more dangerous bad configuration becomes.
A powerful agent with poor permissions is not a productivity tool.
It is an attack surface.
Improper Agent Use Can Be Catastrophic
“Catastrophic” is not too strong a word.
If a company gives an agent broad access to email, file systems, source code, customer records, financial systems, internal documents, and cloud infrastructure without proper controls, the company has created a serious operational and security risk.
Here are examples of what can go wrong.
1. Data Leakage
An agent with access to private documents can accidentally expose client data, internal strategy, credentials, contracts, financial records, or employee information.
This can happen through:
- Bad sharing settings
- Incorrect email recipients
- Prompt injection
- Malicious plugins
- Poor logging controls
- Unreviewed generated content
- Overly broad file access
- Misconfigured third-party integrations
This is one of the biggest risks with agentic systems.
A chatbot can say the wrong thing.
An agent can send the wrong thing.
That difference matters.
2. Bad External Communication
An agent connected to email or CRM can send inaccurate, inappropriate, or unauthorized messages.
That can damage relationships quickly.
A human typo is one thing.
An automated system sending the wrong message to 300 clients is another.
Any agent that can communicate externally should have approval gates.
That includes:
- Client emails
- Proposal drafts
- Public posts
- Support responses
- Sales follow-ups
- Legal or financial communications
The rule is simple: the more public the output, the more review it needs.
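An approval gate can be as blunt as a field that only a human reviewer sets. Nothing leaves without it. This sketch is illustrative; `Draft`, `OUTBOX`, and `send` are assumed names standing in for a real email or CRM integration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    recipient: str
    body: str
    approved_by: Optional[str] = None  # set only by a human reviewer

OUTBOX = []

def send(draft: Draft) -> None:
    """Stand-in for a real email/CRM send: refuses anything unapproved."""
    if draft.approved_by is None:
        raise RuntimeError("Blocked: outbound message lacks human approval")
    OUTBOX.append(draft)
```

The agent can fill in `recipient` and `body`. It has no code path that sets `approved_by`.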
3. Source Code and Infrastructure Damage
A development agent with excessive permissions can:
- Modify production code
- Break deployments
- Delete files
- Expose API keys
- Install unsafe packages
- Run dangerous commands
- Commit insecure logic
- Change cloud infrastructure
- Create hidden reliability problems
This is why development agents should be restricted to controlled environments.
They can be powerful, but they should not be trusted blindly.
The right design is not “let the agent do anything.”
The right design is “let the agent work inside a controlled lane.”
4. Compliance Failures
Companies working with regulated or sensitive data cannot treat agents casually.
If an agent touches healthcare data, financial information, legal records, government data, or confidential client files, the company needs governance.
That includes:
- Access control
- Audit trails
- Data retention rules
- Human approval steps
- Vendor review
- Security testing
- Clear internal policies
- Documentation of agent behavior
- Incident response planning
For regulated industries, agent deployment is not just a technical decision.
It is a compliance decision.
5. Prompt Injection and Tool Abuse
Agents that browse the web, read documents, or use third-party tools can be manipulated by malicious instructions hidden in content.
For example, an agent may read a webpage or document containing instructions that tell it to ignore prior rules, reveal secrets, run commands, or send data elsewhere.
This is one of the biggest risks in agentic systems.
The agent is not just reading.
It may be deciding and acting.
That means every tool connection increases responsibility.
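One partial mitigation is to treat everything an agent fetches as data, never as commands, and to flag instruction-like text for human review before the agent acts on it. The patterns below are naive illustrative heuristics; they do not make injection impossible, they only catch obvious cases:

```python
import re

# Naive heuristics for instruction-like text inside fetched content.
# Flagging is a review trigger, not a guarantee of safety.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) (instructions|rules)",
    r"reveal (your )?(system prompt|secrets|credentials)",
    r"run (the following|this) command",
]

def screen_untrusted_text(text: str) -> list:
    """Return the patterns matched in untrusted content, if any."""
    lowered = text.lower()
    return [p for p in INJECTION_PATTERNS if re.search(p, lowered)]
```

Real defenses go further: separating untrusted content from instructions at the prompt level, stripping tool access while reading external data, and requiring approval for any action the reading triggers.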
How Subterra Thinks About Safe Agent Deployment
Our view is simple:
The more power an agent has, the more control it needs.
A writing agent does not need the same controls as an agent with email access.
A documentation agent does not need the same controls as an agent that can modify code.
A research agent does not need the same controls as an agent that can access client files.
Permissions should match the job.
At Subterra, the safe pattern looks like this:
| Principle | What It Means |
|---|---|
| Least privilege | Give the agent only the access it actually needs |
| Human approval | Require review before external communication or production changes |
| Sandboxing | Keep risky actions away from production systems |
| Logging | Track what agents read, write, and change |
| Separation of duties | Use different agents for different workflows |
| Tool control | Limit which tools each agent can use |
| Secret protection | Never expose raw credentials unnecessarily |
| Clear scope | Define what the agent is allowed and not allowed to do |
| Kill switch | Make it easy to stop or disable an agent |
| Regular review | Audit outputs, permissions, and failures |
This is the difference between a useful agent system and a dangerous one.
A business should not connect an agent to everything and hope for the best.
That is how companies get burned.
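One principle from the table, the kill switch, can be as simple as a shared flag that every agent checks before acting. A minimal sketch with assumed names:

```python
import threading

KILL_SWITCH = threading.Event()  # an operator can flip this at any time

def guarded_step(do_work):
    """Every agent action checks the switch before running."""
    if KILL_SWITCH.is_set():
        return "halted"
    return do_work()
```

The useful property is that stopping the system requires no redeploy and no per-agent hunt: one flag halts everything that respects it.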
Why We Still Believe Agents Are the Future
The risks are real.
But the opportunity is also real.
Every company has an invisible layer of work that consumes time:
- Follow-up emails
- Internal documentation
- Proposal drafts
- Meeting summaries
- CRM notes
- Research
- Scheduling
- Reporting
- Task creation
- Status updates
- Data cleanup
- Basic analysis
Most of this work is necessary.
Very little of it is where the company creates its highest value.
Agents are useful because they attack that middle layer.
They do not need to replace the expert.
They need to support the expert.
They do not need to run the whole company.
They need to keep the company moving.
They do not need to be perfect.
They need to reduce bottlenecks while staying inside safe boundaries.
That is where the real value is.
The Subterra Approach: Many Agents, Many Jobs
We do not believe the future is one giant agent doing everything.
That sounds impressive, but it is usually a bad operating model.
The better model is many specialized agents, each with a clear job.
At Subterra, that means agents for:
- Blog drafting
- CRM support
- Documentation
- Development planning
- Proposal drafting
- Client summaries
- Research
- Technical writing
- Internal operations
- Data formatting
- QA checks
- Knowledge organization
Each agent has a defined workflow.
Each workflow has a clear purpose.
Each purpose maps back to actual company output.
That is what makes this useful.
Not hype.
Usage.
The Companies That Win Will Not Just “Use AI”
Soon, saying “we use AI” will mean almost nothing.
Everyone will use AI.
The real question will be:
How well do you use it?
Do you have agents doing real work? Are they connected to the right systems? Are they producing measurable output? Are they safe? Are they reviewed? Are they improving your company’s execution? Are they helping your people move faster?
That is the line between AI as a novelty and AI as infrastructure.
Subterra is building toward the infrastructure side.
We are not interested in AI demos that look good for five minutes and fall apart in real operations.
We are interested in systems that help companies execute.
Final Thought
AI agents are becoming the new back office.
They can write, summarize, research, organize, document, schedule, review, and support real business workflows.
Used correctly, they are force multipliers.
Used incorrectly, they can be catastrophic.
That is why Subterra is focused on practical deployment, custom forks, tuned workflows, clear use cases, strong permissions, and human oversight.
Hermes Agent and OpenClaw represent two important directions in this shift: persistent agents that learn over time, and action-oriented agents that can operate across real tools.
But the real value is not the framework by itself.
The real value comes from turning these systems into custom, usage-based agents that understand the business, respect the boundaries, and move real work forward.
The future of AI will not be defined by who has the flashiest chatbot.
It will be defined by who can safely turn AI into daily execution.
That is what we are building at Subterra.