Where the AI-as-Employee Model Breaks
Agents don't onboard like humans, and that's a good thing for niche industries
Last week, Workday (a leading HR platform) introduced an “agent system of record” to help companies manage their digital workforce. A quote from their co-founder Aneel Bhusri neatly sums up how AI companies are now positioning themselves:
“The workforce of the future will include both humans and AI agents, and businesses that don’t learn to manage this incredibly complex reality will quickly fall behind.”
Workday realizes what is now obvious — real AI adoption at enterprises will happen via grooves laid by humans. Smart companies won’t sell AI software, they will help companies “hire,” “onboard,” “promote,” and “fire” digital workers. AI systems are being wheeled into organizations in Trojan horses shaped like employees.
As someone actively building and selling AI agents to biotech companies, I can assure you this reframe is immensely useful. And for what it’s worth, it isn’t just marketing; the tech has advanced to the point where agents are behaving as humans would — outcome-oriented, with the agency to get there.
But there is an obvious drawback to thinking of AI in human terms — AI agents behave in ways that humans don’t (and can’t). So it’s worth asking, where does the analogy of AI agents to humans break? And what opportunities does that open to do things differently?
One critical difference is onboarding. Unlike humans, AI agents will need to be “onboarded” every time they go to perform a task, which creates a big challenge as well as a huge advantage.
The Best Agents Need to Be Onboarded (Just Like People)
Common wisdom is that employees only start delivering real productivity 6 months into their job — onboarding is a tricky problem for humans, and it will be for agents as well.
In niche industries, the onboarding challenge is even harder. As an illustrative example, walk through a biotech company and you'll find former scientists in unexpected roles — running business development, leading sales teams, even managing recruitment. Why? Because teaching complex scientific concepts to outsiders is so time-intensive that it's often easier to teach business skills to scientists than the reverse.
The need for domain and company knowledge also explains why niche industries struggle to adopt AI, even for workflows where AI is clearly a game changer (BD, sales, recruiting, content creation, etc).
We hear this directly from customers. For example, existing AI tools work great for recruiting if the job is just to parse LinkedIn profiles, but they don’t work if you need to hire a scientist and compare their publications against your company’s research. Clay and other BD tools work out of the box if your prospects are easy to identify, but not so much for the deeper analyses and opportunity identification BD people in biotech are doing. For those workflows, you need agents that are more deeply onboarded to the problem.
Onboarding in AI is an unsolved problem
The need for well-onboarded agents is obvious. The right way to build them is not.
For a while, RAG (retrieval augmented generation) was the gold standard for custom agents. But if you’ve used company support chatbots, you’ve experienced how RAG doesn’t feel as dynamic as a human. In fact, Hebbia did a study claiming RAG failed for 84% of real-world user queries in a chat-based interaction.
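For readers who haven’t looked under the hood, the core of a RAG system is small: score your knowledge-base chunks against the query, stuff the best ones into a prompt, and hand that to a model. A minimal sketch, using token overlap as a stand-in for the embedding similarity a real system would use (the knowledge-base entries here are invented for illustration):

```python
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def score(query, chunk):
    # Token overlap: a toy stand-in for embedding similarity.
    q, c = Counter(tokenize(query)), Counter(tokenize(chunk))
    return sum((q & c).values())

def retrieve(query, chunks, k=2):
    # Return the k highest-scoring chunks for the query.
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

def build_prompt(query, chunks):
    context = "\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

knowledge_base = [
    "Our lead program targets KRAS G12C in solid tumors.",
    "The cafeteria opens at 8am.",
    "Phase 1 enrollment for the KRAS program completed in Q3.",
]

prompt = build_prompt("What is the status of the KRAS program?", knowledge_base)
print(prompt)
```

The brittleness shows even in this sketch: retrieval happens once, up front, so if the top-k chunks miss the answer, the model never gets a second chance to look — which is exactly what reasoning agents change.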
A big step forward will come when we get better foundational models for niche areas, like BioNTech and InstaDeep’s Laila, a “lab assistant” built on top of Llama 3.1. OpenAI released GPT-4b (bio) a few weeks ago. But having better foundational models for bio is like having a better talent pool to hire agents from — it doesn’t change the fact that you need to onboard agents to your company’s specific knowledge, especially since a lot of that information is private and inaccessible to the foundational models.
The best bet for how proper onboarding will work is reasoning agents like OpenAI’s Deep Research. If you haven’t used it or seen a demo, you should. Reasoning agents behave like a human would in an onboarding process — gather information, think through it, decide what new information to gather, and so on.
Here is where the analogy to humans starts to break, though, because Deep Research onboards itself for each new query in a matter of minutes. What it learns in a given search doesn’t carry over to the next one, which (perhaps counterintuitively) presents a huge and obvious advantage.
Unlike Humans, AI Agents Will Be Re-Onboarded Constantly
A huge problem for companies is that employees not only need to be onboarded, they need to be constantly re-onboarded as the technology advances.
The re-onboarding problem can be summarized as “good-enough is the enemy of perfect.” Why would a BD or salesperson take the time to properly retrain when their existing knowledge — while outdated — is good enough for them to do their job? Companies collectively spend hundreds of billions of dollars trying to solve that problem.

The good news is that reasoning agents like Deep Research do not have the same issue. They get onboarded on the fly for each new task, along with context-specific information. It’s like being able to clone your best account rep and focus each clone on a different customer.
Onboarding agents will also use a very different set of tools under the hood — they will not just read your company Notion. In fact, the knowledge bases agents use will likely be unusable by humans.
How will Notion for agents work?
It’s amazing how ubiquitous and valuable knowledge management systems like Notion or Confluence are despite doing a horrible job of solving the problem they are meant to solve. This isn’t a dig at those companies — building knowledge management for humans is just a wicked problem.
An analogy to tech debt is useful here. Knowledge, like code, is cheap to create but expensive to refactor. As a result, knowledge and code bases tend to get more brittle and harder to use over time.
Knowledge management for agents will be vastly superior to knowledge management for humans for the same reason tech debt will kill fewer companies in the future. AI not only makes it trivial to rewrite a code or knowledge base, it also doesn’t mind relearning how to use it afterwards.
How will agents prune their own knowledge base as they perform new tasks? We can see interesting glimpses via companies like Shelf, which rebuild a company’s FAQs as humans interact with them via agents. Experimenting with how to solve that problem for biotech is occupying a lot of our time at the moment.
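One way to picture what agent-side pruning could look like: every knowledge-base entry carries usage stats, and a maintenance pass merges near-duplicates and drops entries nobody retrieves. This is a hypothetical sketch, not how Shelf works; a normalized-string comparison stands in for the LLM judgment a real system would use to spot duplicates:

```python
# Toy agent-maintained FAQ: each entry tracks how often it was retrieved.
faq = [
    {"q": "What does the lead program target?", "hits": 40},
    {"q": "what does the lead program target",  "hits": 3},   # near-duplicate
    {"q": "Where is the old office located?",   "hits": 0},   # stale, unused
]

def normalize(q):
    # Stand-in for an LLM deciding two questions are the same.
    return q.lower().strip(" ?")

def prune(entries):
    # Keep the most-retrieved entry per normalized question;
    # drop entries that are never retrieved at all.
    best = {}
    for e in entries:
        key = normalize(e["q"])
        if key not in best or e["hits"] > best[key]["hits"]:
            best[key] = e
    return [e for e in best.values() if e["hits"] > 0]

cleaned = prune(faq)
print([e["q"] for e in cleaned])
```

A human-run wiki accumulates both kinds of cruft indefinitely because no one wants to delete a colleague’s page; an agent running this pass after every task has no such qualms.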
The talent war for agents begins now
This all sounds promising in theory, but where will the cost of training well-onboarded agents pay off first? Here are a few examples we have been looking at in biotech that can be generalized to any specialized industry:
Recruiting: onboard the agents to the company’s unique tech, then have them go find and evaluate candidates that can expand/strengthen that focus.
Market research: onboard the agents to a company’s unique tech and position in the market, then have them continuously search and distill insights as the market evolves.
BD/Sales: onboard the agents to the company’s unique tech and opportunities for expansion, then have them search and prioritize partner opportunities.
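All three workflows above share one shape: a company-specific onboarding step, then a fresh, fully-briefed run per task. A hypothetical sketch of that shared pattern (the company context and tasks are invented for illustration):

```python
# Invented example context; in practice this would be distilled from the
# company's internal documents by an onboarding pass.
COMPANY_CONTEXT = {
    "tech": "Antibody-drug conjugates for solid tumors",
    "stage": "Phase 1",
}

def onboard(task, context=COMPANY_CONTEXT):
    # Build a fresh, fully-onboarded prompt for each task: the agent
    # equivalent of re-running day-one onboarding in seconds.
    briefing = "\n".join(f"- {k}: {v}" for k, v in context.items())
    return f"Company briefing:\n{briefing}\n\nTask: {task}"

tasks = [
    "Evaluate this candidate's publications against our platform.",  # recruiting
    "Summarize this week's market news relevant to our stage.",      # market research
    "Rank these partnering leads by fit with our technology.",       # BD/sales
]

for t in tasks:
    print(onboard(t), end="\n\n")
```

Because `onboard` is cheap to re-run, updating the agent's knowledge means updating `COMPANY_CONTEXT` once, with no retraining of the "employees" who consume it.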
The pattern is clear: while generic AI can deliver value out of the box, the real transformation will come from agents that deeply understand your company's unique technology and context. Just as the best human employees aren't just functionally competent but deeply aligned with your company's mission and technology, the best AI agents will need to be "company natives" – even if they achieve that status in minutes rather than months.