Imagine this.

A prepackaged AI model with a different profile configured for every spot on an organizational chart. With KRAs built in (customizable by leadership), company policies and benefits, work-culture routines, attendance markers, work loggers, messenger services, and so on.

A new marketing director joining the team would be onboarded by the marketing-director AI model. No access to the previous guy’s emails, but exposure to his remaining tasks. An interactive onboarding guide. An interactive status dashboard. An always-updated wiki of the entire company’s org structure, each person’s achievements, profile pages (strengths, reviews by other people). A list of expected KRAs. A list of company-wide unfinished tasks that you can pick up in your free time (with an incentive bonus for doing so).

Once the marketing director settles in, the AI helps him decide the best course for marketing strategy, the duration and topic of campaigns, etc. Based on past decisions and predefined goals, plus information scattered across Slack threads, emails, and meeting notes, the AI would list every possibility, restriction, and task relevant to the marketing director, with a time frame. The human decides what to do and how to implement it. There’s a model that does pretty much the same thing for every role. Consolidates video review notes for the editor. Suggests colour palettes and does some light coding for the 3D artist. Has a checklist of restrictions ready for a writer. Corrects tenses sometimes. Has a bug tracker for the programmer. Helps with syntax and missing semicolons.

Has a “monitoring” system for the CEO. Helps with leadership decisions.

Some opinions about this situation:

  • A majority of people will get fed up with their jobs and quit. Or get laid off. Wishful thinking: CEOs of traditional companies will become huge social targets and get labelled as evil.
  • There will be a huge wave of a new kind of startup to replace the older, evil-CEO companies: lean-mean startups. No two people in an organization share the same designation. “I work as the HR department at (company).” Most of the “first movers” who jump on this opportunity will end up dying. They’re working it the old way - doing the human-work with AI, instead of doing the AI-work with humans. As in, “Ok AI, give me 3 good endings for this story, I’ll pick one” instead of “Hey, is there a character arc I’m missing out on tying up for this ending?”
  • Once the trash-company wave settles, the few left standing will be organizations with creativity as a significant ingredient in their product and/or process. But I think this is quite a ways away.
  • AI will become like Google Docs, simple enough to be used by anyone. It will be personalized - not to the person, but to the role. The next person who occupies an artist position will inherit the AI model with all its instructions, training, etc. There’s no requirement for prompt-engineers, because human-engineers are running the place.
  • I wonder what happens in a role that churns relatively fast. I wonder what the dynamics between a “content-person” AI model and a “media-person” AI model will be. I wonder how that affects an organization. I wonder what happens when organizational AIs interface with each other (acquisitions, buyouts, etc). I wonder what privatization of government services will look like.

Today, knowledge work represents nearly half of America’s GDP. Most of it still operates at human scale: teams of dozens, workflows paced by meetings and email, organizations that buckle past a few hundred people. We’ve built Florences with stone and wood.

When AI agents come online at scale, we’ll be building Tokyos. Organizations that span thousands of agents and humans. Workflows that run continuously, across time zones, without waiting for anyone to wake up. Decisions synthesized with just the right amount of human in the loop.