What Is Your Company's AI Risk Appetite?
- Doug Shannon

- Sep 13
- 1 min read
▫️Not every problem needs an LLM.
▫️Not every solution demands full autonomy.
▫️Yet, every enterprise adopting AI must first answer one question:
How much control are we prepared to give up, and under what conditions?
Too often, companies chase use cases before defining accountability. That's backwards.
Use cases only matter once governance is in place.
Autonomy is not a milestone. It’s a dial, and risk appetite determines how far that dial turns.
▫️Human-led
AI supports, but humans make all decisions and initiate all actions.
▫️Human-assisted
AI provides inputs or optimizations, yet execution remains human-owned.
▫️Human-in-the-loop
AI executes, but defers at thresholds for human context and confirmation. Responsibility stays with the human, even as automation expands.
▫️Human-on-the-loop
AI acts with minimal oversight, while humans monitor and override only when necessary.
▫️Autonomous
AI acts independently, within strict bounds and predefined policy. Often orchestrated by other agents and locked behind RBAC (role-based access control) and system-level safeguards.
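The five-position dial above can be sketched as a simple policy gate. This is a minimal illustration, not an implementation from the post: the enum names, the `confidence` score, and the `threshold` parameter are assumptions standing in for whatever signals a real governance layer would use.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """The autonomy dial, from most to least human control."""
    HUMAN_LED = 0
    HUMAN_ASSISTED = 1
    HUMAN_IN_THE_LOOP = 2
    HUMAN_ON_THE_LOOP = 3
    AUTONOMOUS = 4

def requires_human(level: AutonomyLevel, confidence: float,
                   threshold: float = 0.9) -> bool:
    """Decide whether an AI-proposed action needs human sign-off.

    `confidence` and `threshold` are hypothetical stand-ins for
    whatever risk signals an enterprise policy would actually check.
    """
    if level <= AutonomyLevel.HUMAN_ASSISTED:
        # Humans own execution entirely at these levels.
        return True
    if level == AutonomyLevel.HUMAN_IN_THE_LOOP:
        # AI executes but defers at thresholds for confirmation.
        return confidence < threshold
    # On-the-loop and autonomous: AI acts; humans monitor and override.
    return False
```

The point of the sketch: risk appetite is encoded once, in the gate, rather than re-decided ad hoc inside every use case.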
This is not about automation. It's about accountability.