Module 03  /  Leadership Partnership  /  Service

Strategic AI Advisory and Executive Training.

Creative Human sits with leadership teams adopting AI as core infrastructure. Two formats — recurring partnership for AI-mandated leadership, and a structured cohort program for C-suite and boards.

01 / Advisory

Recurring partnership for intelligence continuity.

We sit with your leadership team to figure out where AI actually moves the needle for your specific organisation, what to build versus buy, how to sequence the next six to eighteen months, and how to govern what gets deployed. Not a "here is a sixty-page report" engagement. More "we meet weekly, we look at the real problems, we make and defend decisions together."

Advisory is for the moment when the mandate has landed — "we need to be serious about AI" — but the internal point of view has not yet crystallised. Every week is a working session on an actual decision the client is facing. Every month, the working documents the client owns have advanced. The engagement ends when the client's internal team can operate the point of view on its own — and never before.

/01   How it works

Cadence, duration, deliverables

  • Cadence. Weekly or bi-weekly working sessions, in person in Berlin or remote, plus an async written channel between sessions.
  • Duration. Typically 3 to 6 months, renewable.
  • Deliverables. Decisions, working documents, architecture decision records, governance playbooks. Never slide decks; never a "here is the report" moment.

/02   Who it's for

CTOs, CIOs, Chief Data Officers, Heads of Engineering and Platform

CTOs, CIOs, Chief Data Officers, Heads of Engineering, and Heads of Platform at organisations that have a budget and a mandate for AI but do not yet have a clear internal point of view on what to do with either. If "my board wants us to have an AI strategy and I don't know where to start" is a sentence you've said out loud this quarter, Advisory is the format built for you.

/03   What you walk away with

A point of view that belongs to your team

  • A living AI roadmap you own and update
  • Architecture decision records for every significant choice you made along the way
  • A build-versus-buy matrix for the specific AI functions in scope
  • A governance playbook with review cadences, risk surfaces, and sign-off rules
  • An ongoing async channel while the engagement is active
  • A point of view that belongs to your team, not to us

/04   Initial partnerships open

We're opening our first Advisory engagements now.

If you're standing up an AI function from scratch, and you want to be one of the first leadership teams we work with — early partnership pricing, direct access to the founder, and a shared interest in making the first engagements a proof point we can both stand behind — start a conversation. Initial cohorts are small on purpose.

/05   Start a conversation

Three fields and a question.

02 / Executive Track

Six sessions, two blocks, one leadership team.

A six-session programme that brings a leadership cohort from "we keep hearing about AI" to "we can evaluate, govern, and lead AI initiatives with confidence." Delivered as two blocks of three half-day workshops spaced approximately one month apart, with a virtual check-in in between.

Every session is hands-on, small-group (6–8 executives), and grounded in the cohort's real decisions — not hypothetical case studies, not slide-heavy lectures. Every session is led by two senior AI practitioners who build with AI daily.

Why start with executives

Every failed AI initiative we have seen shares a common root cause: leadership that could not distinguish realistic AI capabilities from vendor promises. When executives lack genuine AI literacy, three things happen.

  1. Bad investment decisions.

    Money flows to projects that sound impressive in a pitch deck but collapse during implementation. Without understanding what AI actually does well — and what it does not — leaders approve projects based on marketing rather than feasibility.

  2. Unrealistic expectations cascade downward.

    Teams get mandated to deliver outcomes that are not technically achievable, or achievable only at ten times the budget. This breeds cynicism. By the second failed project, the best people stop volunteering for AI work.

  3. Governance becomes either absent or paralysing.

    Leaders who do not understand AI either ignore governance entirely (creating compliance and reputational risk) or overcorrect with rigid approval processes that kill momentum.

The fix is to give executives enough genuine understanding to make informed decisions, set achievable expectations, and govern effectively. Not to turn them into engineers. To turn them into leaders who can tell the difference between a realistic plan and a hallucinated one.

Block 1  /  AI Foundations  /  three half-days, Tue–Thu

Build the knowledge base.

Day 1 · The AI tool landscape

Tuesday · ½ day

Hands-on comparison across the major AI platforms. Participants use the same tools on the same tasks, building direct experience of how platforms differ in capability, interaction patterns, and enterprise readiness. Covers the current generation of frontier models and the major enterprise ecosystems (OpenAI, Google, Anthropic, Microsoft), plus no-code and low-code options. By session end, participants can evaluate AI tools from experience rather than marketing.

Deliverables: AI tool comparison matrix · tool selection framework · platform quick-start guides

Day 2 · AI literacy & strategic decision-making

Wednesday · ½ day

The conceptual understanding executives need to evaluate AI opportunities. How LLMs actually work at a decision-making level. The AI Capability Assessment Framework — a practical tool for evaluating any AI proposal, with red flags practised against real vendor material. Participants assess their own opportunities and receive direct feedback. Establishes shared vocabulary for leadership meetings and investment discussions.

Deliverables: capability assessment framework · investment evaluation framework · red-flag checklist

Day 3 · Governance, agents & prompting

Thursday · ½ day

Morning: governance fundamentals — what data can and cannot flow into AI tools, residency and sovereignty, customer data boundaries. Small groups draft a first-iteration governance framework designed to be tested in real conditions and refined in Block 2. Afternoon: advanced prompting techniques (chain-of-thought, structured output, role-based, constraint-based), and each participant builds a tested AI workflow they can use immediately against a recurring task from their own work.

Deliverables: governance framework v1 · personal AI workflow · starter prompt library · agent selection guide

Inter-block check-in  /  ~2 weeks after Block 1  /  virtual  /  60–90 min

Where learning takes root or fades.

Included in the programme price. The gap between blocks is where practical application happens — or doesn't. The check-in surfaces early friction (tool access, governance questions, prompts not producing expected results), provides targeted Q&A from the first two weeks of field application, pulse-checks the governance draft, and previews Block 2. When Block 2 begins, participants arrive having already resolved early friction, so Day 4 goes straight to deeper strategic patterns.

Block 2  /  Strategy & Application  /  three half-days, Tue–Thu  /  ~1 month after Block 1

Deepen and operationalise.

By the time Block 2 begins, participants have had approximately four weeks to apply what they learned in Block 1, plus the mid-point check-in. They have tested the frameworks, built working prompts, encountered real challenges, and formed opinions from genuine experience rather than theory. Block 2 is designed around that reality.

Day 4 · Q&A and lessons learned

Tuesday · ½ day

This session belongs to the participants. Open forum on what worked and what did not after four weeks of application. Troubleshooting clinic with root-cause analysis of the top challenges — if an agent workflow is not producing good results, it gets fixed live. If a governance decision is causing bottlenecks, it gets redesigned. Governance framework revised to v2 based on real experience. Timelines recalibrated on grounded data rather than initial enthusiasm.

Deliverables: governance framework v2 · troubleshooting playbook · recalibrated initiative timeline

Day 5 · Agentic development, live

Wednesday · 09:00–14:00

— The marquee session —

An extended half-day dedicated to observing a complete software-development workflow executed by AI agents, running on Creative Human's own internal projects. Not a slideshow. Not a scripted demo. Real work, in real time, on actual codebases. Six phases covered in full: Planning, Implementation, Testing, Code Review, Security Review, Deployment. The session follows a 2:1 demo-to-Q&A ratio across roughly five hours. By the end, executives understand agentic development well enough to evaluate vendor claims with informed scepticism and identify where agentic workflows could fit within their own operations.

Deliverables: agentic development reference guide · capabilities matrix · human oversight model template

Day 6 · Adoption & customer strategy

Thursday · ½ day

Two critical questions: how do you bring the rest of the organisation along, and how do you position AI capability to win and retain customers? Part 1 — change management essentials: communication approaches, AI champions, phased rollout, resistance patterns. Part 2 — customer-facing AI strategy: where AI creates genuine customer value, defensible value propositions, pricing and packaging. Capstone exercise produces a unified strategy direction with 30-day and 90-day milestones.

Deliverables: adoption playbook · customer AI strategy framework · value propositions · champions programme design

Format

  • Location. In-person at your site or in Berlin. The check-in is virtual.
  • Schedule. Half-day sessions run 09:00–13:00. Day 5 extends to 14:00 for the full live demonstration.
  • Group size. 6–8 participants per session. Two trainers per workshop.
  • Data. All exercises run within your approved infrastructure. No client data leaves it.

What makes this programme different

  • Minimal, purposeful slides. The vast majority of each session is spent doing, not watching a presenter click through decks.
  • Real work, not simulations. Participants bring their actual decisions, their actual documents, their actual challenges.
  • Small groups, real attention. Everyone gets hands-on time and direct feedback. Nobody hides in the back row.
  • Tangible outputs. Every session produces frameworks, matrices, governance drafts, working agent configurations — tools for immediate use.
  • Expert delivery. Sessions are led by practitioners who build with AI daily, not by corporate trainers reading from a curriculum.
  • Two trainers per workshop. Every participant is adequately supported, especially during hands-on exercises.

What the cohort walks out with

A complete working kit across six categories: strategy and investment frameworks, agentic development materials, governance and adoption artefacts, personal capabilities, customer-facing strategy, and a reference library. Delivered as an editable digital package, and the property of the client organisation after delivery.


Strategy & investment

  • AI Capability Assessment Framework
  • AI Investment Evaluation Framework
  • AI Value Chain Map
  • AI Success Metrics Template
  • AI Opportunity Matrix

Agentic development

  • Agentic Development Reference Guide
  • Workflow Capabilities Matrix
  • Human Oversight Model Template
  • Agentic Capabilities Assessment

Governance & adoption

  • AI Governance Framework v2
  • Data Classification Guide
  • AI Adoption Playbook
  • AI Champions Programme Design
  • Resistance Response Guide
  • Adoption Metrics Framework

Personal & customer

  • Individual AI workflows
  • Personal prompt libraries
  • Customer AI Strategy Framework
  • Customer-Facing Value Propositions
  • Customer AI Readiness Assessment
  • Six session-specific cheat sheets

Who it's for

C-suite, VPs, senior functional leaders, and boards at organisations where the AI adoption decisions are real and consequential — and where the internal leadership team needs to be aligned on vocabulary, governance, and strategy before committing to a rollout. Particularly valuable for leadership teams that have encountered the three failure modes described above and want to build the foundation properly.

Request a programme

Tell us about your cohort.

03  /  Which one do I need?

Pick one.

Advisory is for you if

  • You are standing up an AI function from scratch and need weekly partnership to make consequential decisions.
  • The mandate has landed but your internal point of view has not yet crystallised.
  • You want decisions, working documents, and architecture records — not slide decks.

Start a conversation →

Training is for you if

  • You have a leadership team that needs to govern AI across a portfolio.
  • You want shared vocabulary and shared judgment in weeks rather than months.
  • You want every executive to walk out with a concrete set of decisions and the arguments already drafted.

Request a programme →

If you are not sure, start a conversation and we will help you pick.


"AI is not a layer on top of your business. It is the new foundation of your architectural ledger."

Creative Human · Berlin


04  /  From Berlin

Advisory sessions and Training cohorts happen in Berlin or remotely.

We travel for in-person sessions when the engagement benefits from it. The working register is European — precise, considered, anti-hype, production-grade.