CDA-AI

Safety Anchor

Human First, Always.

“I will maintain by all the means in my power, the honour and the noble traditions of the medical profession… I make these promises solemnly, freely and upon my honour.”

— Declaration of Geneva,
World Medical Association

Phase 1: Human First, Always

This is where we start.

Not technical. Not hypothetical.
But the core reframe required to lead AI
without losing command.

A new class of intelligent structure.
Not to replace humans.
To empower them.

Foundational Principle:
AI is Not Your Friend. Not Your Enemy.
It’s Force.

You are not here to love AI. You are not here to fear it.
You are here to command it.

  • AI is force, like electricity:

    • Touch it wrong, it shocks.

    • Use it right, it powers worlds.

AI Hallucination

Not a Bug. It’s the Default.

AI doesn’t “know”.

It predicts.

It doesn’t think like a human.
It predicts the next likely word based on data patterns.
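To make “prediction, not knowledge” concrete, here is a minimal sketch: a toy bigram counter (purely illustrative, with made-up example data; real models are vastly larger, but the principle is the same) that chooses the next word from observed patterns alone.

```python
# Toy sketch of "prediction, not knowledge" (hypothetical example data).
# A bigram counter: it learns which word tends to follow which, and
# nothing else. It has no concept of truth -- only of frequency.
from collections import Counter, defaultdict

corpus = "the dose is low the dose is high the dose is low".split()

# Count which word follows which in the training data.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely continuation."""
    if word not in follows:
        # Never seen this word? It still answers -- confidently.
        return "is"
    return follows[word].most_common(1)[0][0]

print(predict_next("dose"))  # "is" -- learned pattern, not understanding
print(predict_next("cure"))  # "is" -- never seen the word, answers anyway
```

Note the failure mode: the model answers a question it has never seen with exactly the same confidence as one it has.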

So what happens when it’s unsure?

  • It hallucinates—but sounds completely confident.

    It says things like:

    • “According to the Australian RACGP guidelines…” (when no such guideline exists)

    • “The standard dose is…” (fabricated)

    • “This can be confirmed by…” (non-existent references)
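A practical countermeasure, sketched below with hypothetical names (TRUSTED_GUIDELINES, verify_citation): treat every AI-cited source as unverified until it matches an index you control.

```python
# Minimal sketch (all names and index entries hypothetical): AI-cited
# sources are treated as unverified until matched against a
# human-curated index.
TRUSTED_GUIDELINES = {
    "RACGP Red Book, 10th edition",
    "Australian Immunisation Handbook",
}

def verify_citation(claimed_source: str) -> bool:
    """True only if the cited guideline exists in our curated index."""
    return claimed_source in TRUSTED_GUIDELINES

cited = "RACGP Cosmetic Injectables Guideline 2024"  # plausible-sounding claim

if not verify_citation(cited):
    # Do not pass the claim downstream -- route it to a human reviewer.
    print(f"UNVERIFIED: '{cited}' not in index; flag for human review")
```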

The Trust Gradient: The Core Human Risk

Here’s what happens when an untrained human interacts with AI:

  1. Skepticism – “This can’t be right.”

  2. Testing – “Wait… that was actually useful.”

  3. Adoption – “This is saving me time.”

  4. Trust – “It hasn’t been wrong in a while.”

  5. Delegation – “I’ll let it decide.”

  6. Automation – “I’ll just plug it in.”

  7. Overtrust – “I didn’t check.”

At Step 7, you’re no longer the CEO
– You’re a passive observer.

You must never go beyond Step 4.
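One way to make “never beyond Step 4” structural rather than aspirational is a hard gate in the pipeline. A minimal sketch, with hypothetical names (Draft, approve, publish): the AI may draft, but nothing ships without a named human’s sign-off.

```python
# Minimal sketch (hypothetical names): a hard stop at Step 4 of the
# Trust Gradient. The AI drafts; only a named human can approve.
from dataclasses import dataclass

@dataclass
class Draft:
    content: str
    approved_by: str | None = None  # must be a human name, never the model

def approve(draft: Draft, reviewer: str) -> Draft:
    """Human sign-off -- the only path to publication."""
    draft.approved_by = reviewer
    return draft

def publish(draft: Draft) -> None:
    if draft.approved_by is None:
        # Steps 5-7 (delegation, automation, overtrust) are blocked by design.
        raise PermissionError("No human approval -- output blocked")
    print(f"Published (signed off by {draft.approved_by})")

d = Draft("AI-generated patient letter")
publish(approve(d, "Dr Telles"))      # OK: a human stayed in command
# publish(Draft("unreviewed text"))   # raises PermissionError
```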

Human Oversight is Non-Negotiable

The most dangerous failure mode in AI isn’t malfunction.

It’s human overtrust.

As humans, we naturally trust systems that work.

But over time, we stop checking.

We assume it’s right. We outsource judgment.
That can never happen here.

• Every output must be verified

• Every insight must be reviewed

• Trust must never replace leadership

Coming Soon:

Safety & Oversight
Resource Centre

You are the breaker, the switch, the fuse, and the circuit designer.

Sentinel Force doesn’t run until the CEO authorises it.

That’s not paranoia. That’s safety.
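The breaker metaphor maps naturally onto code. A sketch only, and not the actual Sentinel Protocol internals: a gate the human owns, which defaults to off, demands explicit authorisation, and can be tripped at any moment.

```python
# Illustrative sketch only (not the real Sentinel Protocol): an agent
# wired through a breaker the human owns. Default state is OFF.
from typing import Callable

class Breaker:
    def __init__(self) -> None:
        self._authorised = False  # nothing runs until a human closes the circuit

    def authorise(self, by: str) -> None:
        print(f"Breaker closed by {by}")
        self._authorised = True

    def trip(self, reason: str) -> None:
        print(f"Breaker tripped: {reason}")
        self._authorised = False

    def run(self, task: Callable[[], None]) -> None:
        if not self._authorised:
            raise PermissionError("Not authorised -- agent stays off")
        task()

breaker = Breaker()
breaker.authorise(by="CEO")
breaker.run(lambda: print("agent acting under authorisation"))
breaker.trip("end of supervised session")  # back to the safe default
```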
Deep-dive training modules on:

Executive responsibility
in AI-led environments

AI safety

AI-Hallucination
management

Fluency ≠ Accuracy.
The more fluent the AI, the more likely a human will assume it’s correct.

FINAL WORDS

"The true danger of AI is not whether it becomes smarter than us, but whether we surrender our responsibility too easily.

There are two core risks:

  1. A well-intentioned human overtrusts a system that hallucinates and fails silently.

  2. A malicious actor uses AI with perfect clarity to amplify harm.

That’s why I believe AI must never operate without human oversight,
and that oversight must be grounded in values of service, truth, and the sanctity of human life.

I am not here to worship AI.
I am here to command it."

“At the time of being admitted as a member of the medical profession, I solemnly pledged to dedicate my life to the service of humanity… I will maintain the utmost respect for human life. I make these promises solemnly, freely, and upon my honour.”

— Adapted from the World Medical Association Declaration of Geneva
(recited at graduation 03/12/2016, coded into Sentinel Protocol 30/03/2025)
— Dr Fernando Telles, CEO & Founder, CDA-AI  |  Lightchain Capital


Director, Cosmetic Doctors Australia Pty Ltd
ABN 19 638 019 431 | ACN 638 019 431
AI-Human Synergy™ is a pending trademark of Cosmetic Doctors Australia Pty Ltd. All rights reserved.

© Cosmetic Doctors Australia Pty Ltd 2019

General Enquiries: research@aihumansynergy.org

Medical Bookings (4+ weeks notice): Dr.Telles@aihumansynergy.org