Use case · AI receptionist · 02
Primary fit · Clinics · allied health · services
Runs on · Existing inbox + tools
Melbourne
Use case · 02 / AI receptionist

An AI receptionist that triages the enquiries worth a human reply.

A practical AI receptionist is not a chatbot on your homepage. It is the layer between "enquiry arrived" and "a human deals with it" — reading, sorting, grading and drafting — so staff only pick up the conversations that actually need them.

Clinics first · Service businesses · No patient data · Human-in-the-loop
01 / Reception staff spend hours on repeat questions

Opening hours, parking, fees, referrals, rebates. The same dozen questions, every single day, in slightly different words.

02 / After-hours enquiries drop cold

A form submitted at 9pm that waits until the next morning for any acknowledgement is already half-lost to the clinic that answered first.

03 / No-shows are not followed up quickly

Rebooking someone the day after a missed appointment is a different conversation to reaching out a week later.

04 / Staff retype enquiries into multiple systems

Inbox to CRM to booking system to a spreadsheet. Every hop is a place where context is lost and time is burned.

05 / FAQ-style questions burn an entire morning

By lunchtime, reception has answered the same three questions twenty times each — with nothing compounding.

06 / You want a safer human-in-the-loop layer — not a chatbot

Chatbots on the website are a different product. You want the layer underneath — a triage agent working alongside real staff.

Problem shape

A triage problem, not a chatbot problem.

Enquiries arrive through forms, phone, email, social DMs and referrers. Staff context-switch 50+ times a day between those surfaces, the booking system, the clinic software and a CRM. Most of those messages are routine — booking, rebooking, opening hours, pricing, simple follow-up. A small fraction are genuinely new or need real judgement.

The triage problem is separating the two quickly and cleanly, so staff only touch the conversations that actually need a human. An AI receptionist is the layer that reads, classifies, drafts and escalates — it is not a patient-data layer. It is an operational-workflow layer that runs alongside the systems you already use.

How we wire an AI receptionist.

Four steps · Capture → respond → follow through
01 / Ingest

Enquiries from web forms, shared inbox and (optionally) DMs land in one structured queue. Every message is timestamped, tagged and stored with context.

Forms · Inbox · DMs
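Since the stack below leans on custom TypeScript for critical paths, the ingest step can be sketched roughly like this. The field names and record shape are illustrative, not a fixed schema:

```typescript
// One normalised record per enquiry, whatever surface it arrived on.
type Channel = "form" | "email" | "dm";

interface Enquiry {
  id: string;
  channel: Channel;
  receivedAt: string; // ISO-8601 timestamp
  from: string;       // sender handle or email address
  body: string;       // raw message text
  tags: string[];     // filled in later, at the classify step
}

let nextId = 0;

// Every message is timestamped, tagged and stored with context,
// regardless of whether it came from a form, the inbox or a DM.
function ingest(channel: Channel, from: string, body: string): Enquiry {
  return {
    id: `enq-${++nextId}`,
    channel,
    receivedAt: new Date().toISOString(),
    from,
    body,
    tags: [],
  };
}
```

The point of the single queue is that every downstream step — classify, draft, escalate — works on the same record, no matter where the enquiry started.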
02 / Classify

Each enquiry is tagged by intent — book, reschedule, question, complaint, spam — with a confidence score. Routine ones move. Unclear ones wait for a human.

Intent tag · Confidence · Priority
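The "routine ones move, unclear ones wait" rule is a confidence threshold. A minimal TypeScript sketch, where the threshold value and the always-escalate rule for complaints are assumptions a real build would tune with you:

```typescript
type Intent = "book" | "reschedule" | "question" | "complaint" | "spam";

interface Classification {
  intent: Intent;
  confidence: number; // 0..1, from the model or a rules layer
}

// Routine, high-confidence enquiries move automatically;
// everything else waits in the human review queue.
function route(c: Classification, threshold = 0.85): "auto" | "human-review" {
  // Complaints always go to a human, however confident the tag is.
  if (c.intent === "complaint") return "human-review";
  return c.confidence >= threshold ? "auto" : "human-review";
}
```

The useful property is the failure mode: when the classifier is unsure, nothing is sent, and a human sees the enquiry exactly as it arrived.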
03 / Draft

Templated or LLM-drafted replies go out for routine categories, and sit ready for human approval on anything sensitive or outside the confident-intent bucket.

Templates · LLM draft · Human approve
04 / Escalate

Edge cases go straight to the right human — practice manager, clinician-admin, operations lead — with the right context, in the right channel, in seconds.

Routing · Context · Slack / email

What this looks like in the wild.

Three real-world patterns
01 / Clinic

New-patient booking → calendar link, human confirms

The agent reads the enquiry, identifies intent, matches to the right practitioner and drafts a reply with a booking link. A human confirms and sends — or the system auto-sends when intent and service are unambiguous.

02 / Allied health

Reschedule request → automated swap within policy

Inbound reschedules within your documented booking policy (notice period, available slots, same practitioner) are handled automatically. Anything outside policy lands on reception with the full context.
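The "within policy" gate is a plain boolean check against the clinic's documented rules. A sketch in TypeScript, where the 48-hour notice period and the same-practitioner rule are example values standing in for whatever your actual booking policy says:

```typescript
interface RescheduleRequest {
  hoursNotice: number;        // time until the existing appointment
  requestedSlotFree: boolean; // the new slot is open in the calendar
  samePractitioner: boolean;  // stays with the original practitioner
}

const MIN_NOTICE_HOURS = 48; // assumed policy value, set per clinic

// True → the swap is handled automatically.
// False → the request lands on reception with the full context.
function withinPolicy(r: RescheduleRequest): boolean {
  return (
    r.hoursNotice >= MIN_NOTICE_HOURS &&
    r.requestedSlotFree &&
    r.samePractitioner
  );
}
```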

03 / Professional services

RFP-style enquiry → summary + qualification in Slack

A long partnership or RFP-style email is summarised, graded, and dropped into Slack with two or three suggested qualification questions. The partner owns the reply; the reading is already done.

02 / Tech stack · What we usually build on
Inbox & comms
  • Gmail or shared inbox
  • Slack or Microsoft Teams
Booking
  • Cliniko
  • Halaxy
  • Calendly or Cal.com
Knowledge
  • Notion
  • Clinic software
  • Your own FAQ and policies
Orchestration
  • n8n
  • Make
  • Custom TypeScript for critical paths
Model layer
  • OpenAI, Anthropic or Google
  • Grounded in your FAQ and policies
  • Evals on real prior enquiries
Clinic-safe · operational only

This is an operations layer — not a clinical one.

For clinics and allied health, this use-case is strictly operational. The agent does not read, store or reason about patient-level health information. Its job is bookings, reminders, rebookings, FAQ-style questions, admin queues and routing. No patient-identifiable health data is required for the audit or the build — de-identified workflow examples are more than enough.

"Front-desk is on the phone — our inbox is stacking up."
"We lose after-hours new-patient enquiries to whoever replies first."
"Rebookings are the single biggest admin time sink."
Examples like that are exactly right. No names, no conditions, no case notes — just where operations break.

Questions we get before people start.

Common answers · Straight, no marketing
Q / 01

Is this a chatbot?

No. A chatbot is a widget on your website. An AI receptionist is the layer behind your inbox, booking system and CRM — it reads, classifies, drafts and routes. Visitors usually never see it directly.

Q / 02

Does it read patient data?

No. This is an operational layer. Clinical information stays inside your clinic-approved systems. The agent works on scheduling, reminders, intake, FAQs, admin queues and routing — all at the operational surface, with no clinical reasoning.

Q / 03

What happens to sensitive information if it does appear?

Anything that looks clinical is flagged and escalated straight to the right human, unread by the drafting model where possible. We design the intake and routing so sensitive content stays out of the automated path by default.
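One way to keep clinical content away from the drafting model by default is a pre-screen that runs before any model sees the message. A minimal sketch, where the marker list is a placeholder a real deployment would maintain with the clinic and pair with the classifier:

```typescript
// Illustrative markers only; the real list is agreed with the clinic.
const CLINICAL_MARKERS = ["diagnosis", "medication", "symptoms", "test results"];

// True → skip drafting entirely and escalate to the right human.
function looksClinical(body: string): boolean {
  const text = body.toLowerCase();
  return CLINICAL_MARKERS.some((marker) => text.includes(marker));
}
```

A keyword screen is deliberately blunt: it is cheap, auditable, and it fails toward escalation rather than toward an automated reply.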

Q / 04

How do we audit what it did?

Every action is logged — the original enquiry, the classification, the confidence score, the draft, the approver, the final send. You get a clear trail you can review weekly in under 10 minutes.
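In TypeScript terms, one audit entry per action might look like the sketch below, with a helper that pulls out the entries worth a second look in that weekly review. Field names and the review rule are illustrative:

```typescript
interface AuditEntry {
  enquiryId: string;
  intent: string;
  confidence: number;
  draft: string;
  approvedBy: string | null; // null when auto-sent within policy
  sentAt: string;            // ISO-8601 timestamp
}

// Surface the entries worth a second look each week:
// low-confidence sends, and anything that went out without
// a named approver.
function weeklyReview(log: AuditEntry[], threshold = 0.85): AuditEntry[] {
  return log.filter((e) => e.confidence < threshold || e.approvedBy === null);
}
```

Because the trail is just structured records, the weekly review is a filter, not a forensic exercise.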

Q / 05

How do we start small?

We usually start with a single lane — often rebookings or FAQ-style questions — prove it out for two to four weeks, and only then expand the scope. The bar is evidence, not ambition.

Q / 06

Can it work with our booking software?

In most cases, yes. We have wired this against Cliniko, Halaxy, Calendly and Cal.com, and custom systems with usable APIs. If your booking software has no integration surface, the agent will still draft and route; the booking action stays manual.

Start with the triage lane

Free up reception without a chatbot in sight.