An AI receptionist that surfaces the enquiries worth a human reply.
A practical AI receptionist is not a chatbot on your homepage. It is the layer between "enquiry arrived" and "a human deals with it" — reading, sorting, grading and drafting — so staff only pick up the conversations that actually need them.
Opening hours, parking, fees, referrals, rebates. The same dozen questions, every single day, in slightly different words.
A form submitted at 9pm that waits until the next morning for any acknowledgement is already half-lost to the clinic that answered first.
Rebooking someone the day after a missed appointment is a different conversation to reaching out a week later.
Inbox to CRM to booking system to a spreadsheet. Every hop is a place where context is lost and time is burned.
By lunchtime, reception has answered the same three questions twenty times each — with nothing compounding.
Chatbots on the website are a different product. You want the layer underneath — a triage agent working alongside real staff.
A triage problem, not a chatbot problem.
Enquiries arrive through forms, phone, email, social DMs and referrers. Staff context-switch 50+ times a day between those surfaces, the booking system, the clinic software and a CRM. Most of those messages are routine — booking, rebooking, opening hours, pricing, simple follow-up. A small fraction are genuinely new or need real judgement.
The triage problem is separating the two quickly and cleanly, so staff only touch the conversations that actually need a human. An AI receptionist is the layer that reads, classifies, drafts and escalates — it is not a patient-data layer. It is an operational-workflow layer that runs alongside the systems you already use.
How we wire an AI receptionist.
Ingest
Enquiries from web forms, shared inbox and (optionally) DMs land in one structured queue. Every message is timestamped, tagged and stored with context.
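One way to picture that queue entry, as a minimal sketch: the field names below are illustrative placeholders, not a fixed schema.

```typescript
// Hypothetical shape for a normalised enquiry; names are illustrative.
type Channel = "web_form" | "email" | "dm";

interface Enquiry {
  id: string;
  channel: Channel;
  receivedAt: string; // ISO-8601 timestamp
  from: string;       // sender handle or email address
  body: string;       // raw message text
  tags: string[];     // filled in later by the classify step
}

let nextId = 0;

// Normalise a raw inbound message into one structured queue entry.
function normalizeEnquiry(channel: Channel, from: string, body: string): Enquiry {
  return {
    id: `enq-${++nextId}`,
    channel,
    receivedAt: new Date().toISOString(),
    from,
    body: body.trim(),
    tags: [],
  };
}
```

Whatever the real schema looks like, the point is that every surface feeds the same record shape, so downstream steps never care where a message came from.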
Classify
Each enquiry is tagged by intent — book, reschedule, question, complaint, spam — with a confidence score. Routine ones move. Unclear ones wait for a human.
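The thresholding logic can be sketched like this. The keyword matcher below is a toy stand-in for the model call, and the confidence floor is an assumed value, not a prescribed one:

```typescript
type Intent = "book" | "reschedule" | "question" | "complaint" | "spam" | "unclear";

interface Classification {
  intent: Intent;
  confidence: number; // 0..1
}

// Toy keyword classifier standing in for an LLM call; real scores come
// from the model. This only illustrates the thresholding shape.
function classify(body: string): Classification {
  const text = body.toLowerCase();
  if (/\b(reschedule|move my appointment)\b/.test(text)) return { intent: "reschedule", confidence: 0.92 };
  if (/\b(book|appointment)\b/.test(text)) return { intent: "book", confidence: 0.85 };
  if (/\b(complaint|unhappy|refund)\b/.test(text)) return { intent: "complaint", confidence: 0.8 };
  return { intent: "unclear", confidence: 0.3 };
}

const CONFIDENCE_FLOOR = 0.75; // assumed threshold; below it, a human looks first

function routeAutomatically(c: Classification): boolean {
  return c.confidence >= CONFIDENCE_FLOOR && c.intent !== "complaint" && c.intent !== "unclear";
}
```

Complaints never route automatically regardless of confidence, which is the general pattern: some categories are human-only by construction, not by score.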
Draft
Templated or LLM-drafted replies go out for routine categories, and sit ready for human approval on anything sensitive or outside the confident-intent bucket.
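The auto-send versus approval decision is a small, fail-closed gate. A sketch, with an assumed confidence bar:

```typescript
type DraftDisposition = "auto_send" | "await_approval";

interface Draft {
  intent: string;
  confidence: number;
  sensitive: boolean; // set upstream if anything clinical-looking appears
  text: string;
}

// Sensitive content and low-confidence intents never auto-send.
// The 0.9 bar is illustrative, not fixed policy.
function disposition(d: Draft, autoSendIntents: Set<string>): DraftDisposition {
  if (d.sensitive) return "await_approval";
  if (!autoSendIntents.has(d.intent)) return "await_approval";
  if (d.confidence < 0.9) return "await_approval";
  return "auto_send";
}
```

Note the ordering: the sensitivity check runs first, so no amount of classifier confidence can push a flagged draft out the door.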
Escalate
Edge cases go straight to the right human — practice manager, clinician-admin, operations lead — with the right context, in the right channel, in seconds.
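"Right human, right channel" is usually just a routing table with a safe fallback. A sketch; the owners and channel names are placeholders, not real configuration:

```typescript
interface EscalationTarget {
  owner: string;   // e.g. a role like "practice-manager"
  channel: string; // e.g. a Slack channel name
}

// Illustrative routing table; entries are placeholders.
const ROUTES: Record<string, EscalationTarget> = {
  complaint:   { owner: "practice-manager", channel: "#escalations" },
  clinical:    { owner: "clinician-admin",  channel: "#clinical-admin" },
  partnership: { owner: "operations-lead",  channel: "#partnerships" },
};

// Anything unrecognised still lands somewhere a human watches.
const FALLBACK: EscalationTarget = { owner: "practice-manager", channel: "#triage" };

function escalate(category: string): EscalationTarget {
  return ROUTES[category] ?? FALLBACK;
}
```

The fallback matters more than the table: an unknown category should degrade to "a human sees it", never to "nothing happens".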
What this looks like in the wild.
New-patient booking → calendar link, human confirms
The agent reads the enquiry, identifies intent, matches to the right practitioner and drafts a reply with a booking link. A human confirms and sends — or the system auto-sends when intent and service are unambiguous.
Reschedule request → automated swap within policy
Inbound reschedules within your documented booking policy (notice period, available slots, same practitioner) are handled automatically. Anything outside policy lands on reception with the full context.
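"Within policy" is a mechanical check once the policy is written down. A sketch, with invented field names standing in for your documented rules:

```typescript
interface ReschedulePolicy {
  minNoticeHours: number;    // required notice before the old slot
  samePractitioner: boolean; // must the swap keep the same practitioner?
}

interface RescheduleRequest {
  hoursUntilOldSlot: number;
  newSlotAvailable: boolean;
  newSlotSamePractitioner: boolean;
}

// True when the swap can be automated; anything else goes to reception
// with full context. Field names are illustrative.
function withinPolicy(req: RescheduleRequest, policy: ReschedulePolicy): boolean {
  if (req.hoursUntilOldSlot < policy.minNoticeHours) return false;
  if (!req.newSlotAvailable) return false;
  if (policy.samePractitioner && !req.newSlotSamePractitioner) return false;
  return true;
}
```

If a rule cannot be expressed this plainly, it probably is not automatable yet, and that conversation belongs with a human anyway.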
RFP-style enquiry → summary + qualification in Slack
A long partnership or RFP-style email is summarised, graded, and dropped into Slack with two or three suggested qualification questions. The partner owns the reply; the reading is already done.
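The Slack drop is just a formatted message built from the summary. A sketch; the grade scale and field names are invented for illustration, and actually posting would be a request to a Slack incoming-webhook URL, omitted here:

```typescript
interface RfpSummary {
  sender: string;
  grade: "A" | "B" | "C";           // illustrative grading scale
  summary: string;
  qualificationQuestions: string[]; // two or three suggested questions
}

// Builds the message text the partner sees in Slack.
function slackMessage(s: RfpSummary): string {
  const questions = s.qualificationQuestions
    .map((q, i) => `${i + 1}. ${q}`)
    .join("\n");
  return [
    `New RFP-style enquiry from ${s.sender} (grade ${s.grade})`,
    s.summary,
    "Suggested qualification questions:",
    questions,
  ].join("\n");
}
```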
Inbox and chat
- Gmail or shared inbox
- Slack or Microsoft Teams

Booking and scheduling
- Cliniko
- Halaxy
- Calendly or Cal.com

Knowledge
- Notion
- Clinic software
- Your own FAQ and policies

Orchestration
- n8n
- Make
- Custom TypeScript for critical paths

Models
- OpenAI, Anthropic or Google
- Grounded in your FAQ and policies
- Evals on real prior enquiries
This is an operations layer — not a clinical one.
For clinics and allied health, this use-case is strictly operational. The agent does not read, store or reason about patient-level health information. Its job is bookings, reminders, rebookings, FAQ-style questions, admin queues and routing. No patient-identifiable health data is required for the audit or the build — de-identified workflow examples are more than enough.
Questions we get before people start.
Is this a chatbot?
No. A chatbot is a widget on your website. An AI receptionist is the layer behind your inbox, booking system and CRM — it reads, classifies, drafts and routes. Visitors usually never see it directly.
Does it read patient data?
No. This is an operational layer. Clinical information stays inside your clinic-approved systems. The agent works on scheduling, reminders, intake, FAQs, admin queues and routing — all at the operational surface, with no clinical reasoning.
What happens to sensitive information if it does appear?
Anything that looks clinical is flagged and escalated straight to the right human, unread by the drafting model where possible. We design the intake and routing so sensitive content stays out of the automated path by default.
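A minimal sketch of that fail-closed screen, run before any draft is generated. The patterns below are crude illustrations; a real deployment would use a tuned filter:

```typescript
// Crude keyword screen; illustrative only. The important property is the
// shape: anything that trips it skips drafting entirely.
const SENSITIVE_PATTERNS = [/\bdiagnos/i, /\bmedication\b/i, /\bresults?\b/i, /\bsymptom/i];

function looksClinical(body: string): boolean {
  return SENSITIVE_PATTERNS.some((p) => p.test(body));
}

// Fail closed: if it looks clinical, escalate without drafting.
function nextStep(body: string): "draft" | "escalate_unread" {
  return looksClinical(body) ? "escalate_unread" : "draft";
}
```

False positives here cost a human a minute; false negatives cost trust, which is why the screen is tuned to over-escalate.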
How do we audit what it did?
Every action is logged — the original enquiry, the classification, the confidence score, the draft, the approver, the final send. You get a clear trail you can review weekly in under 10 minutes.
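One record per action is all the trail needs. A sketch with placeholder field names, plus the kind of date filter a weekly review would use:

```typescript
// Illustrative audit record; field names are placeholders.
interface AuditRecord {
  enquiryId: string;
  classification: string;
  confidence: number;
  draft: string;
  approvedBy: string | null; // null when auto-sent within policy
  sentAt: string;            // ISO-8601, UTC
}

const auditLog: AuditRecord[] = [];

function logAction(record: AuditRecord): void {
  auditLog.push(record);
}

// Pull everything since a given date for the weekly review.
// ISO-8601 UTC strings compare correctly as plain strings.
function recordsSince(isoDate: string): AuditRecord[] {
  return auditLog.filter((r) => r.sentAt >= isoDate);
}
```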
How do we start small?
We usually start with a single lane — often rebookings or FAQ-style questions — prove it out for two to four weeks, and only then expand the scope. The bar is evidence, not ambition.
Can it work with our booking software?
In most cases, yes. We have wired this against Cliniko, Halaxy, Calendly, Cal.com and custom systems with usable APIs. If your booking software has no integration surface, the agent still drafts and routes; the booking action stays manual.