Production Case Study

Automating social presence for a 600-guest event venue.

An end-to-end agentic content pipeline — Make.com orchestration, Google Gemini 2.0 Flash for copy, Imagen 3 for visuals — running live and unattended for an Oshawa-based luxury hospitality venue.

Client
Oshawa-based event venue
Industry
Hospitality / Events
Capacity
600 guests
Service
White-label social automation
Platforms
Facebook · Instagram
Live since
Q1 2026

The challenge

The venue's operators were stretched between events, vendors, and guests. Posting consistently on Facebook and Instagram — writing captions, sourcing imagery, picking hashtags, scheduling — was the first commitment to slip every week. Months of silent feeds had already cost local discoverability, exactly the organic-reach loop that fills a wedding or corporate booking calendar.

Two hard constraints

Time: Operations staff couldn't sustain weekly content production.
Brand voice: A luxury venue can't post like a fast-casual restaurant — and a $2,000/month agency wasn't economically justifiable.

The solution

A fully orchestrated AI content pipeline configured to the venue's luxury hospitality voice. Five deliverables:

  • AI-generated captions tuned to the venue's tone, with separate copy lengths for Facebook vs. Instagram.
  • Branded image overlays — every post composited with the venue's logo and a hook headline sized to be legible in the IG main grid.
  • Reels with text overlays — short-form vertical video assembled from the venue's own footage, with on-video hook text and logo, published to IG Reels and FB Video.
  • Scheduled, hands-off publishing via direct integration with Meta's Graph API — posts go live on schedule, no manual approval needed for routine content.
  • Quality monitoring — every published asset logged and reviewable; failures auto-retried.

Architecture

A 15-module Make.com scenario, split into two trigger paths: content generation and human-approval-driven publishing. Designed as multi-client from day one — every module is parameterised on Client_ID, so the same pipeline serves multiple white-label customers without duplication.

Make.com Scenario · Content Fabrication Pipeline

Scenario 1 · Content generation
  1. Trigger · Watch Sheet: Google Sheets, new row in _Requests
  2. Context · Brand + History: read brand profile + top 3 reference posts
  3. Generate · Prompt Architect: Gemini 2.0 Flash, JSON output mode
  4. Render · 3 Visual Variants: Imagen 3 fast, 1:1, sampleCount=3
  5. Deliver · Drive + Notify: upload to /_Drafts/, log to _GenerationLog

Scenario 2 · Approval-driven publishing
  1. Trigger · Variant Selected: Sheet update, Selected_Variant ≠ empty
  2. Route · Resolve File: switch on variant 1/2/3 → Drive file ID
  3. Promote · Move to /Queue: Google Drive, update _GenerationLog
  4. Publish · Schedule + Post: Meta Graph API, FB + IG, both formats
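The final Publish step maps to the standard Meta Graph API flows: Instagram requires two calls (create a media container, then publish it), while a Facebook Page photo posts in one call. A minimal Python sketch of the request shapes — the API version and all IDs are illustrative placeholders, not values from the production scenario:

```python
# Sketch of the Publish step's Graph API calls. IG is a two-call flow
# (media container, then media_publish); FB Pages take a single /photos call.
# v19.0 and all IDs below are assumptions for illustration.
GRAPH = "https://graph.facebook.com/v19.0"

def ig_publish_requests(ig_user_id: str, image_url: str, caption: str):
    """Return the two (url, payload) calls Instagram requires, in order."""
    container = (
        f"{GRAPH}/{ig_user_id}/media",
        {"image_url": image_url, "caption": caption},
    )
    publish = (
        f"{GRAPH}/{ig_user_id}/media_publish",
        {"creation_id": "<container-id>"},  # filled from call 1's response
    )
    return [container, publish]

def fb_publish_request(page_id: str, image_url: str, message: str):
    """Facebook Page: one call posts the photo with its caption."""
    return (f"{GRAPH}/{page_id}/photos", {"url": image_url, "message": message})
```

In the live scenario, Make.com's Facebook and Instagram modules wrap these calls; the sketch only shows why "both formats" is two code paths rather than one.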

The stack

Chosen for cost efficiency and reliability — and to keep the customer's data inside Google Workspace, which they already use.

Make.com
Orchestration · 15 modules across two scenarios
Gemini 2.0 Flash
Prompt architect · structured JSON output
Imagen 3 (fast)
Image generation · 3 variants per request
Google Sheets
Source of truth · brand profile + content log
Google Drive
Asset storage · /Drafts → /Queue lifecycle
Meta Graph API
Direct posting to FB + IG · reels + grid

Cost engineering

Per content request (3 image variants): approximately $0.12 – $0.15 CAD in API costs. Cheaper than DALL-E 3 at equivalent quality once we switched to Imagen 3 fast for the variant-generation step. Make.com operations: ~36 ops per request, well within the paid plan ceiling.

A glimpse of the prompt architecture

The Gemini module receives three structured inputs — client brand profile, content request, and three top-performing historical posts — and must return a single JSON object: { image_prompt, negative_prompt }. The system prompt locks in Imagen 3-specific rules (descriptive natural language, exact colour tokens, mood and material), enforces JSON-only output (no markdown), and mirrors the visual energy of the historical examples rather than their literal subjects.

# Module setup — Make.com → Google Gemini
Model: gemini-2.0-flash
Response MIME type: application/json   # prevents markdown fences
Max output tokens: 800
Temperature: 0.7

# System prompt enforces:
#   • single JSON object, raw — no preamble, no fences
#   • image_prompt: 150–280 words, Imagen 3-tuned
#   • negative_prompt: 8–15 visual elements to exclude
#   • mirror historical examples in mood, not subject
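Downstream of the Gemini module, the JSON contract can be enforced before the image_prompt reaches Imagen. A sketch of such a validator — the field names come from the case study, the thresholds mirror the system-prompt rules, and the comma-separated negative_prompt format is an assumption:

```python
import json

def validate_prompt_payload(raw: str) -> dict:
    """Sanity-check Gemini output against the contract: a single raw JSON
    object with image_prompt (150-280 words) and negative_prompt
    (assumed here to be 8-15 comma-separated visual elements)."""
    data = json.loads(raw)  # raises if the model wrapped output in fences
    if set(data) != {"image_prompt", "negative_prompt"}:
        raise ValueError(f"unexpected keys: {sorted(data)}")
    words = len(data["image_prompt"].split())
    if not 150 <= words <= 280:
        raise ValueError(f"image_prompt is {words} words, want 150-280")
    elements = [e for e in data["negative_prompt"].split(",") if e.strip()]
    if not 8 <= len(elements) <= 15:
        raise ValueError(f"negative_prompt has {len(elements)} elements, want 8-15")
    return data
```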

Results

15 modules · Production scenario · live and unattended
~10 min · Owner time per post · down from hours
$0.12/post · All-in API cost · 3 variants generated
2 days · Build to first production post

Operational metrics are reviewed monthly with the customer; published-post volume scales with their event calendar.

What I learned

The model is the easy part. The interface is the product. The hardest decision wasn't which LLM or which image generator — it was choosing where to put the human in the loop. We landed on a Google Sheets-based generation log: every variant gets a row, the owner picks 1/2/3 in a single cell, and that selection triggers the publishing scenario. Sheets is what the customer already opens every morning. The "AI dashboard" is a column with a number in it.

Cost engineering matters at SMB scale. Imagen 3 fast at ~$0.02/image vs. DALL-E 3 at ~$0.04 is the difference between a flat-fee business model that pencils and one that doesn't. At three variants per request and ~20 posts/month per client, you're talking $1.20 vs. $2.40 in cost-of-goods — for a service priced under $400/month, that 2x matters.
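The arithmetic behind that comparison, as a checkable sketch (prices are the approximate per-image figures quoted above):

```python
def monthly_image_cogs(posts_per_month: int, variants_per_post: int,
                       cost_per_image: float) -> float:
    """Cost of goods for image generation alone, per client per month."""
    return posts_per_month * variants_per_post * cost_per_image

imagen_fast = monthly_image_cogs(20, 3, 0.02)  # Imagen 3 fast: ~$1.20
dalle3 = monthly_image_cogs(20, 3, 0.04)       # DALL-E 3: ~$2.40
```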

Multi-client architecture is a Day 1 decision, not a Day 60 refactor. Every module reads Client_ID as its first action. Onboarding the second customer was a 30-minute job. Onboarding the tenth will be the same.
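That pattern reduces onboarding to adding one row of configuration. A sketch of the lookup every module performs first — field names other than Client_ID are illustrative:

```python
def load_client_config(sheet_rows: list[dict], client_id: str) -> dict:
    """First action of every module: resolve the row of client-specific
    settings (brand voice, Drive folder IDs, Meta page IDs) by Client_ID,
    so no module hard-codes a single customer."""
    for row in sheet_rows:
        if row["Client_ID"] == client_id:
            return row
    raise KeyError(f"unknown Client_ID: {client_id}")
```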

Want this running for your business?

Book a free 30-minute discovery call. We'll spec what an automated content pipeline would look like for your venue, restaurant, or service business — no commitment required.

Book Free Consultation