The challenge
The venue's operators were stretched between events, vendors, and guests. Posting consistently on Facebook and Instagram — writing captions, sourcing imagery, picking hashtags, scheduling — was the first commitment to slip every week. Months of silent feeds had already eroded local discoverability, breaking exactly the organic-reach loop that fills a wedding or corporate booking calendar.
- Time: operations staff couldn't sustain weekly content production.
- Brand voice: a luxury venue can't post like a fast-casual restaurant, and a $2,000/month agency wasn't economically justifiable.
The solution
A fully orchestrated AI content pipeline configured to the venue's luxury hospitality voice. Five deliverables:
- AI-generated captions tuned to the venue's tone, with separate copy lengths for Facebook vs. Instagram.
- Branded image overlays — every post composited with the venue's logo and a hook headline sized to be legible in the IG main grid.
- Reels with text overlays — short-form vertical video assembled from the venue's own footage, with on-video hook text and logo, published to IG Reels and FB Video.
- Scheduled, hands-off publishing via direct integration with Meta's Graph API — once a variant is approved, posts go live on schedule with no further manual steps.
- Quality monitoring — every published asset logged and reviewable; failures auto-retried.
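For context on what the Graph API integration involves: Instagram publishing is a two-step flow (create a media container, then publish it). A minimal sketch of the request shapes — the Graph version, IDs, and token are placeholders, and in production these calls are made by Make.com modules, not Python:

```python
# Hedged sketch of Instagram's two-step publish flow on Meta's Graph API.
# Version, IDs, and token are placeholders for illustration.
GRAPH = "https://graph.facebook.com/v19.0"

def ig_publish_steps(ig_user_id: str, image_url: str, caption: str, token: str):
    """Build the (endpoint, payload) pairs for the container + publish steps."""
    create_container = (
        f"{GRAPH}/{ig_user_id}/media",
        {"image_url": image_url, "caption": caption, "access_token": token},
    )
    # The container ID returned by the first call feeds the second.
    publish = (
        f"{GRAPH}/{ig_user_id}/media_publish",
        {"creation_id": "<container-id-from-step-1>", "access_token": token},
    )
    return [create_container, publish]
```

Facebook Page posts are simpler (a single call per asset), which is why the IG leg is where the retry logic earns its keep.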
Architecture
A 15-module Make.com scenario, split into two trigger paths (content generation and human-approval-driven publishing). Multi-client by design from day one — every module is parameterised on Client_ID, so the same pipeline serves multiple white-label customers without code duplication.
Pipeline flow (reconstructed from the scenario's node labels):
- Generation trigger: a new row in the _Requests sheet, enriched with the client's top 3 reference posts.
- Gemini, in JSON output mode, writes the image prompt.
- Imagen 3 generates variants at 1:1 aspect ratio with sampleCount=3; each variant is logged to _GenerationLog.
- Publishing trigger: a _GenerationLog row where Selected_Variant ≠ empty; the chosen variant resolves to a Drive File ID.
- After publishing, the _GenerationLog row is updated and the post goes out to FB + IG, both formats.
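The variant-generation step maps onto an image-generation predict request. A sketch of the body as this pipeline configures it (1:1, three samples); the exact request schema and field names follow Vertex AI's REST shape and are an assumption, not the production module config:

```python
# Hedged sketch: the Imagen variant-generation request body as configured
# in the pipeline (1:1 aspect ratio, three samples per request). The
# instances/parameters schema is an assumption based on Vertex AI's REST shape.
def imagen_request(image_prompt: str, negative_prompt: str) -> dict:
    return {
        "instances": [{"prompt": image_prompt}],
        "parameters": {
            "sampleCount": 3,       # three variants per content request
            "aspectRatio": "1:1",   # square crop, legible in the IG grid
            "negativePrompt": negative_prompt,
        },
    }
```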
The stack
Chosen for cost efficiency and reliability, and to keep the customer's data inside Google Workspace, which they already use: Make.com for orchestration, Google Sheets and Drive for logs and assets, Gemini for prompt generation, Imagen 3 fast for image variants, and Meta's Graph API for publishing.
Cost engineering
Per content request (3 image variants): approximately $0.12 – $0.15 CAD in API costs. Cheaper than DALL-E 3 at equivalent quality once we switched to Imagen 3 fast for the variant-generation step. Make.com operations: ~36 ops per request, well within the paid plan ceiling.
A glimpse of the prompt architecture
The Gemini module receives three structured inputs — client brand profile, content request, and three top-performing historical posts — and must return a single JSON object: { image_prompt, negative_prompt }. The system prompt locks in Imagen 3-specific rules (descriptive natural language, exact colour tokens, mood and material), enforces JSON-only output (no markdown), and mirrors the visual energy of the historical examples rather than their literal subjects.
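Downstream modules have to trust that JSON contract, so the reply is worth parsing defensively. A minimal sketch — the fence-stripping fallback and key check are illustrative, not the production scenario's exact logic:

```python
import json

# Keys the Gemini module is contractually required to return.
REQUIRED_KEYS = {"image_prompt", "negative_prompt"}

def parse_prompt_payload(raw: str) -> dict:
    """Parse Gemini's JSON-only reply, tolerating a stray markdown fence."""
    text = raw.strip()
    if text.startswith("```"):
        # Strip an accidental ```json ... ``` wrapper despite the JSON-only rule.
        text = text.strip("`")
        if text.startswith("json"):
            text = text[4:]
    payload = json.loads(text)
    missing = REQUIRED_KEYS - payload.keys()
    if missing:
        raise ValueError(f"Gemini reply missing keys: {missing}")
    return payload
```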
Results
Operational metrics are reviewed monthly with the customer; published-post volume scales with their event calendar.
What I learned
The model is the easy part. The interface is the product. The hardest decision wasn't which LLM or which image generator — it was choosing where to put the human in the loop. We landed on a Google Sheets-based generation log: every variant gets a row, the owner picks 1/2/3 in a single cell, and that selection triggers the publishing scenario. Sheets is what the customer already opens every morning. The "AI dashboard" is a column with a number in it.
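That trigger condition reduces to a row filter: publish when Selected_Variant holds 1, 2, or 3 and the row hasn't been handled yet. A sketch of the logic — Selected_Variant is the column named in the source; the Published_At column is my assumption for the "already handled" flag:

```python
# Illustrative row filter for the publishing trigger. Selected_Variant is the
# approval cell the owner fills in; Published_At (an assumption) marks rows
# the publishing scenario has already consumed.
def rows_to_publish(rows: list[dict]) -> list[dict]:
    return [
        r for r in rows
        if str(r.get("Selected_Variant", "")).strip() in {"1", "2", "3"}
        and not r.get("Published_At")
    ]
```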
Cost engineering matters at SMB scale. Imagen 3 fast at ~$0.02/image vs. DALL-E 3 at ~$0.04 is the difference between a flat-fee business model that pencils and one that doesn't. At three variants per request and ~20 posts/month per client, you're talking $1.20 vs. $2.40 in cost-of-goods — for a service priced under $400/month, that 2x matters.
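The unit economics above reduce to one multiplication, spelled out:

```python
# Cost-of-goods comparison from the figures above: 3 variants per request,
# ~20 posts/month per client, per-image prices as quoted.
variants_per_post = 3
posts_per_month = 20
images = variants_per_post * posts_per_month   # 60 images/month

imagen3_fast_cogs = images * 0.02              # ~$1.20/month
dalle3_cogs = images * 0.04                    # ~$2.40/month
```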
Multi-client architecture is a Day 1 decision, not a Day 60 refactor. Every module reads Client_ID as its first action. Onboarding the second customer was a 30-minute job. Onboarding the tenth will be the same.