Building an iMessage-Native Decision Agent with Photon iMessage Kit


TL;DR

We built Future-Me Courtroom, an iMessage-native agent that turns a dilemma into:

  • 3 competing long-horizon perspectives,
  • 1 forced verdict,
  • 1 concrete next action,
  • and an accountability loop via scheduled follow-ups.

Stack: Bun + TypeScript + @photon-ai/imessage-kit + OpenAI Responses API.


The Product Idea

Text your dilemma, and three versions of your future self argue the case and force a verdict.

The goal was not “another chatbot.” The goal was behavior change through:

  • constraint-driven reasoning,
  • concrete execution steps,
  • and continuity across conversations.

Why Photon iMessage Kit

Photon solves the hardest part: robust local iMessage automation on macOS.

What we used:

  • startWatching for real-time inbound messages,
  • send for outbound replies,
  • MessageScheduler for deferred nudges,
  • Reminders for natural-language reminder creation.

High-Level Architecture


Runtime Flow

  1. Load runtime env (.env, fallback parent .env, or COURT_ENV_PATH).
  2. Boot IMessageSDK and watcher.
  3. For each inbound direct message:
    • skip self-sent events,
    • dedupe by GUID and short-window normalized text,
    • route commands (help, appeal, done, etc.),
    • otherwise invoke LLM courtroom reasoning.
  4. Persist updated memory and optionally schedule a follow-up nudge.
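The per-message routing in step 3 can be sketched as a pure function. A minimal sketch, assuming an illustrative message shape and helper names (`Inbound`, `seenGuids`, `COMMANDS` are not the project's actual API):

```typescript
// Hypothetical inbound shape; the real watcher event has more fields.
type Inbound = { guid: string; text: string; isFromMe: boolean }
type Route = 'skip' | 'command' | 'courtroom'

// Command keywords from the flow above (help, appeal, done, ...).
const COMMANDS = new Set(['help', 'appeal', 'done'])

function routeInbound(msg: Inbound, seenGuids: Set<string>): Route {
  if (msg.isFromMe) return 'skip'             // skip self-sent events
  if (seenGuids.has(msg.guid)) return 'skip'  // dedupe by GUID
  seenGuids.add(msg.guid)
  const word = msg.text.trim().toLowerCase()
  if (COMMANDS.has(word)) return 'command'    // route known commands
  return 'courtroom'                          // otherwise invoke LLM reasoning
}
```

Keeping routing pure like this makes the dedupe and command logic trivially unit-testable without booting the SDK.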

Core Implementation Highlights

1) Inbound reliability guards

```typescript
if (alreadyProcessed(msg.guid)) return
if (text && isDuplicateInboundText(chatKey, text)) return
if (echoGuard.isRecentEcho(chatKey, text)) return
```

This protects against duplicate watcher events and self-thread reflections.
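One way to implement the short-window normalized-text dedupe is to remember a normalized form of each inbound text per chat and drop repeats seen within a fixed window. A sketch, where the 10-second window and the map layout are assumptions, not the project's actual values:

```typescript
const WINDOW_MS = 10_000 // assumed dedupe window
// key: `${chatKey}|${normalizedText}` -> last-seen timestamp (ms)
const recent = new Map<string, number>()

function isDuplicateInboundText(chatKey: string, text: string, now = Date.now()): boolean {
  // Normalize: trim, lowercase, collapse whitespace.
  const key = `${chatKey}|${text.trim().toLowerCase().replace(/\s+/g, ' ')}`
  const last = recent.get(key)
  recent.set(key, now)
  return last !== undefined && now - last < WINDOW_MS
}
```

Normalizing before comparing catches duplicate watcher events that differ only in whitespace or casing.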

2) Structured LLM output contract

We force a JSON schema response and parse resiliently across output shapes.

```typescript
text: {
  format: {
    type: 'json_schema',
    name: 'future_me_courtroom',
    schema,
    strict: true,
  },
}
```

Fallback logic ensures a deterministic response if model calls fail.
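The parse-resiliently-with-fallback idea can be sketched like this: attempt to parse the model output against the expected shape, and on any failure return a fixed, deterministic reply so the user always gets something back. Field names here mirror the case fields described later in the post but are assumptions:

```typescript
type Verdict = { verdict: string; firstAction: string; confidence: number }

// Deterministic reply used whenever the model output is empty or malformed.
const FALLBACK: Verdict = {
  verdict: 'The court could not reach a verdict this time.',
  firstAction: 'Resend your dilemma in one or two sentences.',
  confidence: 0,
}

function parseVerdict(raw: string | null | undefined): Verdict {
  if (!raw) return FALLBACK
  try {
    const parsed = JSON.parse(raw)
    // Minimal shape check before trusting the payload.
    if (typeof parsed?.verdict === 'string' && typeof parsed?.firstAction === 'string') {
      return parsed as Verdict
    }
  } catch {
    // fall through to the deterministic fallback
  }
  return FALLBACK
}
```

Even with `strict: true` schemas, a refusal, truncation, or transport error can still produce unusable output, so the fallback path matters.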

3) Attachment evidence mode

Any inbound attachment is summarized and injected as explicit reasoning constraints.

```typescript
const attachmentBlock = hasAttachments
  ? `\n\nEVIDENCE ATTACHMENTS:\n- ${attachmentSummaries.join('\n- ')}\nUse these as factual constraints in your reasoning.`
  : ''
```

4) Natural-language reminders

We use Photon’s Reminders wrapper for simple scheduling UX.

```typescript
const reminderId = reminders.at('tomorrow 9am', replyTarget, 'Ship the draft')
```

Memory Model

Memory is persisted in local JSON per chat key:

  • values
  • avoidances
  • identity
  • cases[]

Each case stores:

  • dilemma summary,
  • verdict,
  • why-now,
  • first action,
  • fallback,
  • confidence,
  • callback question,
  • timestamp.

This makes the bot adaptive across sessions while remaining inspectable.


Edge Cases We Designed For

  • Duplicate inbound event handling.
  • Echoed message suppression.
  • Empty model output or unexpected output format.
  • Attachment-only messages without dilemma text.
  • Reminder parse failures with recoverable guidance.
  • Optional thread allowlist for safer production rollout.
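The optional thread allowlist in the last bullet can be as simple as an env-driven set check. A sketch; `ALLOWED_CHATS` is an assumed variable name, not the project's actual config:

```typescript
function isAllowedThread(chatKey: string, allowlistEnv = process.env.ALLOWED_CHATS): boolean {
  if (!allowlistEnv) return true // no allowlist configured: accept every thread
  const allowed = new Set(
    allowlistEnv.split(',').map((s) => s.trim()).filter(Boolean),
  )
  return allowed.has(chatKey)
}
```

Defaulting to "open" when the variable is unset keeps local dev frictionless while letting production lock down to known threads.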

Local Dev + Validation

```bash
npm install
npm run lint
npm run type-check
npm run test
bun run dev
```

What We’d Ship Next

  • Retrieval over historical iMessage context via getMessages().
  • Group “jury mode” in shared chats.
  • Outcome tracking for confidence calibration.
  • Weekly report export via sendFiles().
  • Plugin-based analytics and observability.

This project shows that the strongest “agent UX” may not be another web app. It can be a high-leverage behavior loop in the messaging channel people already use every day.

GitHub repo: https://github.com/harishkotra/future-me-courtroom-agent

Source: dev.to
