🚀 The "Custom GPT" is Dead: OpenAI Just Dropped Workspace Agents (And They Run in the Background)


If you’ve spent any time tinkering with AI over the last year, you’ve probably built a Custom GPT. You give it a system prompt, maybe upload a PDF or two, and use it as a highly specific, personalized chatbot.

But there was always one fatal flaw with this workflow: Custom GPTs are entirely reactive. They only work when you are actively sitting at your keyboard, typing prompts, and waiting for a response.

That era officially ended today.

OpenAI just announced Workspace Agents in ChatGPT. Powered by their underlying Codex engine, these are not chatbots. They are autonomous, cloud-hosted agents that run in the background, execute multi-step workflows, and operate across your team's tools even after you close your laptop.

Here is why this completely changes how we build enterprise automation, and what you need to know to start using it today. 👇

🤯 From Chatbots to Background Daemons

The biggest shift with Workspace Agents is the decoupling of the AI from the traditional chat interface.

Because these agents run in the cloud, they have continuous memory and persistent execution. You don't have to manually prompt them to start working. You can configure an agent to run on a set schedule (e.g., "Pull Jira metrics every Friday at 4 PM and draft a report"), or deploy them directly into communication tools like Slack.

For instance, an agent deployed in a Slack workspace can proactively monitor incoming messages, route product feedback, answer documentation questions, and autonomously file IT tickets while your engineering team focuses on deep work.
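To appreciate what "run on a set schedule" saves you, here is a minimal sketch of what you would otherwise hand-roll and host yourself just to get "every Friday at 4 PM" behavior. Everything here is illustrative: `reportJiraMetrics` is a placeholder, not a real API.

```typescript
// Compute the next occurrence of Friday 16:00 local time.
function nextFridayAt4pm(from: Date): Date {
  const next = new Date(from);
  next.setHours(16, 0, 0, 0);
  // getDay(): 0 = Sunday ... 5 = Friday
  let daysAhead = (5 - next.getDay() + 7) % 7;
  if (daysAhead === 0 && next <= from) daysAhead = 7; // already past 4 PM this Friday
  next.setDate(next.getDate() + daysAhead);
  return next;
}

// Placeholder for the actual work: pull metrics, draft a report.
async function reportJiraMetrics(): Promise<void> {
  /* fetch Jira metrics, draft the report, post it somewhere */
}

// Naive one-job scheduler loop you no longer have to keep alive yourself.
function schedule(job: () => Promise<void>): void {
  const delay = nextFridayAt4pm(new Date()).getTime() - Date.now();
  setTimeout(async () => {
    await job();
    schedule(job); // re-arm for next week
  }, delay);
}
```

And this sketch still ignores the hard parts a hosted agent absorbs for free: the process has to stay running, survive restarts, and remember what it already reported.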

💻 The Code: Automating the Automators

To understand how massive this is for developers, think about how we traditionally build workflow automation.

When I was architecting the secure-pr-reviewer GitHub App, the infrastructure overhead required just to get an AI to act autonomously was significant. To automatically review code, you have to spin up a Node.js server, use a framework like Probot to listen for webhooks, manually orchestrate the API calls to the LLM, and handle the asynchronous callbacks.

The Traditional Automation Stack (TypeScript):

```typescript
import { Probot } from "probot";
import { runSecurityAudit } from "./ai-service";

export default (app: Probot) => {
  // 1. Listen for specific platform events
  app.on("pull_request.opened", async (context) => {
    // 2. Extract the context manually. Note: pulls.get returns PR
    //    metadata by default (where `body` is just the description),
    //    so we request the "diff" media type to get the actual diff.
    const { data: diff } = await context.octokit.pulls.get({
      ...context.repo(),
      pull_number: context.payload.pull_request.number,
      mediaType: { format: "diff" },
    });

    // 3. Orchestrate the LLM call and wait for completion
    const securityReport = await runSecurityAudit(diff as unknown as string);

    // 4. Push the formatted result back to the platform
    await context.octokit.issues.createComment(
      context.issue({
        body: `🛡️ Security Audit Complete:\n${securityReport}`,
      })
    );
  });
};
```

With Workspace Agents, this entire middleware layer evaporates.

Instead of writing and hosting webhook listeners, you create a shared agent, grant it access to your integrations, and define the workflow in plain English: "Monitor new PRs in this repository. When opened, read the diff, check against our security guidelines, and post a comment with your findings." The Codex-powered agent handles the event listening, the context window management, and the API execution natively in the cloud.
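OpenAI has not published a public schema for these agents, so the sketch below is purely hypothetical: every interface and field name is my assumption, meant only to illustrate how the webhook server collapses into a declarative definition plus plain-English instructions.

```typescript
// Hypothetical agent definition -- none of these field names come from
// a published OpenAI schema; they only illustrate the shape of the idea.
interface WorkspaceAgentDefinition {
  name: string;
  integrations: string[];       // tools the agent is granted access to
  trigger: string;              // event or schedule, in plain English
  instructions: string;         // the workflow itself, in plain English
  requireApprovalFor: string[]; // actions that pause for a human
}

const prReviewer: WorkspaceAgentDefinition = {
  name: "security-pr-reviewer",
  integrations: ["github"],
  trigger: "when a pull request is opened in this repository",
  instructions:
    "Read the diff, check it against our security guidelines, " +
    "and post a comment with your findings.",
  requireApprovalFor: ["merging the pull request", "closing the pull request"],
};
```

The contrast with the Probot version is the point: event listening, context management, and API execution become the platform's problem, and all that is left of your "code" is intent.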

🛑 The "Human-in-the-Loop" Safeguards

Of course, giving an autonomous agent unmitigated access to your CRM, codebase, or email inbox is terrifying for any enterprise.

OpenAI clearly anticipated this security anxiety. Workspace Agents come with strict, granular governance. For sensitive actions—like executing a database script, sending an outbound email to a client, or modifying a financial spreadsheet—the agent will automatically pause its execution and ping you for permission.
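The underlying pattern is a classic approval gate. This is a generic sketch of that pattern, not OpenAI's implementation: the action kinds and helper signatures are all assumptions.

```typescript
// Generic human-in-the-loop gate: sensitive actions pause for approval.
type Action = { kind: string; description: string };

// Hypothetical list of action kinds considered irreversible or risky.
const SENSITIVE = new Set(["send_email", "run_db_script", "edit_spreadsheet"]);

async function executeWithApproval(
  action: Action,
  askHuman: (a: Action) => Promise<boolean>, // "Does this look right?"
  run: (a: Action) => Promise<string>,       // actually perform the action
): Promise<string> {
  if (SENSITIVE.has(action.kind)) {
    // Pause execution and ping a human before doing anything irreversible.
    const approved = await askHuman(action);
    if (!approved) return `skipped: ${action.description}`;
  }
  return run(action);
}
```

Low-risk actions flow straight through; anything on the sensitive list blocks until a person signs off.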

It does 99% of the heavy lifting, formats the data, and then essentially asks: "Does this look right before I hit send?"

💸 Availability & The Road Ahead

Right now, Workspace Agents are rolling out in research preview for ChatGPT Business, Enterprise, Edu, and Teachers plans.

Here is the kicker: They are completely free to use until May 6, 2026. After that date, OpenAI is shifting them to a credit-based pricing model, a logical move given that running persistent background daemons requires significantly more compute than standard, isolated chat completions.

We are rapidly moving away from "AI as an autocomplete tool" and entirely into the era of "AI as an asynchronous teammate."

I will be doing a complete, hands-on teardown of how to build and deploy these specific agents over on the AI Tooling Academy channel soon, so stay tuned.

Are you ready to let a cloud-hosted agent manage your Slack channel and codebase, or are the security risks still too high? Let me know your thoughts in the comments below! 👇

If you found this breakdown helpful, smash the ❤️ button and bookmark this post so you remember the May 6th pricing deadline!

Source: dev.to
