Open source or closed source, it's your call.

Let agents use your skill. You stay in control of the code.

A skill generated with OOMOL runs its underlying tool functions reliably in the cloud. Once users install oo-cli, they can use the skill directly in Codex and Claude Code.

Apple Silicon Mac only
Start with oo-cli
Why go cloud

Your code shouldn't leak when you deliver it

Local execution = code exposure

Today, when Codex or Claude Code calls your skill, it pulls the code to the user's machine and runs it there. Your algorithms, business logic, and call chains are fully exposed to the consumer.

A cloud skill runs in the cloud. Users call the interface; your code stays with you.

You just want to deliver a skill, not build an entire backend

Servers, auto-scaling, billing, subscription management, secrets, usage analytics… Writing the core logic is 20% of the work. The other 80% is delivery infrastructure.

Cloud handles runtime, subscriptions, configuration, and data. You just write the logic.

Build once, deliver continuously

From skill creation to cloud runtime — one complete path.

Create in Studio
Run on Cloud
Use via oo-cli
01 / oo-cli

Install oo-cli in Codex and start using skills

oo-cli is the best entry point for using skills inside Codex and Claude Code. Open the install guide first, then watch the video on the right to see the real flow.

Codex demo video

Shows installing, searching, inspecting, and running a skill in Codex.

02 / OOMOL Studio

When you need your own skill, use OOMOL Studio

Tell the agent what skill you want, let Studio generate the first version, then validate it locally. You do not need to learn a platform DSL before you begin.

Studio + Agent Vibe demo video

Shows the path from prompting to local validation.

03 / Cloud

After release, Cloud runs and delivers the skill

After a skill is released, Cloud handles runtime, subscriptions, configuration, and usage data. You do not need a second product layer around the same implementation.

Deliver it to yourself, your team, or customers

Keep the delivery loop running through subscriptions.

Configure and observe it in one backend

Keep secrets, access, releases, and usage in one place.

Cloud console preview

Bring subscriptions, settings, and operational data into one backend.

Subscriptions
Runtime settings
Usage data
04 / OOMOL AI

If you do not want a CLI, use OOMOL AI

Think of it as the GUI version of oo-cli. It uses the same skills through a more direct interface. CLI is better for workflows; GUI is better for straightforward use.

OOMOL AI demo video

Shows both the chat surface and the structured surface.

Free for Developers

Start at zero cost, get the full loop running

No upfront payment. Bring your own model plus 200 free Cloud Task minutes each month — enough to complete your first Studio-to-production delivery cycle.

Studio Free
Recommended
GLM-5
Use your own model

Bring Your Own Model and Get Studio Running

If you already have model quota, connect it directly instead of paying for another service. We recommend starting with GLM-5.

Follow the guide, and you can usually complete local validation first.

Cloud Free
Monthly Included
200 Minutes
Free Cloud Task time

200 Free Minutes Each Month for Lightweight Jobs

The free Cloud Task quota is often enough for scheduled jobs, lightweight automation, and delivery validation.

Use the free quota to validate the flow before you spend more.

Use a skill first, then decide whether to build your own

Start by getting the usage path working in oo-cli. When you need your own skill, install Studio to generate, validate, and publish it.

Start with oo-cli