Install oo-cli
Set up the CLI entry point.
Write functions in Studio, validate them locally, then publish them to OOMOL Cloud. After that, sign in with the OOMOL CLI so Codex or Claude Code can call those cloud functions directly, while the same capability remains available as an API, MCP tool, or automation task.


Install the CLI, then get the first search and run working.
Set up the CLI entry point.
Run oo login.
Search, then run.
Install once, then use it directly.
$ bun install -g @oomol-lab/oo-cli
$ oo login

Show the first successful path directly.

$ oo search "generate a QR code"
$ oo package info foo/bar@latest
$ oo cloud-task run foo/[email protected] --block-id main --data '{"text":"OOMOL"}'
$ oo cloud-task result <task-id>

A real coding environment where AI helps generate functions, validate them locally, and deliver them as APIs, MCP tools, or automation tasks.
What we want is not another tool that asks developers to adapt to the platform, but a working environment where you can actually write functions, orchestrate nodes, debug dependencies, and validate results.
"Why do I have to learn a proprietary JSON syntax just to write an if/else statement?"
"Why can't I just import a library? Why do I have to wait for the platform to support it?"
"Why am I coding in a textarea with no autocomplete?"
For developers, the real problem is rarely whether something can be dragged visually. The moment delivery becomes real, you still end up dealing with code, dependencies, debugging, and environment control.
So Studio has a clear role: not to replace engineering workflow, but to bring function generation, local validation, and delivery back inside it.




Studio does not introduce a new definition language. It organizes standard code into runnable capabilities.
In OOMOL, a node is still backed by a function. Inputs are arguments, outputs are return values.
You are not configuring a black-box platform. You are writing code that stays maintainable and ready for delivery.
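To make the "a node is just a function" idea concrete, here is a minimal sketch. The signature and field names are illustrative assumptions, not OOMOL's actual node API; the point is simply that inputs arrive as ordinary arguments and outputs are ordinary return values, with no proprietary definition language in between.

```python
# Illustrative sketch only: the exact node signature in OOMOL Studio may differ.
# Inputs map to function arguments; outputs map to the returned dictionary.

def main(params: dict) -> dict:
    """A node that turns input text into a QR-style payload string."""
    text = params["text"]          # node input
    payload = f"QR:{text}"         # plain Python logic, no platform DSL
    return {"payload": payload}    # node output
```

Because it is just a function, it can be unit-tested, debugged, and versioned with the same tools as the rest of your codebase.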


Visual tooling should not come at the cost of a worse developer experience.
That is why each function unit keeps the editing, completion, typing, and debugging capabilities developers already rely on.
AI, code, and the toolchain need to work together instead of forcing you to switch contexts.


The hard part of function delivery is often not writing the code, but controlling dependencies, environment, and runtime behavior.
Studio uses standard containers to keep those concerns in one place.
Install what you need, run the result locally, then carry the same capability forward as an API, MCP tool, or automation task.
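One small, hedged illustration of keeping dependency concerns explicit (this is generic Python, not an OOMOL mechanism): a function can verify its own runtime assumptions up front, so environment drift fails fast and visibly instead of partway through a run.

```python
import importlib.util
import sys

# Placeholder dependency list; a real node would name its third-party imports here.
REQUIRED_MODULES = ["json", "sqlite3"]

def check_environment() -> list[str]:
    """Return a list of unmet runtime requirements (empty means all good)."""
    problems = []
    if sys.version_info < (3, 9):
        found = f"{sys.version_info.major}.{sys.version_info.minor}"
        problems.append(f"python>=3.9 required, found {found}")
    for name in REQUIRED_MODULES:
        if importlib.util.find_spec(name) is None:
            problems.append(f"missing module: {name}")
    return problems
```

Running such a check locally and in the container keeps "works on my machine" failures from surfacing only after delivery.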
Once you want a capability to be called reliably by AI, APIs, and automations, the work that grows is usually wrapping interfaces, aligning environments, shipping to production, and keeping the whole path stable.
The implementation already exists, but publishing it as an API or task still means rebuilding interfaces, deployment steps, and runtime wrappers.
Development, dependency, and deployment environments drift apart. The problem is often not the code itself, but whether it still holds once it leaves your machine.
Teams often wrap the same implementation separately for APIs, MCP tools, and automation tasks, multiplying delivery cost without adding capability.
If Studio is where you write the function, then OOMOL Cloud is the shared runtime and delivery layer. After sign-in, Codex or Claude Code can call the cloud functions deployed there through the CLI.
Turn a validated function into a shared online capability for CLI, APIs, and automation without rebuilding interfaces, runtimes, and scaling layers.
For developers and small teams that want to provide functions reliably to AI tools, applications, or automation tasks.
The same validated function can keep shipping as an API, MCP tool, or automation task, acting as one stable capability layer for both AI tools and applications.
Expose the function directly as a callable API without building a separate service framework or runtime layer.
Let the same implementation enter the call chain of agents and AI apps directly instead of maintaining a second tool service.
Turn the same implementation into scheduled jobs or automation flows so online runs and later iteration keep reusing one capability.
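The "one capability, many delivery channels" point can be sketched in plain Python. The channel adapters below are hypothetical shapes for illustration, not OOMOL APIs: a single validated implementation, with each delivery mode reduced to a thin wrapper instead of a separate rebuild.

```python
import json

def generate_report(params: dict) -> dict:
    """The single validated implementation."""
    return {"report": f"summary of {params['source']}"}

# Thin adapters per delivery channel -- hypothetical, for illustration only.

def api_entry(request_body: str) -> str:
    """API channel: JSON string in, JSON string out."""
    return json.dumps(generate_report(json.loads(request_body)))

def scheduled_entry(source: str) -> dict:
    """Automation channel: invoked by a scheduler with a fixed argument."""
    return generate_report({"source": source})
```

The cost that Cloud absorbs is everything around these adapters: hosting, scaling, and keeping each entry point stable over time.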
Cloud takes over the online runtime and delivery layer so you do not need to rebuild another interface stack around the same function.
Stay focused on the function itself instead of spending more time on environments, scheduling, scaling, and production maintenance.
pdf.oomol.com shows that OOMOL is not only about making a function work locally. The same capabilities are published to Cloud first, then carried into web tools, desktop apps, CLI usage, and online services that already serve real users.
This is not an internal demo. It is a live product that actively delivers PDF and publishing tools to end users.
The same project already ships PDF conversion, EPUB translation, and manga translation with colorization, not just a single isolated page.
These capabilities are not rebuilt as separate backends. They are delivered as real services through the same Studio, Cloud, and CLI path; after sign-in, the CLI can call the same cloud functions.

This is a live web tool, not a showcase page. Users can upload PDFs and get EPUB or Markdown output through the same OOMOL delivery system.
View Web Tool
The same project does not stop at browser conversion. It continues into a desktop app for managing a library and carrying those outputs into real usage.
View Desktop App
Delivery does not end at file output. The result keeps moving into a real reading interface where it can be opened, browsed, and managed.
View Reading Experience

A function should not stay trapped in one project. After publishing, it can come back into local work, plug into AI workflows, or become the base capability behind future services.
Bring published functions back into your own environment so you do not have to start from scratch the next time.
Each release leaves behind reusable capabilities that gradually become your own function library.
Keep extending published functions and recombining them into new products and services.




Leave the pricing discussion for later. First, remove the blockers around model access and lightweight task validation so the Studio-to-CLI-to-production loop is easier to complete.
OOMOL Studio lets you configure your own AI model. If you already have model quota elsewhere, you do not need another paid model service just to get local development running. We currently recommend GLM-5.
If you already have model quota, the setup guide is usually enough to get Studio running and finish local validation.
We give developers 200 minutes of Cloud Task usage every month. For scheduled jobs, lightweight automation, and validation flows, that is often enough to get real online tasks running.
For a lightweight app or workflow, the free quota is often enough to get tasks running before you pay anything.
