Over the last few days, I spent some time exploring the new Prompt API that Google has added to Chrome (v138, as of 17 July 2025). Microsoft is expected to follow soon in Edge, where the feature is already present in Canary builds. What makes this interesting is that these APIs bring small AI models directly into your browser. No server calls, no subscription charges, and your data never leaves your computer. The browser itself becomes the runtime for AI.
In Google Chrome, the Prompt API uses Gemini Nano, Google’s compact AI model optimised for on‑device use. When Microsoft rolls this out in Edge, it will use Microsoft’s Phi‑4 model. Both are designed to run securely and efficiently without relying on external servers. Running these models locally means improved privacy, lower latency, and no per‑use cost. For developers, especially browser extension creators, it opens a new layer of capability inside the browser without setting up a backend or paying for API credits.
To get started, I enabled three experimental flags in Chrome:
chrome://flags/#prompt-api-for-gemini-nano
chrome://flags/#prompt-api-for-gemini-nano-multimodal-input
chrome://flags/#optimization-guide-on-device-model
After enabling these and restarting Chrome, the first time I called LanguageModel.create(), there was a short wait while the model file downloaded. After that, everything ran fine, locally on my PC with a basic NVIDIA GPU.
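For anyone wanting to try the same first step, here is a minimal sketch of that initial call, following the Web Machine Learning Community Group explainer. It checks availability, creates a session (which triggers the one‑time model download), and sends a prompt. It only runs in a browser with the flags above enabled; the function names are my own.

```javascript
// Minimal sketch of a first Prompt API session, per the explainer.
// Runs only in Chrome 138+ with the experimental flags enabled.
async function createSession() {
  // Returns "unavailable", "downloadable", "downloading" or "available".
  const availability = await LanguageModel.availability();
  if (availability === "unavailable") {
    throw new Error("Prompt API is not supported on this device");
  }

  // create() triggers the one-time model download on first use.
  const session = await LanguageModel.create({
    monitor(m) {
      m.addEventListener("downloadprogress", (e) => {
        // e.loaded is a fraction between 0 and 1.
        console.log(`Model download: ${Math.round(e.loaded * 100)}%`);
      });
    },
  });
  return session;
}

async function askSomething() {
  const session = await createSession();
  const reply = await session.prompt("Summarise the Prompt API in one sentence.");
  console.log(reply);
  session.destroy(); // free the on-device model resources
}
```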
With this setup, I built three quick demos:
1️⃣ AI Creative Writing Assistant: Feed it a theme, mood and length, and it generates poems, short stories or dialogues on the fly. ✍
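A sketch of how a demo like this can drive the model: a system prompt sets the persona, and the `temperature`/`topK` sampling options from the explainer shape how creative the output is. The `theme`, `mood` and `length` parameters and the `#output` element are illustrative names of mine, not part of the API.

```javascript
// Sketch of a creative-writing demo. "theme", "mood" and "length"
// are illustrative parameter names, not part of the API.
async function writeCreatively(theme, mood, length) {
  const session = await LanguageModel.create({
    initialPrompts: [
      { role: "system", content: "You are a creative writing assistant." },
    ],
    temperature: 1.2, // higher = more varied, creative output
    topK: 8,          // the spec requires topK alongside temperature
  });

  // promptStreaming() yields text chunks as they are generated,
  // so the UI can update while the model is still writing.
  const stream = session.promptStreaming(
    `Write a ${length} poem about "${theme}" in a ${mood} mood.`
  );
  for await (const chunk of stream) {
    document.querySelector("#output").textContent += chunk;
  }
  session.destroy();
}
```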
2️⃣ AI Translation Assistant: Translate text between a dozen languages, instantly and without sending anything to a cloud API.
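Translation needs no special API surface: a system prompt pinning the task is enough to make the model return only the translated text. A sketch, with my own function and parameter names:

```javascript
// Sketch of a translation demo: the system prompt pins the task so
// the model replies with only the translated text.
async function translate(text, targetLanguage) {
  const session = await LanguageModel.create({
    initialPrompts: [
      {
        role: "system",
        content:
          `You are a translator. Reply with only the ${targetLanguage} ` +
          `translation of the user's text, nothing else.`,
      },
    ],
  });
  const result = await session.prompt(text);
  session.destroy();
  return result;
}
```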
3️⃣ AI Vision Assistant: Upload an image, and the browser’s built‑in AI describes what it sees, again running locally.
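Image input is what the multimodal flag above unlocks. Per the explainer, the session must declare image input up front via `expectedInputs`, and the prompt then carries a content array mixing text and image parts. A sketch, assuming `file` is a `File` from an `<input type="file">`:

```javascript
// Sketch of a vision demo. Multimodal input must be declared via
// expectedInputs when the session is created.
async function describeImage(file) {
  const session = await LanguageModel.create({
    expectedInputs: [{ type: "image" }],
  });
  const description = await session.prompt([
    {
      role: "user",
      content: [
        { type: "text", value: "Describe this image in detail." },
        { type: "image", value: file }, // a Blob/File, e.g. from <input type="file">
      ],
    },
  ]);
  session.destroy();
  return description;
}
```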
The HTML, CSS and JavaScript for my demos were generated by Claude AI with my prompts, following the documentation from the Web Machine Learning Community Group.
The project source code is in this GitHub repository: venkatarangan/PromptAPI
Prompt API feels like the next step in how we use AI on the web. Of course, what a small local model can do is limited for now, and we still need ChatGPT, Claude and others for heavy‑duty tasks. If you try it out, I would love to hear your experience.