Build hybrid experiences with on-device and cloud-hosted models  |  Firebase AI Logic

Build AI-powered apps and features with hybrid inference using Firebase AI Logic. Hybrid inference enables running inference using on-device models when available and seamlessly falling back to cloud-hosted models otherwise (and vice versa).

With this release, hybrid inference is available using the Firebase AI Logic client SDK for Web, with support for on-device inference for Chrome on desktop.

Jump to the code examples

Recommended use cases:

Supported capabilities and features for on-device inference:

Get started

This guide shows you how to get started using the Firebase AI Logic SDK for Web to perform hybrid inference.

Inference using an on-device model uses the Prompt API from Chrome, whereas inference using a cloud-hosted model uses your chosen Gemini API provider (either the Gemini Developer API or the Vertex AI Gemini API).

Get started developing using localhost, as described in this section (you can also learn more about using APIs on localhost in the Chrome documentation). Then, once you've implemented your feature, you can optionally enable end-users to try out your feature.

Step 1: Set up Chrome and the Prompt API for on-device inference

  1. Make sure you're using a recent version of Chrome. Update in chrome://settings/help.
    On-device inference is available from Chrome v139 and higher.
  2. Enable the on-device multimodal model by setting the following flag to Enabled:
    • chrome://flags/#prompt-api-for-gemini-nano-multimodal-input
  3. Restart Chrome.
  4. (Optional) Download the on-device model before the first request.
    The Prompt API is built into Chrome; however, the on-device model isn't available by default. If the model hasn't been downloaded before your first on-device inference request, that request automatically starts the model download in the background.
    View instructions to download the on-device model
    1. Open Developer Tools > Console.
    2. Run the following:
    await LanguageModel.availability();  
    3. Make sure that the output is available, downloading, or downloadable.
    4. If the output is downloadable, start the model download by running:
    await LanguageModel.create();  
    5. You can use the following monitor callback to listen for download progress and make sure that the model is available before making requests:
    const session = await LanguageModel.create({  
      monitor(m) {  
        m.addEventListener("downloadprogress", (e) => {  
          console.log(`Downloaded ${e.loaded * 100}%`);  
        });  
      },  
    });  

Step 2: Set up a Firebase project and connect your app to Firebase

  1. Sign into the Firebase console, and then select your Firebase project.
    Don't already have a Firebase project?
    If you don't already have a Firebase project, click the button to create a new Firebase project, and then use either of the following options:
    • Option 1: Create a wholly new Firebase project (and its underlying Google Cloud project automatically) by entering a new project name in the first step of the workflow.
    • Option 2: "Add Firebase" to an existing Google Cloud project by clicking Add Firebase to Google Cloud project (at the bottom of the page). In the first step of the workflow, start entering the project name of the existing project, and then select the project from the displayed list.
      Complete the remaining steps of the on-screen workflow to create a Firebase project. Note that when prompted, you do not need to set up Google Analytics to use the Firebase AI Logic SDKs.
  2. In the Firebase console, go to the Firebase AI Logic page.
  3. Click Get started to launch a guided workflow that helps you set up the required APIs and resources for your project.
  4. Select the "Gemini API" provider that you'd like to use with the Firebase AI Logic SDKs. Gemini Developer API is recommended for first-time users. You can always add billing or set up Vertex AI Gemini API later, if you'd like.
    • Gemini Developer API: billing optional (available on the no-cost Spark pricing plan, and you can upgrade later if desired)
      The console will enable the required APIs and create a Gemini API key in your project.
      Do not add this Gemini API key into your app's codebase. Learn more.
    • Vertex AI Gemini API: billing required (requires the pay-as-you-go Blaze pricing plan)
      The console will help you set up billing and enable the required APIs in your project.
  5. If prompted in the console's workflow, follow the on-screen instructions to register your app and connect it to Firebase.
  6. Continue to the next step in this guide to add the SDK to your app.

Step 3: Add the SDK

The Firebase library provides access to the APIs for interacting with generative models. The library is included as part of the Firebase JavaScript SDK for Web.

  1. Install the Firebase JS SDK for Web using npm:
npm install firebase  
  2. Initialize Firebase in your app:
import { initializeApp } from "firebase/app";  
// TODO(developer) Replace the following with your app's Firebase configuration  
// See: https://firebase.google.com/docs/web/learn-more#config-object  
const firebaseConfig = {  
  // ...  
};  
// Initialize FirebaseApp  
const firebaseApp = initializeApp(firebaseConfig);  

Step 4: Initialize the service and create a model instance

The code examples on this page use the Gemini Developer API as the provider; the provider-specific setup differs slightly if you use the Vertex AI Gemini API.

Before sending a prompt to a Gemini model, initialize the service for your chosen API provider and create a GenerativeModel instance.

Set the mode to one of:

  • PREFER_ON_DEVICE: Use the on-device model when it's available; otherwise, fall back to the cloud-hosted model.
  • ONLY_ON_DEVICE: Use only the on-device model; never fall back to the cloud-hosted model.
  • PREFER_IN_CLOUD: Use the cloud-hosted model when it's available; otherwise, fall back to the on-device model.
  • ONLY_IN_CLOUD: Use only the cloud-hosted model; never use the on-device model.

When you use PREFER_ON_DEVICE, PREFER_IN_CLOUD, or ONLY_IN_CLOUD, the default cloud-hosted model is gemini-2.0-flash-lite, but you can override the default.

import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend, InferenceMode } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance
// Set the mode, for example to use on-device model when possible
const model = getGenerativeModel(ai, { mode: InferenceMode.PREFER_ON_DEVICE });

Send a prompt request to a model

This section provides examples for how to send various types of input to generate different types of output, including:

  • Generate text from text-only input
  • Generate text from text-and-image (multimodal) input

If you want to generate structured output (like JSON or enums), then use one of the following "generate text" examples and additionally configure the model to respond according to a provided schema.

Generate text from text-only input

Before trying this sample, make sure that you've completed theGet started section of this guide.

You can use generateContent() to generate text from a prompt that contains text:

// Imports + initialization of FirebaseApp and backend service + creation of model instance

// Wrap in an async function so you can use await
async function run() {
  // Provide a prompt that contains text
  const prompt = "Write a story about a magic backpack.";

  // To generate text output, call `generateContent` with the text input
  const result = await model.generateContent(prompt);

  const response = result.response;
  const text = response.text();
  console.log(text);
}

run();

Note that Firebase AI Logic also supports streaming of text responses using generateContentStream (instead of generateContent).
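
For example, here's a minimal sketch of streaming, reusing the model instance from the Get started section (the prompt is illustrative):

// Wrap in an async function so you can use await
async function runStreaming() {
  // Provide a prompt that contains text
  const prompt = "Write a story about a magic backpack.";

  // Request a streamed response instead of waiting for the full result
  const result = await model.generateContentStream(prompt);

  // Log each chunk of text as it arrives
  for await (const chunk of result.stream) {
    console.log(chunk.text());
  }

  // The aggregated response is also available once streaming completes
  const response = await result.response;
  console.log(response.text());
}

runStreaming();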

Generate text from text-and-image (multimodal) input

Before trying this sample, make sure that you've completed theGet started section of this guide.

You can use generateContent() to generate text from a prompt that contains text and image files, providing each input file's mimeType and the file itself.

The supported input image types for on-device inference are PNG and JPEG.

// Imports + initialization of FirebaseApp and backend service + creation of model instance

// Converts a File object to a Part object.
async function fileToGenerativePart(file) {
  const base64EncodedDataPromise = new Promise((resolve) => {
    const reader = new FileReader();
    reader.onloadend = () => resolve(reader.result.split(',')[1]);
    reader.readAsDataURL(file);
  });
  return {
    inlineData: { data: await base64EncodedDataPromise, mimeType: file.type },
  };
}

async function run() {
  // Provide a text prompt to include with the image
  const prompt = "Write a poem about this picture:";

  const fileInputEl = document.querySelector("input[type=file]");
  const imagePart = await fileToGenerativePart(fileInputEl.files[0]);

  // To generate text output, call `generateContent` with the text and image
  const result = await model.generateContent([prompt, imagePart]);

  const response = result.response;
  const text = response.text();
  console.log(text);
}

run();

Note that Firebase AI Logic also supports streaming of text responses using generateContentStream (instead of generateContent).

What else can you do?

In addition to the examples above, you can also enable end-users to try out your feature, use alternative inference modes, override the default fallback model, and use model configuration to control responses.

Enable end-users to try out your feature

To enable end-users to try out your feature, you can enroll in the Chrome origin trials. Note that these trials have a limited duration and usage.

  1. Register for the Prompt API Chrome Origin Trial. You'll be given a token.
  2. Provide the token on every web page for which you want the trial feature to be enabled. Use one of the following options:
    • Provide the token as a meta tag in the <head> tag: <meta http-equiv="origin-trial" content="TOKEN">
    • Provide the token as an HTTP header: Origin-Trial: TOKEN
    • Provide the token programmatically (see the sketch after this list).
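
For example, here's a minimal sketch of injecting the token programmatically (the TOKEN placeholder stands for your actual origin trial token):

// Append an origin-trial meta tag to the page's <head>
const otMeta = document.createElement("meta");
otMeta.httpEquiv = "origin-trial";
otMeta.content = "TOKEN";
document.head.append(otMeta);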

Use alternative inference modes

The examples above used the PREFER_ON_DEVICE mode to configure the SDK to use an on-device model if it's available, or fall back to a cloud-hosted model. The SDK offers three alternative inference modes: ONLY_ON_DEVICE, ONLY_IN_CLOUD, and PREFER_IN_CLOUD.

const model = getGenerativeModel(ai, { mode: InferenceMode.ONLY_ON_DEVICE });  
const model = getGenerativeModel(ai, { mode: InferenceMode.ONLY_IN_CLOUD });  
const model = getGenerativeModel(ai, { mode: InferenceMode.PREFER_IN_CLOUD });  

Determine whether on-device or in-cloud inference was used

If you use the PREFER_ON_DEVICE or PREFER_IN_CLOUD inference modes, it can be helpful to know whether on-device or in-cloud inference was used for a given request. This information is provided by the inferenceSource property of each response (available starting with JS SDK v12.5.0).

When you access this property, the returned value will be eitherON_DEVICE or IN_CLOUD.

// ...

console.log('You used: ' + result.response.inferenceSource);

console.log(result.response.text());
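
For example, here's a minimal sketch of branching on the value (assuming the returned values are the plain strings described above):

// Adjust app behavior based on where inference ran
const source = result.response.inferenceSource;
if (source === "ON_DEVICE") {
  console.log("This response was generated by the on-device model.");
} else {
  console.log("This response was generated by the cloud-hosted model.");
}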

Override the default fallback model

The default cloud-hosted model is gemini-2.0-flash-lite.

This model is the fallback cloud-hosted model when you use thePREFER_ON_DEVICE mode. It's also the default model when you use theONLY_IN_CLOUD mode or the PREFER_IN_CLOUD mode.

You can use the inCloudParams configuration option to specify an alternative default cloud-hosted model.

const model = getGenerativeModel(ai, {
  mode: InferenceMode.INFERENCE_MODE,
  inCloudParams: {
    model: "GEMINI_MODEL_NAME"
  }
});

Find model names for all supported Gemini models.

Use model configuration to control responses

In each request to a model, you can send along a model configuration to control how the model generates a response. Cloud-hosted models and on-device models offer different configuration options.

The configuration is maintained for the lifetime of the instance. If you want to use a different config, create a new GenerativeModel instance with that config.
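
For example, here's a minimal sketch of creating two separate instances with different configurations (the mode and temperature values are illustrative):

// One instance tuned for more creative responses
const creativeModel = getGenerativeModel(ai, {
  mode: InferenceMode.PREFER_ON_DEVICE,
  inCloudParams: {
    temperature: 1.0
  }
});

// A separate instance tuned for more deterministic responses
const preciseModel = getGenerativeModel(ai, {
  mode: InferenceMode.PREFER_ON_DEVICE,
  inCloudParams: {
    temperature: 0.2
  }
});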

Set the configuration for a cloud-hosted model

Use the inCloudParams option to configure a cloud-hosted Gemini model. Learn about available parameters.

const model = getGenerativeModel(ai, {
  mode: InferenceMode.INFERENCE_MODE,
  inCloudParams: {
    model: "GEMINI_MODEL_NAME"
    temperature: 0.8,
    topK: 10
  }
});

Set the configuration for an on-device model

Note that inference using an on-device model uses the Prompt API from Chrome.

Use the onDeviceParams option to configure an on-device model. Learn about available parameters.

const model = getGenerativeModel(ai, {
  mode: InferenceMode.INFERENCE_MODE,
  onDeviceParams: {
    createOptions: {
      temperature: 0.8,
      topK: 8
    }
  }
});

Set the configuration for structured output (like JSON)

Generating structured output (like JSON and enums) is supported for inference using both cloud-hosted and on-device models.

For hybrid inference, use both inCloudParams and onDeviceParams to configure the model to respond with structured output. For the other modes, use only the applicable configuration.

JSON output

The following example adapts the general JSON output example for hybrid inference:

import {
  getAI,
  getGenerativeModel,
  InferenceMode,
  Schema
} from "firebase/ai";

const jsonSchema = Schema.object({
  properties: {
    characters: Schema.array({
      items: Schema.object({
        properties: {
          name: Schema.string(),
          accessory: Schema.string(),
          age: Schema.number(),
          species: Schema.string(),
        },
        optionalProperties: ["accessory"],
      }),
    }),
  }
});

const model = getGenerativeModel(ai, {
  mode: InferenceMode.INFERENCE_MODE,
  inCloudParams: {
    model: "gemini-2.5-flash"
    generationConfig: {
      responseMimeType: "application/json",
      responseSchema: jsonSchema
    },
  },
  onDeviceParams: {
    promptOptions: {
      responseConstraint: jsonSchema
    }
  }
});

Enum output

As above, but adapting the documentation on enum output for hybrid inference:

// ...

const enumSchema = Schema.enumString({
  enum: ["drama", "comedy", "documentary"],
});

const model = getGenerativeModel(ai, {

// ...

    generationConfig: {
      responseMimeType: "text/x.enum",
      responseSchema: enumSchema
    },

// ...
});

// ...

Features not yet available for on-device inference

Because this is an experimental release, not all capabilities of the Web SDK are available for on-device inference. The following features are not yet supported for on-device inference (but they are usually available for cloud-hosted inference).

Give feedback about your experience with Firebase AI Logic