MIT License

Client Extension of AppInventor2 for StableDiffusion


Abstract

This publication contains a custom extension for App Inventor 2, enabling the creation of mobile applications that can interact with a locally modified Stable Diffusion server. The extension was developed to integrate a powerful generative AI model (Stable Diffusion) with a no-code app development platform. By leveraging this extension, users can harness the capabilities of Stable Diffusion to generate high-quality images directly from their mobile applications.

Key Features

  • Easy Integration: Simplifies the connection between App Inventor 2 and a local Stable Diffusion server.
  • Customizable Image Generation: Allows users to input text prompts to generate images using the Stable Diffusion model.
  • No-Code Interface: Utilizes App Inventor 2's visual programming interface, making it accessible for non-programmers.
  • Optimized for Mobile Devices: The extension is designed to work seamlessly on Android devices with the App Inventor framework.
  • Model generality: The extension works with any model, provided the model is adapted to fit the server.

Project Motivation

This project was born out of an interest in both mobile app development and the applications of cutting-edge AI in computer vision, particularly the use of Stable Diffusion for generative image models. The goal was to create a straightforward way for non-developers to integrate Stable Diffusion into mobile applications, providing an intuitive interface for generating images from textual descriptions (text-to-image) directly from their phones.

This project provides a no-code interface for mobile app developers and creators to leverage the power of Stable Diffusion, a leading AI model for generative art. By integrating the model into App Inventor 2, the extension empowers users to create highly personalized, AI-generated artwork directly from their mobile devices, without needing any programming skills.

How it Works

Stable Diffusion

The model's source code is modified to expose intermediate latent images: the latents are decoded at selected steps so that the progress of the diffusion process can be visualized. In addition, small changes are made to provide Stable Diffusion APIs for the server.
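
As a rough sketch of how the latent-exposing modification can work, a CompVis-style sampler offers an img_callback hook that receives the predicted latents at each denoising step; the hook signature, decode_first_stage, and the file layout below are assumptions for illustration rather than this project's exact patch.

import torch
import numpy as np
from PIL import Image

def save_intermediate(model, latents, step, out_dir="steps"):
    # Decode the current latents with the VAE and write the partially
    # denoised image to disk so the server can return it to the client.
    with torch.no_grad():
        decoded = model.decode_first_stage(latents)
    decoded = torch.clamp((decoded + 1.0) / 2.0, min=0.0, max=1.0)
    img = (decoded[0].cpu().permute(1, 2, 0).numpy() * 255).astype(np.uint8)
    Image.fromarray(img).save(f"{out_dir}/step_{step:04d}.png")

# Hooked into the sampler (assumed signature):
# sampler.sample(..., img_callback=lambda pred_x0, i: save_intermediate(model, pred_x0, i))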

Server

The server responds to requests from the client and generates the corresponding instructions for the Stable Diffusion model. For example, the following function takes arguments from the Flask server and creates a generation task.

def run_model(args: dict = None):
    default_args = {
        "--ckpt": "checkpoints/v2-1_768-ema-pruned.ckpt",
        "--config": "configs/stable-diffusion/v2-inference-v.yaml",
        "--prompt": "the Earth in the solar system",
        "--n_iter": "1",
        "--n_samples": "1",
        "--H": "512",
        "--W": "512",
        "--steps": "20",
        "--log_every_t": "3",
        "--seed": "42",
    }
    # Merge caller-supplied arguments over the defaults.
    if args is not None:
        for key, value in args.items():
            default_args[key] = value
    args = default_args
    print("Your parameters here:", args)
    # Register the task in the global pool and start it.
    task = GenerationTask(args)
    global generationTaskPool
    generationTaskPool[task.taskID] = task
    return task.submit()
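
To make the flow concrete, the Flask endpoint that forwards client parameters into run_model could look roughly like the sketch below; the route name, JSON fields, and response shape are illustrative assumptions, not the project's exact API.

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/generate", methods=["POST"])
def generate():
    # Map JSON fields sent by the App Inventor client onto the CLI-style
    # arguments expected by run_model (field names here are assumptions).
    body = request.get_json(force=True) or {}
    args = {
        "--prompt": body.get("prompt", "the Earth in the solar system"),
        "--steps": str(body.get("steps", 20)),
        "--H": str(body.get("height", 512)),
        "--W": str(body.get("width", 512)),
        "--seed": str(body.get("seed", 42)),
    }
    task_id = run_model(args)  # assumes run_model returns the submitted task's ID
    return jsonify({"taskID": task_id})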

Client Extension

Here is the Java code for the App Inventor extension, which compiles to an .aix file. The client is responsible for the connection and communication between the App Inventor app and the server. The following function, for example, is executed after a denoised intermediate image is generated; the client then fetches the image so the extension can display it.

@SimpleEvent(description = "Event triggered when a denoised image is generated")
public void GenerationStepped(String ImageUrl, int step, int sample) {
    // Fetch the latest intermediate image from the server, then raise the event in the app.
    GetServerResponse("get_step/" + threadIdentifier);
    EventDispatcher.dispatchEvent(this, "GenerationStepped", ImageUrl, step, sample);
}
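
On the server side, the get_step/<id> request that this event relies on could be answered by a small handler along the following lines; the route, the task-pool lookup, and the latest_step_image attribute are assumptions for illustration, continuing the Flask sketch above.

import os
from flask import send_file, jsonify, abort

@app.route("/get_step/<task_id>", methods=["GET"])
def get_step(task_id):
    # Look up the task created by run_model and return its most recent
    # intermediate image, if one has been decoded so far.
    task = generationTaskPool.get(task_id)
    if task is None:
        abort(404)
    latest = getattr(task, "latest_step_image", None)  # hypothetical attribute
    if latest is None or not os.path.exists(latest):
        return jsonify({"status": "pending"})
    return send_file(latest, mimetype="image/png")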

Here is an example of how the block is shown in App Inventor:

[Figure 03_simpleeventblock.png: the GenerationStepped SimpleEvent block as displayed in App Inventor 2]

Conclusion

This project presents a custom extension for App Inventor 2, designed to integrate the powerful Stable Diffusion generative AI model with mobile app development. The extension simplifies the process of creating mobile applications that can generate high-quality images from text prompts, all through a no-code interface. By connecting to a locally modified Stable Diffusion server, users can easily interact with the model and create personalized AI-generated artwork directly from their mobile devices, without requiring any programming skills.

The project aims to make generative AI accessible to a broader audience, providing a user-friendly tool for developers, artists, and creators to explore the capabilities of Stable Diffusion. Whether you're building a creative mobile app or just experimenting with generative art, this extension opens up new possibilities for leveraging AI in a no-code environment.
