Semantic Assets - Documentation

What is the Plugin?

Semantic Assets is an editor enhancement tool designed to revolutionize how you find and manage art assets in large projects. It allows you to search using natural language (like "wooden office desk") instead of relying on cryptic, hard-to-remember filenames (like SM_Furniture_Office_Table_01.fbx).

By integrating AI directly into the Unity Editor, you can spend more time creating and less time hunting through endless folder hierarchies.

How Does It Work?

The plugin consists of three core components working together:

semantic-assets-unity

Deeply integrated into the Unity Editor, this component scans project assets, processes user input, and displays search results in familiar editor windows.

semantic-assets-worker

A lightweight, high-performance serverless application that stores asset "semantic fingerprints" and performs rapid similarity matching.

AI Model Services

Uses AI models (such as OpenAI's embedding models) to transform your asset content and search queries into mathematical representations that computers can compare.
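To make "mathematical representations" concrete: each asset description and each search query becomes a vector of numbers, and "similar meaning" becomes "small angle between vectors". The sketch below is illustrative only (real embeddings have 1536 dimensions, not 3, and the example vectors are made up), but it uses the same cosine similarity metric the backend index is created with:

```typescript
// Cosine similarity: 1.0 means identical direction, near 0 means unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy 3-dimensional "embeddings"; real ones come from the embedding API.
const desk = [0.9, 0.1, 0.2];   // "wooden office desk"
const table = [0.8, 0.2, 0.3];  // "oak table"
const rock = [0.1, 0.9, 0.1];   // "mossy boulder"

// The desk is closer in meaning to the table than to the rock.
console.log(cosineSimilarity(desk, table) > cosineSimilarity(desk, rock)); // true
```

This is why a search for "wooden office desk" can find SM_Furniture_Office_Table_01.fbx: the match happens in vector space, not on the filename.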

Getting Started

Goal

Get set up and run your first semantic search within 15 minutes.

Prerequisites

  • Unity Editor: 2021.3 LTS or newer
  • Cloudflare account: free tier is sufficient
  • OpenAI API key: or an OpenAI-compatible service
  • Node.js: v18 or newer
Step 1: Deploy Backend Service

This is the core of the entire system. You need to deploy our provided backend service to your own Cloudflare account.

See the Backend Setup Guide below for detailed steps.
Step 2: Install Unity Plugin

1. Purchase and download the plugin from the Unity Asset Store.
2. In the Unity Editor, open Window > Package Manager.
3. Select My Assets from the dropdown menu in the top-left corner.
4. Find the Semantic Assets plugin, then click Download and Import.
Step 3: Configure Plugin

After deploying the backend, you will get a Worker URL and an authorization token that you set yourself.

In Unity Editor, open Edit > Project Settings, and find Semantic Assets in the left menu.

[Screenshot: configuration interface (docs-getting-started-core-config.png)]

Fill in the following core fields accurately:

  • Worker Base URL: Your Cloudflare Worker URL
  • Worker Auth Token: The authorization token you set when deploying the backend
  • OpenAI API Key: Your OpenAI API key
Step 4: Initial Indexing

At the bottom of the settings page, click the Initial Index (Sync Now) button.

The plugin will start scanning your project assets, calculating their semantic fingerprints, and uploading them to your backend service. Initial indexing time depends on project size; please be patient.

[Screenshot: index management interface (docs-usage-guide-index-management.png)]

Step 5: Start Searching!

Open the search window via Window > Semantic Assets.

Type your desired asset description in the search box, for example "wooden office desk", then press Enter.

Witness the magic happen! Matching assets will immediately appear as thumbnails below.

[Screenshot: first successful search (docs-getting-started-first-search.png)]

Backend Setup Guide

Important Notice

Please follow the steps carefully. Even users unfamiliar with backend technology can complete this successfully. If you encounter problems, please check the FAQ section at the end of the documentation.

1. Prepare Environment

Ensure Node.js (v18+) is installed on your computer. Open your terminal or command line tool.

# Check Node.js version
node --version

2. Install Wrangler CLI

Wrangler is Cloudflare's official command line tool. Run the following command for global installation:

npm install -g wrangler

3. Login to Cloudflare

Connect Wrangler to your Cloudflare account:

wrangler login

This opens a browser window where you authorize the login.

4. Locate Backend Code

In your imported Unity plugin package, find the Backend or Server folder, which contains the complete source code for the backend service.

In your terminal, navigate to that folder:

cd path/to/backend/folder

5. Install Dependencies

npm install

6. Create Cloud Resources

We need to create a D1 database for storing metadata and a Vectorize index for storing vectors.

# Create D1 database
npx wrangler d1 create unity-vectors

# Create Vectorize index
npx wrangler vectorize create unity-vectors --dimensions=1536 --metric=cosine

Please note: after running these two commands, the terminal will output configuration blocks that must be added to the wrangler.toml file. Copy them to a text editor for later use. Also note that --dimensions=1536 matches OpenAI's text-embedding-3-small; if you plan to index with a model that outputs a different vector size (for example, text-embedding-3-large outputs 3072 dimensions), create the index with that dimension instead.

7. Configure wrangler.toml File

This is the deployment configuration file. Open the wrangler.toml file in the project root directory.

Paste the complete [[d1_databases]] and [[vectorize]] configuration blocks generated in the previous step at the end of the file.

Set your authorization token. In the [vars] section, add an AUTH_TOKEN field:

[vars]
AUTH_TOKEN = "your-super-secret-and-long-token"
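For orientation, the finished file typically looks something like the sketch below. The binding names (DB, VECTORIZE), the entry point, and the compatibility date are illustrative assumptions; use exactly the blocks and ids that the create commands printed for you.

```toml
name = "semantic-assets-worker"
main = "src/index.ts"              # illustrative; keep whatever the project ships with
compatibility_date = "2024-01-01"  # illustrative

[vars]
AUTH_TOKEN = "your-super-secret-and-long-token"

# Pasted from the `wrangler d1 create` output; use the real database_id it printed.
[[d1_databases]]
binding = "DB"
database_name = "unity-vectors"
database_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

# Pasted from the `wrangler vectorize create` output.
[[vectorize]]
binding = "VECTORIZE"
index_name = "unity-vectors"
```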

8. Execute Database Migration

Create the required data table structure for the D1 database:

npx wrangler d1 execute unity-vectors --file=./migrations/001_init.sql --remote

9. Deploy to Cloudflare!

npm run deploy

Verify Deployment

After successful deployment, the terminal will display your Worker URL (e.g., https://semantic-assets-worker.your-name.workers.dev).

You can run the following command to verify that the service is working:

curl https://semantic-assets-worker.your-name.workers.dev/

If you see a welcome message, it means the backend has been successfully deployed! Now you can return to the Unity plugin to fill in the configuration.

Plugin Usage Guide

Detailed Configuration Guide

Detailed explanation of each option in Project Settings > Semantic Assets.

[Screenshot: complete configuration interface (docs-usage-guide-full-settings.png)]

Worker Configuration

Base URL

The URL address of your deployed Cloudflare Worker.

Auth Token

The AUTH_TOKEN you set in wrangler.toml, used to protect your API from unauthorized access.

OpenAI (Embedding Service) Configuration

Base URL

The API address for the Embedding model, defaults to OpenAI's official address. If you use Azure OpenAI or other compatible services, please modify this.

API Key

Your Embedding service API key.

Embedding Model

Select the AI model for vector generation. Large models have higher accuracy, while Small models have lower cost and faster speed. We recommend using Large models for initial indexing.

LLM Visual Enhancement Configuration (Key Feature)

Enable LLM for All Extensions

Global switch. When checked, visual analysis will be forcibly enabled for all supported asset types, ignoring individual suffix settings.

Use Same Credentials as Embedding

For convenience, this option is checked by default, reusing the OpenAI configuration above to call the vision model.

LLM Base URL / API Key

When not reusing credentials, fill in the independent vision model service address and key here.

LLM Model

Specify the vision large language model name for image analysis, such as gpt-4o, gpt-4-vision-preview, etc.

Search and Indexing Configuration

Result Limit (1-100)

The maximum number of results returned by default for a single search.

Indexed Extensions with LLM Enhancement

This is the core configuration area of the plugin. You can:

  • Add/Remove File Extensions: Define which types of assets need to be indexed.
  • Independent LLM Enhancement Toggle: Independently check the LLM Enhanced checkbox for each file type.

Note: Enabling LLM enhancement will increase AI API call costs.

Feature Operations

Search Window (Window > Semantic Assets)

Search Box

Enter a natural language description, then press Enter or click the Search button to start searching.

Results List

Search results will be displayed as thumbnails with similarity scores to your query. Click an item to highlight and select that asset in the Project window.

Index Management (In Settings Page)

Initial Index (Sync Now): Incremental Sync

Click this button for the first sync, or whenever project assets change. It intelligently compares the local and cloud state and only uploads, updates, or deletes the assets that changed, which is efficient and saves resources.
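The comparison behind this button can be sketched as follows. The function and field names are illustrative, not the plugin's actual code, but the input mirrors the guid/content_hash pairs returned by GET /vectors/hashes (see the API Reference):

```typescript
interface AssetRecord { guid: string; content_hash: string; }

// Given the local project state and the cloud index state, compute the
// minimal change set: new or modified assets to upsert, stale guids to delete.
function diffIndex(local: AssetRecord[], cloud: AssetRecord[]) {
  const cloudByGuid = new Map<string, string>();
  for (const a of cloud) cloudByGuid.set(a.guid, a.content_hash);
  const localGuids = new Set(local.map(a => a.guid));

  // Upsert anything whose hash differs from the cloud (or is missing there).
  const upserts = local.filter(a => cloudByGuid.get(a.guid) !== a.content_hash);
  // Delete anything in the cloud that no longer exists locally.
  const deletes = cloud.filter(a => !localGuids.has(a.guid)).map(a => a.guid);

  return { upserts, deletes };
}

const local = [
  { guid: "a", content_hash: "h1" },     // unchanged
  { guid: "b", content_hash: "h2-new" }, // modified
  { guid: "c", content_hash: "h3" },     // newly added
];
const cloud = [
  { guid: "a", content_hash: "h1" },
  { guid: "b", content_hash: "h2-old" },
  { guid: "d", content_hash: "h4" },     // removed locally
];

console.log(diffIndex(local, cloud)); // upserts: b and c; deletes: ["d"]
```

The resulting upserts and deletes map directly onto the POST /sync/batch request body, which is why unchanged assets cost nothing to re-sync.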

Rebuild All (Dangerous Operation)

Use when you suspect serious problems with the cloud index, or want to switch to a different Embedding model. This will first call the backend API to completely clear all cloud data, then perform a full synchronization from scratch.

API Reference

A reference of the API endpoints, for advanced users and developers who wish to build on this backend.

Authentication

All endpoints require an authorization token in the request headers:

Authorization: Bearer <Your-Auth-Token>

GET /vectors/hashes

Function: Get a list of guid and content_hash for all indexed assets in the cloud.

Purpose: The plugin uses this interface to compare with local asset status and calculate incremental sync content.

Response Example:

{
  "vectors": [
    { "guid": "...", "content_hash": "..." }
  ]
}

POST /sync/batch

Function: Batch upload, update, and delete asset indexes.

Purpose: Core interface for incremental synchronization.

Request Body Example:

{
  "upserts": [
    { "guid": "...", "content_hash": "...", "embedding": [0.1, 0.2, ...] }
  ],
  "deletes": ["guid1", "guid2"]
}

POST /vectors/search

Function: Execute semantic search.

Purpose: Based on user query vectors, return a list of the most similar asset guids.

Request Body Example:

{
  "embedding": [0.1, 0.2, ...],
  "limit": 50
}
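As a sketch, a client could assemble this request like so. The helper name is hypothetical, and the URL and token are placeholders for your own deployment values:

```typescript
// Build the fetch arguments for POST /vectors/search.
// The request shape (embedding + limit) and the Bearer auth header
// follow the API reference above.
function buildSearchRequest(
  workerUrl: string,
  authToken: string,
  embedding: number[],
  limit = 50,
) {
  return {
    url: `${workerUrl}/vectors/search`,
    init: {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${authToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ embedding, limit }),
    },
  };
}

const { url, init } = buildSearchRequest(
  "https://semantic-assets-worker.your-name.workers.dev", // placeholder
  "your-super-secret-and-long-token",                     // placeholder
  [0.1, 0.2, 0.3],
  10,
);
// fetch(url, init).then(res => res.json()).then(console.log);
```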

DELETE /clear/all

(Dangerous Operation)

Function: Completely clear all index data in the cloud.

Purpose: Used for "Rebuild All" functionality.

FAQ & Troubleshooting

Q: No search results found, what should I do?

Our plugin is designed to automatically perform incremental sync before each search, so manual sync is not needed. If you still get no results, please troubleshoot in the following order:

1. Check Configuration

Please carefully verify that the Worker URL, Auth Token, and OpenAI API Key in Project Settings > Semantic Assets are all filled in correctly. This is the most common source of problems.

2. Check Console

Check the console at the bottom of the Unity Editor for red error messages related to the plugin; they provide the most direct clues.

3. Try a Complete Rebuild

If both configuration and network are fine, the cloud index may have run into an unexpected issue. Go to the settings page and click the Rebuild All button. This completely clears cloud data and rebuilds the index from scratch, which usually resolves most complex problems.

Q: Console shows 401 Unauthorized or 403 Forbidden errors?

This error clearly indicates authentication failure. Please carefully check whether the Worker Auth Token you filled in the Unity plugin settings is exactly the same as the AUTH_TOKEN you set in the wrangler.toml file when deploying the backend.

Q: Indexing or sync process is very slow?

The speed of initial indexing depends on project size and network conditions. If the speed is abnormally slow, please check:

  • Whether you have enabled LLM Enhanced functionality for a large number of assets, as visual analysis of each asset takes time. You can try disabling LLM enhancement for some extensions to speed up the process.
  • Your network connection speed to OpenAI or other AI services.

Q: How much does it cost to use this plugin?

The cost mainly consists of two parts:

1. Cloudflare

The backend service itself runs within Cloudflare's free tier quota, which is free for the vast majority of projects. You only need to pay when your project scale is extremely large and exceeds the free quota.

2. AI Services (like OpenAI)

This is the main cost source. Each asset indexing operation (generating embeddings) and each LLM-enhanced visual analysis consumes API quota. Specific costs depend on your usage and selected models; refer to the official pricing of providers such as OpenAI.
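A rough, illustrative estimate only (the per-token price and tokens-per-asset figures below are assumptions; always check your provider's current pricing): indexing 10,000 assets at roughly 200 tokens of description each with text-embedding-3-small:

```typescript
// Back-of-the-envelope embedding cost. All three inputs are assumptions
// for illustration; substitute your own project size and current pricing.
const assets = 10_000;
const tokensPerAsset = 200;          // rough average description length
const pricePerMillionTokens = 0.02;  // USD, assumed for text-embedding-3-small

const totalTokens = assets * tokensPerAsset;  // 2,000,000 tokens
const costUsd = (totalTokens / 1_000_000) * pricePerMillionTokens;

console.log(costUsd.toFixed(2)); // "0.04"
```

Embedding-only indexing is typically cheap even for large projects; enabling LLM visual enhancement is what drives costs up, since vision-model calls are priced far higher per asset.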

Q: Can I change the Embedding or LLM model?

Yes. If you want to change models (for example, upgrading from text-embedding-3-small to text-embedding-3-large), after changing the model option in the settings page, you must click Rebuild All to regenerate all asset indexes using the new model. Simply changing settings will not automatically update existing indexes.