Models

Supported Models for EmbJSON

OneNode DB supports a variety of embedding models and image-to-text models for use with EmbJSON data types. These models convert text and image data into embeddings that power semantic search.

To specify the embedding model in your EmbJSON fields, use the emb_model parameter for text embeddings and the vision_model parameter for image-to-text conversion.
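For example, a text field can carry its embedding model directly. The snippet below is a minimal sketch: the onenode package name and the EmbText constructor are assumptions for illustration; only the emb_model parameter comes from this page.

```python
# Minimal sketch of an embedded text field.
# ASSUMPTION: the "onenode" package and the EmbText constructor shown here
# are illustrative; only the emb_model parameter name comes from these docs.
from onenode import EmbText

bio = EmbText(
    "A software engineer who writes about distributed databases.",
    emb_model="text-embedding-3-small",  # any embedding model from the tables below
)
```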

Below is a list of the currently supported models along with their pricing information.

Embedding Models

These models are used for embedding text data in EmbJSON fields such as EmbText and EmbImage.

| Model | Pricing |
| --- | --- |
| text-embedding-3-small | $0.020 / 1M tokens |
| text-embedding-3-large | $0.130 / 1M tokens |
| ada v2 | $0.100 / 1M tokens |

Image-to-Text Models

These models are used for converting images to text descriptions, which are then embedded and indexed for semantic search in EmbImage fields.

| Model | Pricing (Input Tokens) | Pricing (Output Tokens) |
| --- | --- | --- |
| gpt-4o | $2.50 / 1M tokens | $10.00 / 1M tokens |
| gpt-4o-mini | $0.150 / 1M tokens | $0.600 / 1M tokens |
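
A field that stores an image needs both model types: the vision model produces a text description, and the embedding model embeds that description. The sketch below assumes an EmbImage constructor and a file-path argument for illustration; only the vision_model and emb_model parameter names come from this page.

```python
# Minimal sketch of an embedded image field.
# ASSUMPTION: the "onenode" package, the EmbImage constructor, and the
# file-path argument are illustrative; only the vision_model and emb_model
# parameter names come from these docs.
from onenode import EmbImage

product_photo = EmbImage(
    "images/red-sneaker.jpg",            # image to describe and index
    vision_model="gpt-4o-mini",          # converts the image to a text description
    emb_model="text-embedding-3-small",  # embeds the generated description
)
```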

Important Notes:

  • Input tokens refer to the number of tokens in the input data (text or image) sent to the model.
  • Output tokens refer to the number of tokens generated by the model in response (e.g., text generated by an image-to-text model).
  • For EmbText, only the emb_model is used. For EmbImage, both a vision_model and an emb_model are used: the vision model generates a text description of the image, and the embedding model embeds that description. The sketch after this list combines the two pricing tables into a rough per-image cost estimate.
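
As a rough illustration of how the two pricing tables combine for a single image, the following sketch walks through the arithmetic. The token counts are made-up placeholders; actual counts depend on the image and the description the vision model generates.

```python
# Rough cost estimate for describing and embedding one image with gpt-4o-mini
# plus text-embedding-3-small. Token counts are illustrative assumptions.
VISION_INPUT_PRICE = 0.150 / 1_000_000   # $ per input token (gpt-4o-mini)
VISION_OUTPUT_PRICE = 0.600 / 1_000_000  # $ per output token (gpt-4o-mini)
EMBED_PRICE = 0.020 / 1_000_000          # $ per token (text-embedding-3-small)

image_input_tokens = 1_000   # tokens the image consumes as model input (assumed)
description_tokens = 150     # tokens in the generated description (assumed)

cost = (
    image_input_tokens * VISION_INPUT_PRICE     # vision model reads the image
    + description_tokens * VISION_OUTPUT_PRICE  # vision model writes the description
    + description_tokens * EMBED_PRICE          # embedding model embeds the description
)
print(f"${cost:.6f}")  # ≈ $0.000243 for this hypothetical image
```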