# OllamaEmbedding

## Overview
The `ollama_embed` function generates embeddings for a list of texts using an Ollama model. It is part of the Euler Graph Database and is accessed through the `EmbedFactory` class.
## Arguments

- `base_url` (str): The base URL for the Ollama API.
- `model_name` (str): The name of the model to use for generating embeddings.
- `texts` (List[str]): A list of texts to generate embeddings for.
- `temperature` (float, optional): The temperature parameter for the model; defaults to 0.7.
## Example Usage

Here's an example demonstrating how to use the `ollama_embed` function:
```python
from euler.embed_factory import EmbedFactory
```

### Initialize the Ollama Embedding Reader

```python
ollama_reader = EmbedFactory.get_llm_reader(
    'ollama_embed',
    base_url='http://localhost:11434',
    model='nomic-embed-text:latest'
)
```

### Generate Embeddings for a Prompt

```python
result = ollama_reader.generate_query_embeddings("Example prompt for the Ollama model.")
print(result if result else "No response received.")
```
## Functions

### `ollama_embed`

The `ollama_embed` function takes a base URL, a model name, a list of texts, and an optional temperature parameter, and returns a list of embeddings, one per input text.
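Conceptually, the function maps each input text to its embedding vector, preserving order. A minimal sketch of that behavior, where `embed_one` is a hypothetical injection point standing in for the per-text API call (it is not part of the documented signature; it is added here only so the loop can be illustrated without a live Ollama server):

```python
from typing import Callable, List

def ollama_embed(
    base_url: str,
    model_name: str,
    texts: List[str],
    temperature: float = 0.7,
    embed_one: Callable[[str, str, str, float], List[float]] = None,
) -> List[List[float]]:
    # Sketch only: delegate each text to a per-text embedder.
    # `embed_one` stands in for the Ollama API call (hypothetical
    # parameter, added purely for illustration and testability).
    return [embed_one(base_url, model_name, text, temperature) for text in texts]
```

The result is positional: the i-th embedding in the returned list corresponds to the i-th text in `texts`.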
### `process_embed`

The `process_embed` function sends a request to the Ollama API with the provided base URL, model name, prompt, and temperature, then extracts and returns the embedding for that prompt from the response.
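A minimal, standard-library-only sketch of what such a request might look like, assuming Ollama's `/api/embeddings` endpoint and its `{"embedding": [...]}` response shape; the `opener` parameter is a hypothetical addition for testability and the actual Euler implementation may differ:

```python
import json
import urllib.request
from typing import List

def process_embed(base_url: str, model_name: str, prompt: str,
                  temperature: float = 0.7, opener=None) -> List[float]:
    # Sketch only: `opener` lets tests inject a fake HTTP opener;
    # by default urllib performs the real request.
    url = f"{base_url}/api/embeddings"  # assumed Ollama embeddings endpoint
    body = json.dumps({
        "model": model_name,
        "prompt": prompt,
        "options": {"temperature": temperature},
    }).encode("utf-8")
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    open_fn = opener or urllib.request.urlopen
    with open_fn(req) as resp:
        # Ollama returns the vector under the "embedding" key.
        return json.loads(resp.read()).get("embedding", [])
```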