Overview

The MNN API is a cloud-based API for integrating advanced AI capabilities, including natural language processing, image generation, and speech recognition, into your applications. It offers RESTful endpoints for tasks like text generation, embeddings, and content moderation, with support for a variety of models tailored to specific use cases.

Quickstart

The MNN API is OpenAI-compatible: you can use the official OpenAI client library as-is by pointing base_url at https://api.mnnai.ru/v1. Get your API key and set it as an environment variable, then try the example below.

Here's a basic example:

from openai import OpenAI

# The client reads your API key from the OPENAI_API_KEY environment variable
client = OpenAI(
    base_url="https://api.mnnai.ru/v1"
)

completion = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {
            "role": "user",
            "content": "Give me a short panda story"
        }
    ]
)

print(completion.choices[0].message.content)

To run this code, first install the library: pip install openai.
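
By default, the client reads its key from the OPENAI_API_KEY environment variable. If you store your MNN key under a different name, you can pass it explicitly; a minimal sketch (the MNN_API_KEY variable name is only an illustration):

import os

from openai import OpenAI

# Pass the key explicitly instead of relying on OPENAI_API_KEY
client = OpenAI(
    base_url="https://api.mnnai.ru/v1",
    api_key=os.environ["MNN_API_KEY"]
)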

Models

MNN offers a range of models for different tasks. For text, models like gpt-4.1 and gemini-2.5-pro-preview-05-06 excel in chat and reasoning tasks. For images, DALL·E 3 generates high-quality visuals. Whisper handles audio transcription. Each model is optimized for specific use cases, with trade-offs in speed, cost, and capability.

Example model list request:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.mnnai.ru/v1"
)

models = client.models.list()

for model in models.data:
    print(model.id)

Pricing

The MNN API is subscription-based: each plan grants a monthly allowance of credits. To learn more about each subscription tier, please visit the main page.

Text Generation

With the MNN API, you can use LLMs to generate many kinds of text, from code and mathematical expressions to structured JSON and human-like prose.

Chat Completions API:
from openai import OpenAI

client = OpenAI(
    base_url="https://api.mnnai.ru/v1"
)

response = client.chat.completions.create(
    model="mistral-medium-latest",
    messages=[
        {"role": "user", "content": "Summarize this: [long text]"}
    ],
    temperature=0.7
)

print(response.choices[0].message.content)

Responses API:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.mnnai.ru/v1"
)

response = client.responses.create(
    model="mistral-medium-latest",
    input="Write a one-sentence bedtime story about a hypercorn."
)

print(response.output_text)
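
For longer generations you may want output as it is produced rather than all at once. Assuming MNN forwards the standard stream=True parameter of the Chat Completions API, a streaming sketch looks like this:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.mnnai.ru/v1"
)

# Assumes the standard stream=True parameter is supported
stream = client.chat.completions.create(
    model="mistral-medium-latest",
    messages=[{"role": "user", "content": "Tell me a short story"}],
    stream=True
)

# Each chunk carries a small delta of the response text
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)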

Images

Some models, such as dall-e-3, generate images from text prompts. Example:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.mnnai.ru/v1"
)

response = client.images.generate(
    model="dall-e-3",
    prompt="A futuristic city at sunset"
)

print(response.data[0].url)
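
Image URLs returned by providers typically expire, so to keep a result you can request base64 data and write it to disk. A sketch, assuming MNN honors the standard size and response_format parameters of the Images API:

import base64

from openai import OpenAI

client = OpenAI(
    base_url="https://api.mnnai.ru/v1"
)

# Assumes the standard response_format="b64_json" option is supported
response = client.images.generate(
    model="dall-e-3",
    prompt="A futuristic city at sunset",
    size="1024x1024",
    response_format="b64_json"
)

# Decode the base64 payload and save it as a PNG file
with open("city.png", "wb") as f:
    f.write(base64.b64decode(response.data[0].b64_json))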

Vision

Vision models such as gpt-4.1 analyze images for tasks like object detection, captioning, or answering questions about visual content. Example of image analysis:

Chat Completions API:
from openai import OpenAI

client = OpenAI(
    base_url="https://api.mnnai.ru/v1"
)

response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
                },
            },
        ],
    }],
)

print(response.choices[0].message.content)

Responses API:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.mnnai.ru/v1"
)

response = client.responses.create(
    model="gpt-4.1-mini",
    input=[{
        "role": "user",
        "content": [
            {"type": "input_text", "text": "what's in this image?"},
            {
                "type": "input_image",
                "image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
            },
        ],
    }],
)

print(response.output_text)
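
To analyze a local file instead of a hosted URL, encode it as a base64 data URL. This is the standard Chat Completions pattern and should work the same way here (photo.jpg is just a placeholder filename):

import base64

from openai import OpenAI

client = OpenAI(
    base_url="https://api.mnnai.ru/v1"
)

# Encode a local image as a data URL (placeholder filename)
with open("photo.jpg", "rb") as f:
    data_url = "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            {"type": "image_url", "image_url": {"url": data_url}},
        ],
    }],
)

print(response.choices[0].message.content)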

Speech

The Whisper model transcribes audio to text, supporting multiple languages and formats (e.g., MP3, WAV). Example of audio transcription:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.mnnai.ru/v1"
)

with open("audio.mp3", "rb") as audio_file:
    transcription = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file
    )

print(transcription.text)
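
Whisper also accepts optional hints. Assuming MNN forwards the standard language and response_format parameters, you can pin the input language and get plain text back instead of a JSON object:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.mnnai.ru/v1"
)

# Assumes the standard language and response_format parameters are forwarded
with open("audio.mp3", "rb") as audio_file:
    text = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
        language="en",          # ISO-639-1 hint for the spoken language
        response_format="text"  # return a plain string instead of JSON
    )

print(text)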

Moderation

The Moderation endpoint checks text for harmful or inappropriate content, returning flags for categories like hate or violence. Example:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.mnnai.ru/v1"
)

response = client.moderations.create(input="This is a safe sentence.")
print(response.results[0].flagged)
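
Beyond the overall flagged boolean, each result also carries per-category flags and scores. A short sketch of inspecting them:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.mnnai.ru/v1"
)

response = client.moderations.create(input="This is a safe sentence.")
result = response.results[0]

# Print each moderation category next to its boolean flag
for category, flagged in result.categories.model_dump().items():
    print(category, flagged)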

Embedding

Generate text embeddings with models like text-embedding-3-small and text-embedding-3-large for semantic search, clustering, or recommendations. Example:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.mnnai.ru/v1"
)

response = client.embeddings.create(
    model="text-embedding-3-small",
    input="The quick brown fox jumps over the lazy dog"
)

print(response.data[0].embedding)
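
Embeddings become useful once you compare them, for example with cosine similarity for semantic search. A minimal sketch over two sentences:

import math

from openai import OpenAI

client = OpenAI(
    base_url="https://api.mnnai.ru/v1"
)

def embed(text):
    # Fetch a single embedding vector for the given text
    return client.embeddings.create(
        model="text-embedding-3-small",
        input=text
    ).data[0].embedding

def cosine(a, b):
    # Cosine similarity: dot product over the product of vector norms
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Higher score means the sentences are semantically closer
print(cosine(embed("The quick brown fox"), embed("A fast auburn fox")))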

Function Calling

Function calling provides a powerful and flexible way for supported models to interface with your code or external services. This section explains how to connect models to your own code to fetch data or take action.

Chat Completions API:
from openai import OpenAI

client = OpenAI(
    base_url="https://api.mnnai.ru/v1"
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current temperature for a given location.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City and country e.g. Bogotá, Colombia"
                }
            },
            "required": [
                "location"
            ],
            "additionalProperties": False
        },
        "strict": True
    }
}]

completion = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "What is the weather like in Paris today?"}],
    tools=tools
)

print(completion.choices[0].message.tool_calls)

Responses API:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.mnnai.ru/v1"
)

tools = [{
    "type": "function",
    "name": "get_weather",
    "description": "Get current temperature for a given location.",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "City and country e.g. Bogotá, Colombia"
            }
        },
        "required": [
            "location"
        ],
        "additionalProperties": False
    }
}]

response = client.responses.create(
    model="gpt-4.1",
    input=[{"role": "user", "content": "What is the weather like in Paris today?"}],
    tools=tools
)

print(response.output)

Note: To use this feature, your subscription level must be 'Basic' or higher. To check whether a model supports function calling, visit the models page and look for the function_calling parameter.
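
The examples above stop at the model's tool call. In practice you then execute the function yourself and send the result back so the model can produce a final answer. A sketch of the full round trip with the Chat Completions API (get_weather is a stub you would replace with real code):

import json

from openai import OpenAI

client = OpenAI(
    base_url="https://api.mnnai.ru/v1"
)

def get_weather(location):
    # Stub: replace with a real weather lookup
    return {"location": location, "temperature_c": 21}

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current temperature for a given location.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"}
            },
            "required": ["location"]
        }
    }
}]

messages = [{"role": "user", "content": "What is the weather like in Paris today?"}]

completion = client.chat.completions.create(
    model="gpt-4.1",
    messages=messages,
    tools=tools
)

# Execute the requested function with the model-supplied arguments
tool_call = completion.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)

# Append the assistant's tool call, then the tool result, and ask again
messages.append(completion.choices[0].message)
messages.append({
    "role": "tool",
    "tool_call_id": tool_call.id,
    "content": json.dumps(get_weather(**args))
})

final = client.chat.completions.create(model="gpt-4.1", messages=messages)
print(final.choices[0].message.content)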

Web Access

Web search lets models pull real-time information from the internet to provide relevant answers, check facts, and find details beyond their training data. All MNN models support this feature because we use a proprietary search engine. Example:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.mnnai.ru/v1"
)

response = client.responses.create(
    model="gpt-4o",
    tools=[{"type": "web_search_preview"}],
    input="What was a positive news story from today?"
)

print(response.output_text)

Note: To use search with the Chat Completions API, append -search to the model name, e.g. deepseek-v3-0324-search.
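
A sketch of the same question asked through the Chat Completions API using the documented -search suffix:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.mnnai.ru/v1"
)

# The -search suffix enables web access for Chat Completions models
response = client.chat.completions.create(
    model="deepseek-v3-0324-search",
    messages=[{"role": "user", "content": "What was a positive news story from today?"}]
)

print(response.choices[0].message.content)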