Overview

The MNN API is a cloud-based API for integrating advanced AI capabilities, including natural language processing, image generation, and speech recognition, into your applications. It offers RESTful endpoints for tasks such as text generation, embeddings, and content moderation, with a variety of models tailored to specific use cases.

Why choose us?

  1. Stable infrastructure with over 99.96% uptime
  2. Reliable methods to protect your data
  3. Easy integration into any project, thanks to support for a variety of SDKs
  4. Support is always available; you can always reach out for help

Quickstart

To quickly get started with the MNN API, begin by installing the OpenAI client library. Since the MNN API is fully compatible with OpenAI, you can use the same interface by simply changing the base_url to https://api2.mnnai.ru/v1/. Make sure to get your API key and set it as an environment variable.
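For reference, the official OpenAI client libraries read the key from the OPENAI_API_KEY environment variable by default, while the cURL examples in this guide pass it as MNN_API_KEY. A minimal setup, assuming a POSIX shell:

```shell
# Export your MNN key under both names so the SDK examples and the
# curl examples in this guide work unchanged.
export OPENAI_API_KEY="your-mnn-api-key"
export MNN_API_KEY="your-mnn-api-key"
```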

Here's a basic example in Python, JavaScript, and cURL:

from openai import OpenAI

client = OpenAI(
    base_url="https://api2.mnnai.ru/v1"
)

completion = client.chat.completions.create(
    model="gpt-5.4",
    messages=[
        {
            "role": "user",
            "content": "Give me a short panda story"
        }
    ]
)

print(completion.choices[0].message.content)
import OpenAI from "openai";

const client = new OpenAI({
    baseURL: "https://api2.mnnai.ru/v1",
    apiKey: "your-api-key"
});

const completion = await client.chat.completions.create({
    model: "gpt-5.4",
    messages: [
        { role: "user", content: "Give me a short panda story" }
    ],
});

console.log(completion.choices[0].message.content);
curl https://api2.mnnai.ru/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $MNN_API_KEY" \
  -d '{
    "model": "gpt-5.4",
    "messages": [{"role": "user", "content": "Give me a short panda story"}]
  }'

To run these examples, first install the client library: pip install openai for Python, or npm install openai for JavaScript.

Models

MNN offers a range of models for different tasks. For text, models like gpt-5.4 excel in chat and reasoning tasks. For images, Nano Banana 2 generates high-quality visuals. Whisper handles audio transcription. Each model is optimized for specific use cases, with trade-offs in speed, cost, and capability.

Example model list request:

from openai import OpenAI

client = OpenAI(
    base_url="https://api2.mnnai.ru/v1"
)

models = client.models.list()

for model in models.data:
    print(model.id)
import OpenAI from "openai";

const client = new OpenAI({
    baseURL: "https://api2.mnnai.ru/v1",
    apiKey: "your-api-key"
});

const models = await client.models.list();

for (const model of models.data) {
    console.log(model.id);
}
curl https://api2.mnnai.ru/v1/models \
  -H "Authorization: Bearer $MNN_API_KEY"

Note: You can view all models and their prices on the dashboard page.

Pricing

MNN offers a variety of subscription plans to suit your needs. Each plan provides a fixed monthly credit allowance and specific rate limits.

Feature             Free         Basic          Pro          Ultra        Enterprise
Price               $0/mo        $5/mo          $10/mo       $15/mo       $30/mo
Monthly Credits     $1           $20            $50          $100         $150
Rate Limit (RPM)    10           50             100          150          200
Model Access        Free Only    Free & Basic   All Models   All Models   All Models
Web Search          Yes          Yes            Yes          Yes          Yes
Media Analysis      Text/Image   + Audio        + Audio      + Audio      + Audio
Function Calling    No           Yes            Yes          Yes          Yes
Support             Standard     Standard       Priority     Priority     Priority

What Are Credits?

Think of credits as your balance for using AI models. In a traditional pay-as-you-go system, you buy a specific number of credits and use them until they run out.

Our system works differently to give you more value. With a monthly subscription, you pay a fixed fee and receive a much larger credit allowance.

For example, with the Pro Plan, you pay $10 per month but receive $50 worth of credits. This makes our service 5x more affordable than using pay-as-you-go pricing directly with providers like OpenAI or Anthropic.
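The multiplier can be read straight off the plan table: monthly credits divided by the monthly price. A quick sketch using the numbers above:

```python
# Plan figures from the pricing table: (price per month in $, monthly credits in $).
plans = {
    "Basic": (5, 20),
    "Pro": (10, 50),
    "Ultra": (15, 100),
    "Enterprise": (30, 150),
}

for name, (price, credits) in plans.items():
    # Value multiplier: how many dollars of credits each subscription dollar buys.
    print(f"{name}: {credits / price:.2f}x")
```

This reproduces the 4x to 6.67x range quoted in the FAQ below.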

FAQ

  1. If I use $40 out of my $50 credits, do they roll over to the next month?

    No. Credits are allocated monthly and do not carry over to the next billing cycle.

  2. How much cheaper is this compared to official providers?

    Our pricing is significantly more efficient; depending on the plan, you get between 4x and 6.67x more value for your money compared to direct provider rates.

  3. What’s the point of these credits if you can just increase model prices?

    We stick to official market rates. For open-source models, we match or beat the industry standard. If you ever notice a price discrepancy, please reach out—we are committed to fair pricing and will fix any errors immediately.

  4. Can I pay for the subscription in Russia?

    Yes! We fully support Russian ruble (RUB) payments via Yoomoney for your convenience.

  5. Where can I go if I have more questions?

    We’re here to help! You can reach us via email at mnnai.ru@outlook.com or join our Discord community.

Pay-as-you-go system

If you're not comfortable with a subscription-based system, you can always use the PAYG (pay-as-you-go) system. The rate is $1 = 3 credits; for example, if you top up your balance by $5, you'll receive $15 in credits.
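At the $1 = 3 credits rate, the conversion is a single multiplication; a tiny sketch:

```python
PAYG_RATE = 3  # credits per dollar, per the rate above

def payg_credits(topup_usd):
    """Credits received for a pay-as-you-go top-up."""
    return topup_usd * PAYG_RATE

print(payg_credits(5))  # the $5 top-up from the example above
```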

Locations

We provide two API servers for your convenience:

https://api.mnnai.ru/v1 is our server located in Russia.

https://api2.mnnai.ru/v1 is our server located in Germany and is the recommended option.

Text Generation

With the MNN API, you can use LLMs to generate any kind of text, from code and mathematical expressions to structured JSON or human-like prose.

The examples below cover both the Chat Completions API and the Responses API.
from openai import OpenAI

client = OpenAI(
    base_url="https://api2.mnnai.ru/v1"
)

response = client.chat.completions.create(
    model="mistral-medium-latest",
    messages=[
        {"role": "user", "content": "Summarize this: [long text]"}
    ],
    temperature=0.7
)

print(response.choices[0].message.content)
import OpenAI from "openai";

const client = new OpenAI({
    baseURL: "https://api2.mnnai.ru/v1",
    apiKey: "your-api-key"
});

const response = await client.chat.completions.create({
    model: "mistral-medium-latest",
    messages: [
        { role: "user", content: "Summarize this: [long text]" }
    ],
    temperature: 0.7
});

console.log(response.choices[0].message.content);
curl https://api2.mnnai.ru/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $MNN_API_KEY" \
  -d '{
    "model": "mistral-medium-latest",
    "messages": [{"role": "user", "content": "Summarize this: [long text]"}],
    "temperature": 0.7
  }'
from openai import OpenAI

client = OpenAI(
    base_url="https://api2.mnnai.ru/v1"
)

response = client.responses.create(
    model="mistral-medium-latest",
    input="Write a one-sentence bedtime story about a hypercorn."
)

print(response.output_text)
import OpenAI from "openai";

const client = new OpenAI({
    baseURL: "https://api2.mnnai.ru/v1",
    apiKey: "your-api-key"
});

const response = await client.responses.create({
    model: "mistral-medium-latest",
    input: "Write a one-sentence bedtime story about a hypercorn."
});

console.log(response.output_text);
curl https://api2.mnnai.ru/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $MNN_API_KEY" \
  -d '{
    "model": "mistral-medium-latest",
    "input": "Write a one-sentence bedtime story about a hypercorn."
  }'

Reasoning

The MNN API supports reasoning models such as glm-5 and gemini-3.1-pro-preview. Reasoning models think before they respond, generating an internal chain of thought before producing an answer. They excel at complex problem solving, coding, scientific reasoning, and multi-step planning in agent workflows.

The examples below cover both the Chat Completions API and the Responses API.
from openai import OpenAI

client = OpenAI(
    base_url="https://api2.mnnai.ru/v1"
)

prompt = """
Write a bash script that takes a matrix represented as a string with
format '[1,2],[3,4],[5,6]' and prints the transpose in the same format.
"""

response = client.chat.completions.create(
    model="glm-5",
    reasoning_effort="medium",
    messages=[
        {
            "role": "user",
            "content": prompt
        }
    ]
)

print(response.choices[0].message.content)
import OpenAI from "openai";

const client = new OpenAI({
    baseURL: "https://api2.mnnai.ru/v1",
    apiKey: "your-api-key"
});

const prompt = `
Write a bash script that takes a matrix represented as a string with
format '[1,2],[3,4],[5,6]' and prints the transpose in the same format.
`;

const response = await client.chat.completions.create({
    model: "glm-5",
    reasoning_effort: "medium",
    messages: [
        { role: "user", content: prompt }
    ],
});

console.log(response.choices[0].message.content);
curl https://api2.mnnai.ru/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $MNN_API_KEY" \
  -d '{
    "model": "glm-5",
    "reasoning_effort": "medium",
    "messages": [{"role": "user", "content": "Write a bash script that takes a matrix represented as a string with format [1,2],[3,4],[5,6] and prints the transpose in the same format."}]
  }'
from openai import OpenAI

client = OpenAI(
    base_url="https://api2.mnnai.ru/v1"
)

prompt = """
Write a bash script that takes a matrix represented as a string with
format '[1,2],[3,4],[5,6]' and prints the transpose in the same format.
"""

response = client.responses.create(
    model="glm-5",
    reasoning={"effort": "medium"},
    input=[
        {
            "role": "user",
            "content": prompt
        }
    ]
)

print(response.output_text)
import OpenAI from "openai";

const client = new OpenAI({
    baseURL: "https://api2.mnnai.ru/v1",
    apiKey: "your-api-key"
});

const prompt = `
Write a bash script that takes a matrix represented as a string with
format '[1,2],[3,4],[5,6]' and prints the transpose in the same format.
`;

const response = await client.responses.create({
    model: "glm-5",
    reasoning: { effort: "medium" },
    input: [
        { role: "user", content: prompt }
    ],
});

console.log(response.output_text);
curl https://api2.mnnai.ru/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $MNN_API_KEY" \
  -d '{
    "model": "glm-5",
    "reasoning": {"effort": "medium"},
    "input": [{"role": "user", "content": "Write a bash script that takes a matrix represented as a string with format [1,2],[3,4],[5,6] and prints the transpose in the same format."}]
  }'

In the examples above, the reasoning_effort parameter controls how many reasoning tokens the model generates before producing a response. Specify low, medium, or high, where low favors speed and economical token usage, and high favors more thorough reasoning. The default is medium, a balance between speed and reasoning accuracy.

Images

Some models generate images from text prompts. Example of generating an image:

from openai import OpenAI

client = OpenAI(
    base_url="https://api2.mnnai.ru/v1"
)

response = client.images.generate(
    model="z-image-turbo",
    prompt="A futuristic city at sunset",
    extra_body={"enhance": True} # If this parameter is specified, your prompt will be automatically enhanced
)

print(response.data[0].url)
import OpenAI from "openai";

const client = new OpenAI({
    baseURL: "https://api2.mnnai.ru/v1",
    apiKey: "your-api-key"
});

const response = await client.images.generate({
    model: "z-image-turbo",
    prompt: "A futuristic city at sunset",
    extra_body: { enhance: true }
});

console.log(response.data[0].url);
curl https://api2.mnnai.ru/v1/images/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $MNN_API_KEY" \
  -d '{
    "model": "z-image-turbo",
    "prompt": "A futuristic city at sunset",
    "enhance": true
  }'

Note: The API also supports the size parameter, though not all models accept it.
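For models that do accept it, size is passed alongside the prompt. A hedged sketch of the request payload (the "WIDTHxHEIGHT" string format mirrors the OpenAI Images API; a given model may reject it or support only certain values):

```python
# Hypothetical generation payload including the optional size parameter.
payload = {
    "model": "z-image-turbo",
    "prompt": "A futuristic city at sunset",
    "size": "1024x1024",  # assumed "WIDTHxHEIGHT" format; not supported by all models
}

# With the OpenAI client this maps to client.images.generate(**payload).
width, height = (int(n) for n in payload["size"].split("x"))
print(width, height)
```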

Image edit

Creates an edited image by specifying one image and a prompt:

from openai import OpenAI

client = OpenAI(
    base_url="https://api2.mnnai.ru/v1"
)

response = client.images.edit(
    model="gpt-image-1-edit",
    image=open("cat.png", "rb"),
    prompt="Change the background of the image to space",
    response_format="url"
)
print(response.data[0].url)
import OpenAI from "openai";
import fs from "fs";

const client = new OpenAI({
    baseURL: "https://api2.mnnai.ru/v1",
    apiKey: "your-api-key"
});

const response = await client.images.edit({
    model: "gpt-image-1-edit",
    image: fs.createReadStream("cat.png"),
    prompt: "Change the background of the image to space",
    response_format: "url"
});

console.log(response.data[0].url);
curl https://api2.mnnai.ru/v1/images/edits \
  -H "Authorization: Bearer $MNN_API_KEY" \
  -F image="@cat.png" \
  -F model="gpt-image-1-edit" \
  -F prompt="Change the background of the image to space" \
  -F response_format="url"

Vision

Vision models like gemini-3.1-flash-lite-preview analyze images for tasks like object detection, captioning, or answering questions about visual content. Example of image analysis:

The examples below cover both the Chat Completions API and the Responses API.
from openai import OpenAI

client = OpenAI(
    base_url="https://api2.mnnai.ru/v1"
)

response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
                },
            },
        ],
    }],
)

print(response.choices[0].message.content)
import OpenAI from "openai";

const client = new OpenAI({
    baseURL: "https://api2.mnnai.ru/v1",
    apiKey: "your-api-key"
});

const response = await client.chat.completions.create({
    model: "gpt-5.2",
    messages: [
        {
            role: "user",
            content: [
                { type: "text", text: "What's in this image?" },
                {
                    type: "image_url",
                    image_url: {
                        url: "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
                    },
                },
            ],
        },
    ],
});

console.log(response.choices[0].message.content);
curl https://api2.mnnai.ru/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $MNN_API_KEY" \
  -d '{
    "model": "gpt-5.2",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "What is in this image?"
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
            }
          }
        ]
      }
    ]
  }'
from openai import OpenAI

client = OpenAI(
    base_url="https://api2.mnnai.ru/v1"
)

response = client.responses.create(
    model="gpt-5.2",
    input=[{
        "role": "user",
        "content": [
            {"type": "input_text", "text": "what's in this image?"},
            {
                "type": "input_image",
                "image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
            },
        ],
    }],
)

print(response.output_text)
import OpenAI from "openai";

const client = new OpenAI({
    baseURL: "https://api2.mnnai.ru/v1",
    apiKey: "your-api-key"
});

const response = await client.responses.create({
    model: "gpt-5.2",
    input: [
        {
            role: "user",
            content: [
                { type: "input_text", text: "what's in this image?" },
                {
                    type: "input_image",
                    image_url: "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
                },
            ],
        },
    ],
});

console.log(response.output_text);
curl https://api2.mnnai.ru/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $MNN_API_KEY" \
  -d '{
    "model": "gpt-5.2",
    "input": [
      {
        "role": "user",
        "content": [
          {
            "type": "input_text",
            "text": "what is in this image?"
          },
          {
            "type": "input_image",
            "image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
          }
        ]
      }
    ]
  }'

Speech to text

The Whisper model transcribes audio to text, supporting multiple languages and formats (e.g., MP3, WAV). Example of audio transcription:

from openai import OpenAI

client = OpenAI(
    base_url="https://api2.mnnai.ru/v1"
)

with open("audio.mp3", "rb") as audio_file:
    transcription = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file
    )

print(transcription.text)
import OpenAI from "openai";
import fs from "fs";

const client = new OpenAI({
    baseURL: "https://api2.mnnai.ru/v1",
    apiKey: "your-api-key"
});

const transcription = await client.audio.transcriptions.create({
    model: "whisper-1",
    file: fs.createReadStream("audio.mp3"),
});

console.log(transcription.text);
curl https://api2.mnnai.ru/v1/audio/transcriptions \
  -H "Authorization: Bearer $MNN_API_KEY" \
  -F model="whisper-1" \
  -F file="@audio.mp3"

Create translation

The Whisper model translates spoken audio into English, supporting various input formats (e.g., MP3, WAV) and a wide range of source languages. Example of audio translation:

from openai import OpenAI

client = OpenAI(
    base_url="https://api2.mnnai.ru/v1"
)

with open("audio.mp3", "rb") as audio_file:
    translation = client.audio.translations.create(
        model="whisper-1",
        file=audio_file
    )

print(translation.text)
import OpenAI from "openai";
import fs from "fs";

const client = new OpenAI({
    baseURL: "https://api2.mnnai.ru/v1",
    apiKey: "your-api-key"
});

const translation = await client.audio.translations.create({
    model: "whisper-1",
    file: fs.createReadStream("audio.mp3"),
});

console.log(translation.text);
curl https://api2.mnnai.ru/v1/audio/translations \
  -H "Authorization: Bearer $MNN_API_KEY" \
  -F model="whisper-1" \
  -F file="@audio.mp3"

Text to speech

With MNN, you can convert text to speech. Here's an example:

from openai import OpenAI
from pathlib import Path

client = OpenAI(
    base_url="https://api2.mnnai.ru/v1"
)

speech_file_path = Path(__file__).parent / "speech.mp3"

with client.audio.speech.with_streaming_response.create(
    model="qwen-3-tts-flash",
    voice="Cherry",
    input="The quick brown fox jumped over the lazy dog."
) as response:
    response.stream_to_file(speech_file_path)
import OpenAI from "openai";
import fs from "fs";
import path from "path";

const client = new OpenAI({
    baseURL: "https://api2.mnnai.ru/v1",
    apiKey: "your-api-key"
});

const speechFile = path.resolve("./speech.mp3");

const mp3 = await client.audio.speech.create({
    model: "qwen-3-tts-flash",
    voice: "Cherry",
    input: "The quick brown fox jumped over the lazy dog.",
});

const buffer = Buffer.from(await mp3.arrayBuffer());
await fs.promises.writeFile(speechFile, buffer);
curl https://api2.mnnai.ru/v1/audio/speech \
  -H "Authorization: Bearer $MNN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen-3-tts-flash",
    "input": "The quick brown fox jumped over the lazy dog.",
    "voice": "Cherry"
  }' \
  --output speech.mp3

Moderation

The Moderation endpoint checks text for harmful or inappropriate content, returning flags for categories like hate or violence. Example:

from openai import OpenAI

client = OpenAI(
    base_url="https://api2.mnnai.ru/v1"
)

response = client.moderations.create(input="This is a safe sentence.")
print(response.results[0].flagged)
import OpenAI from "openai";

const client = new OpenAI({
    baseURL: "https://api2.mnnai.ru/v1",
    apiKey: "your-api-key"
});

const moderation = await client.moderations.create({ input: "This is a safe sentence." });

console.log(moderation.results[0].flagged);
curl https://api2.mnnai.ru/v1/moderations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $MNN_API_KEY" \
  -d '{
    "input": "This is a safe sentence."
  }'
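The flagged boolean in the examples above is only a summary; the response also carries per-category flags. A sketch of inspecting them offline, using a hand-written result in the response's shape (the category names assume the OpenAI moderation categories):

```python
# A hand-constructed moderation result mirroring response.results[0].
result = {
    "flagged": True,
    "categories": {"hate": False, "violence": True, "self-harm": False},
}

# Collect the names of every category that fired.
hits = [name for name, fired in result["categories"].items() if fired]
print(hits)  # ['violence']
```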

Embedding

Generate text embeddings with models like text-embedding-3-large for semantic search, clustering, or recommendations. Example:

from openai import OpenAI

client = OpenAI(
    base_url="https://api2.mnnai.ru/v1"
)

response = client.embeddings.create(
  model="text-embedding-3-small",
  input="The quick brown fox jumps over the lazy dog"
)

print(response.data[0].embedding)
import OpenAI from "openai";

const client = new OpenAI({
    baseURL: "https://api2.mnnai.ru/v1",
    apiKey: "your-api-key"
});

const embedding = await client.embeddings.create({
    model: "text-embedding-3-small",
    input: "The quick brown fox jumps over the lazy dog",
});

console.log(embedding.data[0].embedding);
curl https://api2.mnnai.ru/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $MNN_API_KEY" \
  -d '{
    "input": "The quick brown fox jumps over the lazy dog",
    "model": "text-embedding-3-small"
  }'
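Once you have embeddings, semantic search reduces to comparing vectors, most commonly by cosine similarity. A minimal sketch with toy vectors (real embeddings come from client.embeddings.create and have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" standing in for real API output.
query = [0.1, 0.9, 0.2]
docs = {
    "quick brown fox": [0.12, 0.85, 0.25],
    "stock market report": [0.9, 0.05, 0.1],
}
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
print(best)  # the document whose vector points closest to the query
```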

Function Calling

Function calling provides a powerful and flexible way for some models to interface with your code or external services. This guide will explain how to connect the models to your own custom code to fetch data or take action.

The examples below cover both the Chat Completions API and the Responses API.
from openai import OpenAI

client = OpenAI(
    base_url="https://api2.mnnai.ru/v1"
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current temperature for a given location.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City and country e.g. Bogotá, Colombia"
                }
            },
            "required": [
                "location"
            ],
            "additionalProperties": False
        },
        "strict": True
    }
}]

completion = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "What is the weather like in Paris today?"}],
    tools=tools
)

print(completion.choices[0].message.tool_calls)
import OpenAI from "openai";

const client = new OpenAI({
    baseURL: "https://api2.mnnai.ru/v1",
    apiKey: "your-api-key"
});

const tools = [
    {
        type: "function",
        function: {
            name: "get_weather",
            description: "Get current temperature for a given location.",
            parameters: {
                type: "object",
                properties: {
                    location: {
                        type: "string",
                        description: "City and country e.g. Bogotá, Colombia",
                    },
                },
                required: ["location"],
                additionalProperties: false,
            },
            strict: true,
        },
    },
];

const completion = await client.chat.completions.create({
    model: "gpt-5",
    messages: [{ role: "user", content: "What is the weather like in Paris today?" }],
    tools: tools,
});

console.log(completion.choices[0].message.tool_calls);
curl https://api2.mnnai.ru/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $MNN_API_KEY" \
  -d '{
    "model": "gpt-5",
    "messages": [{"role": "user", "content": "What is the weather like in Paris today?"}],
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "get_weather",
          "description": "Get current temperature for a given location.",
          "parameters": {
            "type": "object",
            "properties": {
              "location": {
                "type": "string",
                "description": "City and country e.g. Bogotá, Colombia"
              }
            },
            "required": ["location"],
            "additionalProperties": false
          },
          "strict": true
        }
      }
    ]
  }'
from openai import OpenAI

client = OpenAI(
    base_url="https://api2.mnnai.ru/v1"
)

tools = [{
    "type": "function",
    "name": "get_weather",
    "description": "Get current temperature for a given location.",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "City and country e.g. Bogotá, Colombia"
            }
        },
        "required": [
            "location"
        ],
        "additionalProperties": False
    }
}]

response = client.responses.create(
    model="gpt-5",
    input=[{"role": "user", "content": "What is the weather like in Paris today?"}],
    tools=tools
)

print(response.output)
import OpenAI from "openai";

const client = new OpenAI({
    baseURL: "https://api2.mnnai.ru/v1",
    apiKey: "your-api-key"
});

const tools = [
    {
        type: "function",
        name: "get_weather",
        description: "Get current temperature for a given location.",
        parameters: {
            type: "object",
            properties: {
                location: {
                    type: "string",
                    description: "City and country e.g. Bogotá, Colombia",
                },
            },
            required: ["location"],
            additionalProperties: false,
        },
    },
];

const response = await client.responses.create({
    model: "gpt-5",
    input: [{ role: "user", content: "What is the weather like in Paris today?" }],
    tools: tools,
});

console.log(response.output);
curl https://api2.mnnai.ru/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $MNN_API_KEY" \
  -d '{
    "model": "gpt-5",
    "input": [{"role": "user", "content": "What is the weather like in Paris today?"}],
    "tools": [
      {
        "type": "function",
        "name": "get_weather",
        "description": "Get current temperature for a given location.",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "City and country e.g. Bogotá, Colombia"
            }
          },
          "required": ["location"],
          "additionalProperties": false
        }
      }
    ]
  }'

Note: To use this feature, your subscription level must be 'Basic' or higher. To check whether a model supports function calling, visit the dashboard page and look for the function_calling parameter.
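The examples above stop at printing tool_calls. A full round trip executes the function locally and sends the result back to the model in a follow-up request. A sketch of that loop for the Chat Completions API, shown offline (the get_weather implementation is hypothetical; the tool-call dict mirrors the shape of completion.choices[0].message.tool_calls):

```python
import json

# Hypothetical local implementation of the get_weather tool declared above.
def get_weather(location):
    return {"location": location, "temperature_c": 21}

def handle_tool_call(tool_call):
    """Run one tool call and build the 'tool' message to send back."""
    args = json.loads(tool_call["function"]["arguments"])
    result = get_weather(**args)
    return {
        "role": "tool",
        "tool_call_id": tool_call["id"],
        "content": json.dumps(result),
    }

# Shape of one entry in completion.choices[0].message.tool_calls:
call = {
    "id": "call_123",
    "type": "function",
    "function": {"name": "get_weather", "arguments": '{"location": "Paris, France"}'},
}
tool_message = handle_tool_call(call)
# Append the assistant message and tool_message to `messages`, then call
# client.chat.completions.create again so the model can use the result.
print(tool_message["content"])
```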

Web Access

Web search allows models to access real-time information from the internet to provide relevant answers, check facts, and find details beyond their training data. All MNN models support this feature because we use a proprietary search engine. Example:

from openai import OpenAI

client = OpenAI(
    base_url="https://api2.mnnai.ru/v1"
)

response = client.responses.create(
    model="gpt-4o",
    tools=[{"type": "web_search_preview"}],
    input="What was a positive news story from today?"
)

print(response.output_text)
import OpenAI from "openai";

const client = new OpenAI({
    baseURL: "https://api2.mnnai.ru/v1",
    apiKey: "your-api-key"
});

const response = await client.responses.create({
    model: "gpt-4o",
    tools: [{ type: "web_search_preview" }],
    input: "What was a positive news story from today?",
});

console.log(response.output_text);
curl https://api2.mnnai.ru/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $MNN_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "tools": [{"type": "web_search_preview"}],
    "input": "What was a positive news story from today?"
  }'

Note: To use web search with the Chat Completions API, append -search to the model name, for example deepseek-v3-0324-search.
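A small helper can apply that suffix consistently; with_web_search below is a hypothetical convenience function, not part of any SDK:

```python
def with_web_search(model: str) -> str:
    """Append the -search suffix the note above describes (idempotent)."""
    return model if model.endswith("-search") else f"{model}-search"

# Use the result as the model in a normal Chat Completions request, e.g.
# client.chat.completions.create(model=with_web_search("deepseek-v3-0324"), ...)
print(with_web_search("deepseek-v3-0324"))  # deepseek-v3-0324-search
```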

Anthropic SDK

MNN is compatible with the Anthropic SDK. To use it, set the base_url to https://api2.mnnai.ru/ (the SDK appends the /v1/messages path itself) and use your MNN API key. This allows you to use the /v1/messages endpoint seamlessly.

import anthropic

client = anthropic.Anthropic(
    base_url="https://api2.mnnai.ru/",
    api_key="your-mnn-api-key"
)

message = client.messages.create(
    model="claude-4.6-sonnet",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello, Claude"}
    ]
)

print(message.content[0].text)
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic({
    baseURL: "https://api2.mnnai.ru/",
    apiKey: "your-mnn-api-key"
});

const message = await client.messages.create({
    model: "claude-4.6-sonnet",
    max_tokens: 1024,
    messages: [
        { role: "user", content: "Hello, Claude" }
    ],
});

console.log(message.content[0].text);
curl https://api2.mnnai.ru/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: $MNN_API_KEY" \
  -d '{
    "model": "claude-4.6-sonnet",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello, Claude"}]
  }'

Claude Code

Claude Code is a CLI tool that brings Claude's coding capabilities directly to your terminal. MNN supports Claude Code via the Anthropic API compatibility layer.

Setup

  1. Install Claude Code: npm install -g @anthropic-ai/claude-code
  2. Configure the environment variables to point to MNN:
export ANTHROPIC_BASE_URL="https://api2.mnnai.ru/"
export ANTHROPIC_API_KEY="your-mnn-api-key"

After setting these variables, you can run claude in your project directory to start using it with MNN.