Function Router
Access a wide selection of AI models on Function's managed inference service with intelligent routing for optimal performance.
Powerful conversational models built for agentic workloads, enabling dynamic interactions and automation.
1B+ Tokens Processed
50K+ Active Users
15+ Model Providers
100+ Available Models
Start using Function Router with just three simple steps:
1. Create your free account on the Function platform.
2. Try our interactive demos and discover the available models.
3. Generate your API key and start building immediately.
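Once you have an API key, you can also discover the catalog programmatically. This is a minimal sketch, assuming Function's endpoint exposes the standard OpenAI-compatible model listing route (/v1/models) and using the FXN_API_KEY environment variable from the example later on this page.

import OpenAI from 'openai';

// Point the OpenAI SDK at Function's endpoint (assumes an OpenAI-compatible /v1/models route)
const client = new OpenAI({
  apiKey: process.env.FXN_API_KEY,
  baseURL: 'https://api.function.network/v1',
});

// Print the ID of every model available to your account
for await (const model of client.models.list()) {
  console.log(model.id);
}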
Built for performance, reliability, and ease of use. Function Router provides the infrastructure you need for AI inference at scale.
Low-latency responses from our managed infrastructure.
Access a large selection of AI models through a single, unified API endpoint.
Run models on Function's fully managed infrastructure—no deployment required.
Pay less with dynamic pricing based on resource demand.
Get started with just a few lines of code. One API, all models.

import OpenAI from 'openai';

// Initialize the client against Function's OpenAI-compatible endpoint
const client = new OpenAI({
  apiKey: process.env.FXN_API_KEY,
  baseURL: 'https://api.function.network/v1',
});

// Create a chat completion request
const response = await client.chat.completions.create({
  model: 'fxn/Meta-Llama-3.3-70B-Instruct-Function',
  messages: [
    {
      role: 'system',
      content: 'You are a fitness coach. Provide personalized workout advice.',
    },
    {
      role: 'user',
      content: 'I want to build muscle but I only have 30 minutes a day to work out. What should I focus on?',
    },
  ],
});

console.log(response.choices[0].message.content);
Unified Access
Access Gemini, DeepSeek, Llama, and more through a single endpoint
Intelligent Selection
The router automatically selects the best model based on cost and performance
No Vendor Lock-in
Switch between providers without changing your code; see the example below
High Availability
Enterprise-grade reliability with intelligent load balancing
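Because every provider sits behind the same OpenAI-compatible endpoint, switching models is a one-line change to the model identifier. The sketch below repeats the client setup from the example above; the DeepSeek identifier is a hypothetical placeholder, so substitute the exact ID listed in the model catalog.

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.FXN_API_KEY,
  baseURL: 'https://api.function.network/v1',
});

// Same request shape for every provider; only the model string changes
const messages = [
  { role: 'user', content: 'Summarize the benefits of strength training in two sentences.' },
];

// Llama (model ID taken from the example above)
const llama = await client.chat.completions.create({
  model: 'fxn/Meta-Llama-3.3-70B-Instruct-Function',
  messages,
});

// DeepSeek (hypothetical ID; check the model catalog for the exact identifier)
const deepseek = await client.chat.completions.create({
  model: 'fxn/DeepSeek-V3',
  messages,
});

console.log(llama.choices[0].message.content);
console.log(deepseek.choices[0].message.content);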
Get started with Function Router and access a wide selection of AI models with managed inference today.
Start Building