# Server overview

Mastra runs as an HTTP server that exposes your agents, workflows, and other functionality as API endpoints. The server handles request routing, middleware execution, authentication, and streaming responses.

> **Info:** This page covers the [`server`](https://mastra.ai/reference/configuration) configuration options passed to the `Mastra` constructor. For running Mastra with your own HTTP server (Hono, Express, etc.), visit [Server Adapters](https://mastra.ai/docs/server/server-adapters).

## Server architecture

Mastra uses [Hono](https://hono.dev) as its underlying HTTP server framework. When you build a Mastra application using `mastra build`, it generates a Hono-based HTTP server in the `.mastra` directory.

The server provides:

- API endpoints for all registered agents and workflows
- Custom API routes and middleware
- Authentication across providers
- Request context for dynamic configuration
- Stream data redaction for secure responses

## Configuration

Configure the server by passing a `server` object to the `Mastra` constructor:

```typescript
import { Mastra } from '@mastra/core'

export const mastra = new Mastra({
  server: {
    port: 3000, // Defaults to PORT env var or 4111
    host: '0.0.0.0', // Defaults to MASTRA_HOST env var or 'localhost'
  },
})
```

> **Info:** Visit the [configuration reference](https://mastra.ai/reference/configuration) for a full list of available server options.

## Server features

- **[Middleware](https://mastra.ai/docs/server/middleware)**: Intercept requests for authentication, logging, CORS, or injecting request-specific context.
- **[Custom API Routes](https://mastra.ai/docs/server/custom-api-routes)**: Extend the server with your own HTTP endpoints that have access to the Mastra instance.
- **[Request Context](https://mastra.ai/docs/server/request-context)**: Pass request-specific values to agents, tools, and workflows based on runtime conditions.
- **[Server Adapters](https://mastra.ai/docs/server/server-adapters)**: Run Mastra with Express, Hono, or your own HTTP server instead of the generated server.
- **[Custom Adapters](https://mastra.ai/docs/server/custom-adapters)**: Build adapters for frameworks not officially supported.
- **[Mastra Client SDK](https://mastra.ai/docs/server/mastra-client)**: Type-safe client for calling agents, workflows, and tools from browser or server environments.
- **[Authentication](https://mastra.ai/docs/server/auth)**: Secure endpoints with JWT, Clerk, Supabase, Firebase, Auth0, or WorkOS.

## REST API

The OpenAPI specification at <http://localhost:4111/api/openapi.json> documents every available endpoint, including its request and response schemas.

To explore the API interactively, visit the Swagger UI at <http://localhost:4111/swagger-ui>. Here, you can discover endpoints and test them directly from your browser.

> **Note:** The OpenAPI and Swagger endpoints are disabled in production by default. To enable them, set [`server.build.openAPIDocs`](https://mastra.ai/reference/configuration) and [`server.build.swaggerUI`](https://mastra.ai/reference/configuration) to `true` respectively.
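You can also read the spec programmatically. The sketch below pulls the route list out of the OpenAPI document; it assumes only the standard OpenAPI `paths` shape.

```typescript
// Extract the route paths from an OpenAPI document.
function endpointPaths(spec: { paths?: Record<string, unknown> }): string[] {
  return Object.keys(spec.paths ?? {})
}

// Fetch the generated spec from a running dev server and list its routes.
async function listEndpoints(baseUrl: string): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/openapi.json`)
  return endpointPaths(await res.json())
}
```

With the dev server running, `listEndpoints('http://localhost:4111')` resolves to every route the server exposes.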

## OpenAI Responses API

Mastra exposes OpenAI-compatible Responses and Conversations routes that let you use Mastra agents through the Responses API shape. These routes are adapters over Mastra agents, memory, and storage, so requests run through the selected agent rather than through a raw provider proxy.

> **Note:** These APIs are currently experimental.

Use `agent_id` to select the Mastra agent that should handle the request. Initial requests target an agent directly, and stored follow-up turns can continue with `previous_response_id`. You can also pass `model` to override the agent's configured model for a single request. If you omit `model`, Mastra uses the model already configured on the agent.

The Responses routes support streaming, function calling (tools), stored continuations with `previous_response_id`, conversation threads through `conversation_id`, provider-specific passthrough with `providerOptions`, and JSON output through `text.format`.
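To make the request shape concrete, here is a minimal sketch of building and sending a Responses-style body. Only the fields mentioned above (`agent_id`, `input`, `previous_response_id`) are used, and the `/responses` path is an assumption for this sketch; the API reference defines the actual contract.

```typescript
// Build a Responses-style body targeting a Mastra agent.
function buildResponsesRequest(
  agentId: string,
  input: string,
  previousResponseId?: string,
) {
  return {
    agent_id: agentId, // selects which Mastra agent handles the request
    input,
    // Follow-up turns continue a stored response.
    ...(previousResponseId ? { previous_response_id: previousResponseId } : {}),
  }
}

// Send it to a running server (the /responses path here is an assumption).
async function createResponse(baseUrl: string, body: object) {
  const res = await fetch(`${baseUrl}/responses`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  })
  return res.json()
}
```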

For the full request and response contract, see the [Responses API reference](https://mastra.ai/reference/client-js/responses) and [Conversations API reference](https://mastra.ai/reference/client-js/conversations). For the complete list of HTTP routes, see [server routes](https://mastra.ai/reference/server/routes).

## Stream data redaction

When streaming agent responses, the HTTP layer redacts system prompts, tool definitions, API keys, and similar data from each chunk before sending it to clients. This is enabled by default.

This behavior can only be configured when using [server adapters](https://mastra.ai/docs/server/server-adapters); there, too, redaction is enabled by default.

## TypeScript configuration

Mastra requires `module` and `moduleResolution` settings compatible with modern Node.js. Legacy options like `CommonJS` or `node` aren't supported.

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "ES2022",
    "moduleResolution": "bundler",
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,
    "strict": true,
    "skipLibCheck": true,
    "noEmit": true,
    "outDir": "dist"
  },
  "include": ["src/**/*"]
}
```