This repository was archived by the owner on Apr 7, 2026. It is now read-only.

Commit 9c835a7

docs: Add "What is Serverless?" tutorial and link in file handling guide

- Introduced a new tutorial titled "What is Serverless?" to explain serverless architecture for Flutter developers.
- Updated the file handling guide to include a link to the new tutorial for better context on serverless concepts.
1 parent 88bae35 commit 9c835a7

File tree

3 files changed: +122 −0 lines


docs.json

Lines changed: 4 additions & 0 deletions
@@ -473,6 +473,10 @@
       "title": "What is Backend Validation",
       "href": "/tutorials/what-is-backend-validation"
     },
+    {
+      "title": "What is Serverless",
+      "href": "/tutorials/what-is-serverless"
+    },
     {
       "title": "File Uploads in Serverless",
       "href": "/tutorials/serverless-file-handling"

docs/tutorials/serverless-file-handling.mdx

Lines changed: 1 addition & 0 deletions
@@ -193,6 +193,7 @@ Reserve presigned URLs for scenarios where their benefits outweigh the added com

 ## What's Next

+- **Understand the basics**: [What is Serverless?](/tutorials/what-is-serverless) explains core serverless concepts before diving into file-handling constraints.
 - **Build It**: Put these concepts into practice with [Handle File Uploads in Dart Frog with Cloudflare R2](/guides/handle-file-uploads).
 - **Add Authentication**: Protect your upload endpoints with [How to Secure Your Dart APIs on Globe](/guides/secure-dart-apis).
docs/tutorials/what-is-serverless.mdx

Lines changed: 117 additions & 0 deletions
@@ -0,0 +1,117 @@
---
title: "What is Serverless?"
description: "A plain-language guide to serverless architecture for Flutter developers building their first backend: what it means, how it works, and when to use it."
---

If you're a Flutter developer building an app that needs a backend, you've probably heard the term **serverless**. It sounds contradictory: how can you have a backend without a server? The name is a bit misleading. Serverless doesn't mean there are no servers; it means **you** don't manage them. The platform runs your code on demand, handles scaling, and bills you for what you use.

This tutorial explains what serverless actually means, how statelessness and cold starts affect your app, how scaling and pricing work, and when serverless is a good fit (and when it isn't).

**8 min read**

---

## What You Will Learn

- What "serverless" really means and why the name can be confusing
- Why serverless functions are stateless and what that implies for your design
- What cold starts are and how they affect latency
- How automatic scaling works and how pricing typically works
- When to choose serverless and when to consider alternatives

## Prerequisites

- Basic idea of client-server apps (e.g. a Flutter app calling an API)
- Familiarity with building or planning a backend for a mobile or web app
## Exploring Serverless

### 1. What "Serverless" Actually Means

In a traditional setup, you rent or own a server. You install the OS, run your Dart backend 24/7, and you're responsible for updates, scaling, and keeping it online. **Serverless** flips that: the platform owns the servers. You deploy your code; the platform runs it when a request comes in and shuts it down when it's idle.

Think of it like a gym. Traditional hosting is renting a private room and leaving your equipment there all the time. Serverless is using the gym's equipment only when you're working out. You don't maintain the building or the machines.

On Globe, you deploy a Dart backend (e.g. Dart Frog, Shelf, or Serverpod). Globe runs it in containers that start on demand, serve requests, and scale up or down automatically. You don't configure servers, load balancers, or scaling rules; Globe handles that.
### 2. Stateless Functions: No Memory Between Requests

Serverless runtimes run your code in short-lived execution environments, and each request may be handled by a different instance. **There is no durable in-memory state between requests.** A variable may survive while a container stays warm, but you can't rely on it: the next request may land on a fresh instance, and idle containers are eventually shut down.

Practical implications:

- **Sessions and user state** must live elsewhere: a database, Globe KV, or signed cookies/tokens the client sends back.
- **Caching** in memory only helps within a single request or a single container's lifetime; for shared or persistent cache, use a store like Globe KV or an external cache.
- **Connection pools** (e.g. to a database) are often recreated per container or request, so design for that.

This stateless model is why serverless backends rely on databases, key-value stores, and external APIs for anything that must persist or be shared across requests.
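The pitfall is easy to reproduce in plain Dart. The sketch below simulates two containers handling requests from the same user; `Instance` and its per-user counter are illustrative stand-ins, not a Globe or framework API:

```dart
// Plain-Dart sketch: why in-memory state is unreliable in serverless.
// Each Instance stands in for a separate serverless container.

class Instance {
  // Convenient, but lives only as long as this container does.
  final Map<String, int> visitCounts = {};

  int handleRequest(String userId) {
    visitCounts[userId] = (visitCounts[userId] ?? 0) + 1;
    return visitCounts[userId]!;
  }
}

void main() {
  final containerA = Instance();
  final containerB = Instance();

  // Two requests from the same user, routed to different containers:
  print(containerA.handleRequest('user-1')); // 1
  print(containerB.handleRequest('user-1')); // 1 again: B never saw the first request
}
```

In a real backend, the counter would live in a database or Globe KV keyed by user ID, so every container reads and writes the same value.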
### 3. Cold Starts: The First-Request Cost

When no instance of your backend has been used recently, the next request triggers a **cold start**: the platform allocates a container, loads the runtime and your code, and then runs your handler. That one-time cost adds latency to the first request, often a few hundred milliseconds. On Globe, cold starts are typically around 500ms and are being optimized over time.

After that, the instance is **warm** and can serve more requests with much lower latency until it goes idle and is shut down again.

So:

- **Cold start**: The first request (or the first after idle) pays the startup cost.
- **Warm**: Later requests on the same instance are fast.

Globe uses three start states (cold, warm, and hot) depending on whether a new container is created, a paused one is resumed, or a request is routed to an already-running container. See [Cold Starts](/infrastructure/overview/cold-starts) for Globe-specific details and how to minimize their impact.

For most APIs and web apps, cold starts are acceptable. For very latency-sensitive or real-time flows, you might need to keep traffic steady (e.g. health checks) or accept that the first user after idle may see a slightly slower response.
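A toy latency model makes the cold/warm split concrete. The ~500ms figure matches the rough Globe estimate above; the 20ms handler time is a made-up assumption:

```dart
// Toy model of request latency in a serverless backend.
// coldStartMs follows the rough estimate above; handlerMs is invented.

const coldStartMs = 500; // allocate container, load runtime and code
const handlerMs = 20;    // time the handler itself takes

int latencyMs({required bool coldStart}) =>
    (coldStart ? coldStartMs : 0) + handlerMs;

void main() {
  print('first request after idle: ${latencyMs(coldStart: true)} ms');  // 520 ms
  print('warm request:             ${latencyMs(coldStart: false)} ms'); // 20 ms
}
```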
### 4. Scaling: Automatic and Elastic

The platform scales out when traffic increases and scales in when it drops; you don't choose instance sizes or set thresholds.

- **No capacity planning**: You don't provision for peak load.
- **No idle cost at zero**: If no one calls your API, you don't pay for a sitting server.
- **Spikes are handled**: A burst of traffic is served by new instances instead of overloading one machine.

The tradeoff is less control over capacity and runtime; you gain simplicity and pay-per-use instead.
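To see what the platform automates on your behalf, here is the capacity math it effectively runs continuously. The 100 requests/second per-instance throughput is an assumption for illustration, not a Globe number:

```dart
import 'dart:math' as math;

// Back-of-the-envelope instance count for a given load. A serverless
// platform recomputes this continuously and automatically; the
// per-instance throughput here is an illustrative assumption.

int instancesNeeded({required int requestsPerSecond, int perInstanceRps = 100}) =>
    math.max(1, (requestsPerSecond / perInstanceRps).ceil());

void main() {
  print(instancesNeeded(requestsPerSecond: 30));   // quiet period: 1
  print(instancesNeeded(requestsPerSecond: 2500)); // traffic spike: 25
}
```

With traditional hosting you would have to provision for the spike (25 instances) all month; elastic scaling runs 1 when quiet and 25 only during the burst.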
### 5. Pricing Models: Pay for What You Use

Traditional hosting usually charges for a server (or VM) that runs 24/7, whether you get one request or a million. **Serverless pricing** is typically based on:

- **Requests**: Cost per invocation (e.g. per API call).
- **Execution time**: Cost per unit of time your code runs (e.g. per second or per 100 ms).

In practice:

- **Low or variable traffic**: You pay little when idle and more when traffic grows, often cheaper than a fixed server.
- **Steady, high traffic**: Cost can rise with volume; compare with reserved or always-on capacity if needed.

Globe's model fits this pattern: you're billed for usage (requests and compute) rather than for a reserved server. The exact details are in Globe's pricing and usage docs.
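The requests-plus-execution-time model reduces to simple arithmetic. Every rate below is a made-up example, not Globe's actual pricing:

```dart
// Sketch of typical serverless billing: requests + compute time.
// All rates are hypothetical examples, not Globe's actual prices.

double monthlyCost({
  required int requests,
  required double avgSecondsPerRequest,
  double perMillionRequests = 0.40, // hypothetical rate
  double perGbSecond = 0.000016,    // hypothetical rate
  double memoryGb = 0.25,           // hypothetical instance size
}) {
  final requestCost = requests / 1e6 * perMillionRequests;
  final computeCost = requests * avgSecondsPerRequest * memoryGb * perGbSecond;
  return requestCost + computeCost;
}

void main() {
  // 100k requests/month at 50ms each comes out to a few cents under
  // these rates, versus a flat monthly fee for an always-on VM.
  print(monthlyCost(requests: 100000, avgSecondsPerRequest: 0.05));
}
```

Note how idle months cost nothing under this model, while a very high steady volume could eventually exceed a flat-rate server, which is the tradeoff described above.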
### 6. When to Use Serverless (and When Not)

**Serverless is a good fit when:**

- You're building **APIs, webhooks, or small backends** that respond to HTTP requests.
- Traffic is **variable or unpredictable**; you want to avoid paying for idle capacity.
- You're a **small team or solo developer** and want to focus on app logic, not servers.
- You're already in the **Dart/Flutter ecosystem** and want a Dart-native backend (e.g. on Globe) that deploys and scales without DevOps.

**Consider alternatives when:**

- You need **long-running jobs** (e.g. batch processing for many minutes). Serverless time limits (e.g. the 30-second request timeout on Globe) may not accommodate them. Use a job queue and workers, or a different compute model.
- You have **very strict latency requirements** and cannot tolerate cold starts; you may need always-on instances or a way to keep functions warm.
- You rely heavily on **in-memory state** or complex in-process caching. Stateless serverless pushes you to external stores, which may change your design.

For many Flutter apps (REST APIs, auth, webhooks, server-rendered pages), serverless on Globe is a strong fit.
## What's Next

- **See constraints in practice**: [Understanding File Uploads in Serverless](/tutorials/serverless-file-handling) explains how the serverless model affects file handling and what patterns to use.
- **Globe-specific behavior**: [Cold Starts](/infrastructure/overview/cold-starts) and [Infrastructure Overview](/infrastructure/overview) for details on how Globe runs your code and how to optimize.
- **Deploy your backend**: Get started with the [Quickstart](/getting-started/quickstart) or deploy a [Dart Frog](/guides/dart-frog-deployment) or [Shelf](/guides/shelf-backend-globe) backend on Globe.
- **Connect your Flutter app**: Once your backend is on Globe, [connect your Flutter app](/guides/connect-flutter-app) to it for a full-stack setup.

---

<Info>
  Didn't find what you were looking for? [Talk to us on
  Discord](https://invertase.link/globe-discord)
</Info>
