Pagination - Limit/Offset vs Seek Method
Pagination is common in list APIs. Limit/Offset (e.g. LIMIT 10 OFFSET 100) is simple but becomes slow and unstable for large offsets because the database still scans and skips all previous rows. The seek method (cursor/keyset pagination) uses a condition on an indexed column (e.g. WHERE id > last_seen_id) so each page is a short index range scan. This article compares both and shows when to use which.
Overview
- Limit/Offset: LIMIT page_size OFFSET (page - 1) * page_size. Easy to implement and supports "jump to page N," but OFFSET 10000 means the engine must read and discard 10000 rows before returning the next 10.
- Seek method: Request the next page with "give me rows where id > last_id (or created_at < last_ts), order by id, limit 10." No offset; the query uses the index and stops after 10 rows. Stable for deep pagination and consistent under inserts and deletes if you use a stable ordering key.
Example
Example 1: Limit/Offset — simple but costly at large offset
```sql
-- Page 1
SELECT * FROM orders ORDER BY id LIMIT 10 OFFSET 0;

-- Page 1000: the database still scans 9990 rows, then returns 10
SELECT * FROM orders ORDER BY id LIMIT 10 OFFSET 9990;
```
- For large offsets, execution time and load grow with the offset. Rows can also shift between pages when data is inserted or deleted (e.g. a new row at the top pushes existing rows onto later pages).
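You can see the gap by comparing the two plans yourself; a minimal sketch, assuming a database that supports EXPLAIN ANALYZE (PostgreSQL, or MySQL 8.0.18+) and the orders table above:

```sql
-- Compare a shallow and a deep offset: the deep-offset plan typically reports
-- far more rows read/examined than the 10 rows actually returned.
EXPLAIN ANALYZE SELECT * FROM orders ORDER BY id LIMIT 10 OFFSET 0;
EXPLAIN ANALYZE SELECT * FROM orders ORDER BY id LIMIT 10 OFFSET 9990;
```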
Example 2: Seek method — cursor on primary key
```sql
-- First page
SELECT * FROM orders ORDER BY id LIMIT 10;

-- Return the last id to the client, e.g. last_id = 1005

-- Next page: no OFFSET, use the index range
SELECT * FROM orders WHERE id > 1005 ORDER BY id LIMIT 10;
```
- Each page is a small index range; cost is roughly constant regardless of how many "pages" the user has already seen. No jump-to-page-N unless you store cursors per page.
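A "previous page" works the same way in reverse; a minimal sketch, assuming the client also keeps the first id of the current page (:first_id is an illustrative parameter, not part of the examples above):

```sql
-- Previous page: seek backwards from the first id on the current page,
-- then re-sort ascending so rows come back in display order.
SELECT *
FROM (
  SELECT *
  FROM orders
  WHERE id < :first_id
  ORDER BY id DESC
  LIMIT 10
) AS prev_page
ORDER BY id;
```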
Example 3: Seek on time + id for uniqueness
```sql
-- Order by created_at, then id for stable ordering
SELECT * FROM events
WHERE (created_at, id) < (:last_ts, :last_id)
ORDER BY created_at DESC, id DESC
LIMIT 20;
```
- When multiple rows share the same created_at, use a second column (e.g. id) so the cursor is unique and no rows are skipped or duplicated.
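The row-value comparison (created_at, id) < (:last_ts, :last_id) is supported by PostgreSQL and MySQL but not by every database, and not every optimizer uses an index for it; an equivalent expanded form, as a sketch:

```sql
-- Same cursor condition written without tuple comparison:
-- strictly older rows, or rows with the same timestamp and a smaller id.
SELECT *
FROM events
WHERE created_at < :last_ts
   OR (created_at = :last_ts AND id < :last_id)
ORDER BY created_at DESC, id DESC
LIMIT 20;
```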
Core Mechanism / Behavior
- Limit/Offset: Executor may use a filesort or index for ORDER BY but still has to skip offset rows. Memory and I/O cost ≈ offset + limit.
- Seek: Uses the index on the cursor column(s). WHERE id > ? plus ORDER BY id LIMIT n becomes an index range scan that stops after n rows. Cost ≈ limit only.
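For the seek queries to be short range scans, the cursor columns need a matching index; a minimal sketch, assuming the orders and events tables from the examples (the index name is illustrative):

```sql
-- orders is paged by id, which the primary key already covers.
-- events is paged by (created_at DESC, id DESC); a composite index in that order
-- lets the engine walk the index in cursor order and stop after LIMIT rows.
-- (DESC index keys need MySQL 8.0+ or PostgreSQL; an ascending (created_at, id)
-- index can usually be scanned in reverse as well.)
CREATE INDEX idx_events_created_at_id ON events (created_at DESC, id DESC);
```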
| Aspect | Limit/Offset | Seek (cursor) |
|---|---|---|
| Deep pages | Slow, costly | Fast, constant cost |
| Jump to page N | Yes | No (need stored cursors) |
| Stability | Can shift | Stable if key is immutable |
| API | page, pageSize | cursor or lastId |
Key Rules
- Prefer the seek method for "next/previous" feeds (e.g. infinite scroll, activity lists). Use an indexed column (id, or (created_at, id)) and pass the last seen value as the cursor.
- Use limit/offset only for small offsets (e.g. first few pages) or when you truly need "page number" and accept the cost. Cap maximum offset or discourage deep paging.
- For seek, always order by the same columns used in the cursor condition and include a unique column (e.g. id) so ordering is deterministic and pages are consistent.
What's Next
See Index Fundamentals for B-Tree and covering indexes that make seek pagination efficient. See Slow Query Diagnosis for analyzing pagination query plans.