Cleancache is a new optional feature provided by the VFS layer that
potentially dramatically increases page cache effectiveness for
many workloads in many environments at a negligible cost.

Cleancache can be thought of as a page-granularity victim cache for clean
pages that the kernel's pageframe replacement algorithm (PFRA) would like
to keep around, but can't since there isn't enough memory.  So when the
PFRA "evicts" a page, it first attempts to use cleancache code to
put the data contained in that page into "transcendent memory", memory
that is not directly accessible or addressable by the kernel and is
of unknown and possibly time-varying size.

Later, when a cleancache-enabled filesystem wishes to access a page
in a file on disk, it first checks cleancache to see if it already
contains it; if it does, the page of data is copied into the kernel
and a disk access is avoided.

Transcendent memory "drivers" for cleancache are currently implemented
in Xen (using hypervisor memory) and zcache (using in-kernel compressed
memory) and other implementations are in development.

:ref:`FAQs <faq>` are included below.

Implementation Overview
=======================

A cleancache "backend" that provides transcendent memory registers itself
to the kernel's cleancache "frontend" by calling cleancache_register_ops,
passing a pointer to a cleancache_ops structure with funcs set appropriately.
The functions provided must conform to certain semantics as follows:

Most important, cleancache is "ephemeral".  Pages which are copied into
cleancache have an indefinite lifetime which is completely unknowable
by the kernel and so may or may not still be in cleancache at any later time.
Thus, as its name implies, cleancache is not suitable for dirty pages.
Cleancache has complete discretion over what pages to preserve and what
pages to discard and when.

Mounting a cleancache-enabled filesystem should call "init_fs" to obtain a
pool id which, if positive, must be saved in the filesystem's superblock;
a negative return value indicates failure.  A "put_page" will copy a
(presumably about-to-be-evicted) page into cleancache and associate it with
the pool id, a file key, and a page index into the file.  (The combination
of a pool id, a file key, and an index is sometimes called a "handle".)
A "get_page" will copy the page, if found, from cleancache into kernel memory.
An "invalidate_page" will ensure the page no longer is present in cleancache;
an "invalidate_inode" will invalidate all pages associated with the specified
file; and, when a filesystem is unmounted, an "invalidate_fs" will invalidate
all pages in all files specified by the given pool id and also surrender
the pool id.
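
A hedged sketch of what a backend registration might look like follows.
The ``example_*`` functions are hypothetical placeholders, and the exact
cleancache_ops member signatures vary between kernel versions, so
``<linux/cleancache.h>`` in the target tree is authoritative::

  #include <linux/module.h>
  #include <linux/cleancache.h>

  /*
   * Hypothetical backend: only the operation names follow the text above;
   * the example_* implementations would live elsewhere in the backend
   * (e.g. storing pages in hypervisor or compressed memory).
   */
  static struct cleancache_ops example_ops = {
          .init_fs          = example_init_fs,
          .init_shared_fs   = example_init_shared_fs,
          .get_page         = example_get_page,
          .put_page         = example_put_page,
          .invalidate_page  = example_invalidate_page,
          .invalidate_inode = example_invalidate_inode,
          .invalidate_fs    = example_invalidate_fs,
  };

  static int __init example_backend_init(void)
  {
          cleancache_register_ops(&example_ops);
          return 0;
  }
  module_init(example_backend_init);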
61 An "init_shared_fs", like init_fs, obtains a pool id but tells cleancache
62 to treat the pool as shared using a 128-bit UUID as a key. On systems
63 that may run multiple kernels (such as hard partitioned or virtualized
64 systems) that may share a clustered filesystem, and where cleancache
65 may be shared among those kernels, calls to init_shared_fs that specify the
66 same UUID will receive the same pool id, thus allowing the pages to
67 be shared. Note that any security requirements must be imposed outside
68 of the kernel (e.g. by "tools" that control cleancache). Or a
69 cleancache implementation can simply disable shared_init by always
70 returning a negative value.
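
For instance, a backend that does not want to support shared pools can
simply decline every such call.  A minimal sketch, assuming the UUID is
passed as a byte string (the exact signature differs by kernel version)::

  /* Hypothetical backend op: refuse all shared pools. */
  static int example_init_shared_fs(char *uuid, size_t pagesize)
  {
          return -1;      /* negative pool id: shared pools unsupported */
  }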

If a get_page is successful on a non-shared pool, the page is invalidated
(thus making cleancache an "exclusive" cache).  On a shared pool, the page
is NOT invalidated on a successful get_page so that it remains accessible to
other sharers.  The kernel is responsible for ensuring coherency between
cleancache (shared or not), the page cache, and the filesystem, using
cleancache invalidate operations as required.

Note that cleancache must enforce put-put-get coherency and get-get
coherency.  For the former, if two puts are made to the same handle but
with different data, say AAA by the first put and BBB by the second, a
subsequent get can never return the stale data (AAA).  For get-get coherency,
if a get for a given handle fails, subsequent gets for that handle will
never succeed unless preceded by a successful put with that handle.
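
One way a backend might honour put-put-get coherency is sketched below;
the ``example_*`` helpers are purely illustrative names for the backend's
internal storage, not existing kernel functions.  Get-get coherency then
follows as long as a failed get never fabricates an entry::

  #include <linux/cleancache.h>

  static void example_put_page(int pool, struct cleancache_filekey key,
                               pgoff_t index, struct page *page)
  {
          struct example_obj *obj = example_lookup(pool, key, index);

          if (obj) {
                  /*
                   * Second put to the same handle: the old data must never
                   * be returned by a later get, so overwrite it in place,
                   * or drop the entry entirely if that is not possible.
                   */
                  if (example_replace(obj, page) < 0)
                          example_delete(obj);
          } else {
                  example_insert(pool, key, index, page);
          }
  }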

Last, cleancache provides no SMP serialization guarantees; if two
different Linux threads are simultaneously putting and invalidating a page
with the same handle, the results are indeterminate.  Callers must
lock the page to ensure serial behavior.
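
The VFS hook sites already provide this by operating on locked pages; an
illustrative caller-side pattern (a sketch, not a new API) would be::

  #include <linux/pagemap.h>
  #include <linux/cleancache.h>

  /* Hold the page lock across the cleancache call so that a concurrent
   * put and invalidate on the same handle cannot interleave. */
  static void example_put_clean_page(struct page *page)
  {
          lock_page(page);
          if (PageUptodate(page) && !PageDirty(page))
                  cleancache_put_page(page);
          unlock_page(page);
  }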

Cleancache Performance Metrics
==============================

If properly configured, monitoring of cleancache is done via debugfs in
the ``/sys/kernel/debug/cleancache`` directory.  The effectiveness of
cleancache can be measured (across all filesystems) with:

``succ_gets``
  number of gets that were successful

``failed_gets``
  number of gets that failed

``puts``
  number of puts attempted (all "succeed")

``invalidates``
  number of invalidates attempted

A backend implementation may provide additional metrics.

.. _faq:

FAQ
===

* Where's the value? (Andrew Morton)

Cleancache provides a significant performance benefit to many workloads
in many environments with negligible overhead by improving the
effectiveness of the pagecache.  Clean pagecache pages are
saved in transcendent memory (RAM that is otherwise not directly
addressable to the kernel); fetching those pages later avoids "refaults"
and thus disk reads.

Cleancache (and its sister code "frontswap") provide interfaces for
this transcendent memory (aka "tmem"), which conceptually lies between
fast kernel-directly-addressable RAM and slower DMA/asynchronous devices.
Disallowing direct kernel or userland reads/writes to tmem
is ideal when data is transformed to a different form and size (such
as with compression) or secretly moved (as might be useful for write-
balancing for some RAM-like devices).  Evicted page-cache pages (and
swap pages) are a great use for this kind of slower-than-RAM-but-much-
faster-than-disk transcendent memory, and the cleancache (and frontswap)
"page-object-oriented" specification provides a nice way to read and
write -- and indirectly "name" -- the pages.

In the virtual case, the whole point of virtualization is to statistically
multiplex physical resources across the varying demands of multiple
virtual machines.  This is really hard to do with RAM and efforts to
do it well with no kernel change have essentially failed (except in some
well-publicized special-case workloads).  Cleancache -- and frontswap --
with a fairly small impact on the kernel, provide a huge amount
of flexibility for more dynamic, flexible RAM multiplexing.
Specifically, the Xen Transcendent Memory backend allows otherwise
"fallow" hypervisor-owned RAM to not only be "time-shared" between multiple
virtual machines, but the pages can be compressed and deduplicated to
optimize RAM utilization.  And when guest OS's are induced to surrender
underutilized RAM (e.g. with "self-ballooning"), page cache pages
are the first to go, and cleancache allows those pages to be
saved and reclaimed if overall host system memory conditions allow.

And the identical interface used for cleancache can be used in
physical systems as well.  The zcache driver acts as a memory-hungry
device that stores pages of data in a compressed state.  And
the proposed "RAMster" driver shares RAM across multiple physical
machines.

* Why does cleancache have its sticky fingers so deep inside the
  filesystems and VFS? (Andrew Morton and Christoph Hellwig)

The core hooks for cleancache in VFS are in most cases a single line
and the minimum set are placed precisely where needed to maintain
coherency (via cleancache_invalidate operations) between cleancache,
the page cache, and disk.  All hooks compile into nothingness if
cleancache is config'ed off and turn into a function-pointer-
compare-to-NULL if config'ed on but no backend claims the ops
functions, or to a compare-struct-element-to-negative if a
backend claims the ops functions but a filesystem doesn't enable
cleancache.
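
The pattern those hooks follow can be sketched roughly as below.  This is
illustrative rather than a copy of ``include/linux/cleancache.h``, whose
exact helpers and naming differ::

  #ifdef CONFIG_CLEANCACHE
  /* Config'ed on: a flag test plus a per-superblock pool-id comparison. */
  static inline void example_hook_put_page(struct page *page)
  {
          struct super_block *sb = page->mapping->host->i_sb;

          if (cleancache_enabled && sb->cleancache_poolid >= 0)
                  __cleancache_put_page(page);    /* real work out of line */
  }
  #else
  /* Config'ed off: the hook compiles into nothingness. */
  static inline void example_hook_put_page(struct page *page)
  {
  }
  #endif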

Some filesystems are built entirely on top of VFS and the hooks
in VFS are sufficient, so don't require an "init_fs" hook; the
initial implementation of cleancache didn't provide this hook.
But for some filesystems (such as btrfs), the VFS hooks are
incomplete and one or more hooks in fs-specific code are required.
And for some other filesystems, such as tmpfs, cleancache may
be counterproductive.  So it seemed prudent to require a filesystem
to "opt in" to use cleancache, which requires adding a hook in
each filesystem.  Not all filesystems are supported by cleancache
only because they haven't been tested.  The existing set should
be sufficient to validate the concept, the opt-in approach means
that untested filesystems are not affected, and the hooks in the
existing filesystems should make it very easy to add more
filesystems in the future.

The total impact of the hooks to existing fs and mm files is only
about 40 lines added (not counting comments and blank lines).

* Why not make cleancache asynchronous and batched so it can more
  easily interface with real devices with DMA instead of copying each
  individual page? (Minchan Kim)

The one-page-at-a-time copy semantics simplifies the implementation
on both the frontend and backend and also allows the backend to
do fancy things on-the-fly like page compression and
page deduplication.  And since the data is "gone" (copied into/out
of the pageframe) before the cleancache get/put call returns,
many race conditions and potential coherency issues
are avoided.  While the interface seems odd for a "real device"
or for real kernel-addressable RAM, it makes perfect sense for
transcendent memory.

* Why is non-shared cleancache "exclusive"?  And where is the
  page "invalidated" after a "get"? (Minchan Kim)

The main reason is to free up space in transcendent memory and
to avoid unnecessary cleancache_invalidate calls.  If you want inclusive,
the page can be "put" immediately following the "get".  If
put-after-get for inclusive becomes common, the interface could
be easily extended to add a "get_no_invalidate" call.

The invalidate is done by the cleancache backend implementation.
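
A hedged sketch of that put-after-get suggestion, using the existing
VFS-facing hooks (the wrapper name is hypothetical)::

  #include <linux/cleancache.h>

  /* Emulate inclusive behaviour on a non-shared pool: a successful get
   * removes the copy from cleancache, so put it straight back. */
  static int example_get_page_inclusive(struct page *page)
  {
          int ret = cleancache_get_page(page);

          if (ret == 0)
                  cleancache_put_page(page);
          return ret;
  }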

* What's the performance impact?

Performance analysis has been presented at OLS'09 and LCA'10.
Briefly, performance gains can be significant on most workloads,
especially when memory pressure is high (e.g. when RAM is
overcommitted in a virtual workload); and because the hooks are
invoked primarily in place of or in addition to a disk read/write,
overhead is negligible even in worst case workloads.  Basically
cleancache replaces I/O with memory-copy-CPU-overhead; on older
single-core systems with slow memory-copy speeds, cleancache
has little value, but in newer multicore machines, especially
consolidated/virtualized machines, it has great value.

* How do I add cleancache support for filesystem X? (Boaz Harrash)

Filesystems that are well-behaved and conform to certain
restrictions can utilize cleancache simply by making a call to
cleancache_init_fs at mount time (a sketch of such a call follows the
list below).  Unusual, misbehaving, or poorly layered filesystems must
either add additional hooks and/or undergo extensive additional
testing... or should just not enable the optional cleancache.

Some points for a filesystem to consider:

- The FS should be block-device-based (e.g. a ram-based FS such
  as tmpfs should not enable cleancache)
- To ensure coherency/correctness, the FS must ensure that all
  file removal or truncation operations either go through VFS or
  add hooks to do the equivalent cleancache "invalidate" operations
- To ensure coherency/correctness, either inode numbers must
  be unique across the lifetime of the on-disk file OR the
  FS must provide an "encode_fh" function.
- The FS must call the VFS superblock alloc and deactivate routines
  or add hooks to do the equivalent cleancache calls done there.
- To maximize performance, all pages fetched from the FS should
  go through the do_mpage_readpage routine or the FS should add
  hooks to do the equivalent (cf. btrfs)
- Currently, the FS blocksize must be the same as PAGESIZE.  This
  is not an architectural restriction, but no backends currently
  support anything different.
- A clustered FS should invoke the "init_shared_fs" cleancache
  hook to get best performance for some backends.
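
A minimal opt-in sketch follows.  It assumes a cleancache_init_fs()
helper that takes the superblock and records the pool id in
``sb->cleancache_poolid`` (a negative id simply leaves cleancache disabled
for that mount); the exact helper signature has changed over time, so the
target kernel tree is authoritative, and everything named ``examplefs_*``
is hypothetical::

  #include <linux/fs.h>
  #include <linux/cleancache.h>

  static int examplefs_fill_super(struct super_block *sb, void *data,
                                  int silent)
  {
          /* ... normal superblock setup (root inode, s_op, etc.) ... */

          /*
           * Opt in to cleancache.  Harmless if no backend is registered:
           * the pool id stays negative and all hooks remain no-ops.
           */
          cleancache_init_fs(sb);

          return 0;
  }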

* Why not use the KVA of the inode as the key? (Christoph Hellwig)

If cleancache would use the inode virtual address instead of
inode/filehandle, the pool id could be eliminated.  But, this
won't work because cleancache retains pagecache data pages
persistently even when the inode has been pruned from the
inode unused list, and only invalidates the data page if the file
gets removed/truncated.  So if cleancache used the inode kva,
there would be potential coherency issues if/when the inode
kva is reused for a different file.  Alternately, if cleancache
invalidated the pages when the inode kva was freed, much of the value
of cleancache would be lost because the cache of pages in cleancache
is potentially much larger than the kernel pagecache and is most
useful if the pages survive inode cache removal.

* Why is a global variable required?

The cleancache_enabled flag is checked in all of the frequently-used
cleancache hooks.  The alternative is a function call to check a static
variable.  Since cleancache is enabled dynamically at runtime, systems
that don't enable cleancache would suffer thousands (possibly
tens-of-thousands) of unnecessary function calls per second.  So the
global variable allows cleancache to be enabled by default at compile
time, but have insignificant performance impact when cleancache remains
disabled at runtime.

* Does cleancache work with KVM?

The memory model of KVM is sufficiently different that a cleancache
backend may have less value for KVM.  This remains to be tested,
especially in an overcommitted system.

* Does cleancache work in userspace?  It sounds useful for
  memory hungry caches like web browsers. (Jamie Lokier)

No plans yet, though we agree it sounds useful, at least for
apps that bypass the page cache (e.g. O_DIRECT).

Last updated: Dan Magenheimer, April 13 2011