VisionFive2 Linux kernel

StarFive Tech Linux Kernel for VisionFive (JH7110) boards (mirror)

// SPDX-License-Identifier: GPL-2.0
/*
 * linux/mm/slab.c
 * Written by Mark Hemment, 1996/97.
 * (markhe@nextd.demon.co.uk)
 *
 * kmem_cache_destroy() + some cleanup - 1999 Andrea Arcangeli
 *
 * Major cleanup, different bufctl logic, per-cpu arrays
 *	(c) 2000 Manfred Spraul
 *
 * Cleanup, make the head arrays unconditional, preparation for NUMA
 *	(c) 2002 Manfred Spraul
 *
 * An implementation of the Slab Allocator as described in outline in:
 *	UNIX Internals: The New Frontiers by Uresh Vahalia
 *	Pub: Prentice Hall	ISBN 0-13-101908-2
 * or with a little more detail in:
 *	The Slab Allocator: An Object-Caching Kernel Memory Allocator
 *	Jeff Bonwick (Sun Microsystems).
 *	Presented at: USENIX Summer 1994 Technical Conference
 *
 * The memory is organized in caches, one cache for each object type.
 * (e.g. inode_cache, dentry_cache, buffer_head, vm_area_struct)
 * Each cache consists of many slabs (they are small (usually one
 * page long) and always contiguous), and each slab contains multiple
 * initialized objects.
 *
 * This means that your constructor is used only for newly allocated
 * slabs and you must pass objects with the same initializations to
 * kmem_cache_free.
 *
 * Each cache can only support one memory type (GFP_DMA, GFP_HIGHMEM,
 * normal). If you need a special memory type, then you must create a new
 * cache for that memory type.
 *
 * In order to reduce fragmentation, the slabs are sorted into 3 groups:
 *   full slabs with 0 free objects
 *   partial slabs
 *   empty slabs with no allocated objects
 *
 * If partial slabs exist, then new allocations come from these slabs,
 * otherwise from empty slabs or new slabs are allocated.
 *
 * kmem_cache_destroy() CAN CRASH if you try to allocate from the cache
 * during kmem_cache_destroy(). The caller must prevent concurrent allocs.
 *
 * Each cache has a short per-cpu head array; most allocs
 * and frees go into that array, and if that array overflows, then 1/2
 * of the entries in the array are given back into the global cache.
 * The head array is strictly LIFO and should improve the cache hit rates.
 * On SMP, it additionally reduces the spinlock operations.
 *
 * The c_cpuarray may not be read with enabled local interrupts -
 * it's changed with a smp_call_function().
 *
 * SMP synchronization:
 *  constructors and destructors are called without any locking.
 *  Several members in struct kmem_cache and struct slab never change, they
 *	are accessed without any locking.
 *  The per-cpu arrays are never accessed from the wrong cpu, no locking,
 *	and local interrupts are disabled so slab code is preempt-safe.
 *  The non-constant members are protected with a per-cache irq spinlock.
 *
 * Many thanks to Mark Hemment, who wrote another per-cpu slab patch
 * in 2000 - many ideas in the current implementation are derived from
 * his patch.
 *
 * Further notes from the original documentation:
 *
 * 11 April '97.  Started multi-threading - markhe
 *	The global cache-chain is protected by the mutex 'slab_mutex'.
 *	The mutex is only needed when accessing/extending the cache-chain, which
 *	can never happen inside an interrupt (kmem_cache_create(),
 *	kmem_cache_shrink() and kmem_cache_reap()).
 *
 *	At present, each engine can be growing a cache.  This should be blocked.
 *
 * 15 March 2005. NUMA slab allocator.
 *	Shai Fultheim <shai@scalex86.org>.
 *	Shobhit Dayal <shobhit@calsoftinc.com>
 *	Alok N Kataria <alokk@calsoftinc.com>
 *	Christoph Lameter <christoph@lameter.com>
 *
 *	Modified the slab allocator to be node aware on NUMA systems.
 *	Each node has its own list of partial, free and full slabs.
 *	All object allocations for a node occur from node specific slab lists.
 */
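
/*
 * Illustrative sketch (not part of the original file) of the life cycle
 * described above, using the public slab API; the "foo" cache and
 * struct foo are hypothetical:
 *
 *	static struct kmem_cache *foo_cache;
 *
 *	foo_cache = kmem_cache_create("foo", sizeof(struct foo), 0,
 *				      SLAB_HWCACHE_ALIGN, NULL);
 *	struct foo *p = kmem_cache_alloc(foo_cache, GFP_KERNEL);
 *	...
 *	kmem_cache_free(foo_cache, p);
 *	kmem_cache_destroy(foo_cache);
 *
 * (As noted above, the caller must prevent allocations concurrent with
 * kmem_cache_destroy().)
 */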

#include	<linux/slab.h>
#include	<linux/mm.h>
#include	<linux/poison.h>
#include	<linux/swap.h>
#include	<linux/cache.h>
#include	<linux/interrupt.h>
#include	<linux/init.h>
#include	<linux/compiler.h>
#include	<linux/cpuset.h>
#include	<linux/proc_fs.h>
#include	<linux/seq_file.h>
#include	<linux/notifier.h>
#include	<linux/kallsyms.h>
#include	<linux/kfence.h>
#include	<linux/cpu.h>
#include	<linux/sysctl.h>
#include	<linux/module.h>
#include	<linux/rcupdate.h>
#include	<linux/string.h>
#include	<linux/uaccess.h>
#include	<linux/nodemask.h>
#include	<linux/kmemleak.h>
#include	<linux/mempolicy.h>
#include	<linux/mutex.h>
#include	<linux/fault-inject.h>
#include	<linux/rtmutex.h>
#include	<linux/reciprocal_div.h>
#include	<linux/debugobjects.h>
#include	<linux/memory.h>
#include	<linux/prefetch.h>
#include	<linux/sched/task_stack.h>

#include	<net/sock.h>

#include	<asm/cacheflush.h>
#include	<asm/tlbflush.h>
#include	<asm/page.h>

#include <trace/events/kmem.h>

#include	"internal.h"

#include	"slab.h"

/*
 * DEBUG	- 1 for kmem_cache_create() to honour SLAB_RED_ZONE & SLAB_POISON.
 *		  0 for faster, smaller code (especially in the critical paths).
 *
 * STATS	- 1 to collect stats for /proc/slabinfo.
 *		  0 for faster, smaller code (especially in the critical paths).
 *
 * FORCED_DEBUG	- 1 enables SLAB_RED_ZONE and SLAB_POISON (if possible)
 */

#ifdef CONFIG_DEBUG_SLAB
#define	DEBUG		1
#define	STATS		1
#define	FORCED_DEBUG	1
#else
#define	DEBUG		0
#define	STATS		0
#define	FORCED_DEBUG	0
#endif
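
/*
 * Example: a kernel built with CONFIG_DEBUG_SLAB=y therefore compiles with
 * DEBUG, STATS and FORCED_DEBUG all set to 1 - red zones, poisoning and
 * /proc/slabinfo statistics everywhere - at a code-size and speed cost.
 */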

/* Shouldn't this be in a header file somewhere? */
#define	BYTES_PER_WORD		sizeof(void *)
#define	REDZONE_ALIGN		max(BYTES_PER_WORD, __alignof__(unsigned long long))

#ifndef ARCH_KMALLOC_FLAGS
#define ARCH_KMALLOC_FLAGS SLAB_HWCACHE_ALIGN
#endif

#define FREELIST_BYTE_INDEX (((PAGE_SIZE >> BITS_PER_BYTE) \
				<= SLAB_OBJ_MIN_SIZE) ? 1 : 0)

#if FREELIST_BYTE_INDEX
typedef unsigned char freelist_idx_t;
#else
typedef unsigned short freelist_idx_t;
#endif

#define SLAB_OBJ_MAX_NUM ((1 << sizeof(freelist_idx_t) * BITS_PER_BYTE) - 1)

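/*
 * Worked example (illustrative, assuming 4 KiB pages, BITS_PER_BYTE == 8):
 * PAGE_SIZE >> BITS_PER_BYTE == 16, so the one-byte index is chosen
 * whenever SLAB_OBJ_MIN_SIZE >= 16. An order-0 slab then holds at most
 * 4096 / 16 == 256 objects, and indices 0..255 fit in an unsigned char;
 * calculate_slab_order() additionally rejects layouts with more than
 * SLAB_OBJ_MAX_NUM objects per slab, so the index type cannot overflow.
 */
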
/*
 * struct array_cache
 *
 * Purpose:
 * - LIFO ordering, to hand out cache-warm objects from _alloc
 * - reduce the number of linked list operations
 * - reduce spinlock operations
 *
 * The limit is stored in the per-cpu structure to reduce the data cache
 * footprint.
 */
struct array_cache {
	unsigned int avail;
	unsigned int limit;
	unsigned int batchcount;
	unsigned int touched;
	void *entry[];	/*
			 * Must have this definition in here for the proper
			 * alignment of array_cache. Also simplifies accessing
			 * the entries.
			 */
};

struct alien_cache {
	spinlock_t lock;
	struct array_cache ac;
};
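
/*
 * Sketch (not from the original file) of how the entry[] stack is used
 * by the fast paths later in this file - frees push, allocs pop, both
 * with local interrupts disabled:
 *
 *	ac->entry[ac->avail++] = objp;		(free)
 *	objp = ac->entry[--ac->avail];		(alloc)
 *	ac->touched = 1;
 */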

/*
 * Need this for bootstrapping a per node allocator.
 */
#define NUM_INIT_LISTS (2 * MAX_NUMNODES)
static struct kmem_cache_node __initdata init_kmem_cache_node[NUM_INIT_LISTS];
#define	CACHE_CACHE 0
#define	SIZE_NODE (MAX_NUMNODES)

static int drain_freelist(struct kmem_cache *cache,
			struct kmem_cache_node *n, int tofree);
static void free_block(struct kmem_cache *cachep, void **objpp, int len,
			int node, struct list_head *list);
static void slabs_destroy(struct kmem_cache *cachep, struct list_head *list);
static int enable_cpucache(struct kmem_cache *cachep, gfp_t gfp);
static void cache_reap(struct work_struct *unused);

static inline void fixup_objfreelist_debug(struct kmem_cache *cachep,
						void **list);
static inline void fixup_slab_list(struct kmem_cache *cachep,
				struct kmem_cache_node *n, struct page *page,
				void **list);
static int slab_early_init = 1;

#define INDEX_NODE kmalloc_index(sizeof(struct kmem_cache_node))

static void kmem_cache_node_init(struct kmem_cache_node *parent)
{
	INIT_LIST_HEAD(&parent->slabs_full);
	INIT_LIST_HEAD(&parent->slabs_partial);
	INIT_LIST_HEAD(&parent->slabs_free);
	parent->total_slabs = 0;
	parent->free_slabs = 0;
	parent->shared = NULL;
	parent->alien = NULL;
	parent->colour_next = 0;
	spin_lock_init(&parent->list_lock);
	parent->free_objects = 0;
	parent->free_touched = 0;
}

#define MAKE_LIST(cachep, listp, slab, nodeid)				\
	do {								\
		INIT_LIST_HEAD(listp);					\
		list_splice(&get_node(cachep, nodeid)->slab, listp);	\
	} while (0)

#define	MAKE_ALL_LISTS(cachep, ptr, nodeid)				\
	do {								\
	MAKE_LIST((cachep), (&(ptr)->slabs_full), slabs_full, nodeid);	\
	MAKE_LIST((cachep), (&(ptr)->slabs_partial), slabs_partial, nodeid); \
	MAKE_LIST((cachep), (&(ptr)->slabs_free), slabs_free, nodeid);	\
	} while (0)

#define CFLGS_OBJFREELIST_SLAB	((slab_flags_t __force)0x40000000U)
#define CFLGS_OFF_SLAB		((slab_flags_t __force)0x80000000U)
#define	OBJFREELIST_SLAB(x)	((x)->flags & CFLGS_OBJFREELIST_SLAB)
#define	OFF_SLAB(x)	((x)->flags & CFLGS_OFF_SLAB)

#define BATCHREFILL_LIMIT	16
/*
 * Optimization question: fewer reaps mean a lower probability of unnecessary
 * cpucache drain/refill cycles.
 *
 * OTOH the cpuarrays can contain lots of objects,
 * which could lock up otherwise freeable slabs.
 */
#define REAPTIMEOUT_AC		(2*HZ)
#define REAPTIMEOUT_NODE	(4*HZ)

#if STATS
#define	STATS_INC_ACTIVE(x)	((x)->num_active++)
#define	STATS_DEC_ACTIVE(x)	((x)->num_active--)
#define	STATS_INC_ALLOCED(x)	((x)->num_allocations++)
#define	STATS_INC_GROWN(x)	((x)->grown++)
#define	STATS_ADD_REAPED(x, y)	((x)->reaped += (y))
#define	STATS_SET_HIGH(x)						\
	do {								\
		if ((x)->num_active > (x)->high_mark)			\
			(x)->high_mark = (x)->num_active;		\
	} while (0)
#define	STATS_INC_ERR(x)	((x)->errors++)
#define	STATS_INC_NODEALLOCS(x)	((x)->node_allocs++)
#define	STATS_INC_NODEFREES(x)	((x)->node_frees++)
#define STATS_INC_ACOVERFLOW(x)   ((x)->node_overflow++)
#define	STATS_SET_FREEABLE(x, i)					\
	do {								\
		if ((x)->max_freeable < i)				\
			(x)->max_freeable = i;				\
	} while (0)
#define STATS_INC_ALLOCHIT(x)	atomic_inc(&(x)->allochit)
#define STATS_INC_ALLOCMISS(x)	atomic_inc(&(x)->allocmiss)
#define STATS_INC_FREEHIT(x)	atomic_inc(&(x)->freehit)
#define STATS_INC_FREEMISS(x)	atomic_inc(&(x)->freemiss)
#else
#define	STATS_INC_ACTIVE(x)	do { } while (0)
#define	STATS_DEC_ACTIVE(x)	do { } while (0)
#define	STATS_INC_ALLOCED(x)	do { } while (0)
#define	STATS_INC_GROWN(x)	do { } while (0)
#define	STATS_ADD_REAPED(x, y)	do { (void)(y); } while (0)
#define	STATS_SET_HIGH(x)	do { } while (0)
#define	STATS_INC_ERR(x)	do { } while (0)
#define	STATS_INC_NODEALLOCS(x)	do { } while (0)
#define	STATS_INC_NODEFREES(x)	do { } while (0)
#define STATS_INC_ACOVERFLOW(x)   do { } while (0)
#define	STATS_SET_FREEABLE(x, i) do { } while (0)
#define STATS_INC_ALLOCHIT(x)	do { } while (0)
#define STATS_INC_ALLOCMISS(x)	do { } while (0)
#define STATS_INC_FREEHIT(x)	do { } while (0)
#define STATS_INC_FREEMISS(x)	do { } while (0)
#endif

#if DEBUG

/*
 * memory layout of objects:
 * 0		: objp
 * 0 .. cachep->obj_offset - BYTES_PER_WORD - 1: padding. This ensures that
 * 		the end of an object is aligned with the end of the real
 * 		allocation. Catches writes behind the end of the allocation.
 * cachep->obj_offset - BYTES_PER_WORD .. cachep->obj_offset - 1:
 * 		redzone word.
 * cachep->obj_offset: The real object.
 * cachep->size - 2 * BYTES_PER_WORD: redzone word [BYTES_PER_WORD long]
 * cachep->size - 1 * BYTES_PER_WORD: last caller address
 *					[BYTES_PER_WORD long]
 */
static int obj_offset(struct kmem_cache *cachep)
{
	return cachep->obj_offset;
}

static unsigned long long *dbg_redzone1(struct kmem_cache *cachep, void *objp)
{
	BUG_ON(!(cachep->flags & SLAB_RED_ZONE));
	return (unsigned long long *) (objp + obj_offset(cachep) -
				      sizeof(unsigned long long));
}

static unsigned long long *dbg_redzone2(struct kmem_cache *cachep, void *objp)
{
	BUG_ON(!(cachep->flags & SLAB_RED_ZONE));
	if (cachep->flags & SLAB_STORE_USER)
		return (unsigned long long *)(objp + cachep->size -
					      sizeof(unsigned long long) -
					      REDZONE_ALIGN);
	return (unsigned long long *) (objp + cachep->size -
				       sizeof(unsigned long long));
}

static void **dbg_userword(struct kmem_cache *cachep, void *objp)
{
	BUG_ON(!(cachep->flags & SLAB_STORE_USER));
	return (void **)(objp + cachep->size - BYTES_PER_WORD);
}

#else

#define obj_offset(x)			0
#define dbg_redzone1(cachep, objp)	({BUG(); (unsigned long long *)NULL;})
#define dbg_redzone2(cachep, objp)	({BUG(); (unsigned long long *)NULL;})
#define dbg_userword(cachep, objp)	({BUG(); (void **)NULL;})

#endif

/*
 * Do not go above this order unless 0 objects fit into the slab or
 * it is overridden on the command line.
 */
#define	SLAB_MAX_ORDER_HI	1
#define	SLAB_MAX_ORDER_LO	0
static int slab_max_order = SLAB_MAX_ORDER_LO;
static bool slab_max_order_set __initdata;

static inline void *index_to_obj(struct kmem_cache *cache, struct page *page,
				 unsigned int idx)
{
	return page->s_mem + cache->size * idx;
}
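
/*
 * Example: with cache->size == 256, object 3 of a slab lives at
 * page->s_mem + 3 * 256 == page->s_mem + 768.
 */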

#define BOOT_CPUCACHE_ENTRIES	1
/* internal cache of cache description objs */
static struct kmem_cache kmem_cache_boot = {
	.batchcount = 1,
	.limit = BOOT_CPUCACHE_ENTRIES,
	.shared = 1,
	.size = sizeof(struct kmem_cache),
	.name = "kmem_cache",
};

static DEFINE_PER_CPU(struct delayed_work, slab_reap_work);

static inline struct array_cache *cpu_cache_get(struct kmem_cache *cachep)
{
	return this_cpu_ptr(cachep->cpu_cache);
}

/*
 * Calculate the number of objects and left-over bytes for a given buffer size.
 */
static unsigned int cache_estimate(unsigned long gfporder, size_t buffer_size,
		slab_flags_t flags, size_t *left_over)
{
	unsigned int num;
	size_t slab_size = PAGE_SIZE << gfporder;

	/*
	 * The slab management structure can be either off the slab or
	 * on it. For the latter case, the memory allocated for a
	 * slab is used for:
	 *
	 * - @buffer_size bytes for each object
	 * - One freelist_idx_t for each object
	 *
	 * We don't need to consider alignment of freelist because
	 * freelist will be at the end of slab page. The objects will be
	 * at the correct alignment.
	 *
	 * If the slab management structure is off the slab, then the
	 * alignment will already be calculated into the size. Because
	 * the slabs are all pages aligned, the objects will be at the
	 * correct alignment when allocated.
	 */
	if (flags & (CFLGS_OBJFREELIST_SLAB | CFLGS_OFF_SLAB)) {
		num = slab_size / buffer_size;
		*left_over = slab_size % buffer_size;
	} else {
		num = slab_size / (buffer_size + sizeof(freelist_idx_t));
		*left_over = slab_size %
			(buffer_size + sizeof(freelist_idx_t));
	}

	return num;
}
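
/*
 * Worked example (illustrative, assuming 4 KiB pages and a one-byte
 * freelist_idx_t): for gfporder == 0 and buffer_size == 256 with an
 * on-slab freelist, each object costs 256 + 1 == 257 bytes, so
 * num == 4096 / 257 == 15 and *left_over == 4096 - 15 * 257 == 241.
 * With CFLGS_OFF_SLAB the freelist takes no space here, giving
 * num == 16 and *left_over == 0.
 */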

#if DEBUG
#define slab_error(cachep, msg) __slab_error(__func__, cachep, msg)

static void __slab_error(const char *function, struct kmem_cache *cachep,
			char *msg)
{
	pr_err("slab error in %s(): cache `%s': %s\n",
	       function, cachep->name, msg);
	dump_stack();
	add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
}
#endif

/*
 * By default on NUMA we use alien caches to stage the freeing of
 * objects allocated from other nodes. This causes massive memory
 * inefficiencies when using fake NUMA setup to split memory into a
 * large number of small nodes, so it can be disabled on the command
 * line.
 */

static int use_alien_caches __read_mostly = 1;
static int __init noaliencache_setup(char *s)
{
	use_alien_caches = 0;
	return 1;
}
__setup("noaliencache", noaliencache_setup);
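
/*
 * Usage: booting with "noaliencache" on the kernel command line sets
 * use_alien_caches to 0, avoiding the overhead described above on
 * fake-NUMA setups with many small nodes.
 */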

static int __init slab_max_order_setup(char *str)
{
	get_option(&str, &slab_max_order);
	slab_max_order = slab_max_order < 0 ? 0 :
				min(slab_max_order, MAX_ORDER - 1);
	slab_max_order_set = true;

	return 1;
}
__setup("slab_max_order=", slab_max_order_setup);
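
/*
 * Usage: e.g. booting with "slab_max_order=2" permits slabs of up to
 * four contiguous pages; negative values are clamped to 0 and values
 * above MAX_ORDER - 1 are clamped down to it.
 */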
3df1cccdfb3fa (David Rientjes                 2011-10-18 22:09:28 -0700  475) 
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  476) #ifdef CONFIG_NUMA
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  477) /*
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  478)  * Special reaping functions for NUMA systems called from cache_reap().
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  479)  * These take care of doing round robin flushing of alien caches (containing
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  480)  * objects freed on different nodes from which they were allocated) and the
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  481)  * flushing of remote pcps by calling drain_node_pages.
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  482)  */
1871e52c76dd9 (Tejun Heo                      2009-10-29 22:34:13 +0900  483) static DEFINE_PER_CPU(unsigned long, slab_reap_node);
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  484) 
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  485) static void init_reap_node(int cpu)
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  486) {
0edaf86cf1a6a (Andrew Morton                  2016-05-19 17:10:58 -0700  487) 	per_cpu(slab_reap_node, cpu) = next_node_in(cpu_to_mem(cpu),
0edaf86cf1a6a (Andrew Morton                  2016-05-19 17:10:58 -0700  488) 						    node_online_map);
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  489) }
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  490) 
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  491) static void next_reap_node(void)
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  492) {
909ea96468096 (Christoph Lameter              2010-12-08 16:22:55 +0100  493) 	int node = __this_cpu_read(slab_reap_node);
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  494) 
0edaf86cf1a6a (Andrew Morton                  2016-05-19 17:10:58 -0700  495) 	node = next_node_in(node, node_online_map);
909ea96468096 (Christoph Lameter              2010-12-08 16:22:55 +0100  496) 	__this_cpu_write(slab_reap_node, node);
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  497) }
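
/*
 * Example: with online nodes {0, 1, 2}, successive cache_reap() runs on
 * a cpu step its slab_reap_node through 0, 1, 2, 0, ...; next_node_in()
 * wraps around node_online_map.
 */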

#else
#define init_reap_node(cpu) do { } while (0)
#define next_reap_node(void) do { } while (0)
#endif
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  503) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700  504) /*
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700  505)  * Initiate the reap timer running on the target CPU.  We run at around 1 to 2Hz
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700  506)  * via the workqueue/eventd.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700  507)  * Add the CPU number into the expiration time to minimize the possibility of
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700  508)  * the CPUs getting into lockstep and contending for the global cache chain
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700  509)  * lock.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700  510)  */
0db0628d90125 (Paul Gortmaker                 2013-06-19 14:53:51 -0400  511) static void start_cpu_timer(int cpu)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700  512) {
1871e52c76dd9 (Tejun Heo                      2009-10-29 22:34:13 +0900  513) 	struct delayed_work *reap_work = &per_cpu(slab_reap_work, cpu);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700  514) 
eac0337af12b6 (Tejun Heo                      2016-09-16 15:49:34 -0400  515) 	if (reap_work->work.func == NULL) {
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  516) 		init_reap_node(cpu);
203b42f731749 (Tejun Heo                      2012-08-21 13:18:23 -0700  517) 		INIT_DEFERRABLE_WORK(reap_work, cache_reap);
2b2842146cb41 (Arjan van de Ven               2006-12-10 02:21:28 -0800  518) 		schedule_delayed_work_on(cpu, reap_work,
2b2842146cb41 (Arjan van de Ven               2006-12-10 02:21:28 -0800  519) 					__round_jiffies_relative(HZ, cpu));
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700  520) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700  521) }
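/*
 * Editor's sketch (assumption-labelled, not from the original file):
 * __round_jiffies_relative() folds the CPU number into the rounded
 * expiry, so each CPU's first reap lands on a different jiffy and the
 * per-cpu work items do not fire in lockstep. show_reap_stagger() is a
 * hypothetical helper.
 */
#if 0	/* illustrative only, not built */
static void show_reap_stagger(void)
{
	int cpu;

	for_each_online_cpu(cpu)
		pr_info("cpu%d: first reap after %lu jiffies\n",
			cpu, __round_jiffies_relative(HZ, cpu));
}
#endif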
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700  522) 
1fe00d50a9e81 (Joonsoo Kim                    2014-08-06 16:04:27 -0700  523) static void init_arraycache(struct array_cache *ac, int limit, int batch)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700  524) {
1fe00d50a9e81 (Joonsoo Kim                    2014-08-06 16:04:27 -0700  525) 	if (ac) {
1fe00d50a9e81 (Joonsoo Kim                    2014-08-06 16:04:27 -0700  526) 		ac->avail = 0;
1fe00d50a9e81 (Joonsoo Kim                    2014-08-06 16:04:27 -0700  527) 		ac->limit = limit;
1fe00d50a9e81 (Joonsoo Kim                    2014-08-06 16:04:27 -0700  528) 		ac->batchcount = batch;
1fe00d50a9e81 (Joonsoo Kim                    2014-08-06 16:04:27 -0700  529) 		ac->touched = 0;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700  530) 	}
1fe00d50a9e81 (Joonsoo Kim                    2014-08-06 16:04:27 -0700  531) }
1fe00d50a9e81 (Joonsoo Kim                    2014-08-06 16:04:27 -0700  532) 
1fe00d50a9e81 (Joonsoo Kim                    2014-08-06 16:04:27 -0700  533) static struct array_cache *alloc_arraycache(int node, int entries,
1fe00d50a9e81 (Joonsoo Kim                    2014-08-06 16:04:27 -0700  534) 					    int batchcount, gfp_t gfp)
1fe00d50a9e81 (Joonsoo Kim                    2014-08-06 16:04:27 -0700  535) {
5e80478967311 (Joonsoo Kim                    2014-08-06 16:04:40 -0700  536) 	size_t memsize = sizeof(void *) * entries + sizeof(struct array_cache);
1fe00d50a9e81 (Joonsoo Kim                    2014-08-06 16:04:27 -0700  537) 	struct array_cache *ac = NULL;
1fe00d50a9e81 (Joonsoo Kim                    2014-08-06 16:04:27 -0700  538) 
1fe00d50a9e81 (Joonsoo Kim                    2014-08-06 16:04:27 -0700  539) 	ac = kmalloc_node(memsize, gfp, node);
92d1d07daad65 (Qian Cai                       2019-03-05 15:42:03 -0800  540) 	/*
92d1d07daad65 (Qian Cai                       2019-03-05 15:42:03 -0800  541) 	 * The array_cache structures contain pointers to free objects.
92d1d07daad65 (Qian Cai                       2019-03-05 15:42:03 -0800  542) 	 * However, when such objects are allocated or transferred to another
92d1d07daad65 (Qian Cai                       2019-03-05 15:42:03 -0800  543) 	 * cache the pointers are not cleared and they could be counted as
92d1d07daad65 (Qian Cai                       2019-03-05 15:42:03 -0800  544) 	 * valid references during a kmemleak scan. Therefore, kmemleak must
92d1d07daad65 (Qian Cai                       2019-03-05 15:42:03 -0800  545) 	 * not scan such objects.
92d1d07daad65 (Qian Cai                       2019-03-05 15:42:03 -0800  546) 	 */
92d1d07daad65 (Qian Cai                       2019-03-05 15:42:03 -0800  547) 	kmemleak_no_scan(ac);
1fe00d50a9e81 (Joonsoo Kim                    2014-08-06 16:04:27 -0700  548) 	init_arraycache(ac, entries, batchcount);
1fe00d50a9e81 (Joonsoo Kim                    2014-08-06 16:04:27 -0700  549) 	return ac;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700  550) }
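/*
 * Editor's note: the single kmalloc above covers both the header and
 * the pointer ring, because entry[] is a flexible array at the end of
 * struct array_cache:
 *
 *   memsize = sizeof(struct array_cache) + entries * sizeof(void *)
 *
 * e.g. on a 64-bit build with entries = 120, the ring alone needs
 * 120 * 8 = 960 bytes on top of the header (numbers are illustrative).
 */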
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700  551) 
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700  552) static noinline void cache_free_pfmemalloc(struct kmem_cache *cachep,
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700  553) 					struct page *page, void *objp)
072bb0aa5e062 (Mel Gorman                     2012-07-31 16:43:58 -0700  554) {
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700  555) 	struct kmem_cache_node *n;
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700  556) 	int page_node;
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700  557) 	LIST_HEAD(list);
072bb0aa5e062 (Mel Gorman                     2012-07-31 16:43:58 -0700  558) 
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700  559) 	page_node = page_to_nid(page);
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700  560) 	n = get_node(cachep, page_node);
381760eadc393 (Mel Gorman                     2012-07-31 16:44:30 -0700  561) 
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700  562) 	spin_lock(&n->list_lock);
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700  563) 	free_block(cachep, &objp, 1, page_node, &list);
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700  564) 	spin_unlock(&n->list_lock);
381760eadc393 (Mel Gorman                     2012-07-31 16:44:30 -0700  565) 
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700  566) 	slabs_destroy(cachep, &list);
072bb0aa5e062 (Mel Gorman                     2012-07-31 16:43:58 -0700  567) }
072bb0aa5e062 (Mel Gorman                     2012-07-31 16:43:58 -0700  568) 
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800  569) /*
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800  570)  * Transfer objects in one arraycache to another.
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800  571)  * Locking must be handled by the caller.
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800  572)  *
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800  573)  * Return the number of entries transferred.
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800  574)  */
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800  575) static int transfer_objects(struct array_cache *to,
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800  576) 		struct array_cache *from, unsigned int max)
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800  577) {
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800  578) 	/* Figure out how many entries to transfer */
732eacc0542d0 (Hagen Paul Pfeifer             2010-10-26 14:22:23 -0700  579) 	int nr = min3(from->avail, max, to->limit - to->avail);
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800  580) 
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800  581) 	if (!nr)
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800  582) 		return 0;
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800  583) 
0b41163407e2f (Zhiyuan Dai                    2021-02-24 12:01:01 -0800  584) 	memcpy(to->entry + to->avail, from->entry + from->avail - nr,
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800  585) 			sizeof(void *) *nr);
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800  586) 
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800  587) 	from->avail -= nr;
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800  588) 	to->avail += nr;
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800  589) 	return nr;
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800  590) }
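/*
 * Editor's worked example (illustrative numbers): with from->avail = 10,
 * max = 16, to->limit = 12 and to->avail = 8,
 *
 *   nr = min3(10, 16, 12 - 8) = 4
 *
 * so the four most recently freed pointers, from->entry[6..9], are
 * copied to to->entry[8..11]; both arrays act as stacks growing through
 * avail, so the hottest objects move across.
 */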
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800  591) 
dabc3e291d56e (Kees Cook                      2020-08-06 23:18:24 -0700  592) /* &alien->lock must be held by alien callers. */
dabc3e291d56e (Kees Cook                      2020-08-06 23:18:24 -0700  593) static __always_inline void __free_one(struct array_cache *ac, void *objp)
dabc3e291d56e (Kees Cook                      2020-08-06 23:18:24 -0700  594) {
dabc3e291d56e (Kees Cook                      2020-08-06 23:18:24 -0700  595) 	/* Avoid trivial double-free. */
dabc3e291d56e (Kees Cook                      2020-08-06 23:18:24 -0700  596) 	if (IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
dabc3e291d56e (Kees Cook                      2020-08-06 23:18:24 -0700  597) 	    WARN_ON_ONCE(ac->avail > 0 && ac->entry[ac->avail - 1] == objp))
dabc3e291d56e (Kees Cook                      2020-08-06 23:18:24 -0700  598) 		return;
dabc3e291d56e (Kees Cook                      2020-08-06 23:18:24 -0700  599) 	ac->entry[ac->avail++] = objp;
dabc3e291d56e (Kees Cook                      2020-08-06 23:18:24 -0700  600) }
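/*
 * Editor's sketch (illustrative only): with CONFIG_SLAB_FREELIST_HARDENED
 * enabled, an immediate double free of the same object is caught because
 * the second call finds objp already at the top of the array.
 */
#if 0	/* illustrative only, not built */
__free_one(ac, objp);	/* stored at ac->entry[ac->avail - 1] */
__free_one(ac, objp);	/* WARN_ON_ONCE fires; pointer not stored twice */
#endif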
dabc3e291d56e (Kees Cook                      2020-08-06 23:18:24 -0700  601) 
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  602) #ifndef CONFIG_NUMA
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  603) 
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  604) #define drain_alien_cache(cachep, alien) do { } while (0)
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000  605) #define reap_alien(cachep, n) do { } while (0)
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  606) 
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  607) static inline struct alien_cache **alloc_alien_cache(int node,
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  608) 						int limit, gfp_t gfp)
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  609) {
8888177ea116d (Joonsoo Kim                    2016-05-19 17:10:05 -0700  610) 	return NULL;
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  611) }
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  612) 
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  613) static inline void free_alien_cache(struct alien_cache **ac_ptr)
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  614) {
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  615) }
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  616) 
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  617) static inline int cache_free_alien(struct kmem_cache *cachep, void *objp)
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  618) {
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  619) 	return 0;
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  620) }
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  621) 
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  622) static inline void *alternate_node_alloc(struct kmem_cache *cachep,
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  623) 		gfp_t flags)
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  624) {
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  625) 	return NULL;
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  626) }
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  627) 
8b98c1699eba2 (Christoph Hellwig              2006-12-06 20:32:30 -0800  628) static inline void *____cache_alloc_node(struct kmem_cache *cachep,
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  629) 		 gfp_t flags, int nodeid)
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  630) {
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  631) 	return NULL;
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  632) }
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  633) 
4167e9b2cf10f (David Rientjes                 2015-04-14 15:46:55 -0700  634) static inline gfp_t gfp_exact_node(gfp_t flags)
4167e9b2cf10f (David Rientjes                 2015-04-14 15:46:55 -0700  635) {
444eb2a449ef3 (Mel Gorman                     2016-03-17 14:19:23 -0700  636) 	return flags & ~__GFP_NOFAIL;
4167e9b2cf10f (David Rientjes                 2015-04-14 15:46:55 -0700  637) }
4167e9b2cf10f (David Rientjes                 2015-04-14 15:46:55 -0700  638) 
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  639) #else	/* CONFIG_NUMA */
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700  640) 
8b98c1699eba2 (Christoph Hellwig              2006-12-06 20:32:30 -0800  641) static void *____cache_alloc_node(struct kmem_cache *, gfp_t, int);
c61afb181c649 (Paul Jackson                   2006-03-24 03:16:08 -0800  642) static void *alternate_node_alloc(struct kmem_cache *, gfp_t);
dc85da15d42b0 (Christoph Lameter              2006-01-18 17:42:36 -0800  643) 
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  644) static struct alien_cache *__alloc_alien_cache(int node, int entries,
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  645) 						int batch, gfp_t gfp)
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  646) {
5e80478967311 (Joonsoo Kim                    2014-08-06 16:04:40 -0700  647) 	size_t memsize = sizeof(void *) * entries + sizeof(struct alien_cache);
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  648) 	struct alien_cache *alc = NULL;
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  649) 
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  650) 	alc = kmalloc_node(memsize, gfp, node);
09c2e76ed734a (Christoph Lameter              2019-01-08 15:23:00 -0800  651) 	if (alc) {
92d1d07daad65 (Qian Cai                       2019-03-05 15:42:03 -0800  652) 		kmemleak_no_scan(alc);
09c2e76ed734a (Christoph Lameter              2019-01-08 15:23:00 -0800  653) 		init_arraycache(&alc->ac, entries, batch);
09c2e76ed734a (Christoph Lameter              2019-01-08 15:23:00 -0800  654) 		spin_lock_init(&alc->lock);
09c2e76ed734a (Christoph Lameter              2019-01-08 15:23:00 -0800  655) 	}
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  656) 	return alc;
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  657) }
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  658) 
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  659) static struct alien_cache **alloc_alien_cache(int node, int limit, gfp_t gfp)
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  660) {
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  661) 	struct alien_cache **alc_ptr;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  662) 	int i;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  663) 
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  664) 	if (limit > 1)
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  665) 		limit = 12;
b9726c26dc21b (Alexey Dobriyan                2019-03-05 15:48:26 -0800  666) 	alc_ptr = kcalloc_node(nr_node_ids, sizeof(void *), gfp, node);
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  667) 	if (!alc_ptr)
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  668) 		return NULL;
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  669) 
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  670) 	for_each_node(i) {
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  671) 		if (i == node || !node_online(i))
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  672) 			continue;
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  673) 		alc_ptr[i] = __alloc_alien_cache(node, limit, 0xbaadf00d, gfp);
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  674) 		if (!alc_ptr[i]) {
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  675) 			for (i--; i >= 0; i--)
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  676) 				kfree(alc_ptr[i]);
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  677) 			kfree(alc_ptr);
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  678) 			return NULL;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  679) 		}
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  680) 	}
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  681) 	return alc_ptr;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  682) }
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  683) 
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  684) static void free_alien_cache(struct alien_cache **alc_ptr)
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  685) {
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  686) 	int i;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  687) 
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  688) 	if (!alc_ptr)
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  689) 		return;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  690) 	for_each_node(i)
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  691) 		kfree(alc_ptr[i]);
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  692) 	kfree(alc_ptr);
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  693) }
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  694) 
343e0d7a93951 (Pekka Enberg                   2006-02-01 03:05:50 -0800  695) static void __drain_alien_cache(struct kmem_cache *cachep,
833b706cc8b7b (Joonsoo Kim                    2014-08-06 16:04:33 -0700  696) 				struct array_cache *ac, int node,
833b706cc8b7b (Joonsoo Kim                    2014-08-06 16:04:33 -0700  697) 				struct list_head *list)
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  698) {
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700  699) 	struct kmem_cache_node *n = get_node(cachep, node);
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  700) 
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  701) 	if (ac->avail) {
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000  702) 		spin_lock(&n->list_lock);
e00946fe23513 (Christoph Lameter              2006-03-25 03:06:45 -0800  703) 		/*
e00946fe23513 (Christoph Lameter              2006-03-25 03:06:45 -0800  704) 		 * Stuff objects into the remote node's shared array first.
e00946fe23513 (Christoph Lameter              2006-03-25 03:06:45 -0800  705) 		 * That way we could avoid the overhead of putting the objects
e00946fe23513 (Christoph Lameter              2006-03-25 03:06:45 -0800  706) 		 * into the free lists and getting them back later.
e00946fe23513 (Christoph Lameter              2006-03-25 03:06:45 -0800  707) 		 */
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000  708) 		if (n->shared)
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000  709) 			transfer_objects(n->shared, ac, ac->limit);
e00946fe23513 (Christoph Lameter              2006-03-25 03:06:45 -0800  710) 
833b706cc8b7b (Joonsoo Kim                    2014-08-06 16:04:33 -0700  711) 		free_block(cachep, ac->entry, ac->avail, node, list);
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  712) 		ac->avail = 0;
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000  713) 		spin_unlock(&n->list_lock);
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  714) 	}
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  715) }
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  716) 
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  717) /*
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  718)  * Called from cache_reap() to regularly drain alien caches round robin.
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  719)  */
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000  720) static void reap_alien(struct kmem_cache *cachep, struct kmem_cache_node *n)
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  721) {
909ea96468096 (Christoph Lameter              2010-12-08 16:22:55 +0100  722) 	int node = __this_cpu_read(slab_reap_node);
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  723) 
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000  724) 	if (n->alien) {
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  725) 		struct alien_cache *alc = n->alien[node];
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  726) 		struct array_cache *ac;
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  727) 
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  728) 		if (alc) {
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  729) 			ac = &alc->ac;
49dfc304ba241 (Joonsoo Kim                    2014-08-06 16:04:31 -0700  730) 			if (ac->avail && spin_trylock_irq(&alc->lock)) {
833b706cc8b7b (Joonsoo Kim                    2014-08-06 16:04:33 -0700  731) 				LIST_HEAD(list);
833b706cc8b7b (Joonsoo Kim                    2014-08-06 16:04:33 -0700  732) 
833b706cc8b7b (Joonsoo Kim                    2014-08-06 16:04:33 -0700  733) 				__drain_alien_cache(cachep, ac, node, &list);
49dfc304ba241 (Joonsoo Kim                    2014-08-06 16:04:31 -0700  734) 				spin_unlock_irq(&alc->lock);
833b706cc8b7b (Joonsoo Kim                    2014-08-06 16:04:33 -0700  735) 				slabs_destroy(cachep, &list);
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  736) 			}
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  737) 		}
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  738) 	}
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  739) }
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800  740) 
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800  741) static void drain_alien_cache(struct kmem_cache *cachep,
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  742) 				struct alien_cache **alien)
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  743) {
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800  744) 	int i = 0;
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  745) 	struct alien_cache *alc;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  746) 	struct array_cache *ac;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  747) 	unsigned long flags;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  748) 
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  749) 	for_each_online_node(i) {
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  750) 		alc = alien[i];
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  751) 		if (alc) {
833b706cc8b7b (Joonsoo Kim                    2014-08-06 16:04:33 -0700  752) 			LIST_HEAD(list);
833b706cc8b7b (Joonsoo Kim                    2014-08-06 16:04:33 -0700  753) 
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  754) 			ac = &alc->ac;
49dfc304ba241 (Joonsoo Kim                    2014-08-06 16:04:31 -0700  755) 			spin_lock_irqsave(&alc->lock, flags);
833b706cc8b7b (Joonsoo Kim                    2014-08-06 16:04:33 -0700  756) 			__drain_alien_cache(cachep, ac, i, &list);
49dfc304ba241 (Joonsoo Kim                    2014-08-06 16:04:31 -0700  757) 			spin_unlock_irqrestore(&alc->lock, flags);
833b706cc8b7b (Joonsoo Kim                    2014-08-06 16:04:33 -0700  758) 			slabs_destroy(cachep, &list);
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  759) 		}
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  760) 	}
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  761) }
729bd0b74ce9a (Pekka Enberg                   2006-06-23 02:03:05 -0700  762) 
25c4f304be8cd (Joonsoo Kim                    2014-10-09 15:26:09 -0700  763) static int __cache_free_alien(struct kmem_cache *cachep, void *objp,
25c4f304be8cd (Joonsoo Kim                    2014-10-09 15:26:09 -0700  764) 				int node, int page_node)
729bd0b74ce9a (Pekka Enberg                   2006-06-23 02:03:05 -0700  765) {
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000  766) 	struct kmem_cache_node *n;
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  767) 	struct alien_cache *alien = NULL;
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  768) 	struct array_cache *ac;
97654dfa20caa (Joonsoo Kim                    2014-08-06 16:04:25 -0700  769) 	LIST_HEAD(list);
1ca4cb2418c04 (Pekka Enberg                   2006-10-06 00:43:52 -0700  770) 
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700  771) 	n = get_node(cachep, node);
729bd0b74ce9a (Pekka Enberg                   2006-06-23 02:03:05 -0700  772) 	STATS_INC_NODEFREES(cachep);
25c4f304be8cd (Joonsoo Kim                    2014-10-09 15:26:09 -0700  773) 	if (n->alien && n->alien[page_node]) {
25c4f304be8cd (Joonsoo Kim                    2014-10-09 15:26:09 -0700  774) 		alien = n->alien[page_node];
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  775) 		ac = &alien->ac;
49dfc304ba241 (Joonsoo Kim                    2014-08-06 16:04:31 -0700  776) 		spin_lock(&alien->lock);
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  777) 		if (unlikely(ac->avail == ac->limit)) {
729bd0b74ce9a (Pekka Enberg                   2006-06-23 02:03:05 -0700  778) 			STATS_INC_ACOVERFLOW(cachep);
25c4f304be8cd (Joonsoo Kim                    2014-10-09 15:26:09 -0700  779) 			__drain_alien_cache(cachep, ac, page_node, &list);
729bd0b74ce9a (Pekka Enberg                   2006-06-23 02:03:05 -0700  780) 		}
dabc3e291d56e (Kees Cook                      2020-08-06 23:18:24 -0700  781) 		__free_one(ac, objp);
49dfc304ba241 (Joonsoo Kim                    2014-08-06 16:04:31 -0700  782) 		spin_unlock(&alien->lock);
833b706cc8b7b (Joonsoo Kim                    2014-08-06 16:04:33 -0700  783) 		slabs_destroy(cachep, &list);
729bd0b74ce9a (Pekka Enberg                   2006-06-23 02:03:05 -0700  784) 	} else {
25c4f304be8cd (Joonsoo Kim                    2014-10-09 15:26:09 -0700  785) 		n = get_node(cachep, page_node);
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700  786) 		spin_lock(&n->list_lock);
25c4f304be8cd (Joonsoo Kim                    2014-10-09 15:26:09 -0700  787) 		free_block(cachep, &objp, 1, page_node, &list);
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700  788) 		spin_unlock(&n->list_lock);
97654dfa20caa (Joonsoo Kim                    2014-08-06 16:04:25 -0700  789) 		slabs_destroy(cachep, &list);
729bd0b74ce9a (Pekka Enberg                   2006-06-23 02:03:05 -0700  790) 	}
729bd0b74ce9a (Pekka Enberg                   2006-06-23 02:03:05 -0700  791) 	return 1;
729bd0b74ce9a (Pekka Enberg                   2006-06-23 02:03:05 -0700  792) }
25c4f304be8cd (Joonsoo Kim                    2014-10-09 15:26:09 -0700  793) 
25c4f304be8cd (Joonsoo Kim                    2014-10-09 15:26:09 -0700  794) static inline int cache_free_alien(struct kmem_cache *cachep, void *objp)
25c4f304be8cd (Joonsoo Kim                    2014-10-09 15:26:09 -0700  795) {
25c4f304be8cd (Joonsoo Kim                    2014-10-09 15:26:09 -0700  796) 	int page_node = page_to_nid(virt_to_page(objp));
25c4f304be8cd (Joonsoo Kim                    2014-10-09 15:26:09 -0700  797) 	int node = numa_mem_id();
25c4f304be8cd (Joonsoo Kim                    2014-10-09 15:26:09 -0700  798) 	/*
25c4f304be8cd (Joonsoo Kim                    2014-10-09 15:26:09 -0700  799) 	 * Make sure we are not freeing an object from another node to the array
25c4f304be8cd (Joonsoo Kim                    2014-10-09 15:26:09 -0700  800) 	 * cache on this cpu.
25c4f304be8cd (Joonsoo Kim                    2014-10-09 15:26:09 -0700  801) 	 */
25c4f304be8cd (Joonsoo Kim                    2014-10-09 15:26:09 -0700  802) 	if (likely(node == page_node))
25c4f304be8cd (Joonsoo Kim                    2014-10-09 15:26:09 -0700  803) 		return 0;
25c4f304be8cd (Joonsoo Kim                    2014-10-09 15:26:09 -0700  804) 
25c4f304be8cd (Joonsoo Kim                    2014-10-09 15:26:09 -0700  805) 	return __cache_free_alien(cachep, objp, node, page_node);
25c4f304be8cd (Joonsoo Kim                    2014-10-09 15:26:09 -0700  806) }
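/*
 * Editor's sketch (illustrative only): the alien path is taken exactly
 * when the freeing CPU's memory node differs from the node backing the
 * object's page.
 */
#if 0	/* illustrative only, not built */
void *objp = kmalloc_node(64, GFP_KERNEL, 1);	/* page lives on node 1 */
/* later, freed from a CPU whose memory node is 0: */
kfree(objp);	/* cache_free_alien() returns 1; queued for node 1 */
#endif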
4167e9b2cf10f (David Rientjes                 2015-04-14 15:46:55 -0700  807) 
4167e9b2cf10f (David Rientjes                 2015-04-14 15:46:55 -0700  808) /*
444eb2a449ef3 (Mel Gorman                     2016-03-17 14:19:23 -0700  809)  * Construct gfp mask to allocate from a specific node but do not reclaim or
444eb2a449ef3 (Mel Gorman                     2016-03-17 14:19:23 -0700  810)  * warn about failures.
4167e9b2cf10f (David Rientjes                 2015-04-14 15:46:55 -0700  811)  */
4167e9b2cf10f (David Rientjes                 2015-04-14 15:46:55 -0700  812) static inline gfp_t gfp_exact_node(gfp_t flags)
4167e9b2cf10f (David Rientjes                 2015-04-14 15:46:55 -0700  813) {
444eb2a449ef3 (Mel Gorman                     2016-03-17 14:19:23 -0700  814) 	return (flags | __GFP_THISNODE | __GFP_NOWARN) & ~(__GFP_RECLAIM|__GFP_NOFAIL);
4167e9b2cf10f (David Rientjes                 2015-04-14 15:46:55 -0700  815) }
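/*
 * Editor's worked example: GFP_KERNEL is __GFP_RECLAIM | __GFP_IO |
 * __GFP_FS, so
 *
 *   gfp_exact_node(GFP_KERNEL)
 *     = (GFP_KERNEL | __GFP_THISNODE | __GFP_NOWARN)
 *        & ~(__GFP_RECLAIM | __GFP_NOFAIL)
 *     = __GFP_IO | __GFP_FS | __GFP_THISNODE | __GFP_NOWARN
 *
 * i.e. stay on the requested node, stay quiet, and fail fast instead of
 * reclaiming or retrying forever.
 */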
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  816) #endif
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700  817) 
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  818) static int init_cache_node(struct kmem_cache *cachep, int node, gfp_t gfp)
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  819) {
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  820) 	struct kmem_cache_node *n;
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  821) 
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  822) 	/*
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  823) 	 * Set up the kmem_cache_node for this cpu before we can
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  824) 	 * begin anything. Make sure some other cpu on this
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  825) 	 * node has not already allocated it.
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  826) 	 */
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  827) 	n = get_node(cachep, node);
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  828) 	if (n) {
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  829) 		spin_lock_irq(&n->list_lock);
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  830) 		n->free_limit = (1 + nr_cpus_node(node)) * cachep->batchcount +
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  831) 				cachep->num;
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  832) 		spin_unlock_irq(&n->list_lock);
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  833) 
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  834) 		return 0;
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  835) 	}
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  836) 
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  837) 	n = kmalloc_node(sizeof(struct kmem_cache_node), gfp, node);
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  838) 	if (!n)
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  839) 		return -ENOMEM;
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  840) 
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  841) 	kmem_cache_node_init(n);
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  842) 	n->next_reap = jiffies + REAPTIMEOUT_NODE +
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  843) 		    ((unsigned long)cachep) % REAPTIMEOUT_NODE;
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  844) 
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  845) 	n->free_limit =
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  846) 		(1 + nr_cpus_node(node)) * cachep->batchcount + cachep->num;
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  847) 
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  848) 	/*
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  849) 	 * The kmem_cache_nodes don't come and go as CPUs
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  850) 	 * come and go.  slab_mutex is sufficient
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  851) 	 * protection here.
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  852) 	 */
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  853) 	cachep->node[node] = n;
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  854) 
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  855) 	return 0;
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  856) }
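/*
 * Editor's worked example (illustrative numbers): for a node with 4
 * CPUs, cachep->batchcount = 16 and cachep->num = 32 objects per slab,
 *
 *   free_limit = (1 + 4) * 16 + 32 = 112
 *
 * i.e. one batch per CPU plus one spare batch, plus a full slab's worth
 * of objects, may sit free on the node before reaping trims it.
 */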
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  857) 
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200  858) #if (defined(CONFIG_NUMA) && defined(CONFIG_MEMORY_HOTPLUG)) || defined(CONFIG_SMP)
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700  859) /*
6a67368c36e2c (Christoph Lameter              2013-01-10 19:14:19 +0000  860)  * Allocates and initializes a kmem_cache_node for a node on each slab cache, used for
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000  861)  * either memory or cpu hotplug.  If memory is being hot-added, the kmem_cache_node
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700  862)  * will be allocated off-node since memory is not yet online for the new node.
6a67368c36e2c (Christoph Lameter              2013-01-10 19:14:19 +0000  863)  * When hotplugging memory or a cpu, existing nodes are not replaced if
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700  864)  * already in use.
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700  865)  *
18004c5d4084d (Christoph Lameter              2012-07-06 15:25:12 -0500  866)  * Must hold slab_mutex.
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700  867)  */
6a67368c36e2c (Christoph Lameter              2013-01-10 19:14:19 +0000  868) static int init_cache_node_node(int node)
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700  869) {
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  870) 	int ret;
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700  871) 	struct kmem_cache *cachep;
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700  872) 
18004c5d4084d (Christoph Lameter              2012-07-06 15:25:12 -0500  873) 	list_for_each_entry(cachep, &slab_caches, list) {
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  874) 		ret = init_cache_node(cachep, node, GFP_KERNEL);
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  875) 		if (ret)
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  876) 			return ret;
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700  877) 	}
ded0ecf611189 (Joonsoo Kim                    2016-05-19 17:10:11 -0700  878) 
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700  879) 	return 0;
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700  880) }
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200  881) #endif
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700  882) 
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  883) static int setup_kmem_cache_node(struct kmem_cache *cachep,
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  884) 				int node, gfp_t gfp, bool force_change)
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  885) {
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  886) 	int ret = -ENOMEM;
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  887) 	struct kmem_cache_node *n;
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  888) 	struct array_cache *old_shared = NULL;
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  889) 	struct array_cache *new_shared = NULL;
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  890) 	struct alien_cache **new_alien = NULL;
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  891) 	LIST_HEAD(list);
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  892) 
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  893) 	if (use_alien_caches) {
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  894) 		new_alien = alloc_alien_cache(node, cachep->limit, gfp);
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  895) 		if (!new_alien)
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  896) 			goto fail;
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  897) 	}
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  898) 
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  899) 	if (cachep->shared) {
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  900) 		new_shared = alloc_arraycache(node,
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  901) 			cachep->shared * cachep->batchcount, 0xbaadf00d, gfp);
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  902) 		if (!new_shared)
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  903) 			goto fail;
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  904) 	}
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  905) 
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  906) 	ret = init_cache_node(cachep, node, gfp);
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  907) 	if (ret)
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  908) 		goto fail;
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  909) 
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  910) 	n = get_node(cachep, node);
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  911) 	spin_lock_irq(&n->list_lock);
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  912) 	if (n->shared && force_change) {
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  913) 		free_block(cachep, n->shared->entry,
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  914) 				n->shared->avail, node, &list);
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  915) 		n->shared->avail = 0;
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  916) 	}
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  917) 
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  918) 	if (!n->shared || force_change) {
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  919) 		old_shared = n->shared;
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  920) 		n->shared = new_shared;
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  921) 		new_shared = NULL;
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  922) 	}
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  923) 
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  924) 	if (!n->alien) {
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  925) 		n->alien = new_alien;
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  926) 		new_alien = NULL;
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  927) 	}
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  928) 
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  929) 	spin_unlock_irq(&n->list_lock);
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  930) 	slabs_destroy(cachep, &list);
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  931) 
801faf0db8947 (Joonsoo Kim                    2016-05-19 17:10:31 -0700  932) 	/*
801faf0db8947 (Joonsoo Kim                    2016-05-19 17:10:31 -0700  933) 	 * To protect lockless access to n->shared during irq-disabled context:
801faf0db8947 (Joonsoo Kim                    2016-05-19 17:10:31 -0700  934) 	 * if n->shared isn't NULL in an irq-disabled context, accessing it is
801faf0db8947 (Joonsoo Kim                    2016-05-19 17:10:31 -0700  935) 	 * guaranteed to be valid until irqs are re-enabled, because it will be
6564a25e6c185 (Paul E. McKenney               2018-11-06 19:24:33 -0800  936) 	 * freed only after synchronize_rcu().
801faf0db8947 (Joonsoo Kim                    2016-05-19 17:10:31 -0700  937) 	 */
86d9f48534e80 (Joonsoo Kim                    2016-10-27 17:46:18 -0700  938) 	if (old_shared && force_change)
6564a25e6c185 (Paul E. McKenney               2018-11-06 19:24:33 -0800  939) 		synchronize_rcu();
801faf0db8947 (Joonsoo Kim                    2016-05-19 17:10:31 -0700  940) 
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  941) fail:
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  942) 	kfree(old_shared);
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  943) 	kfree(new_shared);
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  944) 	free_alien_cache(new_alien);
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  945) 
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  946) 	return ret;
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  947) }
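/*
 * Editor's sketch (schematic, not from the original file): the reader
 * side the synchronize_rcu() above pairs with. Any irq-disabled section
 * that sampled a non-NULL n->shared finishes before the old array is
 * kfree()d; ac and batchcount stand in for a caller's local state.
 */
#if 0	/* illustrative only, not built */
local_irq_disable();
shared = READ_ONCE(n->shared);
if (shared)			/* stays valid while irqs are off */
	transfer_objects(ac, shared, batchcount);
local_irq_enable();
#endif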
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700  948) 
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200  949) #ifdef CONFIG_SMP
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200  950) 
0db0628d90125 (Paul Gortmaker                 2013-06-19 14:53:51 -0400  951) static void cpuup_canceled(long cpu)
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700  952) {
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700  953) 	struct kmem_cache *cachep;
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000  954) 	struct kmem_cache_node *n = NULL;
7d6e6d09de82c (Lee Schermerhorn               2010-05-26 14:45:03 -0700  955) 	int node = cpu_to_mem(cpu);
a70f730282019 (Rusty Russell                  2009-03-13 14:49:46 +1030  956) 	const struct cpumask *mask = cpumask_of_node(node);
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700  957) 
18004c5d4084d (Christoph Lameter              2012-07-06 15:25:12 -0500  958) 	list_for_each_entry(cachep, &slab_caches, list) {
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700  959) 		struct array_cache *nc;
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700  960) 		struct array_cache *shared;
c8522a3a5832b (Joonsoo Kim                    2014-08-06 16:04:29 -0700  961) 		struct alien_cache **alien;
97654dfa20caa (Joonsoo Kim                    2014-08-06 16:04:25 -0700  962) 		LIST_HEAD(list);
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700  963) 
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700  964) 		n = get_node(cachep, node);
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000  965) 		if (!n)
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700  966) 			continue;
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700  967) 
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000  968) 		spin_lock_irq(&n->list_lock);
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700  969) 
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000  970) 		/* Free limit for this kmem_cache_node */
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000  971) 		n->free_limit -= cachep->batchcount;
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700  972) 
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700  973) 		/* cpu is dead; no one can alloc from it. */
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700  974) 		nc = per_cpu_ptr(cachep->cpu_cache, cpu);
517f9f1ee5ed0 (Li RongQing                    2019-05-13 17:16:25 -0700  975) 		free_block(cachep, nc->entry, nc->avail, node, &list);
517f9f1ee5ed0 (Li RongQing                    2019-05-13 17:16:25 -0700  976) 		nc->avail = 0;
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700  977) 
58463c1fe25f7 (Rusty Russell                  2009-12-17 11:43:12 -0600  978) 		if (!cpumask_empty(mask)) {
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000  979) 			spin_unlock_irq(&n->list_lock);
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700  980) 			goto free_slab;
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700  981) 		}
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700  982) 
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000  983) 		shared = n->shared;
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700  984) 		if (shared) {
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700  985) 			free_block(cachep, shared->entry,
97654dfa20caa (Joonsoo Kim                    2014-08-06 16:04:25 -0700  986) 				   shared->avail, node, &list);
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000  987) 			n->shared = NULL;
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700  988) 		}
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700  989) 
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000  990) 		alien = n->alien;
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000  991) 		n->alien = NULL;
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700  992) 
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000  993) 		spin_unlock_irq(&n->list_lock);
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700  994) 
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700  995) 		kfree(shared);
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700  996) 		if (alien) {
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700  997) 			drain_alien_cache(cachep, alien);
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700  998) 			free_alien_cache(alien);
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700  999) 		}
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1000) 
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1001) free_slab:
97654dfa20caa (Joonsoo Kim                    2014-08-06 16:04:25 -0700 1002) 		slabs_destroy(cachep, &list);
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1003) 	}
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1004) 	/*
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1005) 	 * In the previous loop, all the objects were freed to
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1006) 	 * the respective cache's slabs; now we can go ahead and
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1007) 	 * shrink each nodelist to its limit.
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1008) 	 */
18004c5d4084d (Christoph Lameter              2012-07-06 15:25:12 -0500 1009) 	list_for_each_entry(cachep, &slab_caches, list) {
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 1010) 		n = get_node(cachep, node);
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 1011) 		if (!n)
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1012) 			continue;
a5aa63a5f7352 (Joonsoo Kim                    2016-05-19 17:10:08 -0700 1013) 		drain_freelist(cachep, n, INT_MAX);
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1014) 	}
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1015) }
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1016) 
0db0628d90125 (Paul Gortmaker                 2013-06-19 14:53:51 -0400 1017) static int cpuup_prepare(long cpu)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1018) {
343e0d7a93951 (Pekka Enberg                   2006-02-01 03:05:50 -0800 1019) 	struct kmem_cache *cachep;
7d6e6d09de82c (Lee Schermerhorn               2010-05-26 14:45:03 -0700 1020) 	int node = cpu_to_mem(cpu);
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1021) 	int err;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1022) 
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1023) 	/*
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1024) 	 * We need to do this right at the beginning since
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1025) 	 * the alloc_arraycache() calls are going to use this list.
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1026) 	 * kmalloc_node allows us to add the slab to the right
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 1027) 	 * kmem_cache_node and not this cpu's kmem_cache_node.
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1028) 	 */
6a67368c36e2c (Christoph Lameter              2013-01-10 19:14:19 +0000 1029) 	err = init_cache_node_node(node);
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1030) 	if (err < 0)
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1031) 		goto bad;
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1032) 
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1033) 	/*
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1034) 	 * Now we can go ahead with allocating the shared arrays and
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1035) 	 * array caches
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1036) 	 */
18004c5d4084d (Christoph Lameter              2012-07-06 15:25:12 -0500 1037) 	list_for_each_entry(cachep, &slab_caches, list) {
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700 1038) 		err = setup_kmem_cache_node(cachep, node, GFP_KERNEL, false);
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700 1039) 		if (err)
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700 1040) 			goto bad;
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1041) 	}
ce79ddc8e2376 (Pekka Enberg                   2009-11-23 22:01:15 +0200 1042) 
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1043) 	return 0;
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1044) bad:
12d00f6a12187 (Akinobu Mita                   2007-10-18 03:05:11 -0700 1045) 	cpuup_canceled(cpu);
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1046) 	return -ENOMEM;
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1047) }
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1048) 
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1049) int slab_prepare_cpu(unsigned int cpu)
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1050) {
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1051) 	int err;
fbf1e473bd0ec (Akinobu Mita                   2007-10-18 03:05:09 -0700 1052) 
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1053) 	mutex_lock(&slab_mutex);
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1054) 	err = cpuup_prepare(cpu);
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1055) 	mutex_unlock(&slab_mutex);
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1056) 	return err;
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1057) }
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1058) 
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1059) /*
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1060)  * This is called for a failed online attempt and for a successful
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1061)  * offline.
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1062)  *
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1063)  * Even if all the cpus of a node are down, we don't free the
221503e1281f8 (Xiao Yang                      2020-08-06 23:18:31 -0700 1064)  * kmem_cache_node of any cache. This is to avoid a race between cpu_down() and
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1065)  * a kmalloc allocation from another cpu for memory from the node of
70b6d25ec59cb (Chen Tao                       2020-10-15 20:10:01 -0700 1066)  * the cpu going down.  The kmem_cache_node structure is usually allocated from
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1067)  * kmem_cache_create() and gets destroyed at kmem_cache_destroy().
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1068)  */
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1069) int slab_dead_cpu(unsigned int cpu)
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1070) {
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1071) 	mutex_lock(&slab_mutex);
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1072) 	cpuup_canceled(cpu);
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1073) 	mutex_unlock(&slab_mutex);
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1074) 	return 0;
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1075) }
8f5be20bf87da (Ravikiran G Thirumalai         2006-12-06 20:32:14 -0800 1076) #endif
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1077) 
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1078) static int slab_online_cpu(unsigned int cpu)
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1079) {
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1080) 	start_cpu_timer(cpu);
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1081) 	return 0;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1082) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1083) 
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1084) static int slab_offline_cpu(unsigned int cpu)
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1085) {
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1086) 	/*
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1087) 	 * Shut down the cache reaper. Note that the slab_mutex is held so
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1088) 	 * that if cache_reap() is invoked it cannot do anything
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1089) 	 * expensive but will only modify reap_work and reschedule the
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1090) 	 * timer.
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1091) 	 */
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1092) 	cancel_delayed_work_sync(&per_cpu(slab_reap_work, cpu));
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1093) 	/* Now the cache_reaper is guaranteed not to be running. */
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1094) 	per_cpu(slab_reap_work, cpu).work.func = NULL;
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1095) 	return 0;
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1096) }
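
/*
 * Sketch (not part of slab.c): why the _sync variant matters above.
 * cache_reap() rearms its own delayed work, so a plain cancel could
 * race with a concurrent rearm; cancel_delayed_work_sync() waits for a
 * running handler and defeats the rearm. All names below hypothetical.
 */
static void sketch_reaper(struct work_struct *w)
{
	struct delayed_work *dw = to_delayed_work(w);

	/* ... periodic housekeeping ... */
	schedule_delayed_work(dw, HZ);	/* rearm itself, like cache_reap() */
}

static DECLARE_DELAYED_WORK(sketch_reap_work, sketch_reaper);

static void sketch_stop_reaper(void)
{
	/* Afterwards the handler is neither running nor pending. */
	cancel_delayed_work_sync(&sketch_reap_work);
}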
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1097) 
76af6a054da40 (Dave Hansen                    2021-10-18 15:15:32 -0700 1098) #if defined(CONFIG_NUMA)
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1099) /*
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1100)  * Drains freelist for a node on each slab cache, used for memory hot-remove.
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1101)  * Returns -EBUSY if all objects cannot be drained so that the node is not
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1102)  * removed.
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1103)  *
18004c5d4084d (Christoph Lameter              2012-07-06 15:25:12 -0500 1104)  * Must hold slab_mutex.
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1105)  */
6a67368c36e2c (Christoph Lameter              2013-01-10 19:14:19 +0000 1106) static int __meminit drain_cache_node_node(int node)
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1107) {
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1108) 	struct kmem_cache *cachep;
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1109) 	int ret = 0;
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1110) 
18004c5d4084d (Christoph Lameter              2012-07-06 15:25:12 -0500 1111) 	list_for_each_entry(cachep, &slab_caches, list) {
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 1112) 		struct kmem_cache_node *n;
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1113) 
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 1114) 		n = get_node(cachep, node);
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 1115) 		if (!n)
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1116) 			continue;
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1117) 
a5aa63a5f7352 (Joonsoo Kim                    2016-05-19 17:10:08 -0700 1118) 		drain_freelist(cachep, n, INT_MAX);
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1119) 
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 1120) 		if (!list_empty(&n->slabs_full) ||
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 1121) 		    !list_empty(&n->slabs_partial)) {
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1122) 			ret = -EBUSY;
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1123) 			break;
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1124) 		}
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1125) 	}
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1126) 	return ret;
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1127) }
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1128) 
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1129) static int __meminit slab_memory_callback(struct notifier_block *self,
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1130) 					unsigned long action, void *arg)
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1131) {
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1132) 	struct memory_notify *mnb = arg;
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1133) 	int ret = 0;
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1134) 	int nid;
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1135) 
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1136) 	nid = mnb->status_change_nid;
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1137) 	if (nid < 0)
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1138) 		goto out;
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1139) 
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1140) 	switch (action) {
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1141) 	case MEM_GOING_ONLINE:
18004c5d4084d (Christoph Lameter              2012-07-06 15:25:12 -0500 1142) 		mutex_lock(&slab_mutex);
6a67368c36e2c (Christoph Lameter              2013-01-10 19:14:19 +0000 1143) 		ret = init_cache_node_node(nid);
18004c5d4084d (Christoph Lameter              2012-07-06 15:25:12 -0500 1144) 		mutex_unlock(&slab_mutex);
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1145) 		break;
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1146) 	case MEM_GOING_OFFLINE:
18004c5d4084d (Christoph Lameter              2012-07-06 15:25:12 -0500 1147) 		mutex_lock(&slab_mutex);
6a67368c36e2c (Christoph Lameter              2013-01-10 19:14:19 +0000 1148) 		ret = drain_cache_node_node(nid);
18004c5d4084d (Christoph Lameter              2012-07-06 15:25:12 -0500 1149) 		mutex_unlock(&slab_mutex);
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1150) 		break;
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1151) 	case MEM_ONLINE:
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1152) 	case MEM_OFFLINE:
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1153) 	case MEM_CANCEL_ONLINE:
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1154) 	case MEM_CANCEL_OFFLINE:
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1155) 		break;
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1156) 	}
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1157) out:
5fda1bd5b8869 (Prarit Bhargava                2011-03-22 16:30:49 -0700 1158) 	return notifier_from_errno(ret);
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1159) }
76af6a054da40 (Dave Hansen                    2021-10-18 15:15:32 -0700 1160) #endif /* CONFIG_NUMA */
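
/*
 * Sketch (not part of slab.c): how the return value above is consumed.
 * notifier_from_errno() turns 0 into NOTIFY_OK and a negative errno
 * into a NOTIFY_STOP-style value carrying the error, so the -EBUSY from
 * drain_cache_node_node() makes the memory hotplug core abort the
 * hot-remove. A minimal callback of the same shape (hypothetical name):
 */
static int sketch_mem_callback(struct notifier_block *self,
			       unsigned long action, void *arg)
{
	int ret = 0;

	if (action == MEM_GOING_OFFLINE)
		ret = -EBUSY;	/* pretend the node still holds objects */

	return notifier_from_errno(ret);	/* vetoes the offline */
}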
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1161) 
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 1162) /*
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 1163)  * swap the static kmem_cache_node with kmalloced memory
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 1164)  */
6744f087ba2a4 (Christoph Lameter              2013-01-10 19:12:17 +0000 1165) static void __init init_list(struct kmem_cache *cachep, struct kmem_cache_node *list,
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1166) 				int nodeid)
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 1167) {
6744f087ba2a4 (Christoph Lameter              2013-01-10 19:12:17 +0000 1168) 	struct kmem_cache_node *ptr;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 1169) 
6744f087ba2a4 (Christoph Lameter              2013-01-10 19:12:17 +0000 1170) 	ptr = kmalloc_node(sizeof(struct kmem_cache_node), GFP_NOWAIT, nodeid);
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 1171) 	BUG_ON(!ptr);
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 1172) 
6744f087ba2a4 (Christoph Lameter              2013-01-10 19:12:17 +0000 1173) 	memcpy(ptr, list, sizeof(struct kmem_cache_node));
2b2d5493e1005 (Ingo Molnar                    2006-07-03 00:25:28 -0700 1174) 	/*
2b2d5493e1005 (Ingo Molnar                    2006-07-03 00:25:28 -0700 1175) 	 * Do not assume that spinlocks can be initialized via memcpy:
2b2d5493e1005 (Ingo Molnar                    2006-07-03 00:25:28 -0700 1176) 	 */
2b2d5493e1005 (Ingo Molnar                    2006-07-03 00:25:28 -0700 1177) 	spin_lock_init(&ptr->list_lock);
2b2d5493e1005 (Ingo Molnar                    2006-07-03 00:25:28 -0700 1178) 
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 1179) 	MAKE_ALL_LISTS(cachep, ptr, nodeid);
6a67368c36e2c (Christoph Lameter              2013-01-10 19:14:19 +0000 1180) 	cachep->node[nodeid] = ptr;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 1181) }
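
/*
 * Sketch (not part of slab.c): the memcpy-then-fixup idiom used by
 * init_list() above. A bitwise copy duplicates the lock (including any
 * debug state) and leaves embedded list_heads pointing into the *old*
 * object, so both must be re-initialised -- spin_lock_init() here and
 * MAKE_ALL_LISTS() (which re-inits the heads and then splices the old
 * lists over) in init_list(). Hypothetical type and names:
 */
struct sketch_node {
	spinlock_t lock;
	struct list_head partial;
};

static void sketch_swap_node(struct sketch_node *dst, struct sketch_node *src)
{
	memcpy(dst, src, sizeof(*dst));
	spin_lock_init(&dst->lock);	/* a lock never survives memcpy */
	INIT_LIST_HEAD(&dst->partial);	/* copied list links are stale */
}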
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 1182) 
556a169dab38b (Pekka Enberg                   2008-01-25 08:20:51 +0200 1183) /*
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 1184)  * For setting up all the kmem_cache_node structures for caches whose
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 1185)  * buffer_size is the same as the size of kmem_cache_node.
556a169dab38b (Pekka Enberg                   2008-01-25 08:20:51 +0200 1186)  */
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 1187) static void __init set_up_node(struct kmem_cache *cachep, int index)
556a169dab38b (Pekka Enberg                   2008-01-25 08:20:51 +0200 1188) {
556a169dab38b (Pekka Enberg                   2008-01-25 08:20:51 +0200 1189) 	int node;
556a169dab38b (Pekka Enberg                   2008-01-25 08:20:51 +0200 1190) 
556a169dab38b (Pekka Enberg                   2008-01-25 08:20:51 +0200 1191) 	for_each_online_node(node) {
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 1192) 		cachep->node[node] = &init_kmem_cache_node[index + node];
6a67368c36e2c (Christoph Lameter              2013-01-10 19:14:19 +0000 1193) 		cachep->node[node]->next_reap = jiffies +
5f0985bb1123b (Jianyu Zhan                    2014-03-30 17:02:20 +0800 1194) 		    REAPTIMEOUT_NODE +
5f0985bb1123b (Jianyu Zhan                    2014-03-30 17:02:20 +0800 1195) 		    ((unsigned long)cachep) % REAPTIMEOUT_NODE;
556a169dab38b (Pekka Enberg                   2008-01-25 08:20:51 +0200 1196) 	}
556a169dab38b (Pekka Enberg                   2008-01-25 08:20:51 +0200 1197) }
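
/*
 * Note (not part of slab.c): the "% REAPTIMEOUT_NODE" term above
 * staggers the per-node reap deadlines so that not every cache becomes
 * due in the same jiffy; the cache pointer serves as a cheap per-cache
 * pseudo-random skew in [0, REAPTIMEOUT_NODE). As a standalone sketch:
 */
static unsigned long sketch_next_reap(struct kmem_cache *cachep)
{
	return jiffies + REAPTIMEOUT_NODE +
	       ((unsigned long)cachep) % REAPTIMEOUT_NODE;
}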
556a169dab38b (Pekka Enberg                   2008-01-25 08:20:51 +0200 1198) 
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 1199) /*
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 1200)  * Initialisation.  Called after the page allocator has been initialised and
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 1201)  * before smp_init().
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1202)  */
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1203) void __init kmem_cache_init(void)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1204) {
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 1205) 	int i;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 1206) 
9b030cb865f13 (Christoph Lameter              2012-09-05 00:20:33 +0000 1207) 	kmem_cache = &kmem_cache_boot;
9b030cb865f13 (Christoph Lameter              2012-09-05 00:20:33 +0000 1208) 
8888177ea116d (Joonsoo Kim                    2016-05-19 17:10:05 -0700 1209) 	if (!IS_ENABLED(CONFIG_NUMA) || num_possible_nodes() == 1)
62918a0361482 (Siddha, Suresh B               2007-05-02 19:27:18 +0200 1210) 		use_alien_caches = 0;
62918a0361482 (Siddha, Suresh B               2007-05-02 19:27:18 +0200 1211) 
3c58346525d82 (Christoph Lameter              2012-11-28 16:23:01 +0000 1212) 	for (i = 0; i < NUM_INIT_LISTS; i++)
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 1213) 		kmem_cache_node_init(&init_kmem_cache_node[i]);
3c58346525d82 (Christoph Lameter              2012-11-28 16:23:01 +0000 1214) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1215) 	/*
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1216) 	 * Fragmentation resistance on low memory - only use bigger
3df1cccdfb3fa (David Rientjes                 2011-10-18 22:09:28 -0700 1217) 	 * page orders on machines with more than 32MB of memory if
3df1cccdfb3fa (David Rientjes                 2011-10-18 22:09:28 -0700 1218) 	 * not overridden on the command line.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1219) 	 */
ca79b0c211af6 (Arun KS                        2018-12-28 00:34:29 -0800 1220) 	if (!slab_max_order_set && totalram_pages() > (32 << 20) >> PAGE_SHIFT)
543585cc5b07f (David Rientjes                 2011-10-18 22:09:24 -0700 1221) 		slab_max_order = SLAB_MAX_ORDER_HI;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1222) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1223) 	/* Bootstrap is tricky, because several objects are allocated
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1224) 	 * from caches that do not exist yet:
9b030cb865f13 (Christoph Lameter              2012-09-05 00:20:33 +0000 1225) 	 * 1) initialize the kmem_cache cache: it contains the struct
9b030cb865f13 (Christoph Lameter              2012-09-05 00:20:33 +0000 1226) 	 *    kmem_cache structures of all caches, except kmem_cache itself:
9b030cb865f13 (Christoph Lameter              2012-09-05 00:20:33 +0000 1227) 	 *    kmem_cache is statically allocated.
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 1228) 	 *    Initially an __init data area is used for the head array and the
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 1229) 	 *    kmem_cache_node structures; these are replaced with kmalloc-allocated
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 1230) 	 *    memory at the end of the bootstrap.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1231) 	 * 2) Create the first kmalloc cache.
343e0d7a93951 (Pekka Enberg                   2006-02-01 03:05:50 -0800 1232) 	 *    The struct kmem_cache for the new cache is allocated normally.
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 1233) 	 *    An __init data area is used for the head array.
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 1234) 	 * 3) Create the remaining kmalloc caches, with minimally sized
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 1235) 	 *    head arrays.
9b030cb865f13 (Christoph Lameter              2012-09-05 00:20:33 +0000 1236) 	 * 4) Replace the __init data head arrays for kmem_cache and the first
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1237) 	 *    kmalloc cache with kmalloc allocated arrays.
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 1238) 	 * 5) Replace the __init data for kmem_cache_node for kmem_cache and
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 1239) 	 *    the other caches with kmalloc-allocated memory.
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 1240) 	 * 6) Resize the head arrays of the kmalloc caches to their final sizes.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1241) 	 */
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1242) 
9b030cb865f13 (Christoph Lameter              2012-09-05 00:20:33 +0000 1243) 	/* 1) create the kmem_cache */
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1244) 
8da3430d8a7f8 (Eric Dumazet                   2007-05-06 14:49:29 -0700 1245) 	/*
b56efcf0a45aa (Eric Dumazet                   2011-07-20 19:04:23 +0200 1246) 	 * struct kmem_cache size depends on nr_node_ids & nr_cpu_ids
8da3430d8a7f8 (Eric Dumazet                   2007-05-06 14:49:29 -0700 1247) 	 */
2f9baa9fcf8d0 (Christoph Lameter              2012-11-28 16:23:09 +0000 1248) 	create_boot_cache(kmem_cache, "kmem_cache",
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1249) 		offsetof(struct kmem_cache, node) +
6744f087ba2a4 (Christoph Lameter              2013-01-10 19:12:17 +0000 1250) 				  nr_node_ids * sizeof(struct kmem_cache_node *),
8eb8284b41290 (David Windsor                  2017-06-10 22:50:28 -0400 1251) 				  SLAB_HWCACHE_ALIGN, 0, 0);
2f9baa9fcf8d0 (Christoph Lameter              2012-11-28 16:23:09 +0000 1252) 	list_add(&kmem_cache->list, &slab_caches);
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1253) 	slab_state = PARTIAL;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1254) 
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 1255) 	/*
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1256) 	 * Initialize the caches that provide memory for the kmem_cache_node
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1257) 	 * structures first.  Without this, further allocations will BUG().
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 1258) 	 */
cc252eae85e09 (Vlastimil Babka                2018-10-26 15:05:34 -0700 1259) 	kmalloc_caches[KMALLOC_NORMAL][INDEX_NODE] = create_kmalloc_cache(
cb5d9fb38c343 (Pengfei Li                     2019-11-30 17:49:21 -0800 1260) 				kmalloc_info[INDEX_NODE].name[KMALLOC_NORMAL],
dc0a7f7558dd5 (Pengfei Li                     2019-11-30 17:49:25 -0800 1261) 				kmalloc_info[INDEX_NODE].size,
dc0a7f7558dd5 (Pengfei Li                     2019-11-30 17:49:25 -0800 1262) 				ARCH_KMALLOC_FLAGS, 0,
dc0a7f7558dd5 (Pengfei Li                     2019-11-30 17:49:25 -0800 1263) 				kmalloc_info[INDEX_NODE].size);
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1264) 	slab_state = PARTIAL_NODE;
34cc6990d4d2d (Daniel Sanders                 2015-06-24 16:55:57 -0700 1265) 	setup_kmalloc_cache_index_table();
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 1266) 
e0a42726794f7 (Ingo Molnar                    2006-06-23 02:03:46 -0700 1267) 	slab_early_init = 0;
e0a42726794f7 (Ingo Molnar                    2006-06-23 02:03:46 -0700 1268) 
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 1269) 	/* 5) Replace the bootstrap kmem_cache_node */
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 1270) 	{
1ca4cb2418c04 (Pekka Enberg                   2006-10-06 00:43:52 -0700 1271) 		int nid;
1ca4cb2418c04 (Pekka Enberg                   2006-10-06 00:43:52 -0700 1272) 
9c09a95cf431f (Mel Gorman                     2008-01-24 05:49:54 -0800 1273) 		for_each_online_node(nid) {
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 1274) 			init_list(kmem_cache, &init_kmem_cache_node[CACHE_CACHE + nid], nid);
556a169dab38b (Pekka Enberg                   2008-01-25 08:20:51 +0200 1275) 
cc252eae85e09 (Vlastimil Babka                2018-10-26 15:05:34 -0700 1276) 			init_list(kmalloc_caches[KMALLOC_NORMAL][INDEX_NODE],
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 1277) 					  &init_kmem_cache_node[SIZE_NODE + nid], nid);
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 1278) 		}
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 1279) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1280) 
f97d5f634d3b5 (Christoph Lameter              2013-01-10 19:12:17 +0000 1281) 	create_kmalloc_caches(ARCH_KMALLOC_FLAGS);
8429db5c63360 (Pekka Enberg                   2009-06-12 15:58:59 +0300 1282) }
8429db5c63360 (Pekka Enberg                   2009-06-12 15:58:59 +0300 1283) 
8429db5c63360 (Pekka Enberg                   2009-06-12 15:58:59 +0300 1284) void __init kmem_cache_init_late(void)
8429db5c63360 (Pekka Enberg                   2009-06-12 15:58:59 +0300 1285) {
8429db5c63360 (Pekka Enberg                   2009-06-12 15:58:59 +0300 1286) 	struct kmem_cache *cachep;
8429db5c63360 (Pekka Enberg                   2009-06-12 15:58:59 +0300 1287) 
8429db5c63360 (Pekka Enberg                   2009-06-12 15:58:59 +0300 1288) 	/* 6) resize the head arrays to their final sizes */
18004c5d4084d (Christoph Lameter              2012-07-06 15:25:12 -0500 1289) 	mutex_lock(&slab_mutex);
18004c5d4084d (Christoph Lameter              2012-07-06 15:25:12 -0500 1290) 	list_for_each_entry(cachep, &slab_caches, list)
8429db5c63360 (Pekka Enberg                   2009-06-12 15:58:59 +0300 1291) 		if (enable_cpucache(cachep, GFP_NOWAIT))
8429db5c63360 (Pekka Enberg                   2009-06-12 15:58:59 +0300 1292) 			BUG();
18004c5d4084d (Christoph Lameter              2012-07-06 15:25:12 -0500 1293) 	mutex_unlock(&slab_mutex);
056c62418cc63 (Ravikiran G Thirumalai         2006-09-25 23:31:38 -0700 1294) 
97d06609158e6 (Christoph Lameter              2012-07-06 15:25:11 -0500 1295) 	/* Done! */
97d06609158e6 (Christoph Lameter              2012-07-06 15:25:11 -0500 1296) 	slab_state = FULL;
97d06609158e6 (Christoph Lameter              2012-07-06 15:25:11 -0500 1297) 
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1298) #ifdef CONFIG_NUMA
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1299) 	/*
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1300) 	 * Register a memory hotplug callback that initializes and frees
6a67368c36e2c (Christoph Lameter              2013-01-10 19:14:19 +0000 1301) 	 * kmem_cache_node structures.
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1302) 	 */
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1303) 	hotplug_memory_notifier(slab_memory_callback, SLAB_CALLBACK_PRI);
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1304) #endif
8f9f8d9e8080a (David Rientjes                 2010-03-27 19:40:47 -0700 1305) 
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 1306) 	/*
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 1307) 	 * The reap timers are started later, with a module init call: That part
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 1308) 	 * of the kernel is not yet operational.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1309) 	 */
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1310) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1311) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1312) static int __init cpucache_init(void)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1313) {
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1314) 	int ret;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1315) 
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 1316) 	/*
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 1317) 	 * Register the timers that return unneeded pages to the page allocator
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1318) 	 */
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1319) 	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "SLAB online",
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1320) 				slab_online_cpu, slab_offline_cpu);
6731d4f12315a (Sebastian Andrzej Siewior      2016-08-23 14:53:19 +0200 1321) 	WARN_ON(ret < 0);
a164f89628fa8 (Glauber Costa                  2012-06-21 00:59:18 +0400 1322) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1323) 	return 0;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1324) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1325) __initcall(cpucache_init);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1326) 
8bdec192b40cf (Rafael Aquini                  2012-03-09 17:27:27 -0300 1327) static noinline void
8bdec192b40cf (Rafael Aquini                  2012-03-09 17:27:27 -0300 1328) slab_out_of_memory(struct kmem_cache *cachep, gfp_t gfpflags, int nodeid)
8bdec192b40cf (Rafael Aquini                  2012-03-09 17:27:27 -0300 1329) {
9a02d699935c9 (David Rientjes                 2014-06-04 16:06:36 -0700 1330) #if DEBUG
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 1331) 	struct kmem_cache_node *n;
8bdec192b40cf (Rafael Aquini                  2012-03-09 17:27:27 -0300 1332) 	unsigned long flags;
8bdec192b40cf (Rafael Aquini                  2012-03-09 17:27:27 -0300 1333) 	int node;
9a02d699935c9 (David Rientjes                 2014-06-04 16:06:36 -0700 1334) 	static DEFINE_RATELIMIT_STATE(slab_oom_rs, DEFAULT_RATELIMIT_INTERVAL,
9a02d699935c9 (David Rientjes                 2014-06-04 16:06:36 -0700 1335) 				      DEFAULT_RATELIMIT_BURST);
9a02d699935c9 (David Rientjes                 2014-06-04 16:06:36 -0700 1336) 
9a02d699935c9 (David Rientjes                 2014-06-04 16:06:36 -0700 1337) 	if ((gfpflags & __GFP_NOWARN) || !__ratelimit(&slab_oom_rs))
9a02d699935c9 (David Rientjes                 2014-06-04 16:06:36 -0700 1338) 		return;
8bdec192b40cf (Rafael Aquini                  2012-03-09 17:27:27 -0300 1339) 
5b3810e5c6e1b (Vlastimil Babka                2016-03-15 14:56:33 -0700 1340) 	pr_warn("SLAB: Unable to allocate memory on node %d, gfp=%#x(%pGg)\n",
5b3810e5c6e1b (Vlastimil Babka                2016-03-15 14:56:33 -0700 1341) 		nodeid, gfpflags, &gfpflags);
5b3810e5c6e1b (Vlastimil Babka                2016-03-15 14:56:33 -0700 1342) 	pr_warn("  cache: %s, object size: %d, order: %d\n",
3b0efdfa1e719 (Christoph Lameter              2012-06-13 10:24:57 -0500 1343) 		cachep->name, cachep->size, cachep->gfporder);
8bdec192b40cf (Rafael Aquini                  2012-03-09 17:27:27 -0300 1344) 
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 1345) 	for_each_kmem_cache_node(cachep, node, n) {
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 1346) 		unsigned long total_slabs, free_slabs, free_objs;
8bdec192b40cf (Rafael Aquini                  2012-03-09 17:27:27 -0300 1347) 
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 1348) 		spin_lock_irqsave(&n->list_lock, flags);
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 1349) 		total_slabs = n->total_slabs;
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 1350) 		free_slabs = n->free_slabs;
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 1351) 		free_objs = n->free_objects;
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 1352) 		spin_unlock_irqrestore(&n->list_lock, flags);
8bdec192b40cf (Rafael Aquini                  2012-03-09 17:27:27 -0300 1353) 
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 1354) 		pr_warn("  node %d: slabs: %ld/%ld, objs: %ld/%ld\n",
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 1355) 			node, total_slabs - free_slabs, total_slabs,
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 1356) 			(total_slabs * cachep->num) - free_objs,
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 1357) 			total_slabs * cachep->num);
8bdec192b40cf (Rafael Aquini                  2012-03-09 17:27:27 -0300 1358) 	}
9a02d699935c9 (David Rientjes                 2014-06-04 16:06:36 -0700 1359) #endif
8bdec192b40cf (Rafael Aquini                  2012-03-09 17:27:27 -0300 1360) }
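
/*
 * Sketch (not part of slab.c): the warning-throttle idiom used above.
 * DEFINE_RATELIMIT_STATE() allows a burst of messages per interval and
 * __ratelimit() returns false once the burst is exhausted, which is the
 * same test slab_out_of_memory() applies. Hypothetical names:
 */
static void sketch_ratelimited_warn(void)
{
	static DEFINE_RATELIMIT_STATE(sketch_rs, DEFAULT_RATELIMIT_INTERVAL,
				      DEFAULT_RATELIMIT_BURST);

	if (!__ratelimit(&sketch_rs))
		return;		/* burst exhausted: message suppressed */

	pr_warn("sketch: rate-limited warning\n");
}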
8bdec192b40cf (Rafael Aquini                  2012-03-09 17:27:27 -0300 1361) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1362) /*
8a7d9b4306258 (Wang Sheng-Hui                 2014-08-06 16:04:46 -0700 1363)  * Interface to system's page allocator. No need to hold the
8a7d9b4306258 (Wang Sheng-Hui                 2014-08-06 16:04:46 -0700 1364)  * kmem_cache_node ->list_lock.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1365)  *
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1366)  * If we requested dmaable memory, we will get it. Even if we
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1367)  * did not request dmaable memory, we might get it, but that
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1368)  * would be relatively rare and ignorable.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1369)  */
0c3aa83e00a9c (Joonsoo Kim                    2013-10-24 10:07:38 +0900 1370) static struct page *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
0c3aa83e00a9c (Joonsoo Kim                    2013-10-24 10:07:38 +0900 1371) 								int nodeid)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1372) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1373) 	struct page *page;
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700 1374) 
a618e89f1e6fb (Glauber Costa                  2012-06-14 16:17:21 +0400 1375) 	flags |= cachep->allocflags;
e1b6aa6f1404f (Christoph Hellwig              2006-06-23 02:03:17 -0700 1376) 
75f296d93bceb (Levin, Alexander (Sasha Levin) 2017-11-15 17:35:54 -0800 1377) 	page = __alloc_pages_node(nodeid, flags, cachep->gfporder);
8bdec192b40cf (Rafael Aquini                  2012-03-09 17:27:27 -0300 1378) 	if (!page) {
9a02d699935c9 (David Rientjes                 2014-06-04 16:06:36 -0700 1379) 		slab_out_of_memory(cachep, flags, nodeid);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1380) 		return NULL;
8bdec192b40cf (Rafael Aquini                  2012-03-09 17:27:27 -0300 1381) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1382) 
2e9bd48315993 (Roman Gushchin                 2021-02-24 12:03:11 -0800 1383) 	account_slab_page(page, cachep->gfporder, cachep, flags);
a57a49887eb33 (Joonsoo Kim                    2013-10-24 10:07:44 +0900 1384) 	__SetPageSlab(page);
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 1385) 	/* Record if ALLOC_NO_WATERMARKS was set when allocating the slab */
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 1386) 	if (sk_memalloc_socks() && page_is_pfmemalloc(page))
a57a49887eb33 (Joonsoo Kim                    2013-10-24 10:07:44 +0900 1387) 		SetPageSlabPfmemalloc(page);
072bb0aa5e062 (Mel Gorman                     2012-07-31 16:43:58 -0700 1388) 
0c3aa83e00a9c (Joonsoo Kim                    2013-10-24 10:07:38 +0900 1389) 	return page;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1390) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1391) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1392) /*
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1393)  * Interface to system's page release.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1394)  */
0c3aa83e00a9c (Joonsoo Kim                    2013-10-24 10:07:38 +0900 1395) static void kmem_freepages(struct kmem_cache *cachep, struct page *page)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1396) {
27ee57c93ff00 (Vladimir Davydov               2016-03-17 14:17:35 -0700 1397) 	int order = cachep->gfporder;
73293c2f900d0 (Joonsoo Kim                    2013-10-24 10:07:37 +0900 1398) 
a57a49887eb33 (Joonsoo Kim                    2013-10-24 10:07:44 +0900 1399) 	BUG_ON(!PageSlab(page));
73293c2f900d0 (Joonsoo Kim                    2013-10-24 10:07:37 +0900 1400) 	__ClearPageSlabPfmemalloc(page);
a57a49887eb33 (Joonsoo Kim                    2013-10-24 10:07:44 +0900 1401) 	__ClearPageSlab(page);
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 1402) 	page_mapcount_reset(page);
0c06dd7551432 (Vlastimil Babka                2020-12-14 19:04:29 -0800 1403) 	/* In union with page->mapping where page allocator expects NULL */
0c06dd7551432 (Vlastimil Babka                2020-12-14 19:04:29 -0800 1404) 	page->slab_cache = NULL;
1f458cbf12228 (Glauber Costa                  2012-12-18 14:22:50 -0800 1405) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1406) 	if (current->reclaim_state)
6cea1d569d24a (Roman Gushchin                 2019-07-11 20:56:16 -0700 1407) 		current->reclaim_state->reclaimed_slab += 1 << order;
74d555bed5d0f (Roman Gushchin                 2020-08-06 23:21:44 -0700 1408) 	unaccount_slab_page(page, order, cachep);
27ee57c93ff00 (Vladimir Davydov               2016-03-17 14:17:35 -0700 1409) 	__free_pages(page, order);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1410) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1411) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1412) static void kmem_rcu_free(struct rcu_head *head)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1413) {
68126702b419f (Joonsoo Kim                    2013-10-24 10:07:42 +0900 1414) 	struct kmem_cache *cachep;
68126702b419f (Joonsoo Kim                    2013-10-24 10:07:42 +0900 1415) 	struct page *page;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1416) 
68126702b419f (Joonsoo Kim                    2013-10-24 10:07:42 +0900 1417) 	page = container_of(head, struct page, rcu_head);
68126702b419f (Joonsoo Kim                    2013-10-24 10:07:42 +0900 1418) 	cachep = page->slab_cache;
68126702b419f (Joonsoo Kim                    2013-10-24 10:07:42 +0900 1419) 
68126702b419f (Joonsoo Kim                    2013-10-24 10:07:42 +0900 1420) 	kmem_freepages(cachep, page);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1421) }
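
/*
 * Sketch (not part of slab.c): the call_rcu()/container_of() pattern
 * used by kmem_rcu_free() above. The rcu_head is embedded in the object
 * being freed and the callback recovers the enclosing object from it
 * once a grace period has elapsed. Hypothetical type and names:
 */
struct sketch_rcu_obj {
	int payload;
	struct rcu_head rcu;
};

static void sketch_rcu_cb(struct rcu_head *head)
{
	struct sketch_rcu_obj *obj =
		container_of(head, struct sketch_rcu_obj, rcu);

	kfree(obj);	/* all pre-existing RCU readers have finished */
}

static void sketch_free_obj(struct sketch_rcu_obj *obj)
{
	call_rcu(&obj->rcu, sketch_rcu_cb);
}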
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1422) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1423) #if DEBUG
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 1424) static bool is_debug_pagealloc_cache(struct kmem_cache *cachep)
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 1425) {
8e57f8acbbd12 (Vlastimil Babka                2020-01-13 16:29:20 -0800 1426) 	if (debug_pagealloc_enabled_static() && OFF_SLAB(cachep) &&
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 1427) 		(cachep->size % PAGE_SIZE) == 0)
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 1428) 		return true;
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 1429) 
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 1430) 	return false;
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 1431) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1432) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1433) #ifdef CONFIG_DEBUG_PAGEALLOC
80552f0f7aebd (Qian Cai                       2019-04-16 10:22:57 -0400 1434) static void slab_kernel_map(struct kmem_cache *cachep, void *objp, int map)
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 1435) {
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 1436) 	if (!is_debug_pagealloc_cache(cachep))
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 1437) 		return;
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 1438) 
77bc7fd607dee (Mike Rapoport                  2020-12-14 19:10:20 -0800 1439) 	__kernel_map_pages(virt_to_page(objp), cachep->size / PAGE_SIZE, map);
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 1440) }
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 1441) 
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 1442) #else
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 1443) static inline void slab_kernel_map(struct kmem_cache *cachep, void *objp,
80552f0f7aebd (Qian Cai                       2019-04-16 10:22:57 -0400 1444) 				int map) {}
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 1445) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1446) #endif
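
/*
 * Note (not part of slab.c): under CONFIG_DEBUG_PAGEALLOC the helper
 * above toggles the kernel mapping of an object's pages -- map=0 when
 * the object is freed so a use-after-free faults immediately, map=1
 * before it is inspected or handed out again. The mapping can only be
 * changed page-wise, which is why is_debug_pagealloc_cache() restricts
 * this to off-slab caches whose size is a multiple of PAGE_SIZE.
 * Life-cycle sketch (hypothetical function):
 */
static void sketch_debug_pagealloc(struct kmem_cache *cachep, void *objp)
{
	slab_kernel_map(cachep, objp, 0);  /* freed: accesses now fault */
	/* ... object parked on the freelist, poison left intact ... */
	slab_kernel_map(cachep, objp, 1);  /* allocated: mapped back in */
}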
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1447) 
343e0d7a93951 (Pekka Enberg                   2006-02-01 03:05:50 -0800 1448) static void poison_obj(struct kmem_cache *cachep, void *addr, unsigned char val)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1449) {
8c138bc009255 (Christoph Lameter              2012-06-13 10:24:58 -0500 1450) 	int size = cachep->object_size;
3dafccf227514 (Manfred Spraul                 2006-02-01 03:05:42 -0800 1451) 	addr = &((char *)addr)[obj_offset(cachep)];
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1452) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1453) 	memset(addr, val, size);
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 1454) 	*(unsigned char *)(addr + size - 1) = POISON_END;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1455) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1456) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1457) static void dump_line(char *data, int offset, int limit)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1458) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1459) 	int i;
aa83aa40ed2ae (Dave Jones                     2006-09-29 01:59:51 -0700 1460) 	unsigned char error = 0;
aa83aa40ed2ae (Dave Jones                     2006-09-29 01:59:51 -0700 1461) 	int bad_count = 0;
aa83aa40ed2ae (Dave Jones                     2006-09-29 01:59:51 -0700 1462) 
1170532bb49f9 (Joe Perches                    2016-03-17 14:19:50 -0700 1463) 	pr_err("%03x: ", offset);
aa83aa40ed2ae (Dave Jones                     2006-09-29 01:59:51 -0700 1464) 	for (i = 0; i < limit; i++) {
aa83aa40ed2ae (Dave Jones                     2006-09-29 01:59:51 -0700 1465) 		if (data[offset + i] != POISON_FREE) {
aa83aa40ed2ae (Dave Jones                     2006-09-29 01:59:51 -0700 1466) 			error = data[offset + i];
aa83aa40ed2ae (Dave Jones                     2006-09-29 01:59:51 -0700 1467) 			bad_count++;
aa83aa40ed2ae (Dave Jones                     2006-09-29 01:59:51 -0700 1468) 		}
aa83aa40ed2ae (Dave Jones                     2006-09-29 01:59:51 -0700 1469) 	}
fdde6abb3e8dd (Sebastian Andrzej Siewior      2011-07-29 18:22:13 +0200 1470) 	print_hex_dump(KERN_CONT, "", 0, 16, 1,
fdde6abb3e8dd (Sebastian Andrzej Siewior      2011-07-29 18:22:13 +0200 1471) 			&data[offset], limit, 1);
aa83aa40ed2ae (Dave Jones                     2006-09-29 01:59:51 -0700 1472) 
aa83aa40ed2ae (Dave Jones                     2006-09-29 01:59:51 -0700 1473) 	if (bad_count == 1) {
aa83aa40ed2ae (Dave Jones                     2006-09-29 01:59:51 -0700 1474) 		error ^= POISON_FREE;
aa83aa40ed2ae (Dave Jones                     2006-09-29 01:59:51 -0700 1475) 		if (!(error & (error - 1))) {
1170532bb49f9 (Joe Perches                    2016-03-17 14:19:50 -0700 1476) 			pr_err("Single bit error detected. Probably bad RAM.\n");
aa83aa40ed2ae (Dave Jones                     2006-09-29 01:59:51 -0700 1477) #ifdef CONFIG_X86
1170532bb49f9 (Joe Perches                    2016-03-17 14:19:50 -0700 1478) 			pr_err("Run memtest86+ or a similar memory test tool.\n");
aa83aa40ed2ae (Dave Jones                     2006-09-29 01:59:51 -0700 1479) #else
1170532bb49f9 (Joe Perches                    2016-03-17 14:19:50 -0700 1480) 			pr_err("Run a memory test tool.\n");
aa83aa40ed2ae (Dave Jones                     2006-09-29 01:59:51 -0700 1481) #endif
aa83aa40ed2ae (Dave Jones                     2006-09-29 01:59:51 -0700 1482) 		}
aa83aa40ed2ae (Dave Jones                     2006-09-29 01:59:51 -0700 1483) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1484) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1485) #endif
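
/*
 * Note (not part of slab.c): the "!(error & (error - 1))" test in
 * dump_line() above. After xoring with POISON_FREE, 'error' holds
 * exactly the bits that differ from the poison pattern, and
 * x & (x - 1) clears the lowest set bit, so the expression is zero iff
 * at most one bit differs -- a single flipped bit suggests bad RAM
 * rather than a software overwrite. As a standalone check:
 */
static bool sketch_single_bit_error(unsigned char seen, unsigned char expected)
{
	unsigned char diff = seen ^ expected;	/* differing bits */

	return diff && !(diff & (diff - 1));	/* exactly one bit set? */
}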
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1486) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1487) #if DEBUG
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1488) 
343e0d7a93951 (Pekka Enberg                   2006-02-01 03:05:50 -0800 1489) static void print_objinfo(struct kmem_cache *cachep, void *objp, int lines)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1490) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1491) 	int i, size;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1492) 	char *realobj;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1493) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1494) 	if (cachep->flags & SLAB_RED_ZONE) {
1170532bb49f9 (Joe Perches                    2016-03-17 14:19:50 -0700 1495) 		pr_err("Redzone: 0x%llx/0x%llx\n",
1170532bb49f9 (Joe Perches                    2016-03-17 14:19:50 -0700 1496) 		       *dbg_redzone1(cachep, objp),
1170532bb49f9 (Joe Perches                    2016-03-17 14:19:50 -0700 1497) 		       *dbg_redzone2(cachep, objp));
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1498) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1499) 
85c3e4a5a185f (Geert Uytterhoeven             2017-12-14 15:32:58 -0800 1500) 	if (cachep->flags & SLAB_STORE_USER)
85c3e4a5a185f (Geert Uytterhoeven             2017-12-14 15:32:58 -0800 1501) 		pr_err("Last user: (%pSR)\n", *dbg_userword(cachep, objp));
3dafccf227514 (Manfred Spraul                 2006-02-01 03:05:42 -0800 1502) 	realobj = (char *)objp + obj_offset(cachep);
8c138bc009255 (Christoph Lameter              2012-06-13 10:24:58 -0500 1503) 	size = cachep->object_size;
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 1504) 	for (i = 0; i < size && lines; i += 16, lines--) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1505) 		int limit;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1506) 		limit = 16;
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 1507) 		if (i + limit > size)
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 1508) 			limit = size - i;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1509) 		dump_line(realobj, i, limit);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1510) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1511) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1512) 
343e0d7a93951 (Pekka Enberg                   2006-02-01 03:05:50 -0800 1513) static void check_poison_obj(struct kmem_cache *cachep, void *objp)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1514) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1515) 	char *realobj;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1516) 	int size, i;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1517) 	int lines = 0;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1518) 
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 1519) 	if (is_debug_pagealloc_cache(cachep))
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 1520) 		return;
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 1521) 
3dafccf227514 (Manfred Spraul                 2006-02-01 03:05:42 -0800 1522) 	realobj = (char *)objp + obj_offset(cachep);
8c138bc009255 (Christoph Lameter              2012-06-13 10:24:58 -0500 1523) 	size = cachep->object_size;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1524) 
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 1525) 	for (i = 0; i < size; i++) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1526) 		char exp = POISON_FREE;
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 1527) 		if (i == size - 1)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1528) 			exp = POISON_END;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1529) 		if (realobj[i] != exp) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1530) 			int limit;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1531) 			/* Mismatch! */
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1532) 			/* Print header */
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1533) 			if (lines == 0) {
85c3e4a5a185f (Geert Uytterhoeven             2017-12-14 15:32:58 -0800 1534) 				pr_err("Slab corruption (%s): %s start=%px, len=%d\n",
1170532bb49f9 (Joe Perches                    2016-03-17 14:19:50 -0700 1535) 				       print_tainted(), cachep->name,
1170532bb49f9 (Joe Perches                    2016-03-17 14:19:50 -0700 1536) 				       realobj, size);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1537) 				print_objinfo(cachep, objp, 0);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1538) 			}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1539) 			/* Hexdump the affected line */
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 1540) 			i = (i / 16) * 16;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1541) 			limit = 16;
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 1542) 			if (i + limit > size)
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 1543) 				limit = size - i;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1544) 			dump_line(realobj, i, limit);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1545) 			i += 16;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1546) 			lines++;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1547) 			/* Limit to 5 lines */
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1548) 			if (lines > 5)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1549) 				break;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1550) 		}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1551) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1552) 	if (lines != 0) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1553) 		/* Print some data about the neighboring objects, if they
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1554) 		 * exist:
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1555) 		 */
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 1556) 		struct page *page = virt_to_head_page(objp);
8fea4e96a8f29 (Pekka Enberg                   2006-03-22 00:08:10 -0800 1557) 		unsigned int objnr;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1558) 
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 1559) 		objnr = obj_to_index(cachep, page, objp);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1560) 		if (objnr) {
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 1561) 			objp = index_to_obj(cachep, page, objnr - 1);
3dafccf227514 (Manfred Spraul                 2006-02-01 03:05:42 -0800 1562) 			realobj = (char *)objp + obj_offset(cachep);
85c3e4a5a185f (Geert Uytterhoeven             2017-12-14 15:32:58 -0800 1563) 			pr_err("Prev obj: start=%px, len=%d\n", realobj, size);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1564) 			print_objinfo(cachep, objp, 2);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1565) 		}
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 1566) 		if (objnr + 1 < cachep->num) {
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 1567) 			objp = index_to_obj(cachep, page, objnr + 1);
3dafccf227514 (Manfred Spraul                 2006-02-01 03:05:42 -0800 1568) 			realobj = (char *)objp + obj_offset(cachep);
85c3e4a5a185f (Geert Uytterhoeven             2017-12-14 15:32:58 -0800 1569) 			pr_err("Next obj: start=%px, len=%d\n", realobj, size);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1570) 			print_objinfo(cachep, objp, 2);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1571) 		}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1572) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1573) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1574) #endif
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1575) 
12dd36faec5d3 (Matthew Dobson                 2006-02-01 03:05:46 -0800 1576) #if DEBUG
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 1577) static void slab_destroy_debugcheck(struct kmem_cache *cachep,
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 1578) 						struct page *page)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1579) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1580) 	int i;
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1581) 
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1582) 	if (OBJFREELIST_SLAB(cachep) && cachep->flags & SLAB_POISON) {
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1583) 		poison_obj(cachep, page->freelist - obj_offset(cachep),
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1584) 			POISON_FREE);
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1585) 	}
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1586) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1587) 	for (i = 0; i < cachep->num; i++) {
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 1588) 		void *objp = index_to_obj(cachep, page, i);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1589) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1590) 		if (cachep->flags & SLAB_POISON) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1591) 			check_poison_obj(cachep, objp);
80552f0f7aebd (Qian Cai                       2019-04-16 10:22:57 -0400 1592) 			slab_kernel_map(cachep, objp, 1);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1593) 		}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1594) 		if (cachep->flags & SLAB_RED_ZONE) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1595) 			if (*dbg_redzone1(cachep, objp) != RED_INACTIVE)
756a025f00091 (Joe Perches                    2016-03-17 14:19:47 -0700 1596) 				slab_error(cachep, "start of a freed object was overwritten");
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1597) 			if (*dbg_redzone2(cachep, objp) != RED_INACTIVE)
756a025f00091 (Joe Perches                    2016-03-17 14:19:47 -0700 1598) 				slab_error(cachep, "end of a freed object was overwritten");
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1599) 		}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1600) 	}
12dd36faec5d3 (Matthew Dobson                 2006-02-01 03:05:46 -0800 1601) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1602) #else
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 1603) static void slab_destroy_debugcheck(struct kmem_cache *cachep,
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 1604) 						struct page *page)
12dd36faec5d3 (Matthew Dobson                 2006-02-01 03:05:46 -0800 1605) {
12dd36faec5d3 (Matthew Dobson                 2006-02-01 03:05:46 -0800 1606) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1607) #endif
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1608) 
911851e6ee6ac (Randy Dunlap                   2006-03-22 00:08:14 -0800 1609) /**
911851e6ee6ac (Randy Dunlap                   2006-03-22 00:08:14 -0800 1610)  * slab_destroy - destroy and release all objects in a slab
911851e6ee6ac (Randy Dunlap                   2006-03-22 00:08:14 -0800 1611)  * @cachep: cache pointer being destroyed
cb8ee1a3d429f (Masanari Iida                  2014-01-28 02:57:08 +0900 1612)  * @page: page pointer being destroyed
911851e6ee6ac (Randy Dunlap                   2006-03-22 00:08:14 -0800 1613)  *
8a7d9b4306258 (Wang Sheng-Hui                 2014-08-06 16:04:46 -0700 1614)  * Destroy all the objs in a slab page, and release the mem back to the system.
8a7d9b4306258 (Wang Sheng-Hui                 2014-08-06 16:04:46 -0700 1615)  * Before calling, the slab page must have been unlinked from the cache. The
8a7d9b4306258 (Wang Sheng-Hui                 2014-08-06 16:04:46 -0700 1616)  * kmem_cache_node ->list_lock is not held/needed.
12dd36faec5d3 (Matthew Dobson                 2006-02-01 03:05:46 -0800 1617)  */
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 1618) static void slab_destroy(struct kmem_cache *cachep, struct page *page)
12dd36faec5d3 (Matthew Dobson                 2006-02-01 03:05:46 -0800 1619) {
7e00735520ffb (Joonsoo Kim                    2013-10-30 19:04:01 +0900 1620) 	void *freelist;
12dd36faec5d3 (Matthew Dobson                 2006-02-01 03:05:46 -0800 1621) 
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 1622) 	freelist = page->freelist;
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 1623) 	slab_destroy_debugcheck(cachep, page);
5f0d5a3ae7cff (Paul E. McKenney               2017-01-18 02:53:44 -0800 1624) 	if (unlikely(cachep->flags & SLAB_TYPESAFE_BY_RCU))
bc4f610d5a884 (Kirill A. Shutemov             2015-11-06 16:29:44 -0800 1625) 		call_rcu(&page->rcu_head, kmem_rcu_free);
bc4f610d5a884 (Kirill A. Shutemov             2015-11-06 16:29:44 -0800 1626) 	else
0c3aa83e00a9c (Joonsoo Kim                    2013-10-24 10:07:38 +0900 1627) 		kmem_freepages(cachep, page);
68126702b419f (Joonsoo Kim                    2013-10-24 10:07:42 +0900 1628) 
68126702b419f (Joonsoo Kim                    2013-10-24 10:07:42 +0900 1629) 	/*
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 1630) 	 * From now on, we don't use the freelist, although the actual page
68126702b419f (Joonsoo Kim                    2013-10-24 10:07:42 +0900 1631) 	 * can still be freed in RCU context.
68126702b419f (Joonsoo Kim                    2013-10-24 10:07:42 +0900 1632) 	 */
68126702b419f (Joonsoo Kim                    2013-10-24 10:07:42 +0900 1633) 	if (OFF_SLAB(cachep))
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 1634) 		kmem_cache_free(cachep->freelist_cache, freelist);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1635) }
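/*
 * Illustrative sketch (an addition, not part of the original file): for a
 * cache created with SLAB_TYPESAFE_BY_RCU, the call_rcu() branch above
 * returns the slab page to the page allocator only after a grace period,
 * so an RCU reader may validate a possibly recycled object instead of
 * having the memory freed out from under it. "struct foo", "foo_ptr",
 * "use()" and the "refcnt" field are hypothetical names.
 *
 *	struct kmem_cache *foo_cache;
 *
 *	foo_cache = kmem_cache_create("foo", sizeof(struct foo), 0,
 *				      SLAB_TYPESAFE_BY_RCU, NULL);
 *
 *	rcu_read_lock();
 *	obj = rcu_dereference(foo_ptr);
 *	if (obj && atomic_inc_not_zero(&obj->refcnt))
 *		use(obj);	(the object's type stays stable here)
 *	rcu_read_unlock();
 */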
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1636) 
678ff6a7afccc (Shakeel Butt                   2020-09-26 07:13:41 -0700 1637) /*
678ff6a7afccc (Shakeel Butt                   2020-09-26 07:13:41 -0700 1638)  * Update the size of the caches before calling slabs_destroy as it may
678ff6a7afccc (Shakeel Butt                   2020-09-26 07:13:41 -0700 1639)  * recursively call kfree.
678ff6a7afccc (Shakeel Butt                   2020-09-26 07:13:41 -0700 1640)  */
97654dfa20caa (Joonsoo Kim                    2014-08-06 16:04:25 -0700 1641) static void slabs_destroy(struct kmem_cache *cachep, struct list_head *list)
97654dfa20caa (Joonsoo Kim                    2014-08-06 16:04:25 -0700 1642) {
97654dfa20caa (Joonsoo Kim                    2014-08-06 16:04:25 -0700 1643) 	struct page *page, *n;
97654dfa20caa (Joonsoo Kim                    2014-08-06 16:04:25 -0700 1644) 
16cb0ec75b346 (Tobin C. Harding               2019-05-13 17:16:15 -0700 1645) 	list_for_each_entry_safe(page, n, list, slab_list) {
16cb0ec75b346 (Tobin C. Harding               2019-05-13 17:16:15 -0700 1646) 		list_del(&page->slab_list);
97654dfa20caa (Joonsoo Kim                    2014-08-06 16:04:25 -0700 1647) 		slab_destroy(cachep, page);
97654dfa20caa (Joonsoo Kim                    2014-08-06 16:04:25 -0700 1648) 	}
97654dfa20caa (Joonsoo Kim                    2014-08-06 16:04:25 -0700 1649) }
97654dfa20caa (Joonsoo Kim                    2014-08-06 16:04:25 -0700 1650) 
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1651) /**
a70773ddb96b7 (Randy Dunlap                   2006-02-01 03:05:52 -0800 1652)  * calculate_slab_order - calculate size (page order) of slabs
a70773ddb96b7 (Randy Dunlap                   2006-02-01 03:05:52 -0800 1653)  * @cachep: pointer to the cache that is being created
a70773ddb96b7 (Randy Dunlap                   2006-02-01 03:05:52 -0800 1654)  * @size: size of objects to be created in this cache.
a70773ddb96b7 (Randy Dunlap                   2006-02-01 03:05:52 -0800 1655)  * @flags: slab allocation flags
a70773ddb96b7 (Randy Dunlap                   2006-02-01 03:05:52 -0800 1656)  *
a70773ddb96b7 (Randy Dunlap                   2006-02-01 03:05:52 -0800 1657)  * Also calculates the number of objects per slab.
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1658)  *
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1659)  * This could be made much more intelligent.  For now, try to avoid using
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1660)  * high-order pages for slabs.  When the gfp() functions are more friendly
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1661)  * towards high-order requests, this should be changed.
a862f68a8b360 (Mike Rapoport                  2019-03-05 15:48:42 -0800 1662)  *
a862f68a8b360 (Mike Rapoport                  2019-03-05 15:48:42 -0800 1663)  * Return: number of left-over bytes in a slab
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1664)  */
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 1665) static size_t calculate_slab_order(struct kmem_cache *cachep,
d50112edde1d0 (Alexey Dobriyan                2017-11-15 17:32:18 -0800 1666) 				size_t size, slab_flags_t flags)
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1667) {
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1668) 	size_t left_over = 0;
9888e6fa7b68d (Linus Torvalds                 2006-03-06 17:44:43 -0800 1669) 	int gfporder;
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1670) 
0aa817f078b65 (Christoph Lameter              2007-05-16 22:11:01 -0700 1671) 	for (gfporder = 0; gfporder <= KMALLOC_MAX_ORDER; gfporder++) {
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1672) 		unsigned int num;
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1673) 		size_t remainder;
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1674) 
70f75067b1565 (Joonsoo Kim                    2016-03-15 14:54:53 -0700 1675) 		num = cache_estimate(gfporder, size, flags, &remainder);
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1676) 		if (!num)
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1677) 			continue;
9888e6fa7b68d (Linus Torvalds                 2006-03-06 17:44:43 -0800 1678) 
f315e3fa1cf5b (Joonsoo Kim                    2013-12-02 17:49:41 +0900 1679) 		/* Can't handle number of objects more than SLAB_OBJ_MAX_NUM */
f315e3fa1cf5b (Joonsoo Kim                    2013-12-02 17:49:41 +0900 1680) 		if (num > SLAB_OBJ_MAX_NUM)
f315e3fa1cf5b (Joonsoo Kim                    2013-12-02 17:49:41 +0900 1681) 			break;
f315e3fa1cf5b (Joonsoo Kim                    2013-12-02 17:49:41 +0900 1682) 
b1ab41c494300 (Ingo Molnar                    2006-06-02 15:44:58 +0200 1683) 		if (flags & CFLGS_OFF_SLAB) {
3217fd9bdf001 (Joonsoo Kim                    2016-03-15 14:54:41 -0700 1684) 			struct kmem_cache *freelist_cache;
3217fd9bdf001 (Joonsoo Kim                    2016-03-15 14:54:41 -0700 1685) 			size_t freelist_size;
3217fd9bdf001 (Joonsoo Kim                    2016-03-15 14:54:41 -0700 1686) 
3217fd9bdf001 (Joonsoo Kim                    2016-03-15 14:54:41 -0700 1687) 			freelist_size = num * sizeof(freelist_idx_t);
3217fd9bdf001 (Joonsoo Kim                    2016-03-15 14:54:41 -0700 1688) 			freelist_cache = kmalloc_slab(freelist_size, 0u);
3217fd9bdf001 (Joonsoo Kim                    2016-03-15 14:54:41 -0700 1689) 			if (!freelist_cache)
3217fd9bdf001 (Joonsoo Kim                    2016-03-15 14:54:41 -0700 1690) 				continue;
3217fd9bdf001 (Joonsoo Kim                    2016-03-15 14:54:41 -0700 1691) 
b1ab41c494300 (Ingo Molnar                    2006-06-02 15:44:58 +0200 1692) 			/*
3217fd9bdf001 (Joonsoo Kim                    2016-03-15 14:54:41 -0700 1693) 			 * Needed to avoid a possible looping condition
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 1694) 			 * in cache_grow_begin()
b1ab41c494300 (Ingo Molnar                    2006-06-02 15:44:58 +0200 1695) 			 */
3217fd9bdf001 (Joonsoo Kim                    2016-03-15 14:54:41 -0700 1696) 			if (OFF_SLAB(freelist_cache))
3217fd9bdf001 (Joonsoo Kim                    2016-03-15 14:54:41 -0700 1697) 				continue;
b1ab41c494300 (Ingo Molnar                    2006-06-02 15:44:58 +0200 1698) 
3217fd9bdf001 (Joonsoo Kim                    2016-03-15 14:54:41 -0700 1699) 			/* check if off-slab management has enough benefit */
3217fd9bdf001 (Joonsoo Kim                    2016-03-15 14:54:41 -0700 1700) 			if (freelist_cache->size > cachep->size / 2)
3217fd9bdf001 (Joonsoo Kim                    2016-03-15 14:54:41 -0700 1701) 				continue;
b1ab41c494300 (Ingo Molnar                    2006-06-02 15:44:58 +0200 1702) 		}
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1703) 
9888e6fa7b68d (Linus Torvalds                 2006-03-06 17:44:43 -0800 1704) 		/* Found something acceptable - save it away */
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1705) 		cachep->num = num;
9888e6fa7b68d (Linus Torvalds                 2006-03-06 17:44:43 -0800 1706) 		cachep->gfporder = gfporder;
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1707) 		left_over = remainder;
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1708) 
f78bb8ad48226 (Linus Torvalds                 2006-03-08 10:33:05 -0800 1709) 		/*
f78bb8ad48226 (Linus Torvalds                 2006-03-08 10:33:05 -0800 1710) 		 * A VFS-reclaimable slab tends to have most allocations
f78bb8ad48226 (Linus Torvalds                 2006-03-08 10:33:05 -0800 1711) 		 * as GFP_NOFS and we really don't want to have to be allocating
f78bb8ad48226 (Linus Torvalds                 2006-03-08 10:33:05 -0800 1712) 		 * higher-order pages when we are unable to shrink dcache.
f78bb8ad48226 (Linus Torvalds                 2006-03-08 10:33:05 -0800 1713) 		 */
f78bb8ad48226 (Linus Torvalds                 2006-03-08 10:33:05 -0800 1714) 		if (flags & SLAB_RECLAIM_ACCOUNT)
f78bb8ad48226 (Linus Torvalds                 2006-03-08 10:33:05 -0800 1715) 			break;
f78bb8ad48226 (Linus Torvalds                 2006-03-08 10:33:05 -0800 1716) 
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1717) 		/*
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1718) 		 * A large number of objects is good, but very large slabs are
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1719) 		 * currently bad for the gfp()s.
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1720) 		 */
543585cc5b07f (David Rientjes                 2011-10-18 22:09:24 -0700 1721) 		if (gfporder >= slab_max_order)
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1722) 			break;
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1723) 
9888e6fa7b68d (Linus Torvalds                 2006-03-06 17:44:43 -0800 1724) 		/*
9888e6fa7b68d (Linus Torvalds                 2006-03-06 17:44:43 -0800 1725) 		 * Acceptable internal fragmentation?
9888e6fa7b68d (Linus Torvalds                 2006-03-06 17:44:43 -0800 1726) 		 */
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 1727) 		if (left_over * 8 <= (PAGE_SIZE << gfporder))
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1728) 			break;
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1729) 	}
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1730) 	return left_over;
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1731) }
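/*
 * Worked example (an illustrative addition, assuming 4K pages and ignoring
 * freelist overhead): for 1000-byte objects at gfporder 0, cache_estimate()
 * finds num = 4 with remainder = 96 bytes. The fragmentation check
 * 96 * 8 = 768 <= 4096 holds, so order 0 is accepted instead of probing
 * larger, allocator-unfriendly orders.
 */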
4d268eba1187e (Pekka Enberg                   2006-01-08 01:00:36 -0800 1732) 
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1733) static struct array_cache __percpu *alloc_kmem_cache_cpus(
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1734) 		struct kmem_cache *cachep, int entries, int batchcount)
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1735) {
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1736) 	int cpu;
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1737) 	size_t size;
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1738) 	struct array_cache __percpu *cpu_cache;
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1739) 
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1740) 	size = sizeof(void *) * entries + sizeof(struct array_cache);
85c9f4b04a08f (Joonsoo Kim                    2014-10-13 15:51:01 -0700 1741) 	cpu_cache = __alloc_percpu(size, sizeof(void *));
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1742) 
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1743) 	if (!cpu_cache)
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1744) 		return NULL;
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1745) 
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1746) 	for_each_possible_cpu(cpu) {
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1747) 		init_arraycache(per_cpu_ptr(cpu_cache, cpu),
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1748) 				entries, batchcount);
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1749) 	}
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1750) 
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1751) 	return cpu_cache;
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1752) }
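/*
 * Layout note (an illustrative addition): struct array_cache ends in a
 * flexible "void *entry[]" member, so the single per-cpu allocation of
 * sizeof(struct array_cache) + entries * sizeof(void *) bytes covers both
 * the header and the stack of cached object pointers. For example, the
 * bootstrap call below with entries = 1 reserves room for exactly one
 * cached object pointer per CPU.
 */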
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1753) 
bd721ea73e1f9 (Fabian Frederick               2016-08-02 14:03:33 -0700 1754) static int __ref setup_cpu_cache(struct kmem_cache *cachep, gfp_t gfp)
f30cf7d13eee4 (Pekka Enberg                   2006-03-22 00:08:11 -0800 1755) {
97d06609158e6 (Christoph Lameter              2012-07-06 15:25:11 -0500 1756) 	if (slab_state >= FULL)
83b519e8b9572 (Pekka Enberg                   2009-06-10 19:40:04 +0300 1757) 		return enable_cpucache(cachep, gfp);
2ed3a4ef95ef1 (Christoph Lameter              2006-09-25 23:31:38 -0700 1758) 
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1759) 	cachep->cpu_cache = alloc_kmem_cache_cpus(cachep, 1, 1);
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1760) 	if (!cachep->cpu_cache)
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1761) 		return 1;
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1762) 
97d06609158e6 (Christoph Lameter              2012-07-06 15:25:11 -0500 1763) 	if (slab_state == DOWN) {
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1764) 		/* Creation of first cache (kmem_cache). */
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1765) 		set_up_node(kmem_cache, CACHE_CACHE);
2f9baa9fcf8d0 (Christoph Lameter              2012-11-28 16:23:09 +0000 1766) 	} else if (slab_state == PARTIAL) {
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1767) 		/* For kmem_cache_node */
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1768) 		set_up_node(cachep, SIZE_NODE);
f30cf7d13eee4 (Pekka Enberg                   2006-03-22 00:08:11 -0800 1769) 	} else {
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1770) 		int node;
f30cf7d13eee4 (Pekka Enberg                   2006-03-22 00:08:11 -0800 1771) 
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1772) 		for_each_online_node(node) {
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1773) 			cachep->node[node] = kmalloc_node(
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1774) 				sizeof(struct kmem_cache_node), gfp, node);
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1775) 			BUG_ON(!cachep->node[node]);
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1776) 			kmem_cache_node_init(cachep->node[node]);
f30cf7d13eee4 (Pekka Enberg                   2006-03-22 00:08:11 -0800 1777) 		}
f30cf7d13eee4 (Pekka Enberg                   2006-03-22 00:08:11 -0800 1778) 	}
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 1779) 
6a67368c36e2c (Christoph Lameter              2013-01-10 19:14:19 +0000 1780) 	cachep->node[numa_mem_id()]->next_reap =
5f0985bb1123b (Jianyu Zhan                    2014-03-30 17:02:20 +0800 1781) 			jiffies + REAPTIMEOUT_NODE +
5f0985bb1123b (Jianyu Zhan                    2014-03-30 17:02:20 +0800 1782) 			((unsigned long)cachep) % REAPTIMEOUT_NODE;
f30cf7d13eee4 (Pekka Enberg                   2006-03-22 00:08:11 -0800 1783) 
f30cf7d13eee4 (Pekka Enberg                   2006-03-22 00:08:11 -0800 1784) 	cpu_cache_get(cachep)->avail = 0;
f30cf7d13eee4 (Pekka Enberg                   2006-03-22 00:08:11 -0800 1785) 	cpu_cache_get(cachep)->limit = BOOT_CPUCACHE_ENTRIES;
f30cf7d13eee4 (Pekka Enberg                   2006-03-22 00:08:11 -0800 1786) 	cpu_cache_get(cachep)->batchcount = 1;
f30cf7d13eee4 (Pekka Enberg                   2006-03-22 00:08:11 -0800 1787) 	cpu_cache_get(cachep)->touched = 0;
f30cf7d13eee4 (Pekka Enberg                   2006-03-22 00:08:11 -0800 1788) 	cachep->batchcount = 1;
f30cf7d13eee4 (Pekka Enberg                   2006-03-22 00:08:11 -0800 1789) 	cachep->limit = BOOT_CPUCACHE_ENTRIES;
2ed3a4ef95ef1 (Christoph Lameter              2006-09-25 23:31:38 -0700 1790) 	return 0;
f30cf7d13eee4 (Pekka Enberg                   2006-03-22 00:08:11 -0800 1791) }
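/*
 * Bootstrap note (an illustrative addition): slab_state tracks how far slab
 * initialization has progressed, roughly DOWN (no caches yet) -> PARTIAL
 * (the kmem_cache_node cache is being set up) -> ... -> FULL (everything,
 * including the kmalloc caches, is usable). The early branches above fall
 * back to statically set-up node structures precisely because
 * kmalloc_node() cannot be used before the corresponding caches exist.
 */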
f30cf7d13eee4 (Pekka Enberg                   2006-03-22 00:08:11 -0800 1792) 
0293d1fdd677a (Alexey Dobriyan                2018-04-05 16:21:24 -0700 1793) slab_flags_t kmem_cache_flags(unsigned int object_size,
3754000872188 (Nikolay Borisov                2021-02-24 12:00:58 -0800 1794) 	slab_flags_t flags, const char *name)
12220dea07f1a (Joonsoo Kim                    2014-10-09 15:26:24 -0700 1795) {
12220dea07f1a (Joonsoo Kim                    2014-10-09 15:26:24 -0700 1796) 	return flags;
12220dea07f1a (Joonsoo Kim                    2014-10-09 15:26:24 -0700 1797) }
12220dea07f1a (Joonsoo Kim                    2014-10-09 15:26:24 -0700 1798) 
12220dea07f1a (Joonsoo Kim                    2014-10-09 15:26:24 -0700 1799) struct kmem_cache *
f4957d5bd0916 (Alexey Dobriyan                2018-04-05 16:20:37 -0700 1800) __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
d50112edde1d0 (Alexey Dobriyan                2017-11-15 17:32:18 -0800 1801) 		   slab_flags_t flags, void (*ctor)(void *))
12220dea07f1a (Joonsoo Kim                    2014-10-09 15:26:24 -0700 1802) {
12220dea07f1a (Joonsoo Kim                    2014-10-09 15:26:24 -0700 1803) 	struct kmem_cache *cachep;
12220dea07f1a (Joonsoo Kim                    2014-10-09 15:26:24 -0700 1804) 
12220dea07f1a (Joonsoo Kim                    2014-10-09 15:26:24 -0700 1805) 	cachep = find_mergeable(size, align, flags, name, ctor);
12220dea07f1a (Joonsoo Kim                    2014-10-09 15:26:24 -0700 1806) 	if (cachep) {
12220dea07f1a (Joonsoo Kim                    2014-10-09 15:26:24 -0700 1807) 		cachep->refcount++;
12220dea07f1a (Joonsoo Kim                    2014-10-09 15:26:24 -0700 1808) 
12220dea07f1a (Joonsoo Kim                    2014-10-09 15:26:24 -0700 1809) 		/*
12220dea07f1a (Joonsoo Kim                    2014-10-09 15:26:24 -0700 1810) 		 * Adjust the object sizes so that we clear
12220dea07f1a (Joonsoo Kim                    2014-10-09 15:26:24 -0700 1811) 		 * the complete object on kzalloc.
12220dea07f1a (Joonsoo Kim                    2014-10-09 15:26:24 -0700 1812) 		 */
12220dea07f1a (Joonsoo Kim                    2014-10-09 15:26:24 -0700 1813) 		cachep->object_size = max_t(int, cachep->object_size, size);
12220dea07f1a (Joonsoo Kim                    2014-10-09 15:26:24 -0700 1814) 	}
12220dea07f1a (Joonsoo Kim                    2014-10-09 15:26:24 -0700 1815) 	return cachep;
12220dea07f1a (Joonsoo Kim                    2014-10-09 15:26:24 -0700 1816) }
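/*
 * Illustrative sketch (an addition): when two caches have compatible object
 * size, alignment and flags and neither has a constructor, the second
 * creation request may be satisfied by aliasing the first cache, with only
 * its refcount bumped. "struct a" and "struct b" are hypothetical types of
 * the same size.
 *
 *	ca = kmem_cache_create("a_cache", sizeof(struct a), 0, 0, NULL);
 *	cb = kmem_cache_create("b_cache", sizeof(struct b), 0, 0, NULL);
 *	(cb may be the same pointer as ca, via find_mergeable() above)
 */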
12220dea07f1a (Joonsoo Kim                    2014-10-09 15:26:24 -0700 1817) 
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1818) static bool set_objfreelist_slab_cache(struct kmem_cache *cachep,
d50112edde1d0 (Alexey Dobriyan                2017-11-15 17:32:18 -0800 1819) 			size_t size, slab_flags_t flags)
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1820) {
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1821) 	size_t left;
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1822) 
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1823) 	cachep->num = 0;
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1824) 
6471384af2a65 (Alexander Potapenko            2019-07-11 20:59:19 -0700 1825) 	/*
6471384af2a65 (Alexander Potapenko            2019-07-11 20:59:19 -0700 1826) 	 * If slab auto-initialization on free is enabled, store the freelist
6471384af2a65 (Alexander Potapenko            2019-07-11 20:59:19 -0700 1827) 	 * off-slab, so that its contents don't end up in one of the allocated
6471384af2a65 (Alexander Potapenko            2019-07-11 20:59:19 -0700 1828) 	 * objects.
6471384af2a65 (Alexander Potapenko            2019-07-11 20:59:19 -0700 1829) 	 */
6471384af2a65 (Alexander Potapenko            2019-07-11 20:59:19 -0700 1830) 	if (unlikely(slab_want_init_on_free(cachep)))
6471384af2a65 (Alexander Potapenko            2019-07-11 20:59:19 -0700 1831) 		return false;
6471384af2a65 (Alexander Potapenko            2019-07-11 20:59:19 -0700 1832) 
5f0d5a3ae7cff (Paul E. McKenney               2017-01-18 02:53:44 -0800 1833) 	if (cachep->ctor || flags & SLAB_TYPESAFE_BY_RCU)
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1834) 		return false;
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1835) 
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1836) 	left = calculate_slab_order(cachep, size,
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1837) 			flags | CFLGS_OBJFREELIST_SLAB);
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1838) 	if (!cachep->num)
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1839) 		return false;
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1840) 
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1841) 	if (cachep->num * sizeof(freelist_idx_t) > cachep->object_size)
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1842) 		return false;
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1843) 
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1844) 	cachep->colour = left / cachep->colour_off;
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1845) 
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1846) 	return true;
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1847) }
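/*
 * Worked example (an illustrative addition): with an object-freelist slab
 * the index array is kept inside a currently free object, so it costs no
 * extra space. For 256-byte objects on a 4K page, num = 16 and the
 * freelist needs 16 * sizeof(freelist_idx_t) bytes (16 with byte-sized
 * indices), which fits inside a single object, so the size check above
 * does not reject the cache.
 */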
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 1848) 
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1849) static bool set_off_slab_cache(struct kmem_cache *cachep,
d50112edde1d0 (Alexey Dobriyan                2017-11-15 17:32:18 -0800 1850) 			size_t size, slab_flags_t flags)
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1851) {
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1852) 	size_t left;
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1853) 
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1854) 	cachep->num = 0;
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1855) 
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1856) 	/*
3217fd9bdf001 (Joonsoo Kim                    2016-03-15 14:54:41 -0700 1857) 	 * Always use on-slab management when SLAB_NOLEAKTRACE is set,
3217fd9bdf001 (Joonsoo Kim                    2016-03-15 14:54:41 -0700 1858) 	 * to avoid recursive calls into kmemleak.
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1859) 	 */
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1860) 	if (flags & SLAB_NOLEAKTRACE)
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1861) 		return false;
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1862) 
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1863) 	/*
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1864) 	 * The size is large, so assume it is best to place the slab management
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1865) 	 * object off-slab (this should allow better packing of objects).
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1866) 	 */
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1867) 	left = calculate_slab_order(cachep, size, flags | CFLGS_OFF_SLAB);
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1868) 	if (!cachep->num)
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1869) 		return false;
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1870) 
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1871) 	/*
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1872) 	 * If the slab has been placed off-slab, and we have enough space, then
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1873) 	 * move it on-slab. This is at the expense of any extra colouring.
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1874) 	 */
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1875) 	if (left >= cachep->num * sizeof(freelist_idx_t))
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1876) 		return false;
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1877) 
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1878) 	cachep->colour = left / cachep->colour_off;
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1879) 
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1880) 	return true;
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1881) }
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1882) 
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1883) static bool set_on_slab_cache(struct kmem_cache *cachep,
d50112edde1d0 (Alexey Dobriyan                2017-11-15 17:32:18 -0800 1884) 			size_t size, slab_flags_t flags)
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1885) {
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1886) 	size_t left;
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1887) 
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1888) 	cachep->num = 0;
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1889) 
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1890) 	left = calculate_slab_order(cachep, size, flags);
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1891) 	if (!cachep->num)
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1892) 		return false;
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1893) 
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1894) 	cachep->colour = left / cachep->colour_off;
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1895) 
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1896) 	return true;
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1897) }
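/*
 * Worked example (an illustrative addition): if calculate_slab_order()
 * reports left = 192 leftover bytes per slab and colour_off is a 64-byte
 * cache line, then cachep->colour = 192 / 64 = 3. Successive slabs then
 * start their objects at offsets 0, 64 and 128 before wrapping, spreading
 * hot object headers across different cache lines.
 */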
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1898) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1899) /**
039363f38bfe5 (Christoph Lameter              2012-07-06 15:25:10 -0500 1900)  * __kmem_cache_create - Create a cache.
a755b76ab4cef (Randy Dunlap                   2012-11-06 17:10:10 -0800 1901)  * @cachep: cache management descriptor
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1902)  * @flags: SLAB flags
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1903)  *
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1904)  * Returns zero on success, nonzero on failure.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1905)  * Cannot be called within an interrupt, but can be interrupted.
20c2df83d25c6 (Paul Mundt                     2007-07-20 10:11:58 +0900 1906)  * The @ctor is run when new pages are allocated by the cache.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1907)  *
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1908)  * The flags are
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1909)  *
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1910)  * %SLAB_POISON - Poison the slab with a known test pattern (a5a5a5a5)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1911)  * to catch references to uninitialised memory.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1912)  *
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1913)  * %SLAB_RED_ZONE - Insert `Red' zones around the allocated memory to check
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1914)  * for buffer overruns.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1915)  *
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1916)  * %SLAB_HWCACHE_ALIGN - Align the objects in this cache to a hardware
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1917)  * cacheline.  This can be beneficial if you're counting cycles as closely
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1918)  * as davem.
a862f68a8b360 (Mike Rapoport                  2019-03-05 15:48:42 -0800 1919)  *
a862f68a8b360 (Mike Rapoport                  2019-03-05 15:48:42 -0800 1920)  * Return: 0 on success, nonzero in case of error
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1921)  */
d50112edde1d0 (Alexey Dobriyan                2017-11-15 17:32:18 -0800 1922) int __kmem_cache_create(struct kmem_cache *cachep, slab_flags_t flags)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1923) {
d4a5fca592b9a (David Rientjes                 2014-09-25 16:05:20 -0700 1924) 	size_t ralign = BYTES_PER_WORD;
83b519e8b9572 (Pekka Enberg                   2009-06-10 19:40:04 +0300 1925) 	gfp_t gfp;
278b1bb131366 (Christoph Lameter              2012-09-05 00:20:34 +0000 1926) 	int err;
be4a7988b35db (Alexey Dobriyan                2018-04-05 16:21:28 -0700 1927) 	unsigned int size = cachep->size;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1928) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1929) #if DEBUG
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1930) #if FORCED_DEBUG
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1931) 	/*
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1932) 	 * Enable redzoning and last user accounting, except for caches with
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1933) 	 * large objects, if the increased size would increase the object size
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1934) 	 * above the next power of two: caches with object sizes just above a
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1935) 	 * power of two have a significant amount of internal fragmentation.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1936) 	 */
87a927c715789 (David Woodhouse                2007-07-04 21:26:44 -0400 1937) 	if (size < 4096 || fls(size - 1) == fls(size-1 + REDZONE_ALIGN +
87a927c715789 (David Woodhouse                2007-07-04 21:26:44 -0400 1938) 						2 * sizeof(unsigned long long)))
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 1939) 		flags |= SLAB_RED_ZONE | SLAB_STORE_USER;
5f0d5a3ae7cff (Paul E. McKenney               2017-01-18 02:53:44 -0800 1940) 	if (!(flags & SLAB_TYPESAFE_BY_RCU))
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1941) 		flags |= SLAB_POISON;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1942) #endif
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1943) #endif
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1944) 
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 1945) 	/*
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 1946) 	 * Check that size is in terms of words.  This is needed to avoid
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1947) 	 * unaligned accesses for some archs when redzoning is used, and makes
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1948)  * sure any on-slab bufctls are also correctly aligned.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1949) 	 */
e07719502916a (Canjiang Lu                    2017-07-06 15:36:37 -0700 1950) 	size = ALIGN(size, BYTES_PER_WORD);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1951) 
87a927c715789 (David Woodhouse                2007-07-04 21:26:44 -0400 1952) 	if (flags & SLAB_RED_ZONE) {
87a927c715789 (David Woodhouse                2007-07-04 21:26:44 -0400 1953) 		ralign = REDZONE_ALIGN;
87a927c715789 (David Woodhouse                2007-07-04 21:26:44 -0400 1954) 		/* If redzoning, ensure that the second redzone is suitably
87a927c715789 (David Woodhouse                2007-07-04 21:26:44 -0400 1955) 		 * aligned, by adjusting the object size accordingly. */
e07719502916a (Canjiang Lu                    2017-07-06 15:36:37 -0700 1956) 		size = ALIGN(size, REDZONE_ALIGN);
87a927c715789 (David Woodhouse                2007-07-04 21:26:44 -0400 1957) 	}
ca5f9703dffa0 (Pekka Enberg                   2006-09-25 23:31:25 -0700 1958) 
a44b56d354b49 (Kevin Hilman                   2006-12-06 20:32:11 -0800 1959) 	/* 3) caller mandated alignment */
8a13a4cc80bb2 (Christoph Lameter              2012-09-04 23:18:33 +0000 1960) 	if (ralign < cachep->align) {
8a13a4cc80bb2 (Christoph Lameter              2012-09-04 23:18:33 +0000 1961) 		ralign = cachep->align;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1962) 	}
3ff84a7f36554 (Pekka Enberg                   2011-02-14 17:46:21 +0200 1963) 	/* disable debug if necessary */
3ff84a7f36554 (Pekka Enberg                   2011-02-14 17:46:21 +0200 1964) 	if (ralign > __alignof__(unsigned long long))
a44b56d354b49 (Kevin Hilman                   2006-12-06 20:32:11 -0800 1965) 		flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 1966) 	/*
ca5f9703dffa0 (Pekka Enberg                   2006-09-25 23:31:25 -0700 1967) 	 * 4) Store it.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1968) 	 */
8a13a4cc80bb2 (Christoph Lameter              2012-09-04 23:18:33 +0000 1969) 	cachep->align = ralign;
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1970) 	cachep->colour_off = cache_line_size();
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1971) 	/* Offset must be a multiple of the alignment. */
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1972) 	if (cachep->colour_off < cachep->align)
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 1973) 		cachep->colour_off = cachep->align;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1974) 
83b519e8b9572 (Pekka Enberg                   2009-06-10 19:40:04 +0300 1975) 	if (slab_is_available())
83b519e8b9572 (Pekka Enberg                   2009-06-10 19:40:04 +0300 1976) 		gfp = GFP_KERNEL;
83b519e8b9572 (Pekka Enberg                   2009-06-10 19:40:04 +0300 1977) 	else
83b519e8b9572 (Pekka Enberg                   2009-06-10 19:40:04 +0300 1978) 		gfp = GFP_NOWAIT;
83b519e8b9572 (Pekka Enberg                   2009-06-10 19:40:04 +0300 1979) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1980) #if DEBUG
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1981) 
ca5f9703dffa0 (Pekka Enberg                   2006-09-25 23:31:25 -0700 1982) 	/*
ca5f9703dffa0 (Pekka Enberg                   2006-09-25 23:31:25 -0700 1983) 	 * Both debugging options require word-alignment which is calculated
ca5f9703dffa0 (Pekka Enberg                   2006-09-25 23:31:25 -0700 1984) 	 * into align above.
ca5f9703dffa0 (Pekka Enberg                   2006-09-25 23:31:25 -0700 1985) 	 */
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1986) 	if (flags & SLAB_RED_ZONE) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1987) 		/* add space for red zone words */
3ff84a7f36554 (Pekka Enberg                   2011-02-14 17:46:21 +0200 1988) 		cachep->obj_offset += sizeof(unsigned long long);
3ff84a7f36554 (Pekka Enberg                   2011-02-14 17:46:21 +0200 1989) 		size += 2 * sizeof(unsigned long long);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1990) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1991) 	if (flags & SLAB_STORE_USER) {
ca5f9703dffa0 (Pekka Enberg                   2006-09-25 23:31:25 -0700 1992) 		/* User store requires one word of storage behind the end of
87a927c715789 (David Woodhouse                2007-07-04 21:26:44 -0400 1993) 		 * the real object. But if the second red zone needs to be
87a927c715789 (David Woodhouse                2007-07-04 21:26:44 -0400 1994) 		 * aligned to 64 bits, we must allow that much space.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 1995) 		 */
87a927c715789 (David Woodhouse                2007-07-04 21:26:44 -0400 1996) 		if (flags & SLAB_RED_ZONE)
87a927c715789 (David Woodhouse                2007-07-04 21:26:44 -0400 1997) 			size += REDZONE_ALIGN;
87a927c715789 (David Woodhouse                2007-07-04 21:26:44 -0400 1998) 		else
87a927c715789 (David Woodhouse                2007-07-04 21:26:44 -0400 1999) 			size += BYTES_PER_WORD;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2000) 	}
832a15d209cd2 (Joonsoo Kim                    2016-03-15 14:54:33 -0700 2001) #endif
832a15d209cd2 (Joonsoo Kim                    2016-03-15 14:54:33 -0700 2002) 
7ed2f9e663854 (Alexander Potapenko            2016-03-25 14:21:59 -0700 2003) 	kasan_cache_create(cachep, &size, &flags);
7ed2f9e663854 (Alexander Potapenko            2016-03-25 14:21:59 -0700 2004) 
832a15d209cd2 (Joonsoo Kim                    2016-03-15 14:54:33 -0700 2005) 	size = ALIGN(size, cachep->align);
832a15d209cd2 (Joonsoo Kim                    2016-03-15 14:54:33 -0700 2006) 	/*
832a15d209cd2 (Joonsoo Kim                    2016-03-15 14:54:33 -0700 2007) 	 * We must restrict the number of objects in a slab to implement a
832a15d209cd2 (Joonsoo Kim                    2016-03-15 14:54:33 -0700 2008) 	 * byte-sized index. Refer to the comment on the SLAB_OBJ_MIN_SIZE definition.
832a15d209cd2 (Joonsoo Kim                    2016-03-15 14:54:33 -0700 2009) 	 */
832a15d209cd2 (Joonsoo Kim                    2016-03-15 14:54:33 -0700 2010) 	if (FREELIST_BYTE_INDEX && size < SLAB_OBJ_MIN_SIZE)
832a15d209cd2 (Joonsoo Kim                    2016-03-15 14:54:33 -0700 2011) 		size = ALIGN(SLAB_OBJ_MIN_SIZE, cachep->align);
832a15d209cd2 (Joonsoo Kim                    2016-03-15 14:54:33 -0700 2012) 
832a15d209cd2 (Joonsoo Kim                    2016-03-15 14:54:33 -0700 2013) #if DEBUG
03a2d2a3eafe4 (Joonsoo Kim                    2015-10-01 15:36:54 -0700 2014) 	/*
03a2d2a3eafe4 (Joonsoo Kim                    2015-10-01 15:36:54 -0700 2015) 	 * To activate debug pagealloc, off-slab management is a necessary
03a2d2a3eafe4 (Joonsoo Kim                    2015-10-01 15:36:54 -0700 2016) 	 * requirement. In the early phase of initialization, small-sized
03a2d2a3eafe4 (Joonsoo Kim                    2015-10-01 15:36:54 -0700 2017) 	 * kmalloc caches have not been set up yet, so off-slab management
03a2d2a3eafe4 (Joonsoo Kim                    2015-10-01 15:36:54 -0700 2018) 	 * would not be possible. Checking size >= 256 guarantees that all
03a2d2a3eafe4 (Joonsoo Kim                    2015-10-01 15:36:54 -0700 2019) 	 * the necessary small-sized caches are initialized by this point.
03a2d2a3eafe4 (Joonsoo Kim                    2015-10-01 15:36:54 -0700 2020) 	 */
8e57f8acbbd12 (Vlastimil Babka                2020-01-13 16:29:20 -0800 2021) 	if (debug_pagealloc_enabled_static() && (flags & SLAB_POISON) &&
f3a3c320d54ee (Joonsoo Kim                    2016-03-15 14:54:38 -0700 2022) 		size >= 256 && cachep->object_size > cache_line_size()) {
f3a3c320d54ee (Joonsoo Kim                    2016-03-15 14:54:38 -0700 2023) 		if (size < PAGE_SIZE || size % PAGE_SIZE == 0) {
f3a3c320d54ee (Joonsoo Kim                    2016-03-15 14:54:38 -0700 2024) 			size_t tmp_size = ALIGN(size, PAGE_SIZE);
f3a3c320d54ee (Joonsoo Kim                    2016-03-15 14:54:38 -0700 2025) 
f3a3c320d54ee (Joonsoo Kim                    2016-03-15 14:54:38 -0700 2026) 			if (set_off_slab_cache(cachep, tmp_size, flags)) {
f3a3c320d54ee (Joonsoo Kim                    2016-03-15 14:54:38 -0700 2027) 				flags |= CFLGS_OFF_SLAB;
f3a3c320d54ee (Joonsoo Kim                    2016-03-15 14:54:38 -0700 2028) 				cachep->obj_offset += tmp_size - size;
f3a3c320d54ee (Joonsoo Kim                    2016-03-15 14:54:38 -0700 2029) 				size = tmp_size;
f3a3c320d54ee (Joonsoo Kim                    2016-03-15 14:54:38 -0700 2030) 				goto done;
f3a3c320d54ee (Joonsoo Kim                    2016-03-15 14:54:38 -0700 2031) 			}
f3a3c320d54ee (Joonsoo Kim                    2016-03-15 14:54:38 -0700 2032) 		}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2033) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2034) #endif
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2035) 
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2036) 	if (set_objfreelist_slab_cache(cachep, size, flags)) {
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2037) 		flags |= CFLGS_OBJFREELIST_SLAB;
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2038) 		goto done;
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2039) 	}
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2040) 
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 2041) 	if (set_off_slab_cache(cachep, size, flags)) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2042) 		flags |= CFLGS_OFF_SLAB;
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 2043) 		goto done;
832a15d209cd2 (Joonsoo Kim                    2016-03-15 14:54:33 -0700 2044) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2045) 
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 2046) 	if (set_on_slab_cache(cachep, size, flags))
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 2047) 		goto done;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2048) 
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 2049) 	return -E2BIG;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2050) 
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 2051) done:
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 2052) 	cachep->freelist_size = cachep->num * sizeof(freelist_idx_t);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2053) 	cachep->flags = flags;
a57a49887eb33 (Joonsoo Kim                    2013-10-24 10:07:44 +0900 2054) 	cachep->allocflags = __GFP_COMP;
a3187e438bc65 (Yang Shi                       2016-05-19 17:10:41 -0700 2055) 	if (flags & SLAB_CACHE_DMA)
a618e89f1e6fb (Glauber Costa                  2012-06-14 16:17:21 +0400 2056) 		cachep->allocflags |= GFP_DMA;
6d6ea1e967a24 (Nicolas Boichat                2019-03-28 20:43:42 -0700 2057) 	if (flags & SLAB_CACHE_DMA32)
6d6ea1e967a24 (Nicolas Boichat                2019-03-28 20:43:42 -0700 2058) 		cachep->allocflags |= GFP_DMA32;
a3ba074447824 (David Rientjes                 2017-11-15 17:32:14 -0800 2059) 	if (flags & SLAB_RECLAIM_ACCOUNT)
a3ba074447824 (David Rientjes                 2017-11-15 17:32:14 -0800 2060) 		cachep->allocflags |= __GFP_RECLAIMABLE;
3b0efdfa1e719 (Christoph Lameter              2012-06-13 10:24:57 -0500 2061) 	cachep->size = size;
6a2d7a955d8de (Eric Dumazet                   2006-12-13 00:34:27 -0800 2062) 	cachep->reciprocal_buffer_size = reciprocal_value(size);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2063) 
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 2064) #if DEBUG
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 2065) 	/*
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 2066) 	 * If we're going to use the generic kernel_map_pages()
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 2067) 	 * poisoning, then it's going to smash the contents of
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 2068) 	 * the redzone and userword anyhow, so switch them off.
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 2069) 	 */
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 2070) 	if (IS_ENABLED(CONFIG_PAGE_POISONING) &&
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 2071) 		(cachep->flags & SLAB_POISON) &&
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 2072) 		is_debug_pagealloc_cache(cachep))
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 2073) 		cachep->flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 2074) #endif
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 2075) 
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 2076) 	if (OFF_SLAB(cachep)) {
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 2077) 		cachep->freelist_cache =
158e319bba59e (Joonsoo Kim                    2016-03-15 14:54:35 -0700 2078) 			kmalloc_slab(cachep->freelist_size, 0u);
e5ac9c5aec7c4 (Ravikiran G Thirumalai         2006-09-25 23:31:34 -0700 2079) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2080) 
278b1bb131366 (Christoph Lameter              2012-09-05 00:20:34 +0000 2081) 	err = setup_cpu_cache(cachep, gfp);
278b1bb131366 (Christoph Lameter              2012-09-05 00:20:34 +0000 2082) 	if (err) {
52b4b950b5074 (Dmitry Safonov                 2016-02-17 13:11:37 -0800 2083) 		__kmem_cache_release(cachep);
278b1bb131366 (Christoph Lameter              2012-09-05 00:20:34 +0000 2084) 		return err;
2ed3a4ef95ef1 (Christoph Lameter              2006-09-25 23:31:38 -0700 2085) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2086) 
278b1bb131366 (Christoph Lameter              2012-09-05 00:20:34 +0000 2087) 	return 0;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2088) }
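/*
 * Usage sketch (an illustrative addition, seen from the caller's side):
 * the flags documented above normally arrive here via kmem_cache_create().
 * "struct foo" is a hypothetical type.
 *
 *	struct kmem_cache *c;
 *	struct foo *obj;
 *
 *	c = kmem_cache_create("foo_debug", sizeof(struct foo), 0,
 *			      SLAB_RED_ZONE | SLAB_POISON, NULL);
 *	if (!c)
 *		return -ENOMEM;
 *	obj = kmem_cache_alloc(c, GFP_KERNEL);
 *	...
 *	kmem_cache_free(c, obj);
 *	kmem_cache_destroy(c);
 */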
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2089) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2090) #if DEBUG
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2091) static void check_irq_off(void)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2092) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2093) 	BUG_ON(!irqs_disabled());
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2094) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2095) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2096) static void check_irq_on(void)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2097) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2098) 	BUG_ON(irqs_disabled());
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2099) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2100) 
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2101) static void check_mutex_acquired(void)
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2102) {
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2103) 	BUG_ON(!mutex_is_locked(&slab_mutex));
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2104) }
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2105) 
343e0d7a93951 (Pekka Enberg                   2006-02-01 03:05:50 -0800 2106) static void check_spinlock_acquired(struct kmem_cache *cachep)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2107) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2108) #ifdef CONFIG_SMP
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2109) 	check_irq_off();
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 2110) 	assert_spin_locked(&get_node(cachep, numa_mem_id())->list_lock);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2111) #endif
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2112) }
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 2113) 
343e0d7a93951 (Pekka Enberg                   2006-02-01 03:05:50 -0800 2114) static void check_spinlock_acquired_node(struct kmem_cache *cachep, int node)
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 2115) {
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 2116) #ifdef CONFIG_SMP
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 2117) 	check_irq_off();
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 2118) 	assert_spin_locked(&get_node(cachep, node)->list_lock);
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 2119) #endif
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 2120) }
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 2121) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2122) #else
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2123) #define check_irq_off()	do { } while(0)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2124) #define check_irq_on()	do { } while(0)
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2125) #define check_mutex_acquired()	do { } while(0)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2126) #define check_spinlock_acquired(x) do { } while(0)
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 2127) #define check_spinlock_acquired_node(x, y) do { } while(0)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2128) #endif
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2129) 
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2130) static void drain_array_locked(struct kmem_cache *cachep, struct array_cache *ac,
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2131) 				int node, bool free_all, struct list_head *list)
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2132) {
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2133) 	int tofree;
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2134) 
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2135) 	if (!ac || !ac->avail)
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2136) 		return;
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2137) 
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2138) 	tofree = free_all ? ac->avail : (ac->limit + 4) / 5;
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2139) 	if (tofree > ac->avail)
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2140) 		tofree = (ac->avail + 1) / 2;
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2141) 
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2142) 	free_block(cachep, ac->entry, tofree, node, list);
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2143) 	ac->avail -= tofree;
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2144) 	memmove(ac->entry, &(ac->entry[tofree]), sizeof(void *) * ac->avail);
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2145) }
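/*
 * Worked example (an illustrative addition): for a partial drain
 * (free_all == false) of an array cache with limit = 120 and avail = 30,
 * tofree = (120 + 4) / 5 = 24, which does not exceed avail, so 24 objects
 * are freed. Had avail been only 10, tofree would be clamped to
 * (10 + 1) / 2 = 5, so a drain never empties much more than half of a
 * nearly-empty array.
 */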
aab2207cf8d9c (Christoph Lameter              2006-03-22 00:09:06 -0800 2146) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2147) static void do_drain(void *arg)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2148) {
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 2149) 	struct kmem_cache *cachep = arg;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2150) 	struct array_cache *ac;
7d6e6d09de82c (Lee Schermerhorn               2010-05-26 14:45:03 -0700 2151) 	int node = numa_mem_id();
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 2152) 	struct kmem_cache_node *n;
97654dfa20caa (Joonsoo Kim                    2014-08-06 16:04:25 -0700 2153) 	LIST_HEAD(list);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2154) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2155) 	check_irq_off();
9a2dba4b4912b (Pekka Enberg                   2006-02-01 03:05:49 -0800 2156) 	ac = cpu_cache_get(cachep);
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 2157) 	n = get_node(cachep, node);
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 2158) 	spin_lock(&n->list_lock);
97654dfa20caa (Joonsoo Kim                    2014-08-06 16:04:25 -0700 2159) 	free_block(cachep, ac->entry, ac->avail, node, &list);
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 2160) 	spin_unlock(&n->list_lock);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2161) 	ac->avail = 0;
678ff6a7afccc (Shakeel Butt                   2020-09-26 07:13:41 -0700 2162) 	slabs_destroy(cachep, &list);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2163) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2164) 
343e0d7a93951 (Pekka Enberg                   2006-02-01 03:05:50 -0800 2165) static void drain_cpu_caches(struct kmem_cache *cachep)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2166) {
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 2167) 	struct kmem_cache_node *n;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 2168) 	int node;
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2169) 	LIST_HEAD(list);
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 2170) 
15c8b6c1aaaf1 (Jens Axboe                     2008-05-09 09:39:44 +0200 2171) 	on_each_cpu(do_drain, cachep, 1);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2172) 	check_irq_on();
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 2173) 	for_each_kmem_cache_node(cachep, node, n)
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 2174) 		if (n->alien)
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 2175) 			drain_alien_cache(cachep, n->alien);
a4523a8b38089 (Roland Dreier                  2006-05-15 11:41:00 -0700 2176) 
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2177) 	for_each_kmem_cache_node(cachep, node, n) {
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2178) 		spin_lock_irq(&n->list_lock);
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2179) 		drain_array_locked(cachep, n->shared, node, true, &list);
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2180) 		spin_unlock_irq(&n->list_lock);
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2181) 
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2182) 		slabs_destroy(cachep, &list);
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 2183) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2184) }
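
/*
 * [Editor's sketch] A minimal userspace model of the drain ordering above,
 * assuming a fixed four-CPU, single-node layout; alien caches are skipped
 * and all toy_* names are hypothetical. It only illustrates the order of
 * operations: the per-CPU arrays are flushed first (do_drain() run on every
 * CPU via on_each_cpu()), then the per-node shared array
 * (drain_array_locked()).
 */
#include <stdio.h>

#define TOY_CPUS 4

struct toy_cache {
	int cpu_avail[TOY_CPUS];	/* models ac->avail on each CPU */
	int shared_avail;		/* models n->shared->avail */
	int node_free;			/* models n->free_objects */
};

static void toy_drain_cpu(struct toy_cache *c, int cpu)
{
	/* like free_block(): give the per-CPU objects back to the node */
	c->node_free += c->cpu_avail[cpu];
	c->cpu_avail[cpu] = 0;			/* ac->avail = 0 */
}

static void toy_drain_all(struct toy_cache *c)
{
	int cpu;

	for (cpu = 0; cpu < TOY_CPUS; cpu++)	/* on_each_cpu(do_drain, ...) */
		toy_drain_cpu(c, cpu);
	c->node_free += c->shared_avail;	/* drain_array_locked() */
	c->shared_avail = 0;
}

int main(void)
{
	struct toy_cache c = { { 3, 1, 0, 2 }, 5, 10 };

	toy_drain_all(&c);
	printf("node free objects after drain: %d\n", c.node_free); /* 21 */
	return 0;
}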
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2185) 
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 2186) /*
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 2187)  * Remove slabs from the list of free slabs.
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 2188)  * Specify the number of slabs to drain in tofree.
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 2189)  *
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 2190)  * Returns the actual number of slabs released.
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 2191)  */
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 2192) static int drain_freelist(struct kmem_cache *cache,
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 2193) 			struct kmem_cache_node *n, int tofree)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2194) {
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 2195) 	struct list_head *p;
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 2196) 	int nr_freed;
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 2197) 	struct page *page;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2198) 
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 2199) 	nr_freed = 0;
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 2200) 	while (nr_freed < tofree && !list_empty(&n->slabs_free)) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2201) 
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 2202) 		spin_lock_irq(&n->list_lock);
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 2203) 		p = n->slabs_free.prev;
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 2204) 		if (p == &n->slabs_free) {
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 2205) 			spin_unlock_irq(&n->list_lock);
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 2206) 			goto out;
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 2207) 		}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2208) 
16cb0ec75b346 (Tobin C. Harding               2019-05-13 17:16:15 -0700 2209) 		page = list_entry(p, struct page, slab_list);
16cb0ec75b346 (Tobin C. Harding               2019-05-13 17:16:15 -0700 2210) 		list_del(&page->slab_list);
f728b0a5d72ae (Greg Thelen                    2016-12-12 16:41:41 -0800 2211) 		n->free_slabs--;
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 2212) 		n->total_slabs--;
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 2213) 		/*
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 2214) 		 * Safe to drop the lock. The slab is no longer linked
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 2215) 		 * to the cache.
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 2216) 		 */
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 2217) 		n->free_objects -= cache->num;
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 2218) 		spin_unlock_irq(&n->list_lock);
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 2219) 		slab_destroy(cache, page);
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 2220) 		nr_freed++;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2221) 	}
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 2222) out:
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 2223) 	return nr_freed;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2224) }
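
/*
 * [Editor's sketch] drain_freelist() above always detaches slabs from the
 * tail of n->slabs_free and stops at the tofree quota or when the list
 * empties, reacquiring the lock for each slab. A standalone model of the
 * quota loop (locking omitted); toy_* names are hypothetical:
 */
#include <stdio.h>

static int toy_drain_freelist(int *free_slabs, int *nr_free, int tofree)
{
	int nr_freed = 0;

	while (nr_freed < tofree && *nr_free > 0) {
		/* "destroy" the last free slab, as the tail list_entry() does */
		free_slabs[--*nr_free] = 0;
		nr_freed++;
	}
	return nr_freed;	/* the actual number released */
}

int main(void)
{
	int slabs[8] = { 1, 1, 1, 1, 1, 0, 0, 0 };
	int nr_free = 5;
	int freed = toy_drain_freelist(slabs, &nr_free, 3);

	printf("freed %d, %d left\n", freed, nr_free);	/* freed 3, 2 left */
	return 0;
}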
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2225) 
f9e13c0a5a33d (Shakeel Butt                   2018-04-05 16:21:57 -0700 2226) bool __kmem_cache_empty(struct kmem_cache *s)
f9e13c0a5a33d (Shakeel Butt                   2018-04-05 16:21:57 -0700 2227) {
f9e13c0a5a33d (Shakeel Butt                   2018-04-05 16:21:57 -0700 2228) 	int node;
f9e13c0a5a33d (Shakeel Butt                   2018-04-05 16:21:57 -0700 2229) 	struct kmem_cache_node *n;
f9e13c0a5a33d (Shakeel Butt                   2018-04-05 16:21:57 -0700 2230) 
f9e13c0a5a33d (Shakeel Butt                   2018-04-05 16:21:57 -0700 2231) 	for_each_kmem_cache_node(s, node, n)
f9e13c0a5a33d (Shakeel Butt                   2018-04-05 16:21:57 -0700 2232) 		if (!list_empty(&n->slabs_full) ||
f9e13c0a5a33d (Shakeel Butt                   2018-04-05 16:21:57 -0700 2233) 		    !list_empty(&n->slabs_partial))
f9e13c0a5a33d (Shakeel Butt                   2018-04-05 16:21:57 -0700 2234) 			return false;
f9e13c0a5a33d (Shakeel Butt                   2018-04-05 16:21:57 -0700 2235) 	return true;
f9e13c0a5a33d (Shakeel Butt                   2018-04-05 16:21:57 -0700 2236) }
f9e13c0a5a33d (Shakeel Butt                   2018-04-05 16:21:57 -0700 2237) 
c9fc586403e7c (Tejun Heo                      2017-02-22 15:41:27 -0800 2238) int __kmem_cache_shrink(struct kmem_cache *cachep)
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 2239) {
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 2240) 	int ret = 0;
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 2241) 	int node;
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 2242) 	struct kmem_cache_node *n;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 2243) 
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 2244) 	drain_cpu_caches(cachep);
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 2245) 
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 2246) 	check_irq_on();
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 2247) 	for_each_kmem_cache_node(cachep, node, n) {
a5aa63a5f7352 (Joonsoo Kim                    2016-05-19 17:10:08 -0700 2248) 		drain_freelist(cachep, n, INT_MAX);
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 2249) 
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 2250) 		ret += !list_empty(&n->slabs_full) ||
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 2251) 			!list_empty(&n->slabs_partial);
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 2252) 	}
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 2253) 	return (ret ? 1 : 0);
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 2254) }
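
/*
 * [Editor's note] Return convention above: 0 when no node has full or
 * partial slabs left (the cache is completely empty), 1 otherwise.
 */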
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 2255) 
945cf2b6199be (Christoph Lameter              2012-09-04 23:18:33 +0000 2256) int __kmem_cache_shutdown(struct kmem_cache *cachep)
52b4b950b5074 (Dmitry Safonov                 2016-02-17 13:11:37 -0800 2257) {
c9fc586403e7c (Tejun Heo                      2017-02-22 15:41:27 -0800 2258) 	return __kmem_cache_shrink(cachep);
52b4b950b5074 (Dmitry Safonov                 2016-02-17 13:11:37 -0800 2259) }
52b4b950b5074 (Dmitry Safonov                 2016-02-17 13:11:37 -0800 2260) 
52b4b950b5074 (Dmitry Safonov                 2016-02-17 13:11:37 -0800 2261) void __kmem_cache_release(struct kmem_cache *cachep)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2262) {
12c3667fb780e (Christoph Lameter              2012-09-04 23:38:33 +0000 2263) 	int i;
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 2264) 	struct kmem_cache_node *n;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2265) 
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2266) 	cache_random_seq_destroy(cachep);
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2267) 
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 2268) 	free_percpu(cachep->cpu_cache);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2269) 
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 2270) 	/* NUMA: free the node structures */
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 2271) 	for_each_kmem_cache_node(cachep, i, n) {
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 2272) 		kfree(n->shared);
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 2273) 		free_alien_cache(n->alien);
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 2274) 		kfree(n);
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 2275) 		cachep->node[i] = NULL;
12c3667fb780e (Christoph Lameter              2012-09-04 23:38:33 +0000 2276) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2277) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2278) 
e5ac9c5aec7c4 (Ravikiran G Thirumalai         2006-09-25 23:31:34 -0700 2279) /*
e5ac9c5aec7c4 (Ravikiran G Thirumalai         2006-09-25 23:31:34 -0700 2280)  * Get the memory for a slab management obj.
5f0985bb1123b (Jianyu Zhan                    2014-03-30 17:02:20 +0800 2281)  *
5f0985bb1123b (Jianyu Zhan                    2014-03-30 17:02:20 +0800 2282)  * For a slab cache whose slab descriptor is off-slab, the
5f0985bb1123b (Jianyu Zhan                    2014-03-30 17:02:20 +0800 2283)  * slab descriptor can't come from the same cache that is being created;
5f0985bb1123b (Jianyu Zhan                    2014-03-30 17:02:20 +0800 2284)  * if it could, we would be deferring the creation of
5f0985bb1123b (Jianyu Zhan                    2014-03-30 17:02:20 +0800 2285)  * the kmalloc_{dma,}_cache of size sizeof(slab descriptor) to this point.
5f0985bb1123b (Jianyu Zhan                    2014-03-30 17:02:20 +0800 2286)  * And we eventually call down to __kmem_cache_create(), which
80d015587a62f (Colin Ian King                 2021-05-06 18:06:21 -0700 2287)  * in turn looks up in the kmalloc_{dma,}_caches for the desired-size one.
5f0985bb1123b (Jianyu Zhan                    2014-03-30 17:02:20 +0800 2288)  * This is a "chicken-and-egg" problem.
5f0985bb1123b (Jianyu Zhan                    2014-03-30 17:02:20 +0800 2289)  *
5f0985bb1123b (Jianyu Zhan                    2014-03-30 17:02:20 +0800 2290)  * So the off-slab slab descriptor shall come from the kmalloc_{dma,}_caches,
5f0985bb1123b (Jianyu Zhan                    2014-03-30 17:02:20 +0800 2291)  * which are all initialized during kmem_cache_init().
e5ac9c5aec7c4 (Ravikiran G Thirumalai         2006-09-25 23:31:34 -0700 2292)  */
7e00735520ffb (Joonsoo Kim                    2013-10-30 19:04:01 +0900 2293) static void *alloc_slabmgmt(struct kmem_cache *cachep,
0c3aa83e00a9c (Joonsoo Kim                    2013-10-24 10:07:38 +0900 2294) 				   struct page *page, int colour_off,
0c3aa83e00a9c (Joonsoo Kim                    2013-10-24 10:07:38 +0900 2295) 				   gfp_t local_flags, int nodeid)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2296) {
7e00735520ffb (Joonsoo Kim                    2013-10-30 19:04:01 +0900 2297) 	void *freelist;
0c3aa83e00a9c (Joonsoo Kim                    2013-10-24 10:07:38 +0900 2298) 	void *addr = page_address(page);
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 2299) 
51dedad06b5f6 (Andrey Konovalov               2019-02-20 22:20:28 -0800 2300) 	page->s_mem = addr + colour_off;
2e6b360216879 (Joonsoo Kim                    2016-03-15 14:54:30 -0700 2301) 	page->active = 0;
2e6b360216879 (Joonsoo Kim                    2016-03-15 14:54:30 -0700 2302) 
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2303) 	if (OBJFREELIST_SLAB(cachep))
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2304) 		freelist = NULL;
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2305) 	else if (OFF_SLAB(cachep)) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2306) 		/* Slab management obj is off-slab. */
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 2307) 		freelist = kmem_cache_alloc_node(cachep->freelist_cache,
8759ec50a6cad (Pekka Enberg                   2008-11-26 10:01:31 +0200 2308) 					      local_flags, nodeid);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2309) 	} else {
2e6b360216879 (Joonsoo Kim                    2016-03-15 14:54:30 -0700 2310) 		/* We will use the last bytes of the slab for the freelist */
2e6b360216879 (Joonsoo Kim                    2016-03-15 14:54:30 -0700 2311) 		freelist = addr + (PAGE_SIZE << cachep->gfporder) -
2e6b360216879 (Joonsoo Kim                    2016-03-15 14:54:30 -0700 2312) 				cachep->freelist_size;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2313) 	}
2e6b360216879 (Joonsoo Kim                    2016-03-15 14:54:30 -0700 2314) 
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 2315) 	return freelist;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2316) }
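
/*
 * [Editor's worked example] For the on-slab case above: with PAGE_SIZE 4096,
 * gfporder 0 and freelist_size 64, the freelist occupies the last 64 bytes
 * of the slab, at addr + (4096 << 0) - 64 = addr + 4032. OBJFREELIST slabs
 * have no fixed home for the freelist at all; it is overlaid on a free
 * object later (see shuffle_freelist() and slab_put_obj() below).
 */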
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2317) 
7cc68973c36d9 (Joonsoo Kim                    2014-04-18 16:24:09 +0900 2318) static inline freelist_idx_t get_free_obj(struct page *page, unsigned int idx)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2319) {
a41adfaa23dfe (Joonsoo Kim                    2013-12-02 17:49:42 +0900 2320) 	return ((freelist_idx_t *)page->freelist)[idx];
e5c58dfdcbd36 (Joonsoo Kim                    2013-12-02 17:49:40 +0900 2321) }
e5c58dfdcbd36 (Joonsoo Kim                    2013-12-02 17:49:40 +0900 2322) 
e5c58dfdcbd36 (Joonsoo Kim                    2013-12-02 17:49:40 +0900 2323) static inline void set_free_obj(struct page *page,
7cc68973c36d9 (Joonsoo Kim                    2014-04-18 16:24:09 +0900 2324) 					unsigned int idx, freelist_idx_t val)
e5c58dfdcbd36 (Joonsoo Kim                    2013-12-02 17:49:40 +0900 2325) {
a41adfaa23dfe (Joonsoo Kim                    2013-12-02 17:49:42 +0900 2326) 	((freelist_idx_t *)(page->freelist))[idx] = val;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2327) }
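
/*
 * [Editor's sketch] page->freelist is, in the common case, a bare array of
 * object indices; get_free_obj()/set_free_obj() above are unchecked
 * accessors into it. A standalone model, assuming freelist_idx_t is a small
 * unsigned integer type; toy_* names are hypothetical:
 */
#include <stdio.h>

typedef unsigned char toy_idx_t;	/* stands in for freelist_idx_t */

int main(void)
{
	toy_idx_t freelist[4] = { 0, 1, 2, 3 };	/* page->freelist contents */

	freelist[0] = 3;			/* set_free_obj(page, 0, 3) */
	printf("%u\n", (unsigned)freelist[0]);	/* get_free_obj(page, 0) -> 3 */
	return 0;
}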
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2328) 
10b2e9e8e808b (Joonsoo Kim                    2016-03-15 14:54:47 -0700 2329) static void cache_init_objs_debug(struct kmem_cache *cachep, struct page *page)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2330) {
10b2e9e8e808b (Joonsoo Kim                    2016-03-15 14:54:47 -0700 2331) #if DEBUG
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2332) 	int i;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2333) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2334) 	for (i = 0; i < cachep->num; i++) {
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 2335) 		void *objp = index_to_obj(cachep, page, i);
10b2e9e8e808b (Joonsoo Kim                    2016-03-15 14:54:47 -0700 2336) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2337) 		if (cachep->flags & SLAB_STORE_USER)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2338) 			*dbg_userword(cachep, objp) = NULL;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2339) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2340) 		if (cachep->flags & SLAB_RED_ZONE) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2341) 			*dbg_redzone1(cachep, objp) = RED_INACTIVE;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2342) 			*dbg_redzone2(cachep, objp) = RED_INACTIVE;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2343) 		}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2344) 		/*
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 2345) 		 * Constructors are not allowed to allocate memory from the same
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 2346) 		 * cache which they are a constructor for.  Otherwise, deadlock.
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 2347)  * They must also be thread-safe, as they may run concurrently.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2348) 		 */
7ed2f9e663854 (Alexander Potapenko            2016-03-25 14:21:59 -0700 2349) 		if (cachep->ctor && !(cachep->flags & SLAB_POISON)) {
7ed2f9e663854 (Alexander Potapenko            2016-03-25 14:21:59 -0700 2350) 			kasan_unpoison_object_data(cachep,
7ed2f9e663854 (Alexander Potapenko            2016-03-25 14:21:59 -0700 2351) 						   objp + obj_offset(cachep));
51cc50685a427 (Alexey Dobriyan                2008-07-25 19:45:34 -0700 2352) 			cachep->ctor(objp + obj_offset(cachep));
7ed2f9e663854 (Alexander Potapenko            2016-03-25 14:21:59 -0700 2353) 			kasan_poison_object_data(
7ed2f9e663854 (Alexander Potapenko            2016-03-25 14:21:59 -0700 2354) 				cachep, objp + obj_offset(cachep));
7ed2f9e663854 (Alexander Potapenko            2016-03-25 14:21:59 -0700 2355) 		}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2356) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2357) 		if (cachep->flags & SLAB_RED_ZONE) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2358) 			if (*dbg_redzone2(cachep, objp) != RED_INACTIVE)
756a025f00091 (Joe Perches                    2016-03-17 14:19:47 -0700 2359) 				slab_error(cachep, "constructor overwrote the end of an object");
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2360) 			if (*dbg_redzone1(cachep, objp) != RED_INACTIVE)
756a025f00091 (Joe Perches                    2016-03-17 14:19:47 -0700 2361) 				slab_error(cachep, "constructor overwrote the start of an object");
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2362) 		}
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 2363) 		/* need to poison the objs? */
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 2364) 		if (cachep->flags & SLAB_POISON) {
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 2365) 			poison_obj(cachep, objp, POISON_FREE);
80552f0f7aebd (Qian Cai                       2019-04-16 10:22:57 -0400 2366) 			slab_kernel_map(cachep, objp, 0);
40b44137971c2 (Joonsoo Kim                    2016-03-15 14:54:21 -0700 2367) 		}
10b2e9e8e808b (Joonsoo Kim                    2016-03-15 14:54:47 -0700 2368) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2369) #endif
10b2e9e8e808b (Joonsoo Kim                    2016-03-15 14:54:47 -0700 2370) }
10b2e9e8e808b (Joonsoo Kim                    2016-03-15 14:54:47 -0700 2371) 
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2372) #ifdef CONFIG_SLAB_FREELIST_RANDOM
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2373) /* Hold information during a freelist initialization */
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2374) union freelist_init_state {
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2375) 	struct {
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2376) 		unsigned int pos;
7c00fce98c3e1 (Thomas Garnier                 2016-07-26 15:21:56 -0700 2377) 		unsigned int *list;
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2378) 		unsigned int count;
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2379) 	};
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2380) 	struct rnd_state rnd_state;
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2381) };
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2382) 
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2383) /*
f0953a1bbaca7 (Ingo Molnar                    2021-05-06 18:06:47 -0700 2384)  * Initialize the state based on the randomization method available.
f0953a1bbaca7 (Ingo Molnar                    2021-05-06 18:06:47 -0700 2385)  * Return true if the pre-computed list is available, false otherwise.
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2386)  */
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2387) static bool freelist_state_initialize(union freelist_init_state *state,
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2388) 				struct kmem_cache *cachep,
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2389) 				unsigned int count)
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2390) {
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2391) 	bool ret;
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2392) 	unsigned int rand;
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2393) 
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2394) 	/* Use best entropy available for a random seed or starting position */
7c00fce98c3e1 (Thomas Garnier                 2016-07-26 15:21:56 -0700 2395) 	rand = get_random_int();
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2396) 
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2397) 	/* Use a random state if the pre-computed list is not available */
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2398) 	if (!cachep->random_seq) {
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2399) 		prandom_seed_state(&state->rnd_state, rand);
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2400) 		ret = false;
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2401) 	} else {
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2402) 		state->list = cachep->random_seq;
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2403) 		state->count = count;
c4e490cf148e8 (John Sperbeck                  2017-01-10 16:58:24 -0800 2404) 		state->pos = rand % count;
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2405) 		ret = true;
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2406) 	}
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2407) 	return ret;
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2408) }
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2409) 
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2410) /* Get the next entry on the pre-computed list, wrapping around at the end */
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2411) static freelist_idx_t next_random_slot(union freelist_init_state *state)
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2412) {
c4e490cf148e8 (John Sperbeck                  2017-01-10 16:58:24 -0800 2413) 	if (state->pos >= state->count)
c4e490cf148e8 (John Sperbeck                  2017-01-10 16:58:24 -0800 2414) 		state->pos = 0;
c4e490cf148e8 (John Sperbeck                  2017-01-10 16:58:24 -0800 2415) 	return state->list[state->pos++];
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2416) }
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2417) 
7c00fce98c3e1 (Thomas Garnier                 2016-07-26 15:21:56 -0700 2418) /* Swap two freelist entries */
7c00fce98c3e1 (Thomas Garnier                 2016-07-26 15:21:56 -0700 2419) static void swap_free_obj(struct page *page, unsigned int a, unsigned int b)
7c00fce98c3e1 (Thomas Garnier                 2016-07-26 15:21:56 -0700 2420) {
7c00fce98c3e1 (Thomas Garnier                 2016-07-26 15:21:56 -0700 2421) 	swap(((freelist_idx_t *)page->freelist)[a],
7c00fce98c3e1 (Thomas Garnier                 2016-07-26 15:21:56 -0700 2422) 		((freelist_idx_t *)page->freelist)[b]);
7c00fce98c3e1 (Thomas Garnier                 2016-07-26 15:21:56 -0700 2423) }
7c00fce98c3e1 (Thomas Garnier                 2016-07-26 15:21:56 -0700 2424) 
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2425) /*
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2426)  * Shuffle the freelist initialization state based on pre-computed lists.
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2427)  * Return true if the list was successfully shuffled, false otherwise.
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2428)  */
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2429) static bool shuffle_freelist(struct kmem_cache *cachep, struct page *page)
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2430) {
7c00fce98c3e1 (Thomas Garnier                 2016-07-26 15:21:56 -0700 2431) 	unsigned int objfreelist = 0, i, rand, count = cachep->num;
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2432) 	union freelist_init_state state;
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2433) 	bool precomputed;
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2434) 
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2435) 	if (count < 2)
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2436) 		return false;
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2437) 
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2438) 	precomputed = freelist_state_initialize(&state, cachep, count);
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2439) 
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2440) 	/* Take a random entry as the objfreelist */
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2441) 	if (OBJFREELIST_SLAB(cachep)) {
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2442) 		if (!precomputed)
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2443) 			objfreelist = count - 1;
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2444) 		else
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2445) 			objfreelist = next_random_slot(&state);
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2446) 		page->freelist = index_to_obj(cachep, page, objfreelist) +
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2447) 						obj_offset(cachep);
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2448) 		count--;
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2449) 	}
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2450) 
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2451) 	/*
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2452) 	 * On early boot, generate the list dynamically.
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2453) 	 * Later use a pre-computed list for speed.
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2454) 	 */
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2455) 	if (!precomputed) {
7c00fce98c3e1 (Thomas Garnier                 2016-07-26 15:21:56 -0700 2456) 		for (i = 0; i < count; i++)
7c00fce98c3e1 (Thomas Garnier                 2016-07-26 15:21:56 -0700 2457) 			set_free_obj(page, i, i);
7c00fce98c3e1 (Thomas Garnier                 2016-07-26 15:21:56 -0700 2458) 
7c00fce98c3e1 (Thomas Garnier                 2016-07-26 15:21:56 -0700 2459) 		/* Fisher-Yates shuffle */
7c00fce98c3e1 (Thomas Garnier                 2016-07-26 15:21:56 -0700 2460) 		for (i = count - 1; i > 0; i--) {
7c00fce98c3e1 (Thomas Garnier                 2016-07-26 15:21:56 -0700 2461) 			rand = prandom_u32_state(&state.rnd_state);
7c00fce98c3e1 (Thomas Garnier                 2016-07-26 15:21:56 -0700 2462) 			rand %= (i + 1);
7c00fce98c3e1 (Thomas Garnier                 2016-07-26 15:21:56 -0700 2463) 			swap_free_obj(page, i, rand);
7c00fce98c3e1 (Thomas Garnier                 2016-07-26 15:21:56 -0700 2464) 		}
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2465) 	} else {
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2466) 		for (i = 0; i < count; i++)
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2467) 			set_free_obj(page, i, next_random_slot(&state));
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2468) 	}
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2469) 
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2470) 	if (OBJFREELIST_SLAB(cachep))
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2471) 		set_free_obj(page, cachep->num - 1, objfreelist);
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2472) 
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2473) 	return true;
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2474) }
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2475) #else
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2476) static inline bool shuffle_freelist(struct kmem_cache *cachep,
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2477) 				struct page *page)
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2478) {
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2479) 	return false;
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2480) }
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2481) #endif /* CONFIG_SLAB_FREELIST_RANDOM */
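
/*
 * [Editor's sketch] The early-boot path in shuffle_freelist() above, taken
 * when no pre-computed list exists yet, is a textbook Fisher-Yates shuffle
 * over the freelist index array. A standalone userspace model, using rand()
 * in place of prandom_u32_state(); toy_* names are hypothetical:
 */
#include <stdio.h>
#include <stdlib.h>

static void toy_shuffle(unsigned int *list, unsigned int count)
{
	unsigned int i, j, tmp;

	for (i = 0; i < count; i++)		/* set_free_obj(page, i, i) */
		list[i] = i;

	for (i = count - 1; i > 0; i--) {	/* Fisher-Yates */
		j = (unsigned int)rand() % (i + 1);
		tmp = list[i];			/* swap_free_obj(page, i, j) */
		list[i] = list[j];
		list[j] = tmp;
	}
}

int main(void)
{
	unsigned int freelist[8], i;

	srand(42);
	toy_shuffle(freelist, 8);
	for (i = 0; i < 8; i++)
		printf("%u ", freelist[i]);	/* a permutation of 0..7 */
	printf("\n");
	return 0;
}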
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2482) 
10b2e9e8e808b (Joonsoo Kim                    2016-03-15 14:54:47 -0700 2483) static void cache_init_objs(struct kmem_cache *cachep,
10b2e9e8e808b (Joonsoo Kim                    2016-03-15 14:54:47 -0700 2484) 			    struct page *page)
10b2e9e8e808b (Joonsoo Kim                    2016-03-15 14:54:47 -0700 2485) {
10b2e9e8e808b (Joonsoo Kim                    2016-03-15 14:54:47 -0700 2486) 	int i;
7ed2f9e663854 (Alexander Potapenko            2016-03-25 14:21:59 -0700 2487) 	void *objp;
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2488) 	bool shuffled;
10b2e9e8e808b (Joonsoo Kim                    2016-03-15 14:54:47 -0700 2489) 
10b2e9e8e808b (Joonsoo Kim                    2016-03-15 14:54:47 -0700 2490) 	cache_init_objs_debug(cachep, page);
10b2e9e8e808b (Joonsoo Kim                    2016-03-15 14:54:47 -0700 2491) 
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2492) 	/* Try to randomize the freelist if enabled */
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2493) 	shuffled = shuffle_freelist(cachep, page);
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2494) 
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2495) 	if (!shuffled && OBJFREELIST_SLAB(cachep)) {
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2496) 		page->freelist = index_to_obj(cachep, page, cachep->num - 1) +
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2497) 						obj_offset(cachep);
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2498) 	}
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2499) 
10b2e9e8e808b (Joonsoo Kim                    2016-03-15 14:54:47 -0700 2500) 	for (i = 0; i < cachep->num; i++) {
b3cbd9bf77cd1 (Andrey Ryabinin                2016-08-02 14:02:52 -0700 2501) 		objp = index_to_obj(cachep, page, i);
4d176711ea7a8 (Andrey Konovalov               2018-12-28 00:30:23 -0800 2502) 		objp = kasan_init_slab_obj(cachep, objp);
b3cbd9bf77cd1 (Andrey Ryabinin                2016-08-02 14:02:52 -0700 2503) 
10b2e9e8e808b (Joonsoo Kim                    2016-03-15 14:54:47 -0700 2504) 		/* constructor could break poison info */
7ed2f9e663854 (Alexander Potapenko            2016-03-25 14:21:59 -0700 2505) 		if (DEBUG == 0 && cachep->ctor) {
7ed2f9e663854 (Alexander Potapenko            2016-03-25 14:21:59 -0700 2506) 			kasan_unpoison_object_data(cachep, objp);
7ed2f9e663854 (Alexander Potapenko            2016-03-25 14:21:59 -0700 2507) 			cachep->ctor(objp);
7ed2f9e663854 (Alexander Potapenko            2016-03-25 14:21:59 -0700 2508) 			kasan_poison_object_data(cachep, objp);
7ed2f9e663854 (Alexander Potapenko            2016-03-25 14:21:59 -0700 2509) 		}
10b2e9e8e808b (Joonsoo Kim                    2016-03-15 14:54:47 -0700 2510) 
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2511) 		if (!shuffled)
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 2512) 			set_free_obj(page, i, i);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2513) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2514) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2515) 
260b61dd46ed0 (Joonsoo Kim                    2016-03-15 14:54:12 -0700 2516) static void *slab_get_obj(struct kmem_cache *cachep, struct page *page)
78d382d77c842 (Matthew Dobson                 2006-02-01 03:05:47 -0800 2517) {
b1cb0982bdd6f (Joonsoo Kim                    2013-10-24 10:07:45 +0900 2518) 	void *objp;
78d382d77c842 (Matthew Dobson                 2006-02-01 03:05:47 -0800 2519) 
e5c58dfdcbd36 (Joonsoo Kim                    2013-12-02 17:49:40 +0900 2520) 	objp = index_to_obj(cachep, page, get_free_obj(page, page->active));
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 2521) 	page->active++;
78d382d77c842 (Matthew Dobson                 2006-02-01 03:05:47 -0800 2522) 
78d382d77c842 (Matthew Dobson                 2006-02-01 03:05:47 -0800 2523) 	return objp;
78d382d77c842 (Matthew Dobson                 2006-02-01 03:05:47 -0800 2524) }
78d382d77c842 (Matthew Dobson                 2006-02-01 03:05:47 -0800 2525) 
260b61dd46ed0 (Joonsoo Kim                    2016-03-15 14:54:12 -0700 2526) static void slab_put_obj(struct kmem_cache *cachep,
260b61dd46ed0 (Joonsoo Kim                    2016-03-15 14:54:12 -0700 2527) 			struct page *page, void *objp)
78d382d77c842 (Matthew Dobson                 2006-02-01 03:05:47 -0800 2528) {
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 2529) 	unsigned int objnr = obj_to_index(cachep, page, objp);
78d382d77c842 (Matthew Dobson                 2006-02-01 03:05:47 -0800 2530) #if DEBUG
16025177e1e16 (Joonsoo Kim                    2013-10-24 10:07:46 +0900 2531) 	unsigned int i;
b1cb0982bdd6f (Joonsoo Kim                    2013-10-24 10:07:45 +0900 2532) 
b1cb0982bdd6f (Joonsoo Kim                    2013-10-24 10:07:45 +0900 2533) 	/* Verify the object is not already on the free list (double free) */
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 2534) 	for (i = page->active; i < cachep->num; i++) {
e5c58dfdcbd36 (Joonsoo Kim                    2013-12-02 17:49:40 +0900 2535) 		if (get_free_obj(page, i) == objnr) {
85c3e4a5a185f (Geert Uytterhoeven             2017-12-14 15:32:58 -0800 2536) 			pr_err("slab: double free detected in cache '%s', objp %px\n",
756a025f00091 (Joe Perches                    2016-03-17 14:19:47 -0700 2537) 			       cachep->name, objp);
b1cb0982bdd6f (Joonsoo Kim                    2013-10-24 10:07:45 +0900 2538) 			BUG();
b1cb0982bdd6f (Joonsoo Kim                    2013-10-24 10:07:45 +0900 2539) 		}
78d382d77c842 (Matthew Dobson                 2006-02-01 03:05:47 -0800 2540) 	}
78d382d77c842 (Matthew Dobson                 2006-02-01 03:05:47 -0800 2541) #endif
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 2542) 	page->active--;
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2543) 	if (!page->freelist)
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2544) 		page->freelist = objp + obj_offset(cachep);
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2545) 
e5c58dfdcbd36 (Joonsoo Kim                    2013-12-02 17:49:40 +0900 2546) 	set_free_obj(page, page->active, objnr);
78d382d77c842 (Matthew Dobson                 2006-02-01 03:05:47 -0800 2547) }
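
/*
 * [Editor's sketch] Together, slab_get_obj()/slab_put_obj() treat the
 * freelist as a stack indexed by page->active: entries [active, num) hold
 * the indices of free objects, so the most recently freed object is handed
 * out first. A standalone model; toy_* names are hypothetical:
 */
#include <assert.h>

#define TOY_NUM 4

static unsigned int toy_freelist[TOY_NUM] = { 0, 1, 2, 3 };
static unsigned int toy_active;

static unsigned int toy_get_obj(void)
{
	return toy_freelist[toy_active++];	/* slab_get_obj() */
}

static void toy_put_obj(unsigned int objnr)
{
	toy_freelist[--toy_active] = objnr;	/* slab_put_obj() */
}

int main(void)
{
	unsigned int a = toy_get_obj();		/* index 0 */
	unsigned int b = toy_get_obj();		/* index 1 */

	toy_put_obj(a);
	assert(toy_get_obj() == a);		/* LIFO: last freed, first out */
	(void)b;
	return 0;
}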
78d382d77c842 (Matthew Dobson                 2006-02-01 03:05:47 -0800 2548) 
4776874ff096c (Pekka Enberg                   2006-06-23 02:03:07 -0700 2549) /*
4776874ff096c (Pekka Enberg                   2006-06-23 02:03:07 -0700 2550)  * Map pages beginning at addr to the given cache and slab. This is required
4776874ff096c (Pekka Enberg                   2006-06-23 02:03:07 -0700 2551)  * for the slab allocator to be able to look up the cache and slab of a
ccd35fb9f4da8 (Nicholas Piggin                2011-01-07 17:49:17 +1100 2552)  * virtual address for kfree, ksize, and slab debugging.
4776874ff096c (Pekka Enberg                   2006-06-23 02:03:07 -0700 2553)  */
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 2554) static void slab_map_pages(struct kmem_cache *cache, struct page *page,
7e00735520ffb (Joonsoo Kim                    2013-10-30 19:04:01 +0900 2555) 			   void *freelist)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2556) {
a57a49887eb33 (Joonsoo Kim                    2013-10-24 10:07:44 +0900 2557) 	page->slab_cache = cache;
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 2558) 	page->freelist = freelist;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2559) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2560) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2561) /*
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2562)  * Grow (by 1) the number of slabs within a cache.  This is called by
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2563)  * kmem_cache_alloc() when there are no active objs left in a cache.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2564)  */
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2565) static struct page *cache_grow_begin(struct kmem_cache *cachep,
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2566) 				gfp_t flags, int nodeid)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2567) {
7e00735520ffb (Joonsoo Kim                    2013-10-30 19:04:01 +0900 2568) 	void *freelist;
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 2569) 	size_t offset;
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 2570) 	gfp_t local_flags;
511e3a0588122 (Joonsoo Kim                    2016-05-19 17:10:23 -0700 2571) 	int page_node;
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 2572) 	struct kmem_cache_node *n;
511e3a0588122 (Joonsoo Kim                    2016-05-19 17:10:23 -0700 2573) 	struct page *page;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2574) 
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 2575) 	/*
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 2576) 	 * Be lazy and only check for valid flags here, keeping it out of the
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 2577) 	 * critical path in kmem_cache_alloc().
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2578) 	 */
444050990db4a (Long Li                        2020-08-06 23:18:28 -0700 2579) 	if (unlikely(flags & GFP_SLAB_BUG_MASK))
444050990db4a (Long Li                        2020-08-06 23:18:28 -0700 2580) 		flags = kmalloc_fix_flags(flags);
444050990db4a (Long Li                        2020-08-06 23:18:28 -0700 2581) 
128227e7fe408 (Matthew Wilcox                 2018-06-07 17:05:13 -0700 2582) 	WARN_ON_ONCE(cachep->ctor && (flags & __GFP_ZERO));
6cb062296f73e (Christoph Lameter              2007-10-16 01:25:41 -0700 2583) 	local_flags = flags & (GFP_CONSTRAINT_MASK|GFP_RECLAIM_MASK);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2584) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2585) 	check_irq_off();
d0164adc89f6b (Mel Gorman                     2015-11-06 16:28:21 -0800 2586) 	if (gfpflags_allow_blocking(local_flags))
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2587) 		local_irq_enable();
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2588) 
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 2589) 	/*
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 2590) 	 * Get mem for the objs.  Attempt to allocate a physical page from
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 2591) 	 * 'nodeid'.
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 2592) 	 */
511e3a0588122 (Joonsoo Kim                    2016-05-19 17:10:23 -0700 2593) 	page = kmem_getpages(cachep, local_flags, nodeid);
0c3aa83e00a9c (Joonsoo Kim                    2013-10-24 10:07:38 +0900 2594) 	if (!page)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2595) 		goto failed;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2596) 
511e3a0588122 (Joonsoo Kim                    2016-05-19 17:10:23 -0700 2597) 	page_node = page_to_nid(page);
511e3a0588122 (Joonsoo Kim                    2016-05-19 17:10:23 -0700 2598) 	n = get_node(cachep, page_node);
03d1d43a1262b (Joonsoo Kim                    2016-05-19 17:10:20 -0700 2599) 
03d1d43a1262b (Joonsoo Kim                    2016-05-19 17:10:20 -0700 2600) 	/* Get colour for the slab, and calculate the next value. */
03d1d43a1262b (Joonsoo Kim                    2016-05-19 17:10:20 -0700 2601) 	n->colour_next++;
03d1d43a1262b (Joonsoo Kim                    2016-05-19 17:10:20 -0700 2602) 	if (n->colour_next >= cachep->colour)
03d1d43a1262b (Joonsoo Kim                    2016-05-19 17:10:20 -0700 2603) 		n->colour_next = 0;
03d1d43a1262b (Joonsoo Kim                    2016-05-19 17:10:20 -0700 2604) 
03d1d43a1262b (Joonsoo Kim                    2016-05-19 17:10:20 -0700 2605) 	offset = n->colour_next;
03d1d43a1262b (Joonsoo Kim                    2016-05-19 17:10:20 -0700 2606) 	if (offset >= cachep->colour)
03d1d43a1262b (Joonsoo Kim                    2016-05-19 17:10:20 -0700 2607) 		offset = 0;
03d1d43a1262b (Joonsoo Kim                    2016-05-19 17:10:20 -0700 2608) 
03d1d43a1262b (Joonsoo Kim                    2016-05-19 17:10:20 -0700 2609) 	offset *= cachep->colour_off;
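	/*
	 * [Editor's worked example] With cachep->colour_off == 64 and
	 * cachep->colour == 4, successive slabs get offsets 64, 128, 192,
	 * 0, 64, ... so the first object of each new slab starts in a
	 * different cache line.
	 */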
03d1d43a1262b (Joonsoo Kim                    2016-05-19 17:10:20 -0700 2610) 
51dedad06b5f6 (Andrey Konovalov               2019-02-20 22:20:28 -0800 2611) 	/*
51dedad06b5f6 (Andrey Konovalov               2019-02-20 22:20:28 -0800 2612) 	 * Call kasan_poison_slab() before calling alloc_slabmgmt(), so
51dedad06b5f6 (Andrey Konovalov               2019-02-20 22:20:28 -0800 2613) 	 * page_address() in the latter returns a non-tagged pointer,
51dedad06b5f6 (Andrey Konovalov               2019-02-20 22:20:28 -0800 2614) 	 * as it should be for slab pages.
51dedad06b5f6 (Andrey Konovalov               2019-02-20 22:20:28 -0800 2615) 	 */
51dedad06b5f6 (Andrey Konovalov               2019-02-20 22:20:28 -0800 2616) 	kasan_poison_slab(page);
51dedad06b5f6 (Andrey Konovalov               2019-02-20 22:20:28 -0800 2617) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2618) 	/* Get slab management. */
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 2619) 	freelist = alloc_slabmgmt(cachep, page, offset,
511e3a0588122 (Joonsoo Kim                    2016-05-19 17:10:23 -0700 2620) 			local_flags & ~GFP_CONSTRAINT_MASK, page_node);
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2621) 	if (OFF_SLAB(cachep) && !freelist)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2622) 		goto opps1;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2623) 
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 2624) 	slab_map_pages(cachep, page, freelist);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2625) 
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 2626) 	cache_init_objs(cachep, page);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2627) 
d0164adc89f6b (Mel Gorman                     2015-11-06 16:28:21 -0800 2628) 	if (gfpflags_allow_blocking(local_flags))
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2629) 		local_irq_disable();
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2630) 
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2631) 	return page;
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2632) 
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 2633) opps1:
0c3aa83e00a9c (Joonsoo Kim                    2013-10-24 10:07:38 +0900 2634) 	kmem_freepages(cachep, page);
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 2635) failed:
d0164adc89f6b (Mel Gorman                     2015-11-06 16:28:21 -0800 2636) 	if (gfpflags_allow_blocking(local_flags))
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2637) 		local_irq_disable();
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2638) 	return NULL;
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2639) }
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2640) 
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2641) static void cache_grow_end(struct kmem_cache *cachep, struct page *page)
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2642) {
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2643) 	struct kmem_cache_node *n;
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2644) 	void *list = NULL;
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2645) 
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2646) 	check_irq_off();
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2647) 
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2648) 	if (!page)
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2649) 		return;
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2650) 
16cb0ec75b346 (Tobin C. Harding               2019-05-13 17:16:15 -0700 2651) 	INIT_LIST_HEAD(&page->slab_list);
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2652) 	n = get_node(cachep, page_to_nid(page));
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2653) 
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2654) 	spin_lock(&n->list_lock);
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 2655) 	n->total_slabs++;
f728b0a5d72ae (Greg Thelen                    2016-12-12 16:41:41 -0800 2656) 	if (!page->active) {
16cb0ec75b346 (Tobin C. Harding               2019-05-13 17:16:15 -0700 2657) 		list_add_tail(&page->slab_list, &n->slabs_free);
f728b0a5d72ae (Greg Thelen                    2016-12-12 16:41:41 -0800 2658) 		n->free_slabs++;
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 2659) 	} else
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2660) 		fixup_slab_list(cachep, n, page, &list);
07a63c41fa1f6 (Aruna Ramakrishna              2016-10-27 17:46:32 -0700 2661) 
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2662) 	STATS_INC_GROWN(cachep);
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2663) 	n->free_objects += cachep->num - page->active;
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2664) 	spin_unlock(&n->list_lock);
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2665) 
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2666) 	fixup_objfreelist_debug(cachep, &list);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2667) }
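
/*
 * [Editor's note] The begin/end split above lets a caller allocate the
 * backing page with IRQs re-enabled (and possibly sleep), and take
 * n->list_lock only for the brief list insertion. A caller is expected to
 * pair the two, roughly (a sketch, not a verbatim call site):
 *
 *	page = cache_grow_begin(cachep, flags, nodeid);
 *	... retry the allocation fast path ...
 *	cache_grow_end(cachep, page);	(tolerates page == NULL)
 */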
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2668) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2669) #if DEBUG
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2670) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2671) /*
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2672)  * Perform extra freeing checks:
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2673)  * - detect bad pointers.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2674)  * - POISON/RED_ZONE checking
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2675)  */
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2676) static void kfree_debugcheck(const void *objp)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2677) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2678) 	if (!virt_addr_valid(objp)) {
1170532bb49f9 (Joe Perches                    2016-03-17 14:19:50 -0700 2679) 		pr_err("kfree_debugcheck: out of range ptr %lxh\n",
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 2680) 		       (unsigned long)objp);
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 2681) 		BUG();
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2682) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2683) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2684) 
58ce1fd580564 (Pekka Enberg                   2006-06-23 02:03:24 -0700 2685) static inline void verify_redzone_free(struct kmem_cache *cache, void *obj)
58ce1fd580564 (Pekka Enberg                   2006-06-23 02:03:24 -0700 2686) {
b46b8f19c9cd4 (David Woodhouse                2007-05-08 00:22:59 -0700 2687) 	unsigned long long redzone1, redzone2;
58ce1fd580564 (Pekka Enberg                   2006-06-23 02:03:24 -0700 2688) 
58ce1fd580564 (Pekka Enberg                   2006-06-23 02:03:24 -0700 2689) 	redzone1 = *dbg_redzone1(cache, obj);
58ce1fd580564 (Pekka Enberg                   2006-06-23 02:03:24 -0700 2690) 	redzone2 = *dbg_redzone2(cache, obj);
58ce1fd580564 (Pekka Enberg                   2006-06-23 02:03:24 -0700 2691) 
58ce1fd580564 (Pekka Enberg                   2006-06-23 02:03:24 -0700 2692) 	/*
58ce1fd580564 (Pekka Enberg                   2006-06-23 02:03:24 -0700 2693) 	 * Redzone is ok.
58ce1fd580564 (Pekka Enberg                   2006-06-23 02:03:24 -0700 2694) 	 */
58ce1fd580564 (Pekka Enberg                   2006-06-23 02:03:24 -0700 2695) 	if (redzone1 == RED_ACTIVE && redzone2 == RED_ACTIVE)
58ce1fd580564 (Pekka Enberg                   2006-06-23 02:03:24 -0700 2696) 		return;
58ce1fd580564 (Pekka Enberg                   2006-06-23 02:03:24 -0700 2697) 
58ce1fd580564 (Pekka Enberg                   2006-06-23 02:03:24 -0700 2698) 	if (redzone1 == RED_INACTIVE && redzone2 == RED_INACTIVE)
58ce1fd580564 (Pekka Enberg                   2006-06-23 02:03:24 -0700 2699) 		slab_error(cache, "double free detected");
58ce1fd580564 (Pekka Enberg                   2006-06-23 02:03:24 -0700 2700) 	else
58ce1fd580564 (Pekka Enberg                   2006-06-23 02:03:24 -0700 2701) 		slab_error(cache, "memory outside object was overwritten");
58ce1fd580564 (Pekka Enberg                   2006-06-23 02:03:24 -0700 2702) 
85c3e4a5a185f (Geert Uytterhoeven             2017-12-14 15:32:58 -0800 2703) 	pr_err("%px: redzone 1:0x%llx, redzone 2:0x%llx\n",
1170532bb49f9 (Joe Perches                    2016-03-17 14:19:50 -0700 2704) 	       obj, redzone1, redzone2);
58ce1fd580564 (Pekka Enberg                   2006-06-23 02:03:24 -0700 2705) }
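
/*
 * [Editor's sketch] The two redzone words act as a two-state marker: both
 * RED_ACTIVE while the object is allocated, both RED_INACTIVE after free;
 * anything else means memory outside the object was overwritten. A
 * standalone model of the same checks; the toy_* names and values are
 * hypothetical stand-ins for RED_ACTIVE/RED_INACTIVE:
 */
#include <stdio.h>

#define TOY_ACTIVE	0xd84156c5635688c0ULL
#define TOY_INACTIVE	0x09f911029d74e35bULL

static void toy_verify_free(unsigned long long rz1, unsigned long long rz2)
{
	if (rz1 == TOY_ACTIVE && rz2 == TOY_ACTIVE)
		return;				/* redzone is ok */
	if (rz1 == TOY_INACTIVE && rz2 == TOY_INACTIVE)
		puts("double free detected");
	else
		puts("memory outside object was overwritten");
}

int main(void)
{
	toy_verify_free(TOY_ACTIVE, TOY_ACTIVE);	/* silent */
	toy_verify_free(TOY_INACTIVE, TOY_INACTIVE);	/* double free */
	toy_verify_free(TOY_ACTIVE, 0);			/* overwrite */
	return 0;
}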
58ce1fd580564 (Pekka Enberg                   2006-06-23 02:03:24 -0700 2706) 
343e0d7a93951 (Pekka Enberg                   2006-02-01 03:05:50 -0800 2707) static void *cache_free_debugcheck(struct kmem_cache *cachep, void *objp,
7c0cb9c64f83d (Ezequiel Garcia                2012-09-08 17:47:55 -0300 2708) 				   unsigned long caller)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2709) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2710) 	unsigned int objnr;
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 2711) 	struct page *page;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2712) 
80cbd911ca255 (Matthew Wilcox                 2007-11-29 12:05:13 -0700 2713) 	BUG_ON(virt_to_cache(objp) != cachep);
80cbd911ca255 (Matthew Wilcox                 2007-11-29 12:05:13 -0700 2714) 
3dafccf227514 (Manfred Spraul                 2006-02-01 03:05:42 -0800 2715) 	objp -= obj_offset(cachep);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2716) 	kfree_debugcheck(objp);
b49af68ff9fc5 (Christoph Lameter              2007-05-06 14:49:41 -0700 2717) 	page = virt_to_head_page(objp);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2718) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2719) 	if (cachep->flags & SLAB_RED_ZONE) {
58ce1fd580564 (Pekka Enberg                   2006-06-23 02:03:24 -0700 2720) 		verify_redzone_free(cachep, objp);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2721) 		*dbg_redzone1(cachep, objp) = RED_INACTIVE;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2722) 		*dbg_redzone2(cachep, objp) = RED_INACTIVE;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2723) 	}
7878c231dae05 (Qian Cai                       2019-05-16 15:57:41 -0400 2724) 	if (cachep->flags & SLAB_STORE_USER)
7c0cb9c64f83d (Ezequiel Garcia                2012-09-08 17:47:55 -0300 2725) 		*dbg_userword(cachep, objp) = (void *)caller;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2726) 
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 2727) 	objnr = obj_to_index(cachep, page, objp);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2728) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2729) 	BUG_ON(objnr >= cachep->num);
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 2730) 	BUG_ON(objp != index_to_obj(cachep, page, objnr));
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2731) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2732) 	if (cachep->flags & SLAB_POISON) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2733) 		poison_obj(cachep, objp, POISON_FREE);
80552f0f7aebd (Qian Cai                       2019-04-16 10:22:57 -0400 2734) 		slab_kernel_map(cachep, objp, 0);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2735) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2736) 	return objp;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2737) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2738) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2739) #else
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2740) #define kfree_debugcheck(x) do { } while(0)
0b41163407e2f (Zhiyuan Dai                    2021-02-24 12:01:01 -0800 2741) #define cache_free_debugcheck(x, objp, z) (objp)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2742) #endif
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2743) 
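/*
 * A rough sketch (for orientation, not authoritative) of the object
 * layout the debug checks above rely on when SLAB_RED_ZONE and
 * SLAB_STORE_USER are enabled:
 *
 *	| redzone1 | object payload | redzone2 | last-user word |
 *
 * obj_offset(cachep) converts between the payload pointer handed to
 * users and the start of the buffer.  Both redzone words hold
 * RED_ACTIVE while the object is live and are flipped to RED_INACTIVE
 * on free, so verify_redzone_free() reports an already-RED_INACTIVE
 * pair as a double free and anything else as an overwrite next to the
 * object.  The POISON_FREE fill lets the next allocation detect
 * use-after-free via check_poison_obj().
 */
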
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2744) static inline void fixup_objfreelist_debug(struct kmem_cache *cachep,
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2745) 						void **list)
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2746) {
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2747) #if DEBUG
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2748) 	void *next = *list;
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2749) 	void *objp;
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2750) 
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2751) 	while (next) {
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2752) 		objp = next - obj_offset(cachep);
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2753) 		next = *(void **)next;
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2754) 		poison_obj(cachep, objp, POISON_FREE);
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2755) 	}
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2756) #endif
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2757) }
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2758) 
d8410234db6a1 (Joonsoo Kim                    2016-03-15 14:54:44 -0700 2759) static inline void fixup_slab_list(struct kmem_cache *cachep,
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2760) 				struct kmem_cache_node *n, struct page *page,
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2761) 				void **list)
d8410234db6a1 (Joonsoo Kim                    2016-03-15 14:54:44 -0700 2762) {
d8410234db6a1 (Joonsoo Kim                    2016-03-15 14:54:44 -0700 2763) 	/* move slab to the correct slab list: */
16cb0ec75b346 (Tobin C. Harding               2019-05-13 17:16:15 -0700 2764) 	list_del(&page->slab_list);
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2765) 	if (page->active == cachep->num) {
16cb0ec75b346 (Tobin C. Harding               2019-05-13 17:16:15 -0700 2766) 		list_add(&page->slab_list, &n->slabs_full);
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2767) 		if (OBJFREELIST_SLAB(cachep)) {
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2768) #if DEBUG
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2769) 			/* Poisoning will be done without holding the lock */
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2770) 			if (cachep->flags & SLAB_POISON) {
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2771) 				void **objp = page->freelist;
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2772) 
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2773) 				*objp = *list;
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2774) 				*list = objp;
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2775) 			}
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2776) #endif
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2777) 			page->freelist = NULL;
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2778) 		}
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2779) 	} else
16cb0ec75b346 (Tobin C. Harding               2019-05-13 17:16:15 -0700 2780) 		list_add(&page->slab_list, &n->slabs_partial);
d8410234db6a1 (Joonsoo Kim                    2016-03-15 14:54:44 -0700 2781) }
d8410234db6a1 (Joonsoo Kim                    2016-03-15 14:54:44 -0700 2782) 
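/*
 * In short, fixup_slab_list() re-files a slab purely by its active
 * count: page->active == cachep->num puts it on n->slabs_full,
 * anything less on n->slabs_partial.  For OBJFREELIST_SLAB caches the
 * freelist lives inside a free object, so a completely full slab has
 * no object left to hold it and page->freelist must be cleared; with
 * DEBUG, that now-unused freelist object is chained onto *list so
 * fixup_objfreelist_debug() can poison it after n->list_lock has been
 * dropped.
 */
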
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2783) /* Try to find a non-pfmemalloc slab if needed */
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2784) static noinline struct page *get_valid_first_slab(struct kmem_cache_node *n,
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 2785) 					struct page *page, bool pfmemalloc)
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2786) {
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2787) 	if (!page)
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2788) 		return NULL;
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2789) 
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2790) 	if (pfmemalloc)
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2791) 		return page;
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2792) 
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2793) 	if (!PageSlabPfmemalloc(page))
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2794) 		return page;
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2795) 
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2796) 	/* No need to keep pfmemalloc slab if we have enough free objects */
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2797) 	if (n->free_objects > n->free_limit) {
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2798) 		ClearPageSlabPfmemalloc(page);
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2799) 		return page;
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2800) 	}
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2801) 
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2802) 	/* Move pfmemalloc slab to the end of list to speed up next search */
16cb0ec75b346 (Tobin C. Harding               2019-05-13 17:16:15 -0700 2803) 	list_del(&page->slab_list);
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 2804) 	if (!page->active) {
16cb0ec75b346 (Tobin C. Harding               2019-05-13 17:16:15 -0700 2805) 		list_add_tail(&page->slab_list, &n->slabs_free);
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 2806) 		n->free_slabs++;
f728b0a5d72ae (Greg Thelen                    2016-12-12 16:41:41 -0800 2807) 	} else
16cb0ec75b346 (Tobin C. Harding               2019-05-13 17:16:15 -0700 2808) 		list_add_tail(&page->slab_list, &n->slabs_partial);
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2809) 
16cb0ec75b346 (Tobin C. Harding               2019-05-13 17:16:15 -0700 2810) 	list_for_each_entry(page, &n->slabs_partial, slab_list) {
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2811) 		if (!PageSlabPfmemalloc(page))
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2812) 			return page;
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2813) 	}
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2814) 
f728b0a5d72ae (Greg Thelen                    2016-12-12 16:41:41 -0800 2815) 	n->free_touched = 1;
16cb0ec75b346 (Tobin C. Harding               2019-05-13 17:16:15 -0700 2816) 	list_for_each_entry(page, &n->slabs_free, slab_list) {
f728b0a5d72ae (Greg Thelen                    2016-12-12 16:41:41 -0800 2817) 		if (!PageSlabPfmemalloc(page)) {
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 2818) 			n->free_slabs--;
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2819) 			return page;
f728b0a5d72ae (Greg Thelen                    2016-12-12 16:41:41 -0800 2820) 		}
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2821) 	}
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2822) 
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2823) 	return NULL;
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2824) }
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2825) 
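/*
 * Decision summary for get_valid_first_slab() (a paraphrase of the
 * code above, not extra logic):
 *
 *	caller may use reserves (pfmemalloc)   -> take any slab
 *	slab was not allocated from reserves   -> take it
 *	n->free_objects > n->free_limit        -> untag the slab, take it
 *	otherwise -> park the pfmemalloc slab at the tail of its list and
 *	             rescan slabs_partial, then slabs_free, for a clean one
 *
 * This keeps slabs backed by the page allocator's emergency reserves
 * available for memalloc sockets (see sk_memalloc_socks()) instead of
 * letting ordinary allocations consume them.
 */
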
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2826) static struct page *get_first_slab(struct kmem_cache_node *n, bool pfmemalloc)
7aa0d22785dee (Geliang Tang                   2016-01-14 15:18:02 -0800 2827) {
7aa0d22785dee (Geliang Tang                   2016-01-14 15:18:02 -0800 2828) 	struct page *page;
7aa0d22785dee (Geliang Tang                   2016-01-14 15:18:02 -0800 2829) 
f728b0a5d72ae (Greg Thelen                    2016-12-12 16:41:41 -0800 2830) 	assert_spin_locked(&n->list_lock);
16cb0ec75b346 (Tobin C. Harding               2019-05-13 17:16:15 -0700 2831) 	page = list_first_entry_or_null(&n->slabs_partial, struct page,
16cb0ec75b346 (Tobin C. Harding               2019-05-13 17:16:15 -0700 2832) 					slab_list);
7aa0d22785dee (Geliang Tang                   2016-01-14 15:18:02 -0800 2833) 	if (!page) {
7aa0d22785dee (Geliang Tang                   2016-01-14 15:18:02 -0800 2834) 		n->free_touched = 1;
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 2835) 		page = list_first_entry_or_null(&n->slabs_free, struct page,
16cb0ec75b346 (Tobin C. Harding               2019-05-13 17:16:15 -0700 2836) 						slab_list);
f728b0a5d72ae (Greg Thelen                    2016-12-12 16:41:41 -0800 2837) 		if (page)
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 2838) 			n->free_slabs--;
7aa0d22785dee (Geliang Tang                   2016-01-14 15:18:02 -0800 2839) 	}
7aa0d22785dee (Geliang Tang                   2016-01-14 15:18:02 -0800 2840) 
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2841) 	if (sk_memalloc_socks())
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 2842) 		page = get_valid_first_slab(n, page, pfmemalloc);
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2843) 
7aa0d22785dee (Geliang Tang                   2016-01-14 15:18:02 -0800 2844) 	return page;
7aa0d22785dee (Geliang Tang                   2016-01-14 15:18:02 -0800 2845) }
7aa0d22785dee (Geliang Tang                   2016-01-14 15:18:02 -0800 2846) 
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2847) static noinline void *cache_alloc_pfmemalloc(struct kmem_cache *cachep,
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2848) 				struct kmem_cache_node *n, gfp_t flags)
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2849) {
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2850) 	struct page *page;
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2851) 	void *obj;
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2852) 	void *list = NULL;
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2853) 
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2854) 	if (!gfp_pfmemalloc_allowed(flags))
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2855) 		return NULL;
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2856) 
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2857) 	spin_lock(&n->list_lock);
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2858) 	page = get_first_slab(n, true);
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2859) 	if (!page) {
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2860) 		spin_unlock(&n->list_lock);
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2861) 		return NULL;
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2862) 	}
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2863) 
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2864) 	obj = slab_get_obj(cachep, page);
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2865) 	n->free_objects--;
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2866) 
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2867) 	fixup_slab_list(cachep, n, page, &list);
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2868) 
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2869) 	spin_unlock(&n->list_lock);
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2870) 	fixup_objfreelist_debug(cachep, &list);
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2871) 
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2872) 	return obj;
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2873) }
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2874) 
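/*
 * cache_alloc_pfmemalloc() is the emergency path taken from
 * cache_alloc_refill() below when a normal refill found nothing: a
 * caller whose gfp mask permits dipping into reserves
 * (gfp_pfmemalloc_allowed()) pulls a single object straight from the
 * node's slab lists, bypassing the per-CPU array_cache entirely.
 */
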
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2875) /*
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2876)  * Slab list should be fixed up by fixup_slab_list() for an existing slab
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2877)  * or by cache_grow_end() for a new slab.
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2878)  */
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2879) static __always_inline int alloc_block(struct kmem_cache *cachep,
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2880) 		struct array_cache *ac, struct page *page, int batchcount)
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2881) {
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2882) 	/*
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2883) 	 * There must be at least one object available for
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2884) 	 * allocation.
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2885) 	 */
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2886) 	BUG_ON(page->active >= cachep->num);
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2887) 
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2888) 	while (page->active < cachep->num && batchcount--) {
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2889) 		STATS_INC_ALLOCED(cachep);
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2890) 		STATS_INC_ACTIVE(cachep);
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2891) 		STATS_SET_HIGH(cachep);
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2892) 
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2893) 		ac->entry[ac->avail++] = slab_get_obj(cachep, page);
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2894) 	}
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2895) 
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2896) 	return batchcount;
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2897) }
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2898) 
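/*
 * A worked example of the refill contract (illustrative numbers):
 * with cachep->num == 4, page->active == 1 and batchcount == 8,
 * alloc_block() moves the 3 remaining objects of this slab into
 * ac->entry[] and returns 5, so the caller keeps walking the slab
 * lists to fill the rest of the batch.
 */
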
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2899) static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2900) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2901) 	int batchcount;
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 2902) 	struct kmem_cache_node *n;
801faf0db8947 (Joonsoo Kim                    2016-05-19 17:10:31 -0700 2903) 	struct array_cache *ac, *shared;
1ca4cb2418c04 (Pekka Enberg                   2006-10-06 00:43:52 -0700 2904) 	int node;
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2905) 	void *list = NULL;
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2906) 	struct page *page;
1ca4cb2418c04 (Pekka Enberg                   2006-10-06 00:43:52 -0700 2907) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2908) 	check_irq_off();
7d6e6d09de82c (Lee Schermerhorn               2010-05-26 14:45:03 -0700 2909) 	node = numa_mem_id();
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2910) 
9a2dba4b4912b (Pekka Enberg                   2006-02-01 03:05:49 -0800 2911) 	ac = cpu_cache_get(cachep);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2912) 	batchcount = ac->batchcount;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2913) 	if (!ac->touched && batchcount > BATCHREFILL_LIMIT) {
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 2914) 		/*
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 2915) 		 * If there was little recent activity on this cache, then
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 2916) 		 * perform only a partial refill.  Otherwise we could generate
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 2917) 		 * refill bouncing.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2918) 		 */
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2919) 		batchcount = BATCHREFILL_LIMIT;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2920) 	}
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 2921) 	n = get_node(cachep, node);
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 2922) 
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 2923) 	BUG_ON(ac->avail > 0 || !n);
801faf0db8947 (Joonsoo Kim                    2016-05-19 17:10:31 -0700 2924) 	shared = READ_ONCE(n->shared);
801faf0db8947 (Joonsoo Kim                    2016-05-19 17:10:31 -0700 2925) 	if (!n->free_objects && (!shared || !shared->avail))
801faf0db8947 (Joonsoo Kim                    2016-05-19 17:10:31 -0700 2926) 		goto direct_grow;
801faf0db8947 (Joonsoo Kim                    2016-05-19 17:10:31 -0700 2927) 
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 2928) 	spin_lock(&n->list_lock);
801faf0db8947 (Joonsoo Kim                    2016-05-19 17:10:31 -0700 2929) 	shared = READ_ONCE(n->shared);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2930) 
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800 2931) 	/* See if we can refill from the shared array */
801faf0db8947 (Joonsoo Kim                    2016-05-19 17:10:31 -0700 2932) 	if (shared && transfer_objects(ac, shared, batchcount)) {
801faf0db8947 (Joonsoo Kim                    2016-05-19 17:10:31 -0700 2933) 		shared->touched = 1;
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800 2934) 		goto alloc_done;
44b57f1cc72a4 (Nicholas Piggin                2010-01-27 22:27:40 +1100 2935) 	}
3ded175a4b7a4 (Christoph Lameter              2006-03-25 03:06:44 -0800 2936) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2937) 	while (batchcount > 0) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2938) 		/* Get slab that allocation is to come from. */
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2939) 		page = get_first_slab(n, false);
7aa0d22785dee (Geliang Tang                   2016-01-14 15:18:02 -0800 2940) 		if (!page)
7aa0d22785dee (Geliang Tang                   2016-01-14 15:18:02 -0800 2941) 			goto must_grow;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2942) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2943) 		check_spinlock_acquired(cachep);
714b8171af9c9 (Pekka Enberg                   2007-05-06 14:49:03 -0700 2944) 
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2945) 		batchcount = alloc_block(cachep, ac, page, batchcount);
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2946) 		fixup_slab_list(cachep, n, page, &list);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2947) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2948) 
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 2949) must_grow:
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 2950) 	n->free_objects -= ac->avail;
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 2951) alloc_done:
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 2952) 	spin_unlock(&n->list_lock);
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 2953) 	fixup_objfreelist_debug(cachep, &list);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2954) 
801faf0db8947 (Joonsoo Kim                    2016-05-19 17:10:31 -0700 2955) direct_grow:
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2956) 	if (unlikely(!ac->avail)) {
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2957) 		/* Check if we can use obj in pfmemalloc slab */
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2958) 		if (sk_memalloc_socks()) {
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2959) 			void *obj = cache_alloc_pfmemalloc(cachep, n, flags);
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2960) 
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2961) 			if (obj)
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2962) 				return obj;
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2963) 		}
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2964) 
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2965) 		page = cache_grow_begin(cachep, gfp_exact_node(flags), node);
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 2966) 
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2967) 		/*
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2968) 		 * cache_grow_begin() can reenable interrupts,
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2969) 		 * then ac could change.
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 2970) 		 */
9a2dba4b4912b (Pekka Enberg                   2006-02-01 03:05:49 -0800 2971) 		ac = cpu_cache_get(cachep);
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2972) 		if (!ac->avail && page)
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2973) 			alloc_block(cachep, ac, page, batchcount);
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2974) 		cache_grow_end(cachep, page);
072bb0aa5e062 (Mel Gorman                     2012-07-31 16:43:58 -0700 2975) 
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 2976) 		if (!ac->avail)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2977) 			return NULL;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2978) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2979) 	ac->touched = 1;
072bb0aa5e062 (Mel Gorman                     2012-07-31 16:43:58 -0700 2980) 
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 2981) 	return ac->entry[--ac->avail];
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2982) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2983) 
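/*
 * Refill order used above, cheapest source first:
 *
 *	1. transfer_objects() from the per-node shared array, if any;
 *	2. alloc_block() from slabs on n->slabs_partial / n->slabs_free;
 *	3. if still empty, an object from a pfmemalloc slab for callers
 *	   allowed to use reserves (cache_alloc_pfmemalloc());
 *	4. cache_grow_begin()/cache_grow_end() to get a fresh slab from
 *	   the page allocator; interrupts may be re-enabled there, so ac
 *	   is re-read before the new objects are batched in.
 */
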
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 2984) static inline void cache_alloc_debugcheck_before(struct kmem_cache *cachep,
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 2985) 						gfp_t flags)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2986) {
d0164adc89f6b (Mel Gorman                     2015-11-06 16:28:21 -0800 2987) 	might_sleep_if(gfpflags_allow_blocking(flags));
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2988) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2989) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2990) #if DEBUG
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 2991) static void *cache_alloc_debugcheck_after(struct kmem_cache *cachep,
7c0cb9c64f83d (Ezequiel Garcia                2012-09-08 17:47:55 -0300 2992) 				gfp_t flags, void *objp, unsigned long caller)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2993) {
128227e7fe408 (Matthew Wilcox                 2018-06-07 17:05:13 -0700 2994) 	WARN_ON_ONCE(cachep->ctor && (flags & __GFP_ZERO));
df3ae2c9941d3 (Marco Elver                    2021-03-12 21:07:53 -0800 2995) 	if (!objp || is_kfence_address(objp))
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2996) 		return objp;
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 2997) 	if (cachep->flags & SLAB_POISON) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 2998) 		check_poison_obj(cachep, objp);
80552f0f7aebd (Qian Cai                       2019-04-16 10:22:57 -0400 2999) 		slab_kernel_map(cachep, objp, 1);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3000) 		poison_obj(cachep, objp, POISON_INUSE);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3001) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3002) 	if (cachep->flags & SLAB_STORE_USER)
7c0cb9c64f83d (Ezequiel Garcia                2012-09-08 17:47:55 -0300 3003) 		*dbg_userword(cachep, objp) = (void *)caller;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3004) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3005) 	if (cachep->flags & SLAB_RED_ZONE) {
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 3006) 		if (*dbg_redzone1(cachep, objp) != RED_INACTIVE ||
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 3007) 				*dbg_redzone2(cachep, objp) != RED_INACTIVE) {
756a025f00091 (Joe Perches                    2016-03-17 14:19:47 -0700 3008) 			slab_error(cachep, "double free, or memory outside object was overwritten");
85c3e4a5a185f (Geert Uytterhoeven             2017-12-14 15:32:58 -0800 3009) 			pr_err("%px: redzone 1:0x%llx, redzone 2:0x%llx\n",
1170532bb49f9 (Joe Perches                    2016-03-17 14:19:50 -0700 3010) 			       objp, *dbg_redzone1(cachep, objp),
1170532bb49f9 (Joe Perches                    2016-03-17 14:19:50 -0700 3011) 			       *dbg_redzone2(cachep, objp));
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3012) 		}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3013) 		*dbg_redzone1(cachep, objp) = RED_ACTIVE;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3014) 		*dbg_redzone2(cachep, objp) = RED_ACTIVE;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3015) 	}
0378730142037 (Joonsoo Kim                    2014-06-23 13:22:06 -0700 3016) 
3dafccf227514 (Manfred Spraul                 2006-02-01 03:05:42 -0800 3017) 	objp += obj_offset(cachep);
4f104934591ed (Christoph Lameter              2007-05-06 14:50:17 -0700 3018) 	if (cachep->ctor && cachep->flags & SLAB_POISON)
51cc50685a427 (Alexey Dobriyan                2008-07-25 19:45:34 -0700 3019) 		cachep->ctor(objp);
7ea466f2256b0 (Tetsuo Handa                   2011-07-21 09:42:45 +0900 3020) 	if (ARCH_SLAB_MINALIGN &&
7ea466f2256b0 (Tetsuo Handa                   2011-07-21 09:42:45 +0900 3021) 	    ((unsigned long)objp & (ARCH_SLAB_MINALIGN-1))) {
85c3e4a5a185f (Geert Uytterhoeven             2017-12-14 15:32:58 -0800 3022) 		pr_err("0x%px: not aligned to ARCH_SLAB_MINALIGN=%d\n",
c225150b86fef (Hugh Dickins                   2011-07-11 13:35:08 -0700 3023) 		       objp, (int)ARCH_SLAB_MINALIGN);
a44b56d354b49 (Kevin Hilman                   2006-12-06 20:32:11 -0800 3024) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3025) 	return objp;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3026) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3027) #else
0b41163407e2f (Zhiyuan Dai                    2021-02-24 12:01:01 -0800 3028) #define cache_alloc_debugcheck_after(a, b, objp, d) (objp)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3029) #endif
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3030) 
343e0d7a93951 (Pekka Enberg                   2006-02-01 03:05:50 -0800 3031) static inline void *____cache_alloc(struct kmem_cache *cachep, gfp_t flags)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3032) {
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 3033) 	void *objp;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3034) 	struct array_cache *ac;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3035) 
5c382300876f2 (Alok N Kataria                 2005-09-27 21:45:46 -0700 3036) 	check_irq_off();
8a8b6502fb669 (Akinobu Mita                   2006-12-08 02:39:44 -0800 3037) 
9a2dba4b4912b (Pekka Enberg                   2006-02-01 03:05:49 -0800 3038) 	ac = cpu_cache_get(cachep);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3039) 	if (likely(ac->avail)) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3040) 		ac->touched = 1;
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 3041) 		objp = ac->entry[--ac->avail];
072bb0aa5e062 (Mel Gorman                     2012-07-31 16:43:58 -0700 3042) 
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 3043) 		STATS_INC_ALLOCHIT(cachep);
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 3044) 		goto out;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3045) 	}
072bb0aa5e062 (Mel Gorman                     2012-07-31 16:43:58 -0700 3046) 
072bb0aa5e062 (Mel Gorman                     2012-07-31 16:43:58 -0700 3047) 	STATS_INC_ALLOCMISS(cachep);
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 3048) 	objp = cache_alloc_refill(cachep, flags);
072bb0aa5e062 (Mel Gorman                     2012-07-31 16:43:58 -0700 3049) 	/*
072bb0aa5e062 (Mel Gorman                     2012-07-31 16:43:58 -0700 3050) 	 * the 'ac' may be updated by cache_alloc_refill(),
072bb0aa5e062 (Mel Gorman                     2012-07-31 16:43:58 -0700 3051) 	 * and kmemleak_erase() requires its correct value.
072bb0aa5e062 (Mel Gorman                     2012-07-31 16:43:58 -0700 3052) 	 */
072bb0aa5e062 (Mel Gorman                     2012-07-31 16:43:58 -0700 3053) 	ac = cpu_cache_get(cachep);
072bb0aa5e062 (Mel Gorman                     2012-07-31 16:43:58 -0700 3054) 
072bb0aa5e062 (Mel Gorman                     2012-07-31 16:43:58 -0700 3055) out:
d5cff635290ae (Catalin Marinas                2009-06-11 13:22:40 +0100 3056) 	/*
d5cff635290ae (Catalin Marinas                2009-06-11 13:22:40 +0100 3057) 	 * To avoid a false negative, if an object that is in one of the
d5cff635290ae (Catalin Marinas                2009-06-11 13:22:40 +0100 3058) 	 * per-CPU caches is leaked, we need to make sure kmemleak doesn't
d5cff635290ae (Catalin Marinas                2009-06-11 13:22:40 +0100 3059) 	 * treat the array pointers as a reference to the object.
d5cff635290ae (Catalin Marinas                2009-06-11 13:22:40 +0100 3060) 	 */
f3d8b53a3abbf (J. R. Okajima                  2009-12-02 16:55:49 +0900 3061) 	if (objp)
f3d8b53a3abbf (J. R. Okajima                  2009-12-02 16:55:49 +0900 3062) 		kmemleak_erase(&ac->entry[ac->avail]);
5c382300876f2 (Alok N Kataria                 2005-09-27 21:45:46 -0700 3063) 	return objp;
5c382300876f2 (Alok N Kataria                 2005-09-27 21:45:46 -0700 3064) }
5c382300876f2 (Alok N Kataria                 2005-09-27 21:45:46 -0700 3065) 
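/*
 * The fast path above is a LIFO pop from the per-CPU array with only
 * interrupts disabled, roughly:
 *
 *	ac = cpu_cache_get(cachep);
 *	objp = ac->entry[--ac->avail];
 *
 * No spinlock is taken unless the array is empty and
 * cache_alloc_refill() must visit the per-node lists.  The
 * kmemleak_erase() call wipes the now-stale array slot so kmemleak
 * does not mistake it for a live reference to the returned object.
 */
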
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 3066) #ifdef CONFIG_NUMA
c61afb181c649 (Paul Jackson                   2006-03-24 03:16:08 -0800 3067) /*
2ad654bc5e2b2 (Zefan Li                       2014-09-25 09:41:02 +0800 3068)  * Try allocating on another node if PFA_SPREAD_SLAB or a mempolicy is set.
c61afb181c649 (Paul Jackson                   2006-03-24 03:16:08 -0800 3069)  *
c61afb181c649 (Paul Jackson                   2006-03-24 03:16:08 -0800 3070)  * If we are in_interrupt, then process context, including cpusets and
c61afb181c649 (Paul Jackson                   2006-03-24 03:16:08 -0800 3071)  * mempolicy, may not apply and should not be used for allocation policy.
c61afb181c649 (Paul Jackson                   2006-03-24 03:16:08 -0800 3072)  */
c61afb181c649 (Paul Jackson                   2006-03-24 03:16:08 -0800 3073) static void *alternate_node_alloc(struct kmem_cache *cachep, gfp_t flags)
c61afb181c649 (Paul Jackson                   2006-03-24 03:16:08 -0800 3074) {
c61afb181c649 (Paul Jackson                   2006-03-24 03:16:08 -0800 3075) 	int nid_alloc, nid_here;
c61afb181c649 (Paul Jackson                   2006-03-24 03:16:08 -0800 3076) 
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700 3077) 	if (in_interrupt() || (flags & __GFP_THISNODE))
c61afb181c649 (Paul Jackson                   2006-03-24 03:16:08 -0800 3078) 		return NULL;
7d6e6d09de82c (Lee Schermerhorn               2010-05-26 14:45:03 -0700 3079) 	nid_alloc = nid_here = numa_mem_id();
c61afb181c649 (Paul Jackson                   2006-03-24 03:16:08 -0800 3080) 	if (cpuset_do_slab_mem_spread() && (cachep->flags & SLAB_MEM_SPREAD))
6adef3ebe570b (Jack Steiner                   2010-05-26 14:42:49 -0700 3081) 		nid_alloc = cpuset_slab_spread_node();
c61afb181c649 (Paul Jackson                   2006-03-24 03:16:08 -0800 3082) 	else if (current->mempolicy)
2a389610a7331 (David Rientjes                 2014-04-07 15:37:29 -0700 3083) 		nid_alloc = mempolicy_slab_node();
c61afb181c649 (Paul Jackson                   2006-03-24 03:16:08 -0800 3084) 	if (nid_alloc != nid_here)
8b98c1699eba2 (Christoph Hellwig              2006-12-06 20:32:30 -0800 3085) 		return ____cache_alloc_node(cachep, flags, nid_alloc);
c61afb181c649 (Paul Jackson                   2006-03-24 03:16:08 -0800 3086) 	return NULL;
c61afb181c649 (Paul Jackson                   2006-03-24 03:16:08 -0800 3087) }
c61afb181c649 (Paul Jackson                   2006-03-24 03:16:08 -0800 3088) 
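/*
 * Node-selection summary: cpuset slab spreading (when enabled together
 * with SLAB_MEM_SPREAD) takes priority over a task mempolicy, and a
 * NULL return (the chosen node is the local one, or policy does not
 * apply) lets the caller continue with the ordinary local path.
 */
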
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700 3089) /*
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700 3090)  * Fallback function if there was no memory available and no objects on a
3c517a6132098 (Christoph Lameter              2006-12-06 20:33:29 -0800 3091)  * certain node and fallback is permitted. First we scan all the
6a67368c36e2c (Christoph Lameter              2013-01-10 19:14:19 +0000 3092)  * available nodes for available objects. If that fails then we
3c517a6132098 (Christoph Lameter              2006-12-06 20:33:29 -0800 3093)  * perform an allocation without specifying a node. This allows the page
3c517a6132098 (Christoph Lameter              2006-12-06 20:33:29 -0800 3094)  * allocator to do its reclaim / fallback magic. We then insert the
3c517a6132098 (Christoph Lameter              2006-12-06 20:33:29 -0800 3095)  * slab into the proper nodelist and then allocate from it.
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700 3096)  */
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3097) static void *fallback_alloc(struct kmem_cache *cache, gfp_t flags)
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700 3098) {
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3099) 	struct zonelist *zonelist;
dd1a239f6f2d4 (Mel Gorman                     2008-04-28 02:12:17 -0700 3100) 	struct zoneref *z;
54a6eb5c4765a (Mel Gorman                     2008-04-28 02:12:16 -0700 3101) 	struct zone *zone;
97a225e69a1f8 (Joonsoo Kim                    2020-06-03 15:59:01 -0700 3102) 	enum zone_type highest_zoneidx = gfp_zone(flags);
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700 3103) 	void *obj = NULL;
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 3104) 	struct page *page;
3c517a6132098 (Christoph Lameter              2006-12-06 20:33:29 -0800 3105) 	int nid;
cc9a6c8776615 (Mel Gorman                     2012-03-21 16:34:11 -0700 3106) 	unsigned int cpuset_mems_cookie;
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3107) 
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3108) 	if (flags & __GFP_THISNODE)
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3109) 		return NULL;
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3110) 
cc9a6c8776615 (Mel Gorman                     2012-03-21 16:34:11 -0700 3111) retry_cpuset:
d26914d11751b (Mel Gorman                     2014-04-03 14:47:24 -0700 3112) 	cpuset_mems_cookie = read_mems_allowed_begin();
2a389610a7331 (David Rientjes                 2014-04-07 15:37:29 -0700 3113) 	zonelist = node_zonelist(mempolicy_slab_node(), flags);
cc9a6c8776615 (Mel Gorman                     2012-03-21 16:34:11 -0700 3114) 
3c517a6132098 (Christoph Lameter              2006-12-06 20:33:29 -0800 3115) retry:
3c517a6132098 (Christoph Lameter              2006-12-06 20:33:29 -0800 3116) 	/*
3c517a6132098 (Christoph Lameter              2006-12-06 20:33:29 -0800 3117) 	 * Look through allowed nodes for objects available
3c517a6132098 (Christoph Lameter              2006-12-06 20:33:29 -0800 3118) 	 * from existing per node queues.
3c517a6132098 (Christoph Lameter              2006-12-06 20:33:29 -0800 3119) 	 */
97a225e69a1f8 (Joonsoo Kim                    2020-06-03 15:59:01 -0700 3120) 	for_each_zone_zonelist(zone, z, zonelist, highest_zoneidx) {
54a6eb5c4765a (Mel Gorman                     2008-04-28 02:12:16 -0700 3121) 		nid = zone_to_nid(zone);
aedb0eb107961 (Christoph Lameter              2006-10-21 10:24:16 -0700 3122) 
061d7074e1eb4 (Vladimir Davydov               2014-12-12 16:58:25 -0800 3123) 		if (cpuset_zone_allowed(zone, flags) &&
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 3124) 			get_node(cache, nid) &&
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 3125) 			get_node(cache, nid)->free_objects) {
3c517a6132098 (Christoph Lameter              2006-12-06 20:33:29 -0800 3126) 				obj = ____cache_alloc_node(cache,
4167e9b2cf10f (David Rientjes                 2015-04-14 15:46:55 -0700 3127) 					gfp_exact_node(flags), nid);
481c5346d0981 (Christoph Lameter              2008-06-21 16:46:35 -0700 3128) 				if (obj)
481c5346d0981 (Christoph Lameter              2008-06-21 16:46:35 -0700 3129) 					break;
481c5346d0981 (Christoph Lameter              2008-06-21 16:46:35 -0700 3130) 		}
3c517a6132098 (Christoph Lameter              2006-12-06 20:33:29 -0800 3131) 	}
3c517a6132098 (Christoph Lameter              2006-12-06 20:33:29 -0800 3132) 
cfce66047f189 (Christoph Lameter              2007-05-06 14:50:17 -0700 3133) 	if (!obj) {
3c517a6132098 (Christoph Lameter              2006-12-06 20:33:29 -0800 3134) 		/*
3c517a6132098 (Christoph Lameter              2006-12-06 20:33:29 -0800 3135) 		 * This allocation will be performed within the constraints
3c517a6132098 (Christoph Lameter              2006-12-06 20:33:29 -0800 3136) 		 * of the current cpuset / memory policy requirements.
3c517a6132098 (Christoph Lameter              2006-12-06 20:33:29 -0800 3137) 		 * We may trigger various forms of reclaim on the allowed
3c517a6132098 (Christoph Lameter              2006-12-06 20:33:29 -0800 3138) 		 * set and go into memory reserves if necessary.
3c517a6132098 (Christoph Lameter              2006-12-06 20:33:29 -0800 3139) 		 */
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 3140) 		page = cache_grow_begin(cache, flags, numa_mem_id());
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 3141) 		cache_grow_end(cache, page);
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 3142) 		if (page) {
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 3143) 			nid = page_to_nid(page);
511e3a0588122 (Joonsoo Kim                    2016-05-19 17:10:23 -0700 3144) 			obj = ____cache_alloc_node(cache,
511e3a0588122 (Joonsoo Kim                    2016-05-19 17:10:23 -0700 3145) 				gfp_exact_node(flags), nid);
0c3aa83e00a9c (Joonsoo Kim                    2013-10-24 10:07:38 +0900 3146) 
3c517a6132098 (Christoph Lameter              2006-12-06 20:33:29 -0800 3147) 			/*
511e3a0588122 (Joonsoo Kim                    2016-05-19 17:10:23 -0700 3148) 			 * Another processor may allocate the objects in
511e3a0588122 (Joonsoo Kim                    2016-05-19 17:10:23 -0700 3149) 			 * the slab since we are not holding any locks.
3c517a6132098 (Christoph Lameter              2006-12-06 20:33:29 -0800 3150) 			 */
511e3a0588122 (Joonsoo Kim                    2016-05-19 17:10:23 -0700 3151) 			if (!obj)
511e3a0588122 (Joonsoo Kim                    2016-05-19 17:10:23 -0700 3152) 				goto retry;
3c517a6132098 (Christoph Lameter              2006-12-06 20:33:29 -0800 3153) 		}
aedb0eb107961 (Christoph Lameter              2006-10-21 10:24:16 -0700 3154) 	}
cc9a6c8776615 (Mel Gorman                     2012-03-21 16:34:11 -0700 3155) 
d26914d11751b (Mel Gorman                     2014-04-03 14:47:24 -0700 3156) 	if (unlikely(!obj && read_mems_allowed_retry(cpuset_mems_cookie)))
cc9a6c8776615 (Mel Gorman                     2012-03-21 16:34:11 -0700 3157) 		goto retry_cpuset;
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700 3158) 	return obj;
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700 3159) }
765c4507af71c (Christoph Lameter              2006-09-27 01:50:08 -0700 3160) 
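/*
 * fallback_alloc() works in two phases under a mems_allowed seqcount,
 * so a concurrent cpuset update simply triggers a retry:
 *
 *	1. walk the zonelist and take an object from any allowed node
 *	   that already has free_objects cached;
 *	2. otherwise grow a slab with node fallback left to the page
 *	   allocator (reclaim and reserves included) and allocate from
 *	   whichever node the new slab actually landed on, rescanning
 *	   if another CPU drains that slab first.
 */
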
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 3161) /*
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 3162)  * An interface to enable slab creation on nodeid
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3163)  */
8b98c1699eba2 (Christoph Hellwig              2006-12-06 20:32:30 -0800 3164) static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 3165) 				int nodeid)
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 3166) {
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 3167) 	struct page *page;
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 3168) 	struct kmem_cache_node *n;
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 3169) 	void *obj = NULL;
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 3170) 	void *list = NULL;
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 3171) 
7c3fbbdd04a68 (Paul Mackerras                 2014-12-02 15:59:48 -0800 3172) 	VM_BUG_ON(nodeid < 0 || nodeid >= MAX_NUMNODES);
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 3173) 	n = get_node(cachep, nodeid);
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 3174) 	BUG_ON(!n);
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 3175) 
ca3b9b9173531 (Ravikiran G Thirumalai         2006-02-04 23:27:58 -0800 3176) 	check_irq_off();
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 3177) 	spin_lock(&n->list_lock);
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 3178) 	page = get_first_slab(n, false);
7aa0d22785dee (Geliang Tang                   2016-01-14 15:18:02 -0800 3179) 	if (!page)
7aa0d22785dee (Geliang Tang                   2016-01-14 15:18:02 -0800 3180) 		goto must_grow;
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 3181) 
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 3182) 	check_spinlock_acquired_node(cachep, nodeid);
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 3183) 
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 3184) 	STATS_INC_NODEALLOCS(cachep);
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 3185) 	STATS_INC_ACTIVE(cachep);
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 3186) 	STATS_SET_HIGH(cachep);
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 3187) 
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 3188) 	BUG_ON(page->active == cachep->num);
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 3189) 
260b61dd46ed0 (Joonsoo Kim                    2016-03-15 14:54:12 -0700 3190) 	obj = slab_get_obj(cachep, page);
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 3191) 	n->free_objects--;
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 3192) 
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 3193) 	fixup_slab_list(cachep, n, page, &list);
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 3194) 
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 3195) 	spin_unlock(&n->list_lock);
b03a017bebc40 (Joonsoo Kim                    2016-03-15 14:54:50 -0700 3196) 	fixup_objfreelist_debug(cachep, &list);
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 3197) 	return obj;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 3198) 
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 3199) must_grow:
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 3200) 	spin_unlock(&n->list_lock);
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 3201) 	page = cache_grow_begin(cachep, gfp_exact_node(flags), nodeid);
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 3202) 	if (page) {
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 3203) 		/* This slab isn't counted yet so don't update free_objects */
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 3204) 		obj = slab_get_obj(cachep, page);
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 3205) 	}
76b342bdc71ba (Joonsoo Kim                    2016-05-19 17:10:26 -0700 3206) 	cache_grow_end(cachep, page);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3207) 
213b46958c65c (Joonsoo Kim                    2016-05-19 17:10:29 -0700 3208) 	return obj ? obj : fallback_alloc(cachep, flags);
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 3209) }
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3210) 
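/*
 * Note the asymmetry with ____cache_alloc(): this node-targeted path
 * bypasses the per-CPU array_cache and takes objects directly from
 * the target node's slab lists under n->list_lock, resorting to
 * fallback_alloc() only when the node cannot supply or grow a slab.
 */
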
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3211) static __always_inline void *
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3212) slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_size,
7c0cb9c64f83d (Ezequiel Garcia                2012-09-08 17:47:55 -0300 3213) 		   unsigned long caller)
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3214) {
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3215) 	unsigned long save_flags;
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3216) 	void *ptr;
7d6e6d09de82c (Lee Schermerhorn               2010-05-26 14:45:03 -0700 3217) 	int slab_node = numa_mem_id();
964d4bd370d55 (Roman Gushchin                 2020-08-06 23:20:56 -0700 3218) 	struct obj_cgroup *objcg = NULL;
da844b7872451 (Andrey Konovalov               2021-04-29 23:00:06 -0700 3219) 	bool init = false;
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3220) 
dcce284a25937 (Benjamin Herrenschmidt         2009-06-18 13:24:12 +1000 3221) 	flags &= gfp_allowed_mask;
964d4bd370d55 (Roman Gushchin                 2020-08-06 23:20:56 -0700 3222) 	cachep = slab_pre_alloc_hook(cachep, &objcg, 1, flags);
011eceaf0ad54 (Jesper Dangaard Brouer         2016-03-15 14:53:41 -0700 3223) 	if (unlikely(!cachep))
824ebef122153 (Akinobu Mita                   2007-05-06 14:49:58 -0700 3224) 		return NULL;
824ebef122153 (Akinobu Mita                   2007-05-06 14:49:58 -0700 3225) 
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3226) 	ptr = kfence_alloc(cachep, orig_size, flags);
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3227) 	if (unlikely(ptr))
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3228) 		goto out_hooks;
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3229) 
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3230) 	cache_alloc_debugcheck_before(cachep, flags);
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3231) 	local_irq_save(save_flags);
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3232) 
eacbbae385bf4 (Andrew Morton                  2011-07-28 13:59:49 -0700 3233) 	if (nodeid == NUMA_NO_NODE)
7d6e6d09de82c (Lee Schermerhorn               2010-05-26 14:45:03 -0700 3234) 		nodeid = slab_node;
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3235) 
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 3236) 	if (unlikely(!get_node(cachep, nodeid))) {
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3237) 		/* Node not bootstrapped yet */
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3238) 		ptr = fallback_alloc(cachep, flags);
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3239) 		goto out;
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3240) 	}
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3241) 
7d6e6d09de82c (Lee Schermerhorn               2010-05-26 14:45:03 -0700 3242) 	if (nodeid == slab_node) {
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3243) 		/*
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3244) 		 * Use the locally cached objects if possible.
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3245) 		 * However ____cache_alloc does not allow fallback
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3246) 		 * to other nodes. It may fail while we still have
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3247) 		 * objects on other nodes available.
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3248) 		 */
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3249) 		ptr = ____cache_alloc(cachep, flags);
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3250) 		if (ptr)
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3251) 			goto out;
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3252) 	}
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3253) 	/* ____cache_alloc_node can fall back to other nodes */
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3254) 	ptr = ____cache_alloc_node(cachep, flags, nodeid);
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3255)   out:
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3256) 	local_irq_restore(save_flags);
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3257) 	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
da844b7872451 (Andrey Konovalov               2021-04-29 23:00:06 -0700 3258) 	init = slab_want_init_on_alloc(flags, cachep);
d07dbea46405b (Christoph Lameter              2007-07-17 04:03:23 -0700 3259) 
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3260) out_hooks:
da844b7872451 (Andrey Konovalov               2021-04-29 23:00:06 -0700 3261) 	slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr, init);
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3262) 	return ptr;
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3263) }
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3264) 
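/*
 * slab_alloc_node() resolves the target in this order: a KFENCE
 * allocation may short-circuit everything; NUMA_NO_NODE maps to the
 * local memory node; a node with no bootstrapped kmem_cache_node goes
 * straight to fallback_alloc(); the local node tries the per-CPU fast
 * path first; only then is ____cache_alloc_node() used, which can
 * itself fall back to other nodes.
 */
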
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3265) static __always_inline void *
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3266) __do_cache_alloc(struct kmem_cache *cache, gfp_t flags)
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3267) {
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3268) 	void *objp;
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3269) 
2ad654bc5e2b2 (Zefan Li                       2014-09-25 09:41:02 +0800 3270) 	if (current->mempolicy || cpuset_do_slab_mem_spread()) {
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3271) 		objp = alternate_node_alloc(cache, flags);
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3272) 		if (objp)
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3273) 			goto out;
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3274) 	}
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3275) 	objp = ____cache_alloc(cache, flags);
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3276) 
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3277) 	/*
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3278) 	 * We may just have run out of memory on the local node.
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3279) 	 * ____cache_alloc_node() knows how to locate memory on other nodes
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3280) 	 */
7d6e6d09de82c (Lee Schermerhorn               2010-05-26 14:45:03 -0700 3281) 	if (!objp)
7d6e6d09de82c (Lee Schermerhorn               2010-05-26 14:45:03 -0700 3282) 		objp = ____cache_alloc_node(cache, flags, numa_mem_id());
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3283) 
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3284)   out:
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3285) 	return objp;
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3286) }
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3287) #else
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3288) 
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3289) static __always_inline void *
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3290) __do_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3291) {
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3292) 	return ____cache_alloc(cachep, flags);
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3293) }
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3294) 
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3295) #endif /* CONFIG_NUMA */
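/*
 * Summary of the NUMA allocation order implemented above: when a task
 * mempolicy or cpuset slab spreading is in effect, try
 * alternate_node_alloc() first; otherwise (or if that returns nothing)
 * take the fast path through ____cache_alloc(); if the local node is
 * out of memory, fall back to ____cache_alloc_node(), which can search
 * other nodes.
 */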
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3296) 
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3297) static __always_inline void *
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3298) slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned long caller)
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3299) {
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3300) 	unsigned long save_flags;
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3301) 	void *objp;
964d4bd370d55 (Roman Gushchin                 2020-08-06 23:20:56 -0700 3302) 	struct obj_cgroup *objcg = NULL;
da844b7872451 (Andrey Konovalov               2021-04-29 23:00:06 -0700 3303) 	bool init = false;
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3304) 
dcce284a25937 (Benjamin Herrenschmidt         2009-06-18 13:24:12 +1000 3305) 	flags &= gfp_allowed_mask;
964d4bd370d55 (Roman Gushchin                 2020-08-06 23:20:56 -0700 3306) 	cachep = slab_pre_alloc_hook(cachep, &objcg, 1, flags);
011eceaf0ad54 (Jesper Dangaard Brouer         2016-03-15 14:53:41 -0700 3307) 	if (unlikely(!cachep))
824ebef122153 (Akinobu Mita                   2007-05-06 14:49:58 -0700 3308) 		return NULL;
824ebef122153 (Akinobu Mita                   2007-05-06 14:49:58 -0700 3309) 
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3310) 	objp = kfence_alloc(cachep, orig_size, flags);
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3311) 	if (unlikely(objp))
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3312) 		goto out;
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3313) 
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3314) 	cache_alloc_debugcheck_before(cachep, flags);
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3315) 	local_irq_save(save_flags);
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3316) 	objp = __do_cache_alloc(cachep, flags);
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3317) 	local_irq_restore(save_flags);
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3318) 	objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3319) 	prefetchw(objp);
da844b7872451 (Andrey Konovalov               2021-04-29 23:00:06 -0700 3320) 	init = slab_want_init_on_alloc(flags, cachep);
d07dbea46405b (Christoph Lameter              2007-07-17 04:03:23 -0700 3321) 
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3322) out:
da844b7872451 (Andrey Konovalov               2021-04-29 23:00:06 -0700 3323) 	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init);
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3324) 	return objp;
8c8cc2c10c219 (Pekka Enberg                   2007-02-10 01:42:53 -0800 3325) }
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 3326) 
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 3327) /*
5f0985bb1123b (Jianyu Zhan                    2014-03-30 17:02:20 +0800 3328)  * Caller needs to acquire the correct kmem_cache_node's list_lock.
97654dfa20caa (Joonsoo Kim                    2014-08-06 16:04:25 -0700 3329)  * @list: list of detached free slabs, to be freed by the caller
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 3330)  */
97654dfa20caa (Joonsoo Kim                    2014-08-06 16:04:25 -0700 3331) static void free_block(struct kmem_cache *cachep, void **objpp,
97654dfa20caa (Joonsoo Kim                    2014-08-06 16:04:25 -0700 3332) 			int nr_objects, int node, struct list_head *list)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3333) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3334) 	int i;
25c063fbd5512 (Joonsoo Kim                    2014-08-06 16:04:22 -0700 3335) 	struct kmem_cache_node *n = get_node(cachep, node);
6052b7880a955 (Joonsoo Kim                    2016-05-19 17:10:17 -0700 3336) 	struct page *page;
6052b7880a955 (Joonsoo Kim                    2016-05-19 17:10:17 -0700 3337) 
6052b7880a955 (Joonsoo Kim                    2016-05-19 17:10:17 -0700 3338) 	n->free_objects += nr_objects;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3339) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3340) 	for (i = 0; i < nr_objects; i++) {
072bb0aa5e062 (Mel Gorman                     2012-07-31 16:43:58 -0700 3341) 		void *objp;
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 3342) 		struct page *page;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3343) 
072bb0aa5e062 (Mel Gorman                     2012-07-31 16:43:58 -0700 3344) 		objp = objpp[i];
072bb0aa5e062 (Mel Gorman                     2012-07-31 16:43:58 -0700 3345) 
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 3346) 		page = virt_to_head_page(objp);
16cb0ec75b346 (Tobin C. Harding               2019-05-13 17:16:15 -0700 3347) 		list_del(&page->slab_list);
ff69416e6323f (Christoph Lameter              2005-09-22 21:44:02 -0700 3348) 		check_spinlock_acquired_node(cachep, node);
260b61dd46ed0 (Joonsoo Kim                    2016-03-15 14:54:12 -0700 3349) 		slab_put_obj(cachep, page, objp);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3350) 		STATS_DEC_ACTIVE(cachep);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3351) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3352) 		/* fixup slab chains */
f728b0a5d72ae (Greg Thelen                    2016-12-12 16:41:41 -0800 3353) 		if (page->active == 0) {
16cb0ec75b346 (Tobin C. Harding               2019-05-13 17:16:15 -0700 3354) 			list_add(&page->slab_list, &n->slabs_free);
f728b0a5d72ae (Greg Thelen                    2016-12-12 16:41:41 -0800 3355) 			n->free_slabs++;
f728b0a5d72ae (Greg Thelen                    2016-12-12 16:41:41 -0800 3356) 		} else {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3357) 			/* Unconditionally move a slab to the end of the
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3358) 			 * partial list on free - maximum time for the
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3359) 			 * other objects to be freed, too.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3360) 			 */
16cb0ec75b346 (Tobin C. Harding               2019-05-13 17:16:15 -0700 3361) 			list_add_tail(&page->slab_list, &n->slabs_partial);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3362) 		}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3363) 	}
6052b7880a955 (Joonsoo Kim                    2016-05-19 17:10:17 -0700 3364) 
6052b7880a955 (Joonsoo Kim                    2016-05-19 17:10:17 -0700 3365) 	while (n->free_objects > n->free_limit && !list_empty(&n->slabs_free)) {
6052b7880a955 (Joonsoo Kim                    2016-05-19 17:10:17 -0700 3366) 		n->free_objects -= cachep->num;
6052b7880a955 (Joonsoo Kim                    2016-05-19 17:10:17 -0700 3367) 
16cb0ec75b346 (Tobin C. Harding               2019-05-13 17:16:15 -0700 3368) 		page = list_last_entry(&n->slabs_free, struct page, slab_list);
16cb0ec75b346 (Tobin C. Harding               2019-05-13 17:16:15 -0700 3369) 		list_move(&page->slab_list, list);
f728b0a5d72ae (Greg Thelen                    2016-12-12 16:41:41 -0800 3370) 		n->free_slabs--;
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 3371) 		n->total_slabs--;
6052b7880a955 (Joonsoo Kim                    2016-05-19 17:10:17 -0700 3372) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3373) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3374) 
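/*
 * Flush ac->batchcount objects out of a per-cpu array_cache: if the
 * node's shared array has room, dump up to a batch in there; otherwise
 * hand the batch back to the slabs with free_block(). The surviving
 * entries are then slid to the front of the per-cpu array.
 */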
343e0d7a93951 (Pekka Enberg                   2006-02-01 03:05:50 -0800 3375) static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3376) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3377) 	int batchcount;
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 3378) 	struct kmem_cache_node *n;
7d6e6d09de82c (Lee Schermerhorn               2010-05-26 14:45:03 -0700 3379) 	int node = numa_mem_id();
97654dfa20caa (Joonsoo Kim                    2014-08-06 16:04:25 -0700 3380) 	LIST_HEAD(list);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3381) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3382) 	batchcount = ac->batchcount;
260b61dd46ed0 (Joonsoo Kim                    2016-03-15 14:54:12 -0700 3383) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3384) 	check_irq_off();
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 3385) 	n = get_node(cachep, node);
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 3386) 	spin_lock(&n->list_lock);
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 3387) 	if (n->shared) {
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 3388) 		struct array_cache *shared_array = n->shared;
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 3389) 		int max = shared_array->limit - shared_array->avail;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3390) 		if (max) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3391) 			if (batchcount > max)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3392) 				batchcount = max;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 3393) 			memcpy(&(shared_array->entry[shared_array->avail]),
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 3394) 			       ac->entry, sizeof(void *) * batchcount);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3395) 			shared_array->avail += batchcount;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3396) 			goto free_done;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3397) 		}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3398) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3399) 
97654dfa20caa (Joonsoo Kim                    2014-08-06 16:04:25 -0700 3400) 	free_block(cachep, ac->entry, batchcount, node, &list);
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 3401) free_done:
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3402) #if STATS
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3403) 	{
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3404) 		int i = 0;
73c0219d8eca4 (Geliang Tang                   2016-01-14 15:17:59 -0800 3405) 		struct page *page;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3406) 
16cb0ec75b346 (Tobin C. Harding               2019-05-13 17:16:15 -0700 3407) 		list_for_each_entry(page, &n->slabs_free, slab_list) {
8456a648cf44f (Joonsoo Kim                    2013-10-24 10:07:49 +0900 3408) 			BUG_ON(page->active);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3409) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3410) 			i++;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3411) 		}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3412) 		STATS_SET_FREEABLE(cachep, i);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3413) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3414) #endif
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 3415) 	spin_unlock(&n->list_lock);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3416) 	ac->avail -= batchcount;
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 3417) 	memmove(ac->entry, &(ac->entry[batchcount]), sizeof(void *)*ac->avail);
678ff6a7afccc (Shakeel Butt                   2020-09-26 07:13:41 -0700 3418) 	slabs_destroy(cachep, &list);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3419) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3420) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3421) /*
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 3422)  * Release an obj back to its cache. If the obj has a constructed state, it must
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 3423)  * be in this state _before_ it is released.  Called with interrupts disabled.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3424)  */
ee3ce779b58c3 (Dmitry Vyukov                  2018-02-06 15:36:27 -0800 3425) static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp,
ee3ce779b58c3 (Dmitry Vyukov                  2018-02-06 15:36:27 -0800 3426) 					 unsigned long caller)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3427) {
d57a964e09c22 (Andrey Konovalov               2021-04-29 23:00:09 -0700 3428) 	bool init;
d57a964e09c22 (Andrey Konovalov               2021-04-29 23:00:09 -0700 3429) 
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3430) 	if (is_kfence_address(objp)) {
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3431) 		kmemleak_free_recursive(objp, cachep->flags);
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3432) 		__kfence_free(objp);
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3433) 		return;
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3434) 	}
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3435) 
d57a964e09c22 (Andrey Konovalov               2021-04-29 23:00:09 -0700 3436) 	/*
d57a964e09c22 (Andrey Konovalov               2021-04-29 23:00:09 -0700 3437) 	 * As memory initialization might be integrated into KASAN,
d57a964e09c22 (Andrey Konovalov               2021-04-29 23:00:09 -0700 3438) 	 * kasan_slab_free and initialization memset must be
d57a964e09c22 (Andrey Konovalov               2021-04-29 23:00:09 -0700 3439) 	 * kept together to avoid discrepancies in behavior.
d57a964e09c22 (Andrey Konovalov               2021-04-29 23:00:09 -0700 3440) 	 */
d57a964e09c22 (Andrey Konovalov               2021-04-29 23:00:09 -0700 3441) 	init = slab_want_init_on_free(cachep);
d57a964e09c22 (Andrey Konovalov               2021-04-29 23:00:09 -0700 3442) 	if (init && !kasan_has_integrated_init())
a32d654db5438 (Alexander Popov                2020-12-14 19:04:33 -0800 3443) 		memset(objp, 0, cachep->object_size);
d57a964e09c22 (Andrey Konovalov               2021-04-29 23:00:09 -0700 3444) 	/* KASAN might put objp into memory quarantine, delaying its reuse. */
d57a964e09c22 (Andrey Konovalov               2021-04-29 23:00:09 -0700 3445) 	if (kasan_slab_free(cachep, objp, init))
55834c59098d0 (Alexander Potapenko            2016-05-20 16:59:11 -0700 3446) 		return;
55834c59098d0 (Alexander Potapenko            2016-05-20 16:59:11 -0700 3447) 
cfbe1636c3585 (Marco Elver                    2020-08-06 23:19:12 -0700 3448) 	/* Use KCSAN to help debug racy use-after-free. */
cfbe1636c3585 (Marco Elver                    2020-08-06 23:19:12 -0700 3449) 	if (!(cachep->flags & SLAB_TYPESAFE_BY_RCU))
cfbe1636c3585 (Marco Elver                    2020-08-06 23:19:12 -0700 3450) 		__kcsan_check_access(objp, cachep->object_size,
cfbe1636c3585 (Marco Elver                    2020-08-06 23:19:12 -0700 3451) 				     KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT);
cfbe1636c3585 (Marco Elver                    2020-08-06 23:19:12 -0700 3452) 
55834c59098d0 (Alexander Potapenko            2016-05-20 16:59:11 -0700 3453) 	___cache_free(cachep, objp, caller);
55834c59098d0 (Alexander Potapenko            2016-05-20 16:59:11 -0700 3454) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3455) 
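/*
 * The free path proper: after the debug and memcg hooks, objects that
 * belong to a remote node are handed back via cache_free_alien(); local
 * objects are stashed in the per-cpu array by __free_one(), with
 * cache_flusharray() making room first whenever the array is full.
 */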
55834c59098d0 (Alexander Potapenko            2016-05-20 16:59:11 -0700 3456) void ___cache_free(struct kmem_cache *cachep, void *objp,
55834c59098d0 (Alexander Potapenko            2016-05-20 16:59:11 -0700 3457) 		unsigned long caller)
55834c59098d0 (Alexander Potapenko            2016-05-20 16:59:11 -0700 3458) {
55834c59098d0 (Alexander Potapenko            2016-05-20 16:59:11 -0700 3459) 	struct array_cache *ac = cpu_cache_get(cachep);
7ed2f9e663854 (Alexander Potapenko            2016-03-25 14:21:59 -0700 3460) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3461) 	check_irq_off();
d5cff635290ae (Catalin Marinas                2009-06-11 13:22:40 +0100 3462) 	kmemleak_free_recursive(objp, cachep->flags);
a947eb95ea031 (Suleiman Souhlal               2011-06-02 00:16:42 -0700 3463) 	objp = cache_free_debugcheck(cachep, objp, caller);
d1b2cf6cb84a9 (Bharata B Rao                  2020-10-13 16:53:09 -0700 3464) 	memcg_slab_free_hook(cachep, &objp, 1);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3465) 
1807a1aaf5f2a (Siddha, Suresh B               2007-08-22 14:01:49 -0700 3466) 	/*
1807a1aaf5f2a (Siddha, Suresh B               2007-08-22 14:01:49 -0700 3467) 	 * Skip calling cache_free_alien() when the platform is not NUMA.
1807a1aaf5f2a (Siddha, Suresh B               2007-08-22 14:01:49 -0700 3468) 	 * This avoids cache misses that happen while accessing slabp (which
1807a1aaf5f2a (Siddha, Suresh B               2007-08-22 14:01:49 -0700 3469) 	 * is a per-page memory reference) to get nodeid. Instead, use a
1807a1aaf5f2a (Siddha, Suresh B               2007-08-22 14:01:49 -0700 3470) 	 * global variable to skip the call, which is most likely to be
1807a1aaf5f2a (Siddha, Suresh B               2007-08-22 14:01:49 -0700 3471) 	 * present in the cache.
1807a1aaf5f2a (Siddha, Suresh B               2007-08-22 14:01:49 -0700 3472) 	 */
b6e68bc1baed9 (Mel Gorman                     2009-06-16 15:32:16 -0700 3473) 	if (nr_online_nodes > 1 && cache_free_alien(cachep, objp))
729bd0b74ce9a (Pekka Enberg                   2006-06-23 02:03:05 -0700 3474) 		return;
729bd0b74ce9a (Pekka Enberg                   2006-06-23 02:03:05 -0700 3475) 
3d88019408d6f (Joonsoo Kim                    2014-10-09 15:26:04 -0700 3476) 	if (ac->avail < ac->limit) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3477) 		STATS_INC_FREEHIT(cachep);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3478) 	} else {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3479) 		STATS_INC_FREEMISS(cachep);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3480) 		cache_flusharray(cachep, ac);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3481) 	}
42c8c99cd8911 (Zhao Jin                       2011-08-27 00:26:17 +0800 3482) 
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 3483) 	if (sk_memalloc_socks()) {
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 3484) 		struct page *page = virt_to_head_page(objp);
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 3485) 
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 3486) 		if (unlikely(PageSlabPfmemalloc(page))) {
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 3487) 			cache_free_pfmemalloc(cachep, page, objp);
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 3488) 			return;
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 3489) 		}
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 3490) 	}
f68f8dddb5e91 (Joonsoo Kim                    2016-03-15 14:54:56 -0700 3491) 
dabc3e291d56e (Kees Cook                      2020-08-06 23:18:24 -0700 3492) 	__free_one(ac, objp);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3493) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3494) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3495) /**
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3496)  * kmem_cache_alloc - Allocate an object
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3497)  * @cachep: The cache to allocate from.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3498)  * @flags: See kmalloc().
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3499)  *
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3500)  * Allocate an object from this cache.  The flags are only relevant
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3501)  * if the cache has no available objects.
a862f68a8b360 (Mike Rapoport                  2019-03-05 15:48:42 -0800 3502)  *
a862f68a8b360 (Mike Rapoport                  2019-03-05 15:48:42 -0800 3503)  * Return: pointer to the new object or %NULL in case of error
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3504)  */
343e0d7a93951 (Pekka Enberg                   2006-02-01 03:05:50 -0800 3505) void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3506) {
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3507) 	void *ret = slab_alloc(cachep, flags, cachep->object_size, _RET_IP_);
36555751c6751 (Eduard - Gabriel Munteanu      2008-08-10 20:14:05 +0300 3508) 
ca2b84cb3c4a0 (Eduard - Gabriel Munteanu      2009-03-23 15:12:24 +0200 3509) 	trace_kmem_cache_alloc(_RET_IP_, ret,
8c138bc009255 (Christoph Lameter              2012-06-13 10:24:58 -0500 3510) 			       cachep->object_size, cachep->size, flags);
36555751c6751 (Eduard - Gabriel Munteanu      2008-08-10 20:14:05 +0300 3511) 
36555751c6751 (Eduard - Gabriel Munteanu      2008-08-10 20:14:05 +0300 3512) 	return ret;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3513) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3514) EXPORT_SYMBOL(kmem_cache_alloc);
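/*
 * Usage sketch (illustrative only, assuming <linux/slab.h>; "foo_cache",
 * struct foo and the GFP flags are made up for the example): a typical
 * kmem_cache_create()/kmem_cache_alloc()/kmem_cache_free() round trip.
 */
#if 0	/* example, not part of this file */
struct foo {
	int bar;
};

static int foo_example(void)
{
	struct kmem_cache *foo_cache;
	struct foo *f;

	foo_cache = kmem_cache_create("foo_cache", sizeof(struct foo), 0,
				      SLAB_HWCACHE_ALIGN, NULL);
	if (!foo_cache)
		return -ENOMEM;

	f = kmem_cache_alloc(foo_cache, GFP_KERNEL);	/* may return NULL */
	if (f)
		kmem_cache_free(foo_cache, f);

	kmem_cache_destroy(foo_cache);
	return f ? 0 : -ENOMEM;
}
#endif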
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3515) 
7b0501dd6b186 (Jesper Dangaard Brouer         2016-03-15 14:53:53 -0700 3516) static __always_inline void
7b0501dd6b186 (Jesper Dangaard Brouer         2016-03-15 14:53:53 -0700 3517) cache_alloc_debugcheck_after_bulk(struct kmem_cache *s, gfp_t flags,
7b0501dd6b186 (Jesper Dangaard Brouer         2016-03-15 14:53:53 -0700 3518) 				  size_t size, void **p, unsigned long caller)
7b0501dd6b186 (Jesper Dangaard Brouer         2016-03-15 14:53:53 -0700 3519) {
7b0501dd6b186 (Jesper Dangaard Brouer         2016-03-15 14:53:53 -0700 3520) 	size_t i;
7b0501dd6b186 (Jesper Dangaard Brouer         2016-03-15 14:53:53 -0700 3521) 
7b0501dd6b186 (Jesper Dangaard Brouer         2016-03-15 14:53:53 -0700 3522) 	for (i = 0; i < size; i++)
7b0501dd6b186 (Jesper Dangaard Brouer         2016-03-15 14:53:53 -0700 3523) 		p[i] = cache_alloc_debugcheck_after(s, flags, p[i], caller);
7b0501dd6b186 (Jesper Dangaard Brouer         2016-03-15 14:53:53 -0700 3524) }
7b0501dd6b186 (Jesper Dangaard Brouer         2016-03-15 14:53:53 -0700 3525) 
865762a8119e7 (Jesper Dangaard Brouer         2015-11-20 15:57:58 -0800 3526) int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
2a777eac173a5 (Jesper Dangaard Brouer         2016-03-15 14:53:50 -0700 3527) 			  void **p)
484748f0b65a1 (Christoph Lameter              2015-09-04 15:45:34 -0700 3528) {
2a777eac173a5 (Jesper Dangaard Brouer         2016-03-15 14:53:50 -0700 3529) 	size_t i;
964d4bd370d55 (Roman Gushchin                 2020-08-06 23:20:56 -0700 3530) 	struct obj_cgroup *objcg = NULL;
2a777eac173a5 (Jesper Dangaard Brouer         2016-03-15 14:53:50 -0700 3531) 
964d4bd370d55 (Roman Gushchin                 2020-08-06 23:20:56 -0700 3532) 	s = slab_pre_alloc_hook(s, &objcg, size, flags);
2a777eac173a5 (Jesper Dangaard Brouer         2016-03-15 14:53:50 -0700 3533) 	if (!s)
2a777eac173a5 (Jesper Dangaard Brouer         2016-03-15 14:53:50 -0700 3534) 		return 0;
2a777eac173a5 (Jesper Dangaard Brouer         2016-03-15 14:53:50 -0700 3535) 
2a777eac173a5 (Jesper Dangaard Brouer         2016-03-15 14:53:50 -0700 3536) 	cache_alloc_debugcheck_before(s, flags);
2a777eac173a5 (Jesper Dangaard Brouer         2016-03-15 14:53:50 -0700 3537) 
2a777eac173a5 (Jesper Dangaard Brouer         2016-03-15 14:53:50 -0700 3538) 	local_irq_disable();
2a777eac173a5 (Jesper Dangaard Brouer         2016-03-15 14:53:50 -0700 3539) 	for (i = 0; i < size; i++) {
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3540) 		void *objp = kfence_alloc(s, s->object_size, flags) ?: __do_cache_alloc(s, flags);
2a777eac173a5 (Jesper Dangaard Brouer         2016-03-15 14:53:50 -0700 3541) 
2a777eac173a5 (Jesper Dangaard Brouer         2016-03-15 14:53:50 -0700 3542) 		if (unlikely(!objp))
2a777eac173a5 (Jesper Dangaard Brouer         2016-03-15 14:53:50 -0700 3543) 			goto error;
2a777eac173a5 (Jesper Dangaard Brouer         2016-03-15 14:53:50 -0700 3544) 		p[i] = objp;
2a777eac173a5 (Jesper Dangaard Brouer         2016-03-15 14:53:50 -0700 3545) 	}
2a777eac173a5 (Jesper Dangaard Brouer         2016-03-15 14:53:50 -0700 3546) 	local_irq_enable();
2a777eac173a5 (Jesper Dangaard Brouer         2016-03-15 14:53:50 -0700 3547) 
7b0501dd6b186 (Jesper Dangaard Brouer         2016-03-15 14:53:53 -0700 3548) 	cache_alloc_debugcheck_after_bulk(s, flags, size, p, _RET_IP_);
7b0501dd6b186 (Jesper Dangaard Brouer         2016-03-15 14:53:53 -0700 3549) 
da844b7872451 (Andrey Konovalov               2021-04-29 23:00:06 -0700 3550) 	/*
da844b7872451 (Andrey Konovalov               2021-04-29 23:00:06 -0700 3551) 	 * memcg and kmem_cache debug support and memory initialization.
da844b7872451 (Andrey Konovalov               2021-04-29 23:00:06 -0700 3552) 	 * Done outside of the IRQ disabled section.
da844b7872451 (Andrey Konovalov               2021-04-29 23:00:06 -0700 3553) 	 */
da844b7872451 (Andrey Konovalov               2021-04-29 23:00:06 -0700 3554) 	slab_post_alloc_hook(s, objcg, flags, size, p,
da844b7872451 (Andrey Konovalov               2021-04-29 23:00:06 -0700 3555) 				slab_want_init_on_alloc(flags, s));
2a777eac173a5 (Jesper Dangaard Brouer         2016-03-15 14:53:50 -0700 3556) 	/* FIXME: Trace call missing. Christoph would like a bulk variant */
2a777eac173a5 (Jesper Dangaard Brouer         2016-03-15 14:53:50 -0700 3557) 	return size;
2a777eac173a5 (Jesper Dangaard Brouer         2016-03-15 14:53:50 -0700 3558) error:
2a777eac173a5 (Jesper Dangaard Brouer         2016-03-15 14:53:50 -0700 3559) 	local_irq_enable();
7b0501dd6b186 (Jesper Dangaard Brouer         2016-03-15 14:53:53 -0700 3560) 	cache_alloc_debugcheck_after_bulk(s, flags, i, p, _RET_IP_);
da844b7872451 (Andrey Konovalov               2021-04-29 23:00:06 -0700 3561) 	slab_post_alloc_hook(s, objcg, flags, i, p, false);
2a777eac173a5 (Jesper Dangaard Brouer         2016-03-15 14:53:50 -0700 3562) 	__kmem_cache_free_bulk(s, i, p);
2a777eac173a5 (Jesper Dangaard Brouer         2016-03-15 14:53:50 -0700 3563) 	return 0;
484748f0b65a1 (Christoph Lameter              2015-09-04 15:45:34 -0700 3564) }
484748f0b65a1 (Christoph Lameter              2015-09-04 15:45:34 -0700 3565) EXPORT_SYMBOL(kmem_cache_alloc_bulk);
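/*
 * Usage sketch (illustrative only; "foo_cache" is assumed to exist): in
 * this implementation kmem_cache_alloc_bulk() either fills the whole
 * array and returns the requested count, or unwinds any partial
 * allocation and returns 0, so callers only need to check for 0.
 */
#if 0	/* example, not part of this file */
static int foo_bulk_example(struct kmem_cache *foo_cache)
{
	void *objs[16];

	if (!kmem_cache_alloc_bulk(foo_cache, GFP_KERNEL, ARRAY_SIZE(objs),
				   objs))
		return -ENOMEM;	/* nothing was allocated */

	/* ... use objs[0..15] ... */

	kmem_cache_free_bulk(foo_cache, ARRAY_SIZE(objs), objs);
	return 0;
}
#endif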
484748f0b65a1 (Christoph Lameter              2015-09-04 15:45:34 -0700 3566) 
0f24f1287a86b (Li Zefan                       2009-12-11 15:45:30 +0800 3567) #ifdef CONFIG_TRACING
85beb5869a4f6 (Steven Rostedt                 2010-11-24 16:23:34 -0500 3568) void *
4052147c0afa1 (Ezequiel Garcia                2012-09-08 17:47:56 -0300 3569) kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
36555751c6751 (Eduard - Gabriel Munteanu      2008-08-10 20:14:05 +0300 3570) {
85beb5869a4f6 (Steven Rostedt                 2010-11-24 16:23:34 -0500 3571) 	void *ret;
85beb5869a4f6 (Steven Rostedt                 2010-11-24 16:23:34 -0500 3572) 
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3573) 	ret = slab_alloc(cachep, flags, size, _RET_IP_);
85beb5869a4f6 (Steven Rostedt                 2010-11-24 16:23:34 -0500 3574) 
0116523cfffa6 (Andrey Konovalov               2018-12-28 00:29:37 -0800 3575) 	ret = kasan_kmalloc(cachep, ret, size, flags);
85beb5869a4f6 (Steven Rostedt                 2010-11-24 16:23:34 -0500 3576) 	trace_kmalloc(_RET_IP_, ret,
ff4fcd01ec86d (Ezequiel Garcia                2012-09-08 17:47:52 -0300 3577) 		      size, cachep->size, flags);
85beb5869a4f6 (Steven Rostedt                 2010-11-24 16:23:34 -0500 3578) 	return ret;
36555751c6751 (Eduard - Gabriel Munteanu      2008-08-10 20:14:05 +0300 3579) }
85beb5869a4f6 (Steven Rostedt                 2010-11-24 16:23:34 -0500 3580) EXPORT_SYMBOL(kmem_cache_alloc_trace);
36555751c6751 (Eduard - Gabriel Munteanu      2008-08-10 20:14:05 +0300 3581) #endif
36555751c6751 (Eduard - Gabriel Munteanu      2008-08-10 20:14:05 +0300 3582) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3583) #ifdef CONFIG_NUMA
d0d04b78f403b (Zhouping Liu                   2013-05-16 11:36:23 +0800 3584) /**
d0d04b78f403b (Zhouping Liu                   2013-05-16 11:36:23 +0800 3585)  * kmem_cache_alloc_node - Allocate an object on the specified node
d0d04b78f403b (Zhouping Liu                   2013-05-16 11:36:23 +0800 3586)  * @cachep: The cache to allocate from.
d0d04b78f403b (Zhouping Liu                   2013-05-16 11:36:23 +0800 3587)  * @flags: See kmalloc().
d0d04b78f403b (Zhouping Liu                   2013-05-16 11:36:23 +0800 3588)  * @nodeid: node number of the target node.
d0d04b78f403b (Zhouping Liu                   2013-05-16 11:36:23 +0800 3589)  *
d0d04b78f403b (Zhouping Liu                   2013-05-16 11:36:23 +0800 3590)  * Identical to kmem_cache_alloc but it will allocate memory on the given
d0d04b78f403b (Zhouping Liu                   2013-05-16 11:36:23 +0800 3591)  * node, which can improve performance for CPU-bound structures.
d0d04b78f403b (Zhouping Liu                   2013-05-16 11:36:23 +0800 3592)  *
d0d04b78f403b (Zhouping Liu                   2013-05-16 11:36:23 +0800 3593)  * Fallback to another node is possible if __GFP_THISNODE is not set.
a862f68a8b360 (Mike Rapoport                  2019-03-05 15:48:42 -0800 3594)  *
a862f68a8b360 (Mike Rapoport                  2019-03-05 15:48:42 -0800 3595)  * Return: pointer to the new object or %NULL in case of error
d0d04b78f403b (Zhouping Liu                   2013-05-16 11:36:23 +0800 3596)  */
8b98c1699eba2 (Christoph Hellwig              2006-12-06 20:32:30 -0800 3597) void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
8b98c1699eba2 (Christoph Hellwig              2006-12-06 20:32:30 -0800 3598) {
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3599) 	void *ret = slab_alloc_node(cachep, flags, nodeid, cachep->object_size, _RET_IP_);
36555751c6751 (Eduard - Gabriel Munteanu      2008-08-10 20:14:05 +0300 3600) 
ca2b84cb3c4a0 (Eduard - Gabriel Munteanu      2009-03-23 15:12:24 +0200 3601) 	trace_kmem_cache_alloc_node(_RET_IP_, ret,
8c138bc009255 (Christoph Lameter              2012-06-13 10:24:58 -0500 3602) 				    cachep->object_size, cachep->size,
ca2b84cb3c4a0 (Eduard - Gabriel Munteanu      2009-03-23 15:12:24 +0200 3603) 				    flags, nodeid);
36555751c6751 (Eduard - Gabriel Munteanu      2008-08-10 20:14:05 +0300 3604) 
36555751c6751 (Eduard - Gabriel Munteanu      2008-08-10 20:14:05 +0300 3605) 	return ret;
8b98c1699eba2 (Christoph Hellwig              2006-12-06 20:32:30 -0800 3606) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3607) EXPORT_SYMBOL(kmem_cache_alloc_node);
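/*
 * Usage sketch (illustrative only; "foo_cache" and the cpu-to-node
 * mapping are assumptions): keep a structure on the node of the CPU
 * that will mostly touch it. Without __GFP_THISNODE the allocation may
 * fall back to another node instead of failing.
 */
#if 0	/* example, not part of this file */
static struct foo *foo_alloc_for_cpu(struct kmem_cache *foo_cache, int cpu)
{
	return kmem_cache_alloc_node(foo_cache, GFP_KERNEL, cpu_to_node(cpu));
}
#endif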
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3608) 
0f24f1287a86b (Li Zefan                       2009-12-11 15:45:30 +0800 3609) #ifdef CONFIG_TRACING
4052147c0afa1 (Ezequiel Garcia                2012-09-08 17:47:56 -0300 3610) void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
85beb5869a4f6 (Steven Rostedt                 2010-11-24 16:23:34 -0500 3611) 				  gfp_t flags,
4052147c0afa1 (Ezequiel Garcia                2012-09-08 17:47:56 -0300 3612) 				  int nodeid,
4052147c0afa1 (Ezequiel Garcia                2012-09-08 17:47:56 -0300 3613) 				  size_t size)
36555751c6751 (Eduard - Gabriel Munteanu      2008-08-10 20:14:05 +0300 3614) {
85beb5869a4f6 (Steven Rostedt                 2010-11-24 16:23:34 -0500 3615) 	void *ret;
85beb5869a4f6 (Steven Rostedt                 2010-11-24 16:23:34 -0500 3616) 
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3617) 	ret = slab_alloc_node(cachep, flags, nodeid, size, _RET_IP_);
505f5dcb1c419 (Alexander Potapenko            2016-03-25 14:22:02 -0700 3618) 
0116523cfffa6 (Andrey Konovalov               2018-12-28 00:29:37 -0800 3619) 	ret = kasan_kmalloc(cachep, ret, size, flags);
85beb5869a4f6 (Steven Rostedt                 2010-11-24 16:23:34 -0500 3620) 	trace_kmalloc_node(_RET_IP_, ret,
ff4fcd01ec86d (Ezequiel Garcia                2012-09-08 17:47:52 -0300 3621) 			   size, cachep->size,
85beb5869a4f6 (Steven Rostedt                 2010-11-24 16:23:34 -0500 3622) 			   flags, nodeid);
85beb5869a4f6 (Steven Rostedt                 2010-11-24 16:23:34 -0500 3623) 	return ret;
36555751c6751 (Eduard - Gabriel Munteanu      2008-08-10 20:14:05 +0300 3624) }
85beb5869a4f6 (Steven Rostedt                 2010-11-24 16:23:34 -0500 3625) EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
36555751c6751 (Eduard - Gabriel Munteanu      2008-08-10 20:14:05 +0300 3626) #endif
36555751c6751 (Eduard - Gabriel Munteanu      2008-08-10 20:14:05 +0300 3627) 
8b98c1699eba2 (Christoph Hellwig              2006-12-06 20:32:30 -0800 3628) static __always_inline void *
7c0cb9c64f83d (Ezequiel Garcia                2012-09-08 17:47:55 -0300 3629) __do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
97e2bde47f886 (Manfred Spraul                 2005-05-01 08:58:38 -0700 3630) {
343e0d7a93951 (Pekka Enberg                   2006-02-01 03:05:50 -0800 3631) 	struct kmem_cache *cachep;
7ed2f9e663854 (Alexander Potapenko            2016-03-25 14:21:59 -0700 3632) 	void *ret;
97e2bde47f886 (Manfred Spraul                 2005-05-01 08:58:38 -0700 3633) 
61448479a9f2c (Dmitry Vyukov                  2018-10-26 15:03:12 -0700 3634) 	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
61448479a9f2c (Dmitry Vyukov                  2018-10-26 15:03:12 -0700 3635) 		return NULL;
2c59dd6544212 (Christoph Lameter              2013-01-10 19:14:19 +0000 3636) 	cachep = kmalloc_slab(size, flags);
6cb8f91320d3e (Christoph Lameter              2007-07-17 04:03:22 -0700 3637) 	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
6cb8f91320d3e (Christoph Lameter              2007-07-17 04:03:22 -0700 3638) 		return cachep;
7ed2f9e663854 (Alexander Potapenko            2016-03-25 14:21:59 -0700 3639) 	ret = kmem_cache_alloc_node_trace(cachep, flags, node, size);
0116523cfffa6 (Andrey Konovalov               2018-12-28 00:29:37 -0800 3640) 	ret = kasan_kmalloc(cachep, ret, size, flags);
7ed2f9e663854 (Alexander Potapenko            2016-03-25 14:21:59 -0700 3641) 
7ed2f9e663854 (Alexander Potapenko            2016-03-25 14:21:59 -0700 3642) 	return ret;
97e2bde47f886 (Manfred Spraul                 2005-05-01 08:58:38 -0700 3643) }
8b98c1699eba2 (Christoph Hellwig              2006-12-06 20:32:30 -0800 3644) 
8b98c1699eba2 (Christoph Hellwig              2006-12-06 20:32:30 -0800 3645) void *__kmalloc_node(size_t size, gfp_t flags, int node)
8b98c1699eba2 (Christoph Hellwig              2006-12-06 20:32:30 -0800 3646) {
7c0cb9c64f83d (Ezequiel Garcia                2012-09-08 17:47:55 -0300 3647) 	return __do_kmalloc_node(size, flags, node, _RET_IP_);
8b98c1699eba2 (Christoph Hellwig              2006-12-06 20:32:30 -0800 3648) }
dbe5e69d2d6e5 (Christoph Hellwig              2006-09-25 23:31:36 -0700 3649) EXPORT_SYMBOL(__kmalloc_node);
8b98c1699eba2 (Christoph Hellwig              2006-12-06 20:32:30 -0800 3650) 
8b98c1699eba2 (Christoph Hellwig              2006-12-06 20:32:30 -0800 3651) void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
ce71e27c6fdc4 (Eduard - Gabriel Munteanu      2008-08-19 20:43:25 +0300 3652) 		int node, unsigned long caller)
8b98c1699eba2 (Christoph Hellwig              2006-12-06 20:32:30 -0800 3653) {
7c0cb9c64f83d (Ezequiel Garcia                2012-09-08 17:47:55 -0300 3654) 	return __do_kmalloc_node(size, flags, node, caller);
8b98c1699eba2 (Christoph Hellwig              2006-12-06 20:32:30 -0800 3655) }
8b98c1699eba2 (Christoph Hellwig              2006-12-06 20:32:30 -0800 3656) EXPORT_SYMBOL(__kmalloc_node_track_caller);
8b98c1699eba2 (Christoph Hellwig              2006-12-06 20:32:30 -0800 3657) #endif /* CONFIG_NUMA */
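/*
 * Usage sketch (illustrative only): kmalloc_node() is the usual caller
 * that lands in __do_kmalloc_node() above; note that sizes above
 * KMALLOC_MAX_CACHE_SIZE make it return NULL here.
 */
#if 0	/* example, not part of this file */
static void *foo_table_alloc(size_t size, int nid)
{
	return kmalloc_node(size, GFP_KERNEL | __GFP_ZERO, nid);
}
#endif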
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3658) 
5bb1bb353cfe3 (Paul E. McKenney               2021-01-07 13:46:11 -0800 3659) #ifdef CONFIG_PRINTK
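/*
 * kmem_obj_info() backs mem_dump_obj(): it maps an arbitrary pointer
 * into a slab back to its cache, its page, and the start of the
 * enclosing object, and recovers the allocation caller when
 * SLAB_STORE_USER tracking is compiled in.
 */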
8e7f37f2aaa56 (Paul E. McKenney               2020-12-07 17:41:02 -0800 3660) void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page)
8e7f37f2aaa56 (Paul E. McKenney               2020-12-07 17:41:02 -0800 3661) {
8e7f37f2aaa56 (Paul E. McKenney               2020-12-07 17:41:02 -0800 3662) 	struct kmem_cache *cachep;
8e7f37f2aaa56 (Paul E. McKenney               2020-12-07 17:41:02 -0800 3663) 	unsigned int objnr;
8e7f37f2aaa56 (Paul E. McKenney               2020-12-07 17:41:02 -0800 3664) 	void *objp;
8e7f37f2aaa56 (Paul E. McKenney               2020-12-07 17:41:02 -0800 3665) 
8e7f37f2aaa56 (Paul E. McKenney               2020-12-07 17:41:02 -0800 3666) 	kpp->kp_ptr = object;
8e7f37f2aaa56 (Paul E. McKenney               2020-12-07 17:41:02 -0800 3667) 	kpp->kp_page = page;
8e7f37f2aaa56 (Paul E. McKenney               2020-12-07 17:41:02 -0800 3668) 	cachep = page->slab_cache;
8e7f37f2aaa56 (Paul E. McKenney               2020-12-07 17:41:02 -0800 3669) 	kpp->kp_slab_cache = cachep;
8e7f37f2aaa56 (Paul E. McKenney               2020-12-07 17:41:02 -0800 3670) 	objp = object - obj_offset(cachep);
8e7f37f2aaa56 (Paul E. McKenney               2020-12-07 17:41:02 -0800 3671) 	kpp->kp_data_offset = obj_offset(cachep);
8e7f37f2aaa56 (Paul E. McKenney               2020-12-07 17:41:02 -0800 3672) 	page = virt_to_head_page(objp);
8e7f37f2aaa56 (Paul E. McKenney               2020-12-07 17:41:02 -0800 3673) 	objnr = obj_to_index(cachep, page, objp);
8e7f37f2aaa56 (Paul E. McKenney               2020-12-07 17:41:02 -0800 3674) 	objp = index_to_obj(cachep, page, objnr);
8e7f37f2aaa56 (Paul E. McKenney               2020-12-07 17:41:02 -0800 3675) 	kpp->kp_objp = objp;
8e7f37f2aaa56 (Paul E. McKenney               2020-12-07 17:41:02 -0800 3676) 	if (DEBUG && cachep->flags & SLAB_STORE_USER)
8e7f37f2aaa56 (Paul E. McKenney               2020-12-07 17:41:02 -0800 3677) 		kpp->kp_ret = *dbg_userword(cachep, objp);
8e7f37f2aaa56 (Paul E. McKenney               2020-12-07 17:41:02 -0800 3678) }
5bb1bb353cfe3 (Paul E. McKenney               2021-01-07 13:46:11 -0800 3679) #endif
8e7f37f2aaa56 (Paul E. McKenney               2020-12-07 17:41:02 -0800 3680) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3681) /**
800590f523bf3 (Paul Drynoff                   2006-06-23 02:03:48 -0700 3682)  * __do_kmalloc - allocate memory
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3683)  * @size: how many bytes of memory are required.
800590f523bf3 (Paul Drynoff                   2006-06-23 02:03:48 -0700 3684)  * @flags: the type of memory to allocate (see kmalloc).
911851e6ee6ac (Randy Dunlap                   2006-03-22 00:08:14 -0800 3685)  * @caller: function caller for debug tracking of the caller
a862f68a8b360 (Mike Rapoport                  2019-03-05 15:48:42 -0800 3686)  *
a862f68a8b360 (Mike Rapoport                  2019-03-05 15:48:42 -0800 3687)  * Return: pointer to the allocated memory or %NULL in case of error
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3688)  */
7fd6b1413082c (Pekka Enberg                   2006-02-01 03:05:52 -0800 3689) static __always_inline void *__do_kmalloc(size_t size, gfp_t flags,
7c0cb9c64f83d (Ezequiel Garcia                2012-09-08 17:47:55 -0300 3690) 					  unsigned long caller)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3691) {
343e0d7a93951 (Pekka Enberg                   2006-02-01 03:05:50 -0800 3692) 	struct kmem_cache *cachep;
36555751c6751 (Eduard - Gabriel Munteanu      2008-08-10 20:14:05 +0300 3693) 	void *ret;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3694) 
61448479a9f2c (Dmitry Vyukov                  2018-10-26 15:03:12 -0700 3695) 	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
61448479a9f2c (Dmitry Vyukov                  2018-10-26 15:03:12 -0700 3696) 		return NULL;
2c59dd6544212 (Christoph Lameter              2013-01-10 19:14:19 +0000 3697) 	cachep = kmalloc_slab(size, flags);
a5c96d8a1c67f (Linus Torvalds                 2007-07-19 13:17:15 -0700 3698) 	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
a5c96d8a1c67f (Linus Torvalds                 2007-07-19 13:17:15 -0700 3699) 		return cachep;
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 3700) 	ret = slab_alloc(cachep, flags, size, caller);
36555751c6751 (Eduard - Gabriel Munteanu      2008-08-10 20:14:05 +0300 3701) 
0116523cfffa6 (Andrey Konovalov               2018-12-28 00:29:37 -0800 3702) 	ret = kasan_kmalloc(cachep, ret, size, flags);
7c0cb9c64f83d (Ezequiel Garcia                2012-09-08 17:47:55 -0300 3703) 	trace_kmalloc(caller, ret,
3b0efdfa1e719 (Christoph Lameter              2012-06-13 10:24:57 -0500 3704) 		      size, cachep->size, flags);
36555751c6751 (Eduard - Gabriel Munteanu      2008-08-10 20:14:05 +0300 3705) 
36555751c6751 (Eduard - Gabriel Munteanu      2008-08-10 20:14:05 +0300 3706) 	return ret;
7fd6b1413082c (Pekka Enberg                   2006-02-01 03:05:52 -0800 3707) }
7fd6b1413082c (Pekka Enberg                   2006-02-01 03:05:52 -0800 3708) 
7fd6b1413082c (Pekka Enberg                   2006-02-01 03:05:52 -0800 3709) void *__kmalloc(size_t size, gfp_t flags)
7fd6b1413082c (Pekka Enberg                   2006-02-01 03:05:52 -0800 3710) {
7c0cb9c64f83d (Ezequiel Garcia                2012-09-08 17:47:55 -0300 3711) 	return __do_kmalloc(size, flags, _RET_IP_);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3712) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3713) EXPORT_SYMBOL(__kmalloc);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3714) 
ce71e27c6fdc4 (Eduard - Gabriel Munteanu      2008-08-19 20:43:25 +0300 3715) void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller)
7fd6b1413082c (Pekka Enberg                   2006-02-01 03:05:52 -0800 3716) {
7c0cb9c64f83d (Ezequiel Garcia                2012-09-08 17:47:55 -0300 3717) 	return __do_kmalloc(size, flags, caller);
7fd6b1413082c (Pekka Enberg                   2006-02-01 03:05:52 -0800 3718) }
7fd6b1413082c (Pekka Enberg                   2006-02-01 03:05:52 -0800 3719) EXPORT_SYMBOL(__kmalloc_track_caller);
1d2c8eea69851 (Christoph Hellwig              2006-10-04 02:15:25 -0700 3720) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3721) /**
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3722)  * kmem_cache_free - Deallocate an object
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3723)  * @cachep: The cache the allocation was from.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3724)  * @objp: The previously allocated object.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3725)  *
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3726)  * Free an object which was previously allocated from this
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3727)  * cache.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3728)  */
343e0d7a93951 (Pekka Enberg                   2006-02-01 03:05:50 -0800 3729) void kmem_cache_free(struct kmem_cache *cachep, void *objp)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3730) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3731) 	unsigned long flags;
b9ce5ef49f00d (Glauber Costa                  2012-12-18 14:22:46 -0800 3732) 	cachep = cache_from_obj(cachep, objp);
b9ce5ef49f00d (Glauber Costa                  2012-12-18 14:22:46 -0800 3733) 	if (!cachep)
b9ce5ef49f00d (Glauber Costa                  2012-12-18 14:22:46 -0800 3734) 		return;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3735) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3736) 	local_irq_save(flags);
d97d476b1bb11 (Feng Tang                      2012-07-02 14:29:10 +0800 3737) 	debug_check_no_locks_freed(objp, cachep->object_size);
3ac7fe5a4aab4 (Thomas Gleixner                2008-04-30 00:55:01 -0700 3738) 	if (!(cachep->flags & SLAB_DEBUG_OBJECTS))
8c138bc009255 (Christoph Lameter              2012-06-13 10:24:58 -0500 3739) 		debug_check_no_obj_freed(objp, cachep->object_size);
7c0cb9c64f83d (Ezequiel Garcia                2012-09-08 17:47:55 -0300 3740) 	__cache_free(cachep, objp, _RET_IP_);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3741) 	local_irq_restore(flags);
36555751c6751 (Eduard - Gabriel Munteanu      2008-08-10 20:14:05 +0300 3742) 
3544de8ee6e48 (Jacob Wen                      2021-02-24 12:00:55 -0800 3743) 	trace_kmem_cache_free(_RET_IP_, objp, cachep->name);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3744) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3745) EXPORT_SYMBOL(kmem_cache_free);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3746) 
e6cdb58d1c830 (Jesper Dangaard Brouer         2016-03-15 14:53:56 -0700 3747) void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
e6cdb58d1c830 (Jesper Dangaard Brouer         2016-03-15 14:53:56 -0700 3748) {
e6cdb58d1c830 (Jesper Dangaard Brouer         2016-03-15 14:53:56 -0700 3749) 	struct kmem_cache *s;
e6cdb58d1c830 (Jesper Dangaard Brouer         2016-03-15 14:53:56 -0700 3750) 	size_t i;
e6cdb58d1c830 (Jesper Dangaard Brouer         2016-03-15 14:53:56 -0700 3751) 
e6cdb58d1c830 (Jesper Dangaard Brouer         2016-03-15 14:53:56 -0700 3752) 	local_irq_disable();
e6cdb58d1c830 (Jesper Dangaard Brouer         2016-03-15 14:53:56 -0700 3753) 	for (i = 0; i < size; i++) {
e6cdb58d1c830 (Jesper Dangaard Brouer         2016-03-15 14:53:56 -0700 3754) 		void *objp = p[i];
e6cdb58d1c830 (Jesper Dangaard Brouer         2016-03-15 14:53:56 -0700 3755) 
ca257195511d5 (Jesper Dangaard Brouer         2016-03-15 14:54:00 -0700 3756) 		if (!orig_s) /* called via kfree_bulk */
ca257195511d5 (Jesper Dangaard Brouer         2016-03-15 14:54:00 -0700 3757) 			s = virt_to_cache(objp);
ca257195511d5 (Jesper Dangaard Brouer         2016-03-15 14:54:00 -0700 3758) 		else
ca257195511d5 (Jesper Dangaard Brouer         2016-03-15 14:54:00 -0700 3759) 			s = cache_from_obj(orig_s, objp);
a64b53780ec35 (Kees Cook                      2019-07-11 20:53:26 -0700 3760) 		if (!s)
a64b53780ec35 (Kees Cook                      2019-07-11 20:53:26 -0700 3761) 			continue;
e6cdb58d1c830 (Jesper Dangaard Brouer         2016-03-15 14:53:56 -0700 3762) 
e6cdb58d1c830 (Jesper Dangaard Brouer         2016-03-15 14:53:56 -0700 3763) 		debug_check_no_locks_freed(objp, s->object_size);
e6cdb58d1c830 (Jesper Dangaard Brouer         2016-03-15 14:53:56 -0700 3764) 		if (!(s->flags & SLAB_DEBUG_OBJECTS))
e6cdb58d1c830 (Jesper Dangaard Brouer         2016-03-15 14:53:56 -0700 3765) 			debug_check_no_obj_freed(objp, s->object_size);
e6cdb58d1c830 (Jesper Dangaard Brouer         2016-03-15 14:53:56 -0700 3766) 
e6cdb58d1c830 (Jesper Dangaard Brouer         2016-03-15 14:53:56 -0700 3767) 		__cache_free(s, objp, _RET_IP_);
e6cdb58d1c830 (Jesper Dangaard Brouer         2016-03-15 14:53:56 -0700 3768) 	}
e6cdb58d1c830 (Jesper Dangaard Brouer         2016-03-15 14:53:56 -0700 3769) 	local_irq_enable();
e6cdb58d1c830 (Jesper Dangaard Brouer         2016-03-15 14:53:56 -0700 3770) 
e6cdb58d1c830 (Jesper Dangaard Brouer         2016-03-15 14:53:56 -0700 3771) 	/* FIXME: add tracing */
e6cdb58d1c830 (Jesper Dangaard Brouer         2016-03-15 14:53:56 -0700 3772) }
e6cdb58d1c830 (Jesper Dangaard Brouer         2016-03-15 14:53:56 -0700 3773) EXPORT_SYMBOL(kmem_cache_free_bulk);
e6cdb58d1c830 (Jesper Dangaard Brouer         2016-03-15 14:53:56 -0700 3774) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3775) /**
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3776)  * kfree - free previously allocated memory
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3777)  * @objp: pointer returned by kmalloc.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3778)  *
80e93effce550 (Pekka Enberg                   2005-09-09 13:10:16 -0700 3779)  * If @objp is NULL, no operation is performed.
80e93effce550 (Pekka Enberg                   2005-09-09 13:10:16 -0700 3780)  *
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3781)  * Don't free memory not originally allocated by kmalloc()
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3782)  * or you will run into trouble.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3783)  */
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3784) void kfree(const void *objp)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3785) {
343e0d7a93951 (Pekka Enberg                   2006-02-01 03:05:50 -0800 3786) 	struct kmem_cache *c;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3787) 	unsigned long flags;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3788) 
2121db74ba0fd (Pekka Enberg                   2009-03-25 11:05:57 +0200 3789) 	trace_kfree(_RET_IP_, objp);
2121db74ba0fd (Pekka Enberg                   2009-03-25 11:05:57 +0200 3790) 
6cb8f91320d3e (Christoph Lameter              2007-07-17 04:03:22 -0700 3791) 	if (unlikely(ZERO_OR_NULL_PTR(objp)))
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3792) 		return;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3793) 	local_irq_save(flags);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3794) 	kfree_debugcheck(objp);
6ed5eb2211204 (Pekka Enberg                   2006-02-01 03:05:49 -0800 3795) 	c = virt_to_cache(objp);
a64b53780ec35 (Kees Cook                      2019-07-11 20:53:26 -0700 3796) 	if (!c) {
a64b53780ec35 (Kees Cook                      2019-07-11 20:53:26 -0700 3797) 		local_irq_restore(flags);
a64b53780ec35 (Kees Cook                      2019-07-11 20:53:26 -0700 3798) 		return;
a64b53780ec35 (Kees Cook                      2019-07-11 20:53:26 -0700 3799) 	}
8c138bc009255 (Christoph Lameter              2012-06-13 10:24:58 -0500 3800) 	debug_check_no_locks_freed(objp, c->object_size);
8c138bc009255 (Christoph Lameter              2012-06-13 10:24:58 -0500 3801) 
8c138bc009255 (Christoph Lameter              2012-06-13 10:24:58 -0500 3802) 	debug_check_no_obj_freed(objp, c->object_size);
7c0cb9c64f83d (Ezequiel Garcia                2012-09-08 17:47:55 -0300 3803) 	__cache_free(c, (void *)objp, _RET_IP_);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3804) 	local_irq_restore(flags);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3805) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3806) EXPORT_SYMBOL(kfree);
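/*
 * Usage sketch (illustrative only, assuming <linux/slab.h>): the usual
 * kmalloc()/kfree() pairing. Since kfree(NULL) is a no-op, error paths
 * need no NULL check before freeing.
 */
#if 0	/* example, not part of this file */
static int foo_buf_example(void)
{
	char *buf = kmalloc(128, GFP_KERNEL);

	if (!buf)
		return -ENOMEM;
	/* ... use buf ... */
	kfree(buf);
	return 0;
}
#endif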
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3807) 
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 3808) /*
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 3809)  * This initializes kmem_cache_node structures or resizes the per-node caches for all online nodes.
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 3810)  */
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700 3811) static int setup_kmem_cache_nodes(struct kmem_cache *cachep, gfp_t gfp)
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 3812) {
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700 3813) 	int ret;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 3814) 	int node;
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 3815) 	struct kmem_cache_node *n;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 3816) 
9c09a95cf431f (Mel Gorman                     2008-01-24 05:49:54 -0800 3817) 	for_each_online_node(node) {
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700 3818) 		ret = setup_kmem_cache_node(cachep, node, gfp, true);
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700 3819) 		if (ret)
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 3820) 			goto fail;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 3822) 	}
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700 3823) 
cafeb02e098ec (Christoph Lameter              2006-03-25 03:06:46 -0800 3824) 	return 0;
0718dc2a82c86 (Christoph Lameter              2006-03-25 03:06:47 -0800 3825) 
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 3826) fail:
3b0efdfa1e719 (Christoph Lameter              2012-06-13 10:24:57 -0500 3827) 	if (!cachep->list.next) {
0718dc2a82c86 (Christoph Lameter              2006-03-25 03:06:47 -0800 3828) 		/* Cache is not active yet. Roll back what we did */
0718dc2a82c86 (Christoph Lameter              2006-03-25 03:06:47 -0800 3829) 		node--;
0718dc2a82c86 (Christoph Lameter              2006-03-25 03:06:47 -0800 3830) 		while (node >= 0) {
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 3831) 			n = get_node(cachep, node);
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 3832) 			if (n) {
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 3833) 				kfree(n->shared);
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 3834) 				free_alien_cache(n->alien);
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 3835) 				kfree(n);
6a67368c36e2c (Christoph Lameter              2013-01-10 19:14:19 +0000 3836) 				cachep->node[node] = NULL;
0718dc2a82c86 (Christoph Lameter              2006-03-25 03:06:47 -0800 3837) 			}
0718dc2a82c86 (Christoph Lameter              2006-03-25 03:06:47 -0800 3838) 			node--;
0718dc2a82c86 (Christoph Lameter              2006-03-25 03:06:47 -0800 3839) 		}
0718dc2a82c86 (Christoph Lameter              2006-03-25 03:06:47 -0800 3840) 	}
cafeb02e098ec (Christoph Lameter              2006-03-25 03:06:46 -0800 3841) 	return -ENOMEM;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 3842) }
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 3843) 
18004c5d4084d (Christoph Lameter              2012-07-06 15:25:12 -0500 3844) /* Always called with the slab_mutex held */
10befea91b61c (Roman Gushchin                 2020-08-06 23:21:27 -0700 3845) static int do_tune_cpucache(struct kmem_cache *cachep, int limit,
10befea91b61c (Roman Gushchin                 2020-08-06 23:21:27 -0700 3846) 			    int batchcount, int shared, gfp_t gfp)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3847) {
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 3848) 	struct array_cache __percpu *cpu_cache, *prev;
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 3849) 	int cpu;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3850) 
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 3851) 	cpu_cache = alloc_kmem_cache_cpus(cachep, limit, batchcount);
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 3852) 	if (!cpu_cache)
d2e7b7d0aa021 (Siddha, Suresh B               2006-09-25 23:31:47 -0700 3853) 		return -ENOMEM;
d2e7b7d0aa021 (Siddha, Suresh B               2006-09-25 23:31:47 -0700 3854) 
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 3855) 	prev = cachep->cpu_cache;
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 3856) 	cachep->cpu_cache = cpu_cache;
a87c75fbcc8dc (Greg Thelen                    2017-05-03 14:51:47 -0700 3857) 	/*
a87c75fbcc8dc (Greg Thelen                    2017-05-03 14:51:47 -0700 3858) 	 * Without a previous cpu_cache there's no need to synchronize remote
a87c75fbcc8dc (Greg Thelen                    2017-05-03 14:51:47 -0700 3859) 	 * cpus, so skip the IPIs.
a87c75fbcc8dc (Greg Thelen                    2017-05-03 14:51:47 -0700 3860) 	 */
a87c75fbcc8dc (Greg Thelen                    2017-05-03 14:51:47 -0700 3861) 	if (prev)
a87c75fbcc8dc (Greg Thelen                    2017-05-03 14:51:47 -0700 3862) 		kick_all_cpus_sync();
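	/*
	 * Sketch of why the IPI is sufficient (assuming, as elsewhere in
	 * this allocator, that per-cpu cache accesses run with local
	 * interrupts disabled): a cpu still using the old "prev" array
	 * cannot service the IPI until it re-enables interrupts, so once
	 * kick_all_cpus_sync() returns every cpu has left any such window
	 * and subsequent accesses see the new cpu_cache pointer.  Draining
	 * and freeing "prev" below is then safe.
	 */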
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 3863) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3864) 	check_irq_on();
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3865) 	cachep->batchcount = batchcount;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3866) 	cachep->limit = limit;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 3867) 	cachep->shared = shared;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3868) 
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 3869) 	if (!prev)
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700 3870) 		goto setup_node;
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 3871) 
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 3872) 	for_each_online_cpu(cpu) {
97654dfa20caa (Joonsoo Kim                    2014-08-06 16:04:25 -0700 3873) 		LIST_HEAD(list);
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 3874) 		int node;
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 3875) 		struct kmem_cache_node *n;
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 3876) 		struct array_cache *ac = per_cpu_ptr(prev, cpu);
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 3877) 
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 3878) 		node = cpu_to_mem(cpu);
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 3879) 		n = get_node(cachep, node);
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 3880) 		spin_lock_irq(&n->list_lock);
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 3881) 		free_block(cachep, ac->entry, ac->avail, node, &list);
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 3882) 		spin_unlock_irq(&n->list_lock);
97654dfa20caa (Joonsoo Kim                    2014-08-06 16:04:25 -0700 3883) 		slabs_destroy(cachep, &list);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3884) 	}
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 3885) 	free_percpu(prev);
bf0dea23a9c09 (Joonsoo Kim                    2014-10-09 15:26:27 -0700 3886) 
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700 3887) setup_node:
c3d332b6b2c11 (Joonsoo Kim                    2016-05-19 17:10:14 -0700 3888) 	return setup_kmem_cache_nodes(cachep, gfp);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3889) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3890) 
18004c5d4084d (Christoph Lameter              2012-07-06 15:25:12 -0500 3891) /* Always called with slab_mutex held */
83b519e8b9572 (Pekka Enberg                   2009-06-10 19:40:04 +0300 3892) static int enable_cpucache(struct kmem_cache *cachep, gfp_t gfp)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3893) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3894) 	int err;
943a451a87d22 (Glauber Costa                  2012-12-18 14:23:03 -0800 3895) 	int limit = 0;
943a451a87d22 (Glauber Costa                  2012-12-18 14:23:03 -0800 3896) 	int shared = 0;
943a451a87d22 (Glauber Costa                  2012-12-18 14:23:03 -0800 3897) 	int batchcount = 0;
943a451a87d22 (Glauber Costa                  2012-12-18 14:23:03 -0800 3898) 
7c00fce98c3e1 (Thomas Garnier                 2016-07-26 15:21:56 -0700 3899) 	err = cache_random_seq_create(cachep, cachep->num, gfp);
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 3900) 	if (err)
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 3901) 		goto end;
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 3902) 
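	/*
	 * Note: limit, shared and batchcount are all still zero here, so the
	 * skip_setup shortcut below can never be taken from this function as
	 * it stands; it looks like a leftover from a removed caller that
	 * pre-seeded the tunables.
	 */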
943a451a87d22 (Glauber Costa                  2012-12-18 14:23:03 -0800 3903) 	if (limit && shared && batchcount)
943a451a87d22 (Glauber Costa                  2012-12-18 14:23:03 -0800 3904) 		goto skip_setup;
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 3905) 	/*
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 3906) 	 * The head array serves three purposes:
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3907) 	 * - create a LIFO ordering, i.e. return objects that are cache-warm
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3908) 	 * - reduce the number of spinlock operations.
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 3909) 	 * - reduce the number of linked list operations on the slab and
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3910) 	 *   bufctl chains: array operations are cheaper.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3911) 	 * The numbers are guessed; we should auto-tune as described by
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3912) 	 * Bonwick.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3913) 	 */
3b0efdfa1e719 (Christoph Lameter              2012-06-13 10:24:57 -0500 3914) 	if (cachep->size > 131072)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3915) 		limit = 1;
3b0efdfa1e719 (Christoph Lameter              2012-06-13 10:24:57 -0500 3916) 	else if (cachep->size > PAGE_SIZE)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3917) 		limit = 8;
3b0efdfa1e719 (Christoph Lameter              2012-06-13 10:24:57 -0500 3918) 	else if (cachep->size > 1024)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3919) 		limit = 24;
3b0efdfa1e719 (Christoph Lameter              2012-06-13 10:24:57 -0500 3920) 	else if (cachep->size > 256)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3921) 		limit = 54;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3922) 	else
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3923) 		limit = 120;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3924) 
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 3925) 	/*
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 3926) 	 * CPU bound tasks (e.g. network routing) can exhibit skewed per-cpu
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3927) 	 * allocation behaviour: most allocs on one cpu, most free operations
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3928) 	 * on another cpu. For these cases, efficient object passing between
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3929) 	 * cpus is necessary. This is provided by a shared array. The array
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3930) 	 * replaces Bonwick's magazine layer.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3931) 	 * On uniprocessor, it's functionally equivalent (but less efficient)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3932) 	 * to a larger limit. Thus disabled by default.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3933) 	 */
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3934) 	shared = 0;
3b0efdfa1e719 (Christoph Lameter              2012-06-13 10:24:57 -0500 3935) 	if (cachep->size <= PAGE_SIZE && num_possible_cpus() > 1)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3936) 		shared = 8;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3937) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3938) #if DEBUG
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 3939) 	/*
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 3940) 	 * With debugging enabled, a large batchcount leads to excessively long
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 3941) 	 * periods with local interrupts disabled. Limit the batchcount.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3942) 	 */
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3943) 	if (limit > 32)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3944) 		limit = 32;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3945) #endif
943a451a87d22 (Glauber Costa                  2012-12-18 14:23:03 -0800 3946) 	batchcount = (limit + 1) / 2;
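	/*
	 * Worked example (assuming 4K PAGE_SIZE and more than one possible
	 * cpu): a cache of 512-byte objects gets limit = 54 (512 > 256),
	 * shared = 8 (512 <= PAGE_SIZE), and batchcount = (54 + 1) / 2 = 27.
	 * With DEBUG, limit is capped at 32 and batchcount becomes 16.
	 */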
943a451a87d22 (Glauber Costa                  2012-12-18 14:23:03 -0800 3947) skip_setup:
943a451a87d22 (Glauber Costa                  2012-12-18 14:23:03 -0800 3948) 	err = do_tune_cpucache(cachep, limit, batchcount, shared, gfp);
c7ce4f60ac199 (Thomas Garnier                 2016-05-19 17:10:37 -0700 3949) end:
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3950) 	if (err)
1170532bb49f9 (Joe Perches                    2016-03-17 14:19:50 -0700 3951) 		pr_err("enable_cpucache failed for %s, error %d\n",
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 3952) 		       cachep->name, -err);
2ed3a4ef95ef1 (Christoph Lameter              2006-09-25 23:31:38 -0700 3953) 	return err;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3954) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3955) 
1b55253a7f95a (Christoph Lameter              2006-03-22 00:09:07 -0800 3956) /*
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 3957)  * Drain an array if it contains any elements, taking the node lock only if
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 3958)  * necessary. Note that the node list_lock also protects the array_cache
b18e7e654d7af (Christoph Lameter              2006-03-22 00:09:07 -0800 3959)  * if drain_array() is used on the shared array.
1b55253a7f95a (Christoph Lameter              2006-03-22 00:09:07 -0800 3960)  */
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 3961) static void drain_array(struct kmem_cache *cachep, struct kmem_cache_node *n,
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 3962) 			 struct array_cache *ac, int node)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3963) {
97654dfa20caa (Joonsoo Kim                    2014-08-06 16:04:25 -0700 3964) 	LIST_HEAD(list);
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 3965) 
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 3966) 	/* ac from n->shared can be freed if we don't hold the slab_mutex. */
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 3967) 	check_mutex_acquired();
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3968) 
1b55253a7f95a (Christoph Lameter              2006-03-22 00:09:07 -0800 3969) 	if (!ac || !ac->avail)
1b55253a7f95a (Christoph Lameter              2006-03-22 00:09:07 -0800 3970) 		return;
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 3971) 
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 3972) 	if (ac->touched) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3973) 		ac->touched = 0;
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 3974) 		return;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3975) 	}
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 3976) 
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 3977) 	spin_lock_irq(&n->list_lock);
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 3978) 	drain_array_locked(cachep, ac, node, false, &list);
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 3979) 	spin_unlock_irq(&n->list_lock);
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 3980) 
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 3981) 	slabs_destroy(cachep, &list);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3982) }
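/*
 * Net effect per reap pass: a recently used array (ac->touched) keeps its
 * contents and only has the flag cleared; an idle one is partially drained
 * under the node lock.  The partial-drain size lives in
 * drain_array_locked() (not shown here), on the order of a fifth of
 * ac->limit, so idle per-cpu caches shrink gradually across passes rather
 * than being dumped at once.
 */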
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3983) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3984) /**
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3985)  * cache_reap - Reclaim memory from caches.
05fb6bf0b2955 (Randy Dunlap                   2007-02-28 20:12:13 -0800 3986)  * @w: work descriptor
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3987)  *
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3988)  * Called from workqueue/eventd every few seconds.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3989)  * Purpose:
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3990)  * - clear the per-cpu caches for this CPU.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3991)  * - return freeable pages to the main free memory pool.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3992)  *
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 3993)  * If we cannot acquire the slab_mutex then just give up - we'll try
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 3994)  * again on the next iteration.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3995)  */
7c5cae368a6c4 (Christoph Lameter              2007-02-10 01:42:55 -0800 3996) static void cache_reap(struct work_struct *w)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 3997) {
7a7c381d25067 (Christoph Hellwig              2006-06-23 02:03:17 -0700 3998) 	struct kmem_cache *searchp;
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 3999) 	struct kmem_cache_node *n;
7d6e6d09de82c (Lee Schermerhorn               2010-05-26 14:45:03 -0700 4000) 	int node = numa_mem_id();
bf6aede712334 (Jean Delvare                   2009-04-02 16:56:54 -0700 4001) 	struct delayed_work *work = to_delayed_work(w);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4002) 
18004c5d4084d (Christoph Lameter              2012-07-06 15:25:12 -0500 4003) 	if (!mutex_trylock(&slab_mutex))
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4004) 		/* Give up. Set up the next iteration. */
7c5cae368a6c4 (Christoph Lameter              2007-02-10 01:42:55 -0800 4005) 		goto out;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4006) 
18004c5d4084d (Christoph Lameter              2012-07-06 15:25:12 -0500 4007) 	list_for_each_entry(searchp, &slab_caches, list) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4008) 		check_irq_on();
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4009) 
35386e3b0f876 (Christoph Lameter              2006-03-22 00:09:05 -0800 4010) 		/*
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 4011) 		 * We only take the node lock if absolutely necessary and we
35386e3b0f876 (Christoph Lameter              2006-03-22 00:09:05 -0800 4012) 		 * have established with reasonable certainty that
35386e3b0f876 (Christoph Lameter              2006-03-22 00:09:05 -0800 4013) 		 * we can do some work if the lock is obtained.
35386e3b0f876 (Christoph Lameter              2006-03-22 00:09:05 -0800 4014) 		 */
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 4015) 		n = get_node(searchp, node);
35386e3b0f876 (Christoph Lameter              2006-03-22 00:09:05 -0800 4016) 
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 4017) 		reap_alien(searchp, n);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4018) 
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 4019) 		drain_array(searchp, n, cpu_cache_get(searchp), node);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4020) 
35386e3b0f876 (Christoph Lameter              2006-03-22 00:09:05 -0800 4021) 		/*
35386e3b0f876 (Christoph Lameter              2006-03-22 00:09:05 -0800 4022) 		 * These are racy checks but it does not matter
35386e3b0f876 (Christoph Lameter              2006-03-22 00:09:05 -0800 4023) 		 * if we skip one check or scan twice.
35386e3b0f876 (Christoph Lameter              2006-03-22 00:09:05 -0800 4024) 		 */
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 4025) 		if (time_after(n->next_reap, jiffies))
35386e3b0f876 (Christoph Lameter              2006-03-22 00:09:05 -0800 4026) 			goto next;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4027) 
5f0985bb1123b (Jianyu Zhan                    2014-03-30 17:02:20 +0800 4028) 		n->next_reap = jiffies + REAPTIMEOUT_NODE;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4029) 
18726ca8b34bb (Joonsoo Kim                    2016-05-19 17:10:02 -0700 4030) 		drain_array(searchp, n, n->shared, node);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4031) 
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 4032) 		if (n->free_touched)
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 4033) 			n->free_touched = 0;
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 4034) 		else {
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 4035) 			int freed;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4036) 
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 4037) 			freed = drain_freelist(searchp, n, (n->free_limit +
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 4038) 				5 * searchp->num - 1) / (5 * searchp->num));
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 4039) 			STATS_ADD_REAPED(searchp, freed);
ed11d9eb2228a (Christoph Lameter              2006-06-30 01:55:45 -0700 4040) 		}
35386e3b0f876 (Christoph Lameter              2006-03-22 00:09:05 -0800 4041) next:
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4042) 		cond_resched();
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4043) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4044) 	check_irq_on();
18004c5d4084d (Christoph Lameter              2012-07-06 15:25:12 -0500 4045) 	mutex_unlock(&slab_mutex);
8fce4d8e3b9e3 (Christoph Lameter              2006-03-09 17:33:54 -0800 4046) 	next_reap_node();
7c5cae368a6c4 (Christoph Lameter              2007-02-10 01:42:55 -0800 4047) out:
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 4048) 	/* Set up the next iteration */
a9f2a846f0503 (Vlastimil Babka                2018-04-13 15:35:38 -0700 4049) 	schedule_delayed_work_on(smp_processor_id(), work,
a9f2a846f0503 (Vlastimil Babka                2018-04-13 15:35:38 -0700 4050) 				round_jiffies_relative(REAPTIMEOUT_AC));
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4051) }
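/*
 * By example, the drain_freelist() target above works out to about a
 * fifth of the node's free_limit, rounded up to whole slabs: with
 * free_limit = 120 and num = 8 objects per slab,
 * (120 + 5*8 - 1) / (5*8) = 3 slabs, i.e. 24 objects = 120 / 5.  Note
 * also that the work item reschedules itself every REAPTIMEOUT_AC, while
 * n->next_reap advances by REAPTIMEOUT_NODE, so per-node freelists are
 * trimmed on a longer period than per-cpu arrays.
 */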
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4052) 
0d7561c61d766 (Glauber Costa                  2012-10-19 18:20:27 +0400 4053) void get_slabinfo(struct kmem_cache *cachep, struct slabinfo *sinfo)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4054) {
f728b0a5d72ae (Greg Thelen                    2016-12-12 16:41:41 -0800 4055) 	unsigned long active_objs, num_objs, active_slabs;
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 4056) 	unsigned long total_slabs = 0, free_objs = 0, shared_avail = 0;
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 4057) 	unsigned long free_slabs = 0;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 4058) 	int node;
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 4059) 	struct kmem_cache_node *n;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4060) 
18bf854117c6c (Christoph Lameter              2014-08-06 16:04:11 -0700 4061) 	for_each_kmem_cache_node(cachep, node, n) {
ca3b9b9173531 (Ravikiran G Thirumalai         2006-02-04 23:27:58 -0800 4062) 		check_irq_on();
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 4063) 		spin_lock_irq(&n->list_lock);
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 4064) 
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 4065) 		total_slabs += n->total_slabs;
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 4066) 		free_slabs += n->free_slabs;
f728b0a5d72ae (Greg Thelen                    2016-12-12 16:41:41 -0800 4067) 		free_objs += n->free_objects;
07a63c41fa1f6 (Aruna Ramakrishna              2016-10-27 17:46:32 -0700 4068) 
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 4069) 		if (n->shared)
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 4070) 			shared_avail += n->shared->avail;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 4071) 
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 4072) 		spin_unlock_irq(&n->list_lock);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4073) 	}
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 4074) 	num_objs = total_slabs * cachep->num;
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 4075) 	active_slabs = total_slabs - free_slabs;
f728b0a5d72ae (Greg Thelen                    2016-12-12 16:41:41 -0800 4076) 	active_objs = num_objs - free_objs;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4077) 
0d7561c61d766 (Glauber Costa                  2012-10-19 18:20:27 +0400 4078) 	sinfo->active_objs = active_objs;
0d7561c61d766 (Glauber Costa                  2012-10-19 18:20:27 +0400 4079) 	sinfo->num_objs = num_objs;
0d7561c61d766 (Glauber Costa                  2012-10-19 18:20:27 +0400 4080) 	sinfo->active_slabs = active_slabs;
bf00bd3458041 (David Rientjes                 2016-12-12 16:41:44 -0800 4081) 	sinfo->num_slabs = total_slabs;
0d7561c61d766 (Glauber Costa                  2012-10-19 18:20:27 +0400 4082) 	sinfo->shared_avail = shared_avail;
0d7561c61d766 (Glauber Costa                  2012-10-19 18:20:27 +0400 4083) 	sinfo->limit = cachep->limit;
0d7561c61d766 (Glauber Costa                  2012-10-19 18:20:27 +0400 4084) 	sinfo->batchcount = cachep->batchcount;
0d7561c61d766 (Glauber Costa                  2012-10-19 18:20:27 +0400 4085) 	sinfo->shared = cachep->shared;
0d7561c61d766 (Glauber Costa                  2012-10-19 18:20:27 +0400 4086) 	sinfo->objects_per_slab = cachep->num;
0d7561c61d766 (Glauber Costa                  2012-10-19 18:20:27 +0400 4087) 	sinfo->cache_order = cachep->gfporder;
0d7561c61d766 (Glauber Costa                  2012-10-19 18:20:27 +0400 4088) }
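/*
 * The fields filled in above feed the /proc/slabinfo columns printed by
 * the common slabinfo code (not shown here); roughly:
 *
 *   name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab>
 *      : tunables <limit> <batchcount> <sharedfactor>
 *      : slabdata <active_slabs> <num_slabs> <sharedavail>
 */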
0d7561c61d766 (Glauber Costa                  2012-10-19 18:20:27 +0400 4089) 
0d7561c61d766 (Glauber Costa                  2012-10-19 18:20:27 +0400 4090) void slabinfo_show_stats(struct seq_file *m, struct kmem_cache *cachep)
0d7561c61d766 (Glauber Costa                  2012-10-19 18:20:27 +0400 4091) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4092) #if STATS
ce8eb6c424c79 (Christoph Lameter              2013-01-10 19:14:19 +0000 4093) 	{			/* node stats */
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4094) 		unsigned long high = cachep->high_mark;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4095) 		unsigned long allocs = cachep->num_allocations;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4096) 		unsigned long grown = cachep->grown;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4097) 		unsigned long reaped = cachep->reaped;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4098) 		unsigned long errors = cachep->errors;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4099) 		unsigned long max_freeable = cachep->max_freeable;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4100) 		unsigned long node_allocs = cachep->node_allocs;
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 4101) 		unsigned long node_frees = cachep->node_frees;
fb7faf3313d52 (Ravikiran G Thirumalai         2006-04-10 22:52:54 -0700 4102) 		unsigned long overflows = cachep->node_overflow;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4103) 
756a025f00091 (Joe Perches                    2016-03-17 14:19:47 -0700 4104) 		seq_printf(m, " : globalstat %7lu %6lu %5lu %4lu %4lu %4lu %4lu %4lu %4lu",
e92dd4fd1aa1c (Joe Perches                    2010-03-26 19:27:58 -0700 4105) 			   allocs, high, grown,
e92dd4fd1aa1c (Joe Perches                    2010-03-26 19:27:58 -0700 4106) 			   reaped, errors, max_freeable, node_allocs,
e92dd4fd1aa1c (Joe Perches                    2010-03-26 19:27:58 -0700 4107) 			   node_frees, overflows);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4108) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4109) 	/* cpu stats */
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4110) 	{
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4111) 		unsigned long allochit = atomic_read(&cachep->allochit);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4112) 		unsigned long allocmiss = atomic_read(&cachep->allocmiss);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4113) 		unsigned long freehit = atomic_read(&cachep->freehit);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4114) 		unsigned long freemiss = atomic_read(&cachep->freemiss);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4115) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4116) 		seq_printf(m, " : cpustat %6lu %6lu %6lu %6lu",
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 4117) 			   allochit, allocmiss, freehit, freemiss);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4118) 	}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4119) #endif
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4120) }
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4121) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4122) #define MAX_SLABINFO_WRITE 128
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4123) /**
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4124)  * slabinfo_write - Tuning for the slab allocator
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4125)  * @file: unused
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4126)  * @buffer: user buffer
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4127)  * @count: data length
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4128)  * @ppos: unused
a862f68a8b360 (Mike Rapoport                  2019-03-05 15:48:42 -0800 4129)  *
a862f68a8b360 (Mike Rapoport                  2019-03-05 15:48:42 -0800 4130)  * Return: %0 on success, negative error code otherwise.
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4131)  */
b7454ad3cfc30 (Glauber Costa                  2012-10-19 18:20:25 +0400 4132) ssize_t slabinfo_write(struct file *file, const char __user *buffer,
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 4133) 		       size_t count, loff_t *ppos)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4134) {
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 4135) 	char kbuf[MAX_SLABINFO_WRITE + 1], *tmp;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4136) 	int limit, batchcount, shared, res;
7a7c381d25067 (Christoph Hellwig              2006-06-23 02:03:17 -0700 4137) 	struct kmem_cache *cachep;
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 4138) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4139) 	if (count > MAX_SLABINFO_WRITE)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4140) 		return -EINVAL;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4141) 	if (copy_from_user(&kbuf, buffer, count))
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4142) 		return -EFAULT;
b28a02de8c70d (Pekka Enberg                   2006-01-08 01:00:37 -0800 4143) 	kbuf[MAX_SLABINFO_WRITE] = '\0';
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4144) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4145) 	tmp = strchr(kbuf, ' ');
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4146) 	if (!tmp)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4147) 		return -EINVAL;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4148) 	*tmp = '\0';
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4149) 	tmp++;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4150) 	if (sscanf(tmp, " %d %d %d", &limit, &batchcount, &shared) != 3)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4151) 		return -EINVAL;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4152) 
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4153) 	/* Find the cache in the chain of caches. */
18004c5d4084d (Christoph Lameter              2012-07-06 15:25:12 -0500 4154) 	mutex_lock(&slab_mutex);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4155) 	res = -EINVAL;
18004c5d4084d (Christoph Lameter              2012-07-06 15:25:12 -0500 4156) 	list_for_each_entry(cachep, &slab_caches, list) {
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4157) 		if (!strcmp(cachep->name, kbuf)) {
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 4158) 			if (limit < 1 || batchcount < 1 ||
a737b3e2fcf96 (Andrew Morton                  2006-03-22 00:08:11 -0800 4159) 					batchcount > limit || shared < 0) {
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 4160) 				res = 0;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4161) 			} else {
e498be7dafd72 (Christoph Lameter              2005-09-09 13:03:32 -0700 4162) 				res = do_tune_cpucache(cachep, limit,
83b519e8b9572 (Pekka Enberg                   2009-06-10 19:40:04 +0300 4163) 						       batchcount, shared,
83b519e8b9572 (Pekka Enberg                   2009-06-10 19:40:04 +0300 4164) 						       GFP_KERNEL);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4165) 			}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4166) 			break;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4167) 		}
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4168) 	}
18004c5d4084d (Christoph Lameter              2012-07-06 15:25:12 -0500 4169) 	mutex_unlock(&slab_mutex);
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4170) 	if (res >= 0)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4171) 		res = count;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4172) 	return res;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4173) }
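/*
 * Usage sketch for the parser above: the expected input is
 * "<cache name> <limit> <batchcount> <shared>", e.g. (as root)
 *
 *   echo "dentry 128 64 8" > /proc/slabinfo
 *
 * Note that a line which parses but fails the sanity checks (limit < 1,
 * batchcount > limit, shared < 0, ...) sets res = 0, so the write still
 * returns count: bad tunables are silently ignored, not rejected.
 */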
871751e25d956 (Al Viro                        2006-03-25 03:06:39 -0800 4174) 
04385fc5e8fff (Kees Cook                      2016-06-23 15:20:59 -0700 4175) #ifdef CONFIG_HARDENED_USERCOPY
04385fc5e8fff (Kees Cook                      2016-06-23 15:20:59 -0700 4176) /*
afcc90f8621e2 (Kees Cook                      2018-01-10 15:17:01 -0800 4177)  * Rejects incorrectly sized objects and objects that are to be copied
afcc90f8621e2 (Kees Cook                      2018-01-10 15:17:01 -0800 4178)  * to/from userspace but do not fall entirely within the containing slab
afcc90f8621e2 (Kees Cook                      2018-01-10 15:17:01 -0800 4179)  * cache's usercopy region.
04385fc5e8fff (Kees Cook                      2016-06-23 15:20:59 -0700 4180)  *
04385fc5e8fff (Kees Cook                      2016-06-23 15:20:59 -0700 4181)  * Returns nothing if the check passes. On failure it either emits a
04385fc5e8fff (Kees Cook                      2016-06-23 15:20:59 -0700 4182)  * warning (see the usercopy_fallback path) or calls usercopy_abort().
04385fc5e8fff (Kees Cook                      2016-06-23 15:20:59 -0700 4183)  */
f4e6e289cb9cf (Kees Cook                      2018-01-10 14:48:22 -0800 4184) void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
f4e6e289cb9cf (Kees Cook                      2018-01-10 14:48:22 -0800 4185) 			 bool to_user)
04385fc5e8fff (Kees Cook                      2016-06-23 15:20:59 -0700 4186) {
04385fc5e8fff (Kees Cook                      2016-06-23 15:20:59 -0700 4187) 	struct kmem_cache *cachep;
04385fc5e8fff (Kees Cook                      2016-06-23 15:20:59 -0700 4188) 	unsigned int objnr;
04385fc5e8fff (Kees Cook                      2016-06-23 15:20:59 -0700 4189) 	unsigned long offset;
04385fc5e8fff (Kees Cook                      2016-06-23 15:20:59 -0700 4190) 
219667c23c68e (Andrey Konovalov               2019-02-20 22:20:25 -0800 4191) 	ptr = kasan_reset_tag(ptr);
219667c23c68e (Andrey Konovalov               2019-02-20 22:20:25 -0800 4192) 
04385fc5e8fff (Kees Cook                      2016-06-23 15:20:59 -0700 4193) 	/* Find and validate object. */
04385fc5e8fff (Kees Cook                      2016-06-23 15:20:59 -0700 4194) 	cachep = page->slab_cache;
04385fc5e8fff (Kees Cook                      2016-06-23 15:20:59 -0700 4195) 	objnr = obj_to_index(cachep, page, (void *)ptr);
04385fc5e8fff (Kees Cook                      2016-06-23 15:20:59 -0700 4196) 	BUG_ON(objnr >= cachep->num);
04385fc5e8fff (Kees Cook                      2016-06-23 15:20:59 -0700 4197) 
04385fc5e8fff (Kees Cook                      2016-06-23 15:20:59 -0700 4198) 	/* Find offset within object. */
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 4199) 	if (is_kfence_address(ptr))
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 4200) 		offset = ptr - kfence_object_start(ptr);
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 4201) 	else
d3fb45f370d92 (Alexander Potapenko            2021-02-25 17:19:11 -0800 4202) 		offset = ptr - index_to_obj(cachep, page, objnr) - obj_offset(cachep);
04385fc5e8fff (Kees Cook                      2016-06-23 15:20:59 -0700 4203) 
afcc90f8621e2 (Kees Cook                      2018-01-10 15:17:01 -0800 4204) 	/* Allow address range falling entirely within usercopy region. */
afcc90f8621e2 (Kees Cook                      2018-01-10 15:17:01 -0800 4205) 	if (offset >= cachep->useroffset &&
afcc90f8621e2 (Kees Cook                      2018-01-10 15:17:01 -0800 4206) 	    offset - cachep->useroffset <= cachep->usersize &&
afcc90f8621e2 (Kees Cook                      2018-01-10 15:17:01 -0800 4207) 	    n <= cachep->useroffset - offset + cachep->usersize)
f4e6e289cb9cf (Kees Cook                      2018-01-10 14:48:22 -0800 4208) 		return;
04385fc5e8fff (Kees Cook                      2016-06-23 15:20:59 -0700 4209) 
afcc90f8621e2 (Kees Cook                      2018-01-10 15:17:01 -0800 4210) 	/*
afcc90f8621e2 (Kees Cook                      2018-01-10 15:17:01 -0800 4211) 	 * If the copy is still within the allocated object, produce
afcc90f8621e2 (Kees Cook                      2018-01-10 15:17:01 -0800 4212) 	 * a warning instead of rejecting the copy. This is intended
afcc90f8621e2 (Kees Cook                      2018-01-10 15:17:01 -0800 4213) 	 * to be a temporary method to find any missing usercopy
afcc90f8621e2 (Kees Cook                      2018-01-10 15:17:01 -0800 4214) 	 * whitelists.
afcc90f8621e2 (Kees Cook                      2018-01-10 15:17:01 -0800 4215) 	 */
2d891fbc3bb68 (Kees Cook                      2017-11-30 13:04:32 -0800 4216) 	if (usercopy_fallback &&
2d891fbc3bb68 (Kees Cook                      2017-11-30 13:04:32 -0800 4217) 	    offset <= cachep->object_size &&
afcc90f8621e2 (Kees Cook                      2018-01-10 15:17:01 -0800 4218) 	    n <= cachep->object_size - offset) {
afcc90f8621e2 (Kees Cook                      2018-01-10 15:17:01 -0800 4219) 		usercopy_warn("SLAB object", cachep->name, to_user, offset, n);
afcc90f8621e2 (Kees Cook                      2018-01-10 15:17:01 -0800 4220) 		return;
afcc90f8621e2 (Kees Cook                      2018-01-10 15:17:01 -0800 4221) 	}
04385fc5e8fff (Kees Cook                      2016-06-23 15:20:59 -0700 4222) 
f4e6e289cb9cf (Kees Cook                      2018-01-10 14:48:22 -0800 4223) 	usercopy_abort("SLAB object", cachep->name, to_user, offset, n);
04385fc5e8fff (Kees Cook                      2016-06-23 15:20:59 -0700 4224) }
04385fc5e8fff (Kees Cook                      2016-06-23 15:20:59 -0700 4225) #endif /* CONFIG_HARDENED_USERCOPY */
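/*
 * Worked example for the usercopy window check above (numbers invented):
 * with useroffset = 64 and usersize = 32, copying n = 16 bytes at
 * offset = 72 passes all three tests (72 >= 64; 72 - 64 = 8 <= 32;
 * 16 <= 64 - 72 + 32 = 24).  Copying n = 32 at the same offset fails the
 * last test and falls through to the fallback warning or usercopy_abort().
 */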
04385fc5e8fff (Kees Cook                      2016-06-23 15:20:59 -0700 4226) 
00e145b6d59a1 (Manfred Spraul                 2005-09-03 15:55:07 -0700 4227) /**
10d1f8cb3965a (Marco Elver                    2019-07-11 20:54:14 -0700 4228)  * __ksize -- Uninstrumented ksize.
87bf4f71af4fb (Randy Dunlap                   2019-10-14 14:12:26 -0700 4229)  * @objp: pointer to the object
00e145b6d59a1 (Manfred Spraul                 2005-09-03 15:55:07 -0700 4230)  *
10d1f8cb3965a (Marco Elver                    2019-07-11 20:54:14 -0700 4231)  * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same
10d1f8cb3965a (Marco Elver                    2019-07-11 20:54:14 -0700 4232)  * safety checks as ksize() with KASAN instrumentation enabled.
87bf4f71af4fb (Randy Dunlap                   2019-10-14 14:12:26 -0700 4233)  *
87bf4f71af4fb (Randy Dunlap                   2019-10-14 14:12:26 -0700 4234)  * Return: size of the actual memory used by @objp in bytes
00e145b6d59a1 (Manfred Spraul                 2005-09-03 15:55:07 -0700 4235)  */
10d1f8cb3965a (Marco Elver                    2019-07-11 20:54:14 -0700 4236) size_t __ksize(const void *objp)
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4237) {
a64b53780ec35 (Kees Cook                      2019-07-11 20:53:26 -0700 4238) 	struct kmem_cache *c;
7ed2f9e663854 (Alexander Potapenko            2016-03-25 14:21:59 -0700 4239) 	size_t size;
7ed2f9e663854 (Alexander Potapenko            2016-03-25 14:21:59 -0700 4240) 
ef8b4520bd9f8 (Christoph Lameter              2007-10-16 01:24:46 -0700 4241) 	BUG_ON(!objp);
ef8b4520bd9f8 (Christoph Lameter              2007-10-16 01:24:46 -0700 4242) 	if (unlikely(objp == ZERO_SIZE_PTR))
00e145b6d59a1 (Manfred Spraul                 2005-09-03 15:55:07 -0700 4243) 		return 0;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4244) 
a64b53780ec35 (Kees Cook                      2019-07-11 20:53:26 -0700 4245) 	c = virt_to_cache(objp);
a64b53780ec35 (Kees Cook                      2019-07-11 20:53:26 -0700 4246) 	size = c ? c->object_size : 0;
7ed2f9e663854 (Alexander Potapenko            2016-03-25 14:21:59 -0700 4247) 
7ed2f9e663854 (Alexander Potapenko            2016-03-25 14:21:59 -0700 4248) 	return size;
^1da177e4c3f4 (Linus Torvalds                 2005-04-16 15:20:36 -0700 4249) }
10d1f8cb3965a (Marco Elver                    2019-07-11 20:54:14 -0700 4250) EXPORT_SYMBOL(__ksize);
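/*
 * Illustration of the size reported above (assuming a typical kmalloc
 * size-class layout): kmalloc(100) is served from the kmalloc-128 cache,
 * so __ksize() on the returned pointer reports the usable object size,
 * 128, rather than the 100 bytes originally requested.
 */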