Metadata-Version: 1.1
Name: pylru
Version: 1.0.9
Summary: A least recently used (LRU) cache implementation
Home-page: https://github.com/jlhutch/pylru
Author: Jay Hutchinson
Author-email: jlhutch+pylru@gmail.com
License: UNKNOWN
Description:

PyLRU
=====

A least recently used (LRU) cache for Python.

Introduction
============

Pylru implements a true LRU cache along with several support classes. The cache is efficient and written in pure Python. It works with Python 2.6+ including the 3.x series. Basic operations (lookup, insert, delete) all run in a constant amount of time. Pylru provides a cache class with a simple dict interface. It also provides classes to wrap any object that has a dict interface with a cache. Both write-through and write-back semantics are supported. Pylru also provides classes to wrap functions in a similar way, including a function decorator.

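The constant-time LRU behaviour described above is typically achieved by combining a hash table with a linked list, which is what the standard library's ``OrderedDict`` does. The sketch below (the ``MiniLRU`` name and code are illustrative, not pylru's implementation) shows the semantics: lookups and inserts move a key to the most-recently-used position, and inserting into a full cache evicts the least recently used pair:

```python
from collections import OrderedDict

class MiniLRU:
    # Illustrative sketch of LRU semantics, not pylru's actual code.
    def __init__(self, size):
        self.size = size
        self._d = OrderedDict()

    def __getitem__(self, key):
        self._d.move_to_end(key)      # lookup moves key to most recent
        return self._d[key]

    def __setitem__(self, key, value):
        if key in self._d:
            self._d.move_to_end(key)  # re-insert also refreshes recency
        self._d[key] = value
        if len(self._d) > self.size:  # evict the least recently used
            self._d.popitem(last=False)

    def __contains__(self, key):      # membership test, no reordering
        return key in self._d

cache = MiniLRU(2)
cache['a'] = 1
cache['b'] = 2
_ = cache['a']       # 'a' is now the most recently used
cache['c'] = 3       # evicts 'b', the least recently used
```

Because every operation is a dict lookup plus a list relink, each runs in constant time, matching the guarantee stated above.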
You can install pylru, or you can just copy the source file pylru.py and use it directly in your own project. The rest of this file explains what the pylru module provides and how to use it. If you want to know more, examine pylru.py. The code is straightforward and well commented.

Usage
=====

lrucache
--------

An lrucache object has a dictionary-like interface and can be used in the same way::

    import pylru

    size = 100         # Size of the cache. The maximum number of key/value
                       # pairs you want the cache to hold.

    cache = pylru.lrucache(size)
                       # Create a cache object.

    value = cache[key] # Lookup a value given its key.
    cache[key] = value # Insert a key/value pair.
    del cache[key]     # Delete a value given its key.
                       #
                       # These three operations affect the order of the cache.
                       # Lookup and insert both move the key/value to the most
                       # recently used position. Delete (obviously) removes a
                       # key/value from whatever position it was in.

    key in cache       # Test for membership. Does not affect the cache order.

    value = cache.peek(key)
                       # Lookup a value given its key. Does not affect the
                       # cache order.

    cache.keys()       # Return an iterator over the keys in the cache.
    cache.values()     # Return an iterator over the values in the cache.
    cache.items()      # Return an iterator over the (key, value) pairs in the
                       # cache.
                       #
                       # These calls have no effect on the cache order.
                       # lrucache is scan resistant when these calls are used.
                       # The iterators iterate over their respective elements
                       # in the order of most recently used to least recently
                       # used.
                       #
                       # WARNING - While these iterators do not affect the
                       # cache order, the lookup, insert, and delete operations
                       # do. The result of changing the cache's order during
                       # iteration is undefined. If you really need to do
                       # something of the sort use list(cache.keys()), then
                       # loop over the list elements.

    for key in cache:  # Caches support __iter__ so you can use them directly
        pass           # in a for loop to loop over the keys just like
                       # cache.keys()

    cache.size()       # Returns the size of the cache.
    cache.size(x)      # Changes the size of the cache. x MUST be greater than
                       # zero. Returns the new size x.

    x = len(cache)     # Returns the number of items stored in the cache.
                       # x will be less than or equal to cache.size().

    cache.clear()      # Remove all items from the cache.

Lrucache takes an optional callback function as a second argument. Since the cache has a fixed size, some operations (such as an insertion) may cause the least recently used key/value pair to be ejected. If the optional callback function is given, it will be called when this occurs. For example::

    import pylru

    def callback(key, value):
        print(key, value)           # A dumb callback that just prints the
                                    # ejected key/value pair.

    size = 100
    cache = pylru.lrucache(size, callback)

    # Use the cache... When it gets full some pairs may be ejected due to
    # the fixed cache size. But, not before the callback is called to let you
    # know.

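The ejection-callback behaviour can be demonstrated with a small stdlib-only sketch (the ``make_cache`` helper below is invented for illustration; in pylru you would simply pass the callback to ``lrucache``):

```python
from collections import OrderedDict

def make_cache(size, callback):
    # Illustrative sketch: a bounded insert that reports ejected pairs.
    d = OrderedDict()
    def insert(key, value):
        d[key] = value
        d.move_to_end(key)
        if len(d) > size:
            old_key, old_value = d.popitem(last=False)
            callback(old_key, old_value)   # called with the ejected pair
    return d, insert

ejected = []
d, insert = make_cache(2, lambda k, v: ejected.append((k, v)))
insert('a', 1)
insert('b', 2)
insert('c', 3)   # cache is full: ('a', 1) is ejected and reported
```

After the third insert, ``ejected`` holds the single pair that was pushed out, just as pylru's callback would see it.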
WriteThroughCacheManager
------------------------

Often a cache is used to speed up access to some other high-latency object. For example, imagine you have a backend storage object that reads/writes from/to a remote server. Let us call this object *store*. If store has a dictionary interface, a cache manager class can be used to compose the store object and an lrucache. The manager object exposes a dictionary interface. The programmer can then interact with the manager object as if it were the store. The manager object takes care of communicating with the store and caching key/value pairs in the lrucache object.

Two different semantics are supported: write-through (the WriteThroughCacheManager class) and write-back (the WriteBackCacheManager class). With write-through, lookups from the store are cached for future lookups. Insertions and deletions are updated in the cache and written through to the store immediately. Write-back works the same way, but insertions are updated only in the cache. These "dirty" key/value pairs will only be written to the underlying store when they are ejected from the cache or when a sync is performed. The WriteBackCacheManager class is discussed more below.

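The write-through rule above (reads fill the cache, writes land in both places immediately) can be sketched with plain dicts standing in for the cache and the store. The ``MiniWriteThrough`` class below is an invented illustration, not pylru's implementation; in particular its cache is unbounded, whereas pylru's is an LRU:

```python
class MiniWriteThrough:
    # Illustrative sketch of write-through semantics only.
    def __init__(self, store):
        self.store = store
        self.cache = {}          # unbounded here; pylru's is an lrucache

    def __getitem__(self, key):
        if key not in self.cache:          # miss: read through the store
            self.cache[key] = self.store[key]
        return self.cache[key]

    def __setitem__(self, key, value):     # written through immediately
        self.cache[key] = value
        self.store[key] = value

    def __delitem__(self, key):
        self.cache.pop(key, None)
        del self.store[key]

store = {'a': 1}
cached = MiniWriteThrough(store)
cached['b'] = 2          # the store sees the insertion right away
```

Because every write reaches the store at once, the store is never stale; the cache only exists to make repeat lookups cheap.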
The WriteThroughCacheManager class takes as arguments the store object you want to compose and the cache size. It then creates an LRU cache and automatically manages it::

    import pylru

    size = 100
    cached = pylru.WriteThroughCacheManager(store, size)
                        # Or
    cached = pylru.lruwrap(store, size)
                        # This is a factory function that does the same thing.

    # Now the object *cached* can be used just like store, except caching is
    # automatically handled.

    value = cached[key] # Lookup a value given its key.
    cached[key] = value # Insert a key/value pair.
    del cached[key]     # Delete a value given its key.

    key in cached       # Test for membership. Does not affect the cache order.

    cached.keys()       # Returns store.keys()
    cached.values()     # Returns store.values()
    cached.items()      # Returns store.items()
                        #
                        # These calls have no effect on the cache order.
                        # The iterators iterate over their respective elements
                        # in the order dictated by store.

    for key in cached:  # Same as store.keys()
        pass

    cached.size()       # Returns the size of the cache.
    cached.size(x)      # Changes the size of the cache. x MUST be greater than
                        # zero. Returns the new size x.

    x = len(cached)     # Returns the number of items stored in the store.

    cached.clear()      # Remove all items from the store and cache.

WriteBackCacheManager
---------------------

Similar to the WriteThroughCacheManager class except write-back semantics are used to manage the cache. The programmer is responsible for one more thing as well: they MUST call sync() when they are finished. This ensures that the last of the "dirty" entries in the cache are written back. This is not too bad, as WriteBackCacheManager objects can be used in with statements. More about that below::

    import pylru

    size = 100
    cached = pylru.WriteBackCacheManager(store, size)
                        # Or
    cached = pylru.lruwrap(store, size, True)
                        # This is a factory function that does the same thing.

    value = cached[key] # Lookup a value given its key.
    cached[key] = value # Insert a key/value pair.
    del cached[key]     # Delete a value given its key.

    key in cached       # Test for membership. Does not affect the cache order.

    cached.keys()       # Return an iterator over the keys in the cache/store.
    cached.values()     # Return an iterator over the values in the
                        # cache/store.
    cached.items()      # Return an iterator over the (key, value) pairs in
                        # the cache/store.
                        #
                        # The iterators iterate over a consistent view of the
                        # respective elements. That is, except for the order,
                        # the elements are the same as those returned if you
                        # first called sync() then called
                        # store.keys() [or values() or items()].
                        #
                        # These calls have no effect on the cache order.
                        # The iterators iterate over their respective elements
                        # in arbitrary order.
                        #
                        # WARNING - While these iterators do not affect the
                        # cache order, the lookup, insert, and delete
                        # operations do. The result of changing the cache's
                        # order during iteration is undefined. If you really
                        # need to do something of the sort use
                        # list(cached.keys()), then loop over the list
                        # elements.

    for key in cached:  # Same as cached.keys()
        pass

    cached.size()       # Returns the size of the cache.
    cached.size(x)      # Changes the size of the cache. x MUST be greater than
                        # zero. Returns the new size x.

    cached.clear()      # Remove all items from the store and cache.

    cached.sync()       # Make the store and cache consistent. Write all
                        # cached changes to the store that have not been
                        # written yet.

    cached.flush()      # Calls sync() then clears the cache.

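The behaviour above, where insertions touch only the cache until sync() writes the dirty entries back, can be sketched with plain dicts. The ``MiniWriteBack`` class is an invented illustration under those assumptions; pylru's real class also bounds the cache and writes entries back when they are ejected:

```python
class MiniWriteBack:
    # Illustrative sketch of write-back semantics only.
    def __init__(self, store):
        self.store = store
        self.cache = {}
        self.dirty = set()               # keys not yet written back

    def __setitem__(self, key, value):   # cache only; store untouched
        self.cache[key] = value
        self.dirty.add(key)

    def sync(self):                      # write back all dirty entries
        for key in self.dirty:
            self.store[key] = self.cache[key]
        self.dirty.clear()

store = {}
cached = MiniWriteBack(store)
cached['a'] = 1               # the store has not seen this yet
before_sync = dict(store)
cached.sync()                 # now the store is consistent again
```

This is why the final sync() is mandatory: until it runs, the store and the cache can disagree.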
To help the programmer ensure that the final sync() is called, WriteBackCacheManager objects can be used in a with statement::

    with pylru.WriteBackCacheManager(store, size) as cached:
        # Use cached just like you would store. sync() is called
        # automatically for you when leaving the with statement block.
        cached[key] = value

FunctionCacheManager
--------------------

FunctionCacheManager allows you to compose a function with an lrucache. The resulting object can be called just like the original function, but the results are cached to speed up future calls. The function's arguments must be hashable::

    import pylru

    def square(x):
        return x * x

    size = 100
    cached = pylru.FunctionCacheManager(square, size)

    y = cached(7)

    # The results of cached are the same as square, but automatically cached
    # to speed up future calls.

    cached.size()      # Returns the size of the cache.
    cached.size(x)     # Changes the size of the cache. x MUST be greater than
                       # zero. Returns the new size x.

    cached.clear()     # Remove all items from the cache.

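The hashability requirement above follows from how function caching works: results are stored in a dict keyed on the argument tuple. The ``make_cached`` helper below is an invented, unbounded sketch of that idea (FunctionCacheManager additionally bounds the cache with an LRU):

```python
def make_cached(func):
    # Illustrative sketch of function-result caching, not pylru's code.
    results = {}
    calls = []                      # records actual invocations of func
    def wrapper(*args):
        if args not in results:     # the args tuple must be hashable
            calls.append(args)
            results[args] = func(*args)
        return results[args]
    wrapper.calls = calls
    return wrapper

square = make_cached(lambda x: x * x)
y1 = square(7)
y2 = square(7)    # served from the cache; the function runs only once
```

An unhashable argument (a list, say) would raise TypeError at the dict lookup, which is exactly why the documented classes require hashable arguments.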
lrudecorator
------------

PyLRU also provides a function decorator. This is basically the same functionality as FunctionCacheManager, but in the form of a decorator::

    from pylru import lrudecorator

    @lrudecorator(100)
    def square(x):
        return x * x

    # The results of the square function are cached to speed up future calls.

    square.size()      # Returns the size of the cache.
    square.size(x)     # Changes the size of the cache. x MUST be greater than
                       # zero. Returns the new size x.

    square.clear()     # Remove all items from the cache.

Platform: UNKNOWN
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: GNU General Public License (GPL)
Classifier: Operating System :: OS Independent
Classifier: Topic :: Software Development :: Libraries :: Python Modules