Commit Graph

10 Commits

Author SHA1 Message Date
tidwall 9e68703841 Update expiration logic
This commit changes the logic for managing the expiration of
objects in the database.

Before: There was a server-wide hashmap that stored the
collection key, id, and expiration timestamp for all objects
that had a TTL. The hashmap was occasionally probed at 20
random positions, looking for objects that had expired. Those
expired objects were immediately deleted. If 5 or more objects
were deleted, the probe happened again with no delay; if fewer
than 5 were deleted, there was a 1/10th of a second delay
before the next probe.

Now: Rather than a server-wide hashmap, each collection has
its own ordered priority queue that stores objects with TTLs.
Rather than probing, there is a background routine that
executes every 1/10th of a second, which pops the expired
objects from the collection queues, and deletes them.

The collection/queue method is a more stable approach than
the hashmap/probing method. With probing, we can run into
major cache misses in cases where TTL durations are wide,
such as hours or days. This may cause the system to
occasionally fall behind, leaving should-be-expired objects
in memory. With a queue there are no cache misses; all
objects that should be expired are removed right away,
regardless of the TTL durations.
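
A minimal Go sketch of the queue-based approach described
above (the names expireEntry, expireQueue, collection, and
sweep are illustrative assumptions, not Tile38's actual
identifiers):

```go
package expire

import (
	"container/heap"
	"sync"
	"time"
)

// expireEntry pairs an object id with its expiration time.
// All names here are illustrative, not Tile38's actual types.
type expireEntry struct {
	id   string
	when time.Time
}

// expireQueue is a min-heap ordered by expiration time.
// Each collection owns one of these.
type expireQueue []expireEntry

func (q expireQueue) Len() int           { return len(q) }
func (q expireQueue) Less(i, j int) bool { return q[i].when.Before(q[j].when) }
func (q expireQueue) Swap(i, j int)      { q[i], q[j] = q[j], q[i] }
func (q *expireQueue) Push(x interface{}) {
	*q = append(*q, x.(expireEntry))
}
func (q *expireQueue) Pop() interface{} {
	old := *q
	e := old[len(old)-1]
	*q = old[:len(old)-1]
	return e
}

type collection struct {
	mu      sync.Mutex
	objects map[string]string // id -> object, simplified
	expires expireQueue
}

// setWithTTL stores an object and queues its expiration.
func (c *collection) setWithTTL(id, obj string, ttl time.Duration) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.objects[id] = obj
	heap.Push(&c.expires, expireEntry{id: id, when: time.Now().Add(ttl)})
}

// sweep pops every entry whose expiration time has passed and
// deletes the matching object. A real implementation would also
// re-check that the entry is still current, since the object may
// have been deleted or had its TTL refreshed in the meantime.
func (c *collection) sweep(now time.Time) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for len(c.expires) > 0 && !c.expires[0].when.After(now) {
		e := heap.Pop(&c.expires).(expireEntry)
		delete(c.objects, e.id)
	}
}

// backgroundExpirer sweeps every collection every 1/10th of a second.
func backgroundExpirer(cols []*collection) {
	for range time.Tick(time.Second / 10) {
		now := time.Now()
		for _, c := range cols {
			c.sweep(now)
		}
	}
}
```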

Fixes #616
2021-07-12 13:37:50 -07:00
tidwall df8d3d7b12 Close follower files before finishing aofshrink
fixes #449
2021-06-13 07:53:27 -07:00
tidwall 6b08f7fa9e Code cleanup
- Removed unused functions and variables
- Wrapped client formatted errors
- Updated deprecated packages
- Changed suggested code patterns
2021-03-31 08:13:44 -07:00
tidwall 9ce20331e4 Fixed fields being shuffled after AOFSHRINK 2020-11-09 14:45:40 -07:00
tidwall 474ff810c0 Fixed panic on AOFSHRINK
closes #508
2019-11-17 07:25:25 -07:00
tidwall 23b016d192 Fix excessive memory usage for objects with TTLs
This commit fixes an issue where Tile38 was using lots of extra
memory to track objects that are marked to expire. This was
creating problems with applications that set big TTLs.

How it worked before:

Every collection has its own hashmap that stores expiration
timestamps for every object in that collection. Along with
the hashmaps, there's also one big server-wide list that gets
appended to every time a new SET+EX is performed.

From a background routine, this list is looped over at least
10 times per second and is randomly searched for potential
candidates that might need expiring. The routine then removes
those entries from the list and tests whether the objects
matching the entries have actually expired. If so, these
objects are deleted from the database. When at least 25% of
the 20 candidates are deleted, the loop continues immediately;
otherwise it backs off with a 100ms pause.

Why this was a problem:

The list grows by one entry for every SET+EX. When TTLs are
long, like 24 hours or more, it would take at least that much
time before the entry is removed. So for databases that have
objects that use TTLs and are updated often, this could lead
to a very large list.

How it was fixed:

The list was removed and the hashmap is now searched randomly.
This required a new hashmap implementation, as the built-in Go
map does not provide an operation for randomly getting entries.
The chosen implementation is a robinhood hash because it
provides open addressing, which makes for simple random bucket
selection.
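
Open addressing keeps all entries in one flat bucket array, so
picking a random entry amounts to picking a random index and
scanning forward to the next occupied slot. A rough Go sketch
of that idea (the Map type, bucket layout, and RandomEntry
method are hypothetical illustrations, not the actual
robinhood-hash implementation this commit refers to):

```go
package rhh

// bucket is a single open-addressed slot. In a robinhood hash all
// entries live in one flat array like this; the names and layout
// here are hypothetical, for illustration only.
type bucket struct {
	used bool
	key  string
	val  int64 // e.g. an expiration timestamp
}

// Map is a stand-in for the open-addressed hashmap described above.
type Map struct {
	buckets []bucket
	length  int // number of occupied buckets
}

// RandomEntry returns a pseudo-randomly chosen key/value pair by
// starting at a random bucket index and scanning forward (wrapping
// around) until it finds an occupied slot.
func (m *Map) RandomEntry(seed uint64) (key string, val int64, ok bool) {
	if m.length == 0 || len(m.buckets) == 0 {
		return "", 0, false
	}
	i := int(seed % uint64(len(m.buckets)))
	for {
		if m.buckets[i].used {
			return m.buckets[i].key, m.buckets[i].val, true
		}
		i++
		if i == len(m.buckets) {
			i = 0
		}
	}
}
```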

Issue #502
2019-10-29 11:19:33 -07:00
tidwall 0aecef6a5c Added TIMEOUT command 2019-04-24 05:09:41 -07:00
tidwall 5333fab870 Recycle aof buffer 2019-03-10 10:48:14 -07:00
tidwall 07bae979a5 Added Cursor interface 2018-11-02 06:09:56 -07:00
tidwall 555e47036c Replaced net package with evio
- Added threads startup flag
- Replaced net package with evio
- Refactored controller into server
2018-10-28 15:51:47 -07:00