This commit fixes a performance issue with the algorithm that
determines which geofences are potential candidates for
notifications following a SET operation.
Details
Prior to commit b471873 (10 commits ago) there was a bug where
the "cross" detection was not firing in all cases. This happened
because when looking for candidates for "cross" due to a SET
operation, only the geofences that overlapped the previous
position of the object and the geofences that overlapped the new
position were searched. But, in fact, all of the geofences that
overlapped the union rectangle of the old and new position should
have been searched.
That commit fixed the problem by searching a union rect of the
old and new positions. While this is an accurate solution, it
caused a slowdown on systems that have big/wild position changes
that might cross a huge number of geofences, even when those
geofences did not actually need "cross" detection.
The fix
With this commit the geofences that have a "cross" detection
are stored in a separate tree from those that do not. This
allows for a hybrid of the behavior from before and after b471873.
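As an illustration of the hybrid search, here is a minimal Go sketch.
The Rect, Fence, and Tree types and the crossCandidates helper are
hypothetical stand-ins, not Tile38's actual index code: fences that
need "cross" detection sit in their own tree and are searched with the
union rectangle, while the rest are only searched with the old and new
positions.

    package fence

    import "math"

    // Rect is a hypothetical axis-aligned bounding rectangle.
    type Rect struct{ MinX, MinY, MaxX, MaxY float64 }

    // union returns the smallest rectangle containing both a and b.
    func union(a, b Rect) Rect {
        return Rect{
            MinX: math.Min(a.MinX, b.MinX), MinY: math.Min(a.MinY, b.MinY),
            MaxX: math.Max(a.MaxX, b.MaxX), MaxY: math.Max(a.MaxY, b.MaxY),
        }
    }

    // Fence and Tree are stand-ins for a geofence and a spatial index.
    type Fence struct{ ID string }

    type Tree interface {
        Search(r Rect, iter func(f *Fence) bool)
    }

    // crossCandidates searches the "cross" tree with the union of the old
    // and new positions (the whole path must be swept), while the other
    // tree only needs the two positions. Deduplication is omitted.
    func crossCandidates(crossTree, otherTree Tree, prev, next Rect) []*Fence {
        var out []*Fence
        collect := func(f *Fence) bool { out = append(out, f); return true }
        crossTree.Search(union(prev, next), collect)
        otherTree.Search(prev, collect)
        otherTree.Search(next, collect)
        return out
    }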
Fixes #583
This commit addresses issue #230, where an AOF file will sometimes
not load due to the file being padded with trailing zeros. It's
uncertain what is causing this corruption, but it appears to be
coming from outside of the tile38-server process. I suspect it's
due to some block store layer in Kubernetes/Docker cloud
environments.
This fix allows Tile38 to start up by discovering the trailing
zeros while loading the AOF and safely truncating the file so as
not to include the zeros in the future.
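A rough sketch of the idea in Go follows. This is not Tile38's actual
loader code; the function name and buffer size are illustrative. It
scans backward from the end of the file for the last non-zero byte and
truncates there.

    package aof

    import (
        "bytes"
        "os"
    )

    // truncateTrailingZeros removes any run of zero bytes at the end of
    // the file so the padding is not carried forward on future appends.
    func truncateTrailingZeros(path string) error {
        f, err := os.OpenFile(path, os.O_RDWR, 0666)
        if err != nil {
            return err
        }
        defer f.Close()
        info, err := f.Stat()
        if err != nil {
            return err
        }
        end := info.Size()
        buf := make([]byte, 4096)
        for end > 0 {
            n := int64(len(buf))
            if n > end {
                n = end
            }
            // Read the chunk that ends at the current offset.
            if _, err := f.ReadAt(buf[:n], end-n); err != nil {
                return err
            }
            // Count how much of the chunk remains after trimming zeros.
            keep := int64(len(bytes.TrimRight(buf[:n], "\x00")))
            end -= n - keep
            if keep != 0 {
                break // found the last non-zero byte
            }
        }
        if end == info.Size() {
            return nil // no trailing zeros
        }
        return f.Truncate(end)
    }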
This commit fixes an issue that happens when running SCAN on a
collection that has objects with fields, causing field values
to be mismatched with their respective keys.
This only occurred with json output, and is a regression from #534.
Fixes #569
This commit fixes an issue where the OUTPUT command requires
authentication when a server password has been set with
CONFIG SET requirepass. This was causing problems with clients
that use json responses, like the tile38-cli.
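The intended behavior could be sketched roughly like this (a
hypothetical helper, not the actual Tile38 handler): when requirepass
is set, a connection that has not yet authenticated is still permitted
to send AUTH and OUTPUT, so a client can negotiate the response format
before logging in.

    package server

    import "strings"

    // allowedWithoutAuth reports whether a command may be run on a
    // connection that has not yet authenticated against requirepass.
    func allowedWithoutAuth(cmd string) bool {
        switch strings.ToLower(cmd) {
        case "auth", "output":
            return true
        }
        return false
    }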
Fixes #564
This commit fixes a case where a roaming geofence will not fire
a "faraway" event when it's supposed to.
The fix required rewriting the nearby/faraway detection logic. It
is now much more accurate and takes overall less memory, but it's
also a little slower per operation because each object's proximity
is checked twice per update: once to compare the old object's
surroundings, and once to evaluate the new object's. The two lists
are then used to generate accurate "nearby" and "faraway" results.
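The comparison step amounts to a set difference between the fences
near the old position and those near the new one. A small sketch of
just that step follows (hypothetical names; the real logic works on
richer objects than string IDs):

    package roam

    // nearbyFaraway compares the set of fence IDs within roaming
    // distance of the old position with the set for the new position.
    // IDs only in the new set produce "nearby" events; IDs only in the
    // old set produce "faraway" events.
    func nearbyFaraway(oldNear, newNear map[string]bool) (nearby, faraway []string) {
        for id := range newNear {
            if !oldNear[id] {
                nearby = append(nearby, id)
            }
        }
        for id := range oldNear {
            if !newNear[id] {
                faraway = append(faraway, id)
            }
        }
        return nearby, faraway
    }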
This commit addresses an issue that began on 1.19 where the
deprecated tile38 native line protocol was removed in favor of
the more robust resp protocol. In turn, the tile38-cli required
that all args be quoteless or quote-escaped.
The commit ensures that the server returns the correct error
message and also loosens the strictness of the need for quoted
arguments in the tile38-cli.
fixes #513
This commit cleans up various Go code in the internal directory.
- Ensures comments on exported functions
- Changes all *Server receivers in all files to be "s", instead
  of mixed "c", "s", "server", etc.
- Silences Go warnings for if/else with returns.
- Cleans up import ordering.
This commit fixes an issue where Tile38 was using lots of extra
memory to track objects that are marked to expire. This was
creating problems with applications that set big TTLs.
How it worked before:
Every collection has a unique hashmap that stores expiration
timestamps for every object in that collection. Along with
the hashmaps, there's also one big server-wide list that gets
appended to every time a new SET+EX is performed.
From a background routine, this list is looped over at least
10 times per second and is randomly searched for potential
candidates that might need expiring. The routine then removes
those entries from the list and tests if the objects matching
the entries have actually expired. If so, those objects are
deleted from the database. When at least 25% of
the 20 candidates are deleted the loop is immediately
continued, otherwise the loop backs off with a 100ms pause.
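In Go terms, the old sweep looked roughly like the sketch below. The
entry and store types and their methods are hypothetical, but the
20-candidate sample, the 25% threshold, and the 100ms backoff mirror
the description above.

    package expire

    import "time"

    // entry is one queued SET+EX record (hypothetical shape).
    type entry struct {
        key, id   string
        expiresAt time.Time
    }

    // store abstracts the pieces of the server this sketch needs.
    type store interface {
        randomExpiryEntries(n int) []entry // pulls entries off the big list
        deleteObject(key, id string)
    }

    // sweepLoop tests ~20 random candidates per pass, deletes the ones
    // that have expired, and backs off for 100ms whenever fewer than 25%
    // of the sample turned out to be expired.
    func sweepLoop(s store) {
        for {
            candidates := s.randomExpiryEntries(20)
            expired := 0
            for _, e := range candidates {
                if time.Now().After(e.expiresAt) {
                    s.deleteObject(e.key, e.id)
                    expired++
                }
            }
            if expired < len(candidates)/4 {
                time.Sleep(100 * time.Millisecond)
            }
        }
    }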
Why this was a problem:
The list grows one entry for every SET+EX. When TTLs are long,
like 24 hours or more, it would take at least that much time
before the entry is removed. So for databases that have objects
that use TTLs and are updated often this could lead to a very
large list.
How it was fixed:
The list was removed and the hashmap is now searched randomly. This
required a new hashmap implementation, as the built-in Go map
does not provide an operation for randomly getting entries. The
chosen implementation is a robinhood-hash because it provides
open-addressing, which makes for simple random bucket selections.
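The payoff of open addressing is that every entry lives in one flat
bucket array, so a random entry can be picked by choosing a random
index and walking to the next occupied slot. A sketch of just that
part (the bucket layout here is hypothetical, not the actual
implementation):

    package rhh

    import "math/rand"

    // bucket is a single open-addressing slot (hypothetical layout).
    type bucket struct {
        occupied bool
        key      string
        value    int64 // e.g. an expiration timestamp
    }

    // Map is a stand-in for the robinhood-hash map.
    type Map struct {
        buckets []bucket
    }

    // RandomEntry returns an arbitrary occupied slot by starting at a
    // random index and scanning forward, or false if the map is empty.
    func (m *Map) RandomEntry() (key string, value int64, ok bool) {
        n := len(m.buckets)
        if n == 0 {
            return "", 0, false
        }
        start := rand.Intn(n)
        for i := 0; i < n; i++ {
            b := m.buckets[(start+i)%n]
            if b.occupied {
                return b.key, b.value, true
            }
        }
        return "", 0, false
    }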
Issue #502
Each MQTT hook establishes a separate connection to the MQTT
broker. If their clientIds are all equal, the MQTT broker will
disconnect the clients, because the protocol does not allow two
connected clients with the same name.
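For example, with the Eclipse Paho Go client (the broker URL handling
and the "tile38-" prefix here are just placeholders), each hook would
get its own client ID, roughly like this:

    package hooks

    import (
        "fmt"

        mqtt "github.com/eclipse/paho.mqtt.golang"
    )

    // connectHook opens a dedicated broker connection for one hook,
    // using a client ID derived from the hook name so that two hooks
    // never collide and knock each other offline.
    func connectHook(hookName, brokerURL string) (mqtt.Client, error) {
        opts := mqtt.NewClientOptions().
            AddBroker(brokerURL).
            SetClientID(fmt.Sprintf("tile38-%s", hookName))
        client := mqtt.NewClient(opts)
        if token := client.Connect(); token.Wait() && token.Error() != nil {
            return nil, token.Error()
        }
        return client, nil
    }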