The core package uses global variables that prevent running more
than one Tile38 instance in the same process.
Move the core variables into the server.Options type so that they
are set uniquely per Server instance.
The build variables are still present in the core package.
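A rough sketch of the shape of the change (field names are illustrative,
not the actual server.Options definition):

    package server

    // Options carries per-instance settings that were previously
    // package-level globals in core. Field names are illustrative.
    type Options struct {
        DevMode           bool
        ShowDebugMessages bool
    }

    // Serve runs one Tile38 instance from its own Options value, so
    // two instances in the same process no longer share global state.
    func Serve(opts Options) {
        if opts.ShowDebugMessages {
            // per-instance debug logging
        }
    }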
Prior to this commit all objects in the Collection data structures
were boxed in a Go interface{}, which adds an extra 8 bytes per
object and requires a type assertion to unbox.
Go 1.18, released in early 2022, introduced generics, which allow
for storing the objects without boxing. This provides an extra
boost in performance and a lower in-memory footprint.
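A minimal illustration of the difference, not the actual Collection code:

    package main

    import "fmt"

    // Boxed: the value is stored as an interface{}, which costs an
    // extra word per object and needs a type assertion to read back.
    type boxedItem struct {
        id  string
        obj interface{}
    }

    // Generic (Go 1.18+): the value is stored directly, with no boxing
    // and no assertion required.
    type item[T any] struct {
        id  string
        obj T
    }

    func main() {
        b := boxedItem{id: "truck1", obj: "POINT(-112 33)"}
        s := b.obj.(string) // unboxing assertion
        g := item[string]{id: "truck1", obj: "POINT(-112 33)"}
        fmt.Println(s, g.obj) // g.obj is already a string
    }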
Previously, ?ssl=true required the user to provide a cacertfile,
removing the option to use the host's CA set.
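For example, something like the following (hook name, broker, and fence
are for illustration only) now works without a cacertfile, falling back
to the host's CA set:
SETHOOK warehouse kafka://broker.example.com:9093/tile38?ssl=true NEARBY fleet FENCE POINT 33.5 -112.2 500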
bumping sarama to version 1.36.0
bumping alpine to 3.16.2
fix: tls path
Each MATCH is an inclusive OR, thus
WITHIN fleet MATCH train* truck* BOUNDS 33 -112 34 -113
will find all trains and trucks that are within the provided bounds.
This commit fixes an issue where Tile38 will fail to start
because the AOF file contains a partially written command, which
is caused by the server not having enough disk space to complete
the previous write.
This was discovered and reported by Theresa D on the Tile38
Slack channel.
This commit fixes an issue where the server may start up without
a "server_id" assigned, which in turn will cause a follower to
be unable to connect.
This issue is caused by including a pre-generated "data/config"
file that does not include the "server_id" field.
Move the LogJSON check into the log function so that the calling
function can be inlined. This is helpful for hot functions like
`log.Debug`, where it's likely that the `-vv` flag is not set, and
it avoids the extra function call.
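A rough sketch of the pattern (names are illustrative, not the exact
Tile38 log package):

    package log

    var Level int    // 2 or higher when -vv is set
    var LogJSON bool // whether entries are written as JSON

    // Debug stays tiny so the compiler can inline it at call sites.
    // When -vv is not set, the inlined level check fails immediately
    // and logPrintf is never called.
    func Debug(format string, args ...interface{}) {
        if Level >= 2 {
            logPrintf(format, args...)
        }
    }

    // logPrintf carries the LogJSON branch and the formatting work,
    // keeping that cost out of the inlinable wrapper above.
    func logPrintf(format string, args ...interface{}) {
        if LogJSON {
            // emit a JSON entry
            return
        }
        // emit a plain-text entry
    }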
This commit allows for buffering any GeoJSON object.
For example:
INTERSECTS fleet BUFFER 1000 OBJECT {...LineString...}
This will add a 1 kilometer buffer to the LineString and
search the 'fleet' collection for all objects that
intersect the buffered LineString.
This commit also allows for performing INTERSECTS with a POINT
type, thus allowing for a polygon-over-point operation, which is
an inverted point-in-polygon.
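For example, a query along these lines (coordinates for illustration):
INTERSECTS fleet POINT 33.462 -112.268
returns every object in 'fleet' whose geometry intersects, i.e. contains,
that point.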
With this commit the server accepts incoming connections even before
the AOF dataset has been loaded into memory, though only a very
limited command set is allowed.
Allowed commands:
PING, ECHO, OUTPUT, QUIT
All other commands will return:
LOADING Tile38 is loading the dataset in memory
This is useful for establishing connections for the purpose of
checking process and network state.
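An illustrative RESP exchange while the AOF is still loading (replies
shown in RESP wire form):
PING
+PONG
GET fleet truck1
-LOADING Tile38 is loading the dataset in memory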
This commit fixes an issue where a search command with a WHERE
clause using the "z" field would not match correctly for points
that were contained inside a GeoJSON Feature type.
Tile38 now extracts the Z coordinate from Point and Feature/Point
types.
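For example, a query like the following (values for illustration) now
matches a Feature whose Point geometry carries a Z coordinate of 250:
WITHIN fleet WHERE z 0 1000 BOUNDS 33 -112 34 -113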
Fixes #622
This commit changes the collection type that holds all of the
hooks from a hashmap to a btree. This allows for better
flexibility for operations that need to perform range searches
and scanning of the collection.
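A minimal sketch of the kind of range search this enables, assuming the
github.com/tidwall/btree package (the hook type and naming scheme here
are illustrative):

    package main

    import (
        "fmt"
        "strings"

        "github.com/tidwall/btree"
    )

    type Hook struct{ Name string }

    func main() {
        var hooks btree.Map[string, *Hook]
        hooks.Set("warehouse:1", &Hook{Name: "warehouse:1"})
        hooks.Set("warehouse:2", &Hook{Name: "warehouse:2"})
        hooks.Set("yard:1", &Hook{Name: "yard:1"})

        // Range search: visit only the hooks whose names start with
        // "warehouse:", something a hashmap cannot do without scanning
        // every entry.
        hooks.Ascend("warehouse:", func(name string, h *Hook) bool {
            if !strings.HasPrefix(name, "warehouse:") {
                return false // past the prefix, stop iterating
            }
            fmt.Println(h.Name)
            return true
        })
    }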
This commit ensures that the TIMEOUT is always checked prior to
returning data to the client, and that the elapsed command time
cannot be greater than the timeout value.
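For example, with 'TIMEOUT 0.1 SCAN fleet' the client now reliably
receives a timeout error rather than the scan results whenever the scan
takes longer than 0.1 seconds (values for illustration).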
This commit changes the logic for managing the expiration of
objects in the database.
Before: There was a server-wide hashmap that stored the
collection key, id, and expiration timestamp for all objects
that had a TTL. The hashmap was occasionally probed at 20
random positions, looking for objects that had expired. Those
expired objects were immediately deleted, and if 5 or more
objects were deleted, then the probe happened again with no
delay. If fewer than 5 objects were deleted, there was a
1/10th of a second delay before the next probe.
Now: Rather than a server-wide hashmap, each collection has
its own ordered priority queue that stores objects with TTLs.
Rather than probing, there is a background routine that
executes every 1/10th of a second, which pops the expired
objects from the collection queues, and deletes them.
The collection/queue method is a more stable approach than
the hashmap/probing method. With probing, we can run into
major cache misses in cases where there is a wide range of
TTL durations, such as hours or days. This may cause the
system to occasionally fall behind, leaving should-be-expired
objects in memory. With a queue there are no cache misses,
and all objects that should be expired are removed right
away, regardless of the TTL durations.
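A minimal sketch of the per-collection queue idea (not the actual
Tile38 types), using container/heap:

    package main

    import (
        "container/heap"
        "fmt"
        "time"
    )

    // expiryItem pairs an object id with its expiration time.
    type expiryItem struct {
        id string
        at time.Time
    }

    // expiryQueue is a min-heap ordered by expiration time.
    type expiryQueue []expiryItem

    func (q expiryQueue) Len() int            { return len(q) }
    func (q expiryQueue) Less(i, j int) bool  { return q[i].at.Before(q[j].at) }
    func (q expiryQueue) Swap(i, j int)       { q[i], q[j] = q[j], q[i] }
    func (q *expiryQueue) Push(x interface{}) { *q = append(*q, x.(expiryItem)) }
    func (q *expiryQueue) Pop() interface{} {
        old := *q
        n := len(old)
        item := old[n-1]
        *q = old[:n-1]
        return item
    }

    func main() {
        q := &expiryQueue{}
        heap.Push(q, expiryItem{"truck1", time.Now().Add(time.Hour)})
        heap.Push(q, expiryItem{"truck2", time.Now().Add(-time.Second)}) // already expired

        // Background sweep, conceptually run every 1/10th of a second:
        // pop and delete everything at the front that has expired.
        now := time.Now()
        for q.Len() > 0 && (*q)[0].at.Before(now) {
            item := heap.Pop(q).(expiryItem)
            fmt.Println("expire:", item.id)
        }
    }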
Fixes #616
This commit addresses an issue where the sarama kafka library
leaks memory when a connection closes unless the metrics
configuration that was passed to the new connection is also closed.
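A hedged sketch of the shape of the fix (the actual Tile38 code differs),
assuming sarama's Config.MetricRegistry field and go-metrics'
Registry.UnregisterAll:

    package hooks

    import (
        "github.com/Shopify/sarama"
        metrics "github.com/rcrowley/go-metrics"
    )

    // newProducer is a sketch only. Each connection gets its own
    // metrics registry so it can be fully unregistered on close.
    func newProducer(addrs []string) (sarama.SyncProducer, func() error, error) {
        cfg := sarama.NewConfig()
        cfg.MetricRegistry = metrics.NewRegistry()
        p, err := sarama.NewSyncProducer(addrs, cfg)
        if err != nil {
            return nil, nil, err
        }
        closer := func() error {
            err := p.Close()
            // Without this, the registry's meters and histograms stay
            // alive after every reconnect, which is the leak.
            cfg.MetricRegistry.UnregisterAll()
            return err
        }
        return p, closer, nil
    }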
Fixes #613
Returns 'ok' if the server is the leader or a follower with
a 'caught up' log.
This is mainly for HTTP connections that are using an
orchestration environment like kubernetes, but will work as a
general RESP command.
For HTTP, a '200 OK' is returned when 'caught up', and a
'500 Internal Server Error' otherwise.
See #608
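For example (assuming the command name HEALTHZ and Tile38's default
port 9851):
HEALTHZ
curl -i http://localhost:9851/healthz
The first is the plain RESP form; the second is the HTTP form that a
Kubernetes readiness probe would use, returning 200 when the node is
the leader or a caught-up follower, and 500 otherwise.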