GemFire 6.0: New innovations in data management
GemStone has unveiled GemFire 6.0, the culmination of several years of development spent solving some of the hardest data management problems in the industry. The release brings a number of new data management features.
In this release:
- GemFire introduces a resource manager that continuously monitors cache instances and protects them from running out of memory. It can trigger rebalancing to migrate data to less loaded nodes, or dynamically grow or shrink the number of nodes hosting data, delivering linear scalability without impeding ongoing operations (no contention points).
- GemFire provides explicit control over when rebalancing is triggered and on which class of data, and even lets an administrator simulate a rebalance operation to quantify the benefit before actually running it (see the resource manager sketch after this list).
- With built-in instrumentation that captures throughput and latency metrics, GemFire now enables applications to sense changing performance patterns, proactively provision extra resources, and trigger rebalancing. The end result is predictable data access throughput and latency without the need to overprovision capacity.
- We continue to make the product more resilient than ever: GemFire deals more gracefully with complex membership issues when operating in large clusters, and memory-consumption thresholds can be set on any server JVM to significantly reduce the probability of "stop the world" garbage collection cycles.
- Advanced Data Partitioning: Applications are no longer restricted by the memory available across the cluster when managing partitioned data. They can pool available memory as well as disk, striping the data across memory and disk throughout the cluster. When the data fabric is configured as a cache, partitioned data can be expired or evicted so that only the most frequently used data is kept (see the partitioned region sketch after this list).
- Data-aware application behavior routing: Several extensions have been added to GemFire's data-aware function execution service, a simple grid programming model that lets applications execute behavior synchronously or asynchronously on the data nodes. Applications invoke functions with hints about the data they depend on, and the service parallelizes execution of the function across all grid nodes where that data is managed. Applications can now define relationships between different classes of data so that related data sets are colocated, and application functions routed to the data nodes can execute complex queries on in-process data. These and other features of the function execution service offer linear scalability for compute- and data-intensive applications: simply add more nodes when demand spikes to rebalance data and behavior and increase overall application throughput (see the function execution sketch after this list).
- API additions for C++ and C#: Support for continuous querying, client-side connection pooling with dynamic load balancing, and the ability to invoke server-side functions.
- Cost-based query optimization: This release introduces a new compact index that conserves memory, along with an enhanced query processor design featuring cost-based optimization (see the query sketch after this list).
- Developer productivity tools: Quickly developing and testing a clustered application can be daunting. Developers need to browse distributed data with ad-hoc queries, apply corrections, and monitor resource utilization and performance metrics. A new graphical Data Browser permits browsing and editing of data across the entire cluster, execution of ad-hoc queries, and creation of real-time table views that are kept continuously up to date through continuous queries. The GemFire Monitor tool (GFMon) has also gained several enhancements that make it much more developer friendly.
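The sketch below illustrates how an application might use the new resource manager and rebalancing control from Java. It is a minimal example under stated assumptions: the heap-threshold setters and rebalance factory follow the GemFire ResourceManager API as documented in later releases, so exact method names may differ in a particular 6.0 build, and the statistics property simply enables the built-in instrumentation mentioned above.

```java
import java.util.Properties;

import com.gemstone.gemfire.cache.Cache;
import com.gemstone.gemfire.cache.CacheFactory;
import com.gemstone.gemfire.cache.control.RebalanceOperation;
import com.gemstone.gemfire.cache.control.RebalanceResults;
import com.gemstone.gemfire.cache.control.ResourceManager;
import com.gemstone.gemfire.distributed.DistributedSystem;

public class ResourceManagerSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Turn on statistics sampling so throughput/latency metrics are captured.
        props.setProperty("statistic-sampling-enabled", "true");
        DistributedSystem ds = DistributedSystem.connect(props);
        Cache cache = CacheFactory.create(ds);

        // Assumed thresholds: start evicting at 75% of heap, treat 90% as critical.
        ResourceManager manager = cache.getResourceManager();
        manager.setEvictionHeapPercentage(75.0f);
        manager.setCriticalHeapPercentage(90.0f);

        // Dry-run a rebalance to quantify the benefit before moving any data.
        RebalanceOperation simulation = manager.createRebalanceFactory().simulate();
        RebalanceResults estimate = simulation.getResults();
        System.out.println("Bytes that would move: " + estimate.getTotalBucketTransferBytes());

        // If the estimate looks worthwhile, run the real rebalance (blocks until done).
        RebalanceOperation rebalance = manager.createRebalanceFactory().start();
        rebalance.getResults();
    }
}
```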
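Next, a minimal sketch of the advanced partitioning feature: a partitioned region that keeps the hottest entries in memory and overflows the rest to disk. The region name, entry limit, and redundancy setting are illustrative assumptions, not recommendations.

```java
import com.gemstone.gemfire.cache.AttributesFactory;
import com.gemstone.gemfire.cache.Cache;
import com.gemstone.gemfire.cache.EvictionAction;
import com.gemstone.gemfire.cache.EvictionAttributes;
import com.gemstone.gemfire.cache.PartitionAttributesFactory;
import com.gemstone.gemfire.cache.Region;

public class PartitionedRegionSketch {
    static Region createOrdersRegion(Cache cache) throws Exception {
        AttributesFactory factory = new AttributesFactory();

        // Partition the data across the cluster, keeping one redundant copy of each bucket.
        factory.setPartitionAttributes(
            new PartitionAttributesFactory().setRedundantCopies(1).create());

        // Keep roughly the hottest 100,000 entries per node in memory;
        // overflow the least recently used entries to disk.
        factory.setEvictionAttributes(
            EvictionAttributes.createLRUEntryAttributes(100000, EvictionAction.OVERFLOW_TO_DISK));

        // "orders" is a hypothetical region name used for illustration.
        return cache.createRegion("orders", factory.create());
    }
}
```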
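The following sketch shows the data-aware function routing idea: a function is executed only on the members that host the keys named in the filter, and the results are gathered by a result collector. The region, key, and the function id "computeOrderTotal" are hypothetical; a matching Function would have to be registered on the server side.

```java
import java.util.Collections;
import java.util.Set;

import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.execute.Execution;
import com.gemstone.gemfire.cache.execute.FunctionService;
import com.gemstone.gemfire.cache.execute.ResultCollector;

public class FunctionRoutingSketch {
    // Route a function only to the members that host the given customer's data.
    static Object totalForCustomer(Region customers, String customerId) {
        Set filter = Collections.singleton(customerId);

        Execution execution = FunctionService.onRegion(customers)
            .withFilter(filter)      // hint: only nodes owning this key participate
            .withArgs(customerId);   // arguments passed to the function on each node

        // "computeOrderTotal" is a hypothetical function id registered on the servers.
        ResultCollector collector = execution.execute("computeOrderTotal");
        return collector.getResult(); // aggregated results from all participating nodes
    }
}
```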
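Finally, a small sketch of index creation and an OQL query through the query service. Whether the engine uses the new compact index and how the cost-based optimizer weighs candidate plans is internal to GemFire; the region, field names, and index name below are assumptions for illustration.

```java
import com.gemstone.gemfire.cache.Cache;
import com.gemstone.gemfire.cache.query.IndexType;
import com.gemstone.gemfire.cache.query.Query;
import com.gemstone.gemfire.cache.query.QueryService;
import com.gemstone.gemfire.cache.query.SelectResults;

public class QuerySketch {
    static SelectResults expensiveOrders(Cache cache) throws Exception {
        QueryService queryService = cache.getQueryService();

        // Define a functional index on the price field of entries in /orders;
        // the engine decides internally whether a compact index representation applies.
        queryService.createIndex("priceIndex", IndexType.FUNCTIONAL, "o.price", "/orders o");

        // The cost-based optimizer picks the cheapest plan, using the index where it helps.
        Query query = queryService.newQuery("SELECT * FROM /orders o WHERE o.price > 100");
        return (SelectResults) query.execute();
    }
}
```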
For more information on GemFire, view our newly rewritten technical white paper at:
http://community.gemstone.com/download/attachments/4752318/GemFire+Data+Fabric+-+Technical+White+paper.pdf?version=1