
Making a Data Lake Real-Time with Transactional Hadoop

February 12, 2015

The phrase “data lake” has become a popular way to describe moving data into Hadoop to create a repository for large quantities of structured and unstructured data in native formats. Yet many are unfamiliar with the operational data lake, a variant that enables the structured content of the data lake to be stored in its native relational format.

Because it retains full RDBMS functionality, an operational data lake lets companies expand on what they are already doing with traditional operational data stores (ODSs), but with the scale-out capabilities of Hadoop. The combination of Hadoop with traditional CRUD (create, read, update, delete) operations makes it a great first project for companies looking to get into Big Data.
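To make the CRUD point concrete, here is a minimal Java sketch of what such operations could look like against a transactional Hadoop RDBMS over standard JDBC. The connection URL, table, and values are hypothetical placeholders for illustration, not a reference to any specific deployment.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class OperationalDataLakeCrud {
        public static void main(String[] args) throws Exception {
            // Hypothetical JDBC URL for a transactional Hadoop RDBMS endpoint;
            // substitute the host, port, database, and credentials for your cluster.
            String url = "jdbc:splice://localhost:1527/splicedb;user=app;password=app";

            try (Connection conn = DriverManager.getConnection(url)) {
                // CREATE: insert a new record into the data lake.
                try (PreparedStatement insert = conn.prepareStatement(
                        "INSERT INTO customers (id, name, region) VALUES (?, ?, ?)")) {
                    insert.setInt(1, 42);
                    insert.setString(2, "Acme Corp");
                    insert.setString(3, "West");
                    insert.executeUpdate();
                }

                // UPDATE: change a field in place, a transactional operation
                // that append-only Hadoop storage cannot do on its own.
                try (PreparedStatement update = conn.prepareStatement(
                        "UPDATE customers SET region = ? WHERE id = ?")) {
                    update.setString(1, "East");
                    update.setInt(2, 42);
                    update.executeUpdate();
                }

                // READ: query the same table with ordinary SQL.
                try (PreparedStatement select = conn.prepareStatement(
                        "SELECT id, name, region FROM customers WHERE id = ?")) {
                    select.setInt(1, 42);
                    try (ResultSet rs = select.executeQuery()) {
                        while (rs.next()) {
                            System.out.printf("%d %s %s%n",
                                    rs.getInt("id"),
                                    rs.getString("name"),
                                    rs.getString("region"));
                        }
                    }
                }

                // DELETE: remove the record transactionally.
                try (PreparedStatement delete = conn.prepareStatement(
                        "DELETE FROM customers WHERE id = ?")) {
                    delete.setInt(1, 42);
                    delete.executeUpdate();
                }
            }
        }
    }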

The benefits of an operational data lake come down to cost and scale: it enables the offloading of analytics and reporting data from more expensive OLTP and data warehouse systems onto a Hadoop-based platform. And because it is built on a transactional Hadoop platform, the operational data lake allows for real-time updates and access.
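As an illustration of the offloading scenario, the sketch below runs a typical reporting aggregate against the same hypothetical JDBC endpoint. On a transactional platform, a query like this would see data kept current by real-time updates rather than periodic batch loads; the table and query are assumptions made up for this example.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class OffloadedReport {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint; in this scenario the report runs against
            // the Hadoop-based operational data lake instead of the warehouse.
            String url = "jdbc:splice://localhost:1527/splicedb;user=app;password=app";

            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 // A typical reporting aggregate: revenue per region,
                 // computed over continuously updated operational data.
                 ResultSet rs = stmt.executeQuery(
                         "SELECT region, SUM(amount) AS revenue " +
                         "FROM orders GROUP BY region ORDER BY revenue DESC")) {
                while (rs.next()) {
                    System.out.printf("%-10s %,12.2f%n",
                            rs.getString("region"), rs.getDouble("revenue"));
                }
            }
        }
    }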

With the Splice Machine Hadoop RDBMS, companies can build operational data lakes that offer exceptional price/performance when replacing obsolete ODSs.

To learn more about the operational data lake and its potential as an on-ramp to Big Data, we invite you to download the CITO Research white paper, The Operational Data Lake: Your On-Ramp to Big Data.