What are Data Lakes?

A data lake is a Big Data storage repository that holds vast quantities of unrefined information.

Data is loaded directly into the data lake without passing through an integration layer or a transformation layer. The imported data can be structured, such as relational database tables, semi-structured, like CSV and JSON files, or unstructured, such as PDFs and images.
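This ingest-as-is pattern can be sketched in a few lines; the example below uses a local directory as a stand-in for the lake's object store, and the directory names are illustrative assumptions:

```python
import shutil
from pathlib import Path

def ingest_raw(source: Path, lake_root: Path, source_name: str) -> Path:
    """Copy a file into the lake unchanged -- no parsing, no schema checks.

    Structured (.csv), semi-structured (.json), and unstructured
    (.pdf, .png) files are all handled identically: byte-for-byte.
    """
    landing = lake_root / "raw" / source_name
    landing.mkdir(parents=True, exist_ok=True)
    target = landing / source.name
    shutil.copy2(source, target)  # format is irrelevant at ingest time
    return target
```

The point of the sketch is what is *absent*: there is no integration or transformation layer between the source file and its landing spot in the lake.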

Differences Between a Data Lake and a Data Warehouse

Data lakes and data warehouses are two common approaches to storage problems. One of the main differences between the two lies in the way data is processed before ingestion.

Data warehouses often use an ETL (Extract, Transform, Load) layer. In this approach, data is extracted from sources, processed according to a master schema, and uploaded to the target. Data in a warehouse is ready for immediate use without requiring further processing. This method is also known as schema-on-write.
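The schema-on-write flow can be sketched as follows; the master schema and function names here are illustrative assumptions, not a specific product's API:

```python
# Schema-on-write: data is validated and shaped *before* it reaches storage.
MASTER_SCHEMA = {"user_id": int, "amount": float, "currency": str}

def transform(record: dict) -> dict:
    """Coerce a raw record to the master schema; reject anything that doesn't fit."""
    shaped = {}
    for field, field_type in MASTER_SCHEMA.items():
        if field not in record:
            raise ValueError(f"missing field: {field}")
        shaped[field] = field_type(record[field])  # cast, or fail loudly on bad data
    return shaped

def etl(raw_records, warehouse: list) -> None:
    # Extract -> Transform -> Load: only schema-conforming rows are ever stored.
    for record in raw_records:
        warehouse.append(transform(record))
```

Because every row passes through `transform` before loading, anything that reaches the warehouse is already in a known, queryable shape.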

Data lakes generally use ELT (Extract, Load, Transform). In this approach, there is no pre-processing of data before upload. Data is ingested as-is, regardless of whether it matches the format of other data contained in the data lake. Transformation is performed on-demand by specialist tools, such as Big Data analytics platforms, which is referred to as schema-on-read.
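By contrast, a schema-on-read sketch stores raw payloads untouched and applies a schema only when a consumer reads them. The field names below are illustrative:

```python
import json

def load_raw(lake: list, payload: str) -> None:
    # Load step: store the raw payload as-is, with no validation at all.
    lake.append(payload)

def read_with_schema(lake: list, fields: tuple):
    """Transform-on-read: parse and project each record at query time.

    Records that don't fit the requested schema are skipped here rather
    than rejected at ingest, as they would be in a warehouse.
    """
    for payload in lake:
        try:
            record = json.loads(payload)
            yield {f: record[f] for f in fields}
        except (json.JSONDecodeError, KeyError, TypeError):
            continue  # malformed rows only surface at read time
```

Note where the cost moves: ingestion is trivially cheap, but every reader must cope with records that may not match its expectations.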

Data warehouses focus on quality, while data lakes are better for quantity. This is one reason why data lakes are generally associated with Big Data operations.

Why are Data Lakes Useful?

A number of use cases see enterprises turn to data lakes, including:

  • For high-volume data sources: Data lakes are suitable repositories for data sources that produce enormous amounts of information. Examples of this include website activity logs, IoT data, social media data, and logistics updates.
  • To perform Big Data analytics: Large-scale analytics efforts require enormous sets of data from numerous sources. Often, it may be impossible to pre-process this data. Instead, organizations may upload it all to a data lake and use tools like Amazon Athena to analyze and produce insights.
  • For long-term storage: Cloud-based data lakes tend to be competitively priced, making them an attractive long-term storage option. Some organizations may need to store data for an extended period due to compliance requirements or for further analytics down the road. A data lake is an effective way to create persistence at a reasonable price.
  • To unify diverse data sources: Data lakes are an extremely flexible storage solution, with no conflicts between otherwise incompatible data types. This approach allows organizations to store diverse data in a single location without needing to spend time on data integration and harmonization.
  • To train machine learning and AI: Machine learning and AI can solve many of the problems arising from the data lake structure, as these tools progressively learn how to tag and interpret structured and unstructured information. Equally, a data lake is an ideal environment in which to train and develop a machine learning or AI tool.
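For the high-volume and long-term storage cases above, lakes commonly organize raw objects under date-partitioned key prefixes so that query engines such as Athena can scan only the partitions a query actually needs. A minimal sketch; the Hive-style `key=value` layout is a common convention, not a requirement:

```python
from datetime import datetime, timezone

def partitioned_key(source: str, event_time: datetime, filename: str) -> str:
    """Build a date-partitioned object key for a lake's raw zone.

    year=/month=/day= prefixes let analytics engines prune whole
    partitions instead of scanning every object in the lake.
    """
    return (
        f"raw/{source}/"
        f"year={event_time.year:04d}/"
        f"month={event_time.month:02d}/"
        f"day={event_time.day:02d}/"
        f"{filename}"
    )
```

A stream of website activity logs, for example, would land under one prefix per day, keeping even years of accumulated data cheap to store and selective to query.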

These are a few typical use cases for data lakes. Ultimately, organizations use this approach when they want to ingest and store data as quickly as possible.

What are the Drawbacks of Data Lakes?

Data lakes are not ideal in every situation. When misapplied, this structure can lead to issues such as:

  • Unreliable data: Because incoming data is not subject to validation, the quality of data may be inconsistent. If such data isn’t validated further down the line, it could mislead analysts and other data users.
  • Data swamps: A data swamp is a data lake that has grown stagnant. This occurs when the lake is overloaded with poor-quality or expired data. The structure starts to become unusable, even with the best analytics tools. 
  • Slow analytics and production use: Schema-on-read is not a suitable approach when users are interacting with the database in real time. If users need to write data or receive prompt query results, they’ll require a schema-on-write data warehouse.
  • Data governance and transparency: Good data governance is more important than ever when working with a data lake, as data quality can’t be guaranteed unless the source is known. Inadequate tracking of data lineage and data provenance can lead to reliability issues, which could undermine the value of all data in the lake.
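The "unreliable data" risk above is typically mitigated by validating records somewhere downstream, before analysts consume them. A hedged sketch with purely illustrative rules:

```python
def validate(record: dict) -> list:
    """Return a list of quality problems found in a record (empty = clean).

    The rules here are illustrative; in practice, checks like these are
    attached to the pipelines that read from the lake's raw zone.
    """
    problems = []
    if not record.get("source"):
        problems.append("unknown source (no lineage)")
    if "value" not in record:
        problems.append("missing value")
    elif not isinstance(record["value"], (int, float)):
        problems.append("non-numeric value")
    return problems

def quarantine(records):
    """Split records into (clean, suspect) so bad rows can't mislead analysts."""
    clean, suspect = [], []
    for r in records:
        (clean if not validate(r) else suspect).append(r)
    return clean, suspect
```

Quarantining suspect rows, rather than silently dropping them, also preserves the evidence needed to trace quality problems back to their source.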

None of these problems are inherent flaws in the data lake structure—instead, they show the importance of good planning and strong data governance.  

How are Data Lakes Implemented?

Data lakes can be implemented on-premises, although this can be expensive and difficult to scale. A large, on-premises data repository can run into the issue of tightly coupled storage and compute, where storage space and processing power must scale up together. This is not optimal for a data lake, which often needs ever-growing storage without an equivalent growth in compute.

Cloud-based data lakes are a more common option in the enterprise, especially for organizations using their lakes for long-term data storage. Cloud adopters have a number of choices, with data lake support available on Amazon Web Services (AWS), Google Cloud, Microsoft Azure, and other platforms.

Data lakes operate on three core principles:

  • Openness: Lakes should be structured to accept as much data as possible while still remaining functional. Lake designers must think ahead to future requirements to ensure that all data has a place in the lake. 
  • Adaptability: Lakes should be prepared to support any possible business usage, from analytics to supporting applications. Again, designers must look ahead and build a structure that can serve as a foundation for future development. 
  • Collaboration: Organizations that pool their data in a single repository should see some organization-wide return on investment. This will come in the form of better collaboration, more accurate insights, and a 360-degree understanding of customers.

These principles are the difference between a data lake and a data swamp. A functioning data lake should create some additional value for the owners of the lake.