Apache Iceberg
Apache Iceberg is an open table format for large analytic datasets. Developed at Netflix, Iceberg was designed as an open community standard: a table format specification compatible across multiple languages and implementations. Since it was open sourced, organizations such as Apple have actively contributed to its development.
Between 2016 and 2018, Iceberg, together with Delta Lake and Apache Hudi, emerged to challenge the Hive table format in use since 2010. Besides serving as a query engine for large batch jobs, Hive acts as a metadata catalog and table format used by query engines such as Spark and Presto. Hive's main limitation was handling data changes over large datasets while coordinating multiple applications without corrupting the data; solving this required atomic transactions.
According to its creators, Iceberg brings the reliability and simplicity of SQL tables to big data while allowing engines such as Spark, Trino, Flink, Presto, and Hive to work with the same tables safely and simultaneously. It is written in Java and offers a Scala API. At the center of its architecture is a catalog that supports atomic updates of the current metadata pointer, which is what makes atomic transactions possible.
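The atomic metadata-pointer update can be illustrated with a compare-and-swap: a writer's commit succeeds only if the pointer it read at the start of the operation is still the current one, otherwise it must retry against the newer metadata. The following is a minimal sketch of that idea in Java, using a hypothetical in-memory catalog and metadata file names; it is not Iceberg's actual API.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical in-memory catalog: holds a single pointer to the
// current metadata file, advanced with compare-and-swap (CAS).
class Catalog {
    private final AtomicReference<String> currentMetadata =
        new AtomicReference<>("metadata/v1.json");

    // Commit succeeds only if no other writer moved the pointer
    // since `expected` was read -- this is what makes commits atomic.
    boolean commit(String expected, String updated) {
        return currentMetadata.compareAndSet(expected, updated);
    }

    String current() {
        return currentMetadata.get();
    }
}

public class Demo {
    public static void main(String[] args) {
        Catalog catalog = new Catalog();

        String base = catalog.current();        // writer reads v1
        boolean ok = catalog.commit(base, "metadata/v2.json");
        System.out.println(ok);                 // true: pointer advanced

        // A second writer still holding the stale v1 pointer fails
        // and must re-read the metadata before retrying.
        boolean stale = catalog.commit(base, "metadata/v3.json");
        System.out.println(stale);              // false
        System.out.println(catalog.current());  // metadata/v2.json
    }
}
```

Because the pointer swap either fully succeeds or fully fails, concurrent writers can never leave the table referencing half-written metadata, which is the property that lets several engines share one table safely.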
Iceberg is under active development and is being adopted by organizations such as AWS, Adobe, Apple, Netflix, Dremio, LinkedIn, and Expedia.
By Gonzalo ETSE
May 17, 2022