This article describes my little adventure around a startup error with the Hive Metastore. It should be reproducible with any secure installation, meaning one with Kerberos, with high availability enabled and with the delegation tokens stored in a database. The Hive version is 1.2 as packaged inside the Hortonworks HDP 2.4.2 distribution.

Storage for delegation tokens is defined by the property hive.cluster.delegation.token.store.class. The available choices are ZooKeeper, the Metastore database and memory. Both Cloudera and Hortonworks recommend using the database by setting the value to org.apache.hadoop.hive.thrift.DBTokenStore.
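In hive-site.xml, the corresponding configuration looks like the following snippet (the property name and class are the standard Hive ones; everything else about the cluster is assumed):

```xml
<!-- Store the Metastore delegation tokens in the backing database -->
<property>
  <name>hive.cluster.delegation.token.store.class</name>
  <value>org.apache.hadoop.hive.thrift.DBTokenStore</value>
</property>
```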

The error in question occurs at the launch of the Metastore and bears the following signature:

The message printed to stdout is quite clear:

Let’s dive into the source code. Hortonworks publishes on GitHub the source code of all the components of its distribution, and each distribution version has an associated Git tag. Hive for HDP 2.4.2 can be imported into our local workstation with the commands:
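A sketch of those commands, assuming the hortonworks/hive-release repository; the exact tag name carries a build number, so it is worth listing the tags first:

```shell
# Clone the Hortonworks fork of Hive (repository name assumed)
git clone https://github.com/hortonworks/hive-release.git
cd hive-release
# Find the exact tag for the HDP 2.4.2 release
git tag -l 'HDP-2.4.2*'
# Check out that tag (name illustrative, adjust to the listing above)
git checkout HDP-2.4.2.0-tag
```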

Indeed, there is no function HiveMetaStore$HMSHandler.addMasterKey in version 2.4.2. We can find the exact same function inside the ObjectStore class of the same package. In Hive 1.3, as in 2.1, the code does not seem to have changed. In the logs, we confirm that it is the ObjectStore class which is mentioned:

So, from a configuration point of view, the parameter is correctly transmitted and everything should be OK. The error is raised in the class DBTokenStore, at line 42, by the code:
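Reconstructed from the Hive 1.2 sources (the body may differ slightly across builds), that line belongs to DBTokenStore.addMasterKey, which delegates to the raw store through reflection:

```java
@Override
public int addMasterKey(String s) throws TokenStoreException {
  // Reflectively invokes addMasterKey(String) on the object received in init()
  return (Integer) invokeOnRawStore("addMasterKey", new Object[] {s}, String.class);
}
```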

Then again on line 156 with the code:
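Line 156 sits in the private helper invokeOnRawStore; simplified from the Hive 1.2 sources (the real method catches each reflection exception separately):

```java
private Object invokeOnRawStore(String methName, Object[] params, Class<?>... paramTypes)
    throws TokenStoreException {
  try {
    // If rawStore does not expose the method, getMethod() throws NoSuchMethodException
    return rawStore.getClass().getMethod(methName, paramTypes).invoke(rawStore, params);
  } catch (Exception e) {
    throw new TokenStoreException(e);
  }
}
```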

So, let’s find out how rawStore is built:

  • it arrives through the DBTokenStore.init method
  • where init is called is not obvious at first
  • but after a quick search, rawStore is obtained in HiveAuthFactory by rawStore = baseHandler.getMS(); where baseHandler is a HiveMetaStore.HMSHandler
  • in HiveMetaStore.HMSHandler, the getMS method returns the instance held in threadLocalMS.get(), or newRawStore() if it is null
  • the latter, newRawStore, feeds on the rawStoreClassName property
  • this property is initialized from the configuration: rawStoreClassName = hiveConf.getVar(HiveConf.ConfVars.METASTORE_RAW_STORE_IMPL);
  • the default value is declared in HiveConf: METASTORE_RAW_STORE_IMPL("hive.metastore.rawstore.impl", "org.apache.hadoop.hive.metastore.ObjectStore", …)
  • we return to the class ObjectStore, which does contain our method getMasterKey, so what is happening?
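Condensed from the Hive 1.2 sources, the getMS logic described above has roughly this shape (simplified; the real method does more, such as verifying the schema):

```java
public RawStore getMS() throws MetaException {
  RawStore ms = threadLocalMS.get();
  if (ms == null) {
    // Instantiate the class named by hive.metastore.rawstore.impl,
    // org.apache.hadoop.hive.metastore.ObjectStore by default
    ms = newRawStore();
    threadLocalMS.set(ms);
  }
  return ms;
}
```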

By re-browsing the source code, we see that HiveAuthFactory, where rawStore is built and passed to the function startDelegationTokenSecretManager, applies to HiveServer2. Hmm, could there be an equivalent for the Metastore?

A search for startDelegationTokenSecretManager leads us again to the HiveMetaStore. And once there, here is what we can read:
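From my reading of the Hive 1.2 sources (quoted from memory, so the exact line may differ), the call passes baseHandler itself as the second argument:

```java
saslServer.startDelegationTokenSecretManager(conf, baseHandler, ServerMode.METASTORE);
```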

With baseHandler being instantiated a little above by the code:
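Again from memory of the 1.2 sources, with the descriptive string possibly differing:

```java
HMSHandler baseHandler = new HiveMetaStore.HMSHandler("new db based metaserver", conf, false);
```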

The signature of startDelegationTokenSecretManager is:
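In HadoopThriftAuthBridge.Server it reads, give or take the parameter names:

```java
public void startDelegationTokenSecretManager(Configuration conf, Object rawStore, ServerMode smode)
    throws IOException
```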

Any object can be passed as the second argument, which is logical since Hive resolves the target methods through reflection: the compiler cannot catch a mismatch.
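To make the failure mode concrete, here is a self-contained sketch; the class names are hypothetical, only the reflective lookup mirrors the invokeOnRawStore pattern. Passing the wrapper object fails with NoSuchMethodException, while passing the object returned by getMS() succeeds:

```java
import java.lang.reflect.Method;

public class ReflectionPitfall {
  // Stand-in for ObjectStore: actually implements addMasterKey(String)
  public static class FakeObjectStore {
    public int addMasterKey(String key) { return 1; }
  }

  // Stand-in for HMSHandler: wraps the store but does not expose addMasterKey
  public static class FakeHandler {
    private final FakeObjectStore ms = new FakeObjectStore();
    public FakeObjectStore getMS() { return ms; }
  }

  // Mirrors DBTokenStore.invokeOnRawStore: reflective call on an arbitrary object
  static Object invokeOnRawStore(Object rawStore, String methName, Object[] params,
      Class<?>... paramTypes) throws Exception {
    Method m = rawStore.getClass().getMethod(methName, paramTypes);
    return m.invoke(rawStore, params);
  }

  public static void main(String[] args) throws Exception {
    FakeHandler handler = new FakeHandler();

    // Passing the handler itself, as HiveMetaStore does: the lookup fails
    try {
      invokeOnRawStore(handler, "addMasterKey", new Object[] {"k"}, String.class);
      System.out.println("unexpected success");
    } catch (NoSuchMethodException e) {
      System.out.println("NoSuchMethodException on handler");
    }

    // Passing handler.getMS(), as HiveAuthFactory does: the call succeeds
    Object result = invokeOnRawStore(handler.getMS(), "addMasterKey",
        new Object[] {"k"}, String.class);
    System.out.println("result=" + result);
  }
}
```

This is exactly the difference between passing baseHandler and passing baseHandler.getMS().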

Inspired by HiveAuthFactory, we can see how rawStore is itself obtained from baseHandler. So the parameter to pass is not baseHandler but baseHandler.getMS().

So, HiveMetaStore on line 6031 should look like:
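That is, keeping the surrounding code untouched, the corrected call becomes (a sketch of the fix, mirroring what HiveAuthFactory does):

```java
saslServer.startDelegationTokenSecretManager(conf, baseHandler.getMS(), ServerMode.METASTORE);
```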

Once the modifications are applied, it is now time to compile all of this.

The command mvn clean package -DskipTests -Phadoop-2 does not work on the first attempt; it could hardly have been otherwise. Being interested only in the compilation of the jar associated with the Metastore, let us take the option of quickly silencing the compilation errors. Fortunately, there is only one: it consists of removing all references to the class CallerContext in Hadoop23Shims. After that, Maven does not compile all the projects but goes far enough to generate the Metastore jar.

In terms of deployment, everything is not so simple either. Replacing the jar in question is not enough, because another, previously loaded jar also contains the impacted classes. The solution I settled on is to rewrite the environment variable HADOOP_CLASSPATH by prefixing it with our newly built jar. In the file “/usr/hdp/current/hive-metastore/bin/ext/”, add:
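The added line has the following shape; the jar path is hypothetical and must point to wherever the freshly built Metastore jar was copied:

```shell
# Prepend our patched jar so its classes win over the original ones
export HADOOP_CLASSPATH=/usr/hdp/current/hive-metastore/lib/hive-metastore-patched.jar:$HADOOP_CLASSPATH
```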

before the line :

You can now execute the command hive --service metastore and the Metastore should start.

At the time of this writing, it seems that this problem is still present in the master branch of Hive. However, it was not present in the previous HDP 2.4.0 release.