This blog post from Oracle explains well how to connect to Hive from SQL Developer. In short, you need to download the JDBC driver from Cloudera and make sure you use the correct version: HiveJDBC4, not HiveJDBC41. Three zip files are included; use the one whose name starts with Cloudera_HiveJDBC4_2.*.
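For reference, picking out the right archive and its jars looks roughly like this (the exact file names depend on the driver version you downloaded; the wildcard stands in for the elided version number). The jars are then added in SQL Developer under Preferences, Database, Third Party JDBC Drivers.

```shell
# List the three zips that came with the Cloudera download and
# extract only the HiveJDBC4 one (not HiveJDBC41):
ls Cloudera_HiveJDBC*.zip
unzip Cloudera_HiveJDBC4_2.*.zip -d hive-jdbc4

# These are the jars to register in SQL Developer:
ls hive-jdbc4/*.jar
```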
Anyway, after adding the jar files to SQL Developer and restarting it, the connection failed with:
Error setting/closing session: Open Session Error
Not really informative for anyone used to ORA error codes.
I tested the connection locally with the beeline command that is included with Hive. It worked fine with the embedded URL jdbc:hive2://, but when I connected with the URL jdbc:hive2://rio:10000 (rio is the hostname), it returned the error:
hadoop is not allowed to impersonate anonymous
I tried options -n and -p to connect as user hadoop, but the error message was similar:
hadoop is not allowed to impersonate hadoop
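The beeline tests above looked roughly like this (rio is my hostname; the password here is just a placeholder):

```shell
# Embedded mode, no HiveServer2 involved: works
beeline -u jdbc:hive2://

# Connecting to HiveServer2 on the host: fails with
# "hadoop is not allowed to impersonate anonymous"
beeline -u jdbc:hive2://rio:10000

# Passing a username and password with -n and -p gives the
# similar "hadoop is not allowed to impersonate hadoop" error
beeline -u jdbc:hive2://rio:10000 -n hadoop -p hadoop
```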
It turned out that I needed to change the configuration of Hadoop itself. HiveServer2 runs as the hadoop user and tries to impersonate the connecting user, so Hadoop's proxy-user settings must allow that. I added the following to core-site.xml:
<property>
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>
</property>
Yes, this is lax security, allowing the hadoop user to impersonate anyone from any host, but OK in my lab. After a restart of HDFS etc., SQL Developer connected successfully to Hive. Proxy users in Hadoop are explained here.
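The restart went something like this (paths assume a plain Apache Hadoop install with HADOOP_HOME set; adjust for your distribution):

```shell
# Restart HDFS and YARN so the new proxy-user settings take effect
$HADOOP_HOME/sbin/stop-dfs.sh && $HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/stop-yarn.sh && $HADOOP_HOME/sbin/start-yarn.sh

# Restart HiveServer2 as well, since it reads the proxy-user
# configuration at startup
hive --service hiveserver2 &
```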
Running even simple queries against Hive, whether from beeline or SQL Developer, is slow, which makes you think twice before submitting a query, unless you were going for a cup of coffee anyway. That is a good thing, I guess.