One of the vulnerabilities I reported a while back has been published: CVE-2014-8733

CVE-2014-8733: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-8733

It was fun to work on Hadoop security in 2014… This vuln was a tricky one because I was responsible for the security of a managed Hadoop service platform, and in some cases our clients had SSH access to the Hadoop cluster nodes.

If I remember correctly, the fix wasn’t easy – it required the release of a new CDH version that moved configuration parameters between files (world-readable access was required for the Hadoop client to function). And then, several months later, the issue reappeared after another patch.

Hadoop without Kerberos – simple attack examples

In this post, I am going to illustrate that it’s practically impossible to protect any data in Hadoop clusters without Kerberos (‘Secure mode’) enabled. I hope this will help admins and security folks see that Kerberos is the only way to make Hadoop more or less secure – without it, there is no authentication in Hadoop at all. But as you can see from my previous posts about Hadoop, even with Kerberos enabled, there are still very serious challenges, so Kerberos is just a start, not the final solution.

This time, I will focus on the most important component of the Hadoop ecosystem – HDFS, Hadoop’s distributed file system, which in most cases is where all of the data in a Hadoop cluster is stored.
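
As a teaser, here is roughly what “no authentication” means in practice: in simple (non-Kerberos) mode, the client just tells the cluster who it is, and the cluster believes it. A minimal sketch, run from any host with a Hadoop client pointed at the target cluster (user and path names are illustrative):

# Claim the 'hdfs' superuser identity simply by setting an environment variable:
export HADOOP_USER_NAME=hdfs
# Then browse and read anything in HDFS as that user:
hdfs dfs -ls /user
hdfs dfs -cat /user/alyce/secret.txt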

Continue reading

hadoop.security.auth_to_local examples

In my previous post, “An important Hadoop security configuration parameter you may have missed”, I talked about the importance of the hadoop.security.auth_to_local configuration parameter and promised to provide some solutions that use it.

In this post, I want to focus on a couple of practical examples; if you want to learn more, here are links to the existing documentation:

Continue reading

Configuring Cloudera Navigator to use external authentication

Cloudera, the author of one of the most popular Hadoop distributions, has created a great tool for Hadoop security monitoring and auditing called Cloudera Navigator. I find its initial configuration process a little tricky, so I wanted to document it in this post. Cloudera’s original documentation on how to do this is located here:
http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cn_sg_external_auth.html

I currently use the latest version of the Cloudera Hadoop distribution, with Cloudera Manager 5.3.1 (trial enterprise license) and Navigator 2.2.1. Navigator openly shows its full version and build in a tooltip on its logo and in the ‘About’ section right on the login page (so if a vulnerability is published in the future, attackers won’t need to spend time figuring out the target’s version ;-) ):

[Screenshot: the Cloudera Navigator login page showing the full version and build]

Continue reading

An important Hadoop security configuration parameter you may have missed

Hadoop has one security parameter whose importance, I think, is not stressed well enough in the currently published documentation. While there are instructions on how to configure it, I have not seen anyone discuss the consequences of leaving it at its default value, and, as far as I know, almost nobody ever changes it because of its complexity. This parameter is

hadoop.security.auth_to_local – “Maps kerberos principals to local user names”

(description from current core-default.xml)

It tells Hadoop how to translate Kerberos principals into Hadoop user names. By default, it simply translates <user>/<part2>@<DOMAIN> into <user> for the default domain (ignoring the second part of the Kerberos principal). Here is what the current Apache Hadoop documentation says about it:

“By default, it picks the first component of principal name as a user name if the realms matches to the default_realm (usually defined in /etc/krb5.conf). For example, host/full.qualified.domain.name@REALM.TLD is mapped to host by default rule.”

In practice this means the following. Suppose you have users named hdfs, Alyce and Bob, who use these principals to authenticate with your cluster:

HDFS – hdfs@YOUR.DOMAIN,
Alyce – alyce@YOUR.DOMAIN,
Bob – bob@YOUR.DOMAIN

If auth_to_local is not configured in your cluster, those are actually not the only principals that can authenticate as those users: the following principals, if they exist, will also become your HDFS, Alyce and Bob under the default mapping:

hdfs/host123.your.domain@YOUR.DOMAIN => hdfs
hdfs/clusterB@YOUR.DOMAIN => hdfs
alyce/team2@YOUR.DOMAIN => Alyce
alyce/something.else@YOUR.DOMAIN => Alyce
bob/library@YOUR.DOMAIN => Bob
bob/research@YOUR.DOMAIN => Bob

… (a very, very long list of possible combinations of the second part of the Kerberos principal and the domain name) …

hdfs/<anything>@YOUR.DOMAIN is HDFS
alyce/<anything>@YOUR.DOMAIN is Alyce
bob/<anything>@YOUR.DOMAIN is Bob
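
The way to close this hole is to replace the default behavior with explicit rules in hadoop.security.auth_to_local (set in core-site.xml). Here is a minimal sketch of the idea for the example above – the realm and the service principal are illustrative, there is deliberately no DEFAULT rule at the end, and a real cluster will also need rules for all of its service principals (which is exactly what my auth_to_local examples post covers):

<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[1:$1@$0](.*@YOUR.DOMAIN)s/@.*//
    RULE:[2:$1/$2@$0](hdfs/namenode.your.domain@YOUR.DOMAIN)s/.*/hdfs/
  </value>
</property>

With rules like these, bob@YOUR.DOMAIN still becomes bob, but bob/library@YOUR.DOMAIN or hdfs/clusterB@YOUR.DOMAIN no longer match any rule and are not mapped to a local user at all.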

For many regulatory bodies and auditing companies, it is a baseline security requirement that every user on a system have exactly one unique identity. As we just saw, in Hadoop, by default, a user can de facto be identified by an almost infinite number of IDs. This can be exploited by malicious insiders to gain access to sensitive data or to take full control of the cluster.

Let’s look at an example:

First, user Bob, with principal bob@LAB.LOCAL, uploads a file ‘secret.txt’ to his home directory in HDFS and makes sure it’s protected by access permissions:
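
Roughly, the commands look like this (a sketch – the exact paths and permission bits are illustrative):

# Bob authenticates with his Kerberos principal and uploads the file:
kinit bob@LAB.LOCAL
hdfs dfs -put secret.txt /user/bob/secret.txt
# Then he locks his home directory and the file down to his own user:
hdfs dfs -chmod 700 /user/bob
hdfs dfs -chmod 600 /user/bob/secret.txt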


Continue reading

Myth about hard-coded ‘hdfs’ superuser in Hadoop

I often hear about the hard-coded ‘hdfs’ superuser in Hadoop clusters, and about the various challenges of managing it when more than one team in the same organization uses Hadoop for their projects.

I think it’s very important to point out that there is no hard-coded ‘hdfs’ superuser in Hadoop. The Name Node simply gives admin rights to the system user that started its process. So if you start the Name Node as root (please don’t do this), your superuser will be ‘root’. If you start it as ‘namenode’, the ‘namenode’ user becomes the superuser.
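
An easy way to check who the real superuser is on a given cluster (a sketch – account names and output vary by distribution):

# Which OS account owns the Name Node process? That account is the HDFS superuser:
ps -eo user,args | grep '[N]ameNode'
# Permission checks never fail for that account, e.g. if the process runs as 'hdfs':
sudo -u hdfs hdfs dfs -ls /user/bob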

Here’s what the HDFS Permissions Guide says about this (quoting the entire ‘Super-User’ section):

The super-user is the user with the same identity as name node process itself. Loosely, if you started the name node, then you are the super-user. The super-user can do anything in that permissions checks never fail for the super-user. There is no persistent notion of who was the super-user; when the name node is started the process identity determines who is the super-user for now. The HDFS super-user does not have to be the super-user of the name node host, nor is it necessary that all clusters have the same super-user. Also, an experimenter running HDFS on a personal workstation, conveniently becomes that installation’s super-user without any configuration.

In addition, the administrator may identify a distinguished group using a configuration parameter. If set, members of this group are also super-users.

And that’s just the HDFS admin. The other components of the Hadoop ecosystem all have their own admin users, and in default configurations some of them will allow other components’ admin users to manage them.

I guess this myth exists because ‘hdfs’ is the default system user name that the majority of automated Hadoop installations use to start the HDFS daemons.

(and of course don’t forget about dfs.permissions.superusergroup and dfs.cluster.administrators)
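
For reference, both of these are set in hdfs-site.xml. A minimal sketch – the group and user names below are purely illustrative, not recommendations:

<!-- Members of this group are also HDFS superusers (the default group name is supergroup) -->
<property>
  <name>dfs.permissions.superusergroup</name>
  <value>hdfsadmins</value>
</property>
<!-- ACL of users and groups treated as cluster administrators; as far as I recall,
     the format is comma-separated users, a space, then comma-separated groups -->
<property>
  <name>dfs.cluster.administrators</name>
  <value>hdfs hdfsadmins</value>
</property>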