public class HadoopPrms extends BasePrms

A class used to store keys for Hadoop cluster configuration settings. The
settings are used to create instances of HadoopDescription.
A given hydra test will typically use a single cluster, though some tests
could possibly use a separate cluster to store data for validation.
The number of description instances is gated by names. For other
parameters, if fewer values than names are given, the remaining instances
will use the last value in the list. See $JTESTS/hydra/hydra.txt for more
details.
Unused parameters default to null, except where noted. This uses the product default, except where noted.
Values, fields, and subfields of a parameter can be set to BasePrms.DEFAULT,
except where noted. This uses the product default, except where noted.
Values, fields, and subfields can be set to BasePrms.NONE where noted, with
the documented effect.
Values, fields, and subfields of a parameter can use oneof, range, or robing except where noted, but each description created will use a fixed value chosen at test configuration time. Use as a task attribute is illegal.
Subfields are order-dependent, as stated in the javadocs for parameters that use them.
Example:
hydra.HadoopPrms-names = hdfs;
hydra.HadoopPrms-nameNodeHosts = shep;
hydra.HadoopPrms-nameNodeLogDrives = a;
hydra.HadoopPrms-nameNodeDataDrives = a;
hydra.HadoopPrms-dataNodeHosts = larry moe curly;
hydra.HadoopPrms-dataNodeLogDrives = a b c;
hydra.HadoopPrms-dataNodeDataDrives = b:c a:c a:b;
For best performance, put the NameNode on its own host and DataNode logs on different drives from the data.
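The oneof, range, and robing constructs mentioned above can be applied to most of these parameters. A minimal sketch, assuming the keyword spellings documented in $JTESTS/hydra/hydra.txt (oneof...foeno, robing...gnibor) and hypothetical values:

```
// hypothetical values; each description still receives one fixed value
// chosen at test configuration time
hydra.HadoopPrms-replication = oneof 1 2 3 foeno;
hydra.HadoopPrms-dataNodeLogDrives = robing a b c gnibor;
```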
| Modifier and Type | Class and Description |
|---|---|
| static class | HadoopPrms.NodeType |
| Modifier and Type | Field and Description |
|---|---|
| static Long | addHDFSConfigurationToClassPath (boolean(s)) Whether to add the HDFS configuration to the classpath of all hydra client JVMs. |
| static String | APACHE220 |
| protected static String | APACHE220_Jars |
| static String | APACHE220_OPT |
| static String | APACHE220_VERSION |
| static String | APACHE241 |
| protected static String | APACHE241_Jars |
| static String | APACHE241_OPT |
| static String | APACHE241_VERSION |
| static Long | baseHDFSDirName (String) The name of a scratch directory to include on the path of various HDFS directories for logs and data. |
| static Long | dataNodeDataDrives (Comma-separated Lists of colon-separated String(s)) Drives to use for data on each of the dataNodeHosts. |
| static Long | dataNodeHosts (Comma-separated Lists of String(s)) Physical host names for the DataNodes. |
| static Long | dataNodeLogDrives (Comma-separated Lists of String(s)) Drives to use for logs on each of the dataNodeHosts. |
| static String | DEFAULT_APACHE_DIST |
| static String | DEFAULT_BASE_HDFS_DIR_NAME |
| static String | DEFAULT_HADOOP_DIST |
| static int | DEFAULT_REPLICATION |
| static Long | hadoopDist (String(s)) Path to the Hadoop distribution for each logical cluster. |
| static String | HORTONWORKS26 |
| protected static String | HORTONWORKS26_Jars |
| static String | HORTONWORKS26_OPT |
| static String | HORTONWORKS26_version |
| static String | KERBEROS |
| static String | KERBEROS_KINIT |
| static String | LOGICAL_HOST_PREFIX |
| static Long | nameNodeDataDrives (Comma-separated Lists of colon-separated String(s)) Drives to use for data on each of the nameNodeHosts. |
| static Long | nameNodeHosts (Comma-separated Lists of String(s)) Physical host names for the NameNodes. |
| static Long | nameNodeLogDrives (Comma-separated Lists of String(s)) Drives to use for logs on each of the nameNodeHosts. |
| static Long | nameNodeURL (String(s)) NameNode URL for each logical cluster. |
| static Long | names (String(s)) Logical names of the Hadoop cluster descriptions. |
| static Long | nodeManagerDataDrives (Comma-separated Lists of colon-separated String(s)) Drives to use for NodeManager data on each of the dataNodeHosts. |
| static Long | nodeManagerLogDrives (Comma-separated Lists of String(s)) Drives to use for NodeManager logs on each of the dataNodeHosts. |
| static String | PHD3000_138 |
| protected static String | PHD3000_138_Jars |
| static String | PHD3000_138_OPT |
| static String | PHD3000_138_VERSION |
| static String | PHD3100_175 |
| protected static String | PHD3100_175_Jars |
| static String | PHD3100_175_OPT |
| static String | PHD3100_175_VERSION |
| static String | PHD3200_54 |
| protected static String | PHD3200_54_Jars |
| static String | PHD3200_54_OPT |
| static String | PHD3200_54_VERSION |
| static Long | replication (int(s)) HDFS block replication default. |
| static Long | resourceManagerDataDrives (List of colon-separated String(s)) Drives to use for ResourceManager data on the resourceManagerHost. |
| static Long | resourceManagerHost (List of String(s)) Physical host name for the ResourceManager. |
| static Long | resourceManagerLogDrive (List of String(s)) Drive to use for the ResourceManager log on the resourceManagerHost. |
| static Long | resourceManagerURL (String(s)) ResourceManager URL for each logical cluster. |
| static Long | resourceTrackerAddress (String(s)) ResourceManager resource tracker host:port for each logical cluster. |
| static Long | schedulerAddress (String(s)) ResourceManager scheduler host:port for each logical cluster. |
| static Long | securityAuthentication (String(s)) Type of Hadoop security authentication to use. |
| static String | SIMPLE |
| Constructor and Description |
|---|
| HadoopPrms() |
| Modifier and Type | Method and Description |
|---|---|
| static String | getServerJars(String hadoopDist, int n) Hydra test configuration function that returns the jars needed for a Hadoop-enabled server using the specified Hadoop distribution. |
Methods inherited from class hydra.BasePrms: dumpKeys, keyForName, nameForKey, setValues, tab, tasktab

public static final String PHD3000_138
public static final String PHD3000_138_OPT
public static final String PHD3000_138_VERSION
protected static final String PHD3000_138_Jars
public static final String PHD3100_175
public static final String PHD3100_175_OPT
public static final String PHD3100_175_VERSION
protected static final String PHD3100_175_Jars
public static final String PHD3200_54
public static final String PHD3200_54_OPT
public static final String PHD3200_54_VERSION
protected static final String PHD3200_54_Jars
public static final String APACHE220
public static final String APACHE220_OPT
public static final String APACHE220_VERSION
protected static final String APACHE220_Jars
public static final String APACHE241
public static final String APACHE241_OPT
public static final String APACHE241_VERSION
protected static final String APACHE241_Jars
public static final String HORTONWORKS26
public static final String HORTONWORKS26_OPT
public static final String HORTONWORKS26_version
protected static final String HORTONWORKS26_Jars
public static final String DEFAULT_HADOOP_DIST
public static final String DEFAULT_APACHE_DIST
public static final String DEFAULT_BASE_HDFS_DIR_NAME
public static final int DEFAULT_REPLICATION
public static final String KERBEROS
public static final String KERBEROS_KINIT
public static final String SIMPLE
public static final String LOGICAL_HOST_PREFIX
public static Long names
(String(s)) Logical names of the Hadoop cluster descriptions.
public static Long addHDFSConfigurationToClassPath
(boolean(s)) Whether to add the HDFS configuration to the classpath of all hydra
client JVMs. Required when setting securityAuthentication to KERBEROS.

public static Long baseHDFSDirName
(String) The name of a scratch directory to include on the path of various HDFS
directories for logs and data. Defaults to DEFAULT_BASE_HDFS_DIR_NAME.

public static Long dataNodeHosts
(Comma-separated Lists of String(s)) Physical host names for the DataNodes.
public static Long dataNodeDataDrives
(Comma-separated Lists of colon-separated String(s)) Drives to use for data on
each of the dataNodeHosts. Defaults to the drive used by the test result directory.

public static Long dataNodeLogDrives
(Comma-separated Lists of String(s)) Drives to use for logs on each of the
dataNodeHosts. Defaults to the drive used by the test result directory.

public static Long hadoopDist
(String(s)) Path to the Hadoop distribution for each logical cluster. Defaults
to the value of -DHADOOP_DIST in the hydra master controller, passed in through
batterytest. The value of this property defaults to the latest PivotalHD at
DEFAULT_HADOOP_DIST. For Apache Hadoop, use DEFAULT_APACHE_DIST or any other
supported distribution. Be sure to use the same distribution when configuring
the VmPrms.extraClassPaths. See getServerJars(String,int) for a handy test
configuration function which can be used to pass in the desired distribution.
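As the surrounding text suggests, getServerJars can supply server classpaths that match the chosen distribution. A sketch, assuming hydra's fcn...ncf configuration-function syntax and a hypothetical ${servers} variable:

```
// hypothetical: one comma-separated jar list per server JVM, built from
// the same distribution selected by hadoopDist
hydra.VmPrms-extraClassPaths =
  fcn "hydra.HadoopPrms.getServerJars(\"${hadoopDist}\", ${servers})" ncf;
```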
public static Long nameNodeHosts
(Comma-separated Lists of String(s)) Physical host names for the NameNodes.

public static Long nameNodeDataDrives
(Comma-separated Lists of colon-separated String(s)) Drives to use for data on
each of the nameNodeHosts. Defaults to the drive used by the test result directory.

public static Long nameNodeLogDrives
(Comma-separated Lists of String(s)) Drives to use for logs on each of the
nameNodeHosts. Defaults to the drive used by the test result directory.

public static Long nameNodeURL
(String(s)) NameNode URL for each logical cluster, based on the nameNodeHosts.
Set this parameter to attach to an existing cluster. Otherwise, simply allow it
to default.

public static Long nodeManagerDataDrives
(Comma-separated Lists of colon-separated String(s)) Drives to use for NodeManager
data on each of the dataNodeHosts. Defaults to the drive used by the test result
directory.

public static Long nodeManagerLogDrives
(Comma-separated Lists of String(s)) Drives to use for NodeManager logs on each
of the dataNodeHosts. Defaults to the drive used by the test result directory.

public static Long replication
(int(s)) HDFS block replication default. Defaults to DEFAULT_REPLICATION.

public static Long resourceManagerDataDrives
(List of colon-separated String(s)) Drives to use for ResourceManager data on the
resourceManagerHost. Defaults to the drive used by the test result directory.

public static Long resourceManagerHost
(List of String(s)) Physical host name for the ResourceManager.

public static Long resourceManagerLogDrive
(List of String(s)) Drive to use for the ResourceManager log on the
resourceManagerHost. Defaults to the drive used by the test result directory.

public static Long resourceManagerURL
(String(s)) ResourceManager URL for each logical cluster, based on the
resourceManagerHost. Set this parameter to attach to an existing resource
manager. Otherwise, simply allow it to default.

public static Long resourceTrackerAddress
(String(s)) ResourceManager resource tracker host:port for each logical cluster,
based on the resourceManagerHost. Set this parameter to attach to an existing
resource manager. Otherwise, simply allow it to default.

public static Long schedulerAddress
(String(s)) ResourceManager scheduler host:port for each logical cluster, based
on the resourceManagerHost. Set this parameter to attach to an existing resource
manager. Otherwise, simply allow it to default.

public static Long securityAuthentication
(String(s)) Type of Hadoop security authentication to use. Valid values are
KERBEROS and KERBEROS_KINIT, which use Kerberos, and SIMPLE (default), which
disables security.
When using Kerberos, you must follow the special instructions at https://wiki.gemstone.com/display/Hydra/Secure+Hadoop
When using authentication, hydra automatically turns on authorization using
the ACLs in $JTESTS/hydra/hadoop/conf/hadoop-policy.xml.
KERBEROS_KINIT adds basic security configuration for
Kerberos to core-site.xml and hdfs-site.xml. The kinit utility is used at
runtime to obtain temporary tokens. This option requires using
HDFSStorePrms.clientConfigFile to set additional security properties,
including GemFireXD-specific properties.
KERBEROS adds hadoop.security.auth_to_local
to the basic security configuration in core-site.xml and puts
additional security properties in hdfs-site.xml. This is the strategy used
by production applications. If addHDFSConfigurationToClassPath is
set true and servers have NFS access to the NameNode configuration
directory, then no HDFSStorePrms.clientConfigFile is required for
configuring security.
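Tying the Kerberos options together, a hypothetical configuration fragment might look like:

```
// hypothetical fragment; KERBEROS assumes servers can reach the NameNode
// configuration directory (e.g. via NFS) as described above
hydra.HadoopPrms-securityAuthentication = KERBEROS;
hydra.HadoopPrms-addHDFSConfigurationToClassPath = true;
```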
public static String getServerJars(String hadoopDist, int n)
Hydra test configuration function that returns the jars needed for a
Hadoop-enabled server using the specified Hadoop distribution, for use with
VmPrms.extraClassPaths. Specify the number of comma-separated lists of jars
needed (such as the number of servers).

Copyright © 2010-2015 Pivotal Software, Inc. All rights reserved.