AccessControlException
instead.
RemoteException.
AccessControlException
with the specified detail message.
ResourceRequest to update the
ResourceManager about the application's resource requirements.
ContainerId of unused containers being
released by the ApplicationMaster
NodeManager.
Path to the list of inputs for the map-reduce job.
Path with a custom InputFormat to the list of
inputs for the map-reduce job.
Path with a custom InputFormat and
Mapper to the list of inputs for the map-reduce job.
Path to the list of inputs for the map-reduce job.
Path with a custom InputFormat to the list of
inputs for the map-reduce job.
Path with a custom InputFormat and
Mapper to the list of inputs for the map-reduce job.
Mapper class to the chain mapper.
Mapper class to the chain reducer.
ApplicationMaster
and the ResourceManager.
ApplicationMaster to the ResourceManager to obtain resources in the cluster.
ResourceManager the ApplicationMaster during resource negotiation.
ResourceManager the ApplicationMaster during resource negotiation.
ApplicationMaster and the ResourceManager.
ApplicationAttemptId denotes the particular attempt of an ApplicationMaster for a given ApplicationId.
ApplicationId represents the globally unique identifier for an application.
ApplicationReport is a report of an application.
ApplicationSubmissionContext represents all of the information needed by the ResourceManager to launch the ApplicationMaster for an application.
FSDataInputStream to Avro's SeekableInput interface.
FSDataInputStream and its length.
FileContext and a Path.
WritableComparable types supporting ordering/permutation by a representative set of bytes.
BinaryComparable keys using a configurable part of the bytes array returned by BinaryComparable.getBytes().
BinaryComparable keys using a configurable part of the bytes array returned by BinaryComparable.getBytes().
CompressorStream which works with 'block-based' compression algorithms, as opposed to 'stream-based' compression algorithms.
BlockCompressorStream.
BlockCompressorStream with given output-stream and
compressor.
DecompressorStream which works with 'block-based' compression algorithms, as opposed to 'stream-based' compression algorithms.
BlockDecompressorStream.
BlockDecompressorStream.
MapFile and provides very much the same functionality.
Token.cancel(org.apache.hadoop.conf.Configuration) instead
Token.cancel(org.apache.hadoop.conf.Configuration) instead
position.
IOException or
null pointers.
OutputCommitter.commitJob(JobContext) or
OutputCommitter.abortJob(JobContext, int) instead.
OutputCommitter.commitJob(org.apache.hadoop.mapreduce.JobContext)
or OutputCommitter.abortJob(org.apache.hadoop.mapreduce.JobContext, org.apache.hadoop.mapreduce.JobStatus.State)
instead.
OutputCommitter.commitJob(JobContext) and
OutputCommitter.abortJob(JobContext, JobStatus.State) instead.
ResourceManager to submit/abort jobs and to get information on applications, cluster metrics, nodes, queues and ACLs.
JobClient.
InputSplit to future operations.
RecordWriter to future operations.
Cluster.
RecordWriter to future operations.
IOException
IOException.
MultiFilterRecordReader.emit(org.apache.hadoop.mapred.join.TupleWritable) every Tuple from the
collector (the outer join of child RRs).
MultiFilterRecordReader.emit(org.apache.hadoop.mapreduce.lib.join.TupleWritable) every Tuple from the
collector (the outer join of child RRs).
InputFormat that returns CombineFileSplit's in InputFormat.getSplits(JobConf, int) method.
InputFormat that returns CombineFileSplit's in InputFormat.getSplits(JobContext) method.
CombineFileSplit.
CombineFileSplit.
CompressionOutputStream to compress data.
Configuration.
JobConf.
JobConf.
Configuration.
Configuration.
Container represents an allocated resource in the cluster.
ContainerId represents a globally unique identifier for a Container in the cluster.
ContainerLaunchContext represents all of the information needed by the NodeManager to launch a container.
ApplicationMaster and a NodeManager to start/stop containers and to get status of running containers.
ContainerStatus represents the current status of a Container.
ContainerToken is the security token used by the framework to verify authenticity of any Container.
Counters that logically belong together.
Counters holds per job/task counters, defined either by the Map-Reduce framework or applications.
FileContext.create(Path, EnumSet, Options.CreateOpts...) except that the Path f must be fully qualified and the permission is absolute (i.e.
Compressor for use by this CompressionCodec.
Decompressor for use by this CompressionCodec.
FsPermission object.
CompressionInputStream that will read from the given
input stream.
CompressionInputStream that will read from the given
InputStream with the given Decompressor.
AbstractFileSystem.create(Path, EnumSet, Options.CreateOpts...) except that the opts
have been declared explicitly.
IOException.
CompressionOutputStream that will write to the given
OutputStream.
CompressionOutputStream that will write to the given
OutputStream with the given Compressor.
CombineFileInputFormat.createPool(List).
CombineFileInputFormat.createPool(PathFilter...).
recordName.
FileContext.createSymlink(Path, Path, boolean);
SequenceFile.createWriter(Configuration, Writer.Option...)
instead.
SequenceFile.createWriter(Configuration, Writer.Option...)
instead.
SequenceFile.createWriter(Configuration, Writer.Option...)
instead.
SequenceFile.createWriter(Configuration, Writer.Option...)
instead.
SequenceFile.createWriter(Configuration, Writer.Option...)
instead.
SequenceFile.createWriter(Configuration, Writer.Option...)
instead.
SequenceFile.createWriter(Configuration, Writer.Option...)
instead.
SequenceFile.createWriter(Configuration, Writer.Option...)
instead.
SequenceFile.createWriter(Configuration, Writer.Option...)
instead.
DBWritable.
CompressionInputStream to compress data.
Stringifier interface which stringifies the objects using base64 encoding of the serialized version of the objects.
WritableComparable implementation.
Record implementation.
AbstractDelegationTokenIdentifier.
FileContext.delete(Path, boolean) except that Path f must be for this file system.
FileSystem.delete(Path, boolean) instead.
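The base64-of-serialized-bytes approach described above (the Stringifier entry) can be sketched with plain JDK serialization. This is a hedged illustration only: the class name below is invented, and Hadoop's own DefaultStringifier uses its Serialization framework rather than java.io serialization.

```java
import java.io.*;
import java.util.Base64;

// Illustrative sketch (not Hadoop's implementation): stringify an object
// by serializing it to bytes, then base64-encoding those bytes.
public class Base64StringifierSketch {
    public static String toString(Serializable obj) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj); // serialized form of the object
        }
        return Base64.getEncoder().encodeToString(bytes.toByteArray());
    }

    public static Object fromString(String s) throws IOException, ClassNotFoundException {
        byte[] raw = Base64.getDecoder().decode(s);
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(raw))) {
            return in.readObject(); // reverse: decode base64, then deserialize
        }
    }

    public static void main(String[] args) throws Exception {
        String encoded = toString("hello");
        System.out.println(fromString(encoded)); // round-trips the original value
    }
}
```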
Writer
The format of the output would be
{ "properties" : [ {key1,value1,key1.isFinal,key1.resource}, {key2,value2,
key2.isFinal,key2.resource}...
o is a ByteWritable with the same value.
o is a DoubleWritable with the same value.
o is an EnumSetWritable with the same value,
or both are null.
o is a FloatWritable with the same value.
o is a IntWritable with the same value.
o is a LongWritable with the same value.
o is an MD5Hash whose digest contains the
same values.
o is a ShortWritable with the same value.
o is a Text with the same contents.
o is a VIntWritable with the same value.
o is a VLongWritable with the same value.
InputFormat.
InputFormats.
OutputCommitter that commits files specified in job output directory i.e.
OutputCommitter that commits files specified in job output directory i.e.
OutputFormat.
OutputFormats that read from FileSystems.
FilterFileSystem contains some other file system, which it uses as its basic file system, possibly transforming the data along the way or providing additional functionality.
Application.
what in the backing buffer, starting at position start.
Counters.findCounter(String, String) instead
ApplicationMaster to notify the
ResourceManager about its completion (success or failure).
ResourceManager to an ApplicationMaster on its completion.
true if the end of the decompressed data output stream has been reached.
ResourceManager to abort submitted application.
Counters.makeEscapedCompactString() counter
representation into a counter object.
FSInputStream in a DataInputStream and buffers input through a BufferedInputStream.
OutputStream in a DataOutputStream, buffers output through a BufferedOutputStream and creates a checksum file.
FsAction.
FileSystem.
Throwable into a Runtime Exception.
FileSystem backed by an FTP client provided by Apache Commons Net.
FileSystem.delete(Path, boolean)
name property, null if
no such property exists.
name.
BytesWritable.getBytes() instead.
WritableComparable implementation.
ResourceManager.
ResourceManager.
ResourceManager to a client requesting an ApplicationReport for all applications.
Cluster.getAllJobStatuses() instead.
Container by the
ResourceManager.
NodeManager.
ContainerLaunchContext to describe the
Container with which the ApplicationMaster is
launched.
AMResponse sent by the ResourceManager.
ApplicationACLs for the application.
ApplicationACLs for the application.
ApplicationAttemptId being managed by the
ApplicationMaster.
ApplicationAttemptId being managed by the
ApplicationMaster.
ApplicationAttemptId of the application to which
the Container was assigned.
ApplicationId of the application.
ApplicationId allocated by the
ResourceManager.
ApplicationId of the application to be aborted.
ApplicationId of the ApplicationAttempId.
ApplicationId of the application.
ApplicationId of the submitted application.
ApplicationReport for all applications.
ResourceManager.
ApplicationReport for the application.
ResourceManager to get an ApplicationReport for an application.
ResourceManager to a client requesting an application report.
ApplicationSubmissionContext for the application.
ResourceRequest to update the
ResourceManager about the application's resource requirements.
attempt id of the Application.
SkipBadRecords.COUNTER_MAP_PROCESSED_RECORDS is incremented
by MapRunner after invoking the map function.
SkipBadRecords.COUNTER_REDUCE_PROCESSED_GROUPS is incremented
by framework after invoking the reduce function.
name property as a boolean.
Text.getLength() is
valid.
Resource on the node.
Resource capability of the request.
name property as a Class.
name property as a Class
implementing the interface specified by xface.
Class of the given object.
name property
as an array of Class.
ClassLoader for this job.
ApplicationMaster.
ResourceManager.
YarnClusterMetrics for the cluster.
ResourceManager.
ResourceManager to a client requesting cluster metrics.
ResourceManager.
ResourceManager.
ResourceManager to a client requesting a NodeReport for all nodes.
ResourceManager which is used to generate globally unique ApplicationId.
Compressor for the given CompressionCodec from the
pool or a new one.
Compressor needed by this CompressionCodec.
name.
Reader attached to the configuration resource with the
given name.
ContainerId of container for which to obtain the
ContainerStatus.
ContainerId of the container to be stopped.
ContainerId of container to be launched.
ContainerId of the container.
ContainerLaunchContext for the container to be started
by the NodeManager.
ApplicationMaster to request for
current status of a Container from the
NodeManager.
ContainerStatus of the container.
ApplicationMaster to the NodeManager to get ContainerStatus of a container.
NodeManager to the ApplicationMaster when asked to obtain the status of a container.
ContainerToken for the container.
ContentSummary of a given Path.
Counters.Counter of the given group with the given name.
Counters.Counter of the given group with the given name.
Counter for the given counterName.
Counter for the given groupName and
counterName.
Decompressor for the given CompressionCodec from the
pool or a new one.
Decompressor needed by this CompressionCodec.
ResourceManager.
GetDelegationTokenRequest request from the client.
Runnable that periodically empties the trash of all users, intended to be run by the superuser.
Runnable that periodically empties the trash of all
users, intended to be run by the superuser.
FileContext.getFileBlockLocations(Path, long, long) except that
Path f must be for this file system.
FileContext.getFileChecksum(Path) except that Path f must be for
this file system.
FileContext.getFileLinkStatus(Path)
except that an UnresolvedLinkException may be thrown if a symlink is
encountered in the path leading up to the final path component.
FileContext.getFileStatus(Path)
except that an UnresolvedLinkException may be thrown if a symlink is
encountered in the path.
name property as a float.
FileContext.getFsStatus(Path) except that Path f must be for this
file system.
FileContext.getFsStatus(Path).
FsAction.
RawComparator comparator for
grouping keys of inputs to the reduce.
ApplicationMaster is
running.
ApplicationMaster
is running.
ApplicationId
which is unique for all applications started by a particular instance
of the ResourceManager.
ContainerId.
InputFormat implementation for the map-reduce job,
defaults to TextInputFormat if not specified explicitly.
InputFormat class for the job.
Paths for the map-reduce job.
Paths for the map-reduce job.
InputSplit object for a map.
Job with no particular Cluster.
Job with no particular Cluster and a
given Configuration.
Job with no particular Cluster and a given jobName.
Job with no particular Cluster and given
Configuration and JobStatus.
Job.getInstance()
Job.getInstance(Configuration)
Job with no particular Cluster and given
Configuration and JobStatus.
name property as a List
of objects implementing the interface specified by xface.
name property as an int.
RunningJob object to track an ongoing job.
JobClient.getJob(JobID).
RunningJob.getID().
JobID object that this task attempt belongs to
JobID object that this tip belongs to
JobPriority for this job.
SequenceFileRecordReader.next(Object, Object).
KeyFieldBasedComparator options
KeyFieldBasedComparator options
KeyFieldBasedPartitioner options
KeyFieldBasedPartitioner options
InputSplit.
FileContext.getLinkTarget(Path);
LocalResource required by the container.
name property as a long.
name property as a long or
human readable format.
WrappedMapper.Context for custom implementations.
CompressionCodec for compressing the map outputs.
Mapper class for the job.
Mapper class for the job.
MapRunnable class for the job.
true.
JobClient.getMapTaskReports(JobID)
Resource allocated by the
ResourceManager in the cluster.
Resource allocated by the
ResourceManager in the cluster.
mapreduce.map.maxattempts
property.
mapred.map.max.attempts
property.
mapreduce.reduce.maxattempts
property.
mapred.reduce.max.attempts
property.
JobConf.getMemoryForMapTask() and
JobConf.getMemoryForReduceTask()
Resource allocated by the
ResourceManager in the cluster.
Resource allocated by the
ResourceManager in the cluster.
Resource
ApplicationId for
submitting new applications.
ApplicationId for submitting an application.
ResourceManager to the client for a request to get a new ApplicationId for submitting applications.
NodeHealthStatus of the node.
NodeId of the node.
NodeReport for all nodes in the cluster.
NodeManagers in the cluster.
FsAction.
OutputCommitter implementation for the map-reduce job,
defaults to FileOutputCommitter if not specified explicitly.
OutputCommitter for the task-attempt.
SequenceFile.CompressionType for the output SequenceFile.
SequenceFile.CompressionType for the output SequenceFile.
CompressionCodec for compressing the job outputs.
CompressionCodec for compressing the job outputs.
OutputFormat implementation for the map-reduce job,
defaults to TextOutputFormat if not specified explicitly.
OutputFormat class for the job.
RawComparator comparator used to compare keys.
Path to the output directory for the map-reduce job.
Path to the output directory for the map-reduce job.
WritableComparable comparator for
grouping keys of inputs to the reduce.
Object.hashCode() to partition.
BinaryComparable.getBytes() to partition.
Object.hashCode() to partition.
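The hash-based partitioning entries above reduce to one well-known expression: mask off the sign bit of the key's hash code and take it modulo the number of reduce tasks. A minimal sketch in plain Java, with an invented class name and no Hadoop dependency:

```java
// Sketch of the hashCode()-based partitioning rule: the mask with
// Integer.MAX_VALUE clears the sign bit, so keys with negative hash
// codes still map to a valid partition index in [0, numReduceTasks).
public class HashPartitionSketch {
    public static int getPartition(Object key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        // The same key is always routed to the same partition,
        // which is what groups equal keys at a single reducer.
        System.out.println(getPartition("apple", 4));
    }
}
```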
Partitioner used to partition Mapper-outputs
to be sent to the Reducers.
Partitioner class for the job.
Path for a file that is unique for
the task within the job output directory.
Path for a file that is unique for
the task within the job output directory.
name property as a Pattern.
Priority of the application.
Priority at which the Container was
allocated.
Priority of the request.
RecordReader consumed i.e.
ResourceManager.
QueueInfo for the specified queue.
ResourceManager.
ResourceManager to a client requesting information about queues in the system.
QueueState of the queue.
ResourceManager.
ResourceManager to get queue acls for the current user.
ResourceManager to clients seeking queue acls for the user.
name property, without doing variable expansion. If the key is deprecated, it returns the value of the first key which replaces the deprecated key and is not null.
ApplicationMaster reboot for being horribly out-of-sync with the ResourceManager as deemed by AMResponse.getResponseId()?
RecordReader for the given InputSplit.
RecordReader for the given InputSplit.
RecordWriter for the given job.
RecordWriter for the given job.
RecordWriter for the given task.
RecordWriter for the given task.
Reducer class for the job.
Reducer class for the job.
WrappedReducer.Context for custom implementations.
true.
JobClient.getReduceTaskReports(JobID)
ContainerId of unused containers being
released by the ApplicationMaster.
TaskType
Resource
URL for the named resource.
Resource allocated to the container.
Resource allocated to the container by the
ResourceManager.
ApplicationMaster
is responding.
ApplicationMaster.
SequenceFile
SequenceFile
SequenceFile
SequenceFile
NodeManager
BytesWritable.getLength() instead.
RawComparator comparator used to compare keys.
true.
FileInputFormat.listStatus(JobConf) when
they're too big.
ContainerState of the container.
ContainerState of the container.
FileSystem.getAllStatistics() instead
ContainerStatus of the container.
name property as
a collection of Strings.
name property as
an array of Strings.
name property as
an array of Strings.
TaskCompletionEvent.getTaskAttemptId() instead.
TaskID object that this task attempt belongs to
TaskID.getTaskIDsPattern(String, Integer, TaskType,
Integer)
TaskType corresponding to the character
ApplicationMaster.
name property as a trimmed String,
null if no such property exists.
name property as
a collection of Strings, trimmed of the leading and trailing whitespace.
name property as
an array of Strings, trimmed of the leading and trailing whitespace.
name property as
an array of Strings, trimmed of the leading and trailing whitespace.
LocalResourceType of the resource to be localized.
UMASK_LABEL config param has umask value that is either symbolic
or octal.
Resource on the node.
Resource
QueueACL for the given user.
QueueUserACLInfo per queue for the user.
FsAction.
SequenceFileRecordReader.next(Object, Object).
LocalResourceVisibility of the resource to be
localized.
Path to the task's temporary output directory
for the map-reduce job
Path to the task's temporary output directory
for the map-reduce job
YarnApplicationState of the application.
Groups.
Object.hashCode().
Object.hashCode().
Enum type, by the specified amount.
InputFormat describes the input-specification for a Map-Reduce job.
InputFormat describes the input-specification for a Map-Reduce job.
TotalOrderPartitioner.
InputSplit represents the data to be processed by an individual Mapper.
InputSplit represents the data to be processed by an individual Mapper.
Mapper that swaps keys and values.
Mapper that swaps keys and values.
FileStatus.isFile(), FileStatus.isDirectory(), and FileStatus.isSymlink() instead.
DNSToSwitchMapping instance being on a single
switch.
AbstractDNSToSwitchMapping.isMappingSingleSwitch(DNSToSwitchMapping)
Iterator to go through the list of String
key-value pairs in the configuration.
Serialization for Java Serializable classes.
RawComparator that uses a JavaSerialization Deserializer to deserialize objects that are then compared via their Comparable interfaces.
JobClient is the primary interface for the user-job to interact with the cluster.
JobConf, and connect to the default cluster
Configuration, and connect to the default cluster
KeyFieldBasedComparator.
KeyFieldBasedComparator.
InputFormat for plain text files.
InputFormat for plain text files.
ResourceManager to abort a submitted application.
ResourceManager to the client aborting a submitted application.
RunningJob.killTask(TaskAttemptID, boolean)
File.list().
File.listFiles().
FileContext.listLocatedStatus(Path) except that Path f
must be for this file system.
FileContext.Util.listStatus(Path) except that Path f must be
for this file system.
f is a file, this method will make a single call to S3.
FileContext.listStatus(Path) except that Path f must be for this
file system.
LocalResource represents a local resource required to run a container.
LocalResourceType specifies the type of a resource localized by the NodeManager.
LocalResourceVisibility specifies the visibility of a resource localized by the NodeManager.
Reducer that sums long values.
map(...) methods of the Mappers in the chain.
Mapper.
OutputFormat that writes MapFiles.
OutputFormat that writes MapFiles.
Level for the map task.
Level for the reduce task.
JobConf.MAPRED_MAP_TASK_ENV or
JobConf.MAPRED_REDUCE_TASK_ENV
JobConf.MAPRED_MAP_TASK_JAVA_OPTS or
JobConf.MAPRED_REDUCE_TASK_JAVA_OPTS
JobConf.MAPRED_JOB_MAP_MEMORY_MB_PROPERTY and
JobConf.MAPRED_JOB_REDUCE_MEMORY_MB_PROPERTY
JobConf.MAPRED_MAP_TASK_ULIMIT or
JobConf.MAPRED_REDUCE_TASK_ULIMIT
Mapper and Reducer implementations.
Mappers.
MapRunnable implementation.
MarkableIterator is a wrapper iterator class that implements the MarkableIteratorInterface.
MBeans.register(String, String, Object)
FileContext.mkdir(Path, FsPermission, boolean) except that the Path f must be fully qualified and the permission is absolute (i.e.
FileSystem.mkdirs(Path, FsPermission) with default permission.
InputFormat that returns MultiFileSplit's in MultiFileInputFormat.getSplits(JobConf, int) method.
InputFormat and Mapper for each path
InputFormat and Mapper for each path
IOException into an IOException
OutputCollector passed to the map() and reduce() methods of the Mapper and Reducer implementations.
FileSystem for reading and writing files stored on Amazon S3.
true if a preset dictionary is needed for decompression.
true if the input data buffer is empty and
Decompressor.setInput(byte[], int, int) should be called to
provide more input.
WritableComparable instance.
DBRecordReader.nextKeyValue()
NodeHealthStatus is a summary of the health status of the node.
NodeId is the unique identifier for a node.
NodeReport is a summary of runtime information of a node in the cluster.
FileContext.open(Path) except that Path f must be for this file system.
FileContext.open(Path, int) except that Path f must be for this file system.
FileSystem that uses Amazon S3 as a backing store.
FileSystem for reading and writing files on Amazon S3.
JMXJsonServlet class.
org.apache.hadoop.mapred package.
<key, value> pairs output by Mappers and Reducers.
OutputCommitter describes the commit of task output for a Map-Reduce job.
OutputCommitter describes the commit of task output for a Map-Reduce job.
OutputFormat describes the output-specification for a Map-Reduce job.
OutputFormat describes the output-specification for a Map-Reduce job.
FileSystem.
QueueACL enumerates the various ACLs for queues.
QueueUserACLInfo provides information QueueACL for the given user.
RawComparator.
Comparator that operates directly on byte representations of objects.
FsPermission from DataInput.
in.
in.
in.
in.
in.
in.
in.
in.
ResultSet.
in.
in.
CompressedWritable.readFields(DataInput).
FSDataInputStream.readFully(long, byte[], int, int).
Writable, String, primitive type, or an array of
the preceding.
Writable, String, primitive type, or an array of
the preceding.
Record comparison implementation.
RecordReader reads <key, value> pairs from an InputSplit.
Mapper.
RecordWriter writes the output <key, value> pairs to an output file.
RecordWriter writes the output <key, value> pairs to an output file.
reduce(...) method of the Reducer with the map(...) methods of the Mappers in the chain.
Reducer.
Mapper that extracts text matching a regular expression.
Mapper that extracts text matching a regular expression.
ApplicationMaster to register with the ResourceManager.
ApplicationMaster to ResourceManager on registration.
ResourceManager to a new ApplicationMaster on registration.
FileContext.rename(Path, Path, Options.Rename...) except that Path f must be for this file system.
FileContext.rename(Path, Path, Options.Rename...) except that Path
f must be for this file system and NO OVERWRITE is performed.
FileContext.rename(Path, Path, Options.Rename...) except that Path
f must be for this file system.
Token.renew(org.apache.hadoop.conf.Configuration) instead
Token.renew(org.apache.hadoop.conf.Configuration) instead
Resource models a set of computer resources in the cluster.
ResourceRequest represents the request made by an application to the ResourceManager to obtain various Container allocations.
Compressor to the pool.
Decompressor to the pool.
Reducer.run(org.apache.hadoop.mapreduce.Reducer.Context) method to
control how the reduce task works.
Tool by Tool.run(String[]), after
parsing with the given generic arguments.
Tool with its Configuration.
RunningJob is the user-interface to query for details on a running Map-Reduce job.
FileSystem backed by Amazon S3.
S3FileSystem.
DNSToSwitchMapping interface using a script configured via the CommonConfigurationKeysPublic.NET_TOPOLOGY_SCRIPT_FILE_NAME_KEY option.
SequenceFiles are flat files consisting of binary key/value pairs.
OutputFormat that writes keys, values to SequenceFiles in binary (raw) format
OutputFormat that writes keys, values to SequenceFiles in binary (raw) format
InputFormat for SequenceFiles.
InputFormat for SequenceFiles.
OutputFormat that writes SequenceFiles.
OutputFormat that writes SequenceFiles.
RecordReader for SequenceFiles.
RecordReader for SequenceFiles.
value of the name property.
Container by the
ResourceManager.
ContainerLaunchContext to describe the
Container with which the ApplicationMaster is
launched.
ApplicationACLs for the application.
ApplicationACLs for the application.
ApplicationAttemptId being managed by the
ApplicationMaster.
ApplicationAttemptId being managed by the
ApplicationMaster.
ApplicationId of the application
ApplicationId of the submitted application.
ApplicationSubmissionContext for the application.
SkipBadRecords.COUNTER_MAP_PROCESSED_RECORDS is incremented
by MapRunner after invoking the map function.
SkipBadRecords.COUNTER_REDUCE_PROCESSED_GROUPS is incremented
by framework after invoking the reduce function.
name property to a boolean.
Resource capability of the request
name property to the name of a
theClass implementing the given interface xface.
ContainerId of container for which to obtain the
ContainerStatus
ContainerId of the container to be stopped.
ContainerId of container to be launched.
ContainerLaunchContext for the container to be started
by the NodeManager
name property to the given type.
name property to a float.
Reducer.reduce(Object, Iterable,
org.apache.hadoop.mapreduce.Reducer.Context)
ApplicationMaster is
running.
InputFormat implementation for the map-reduce job.
InputFormat for the job.
Paths as the list of inputs
for the map-reduce job.
Paths as the list of inputs
for the map-reduce job.
name property to an int.
JobPriority for this job.
KeyFieldBasedComparator options used to compare keys.
KeyFieldBasedComparator options used to compare keys.
KeyFieldBasedPartitioner options used for
Partitioner
KeyFieldBasedPartitioner options used for
Partitioner
bytes[offset:] in Python syntax.
LocalResource required by the container.
name property to a long.
CompressionCodec for the map outputs.
Mapper class for the job.
Mapper for the job.
MapRunnable class for the job.
JobConf.setMemoryForMapTask(long mem) and
JobConf.setMemoryForReduceTask(long mem)
bytes[left:(right+1)] in Python syntax.
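The Python-slice notation used in these key-field entries maps directly onto Arrays.copyOfRange in Java, whose upper bound is exclusive. A minimal sketch with an invented class name, shown only to pin down the index convention:

```java
import java.util.Arrays;

// bytes[left:(right+1)] in Python syntax selects indices left..right
// inclusive; Arrays.copyOfRange takes an exclusive upper bound, so the
// equivalent Java call is copyOfRange(bytes, left, right + 1).
public class ByteSliceSketch {
    public static byte[] slice(byte[] bytes, int left, int right) {
        return Arrays.copyOfRange(bytes, left, right + 1);
    }

    public static void main(String[] args) {
        byte[] data = {10, 20, 30, 40, 50};
        // Indices 1..3 inclusive -> elements 20, 30, 40.
        System.out.println(Arrays.toString(slice(data, 1, 3))); // [20, 30, 40]
    }
}
```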
OutputCommitter implementation for the map-reduce job.
SequenceFile.CompressionType for the output SequenceFile.
SequenceFile.CompressionType for the output SequenceFile.
CompressionCodec to be used to compress job outputs.
CompressionCodec to be used to compress job outputs.
OutputFormat implementation for the map-reduce job.
OutputFormat for the job.
RawComparator comparator used to compare keys.
Path of the output directory for the map-reduce job.
Path of the output directory for the map-reduce job.
RawComparator comparator for
grouping keys in the input to the reduce.
FileContext.setOwner(Path, String, String) except that Path f must
be for this file system.
Partitioner class used to partition
Mapper-outputs to be sent to the Reducers.
Partitioner for the job.
Pattern.
FileContext.setPermission(Path, FsPermission) except that Path f
must be for this file system.
Priority of the application.
Priority of the request
Reducer class to the chain job.
Reducer class for the job.
Reducer for the job.
FileContext.setReplication(Path, short) except that Path f must be
for this file system.
Resource allocated to the container by the
ResourceManager.
bytes[:(offset+1)] in Python syntax.
ApplicationMaster is
responding.
SequenceFile
SequenceFile
SequenceFile
SequenceFile
NodeManager.
Reducer.
name property as comma delimited values.
TaskCompletionEvent.setTaskAttemptId(TaskAttemptID) instead.
FileContext.setTimes(Path, long, long) except that Path f must be
for this file system.
ApplicationMaster.
LocalResourceType of the resource to be localized.
FileContext.setVerifyChecksum(boolean, Path) except that Path f
must be for this file system.
LocalResourceVisibility of the resource to be
localized.
AbstractCounters.countCounters() instead
ApplicationMaster requests a NodeManager
to start a Container allocated to it using this interface.
ApplicationMaster to the NodeManager to start a container.
NodeManager to the ApplicationMaster when asked to start an allocated container.
fileName attribute, if specified.
ApplicationMaster requests a NodeManager
to stop a Container allocated to it using this interface.
ApplicationMaster to the NodeManager to stop a container.
NodeManager to the ApplicationMaster when asked to stop an allocated container.
ResourceManager.
ResourceManager.
ResourceManager to a client on application submission.
Submitter.runJob(JobConf)
TaskID.
TaskAttemptID.TaskAttemptID(String, int, TaskType, int, int).
TaskID.
TaskID.TaskID(String, int, TaskType, int)
TaskID.TaskID(org.apache.hadoop.mapreduce.JobID, TaskType,
int)
JobID.
JobID.
InputFormat for plain text files.
InputFormat for plain text files.
OutputFormat that writes plain text files.
OutputFormat that writes plain text files.
Mapper that maps text values into
Tools.
Writables.
Writables.
URL represents a serializable URL.
S3FileSystem.
VersionedWritable.readFields(DataInput) when the version of an object being read does not match the current implementation version as returned by VersionedWritable.getVersion().
FileSystem.createFileSystem(URI, Configuration)
After this constructor is called initialize() is called.
Mapper which wraps a given one to allow custom WrappedMapper.Context implementations.
Reducer which wraps a given one to allow for custom WrappedReducer.Context implementations.
DataInput and DataOutput.
Writable which is also Comparable.
WritableComparables.
WritableComparable implementation.
Serialization for Writables that delegates to Writable.write(java.io.DataOutput) and Writable.readFields(java.io.DataInput).
out.
out.
out.
out.
out.
out.
out.
PreparedStatement.
out.
CompressedWritable.write(DataOutput).
Writable, String, primitive type, or an array of
the preceding.
Writable, String, primitive type, or an array of
the preceding.
OutputStream.
Writer.
ApplicationMaster.
YarnClusterMetrics represents cluster metrics.