Big Data on Amazon: Elastic MapReduce, step by step

Summary: Curious how to go about running Hadoop in Amazon's cloud? Here's some step-by-step guidance.

  • HBase prompt and "grunt"

    Use the "hbase shell" command to get to the HBase prompt or use the "pig" command to get to the Pig prompt (called "grunt").

    Although not shown here, you can also use the "hadoop fs" command to perform Hadoop Distributed File System (HDFS) operations and, of course, the "hadoop jar" command to run a Hadoop MapReduce job. Sample commands for all of these appear below.

  • Change termination protection

    When you're all done, don't forget to terminate the instances in your cluster; otherwise, you will continue to be billed for them! To terminate the instances, you'll first need to select the Change Termination Protection option in the Actions menu, shown here. (A command-line alternative is sketched below.)

  • Disable termination protection

    Now click the Yes, Disable button.
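
For reference, here is a minimal sketch of the commands mentioned in the "HBase prompt and grunt" slide, as you might type them in an SSH session on the cluster's master node. The sample file paths, directory names and jar name are placeholders for illustration, not values taken from the slides:

    # Open the interactive HBase shell
    hbase shell

    # Open the interactive Pig shell, known as "grunt"
    pig

    # A few HDFS operations with the "hadoop fs" command
    hadoop fs -ls /                               # list the root of HDFS
    hadoop fs -mkdir /user/hadoop/input           # create a directory (placeholder path)
    hadoop fs -put data.txt /user/hadoop/input/   # copy a local file into HDFS

    # Run a MapReduce job packaged as a jar (jar name, class and arguments are placeholders)
    hadoop jar wordcount.jar WordCount /user/hadoop/input /user/hadoop/output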

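The last two slides disable termination protection and terminate the cluster through the EMR console. If you prefer the command line, a rough equivalent using the AWS CLI (not covered in the slides) might look like the following sketch; the cluster ID is a placeholder:

    # Turn off termination protection for the cluster (cluster ID is a placeholder)
    aws emr set-termination-protection --cluster-ids j-XXXXXXXXXXXXX --no-termination-protected

    # Terminate the cluster so its instances stop accruing charges
    aws emr terminate-clusters --cluster-ids j-XXXXXXXXXXXXX

    # Check that the cluster is shutting down
    aws emr describe-cluster --cluster-id j-XXXXXXXXXXXXX --query Cluster.Status.State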

Andrew Brust

About Andrew Brust

Andrew J. Brust has worked in the software industry for 25 years as a developer, consultant, entrepreneur and CTO, specializing in application development, databases and business intelligence technology.
