Big Data on Amazon: Elastic MapReduce, step by step

Summary: Curious how to go about doing Hadoop in Amazon's cloud? Here's some guidance.

Image 21 of 29

  • Log in!

    You're almost there!  When the "login as:" prompt appears in PuTTY's terminal window, enter "hadoop" (without the quotes) and press Enter.  That should log you in.
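If you prefer a command-line SSH client to PuTTY, the equivalent login looks something like this. The key file name and the master node's DNS name below are placeholders; substitute your own EC2 key pair file and the master public DNS name shown in the EMR console.

```shell
# Restrict key permissions (SSH refuses keys that are world-readable),
# then log in to the EMR master node as the "hadoop" user.
# "mykeypair.pem" and the host name are placeholders -- use your own.
chmod 400 mykeypair.pem
ssh -i mykeypair.pem hadoop@ec2-XX-XX-XX-XX.compute-1.amazonaws.com
```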

  • Welcome to your cluster

    Upon successful login, you should see a welcome screen and a command prompt.  Take the message telling you how to access the Hadoop "UI" with a grain of salt, however: that user interface is rendered in Lynx, a text-based Web browser.
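If you do want to peek at the Hadoop UI from the cluster shell, Lynx can open it locally. The port below is an assumption for illustration; use whichever address the welcome message actually prints.

```shell
# Open the Hadoop web UI in Lynx, the text-based browser.
# Port 9100 is an assumption -- copy the URL from the welcome message.
lynx http://localhost:9100/
```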

  • The bin folder

    Switch to the bin folder (using the "cd bin" command) and list its contents (using the "ls" command).  You will see that Hadoop, HBase, Hive and Pig are all neatly installed for you.

    They're ready to run, too.  To check this out, enter the "hive" command, and you'll be placed at Hive's command-line prompt.
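From Hive's prompt you can run a quick sanity check like the one sketched below. The table name here is made up for illustration; type each statement at the hive> prompt.

```sql
SHOW TABLES;                                  -- list existing tables
CREATE TABLE demo_tbl (id INT, name STRING);  -- create a throwaway table
DESCRIBE demo_tbl;                            -- confirm the schema
DROP TABLE demo_tbl;                          -- clean up after yourself
```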

