Let's talk about how to set up an Apache Hadoop cluster on AWS.

In a previous article, we discussed setting up a Hadoop processing pipeline on a single node (laptop). That involved running all the components of Hadoop on a single machine. In the setup we discuss here, we build a multi-node cluster to run processing jobs. Our setup involves a single NameNode and three DataNodes which serve as processing slaves. Starting with setting up the AWS EC2 resources, we take you all the way through the complete configuration of the machines in this arrangement. We use Apache Hadoop 2.7.3 for this demonstration.

Sign up for an AWS account if you don't already have one. You get some resources free for the first year, including an EC2 Micro Instance.

We will now create 4 instances of Ubuntu Server 16.04 LTS using Amazon EC2. Go to your AWS Console, click on Launch Instance and select Ubuntu Server 16.04 LTS. For the instance type, we choose t2.micro since that is sufficient for the purposes of the demo. If you have a need for a high-memory or high-CPU instance, you can select one of those instead. Click Next to Configure Instance Details.

Instance Details

Here, we request 4 instances of the selected machine type. We also choose a subnet (us-west-1b) just so we can launch into the same location if we need more machines.

For our purpose, the default instance storage of 8GB is sufficient. If you need more storage, either increase the size or attach a disk by clicking "Add Volume". If you add a volume, you will need to attach it to your instance, format it and mount it. Since this is a beginner tutorial, those steps are not covered here. Click Next to add tags to your instances.

Instance Tags

A tag allows you to identify your instance with a name of your choosing. Click Add Tag, set the Key to "Name" and the Value to "Hadoop". We will use this tag to re-label our instances as "namenode", "datanode1" and so on later on.
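The console steps above (four t2.micro Ubuntu instances in one subnet, tagged Name=Hadoop) can also be expressed as a single AWS CLI call. This is only a sketch: the AMI ID and subnet ID below are placeholders, so substitute the current Ubuntu Server 16.04 LTS AMI for your region and your own subnet.

```shell
# Sketch of the same launch via the AWS CLI (placeholder IDs -- replace
# ami-xxxxxxxx with the Ubuntu 16.04 LTS AMI for your region and
# subnet-xxxxxxxx with your subnet in us-west-1b).
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t2.micro \
    --count 4 \
    --subnet-id subnet-xxxxxxxx \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=Hadoop}]'
```

Tagging at launch with `--tag-specifications` saves the separate Add Tag step; you can later change each instance's Name tag to "namenode", "datanode1" and so on, just as in the console.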
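For readers who do add an extra volume, the attach/format/mount steps mentioned above look roughly like the following. This is a hedged sketch: it assumes the volume shows up as /dev/xvdf (the device name the EC2 console suggests) and uses /mnt/data as an example mount point.

```shell
# Assumed device name /dev/xvdf and example mount point /mnt/data --
# check the actual device name in the EC2 console before running.
sudo mkfs.ext4 /dev/xvdf        # format the new volume with an ext4 filesystem
sudo mkdir -p /mnt/data         # create a mount point for Hadoop data
sudo mount /dev/xvdf /mnt/data  # mount the volume
```

To make the mount survive a reboot, you would also add a matching entry to /etc/fstab.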