This post covers setting up the Hadoop configuration and editing the deployment configuration files for MapReduce and HDFS.
Here are the steps to set up the configuration files:
1. Edit and source the files included in the companion files package, including the script files and configuration files. Alternatively, copy their contents into ~/.bash_profile and set the environment variables in your environment.
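As a minimal sketch, the entries in ~/.bash_profile might look like the following; the paths are assumptions and should be replaced with the values for your installation:

```bash
# Hypothetical entries for ~/.bash_profile; adjust all paths to your setup.
export JAVA_HOME=/usr/java/default          # assumed JDK install location
export HADOOP_CONF_DIR=/etc/hadoop/conf     # assumed Hadoop configuration directory
export PATH=$PATH:$JAVA_HOME/bin
```

After editing, run `source ~/.bash_profile` so the variables take effect in the current shell.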
2. Extract the files from the configuration_files/core_hadoop directory of the downloaded companion files into a temporary directory, for example:
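A minimal sketch of this step, assuming the companion files were unpacked under ~/companion-files (a hypothetical path):

```bash
# Copy the sample configuration files into a temporary working directory.
mkdir -p /tmp/hadoop-conf
cp ~/companion-files/configuration_files/core_hadoop/* /tmp/hadoop-conf/
```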
3. Make modifications to the configuration files.
In the temporary directory, locate the following files and modify their properties to match your environment. Look for TODO markers in each file to find the properties to replace.
A. Edit the core-site.xml file and modify the listed properties:
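As one illustrative sketch, fs.default.name, which points clients at the NameNode in Hadoop 1.x, is a property typically set in core-site.xml; the hostname and port below are assumptions:

```xml
<!-- Goes inside the existing <configuration> element of core-site.xml. -->
<property>
  <name>fs.default.name</name>
  <!-- TODO: replace with your NameNode's hostname and port -->
  <value>hdfs://namenode.example.com:8020</value>
</property>
```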
B. Next, edit the hdfs-site.xml file and modify the following properties:
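For example, the HDFS storage directories are typically configured here; the property names shown are the Hadoop 1.x forms, and the paths below are assumptions:

```xml
<!-- Goes inside the existing <configuration> element of hdfs-site.xml. -->
<property>
  <name>dfs.name.dir</name>
  <!-- TODO: replace with the local directories for NameNode metadata -->
  <value>/hadoop/hdfs/namenode</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <!-- TODO: replace with the local directories for DataNode blocks -->
  <value>/hadoop/hdfs/data</value>
</property>
```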
C. Now edit the mapred-site.xml file and modify the following properties:
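For example, the JobTracker address is typically set here; the hostname and port below are assumptions:

```xml
<!-- Goes inside the existing <configuration> element of mapred-site.xml. -->
<property>
  <name>mapred.job.tracker</name>
  <!-- TODO: replace with your JobTracker's hostname and port -->
  <value>jobtracker.example.com:50300</value>
</property>
```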
D. Edit the taskcontroller.cfg file and modify the following properties:
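As a hedged sketch, a taskcontroller.cfg for a Hadoop 1.x cluster commonly carries entries like these; all values below are assumptions to adapt:

```
# TODO: local directories used by the TaskTracker (assumed path)
mapred.local.dir=/hadoop/mapred
# TODO: directory where TaskTracker logs are written (assumed path)
hadoop.log.dir=/var/log/hadoop/mapred
# TODO: the group the TaskTracker process runs as (assumed value)
mapreduce.tasktracker.group=hadoop
```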
4. You may now copy the configuration files.
a. Replace your installed Hadoop configuration files with the modified core_hadoop configuration files.
b. Copy the modified configuration files to $HADOOP_CONF_DIR on all nodes, as in the sketch below.
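A minimal sketch of sub-steps a and b, assuming the modified files sit in /tmp/hadoop-conf, $HADOOP_CONF_DIR is set (for example to /etc/hadoop/conf), and node1 through node3 are placeholder hostnames:

```bash
# Replace the installed configuration on this node with the modified files.
cp /tmp/hadoop-conf/* "$HADOOP_CONF_DIR"/

# Push the same files to every other node in the cluster.
for host in node1 node2 node3; do
  scp /tmp/hadoop-conf/* "$host:$HADOOP_CONF_DIR/"
done
```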
c. To set appropriate permissions, use the following code:
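The exact commands depend on your users and groups; as a hedged sketch, with the hadoop group and the commonly documented taskcontroller.cfg ownership rules assumed:

```bash
# Make the configuration directory readable by the Hadoop daemons.
chmod -R 755 "$HADOOP_CONF_DIR"

# taskcontroller.cfg must be owned by root and not writable by others;
# root:hadoop ownership and mode 400 are the commonly documented settings.
chown root:hadoop "$HADOOP_CONF_DIR/taskcontroller.cfg"
chmod 400 "$HADOOP_CONF_DIR/taskcontroller.cfg"
```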
In this way, you can set up a big data Hadoop configuration. To learn more about Hadoop development, contact the Aegis experts, who offer Hadoop-related services at competitive rates.