<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:media="http://search.yahoo.com/mrss/"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Map - 四号程序员</title>
	<atom:link href="https://www.coder4.com/archives/tag/map/feed" rel="self" type="application/rss+xml" />
	<link>https://www.coder4.com</link>
	<description>Keep It Simple and Stupid</description>
	<lastBuildDate>Thu, 12 Nov 2020 09:33:26 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.3</generator>
	<item>
		<title>Customizing FileOutputFormat for Hadoop MapReduce Jobs</title>
		<link>https://www.coder4.com/archives/7121</link>
		<comments>https://www.coder4.com/archives/7121#respond</comments>
		<dc:creator><![CDATA[coder4]]></dc:creator>
		<pubDate>Thu, 12 Nov 2020 09:33:09 +0000</pubDate>
		<category><![CDATA[Big Data]]></category>
		<category><![CDATA[FileOutputFormat]]></category>
		<category><![CDATA[Hadoop]]></category>
		<category><![CDATA[Map]]></category>
		<category><![CDATA[Reduce]]></category>
		<category><![CDATA[Customization]]></category>
		<guid isPermaLink="false">https://www.coder4.com/?p=7121</guid>

		<description><![CDATA[Requirement: have Reduce emit results in a custom format. For example: load the Reducer's results into a Guava BloomFilter. import com.google.common.hash.BloomFilter; import com.google.common.hash.Funnels; import org.apache.hadoop.fs.FSDataOutputStream; import org.apache.hadoop.fs.FileSystem; import org.apache.h[......] Continue reading]]></description>
		<wfw:commentRss>https://www.coder4.com/archives/7121/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
	</item>
	<item>
		<title>Injecting a Multi-Level Map from Configuration into a Bean with SpringBoot</title>
		<link>https://www.coder4.com/archives/5819</link>
		<comments>https://www.coder4.com/archives/5819#respond</comments>
		<dc:creator><![CDATA[coder4]]></dc:creator>
		<pubDate>Wed, 22 Nov 2017 05:35:26 +0000</pubDate>
		<category><![CDATA[Java]]></category>
		<category><![CDATA[Map]]></category>
		<category><![CDATA[SpringBoot]]></category>
		<category><![CDATA[Injection]]></category>
		<guid isPermaLink="false">https://www.coder4.com/?p=5819</guid>

		<description><![CDATA[Suppose we want a two-level map: type -> level -> score. First, the configuration: xxxx.old.type2Level2ScoreMap: type_1.level2ScoreMap.level_1: 1 type_1.level2ScoreMap.level_2: 2 type_2.level2ScoreMap.level_1: 1 type_3.level2ScoreMap.level_1: 1 First, define two data structures; note that the field names must match exactly, layer[......] Continue reading]]></description>
		<wfw:commentRss>https://www.coder4.com/archives/5819/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
	</item>
	<item>
		<title>How to Control the Number of Maps in Hadoop</title>
		<link>https://www.coder4.com/archives/4242</link>
		<comments>https://www.coder4.com/archives/4242#respond</comments>
		<dc:creator><![CDATA[coder4]]></dc:creator>
		<pubDate>Tue, 13 May 2014 08:57:06 +0000</pubDate>
		<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Hadoop]]></category>
		<category><![CDATA[Map]]></category>
		<category><![CDATA[Count]]></category>
		<guid isPermaLink="false">http://www.coder4.com/?p=4242</guid>

		<description><![CDATA[Reposted from: How to control the number of maps in hadoop. Hadoop provides a parameter for setting the number of maps, mapred.map.tasks, which we can use to control the map count. However, setting the number of maps this way does not always take effect, because mapred.map.tasks is only a reference value for Hadoop; the final number of maps also depends on other factors. To ease the discussion, first a few terms: block_size: the HDFS block size, 64M by default, settable via the dfs.block.size parameter. total_size[......] Continue reading]]></description>
		<wfw:commentRss>https://www.coder4.com/archives/4242/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
	</item>
	<item>
		<title>Testing a Small Hadoop Cluster (5 Nodes)</title>
		<link>https://www.coder4.com/archives/2021</link>
		<comments>https://www.coder4.com/archives/2021#respond</comments>
		<dc:creator><![CDATA[coder4]]></dc:creator>
		<pubDate>Sun, 07 Aug 2011 05:59:09 +0000</pubDate>
		<category><![CDATA[Java]]></category>
		<category><![CDATA[Python]]></category>
		<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Hadoop]]></category>
		<category><![CDATA[Map]]></category>
		<category><![CDATA[Reduce]]></category>
		<category><![CDATA[Example]]></category>
		<category><![CDATA[Cluster]]></category>
		<guid isPermaLink="false">http://www.coder4.com/?p=2021</guid>

		<description><![CDATA[1. The Map/Reduce task. Input: files with the format "id value", where id is a random integer in 1~100 and value is a random float in 1~100. Output: the maximum value for each id. Such files can be generated with Python; see the appendix at the end of this post. 2. The Map/Reduce program. Here we use the new (0.20.2) API directly, i.e. the interfaces under org.apache.hadoop.mapreduce.*. Note in particular: job.setNumReduceTasks(5) sets this Job's Redu[......] Continue reading]]></description>
		<wfw:commentRss>https://www.coder4.com/archives/2021/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
	</item>
	</channel>
</rss>
