Manish is a Data Warehouse/Business Intelligence solution expert. He has a Master's degree in Computer Applications and 11+ years of experience gained through associations with organizations such as Mahindra Satyam, Verizon, ITC Infotech and Aditi Technologies. While with these companies, he had the opportunity to work with well-known names such as Microsoft (R&D, AdCenter India) and Danske Bank Denmark. He is MCITP (SQL 2008 Business Intelligence Developer) certified, and his expertise includes design and architecture definition for BI/DW solutions. He has successfully implemented BI and DW projects spanning ETL, data modeling, OLAP, data analysis, data quality, analytical reporting, data mining, and dashboard design and implementation, as well as fine-tuning complex queries and databases. Manish's hobbies include writing and reading Hindi poems, cooking, hiking, sports, philanthropy, sharing technical knowledge and listening to music.
This blog post is not aimed at the pure Python enthusiast. In this article I will describe the usefulness of Python in Big Data and Hadoop environments. To discover more about Python, please visit the official Python website (http://www.python.org/).
Python is a powerful, flexible, open-source language that is easy to learn, easy to use, and backed by powerful libraries for data manipulation and analysis. Its simple syntax is accessible to programming novices and will look familiar to anyone with experience in C/C++, Java, or Visual Basic. Python has a rare combination of qualities: it is a capable general-purpose programming language that is also easy to use for analytical and quantitative computing. It is one of the most popular languages in the world and is widely used at Google.
In addition to Java, we can write map and reduce functions in other languages and invoke them using an API known as Hadoop Streaming. Streaming is based on the concept of UNIX streaming, where input is read from standard input, and output is written to standard output. These data streams represent the interface between Hadoop and our applications. The Streaming interface lends itself best to short and simple applications we would typically develop using a scripting language such as Python. A major reason for this is the text-based nature of the data flow, where each line of text represents a single record.
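A word-count job is the canonical way to see this line-oriented contract in action. The sketch below is an illustrative example, not code from the original post: the map and reduce logic are kept in plain functions here, whereas in a real Streaming job each would live in its own script, reading lines from sys.stdin and printing results to stdout.

```python
# Illustrative word-count logic for Hadoop Streaming. In production,
# these two functions would be split into separate mapper and reducer
# scripts that read from sys.stdin and print to stdout.

def mapper(lines):
    # Emit one tab-separated "word\t1" pair per word on each input line.
    for line in lines:
        for word in line.split():
            yield "%s\t1" % word

def reducer(lines):
    # Hadoop sorts the mapper output by key before the reduce phase,
    # so all pairs for a given word arrive on consecutive lines.
    current, total = None, 0
    for line in lines:
        word, count = line.rstrip("\n").split("\t")
        if word != current:
            if current is not None:
                yield "%s\t%d" % (current, total)
            current, total = word, 0
        total += int(count)
    if current is not None:
        yield "%s\t%d" % (current, total)

# Local simulation of the map -> shuffle/sort -> reduce cycle,
# with sorted() standing in for Hadoop's shuffle phase:
pairs = sorted(mapper(["the cat sat", "the cat"]))
print(list(reducer(pairs)))  # ['cat\t2', 'sat\t1', 'the\t2']
```

Note that the reducer never holds the whole dataset in memory: because the input arrives sorted by key, it only tracks a running total for the current word, which is what makes this pattern scale.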
The example below shows how to run map and reduce functions written in Python using Hadoop Streaming (the original command broke off after the mapper line; the reducer script name added here is an illustrative counterpart to the mapper's):
hadoop jar contrib/streaming/hadoop-streaming.jar \
-input input/dataset.txt \
-output output \
-mapper text_processor_map.py \
-reducer text_processor_reduce.py
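Because Streaming mappers and reducers are ordinary filters on standard input and output, a job can be dry-run locally with shell pipes before it is submitted to the cluster. The snippet below is a sketch under that assumption: it writes two throwaway word-count scripts (illustrative stand-ins, not the scripts from the command above) and chains them with sort playing the role of Hadoop's shuffle phase.

```shell
# Dry-run a Streaming job locally: `sort` stands in for Hadoop's
# shuffle/sort step between the map and reduce phases.
# (Script bodies are illustrative word-count stand-ins.)
cat > map.py <<'EOF'
import sys
for line in sys.stdin:
    for word in line.split():
        print("%s\t1" % word)
EOF

cat > reduce.py <<'EOF'
import sys
current, total = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t")
    if word != current:
        if current is not None:
            print("%s\t%d" % (current, total))
        current, total = word, 0
    total += int(count)
if current is not None:
    print("%s\t%d" % (current, total))
EOF

# Pipe sample input through the full map -> sort -> reduce chain:
printf 'the cat sat\nthe cat\n' | python3 map.py | sort | python3 reduce.py
```

If the local pipe produces the expected counts, the same two scripts can be handed to hadoop-streaming.jar unchanged, which is what makes this development loop so quick.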
Please refer to the link below for an excellent and detailed example of a MapReduce program in Python:
I will end here, since exploring and learning a language's syntax in depth is each programmer's own choice. Before wrapping up, I would like to mention that Python is easy for analysts to learn and use, yet powerful enough to tackle even the most difficult problems in virtually any domain. It integrates well with existing IT infrastructure, and, last and most important, it is platform independent. The agility and productivity of Python-based solutions are changing the world of Big Data and Hadoop.