Developments in the medical field are producing massive amounts of medical data. In 2002, the Department of Radiology of a hospital in Geneva produced more than 12,000 images a day. These medical datasets are available for further exploration and research, with far-reaching impact on the design and execution of health programs. The information extracted from medical datasets paves the way for health administration, e-health diagnosis, and therapy, so there is an urgent need to intensify research on medical data. Medical data is a large and growing industry, and its volume typically lies in the terabyte range. Such big data raises many challenges owing to its volume, variety, velocity, value, and variability. Moreover, traditional file management systems are proving incapable of managing unstructured, variable, and complex big data; handling it is a cumbersome and time-consuming task that requires new computing techniques. The exponential growth of medical data has therefore necessitated a paradigm shift in how data is managed and processed. Recent technological advances have changed how big data is stored and processed, and this motivated us to seek new solutions for managing voluminous medical datasets and extracting valuable information from them efficiently.

Hadoop is a top-level Apache project written in Java. It was developed by Doug Cutting as a collection of open-source projects and is now used on massive amounts of unstructured data. With Hadoop, data that was previously difficult to analyze can be harnessed, since Hadoop can process extremely large datasets with changing structure. The Hadoop ecosystem includes modules such as HBase, Pig, HCatalog, Hive, ZooKeeper, Oozie, and Kafka, but its core big-data paradigms are the Hadoop Distributed File System (HDFS) and MapReduce.
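To make the HDFS/MapReduce pairing concrete, the sketch below shows a minimal Hadoop MapReduce job in Java. It is only an illustration of the paradigm, not a method from this text: it assumes a hypothetical CSV file of medical-image metadata stored in HDFS, with the imaging modality (e.g. CT, MRI) in the second column, and counts the number of images per modality. The class names (ModalityCount, ModalityMapper, SumReducer) and the input layout are assumptions made for the example.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ModalityCount {

  // Mapper: reads one line of the (hypothetical) metadata CSV and
  // emits (modality, 1); the modality is assumed to be column 2.
  public static class ModalityMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text modality = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      String[] fields = value.toString().split(",");
      if (fields.length > 1) {
        modality.set(fields[1].trim());
        context.write(modality, ONE);
      }
    }
  }

  // Reducer: sums the per-modality counts produced by the mappers.
  public static class SumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "modality count");
    job.setJarByClass(ModalityCount.class);
    job.setMapperClass(ModalityMapper.class);
    job.setCombinerClass(SumReducer.class); // sums are associative, so reuse the reducer
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The division of labor here is the essence of the paradigm: HDFS splits the input file into blocks distributed across the cluster, mappers run locally on each block, and the framework shuffles intermediate (key, value) pairs to reducers by key, so the job scales with data volume without the programmer managing distribution explicitly.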