Creating Large Size of Data with Apache Hadoop
Result identifiers
Result code in IS VaVaI
RIV/61989100:27240/17:10236715 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F61989100%3A27240%2F17%3A10236715)
Alternative codes found
RIV/61989100:27350/17:10236715 RIV/61989100:27740/17:10236715
Result on the web
http://dx.doi.org/10.1007/978-3-319-45123-7_22
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1007/978-3-319-45123-7_22" target="_blank" >10.1007/978-3-319-45123-7_22</a>
Alternative languages
Result language
English
Title in original language
Creating Large Size of Data with Apache Hadoop
Result description in original language
The paper is focused on research in the area of building large datasets using Apache Hadoop. Our team manages an information system that is able to calculate the probability of existence of different objects in space and time. The system works with many different data sources, including large datasets. The data-processing workflow is quite complicated and time consuming, so we were looking for a framework that could help with system management and, if possible, also speed up data processing. Apache Hadoop was selected as a platform for enhancing our information system. Apache Hadoop is usually used for processing large datasets, but in the case of our information system it is necessary to perform other types of tasks as well. The system computes spatio-temporal relations between different types of objects, which means that relatively large datasets (millions of records) are built from a relatively small number of records (thousands). For this purpose, a PostgreSQL/PostGIS database or tools written in Java or another language are usually used. Our research focused on determining whether we could simply move some of these tasks to the Apache Hadoop platform using a simple SQL interface such as Hive. We selected two types of common tasks and tested them on the PostgreSQL and Apache Hadoop (Hive) platforms in order to compare the time necessary to complete them. The paper presents the results of our research. © Springer International Publishing AG 2017.
Title in English
Creating Large Size of Data with Apache Hadoop
Result description in English
The paper is focused on research in the area of building large datasets using Apache Hadoop. Our team manages an information system that is able to calculate the probability of existence of different objects in space and time. The system works with many different data sources, including large datasets. The data-processing workflow is quite complicated and time consuming, so we were looking for a framework that could help with system management and, if possible, also speed up data processing. Apache Hadoop was selected as a platform for enhancing our information system. Apache Hadoop is usually used for processing large datasets, but in the case of our information system it is necessary to perform other types of tasks as well. The system computes spatio-temporal relations between different types of objects, which means that relatively large datasets (millions of records) are built from a relatively small number of records (thousands). For this purpose, a PostgreSQL/PostGIS database or tools written in Java or another language are usually used. Our research focused on determining whether we could simply move some of these tasks to the Apache Hadoop platform using a simple SQL interface such as Hive. We selected two types of common tasks and tested them on the PostgreSQL and Apache Hadoop (Hive) platforms in order to compare the time necessary to complete them. The paper presents the results of our research. © Springer International Publishing AG 2017.
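The description outlines the core workload: expanding a few thousand object records into millions of pairwise spatio-temporal relations, done either in PostgreSQL/PostGIS or through HiveQL on Hadoop. Purely as an illustration (this record does not contain the paper's actual schema or queries), a minimal Java/JDBC sketch of submitting such a relation-building query to HiveServer2 could look as follows; the table and column names (observations, obj_id, grid_cell, x, y, observed_at), the connection URL, and the credentials are all hypothetical.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HiveRelationBuilderSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical HiveServer2 endpoint; requires the hive-jdbc driver
        // (org.apache.hive:hive-jdbc) on the classpath.
        String url = "jdbc:hive2://hadoop-master:10000/default";

        try (Connection conn = DriverManager.getConnection(url, "hive", "");
             Statement stmt = conn.createStatement()) {

            // Hypothetical schema: observations(obj_id, grid_cell, x, y, observed_at).
            // A self-join restricted to the same spatial grid cell turns thousands
            // of input rows into millions of candidate pairs, i.e. the kind of
            // "creating large data" task the paper times on PostgreSQL vs. Hive.
            String ctas =
                "CREATE TABLE relations AS " +
                "SELECT a.obj_id AS obj_a, b.obj_id AS obj_b, " +
                "       sqrt(pow(a.x - b.x, 2) + pow(a.y - b.y, 2)) AS distance, " +
                "       abs(unix_timestamp(a.observed_at) - unix_timestamp(b.observed_at)) AS time_diff " +
                "FROM observations a JOIN observations b ON (a.grid_cell = b.grid_cell) " +
                "WHERE a.obj_id < b.obj_id";

            long start = System.currentTimeMillis();
            stmt.execute(ctas);  // executed as a distributed job on the cluster
            long elapsed = System.currentTimeMillis() - start;
            System.out.println("Relation table built in " + elapsed + " ms");
        }
    }
}
```

The same JDBC pattern with a jdbc:postgresql:// URL would drive the corresponding PostgreSQL run, which keeps a wall-clock comparison of the two platforms straightforward.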
Classification
Type
D - Article in proceedings
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
<a href="/cs/project/TB0500MD011" target="_blank" >TB0500MD011: Specifický způsob odbavení cestujících a počet přepravených cestujících</a><br>
Continuities
P - Research and development project financed from public resources (with a link to CEP)
Others
Year of implementation
2017
Data confidentiality code
S - Complete and true data on the project do not fall under protection according to special legal regulations
Data specific to the result type
Article name in the proceedings
Lecture Notes in Geoinformation and Cartography. Volume F3
ISBN
978-3-319-45122-0
ISSN
1863-2246
e-ISSN
1863-2351
Number of result pages
8
Pages from-to
307-314
Publisher name
Springer
Place of publication
Cham
Event location
Ostrava
Event date
Mar 16, 2016
Event type by nationality
WRD - Worldwide event
UT code of the article in WoS
—