DEVELOPMENT OF A DISTRIBUTED BIG DATA FUSION ARCHITECTURE FOR MACHINE-TO-MACHINE COMMUNICATION USING ENSEMBLE LEARNING
dc.contributor.author | SALEFU, Ngbede Odaudu | |
dc.date.accessioned | 2019-06-18T13:11:36Z | |
dc.date.available | 2019-06-18T13:11:36Z | |
dc.date.issued | 2019-02 | |
dc.description | A DISSERTATION SUBMITTED TO THE SCHOOL OF POSTGRADUATE STUDIES, AHMADU BELLO UNIVERSITY, ZARIA, IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE AWARD OF THE MASTER OF SCIENCE (M.Sc.) DEGREE IN COMPUTER ENGINEERING, DEPARTMENT OF COMPUTER ENGINEERING, FACULTY OF ENGINEERING, AHMADU BELLO UNIVERSITY, ZARIA, NIGERIA | en_US |
dc.description.abstract | This research developed a distributed big data fusion architecture for machine-to-machine communication using ensemble learning. The architecture was implemented to mitigate the challenges that characterize the centralized big data fusion architecture commonly adopted on the Hadoop MapReduce platform, namely high bandwidth consumption, latency, and computational cost. A fog computing approach was adopted through the implementation of ensemble learning. Feature engineering was used to extract information from the data, including pixel values, number of layers (nlayers), number of cells (ncell), number of rows (nrow), and coordinates, and the water and vegetation indices (NDWI and NDVI) were calculated. The extracted information served as the training dataset for both the centralized and the distributed architecture, with AdaBoost used as the basis of comparison between the two. Performance was evaluated in terms of bandwidth consumption and latency, and results were presented as confusion matrices. The developed distributed architecture achieved improvements of 31.44 minutes in latency and 1.9% in accuracy over the centralized architecture. Comparing the ensemble AdaBoost with its base learner also showed improvements of 5.8% in accuracy and 4.81 minutes in latency. | en_US |
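The following is a minimal sketch, not the thesis code, of the feature-engineering and ensemble step described in the abstract: it computes NDVI and NDWI from hypothetical green, red, and near-infrared bands, builds per-pixel feature vectors, and compares an AdaBoost ensemble against its decision-stump base learner using accuracy and a confusion matrix. It assumes Python with NumPy and scikit-learn; the band arrays, labels, and parameters are placeholders, and the fog/centralized deployment, bandwidth, and latency measurements are not modelled here.

# Sketch only: NDVI/NDWI feature extraction and base learner vs. AdaBoost comparison.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(0)

# Hypothetical reflectance bands standing in for the satellite raster layers.
green = rng.random((200, 200))
red   = rng.random((200, 200))
nir   = rng.random((200, 200))

# Standard index formulas: NDVI = (NIR - Red) / (NIR + Red),
#                          NDWI = (Green - NIR) / (Green + NIR).
ndvi = (nir - red) / (nir + red + 1e-9)
ndwi = (green - nir) / (green + nir + 1e-9)

# Per-pixel feature vectors: raw pixel values plus the two indices.
X = np.column_stack([b.ravel() for b in (green, red, nir, ndvi, ndwi)])
# Hypothetical labels (e.g. water vs. non-water), thresholded on NDWI here
# only so the sketch runs end to end.
y = (ndwi.ravel() > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

base = DecisionTreeClassifier(max_depth=1, random_state=0)
ensemble = AdaBoostClassifier(
    DecisionTreeClassifier(max_depth=1, random_state=0),
    n_estimators=50,
    random_state=0,
)

for name, model in [("base learner", base), ("AdaBoost ensemble", ensemble)]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name, "accuracy:", accuracy_score(y_te, pred))
    print(confusion_matrix(y_te, pred))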
dc.identifier.uri | http://hdl.handle.net/123456789/11757 | |
dc.language.iso | en | en_US |
dc.subject | DEVELOPMENT | en_US |
dc.subject | DISTRIBUTED | en_US |
dc.subject | BIG DATA FUSION ARCHITECTURE | en_US |
dc.subject | MACHINE-TO-MACHINE COMMUNICATION | en_US |
dc.subject | ENSEMBLE LEARNING | en_US |
dc.title | DEVELOPMENT OF A DISTRIBUTED BIG DATA FUSION ARCHITECTURE FOR MACHINE-TO-MACHINE COMMUNICATION USING ENSEMBLE LEARNING | en_US |
dc.type | Thesis | en_US |
Files
Original bundle (1 of 1)
- Name: DEVELOPMENT OF A DISTRIBUTED BIG DATA FUSION ARCHITECTURE FOR.pdf
- Size: 2.17 MB
- Format: Adobe Portable Document Format
License bundle (1 of 1)
- Name: license.txt
- Size: 1.62 KB
- Format: Item-specific license agreed upon to submission