International Journal for Research and Development in Engineering
Abstract
Search engines are tremendous force multipliers for end hosts trying to discover content on the Web. As the amount of content online grows, so does the dependence on web crawlers to discover relevant content. The aim is to develop an efficient web crawler that returns results faster and more relevant to the search keyword, with support for semantics extraction, multithreading, and distributed computing.
Keywords
Multithreading; Distributed Systems; URL Frontier Queue; MapReduce
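As a rough illustration of the architecture the abstract describes, namely a shared URL frontier queue consumed by multiple worker threads, here is a minimal Python sketch. The link graph, URLs, and function names are hypothetical stand-ins: a real crawler would perform HTTP fetches and parse links from downloaded pages rather than read from an in-memory dictionary.

```python
import queue
import threading

# Hypothetical in-memory link graph standing in for real HTTP fetching;
# a production crawler would download each page and extract its links.
LINK_GRAPH = {
    "http://example.com/": ["http://example.com/a", "http://example.com/b"],
    "http://example.com/a": ["http://example.com/b", "http://example.com/c"],
    "http://example.com/b": [],
    "http://example.com/c": ["http://example.com/"],
}

def crawl(seed_urls, num_workers=4):
    """Multithreaded crawl driven by a shared URL frontier queue."""
    frontier = queue.Queue()      # the URL frontier: URLs awaiting a "fetch"
    seen = set(seed_urls)         # de-duplication set, guarded by a lock
    seen_lock = threading.Lock()
    for url in seed_urls:
        frontier.put(url)

    def worker():
        while True:
            try:
                # Idle workers exit once the frontier stays empty briefly.
                url = frontier.get(timeout=0.5)
            except queue.Empty:
                return
            for link in LINK_GRAPH.get(url, []):  # "fetch" and extract links
                with seen_lock:
                    if link not in seen:
                        seen.add(link)
                        frontier.put(link)
            frontier.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return seen

if __name__ == "__main__":
    print(sorted(crawl(["http://example.com/"])))
```

In a distributed setting, the same frontier pattern extends naturally to a MapReduce-style pipeline: mappers fetch and extract links, and reducers de-duplicate URLs before they re-enter the frontier.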
DOI: http://doi.org/10.11591/ijset.v1i2.4571
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.