tsms’s Story

Site created on November 27, 2016


Journal entry by tsms Blog

We have already given a detailed design of Web-page prediction using a Boolean bit mask. In this post we present a mechanism of lucky searching, which saves the Web searcher's time. Lucky search is a type of search mechanism that does not produce a list of Web-page links; instead it directly hits the Web-page of the most appropriate Web site. The key feature of this kind of search engine [1–3] is that the result depends on the Web search string and, to a large extent, on the Web searcher's luck. Such search engines have both advantages and disadvantages [4, 5], and several important aspects of the problem, already highlighted in the post "Introduction", have been explored. In a conventional search engine, when a Web searcher performs a lucky search, the engine hits or redirects to either a right or a wrong Web-page without any preference. To overcome this, we incorporate a domain-specific concept that reduces the search engine's resource usage, minimizes the miss-hit ratio and ultimately produces more appropriate results for the Web searcher. In this post we discuss the basic idea of domain-specific lucky search and describe a design and development methodology for it based on an Ontology [6, 7]. Here we generate a Domain-specific Lucky Search Database (DSLSDB), which provides the lucky URL after parsing the input search string.
This post is organized as follows. Section 2 briefly describes our proposed approach and is divided into three subsections: Sect. 2.1 presents the design and development methodology of the DSLSDB, Sect. 2.2 discusses the lucky URL searching mechanism, and Sect. 2.3 gives an overview of our proposed user interface. The experimental results are shared in Sect. 3. Finally, the important findings obtained from this study and the conclusions drawn from them are highlighted in the last section.
Proposed Approach
In our approach we construct a DSLSDB, which contains domain-specific Web-pages. This database is created in the Web-page crawling phase [8, 9], and Web-page domain identification is done with an Ontology and Synterms [10–12]. From the DSLSDB we produce the lucky URL for a valid search string given by the Web searcher. In this section we discuss the DSLSDB construction mechanism, how the lucky URL is retrieved from the DSLSDB, our workflow and our user interface.
DSLSDB Construction
This subsection describes the DSLSDB construction mechanism. Because the database is built during the crawling phase, we first check whether a crawled Web-page belongs to our considered domain. Only if it does do we call our algorithm to take the Web-page into consideration for DSLSDB construction.
Ontology Terms
In the World Wide Web (WWW) the majority of Web-pages are in HTML format, and there is no tag that tells the crawler whether a page belongs to a specific domain. To find a domain, we use knowledge-base information called an Ontology, which contains terms related to a particular domain. For example, Computer, Computer Science, Computer Application, Event, Pen drive, Paper, Journal, Conference, University and College are Ontology terms for the Computer Science domain.
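As a concrete illustration, the domain check described above can be sketched in Python. The term set below is the example list from the text, and the count-based threshold is our own assumption for the sketch, not part of the actual system:

```python
# A toy Ontology for the Computer Science domain, modelled as a set of terms.
# The terms come from the example above; a real Ontology would be far larger.
CS_ONTOLOGY = {
    "computer", "computer science", "computer application", "event",
    "pen drive", "paper", "journal", "conference", "university", "college",
}

def belongs_to_domain(page_text, ontology, threshold=3):
    """Crude domain test: a page is in-domain if at least `threshold`
    distinct Ontology terms occur in its text (case-insensitive)."""
    text = page_text.lower()
    hits = sum(1 for term in ontology if term in text)
    return hits >= threshold
```

A page mentioning, say, "paper", "journal" and "university" would pass this test, while unrelated text would not.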
DSLSDB Construction Algorithm
The DSLSDB generation algorithm is illustrated below. In this algorithm we consider a table in the DSLSDB with three fields: the Ontology term, the LURL (lucky URL) and the term count. DSLSDB generation happens at crawling time: when the crawler crawls a domain-specific Web-page, we use that Web-page's content as input to our algorithm.
The functionality of our algorithm is as follows. First, insert a record for every Ontology term into the DSLSDB table, taking LURL and TERM_COUNT as a dummy URL and 0, respectively. Next, parse the Web-page content and find the Ontology term that appears most often in it; its number of occurrences is called MAX_COUNT. Then check whether MAX_COUNT for that Ontology term is greater than the TERM_COUNT of the record already existing in the DSLSDB table; if so, update the record with the new URL and the MAX_COUNT value. This method is called for every domain-specific Web-page found by the domain-specific crawler. We have shown the workflow of our algorithm.
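The steps above can be sketched as a small Python program. The dict-based table, the dummy URL and the function names are our own illustration of the algorithm, not the actual implementation:

```python
# Sketch of the DSLSDB construction step, using a plain dict as the table:
# Ontology term -> (LURL, TERM_COUNT).
DUMMY_URL = "http://dummy.url/"

def init_dslsdb(ontology):
    """One record per Ontology term, initialised to (dummy URL, 0)."""
    return {term: (DUMMY_URL, 0) for term in ontology}

def count_term(term, page_text):
    """Number of occurrences of an Ontology term in the page content."""
    return page_text.lower().count(term)

def insert_page(dslsdb, url, page_text):
    """Find the Ontology term that occurs most often in the page
    (MAX_COUNT); if it beats the stored TERM_COUNT for that term,
    this page's URL becomes the term's lucky URL."""
    best_term = max(dslsdb, key=lambda t: count_term(t, page_text))
    max_count = count_term(best_term, page_text)
    if max_count > dslsdb[best_term][1]:
        dslsdb[best_term] = (url, max_count)
```

Calling `insert_page` once per crawled domain-specific page leaves, for each term, the URL of the page in which it was most frequent.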
Lucky URL Search from DSLSDB
Here we describe the key structure of our algorithm, which finds a lucky URL from the DSLSDB for a given valid search string. The algorithm takes the search string and the DSLSDB as input and, based on them, returns the Web-page of the lucky URL.
The algorithm first parses the search string to extract Ontology terms. If no Ontology term is present, an error message such as "Invalid Search String" is generated; otherwise the process continues. In the next step, the algorithm finds the maximum term count in the DSLSDB over all Ontology terms present in the search string, and then finds the LURL stored in the DSLSDB for that maximum term count. If more than one LURL is present for the maximum term count, it chooses a single LURL based on the importance of the Ontology term in the specified domain, i.e. its weight value. Finally, it passes the chosen LURL for display. This method is called at run time.
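A minimal sketch of this run-time lookup, assuming the DSLSDB maps each Ontology term to an (LURL, TERM_COUNT) pair and that the term weights are supplied as a separate dict (both representations are our own assumptions):

```python
# Run-time lucky URL lookup over a dict-based DSLSDB:
# dslsdb maps term -> (LURL, TERM_COUNT); weights maps term -> importance.
def lucky_url(search_string, dslsdb, weights):
    # Parse the search string: keep the Ontology terms it contains.
    terms = [t for t in dslsdb if t in search_string.lower()]
    if not terms:
        return "Invalid Search String"
    # Maximum term count among the matched terms.
    max_count = max(dslsdb[t][1] for t in terms)
    candidates = [t for t in terms if dslsdb[t][1] == max_count]
    # More than one LURL for the maximum count: prefer the heavier term.
    best = max(candidates, key=lambda t: weights.get(t, 0))
    return dslsdb[best][0]
```

For example, if "computer" and "journal" both tie on term count, the term with the larger weight decides which lucky URL is returned.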